Ex parte Luther et al., Appeal 2011-011904, Application 10/965,624 (P.T.A.B. Jan. 23, 2014)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________

Ex parte PAUL S. LUTHER and ROBERT BRUCE MAHAFFEY
____________

Appeal 2011-011904
Application¹ 10/965,624
Technology Center 2100
____________

Before TONI R. SCHEINER, DEMETRA J. MILLS, and ULRIKE W. JENKS, Administrative Patent Judges.

JENKS, Administrative Patent Judge.

DECISION ON APPEAL

This is an appeal under 35 U.S.C. § 134 involving claims directed to a user interface aggregator that controls the presentation of the user interface for multiple software applications. The Examiner has rejected the claims as obvious. We have jurisdiction under 35 U.S.C. § 6(b). We affirm-in-part.

¹ Appellants identify International Business Machines Corporation of Armonk, New York as the Real Party in Interest (App. Br. 2).

STATEMENT OF THE CASE

Claims 8, 10-12, 14, and 27-40 are on appeal, and can be found in the Claims Appendix (App. Br. 23-29). Claim 8 is illustrative of the claims on appeal, and reads as follows:

8. A method to control a user interface [UI] for performing input and output of information with multiple software applications, comprising the steps of:

    interacting with a UI aggregator to specify the user interface for the multiple software applications, the UI aggregator including (i) a first component that applies uniform aspects of the user interface across all applications and computing devices, (ii) a second component that controls a styling of information presented on the user interface, and (iii) a third component that modifies the user interface according to conditions in a computing environment, wherein the second component accepts at least one styling command, the at least one styling command allowing a selection of terse or discursive presentation of the information on the user interface, the at least one styling command allowing presentation of (i) key words, (ii) summarized information, (iii) elaborate information, and (iv) condensed information, based on the information;

    interacting between the UI aggregator and the multiple software applications to implement the user interface;

    uniformly controlling across the multiple software applications a presentation of the output using the UI aggregator such that the UI aggregator performs a combination of (i) adjusting a parameter of a user interface of an application in the multiple software applications, and (ii) transforming a content of an application in the multiple software applications to a format of the user interface; and

    creating in the UI aggregator user modifiable profiles configured to uniformly control the presentation of the output using the user interface based on user profile instructions received at the UI aggregator.

The Examiner has rejected claims 8, 10-12, 14, and 27-40 under 35 U.S.C. § 103(a) as being unpatentable over US 2003/0126330 A1, published Jul. 3, 2003 (Balasuriya), US 6,976,217 B1, issued Dec. 13, 2005 (Vertaschitsch et al.), referred to as Vertaschitsch, and US 2003/0013492 A1, published Jan. 16, 2003 (Bokhari et al.), referred to as Bokhari. (Ans. 4.)
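For orientation only, the following Python sketch illustrates the kind of arrangement claim 8 recites: a UI aggregator with (i) a component applying uniform aspects across applications, (ii) a component controlling styling through a terse-or-discursive command, and (iii) a component adapting the interface to the computing environment, driven by a user-modifiable profile. The sketch is purely illustrative; every class, method, and value in it is a hypothetical assumption and appears nowhere in the application, the cited references, or the record.

```python
# Purely illustrative sketch of the kind of UI aggregator recited in claim 8.
# All names and behaviors here are hypothetical assumptions for discussion.
from dataclasses import dataclass

@dataclass
class Profile:
    """User-modifiable profile that uniformly controls output presentation."""
    style: str = "terse"          # "terse" or "discursive"
    environment: str = "office"   # e.g., "office", "vehicle", "meeting"

class UIAggregator:
    """Hypothetical aggregator with the three components recited in claim 8."""

    def __init__(self, profile: Profile):
        self.profile = profile
        self.apps = []

    def register(self, app):
        # The aggregator, not the individual application, controls presentation.
        self.apps.append(app)

    # Component (i): uniform aspects across all applications and devices.
    def uniform_aspects(self, content: str) -> str:
        return content.strip()

    # Component (ii): styling command selecting terse or discursive presentation.
    def style(self, content: str) -> str:
        if self.profile.style == "terse":
            words = content.split()
            return " ".join(words[:8]) + (" ..." if len(words) > 8 else "")
        return content  # discursive: full, elaborate presentation

    # Component (iii): modify the interface for conditions in the environment.
    def adapt_to_environment(self, content: str) -> str:
        if self.profile.environment == "vehicle":
            return "[spoken] " + content
        if self.profile.environment == "meeting":
            return "[silent/visual] " + content
        return content

    def present(self, app) -> str:
        # Transform the application's content into the aggregator's format.
        content = self.uniform_aspects(app.output())
        content = self.style(content)
        return self.adapt_to_environment(content)

class MailApp:
    """Stand-in for one of the multiple software applications."""
    def output(self) -> str:
        return ("You have three new messages from the project team, "
                "including an updated schedule and two status reports.")

if __name__ == "__main__":
    aggregator = UIAggregator(Profile(style="terse", environment="meeting"))
    aggregator.register(MailApp())
    print(aggregator.present(aggregator.apps[0]))
```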
The Issue

The Examiner takes the position that Balasuriya disclosed all the limitations of claim 8 but for identifying "multiple software applications, [however,] it would be obvious that accessing of the multimodal apparatus includes accessing multiple software applications" (Ans. 7), and showing that "the second component accepts at least one styling command, the styling command allowing a selection of terse or discursive presentation of the information on the user interface" (id.). The Examiner looks to Vertaschitsch for teaching a multimodal apparatus using multiple software applications and Bokhari for teaching that "[t]he user having the ability to choose the number of characters and lines of text represents a styling command for selecting a terse or discursive presentation of text data." (Id.)

Does the preponderance of the evidence of record support the Examiner's conclusion that the combination of references renders the claims obvious?

Findings of Fact

FF1. Balasuriya disclosed:

    [A] multimodal communication system and method creates and accesses a multimodal profile that contains at least multimodal preference information, such as desired input modality and a desired output modality for a given communication apparatus. The multimodal profile also includes at least one identifier associated with the multimodal preference information or multimodal preference information for a given scenario without an identifier. When used, the identifier may identify, for example, an environmental situation that a user may encounter, such as the user being in a meeting, in a vehicle other environment, or other utilizing a specific service. For example, a multimodal profile is customizable and may dictate that the multimodal communication apparatus use voice as the mechanism for inputting information and uses voice for outputting information when the multimodal communication device is in a car, but another set of multimodal preference information for a given user profile may dictate that the communication apparatus use a tactile interface for receiving input from a user and provide a visual interface for outputting of information on the communication device. Accordingly, the multimodal communication method and apparatus configures at least one multimodal communication apparatus for a multimodal communication session based on the accessed multimodal preference information from the multimodal profile.

(Balasuriya 1: 0012.)

FF2. Balasuriya disclosed:

    [T]he multimodal communication apparatus and method creates a multimodal profile by presenting a user interface to a user, that is adapted to receive input and output modality preference data to define differing multimodal preference information for a plurality of multimodal communication scenarios. Each of the multimodal communication scenarios is associated with an identifier. For example, one multimodal communication scenario may be, as noted above, that the user and device are located in a vehicle and hence text output may not be desirable. Another multimodal communication scenario may be that the user and communication apparatus is present in a meeting so that audio output may not be desirable to avoid interruption of dialog occurring during the meeting. The method and apparatus also stores received input and output modality preference data and associates the identifier to the designated input and output modality preference data.
    This may be done, for example, through an object oriented database or any other suitable linking mechanism.

(Balasuriya 2: 0014.)

FF3. Balasuriya disclosed:

    [T]he method includes receiving, via the user interface, input and output modality preference data 200a-200n that defines different multimodal preference information 202 for different multimodal communication scenarios. The multimodal preference information 202 may include, for example, media preference information (e.g. such as whether output text is in all caps, whether voice out is in pulse code modulated (PCM) format, whether output text is encrypted or other suitable variables), session preference information (e.g., network control information), ambient condition levels, or any other desired preference information. . . . Accordingly, each identifier identifies a different multimodal profile. There may be multiple multimodal profiles for a given user, a software application or multimodal communication apparatus.

(Balasuriya 3: 0025.)

FF4. Balasuriya disclosed:

    [W]ith the "in meeting" scenario, media preference information 206a may be selected by a user so that all text that is output to the graphical output interface 118 is presented in all capital letters.

(Balasuriya 3: 0026.)

FF5. Bokhari disclosed:

    In one aspect of the present invention, the content is first aggregated in a habitat. . . . As an option, a maximum character length of text content displayed upon selection of a link on the wireless device is configurable. This allows the user to select how many characters of the text content are displayed. As another option, a number of lines of text content displayed upon selection of a link on the wireless device is configurable. This allows the user to select how many lines of the text content is displayed.

(Bokhari 1: 0005.)

Principle of Law

"The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results." KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007).

Analysis

Appellants contend that there is insufficient motivation to combine the references (App. Br. 15-20). According to Appellants, "Balasuriya only teaches adjusting an output of the multimodal user interface according to a user profile but does not teach or suggest adjusting a parameter of another application." (Id. at 16.)

We are not persuaded by Appellants' contention. Balasuriya recognizes that "[s]pecifying such input and output modality preferences each time a user is in a communication session can be time consuming and potentially complicated." (Balasuriya 1: 0004.) According to Balasuriya, it was known to store information for an individual application "that may contain user information such as a user's preferences for a software application, contact information and, for example, a communication device's capabilities, such as whether or not the communication device can encrypt or decrypt information, and other capabilities." (Balasuriya 1: 0005.) Balasuriya's method "facilitates customization of input and output interface selection along with other desired multimodal preferences." (Balasuriya 1: 0006.)
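As an aid to following the findings above, the scenario-keyed profile described in FF1-FF3 can be pictured roughly as a lookup from a scenario identifier to stored input and output modality preferences that then configure the communication apparatus. The short Python sketch below is illustrative only; its data layout and names are assumptions and are not taken from Balasuriya.

```python
# Illustrative scenario-keyed multimodal profile; the dictionary layout and
# names are hypothetical assumptions and do not appear in Balasuriya.
multimodal_profile = {
    # identifier      input modality     output modality    media preference
    "in_vehicle": {"input": "voice",   "output": "voice",  "media": {}},
    "in_meeting": {"input": "tactile", "output": "visual",
                   "media": {"text_all_caps": True}},       # cf. FF4
    "default":    {"input": "tactile", "output": "visual", "media": {}},
}

def configure_apparatus(scenario_id: str) -> dict:
    """Access the stored preference information for a scenario identifier and
    return the settings used to configure the communication apparatus."""
    return multimodal_profile.get(scenario_id, multimodal_profile["default"])

if __name__ == "__main__":
    settings = configure_apparatus("in_meeting")
    print(settings["output"], settings["media"])  # visual {'text_all_caps': True}
```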
We agree with the Examiner's position that Balasuriya's "multimodal profile that contains at least multimodal preference information, such as desired input modality and a desired output modality for a given communication apparatus" (FF1) meets the limitation of "a first component that applies uniform aspects of the user interface across all applications and computing devices." (Ans. 5; FF1; see also FF3.) According to the Examiner, Balasuriya's disclosure that "[w]hen used, the identifier may identify, for example, an environmental situation that a user may encounter, such as the user being in a meeting, in a vehicle other environment, or other utilizing a specific service" (FF1) meets the limitation of "a third component that modifies the user interface according to conditions in a computing environment" (Ans. 5; FF1). We agree that Balasuriya teaches that, depending on the environmental situation, the input and output parameters of the communication device are changed to adjust to a particular situation such as being in a car or in a meeting (FFs 1-3).

We also agree with the Examiner that Balasuriya's teaching of using different scenarios that require different media preferences for the information depending on whether the device is used in the car or meeting, including presenting different styles so that, for example, the "output interface 118 is presented in all capital letters," meets the limitation of "a second component that controls a styling of information presented on the user interface" (Ans. 5; FF4).

We find that the Examiner has reasonably interpreted the disclosure of Balasuriya as meeting the contested claim limitations. Accordingly, we are not persuaded by Appellants' contention that the combination of references does not teach "the three specific components included in the UI aggregator" as recited by the claims (App. Br. 15).

Appellants contend that "Vertaschitsch teaches that many software applications exist and they make people's lives easier. Appellants agree that many applications exist in the world and they make people's lives easier. However, this revelation of Vertaschitsch is insufficient to teach or suggest the adjusting feature as recited in amended claim 8." (App. Br. 18.)

We are not persuaded. Vertaschitsch not only provides that the personal digital assistant can make life easier, but also that this "small, slim, device, about the size of your wallet . . . can run many different software applications." (Vertaschitsch, col. 1, ll. 44-48.) We find no error with the Examiner's conclusion that "[t]he teachings in Vertaschitsch disclose that it is obvious to one skilled in the art at the time of the invention that a portable device that is multi-modal [also] includes multiple software applications." (Ans. 12.)

Appellants contend that "[a]s is commonly understood in the English language, terse text is distinct from truncated text. Bokhari does nothing more than truncate the text at a fixed length - either by a character count or line count - nothing more." (Reply Br. 5.) Appellants further contend that

    Bokhari teaches that a user may configure the device to display a fixed number of characters or lines from the text associated with a link. In other words, when a user selects a link to display the link's contents, the display of that content can be limited [to] a fixed number of characters or lines. This teaching in Bokhari is insufficient to teach or suggest the specific aspects of the second component of the UI aggregator according to claim 8.

(App. Br. 19.)
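The configurable truncation the parties dispute can be visualized with a short, hypothetical snippet: a user-selected maximum character count or line count limits how much of the linked text content is shown, in the spirit of FF5. The function and parameter names below are illustrative assumptions, not Bokhari's implementation.

```python
# Illustrative only: configurable truncation of linked text content by a
# maximum character count or line count, in the spirit of FF5. The function
# and parameter names are hypothetical and do not come from Bokhari.
from typing import Optional

def display_text(content: str,
                 max_chars: Optional[int] = None,
                 max_lines: Optional[int] = None) -> str:
    """Return only as much of `content` as the user-configured limits allow."""
    if max_lines is not None:
        content = "\n".join(content.splitlines()[:max_lines])
    if max_chars is not None:
        content = content[:max_chars]
    return content

if __name__ == "__main__":
    article = ("Quarterly results exceeded expectations.\n"
               "Revenue grew in every region this period.\n"
               "The full report is available on the intranet.")
    # A user configures a two-line, 60-character display for the device.
    print(display_text(article, max_chars=60, max_lines=2))
```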
We are not persuaded. As recognized by the Examiner, Bokhari

    discloses adjusting the amount of information that is displayed based on the size of the user interface. The truncation of text data allows for the text data to be presented in a terse manner where the data is displayed succinctly to convey the context of the information while efficiently using the screen space. The truncation and displaying of terse data also reads on presentation of key words, summarized information and condensed information. The truncation of the link data leaves key words that are still displayed and associated with the link data.

(Ans. 12.)

Here, the Examiner has provided a reasonable interpretation and application of the term terse as it applies to the claims. Appellants fail to identify any evidentiary basis on this record that rebuts the Examiner's reasoning that truncating text so that it can be displayed on a small screen presents the information in a terse manner as required by the claims. In re Geisler, 116 F.3d 1465, 1471 (Fed. Cir. 1997) (argument of counsel cannot take the place of evidence).

We conclude that the combination of references establishes a prima facie case of obviousness, and Appellants have not presented sufficient rebuttal or other evidence to overcome the Examiner's case. See In re Rijckaert, 9 F.3d 1531, 1532 (Fed. Cir. 1993). Accordingly, we affirm the rejection of claim 8 under 35 U.S.C. § 103(a) as being obvious over Balasuriya, Vertaschitsch, and Bokhari. As claims 10, 11, 14, 27-33, and 35-40 have not been argued separately, they fall with claim 8 (App. Br. 20). 37 C.F.R. § 41.37(c)(1)(vii).

With respect to claims 12 and 34, Appellants contend that "neither Balasuriya nor Vertaschitsch nor Bokhari teach or suggest anything that can be considered analogous to a mode that modifies one of (i) pitch, (ii) rate, and (ii) spacing between words in spoken information, or a mode that enhances reading of a text in the manner" recited in the claims (App. Br. 21).

We find that Appellants have the better position. The Examiner cites paragraphs 12 and 21 of Balasuriya for teaching "that mode modifies one of pitch, rate, and spacing between words in spoken information" (Ans. 9). According to the Examiner, "Balasuriya discloses environments and examples where a certain comprehension need is determined" (id. at 13). The mere disclosure of an environment that may benefit from making adjustments to the pitch, rate, or spacing between words does not mean that the reference necessarily functions in that manner to achieve better comprehension. "[O]bviousness requires a suggestion of all limitations in a claim." CFMT, Inc. v. Yieldup Int'l Corp., 349 F.3d 1333, 1342 (Fed. Cir. 2003). Here, the limitation of pitch, rate, or spacing between words is not found in the cited portions of Balasuriya. It is well settled that inherency "may not be established by probabilities or possibilities. The mere fact that a certain thing may result from a given set of circumstances is not sufficient." In re Oelrich, 666 F.2d 578, 581 (CCPA 1981).
Because the Examiner has not established that these limitations are necessarily present in Balasuriya, we reverse the rejection of claims 12 and 34, which recite the limitation "wherein a mode modifies one of (i) pitch, (ii) rate, and (ii) spacing between words in spoken information."

SUMMARY

We affirm the rejection of claims 8, 10, 11, 14, 27-33, and 35-40 under 35 U.S.C. § 103(a) as being obvious over Balasuriya, Vertaschitsch, and Bokhari.

We reverse the rejection of claims 12 and 34 under 35 U.S.C. § 103(a) as being obvious over Balasuriya, Vertaschitsch, and Bokhari.

TIME PERIOD FOR RESPONSE

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a).

AFFIRMED-IN-PART

cdc