Ex parte Said et al., Appeal 2010-006410, Application No. 10/686,127 (P.T.A.B. Dec. 26, 2012)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 10/686,127
FILING DATE: 10/15/2003
FIRST NAMED INVENTOR: Joe P. Said
ATTORNEY DOCKET NO.: 7450/9
CONFIRMATION NO.: 7636

7590 12/26/2012
CHARLES C. VALAUSKAS
BANIAK PINE & GANNON
Suite 1200
150 N. Wacker Drive
Chicago, IL 60606

EXAMINER: BORSETTI, GREG
ART UNIT: 2658

MAIL DATE: 12/26/2012
DELIVERY MODE: PAPER

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________

Ex parte JOE P. SAID and DAVID A. SCHLEPPENBACH
____________________

Appeal 2010-006410
Application 10/686,127
Technology Center 2600
____________________

Before DENISE M. POTHIER, ERIC B. CHEN, and TRENTON A. WARD, Administrative Patent Judges.

WARD, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from the Examiner's non-final rejection of claims 1-4, 6-12, and 14-22. Claims 5 and 13 have been cancelled. We have jurisdiction under 35 U.S.C. § 6(b). We affirm-in-part.

STATEMENT OF THE CASE

Appellants' invention relates to a universal processing system and methods for production of outputs accessible by people with disabilities. See generally Abstract. Claim 1 is illustrative with certain disputed limitations italicized:

1.
A method of input conversion to output, said conversion method comprising:
    entering a first stage input and a second input;
    translating the first input and the second input into an electronic format;
    converting the electronic format into standard XML encoded with accessibility information;
    transforming the standard XML encoded with accessibility information into individual version of XML dependent on desired output; and
    utilizing a rendering engine to modify the individual version of XML into a format for the output.

THE REJECTION

The Examiner rejected claims 1-4, 6-12, and 14-22 under 35 U.S.C. § 103(a) as unpatentable over Johnson (US 5,748,974; issued May 5, 1998) and NATURAL LANGUAGE SEMANTICS MARKUP LANGUAGE FOR THE SPEECH INTERFACE FRAMEWORK (2000) [hereinafter W3C]. Ans. 3-15.[1]

[1] Throughout this opinion, we refer to (1) the Appeal Brief ("App. Br.") filed Dec. 15, 2008 and corrected on July 16, 2009, (2) the Examiner's Answer ("Ans.") mailed March 17, 2009, and (3) the Reply Brief ("Reply Br.") filed May 15, 2009.

CONTENTIONS

CLAIM 1

The Examiner finds that Johnson discloses every recited feature of claim 1, except for "converting the electronic format into standard XML encoded with accessibility information, transforming the standard XML encoded with accessibility information into individual version of XML dependent on desired output and utilizing a rendering engine to modify the individual version of XML into a format for the output." Ans. 4 (quoting claim 1). The Examiner cites W3C as teaching this feature in concluding that the claim would have been obvious. Ans. 4-5.

Appellants argue that the claim term "accessibility information" is "implicitly defined in paragraphs [0023] and [0060] of the present publication" to mean "non-textual, semantic information that indicates non-verbal aspects of the textual communication, such as tone of voice and emotional content." App. Br. 11 (brackets in original).
Appellants argue that claim 1 requires converting the electronic format into standard XML encoded with "accessibility information," and "Johnson is completely silent as to providing indications of non-verbal aspects of the textual communication, such as tone of voice and emotional content." App. Br. 10 (emphasis omitted).

ISSUE

Under § 103, has the Examiner erred in rejecting claim 1 by finding that the combination of the cited prior art teaches "converting the electronic format into standard XML encoded with accessibility information"?

ANALYSIS

On this record, we find no error in the Examiner's obviousness rejection of claim 1. Appellants argue that they served as the lexicographer and implicitly defined the claim term "accessibility information" in paragraphs [0023] and [0060] of the Specification (which correspond to Spec. 9:16-22, 23:15-26) to mean "non-textual, semantic information that indicates non-verbal aspects of the textual communication, such as tone of voice and emotional content." App. Br. 10-11. Furthermore, Appellants argue that the cited references fail to teach or suggest encoding this "accessibility information" with the standard XML, as required by claim 1. App. Br. 10 (quoting claim 1).

Contrary to Appellants' arguments, the Examiner finds that "paragraphs [0023] and [0060] do not clearly redefine the [accessibility information] claim term." Ans. 17 (first and second brackets in original). We are not persuaded of error in the Examiner's position because "the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the appellant intended to so redefine that claim term." Ans. 16-17; see also MPEP § 2111.01(IV). We agree with the Examiner's findings that Appellants failed to clearly state their intent to assign a special definition to the term "accessibility information." Id.; see also Laryngeal Mask Co.
Ltd. v. Ambu A/S, 618 F.3d 1367, 1372 (Fed. Cir. 2010) (citations omitted) (concluding the patentee did not act as his own lexicographer because "[t]o be his own lexicographer, a patentee must use a 'special definition of the term [that] is clearly stated in the patent specification or file history'"). Specifically, we agree that the two paragraphs from the Specification cited by Appellants for an implicit definition of "accessibility information" do not "define accessibility information either explicitly nor implicitly, but merely state verbal and non-verbal information can be expressed" through a particular format of XML. Ans. 17.

Appellants do not dispute that both Johnson and W3C disclose "providing 'semantic interpretations for a variety of inputs, including . . . speech and natural language text input.'" App. Br. 11 (quoting W3C). Appellants merely argue that "[t]he semantic interpretations of W3C, like those of Johnson, do not produce accessibility information." Id. (emphasis omitted). As Appellants failed to act as a lexicographer with respect to the claim term "accessibility information," Appellants' arguments are not commensurate with the broadest reasonable interpretation of claim 1. See In re Am. Acad. of Sci. Tech Ctr., 367 F.3d 1359, 1364 (Fed. Cir. 2004) (finding that the USPTO must give claims their broadest reasonable interpretation consistent with the specification).

The Examiner finds that "Johnson does disclose determining verbal and non-verbal semantic information (accessibility information) from the input." Ans. 4. The cited portions of Johnson (Ans. 4) disclose that a "natural language processor 45 directs the combined multimodal input to a parser/semantic interpreter 46," and the "parsed input is subjected to further semantic interpretation by the dialog manager 49." Johnson, col. 3, l. 67 – col. 4, l. 5.
In one cited embodiment, the user types in the name "John Smith" as a first input, and provides the spoken request, "Find address," as a second input to be processed by the natural language processor. Johnson, col. 4, ll. 55-61. In turn, these inputs are parsed and subjected to further semantic interpretation, producing some accessibility information. Additionally, W3C further teaches providing semantic interpretations of speech and language inputs. See Ans. 5 (citing §§ 1.1, 2.8).

Based on this disclosure, we see no error in the Examiner's position that it would have been obvious to combine the multimodal interface in Johnson with the W3C "specifications for an XML language that provides semantic interpretations for a variety of speech and language inputs." Ans. 5, 18. Accordingly, we agree with the Examiner's findings that the combination of Johnson and W3C renders obvious "convert[ing] the electronic format into standard XML encoded with accessibility information." Id. We are therefore not persuaded that the Examiner erred in rejecting independent claim 1, or dependent claims 2, 4, 10-12, and 14-18, which are not separately argued with particularity.
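For context, the four conversion steps recited in claim 1 describe a staged markup pipeline: raw inputs become an electronic format, then a standard XML carrying accessibility information, then an output-specific XML, then rendered output. The sketch below is purely illustrative of those claimed steps; every function name and the "accessibility" attribute vocabulary are hypothetical, not drawn from the Specification, Johnson, or the W3C reference.

```python
# Hypothetical sketch of the conversion pipeline recited in claim 1.
# All names and the markup vocabulary are invented for illustration.
import xml.etree.ElementTree as ET


def to_electronic_format(first_input: str, second_input: str) -> dict:
    # "translating the first input and the second input into an electronic format"
    return {"text": first_input, "cue": second_input}


def to_standard_xml(electronic: dict) -> ET.Element:
    # "converting the electronic format into standard XML encoded with
    # accessibility information": the non-verbal cue (e.g., tone of voice)
    # rides along as an attribute on the textual content.
    root = ET.Element("content", accessibility=electronic["cue"])
    root.text = electronic["text"]
    return root


def to_output_xml(standard: ET.Element, desired_output: str) -> ET.Element:
    # "transforming the standard XML ... into individual version of XML
    # dependent on desired output" (e.g., a speech- or Braille-oriented dialect)
    out = ET.Element(desired_output)
    out.set("accessibility", standard.get("accessibility", ""))
    out.text = standard.text
    return out


def render(individual: ET.Element) -> str:
    # "utilizing a rendering engine to modify the individual version of XML
    # into a format for the output": trivially serialized to a string here.
    return ET.tostring(individual, encoding="unicode")


if __name__ == "__main__":
    electronic = to_electronic_format("Find address", "spoken/urgent")
    standard = to_standard_xml(electronic)
    print(render(to_output_xml(standard, "speech")))
```

The sketch makes concrete why the parties' dispute centers on the second step: whether "encoding accessibility information" requires non-verbal semantic cues or is met by semantic interpretation generally.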
Appellants' argument for a narrow definition of gesturing is that "paragraph [0014] (which corresponds to Spec. 5:25-6:9) of the present publication states that the user may enter information by 'gesturing (sign language).'" Id. (brackets in original). Once again, Appellants failed to serve as a lexicographer because Appellants failed to clearly redefine "gesturing" to only include sign language. See MPEP § 2111.01(IV). Moreover, claim 3 broadly recites that the entering step includes gesturing but does not require the inputs include gesturing. Thus, given its broadest reasonable construction, a user entering inputs through touch, such as taught by Johnson, includes gesturing. Accordingly, we sustain the Examiner's rejection of claim 3.

CLAIM 6

Appellants argue that claim 6 is separately patentable because neither Johnson nor W3C discloses that the "entering step includes semantic information," as required by claim 6. App. Br. 12. Specifically, Appellants argue that "Johnson is completely silent as to entered inputs including semantic information." Id. Appellants' arguments are contrary to the express disclosures of Johnson. See, e.g., Johnson, col. 3, l. 67 – col. 4, l. 5. As cited by the Examiner, Johnson discloses a "natural language processor 45 [that] directs the combined multimodal input to a parser/semantic interpreter 46," and expressly states that the "grammar and semantic interpretation rules used in the natural language processor 45 insure the intended meaning is recovered." Ans. 4, 6 (citing Johnson, col. 3, l. 67 – col. 4, l. 5; col. 5, ll. 29-32). Accordingly, Appellants' argument that Johnson is silent as to inputs with semantic information is unfounded. We are not persuaded of error in the Examiner's findings; thus, we sustain the Examiner's rejection of claim 6.
CLAIM 7

Appellants argue that claim 7 is separately patentable because neither Johnson nor W3C discloses that the "entering step includes providing formatting and structure for text," as required by claim 7. App. Br. 12 (emphasis omitted). The Examiner finds that Johnson "does disclose that a response from the system includes text which is pasted into the current application." Ans. 7 (citing col. 4, ll. 49-53). While Appellants argue that pasting text does not provide format and structure for text (App. Br. 13), pasting text also suggests pasting its format and structure, and thus at least suggests providing format and structure to the text as recited. Accordingly, we are not persuaded of error in the Examiner's findings that Johnson discloses retaining formatting and structure (Ans. 7); thus, we sustain the Examiner's rejection of claim 7.

CLAIM 8

Appellants argue that claim 8 is separately patentable because neither Johnson nor W3C discloses that the "translating step includes the use of automatic speech recognition of the first input and the second input to produce the electronic format, the first input including verbal information, and the second input including non-verbal information," as required by claim 8. App. Br. 13 (emphasis omitted). More particularly, Appellants argue that "Johnson is completely silent as to a first spoken input including verbal information and a second spoken input including non-verbal information." Id. (emphasis omitted). The Examiner finds that Johnson discloses "the use of automatic speech recognition of the first input and the second input to produce the electronic format, the first input including verbal information, and the second input including non-verbal information." Ans. 7 (citing Johnson, col. 3, ll. 44-49).
Specifically, Johnson discloses that "speech is input via microphone 32" and "[a]t the same time, the user may also provide non-speech input; e.g., by keyboard 24, mouse 26, a touch screen (not shown) attached to display 38, or the like." Johnson, col. 3, ll. 44-51 (brackets in original). The Examiner finds that Johnson discloses that the second input can be received from another device, yet claim 8 requires that the second input be recognized by the automatic speech recognition and include non-verbal information. See claim 8. In an example in Appellants' Specification, it is disclosed that, in at least one embodiment, it may be beneficial "to detect emotional and other non-verbal cues in the speaker's voice." Spec. 9:7-8. In this embodiment in Appellants' Specification, both the first and second input are translated by the use of automatic speech recognition, the first input being verbal and the second input being non-verbal. Based on the record before us, the cited disclosures of Johnson fail to teach or suggest the claim 8 requirement of both the first and second input being recognized by automatic speech recognition and the second input including non-verbal information. Accordingly, the Examiner's rejection of claim 8 is not sustained.

CLAIM 9

Appellants argue that claim 9 is separately patentable because "Johnson is completely silent as to a first gestured input including verbal information and a second gestured input including non-verbal information," as required by claim 9. App. Br. 13 (emphasis omitted). The Examiner finds that Johnson discloses "the input to the system can include any modality," which "suggests the system is capable of interpreting gestures, including sign language, as an input." Ans. 7 (citing Johnson, col. 3, ll. 44-46).
Similar to claim 8, the Examiner's findings fail to address the requirement of claim 9 that the translating step must include the use of sign language recognition for the first and second inputs and the second input must include non-verbal information. See claim 9. Based on the record before us, the cited portions of Johnson fail to teach or suggest this requirement of claim 9. Accordingly, the Examiner's rejection of claim 9 is not sustained.

CLAIM 19

Appellants argue that independent claim 19 is patentable on two grounds. First, similar to arguments discussed above with respect to claim 1, Appellants argue that the cited references fail to teach or suggest the "accessibility information" required by claim 19 to be encoded with the standard XML. App. Br. 14 (quoting claim 19) (emphasis omitted). Appellants assert that they served as a lexicographer for the term "accessibility information." Id. As found above for claim 1, we are not persuaded of error with respect to the Examiner's findings regarding "accessibility information." See also Ans. 11-12.

Second, Appellants argue that claim 19 requires entering a first input including verbal information and a second input including non-verbal information, the second input being entered in parallel with the first input, the second input being entered via a same medium as the first input, the same medium being one of a visual medium and an audial medium. App. Br. 14. With respect to these limitations in claim 19, the Examiner finds that Johnson discloses a system that "accepts multimodal input that is either typed or handwritten along with non-speech input that is entered through mouse or touch screen." Ans. 10 (emphasis omitted).
Appellants argue that none of the mediums of typing, handwriting, using a mouse, or using a touch screen is a visual medium because the Johnson "system may receive inputs from any one of these mediums in the absence of light and sound." App. Br. 14. In response to this argument, the Examiner finds that typing, handwriting, using a mouse, and using a touch screen all require "visual confirmation of their modalities." Ans. 20. For example, handwriting is a visual medium to the extent that it requires the user to view what is written. See id. Appellants argue that the term visual medium can be a video input of a user providing sign language. App. Br. 14. Appellants' claim 19, however, is not so limited, as the claim merely recites a "visual medium." Claim 19. Thus, we agree with the Examiner that the claim term "visual medium" can be satisfied by any medium, or means of conveying something, that is visual. See Ans. 20. Additionally, Johnson teaches using a microphone (col. 3, ll. 44-47), or an audial medium, which can pick up both speech and background noise (e.g., non-verbal information). We are not persuaded of error in the Examiner's findings; thus, we sustain the Examiner's rejection of claim 19.

CLAIM 20

Appellants argue that claim 20 is patentable for reasons similar to those argued with respect to claim 1. App. Br. 15-16. Accordingly, we find Appellants' arguments unpersuasive for the reasons indicated for claim 1 and sustain the Examiner's rejection of claim 20.

CLAIM 21

Appellants argue that claim 21 is separately patentable for reasons similar to those argued with respect to claim 8. App. Br. 15. We find Appellants' arguments persuasive for the reasons indicated for claim 8. Thus, the Examiner's rejection of claim 21 is not sustained.

CLAIM 22

Appellants argue that claim 22 is separately patentable for reasons similar to those argued with respect to claim 9. App. Br. 15.
We find Appellants' arguments persuasive for the reasons indicated for claim 9. Thus, the Examiner's rejection of claim 22 is not sustained.

ORDER

The Examiner's decision rejecting claims 1-4, 6, 7, 10-12, and 14-20 is affirmed. The Examiner's decision rejecting claims 8, 9, 21, and 22 is reversed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED-IN-PART

babc