Janet Louise Slifka, Appeal 2018-005584, Application 13/793,856 (P.T.A.B. Sep. 30, 2019)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

Application No.: 13/793,856        Filing Date: 03/11/2013
First Named Inventor: Janet Louise Slifka
Attorney Docket No.: P7865        Confirmation No.: 4891
Examiner: Richard Z. Zhu        Art Unit: 2675
Notification Date: 09/30/2019        Delivery Mode: Electronic
Counsel: Pierce Atwood LLP (Attn: Amazon Docketing), 100 Summer Street, Suite 2250, Boston, MA 02110
Notification e-mail: patent@pierceatwood.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte JANET LOUISE SLIFKA

Appeal 2018-005584
Application 13/793,856
Technology Center 2600

Before JOSEPH L. DIXON, JAMES R. HUGHES, and LARRY J. HUME, Administrative Patent Judges.

DIXON, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant¹ appeals from the Examiner's decision to reject claims 1–7, 10–14, 16, and 18–23. Final Act. 1. Claims 8, 9, 15, and 17 are canceled. We have jurisdiction under 35 U.S.C. § 6(b).

We reverse.

¹ We use the word "Appellant" to refer to "Applicant" as defined in 37 C.F.R. § 1.42(a). Appellant identifies the real party in interest as Amazon Technologies, Inc. (App. Br. 4).
CLAIMED SUBJECT MATTER

The claims are directed to domain and intent name feature identification and processing. Appellant further described the disclosed invention as:

A natural language processor takes a textual input (one provided either as the output from an automatic speech recognition (ASR) or from some other source) and performs named entity recognition (NER) on the textual input to identify and tag the meaningful portions of the text so that a later component may properly form a command to send to another application. To improve NER processing, additional information beyond the textual input may be provided to the NER processor ahead of time. Such information may be referred to as pre-features. Pre-features may describe any data that may assist natural language processing such as user identification, user preferences, typical user queries, etc. As described below, pre-features include features that are not derived from the text to be processed. Pre-features may be fed as inputs to an NER processor. Other inputs to the NER processor may include a domain (a category describing the context of the textual input) or an intent (an indicator of the intended command of a user to be executed by a device). While a domain and/or intent may typically be determined later in a natural language process, determining them before NER processing, and offering them as inputs to a NER processor, may improve overall natural language output. The domain and intent may be pre-features (that is, not derived from the text input itself) or may be derived from the text but still input into the NER processor.

(Spec. ¶ 11).

Claims 1 and 7, reproduced below, are illustrative of the claimed subject matter:

1.
A method of performing natural language processing, the method comprising:
    receiving an audio signal comprising an utterance;
    obtaining first text of the utterance using automatic speech recognition;
    determining at least one of a category of commands or a potential intent corresponding to the utterance, wherein the at least one of the category or the potential intent are determined using data associated with the utterance and are not determined from the first text or based on a previous utterance that is associated with the utterance;
    performing semantic tagging of the first text using a named entity recognition model based at least in part on at least one of the category or the potential intent, wherein the named entity recognition model was trained using a corpus of data comprising a first training text, and wherein the first training text was associated with at least one of a first category or a first intent;
    determining, using the results of the semantic tagging, a selected intent for the utterance; and
    performing an action using the selected intent.

7.
A computing device, comprising:
    at least one processor;
    a memory device including instructions operable to be executed by the at least one processor to perform a set of actions, configuring the at least one processor:
        to receive audio data corresponding to an input utterance of a user;
        to perform automatic speech recognition (ASR) on the audio data to obtain text;
        to determine one or more pre-features, wherein the one or more pre-features are data associated with the input utterance and are not determined from the text or based on a previous input utterance of the user;
        to associate the one or more pre-features with the text;
        to determine, based at least in part on the one or more pre-features, at least one of a category of commands corresponding to the input utterance or a potential intent corresponding to the input utterance, the potential intent corresponding to an intended command to be executed;
        to generate a pre-feature vector including the one or more pre-features;
        to associate an entity with at least one word of the text based at least in part on the pre-feature vector and the category or the intent; and
        to determine, based at least in part on the entity, a selected intent for the input utterance.

REFERENCES

The prior art relied upon by the Examiner as evidence is:

Suleman et al.    US 2015/0039292 A1    Feb. 5, 2015
Thomson et al.    US 2008/0037720 A1    Feb. 14, 2008
Arnold et al.     US 2005/0234723 A1    Oct. 20, 2005

REJECTIONS

Claims 1–7, 10, 12–14, 16, 18, 19, and 21–23 stand rejected under 35 U.S.C. § 102(e)(1) as being anticipated by Suleman.

Claim 11 stands rejected under 35 U.S.C. § 103(a) as being unpatentable over Suleman in view of Arnold.

Claim 20 stands rejected under 35 U.S.C. § 103(a) as being unpatentable over Suleman in view of Thomson.

OPINION

35 U.S.C. § 102

With respect to independent claims 1, 7, and 14, Appellant sets forth arguments with regard to independent claim 7 concerning "pre-features." We note that independent claim 1 does not expressly recite "pre-features" but includes similar limitations regarding determining the category of commands or potential intent corresponding to the utterance. We address independent claim 7 as the illustrative claim for the group.

With respect to independent claim 7, Appellant contends that the prior art does not disclose the claimed "pre-features," and that the pre-features are associated with text of an utterance but not derived from or determined based on the text itself. (App. Br. 13). Appellant contends that the claimed invention is directed to a database of contextual information that is available to the device, including information associated with multiple users (e.g., a complete user history for each of the multiple users). (App. Br. 14). Appellant further contends "determining the pre-features corresponds to selecting a subset of data from the database (e.g., specific contextual information at the time the utterance was received), wherein the subset of data is associated with the utterance but not derived from the text itself." (App. Br. 16).

The Examiner maintains that the Suleman reference discloses the claimed "determine one or more pre-features, wherein the one or more pre-features are data associated with the input utterance and are not determined from the text or based on a previous input utterance of the user." (Final Act. 14). See Final Act. 14–15 (citing Suleman ¶ 42); Ans. 5–17 (citing Suleman ¶¶ 32, 41–45, 49–52, 57, 58).
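For illustration only, the distinction at the heart of the dispute — features derived from the recognized text versus "pre-features" drawn from context associated with the utterance but not from the text — can be sketched in code. This sketch is not part of the record; the function names, the context fields, and the toy tagging rule are all hypothetical, offered only to make the claimed distinction concrete.

```python
def text_features(text):
    # Conventional NER features: computed FROM the text to be processed.
    tokens = text.lower().split()
    return {"tokens": tokens, "token_count": len(tokens)}

def pre_features(context):
    # The claimed "pre-features": associated with the utterance (same
    # session and device context) but computed WITHOUT reference to the
    # recognized text -- note this function never sees the text.
    return {
        "user_id": context["user_id"],
        "preferred_domain": context.get("preferred_domain"),
        "hour_of_day": context["hour_of_day"],
    }

def tag_entities(text, pre_feature_vector):
    # Stand-in for semantic tagging: both the text and the pre-feature
    # vector are inputs, so context can bias how a token is tagged.
    tags = []
    for token in text.lower().split():
        if token in ("play", "call", "set"):
            tags.append((token, "COMMAND"))
        elif (pre_feature_vector.get("preferred_domain") == "music"
              and token == "jazz"):
            tags.append((token, "GENRE"))
        else:
            tags.append((token, "O"))
    return tags

# Contextual data available before (and independent of) the recognized text.
context = {"user_id": "u123", "preferred_domain": "music", "hour_of_day": 21}
pf = pre_features(context)              # no text argument: not text-derived
tags = tag_entities("play some jazz", pf)
```

On this sketch, `pre_features` corresponds to "selecting a subset of data from the database" of contextual information, while `text_features` corresponds to the text-derived processing the Board found Suleman's query-driven pipeline to disclose.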
Although we agree with the Examiner that the Suleman reference discloses the use of pre-stored or pre-calculated data in database(s), the Examiner has not shown that the Suleman reference discloses the claimed step to "determine one or more pre-features, wherein the one or more pre-features are data associated with the input utterance and are not determined from the text or based on a previous input utterance of the user." From our review of Figures 3 and 4 of the Suleman reference, each of the disclosed processes starts with the input of the query, and the Examiner has not identified a clear disclosure of that claimed step. For example, paragraph 33 of the Suleman reference discloses "NLP [Natural Language Processing] service 114 analyzes the user query to determine meaning and specific commands with which to provide the services." Additionally, in Figure 3, the user query is input to each of the units 306, 308, and 310 to be used in the processing. (Suleman ¶ 39). In Figure 4, at "step 404, user query 302 is subjected to binary classification such as via a support vector machine (SVM)," which again uses the user query. (Suleman ¶ 50).

The Examiner's anticipation rejection is not well supported by the express disclosure of the Suleman reference. Accordingly, we agree with Appellant that the Examiner's determination that the claimed invention is anticipated is not supported by a preponderance of the evidence. See In re Caveney, 761 F.2d 671, 674 (Fed. Cir. 1985) (the Examiner's burden of proving non-patentability is by a preponderance of the evidence); see also In re Warner, 379 F.2d 1011, 1017 (CCPA 1967) ("The Patent Office has the initial duty of supplying the factual basis for its rejection. It may not, because it may doubt that the invention is patentable, resort to speculation, unfounded assumptions or hindsight reconstruction to supply deficiencies in its factual basis."). We will not resort to such speculation or assumptions to cure the deficiencies in the factual basis in order to support the Examiner's anticipation rejection.

Consequently, we cannot sustain the rejection of independent claim 7 and its dependent claims 10, 12, 13, and 22 based on anticipation. The Examiner relies upon the same basis in the anticipation rejection of independent claims 1 and 14. Independent claim 14 contains the same claim language found lacking in the anticipation rejection of independent claim 7, and therefore we cannot sustain the anticipation rejection of that claim and of dependent claims 16, 18, 19, and 23. Independent claim 1 does not recite "pre-features" but contains the same step of "determining." As a result, we cannot sustain the Examiner's anticipation rejection of independent claim 1 and dependent claims 2–6.

35 U.S.C. § 103

With respect to dependent claims 11 and 20, the Examiner does not identify how the additional prior art references remedy the deficiency noted above. (Ans. 18–20). As a result, we cannot sustain the Examiner's obviousness rejections of dependent claims 11 and 20.

CONCLUSION

We reverse the Examiner's anticipation and obviousness rejections.

DECISION SUMMARY

Claims Rejected                      Basis                 Affirmed    Reversed
1–7, 10, 12–14, 16, 18, 19, 21–23    § 102 Suleman                     1–7, 10, 12–14, 16, 18, 19, 21–23
11                                   § 103 Suleman, Arnold             11
20                                   § 103 Suleman, Thomson            20
Overall Outcome                                                        1–7, 10–14, 16, 18–23

REVERSED