VB Assets, LLCDownload PDFPatent Trials and Appeals BoardFeb 1, 2022IPR2020-01346 (P.T.A.B. Feb. 1, 2022) Copy Citation Trials@uspto.gov Paper 25 571-272-7822 Date: February 1, 2022 UNITED STATES PATENT AND TRADEMARK OFFICE BEFORE THE PATENT TRIAL AND APPEAL BOARD AMAZON.COM, INC., AMAZON.COM LLC, AMAZON WEB SERVICES, INC., A2Z DEVELOPMENT CENTER, INC. D/B/A LAB126, RAWLES LLC, AMZN MOBILE LLC, AMZN MOBILE 2 LLC, AMAZON.COM SERVICES, INC. F/K/A AMAZON FULFILLMENT SERVICES, INC., and AMAZON.COM SERVICES LLC (formerly AMAZON DIGITAL SERVICES LLC), Petitioner, v. VB ASSETS, LLC, Patent Owner. IPR2020-01346 Patent 9,015,049 B2 Before ROBERT L. KINDER, SCOTT C. MOORE, and SEAN P. O’HANLON, Administrative Patent Judges. MOORE, Administrative Patent Judge. JUDGMENT Final Written Decision Determining All Challenged Claims Unpatentable 35 U.S.C. § 318(a) IPR2020-01346 Patent 9,015,049 B2 2 I. INTRODUCTION A. Background and Summary Amazon.com, Inc., Amazon.com LLC, Amazon Web Services, Inc., A2Z Development Center, Inc. d/b/a Lab126, Rawles LLC, AMZN Mobile LLC, AMZN Mobile 2 LLC, Amazon Services, Inc. f/k/a Amazon Fulfillment Services, Inc., and Amazon.com Services LLC (formerly Amazon Digital Services LLC) (collectively “Petitioner”) filed a Petition requesting an inter partes review of claims 1-20 of U.S. Patent No. 9,015,049 B2 (Ex. 1001, “the ’049 Patent”). Paper 1 (“Pet.”). VB Assets, LLC (“Patent Owner”) filed a Preliminary Response. Paper 6. We instituted an inter partes review as to all claims and grounds set forth in the Petition. Paper 7 (“Institution Decision”). After institution, Patent Owner filed a Response to the Petition (Paper 13, “Response” or “Resp.”), Petitioner filed a Reply to the Response (Paper 15, “Reply”), and Patent Owner filed a Sur-Reply (Paper 21, “Sur-Reply”). An oral hearing was held on November 4, 2021, and a transcript of the hearing is in the record. Paper 24 (“Tr.”). We have jurisdiction under 35 U.S.C. § 6. This Final Written Decision is issued pursuant to 35 U.S.C. § 318(a). For the reasons that follow, we determine that Petitioner has shown by a preponderance of the evidence that claims 1-20 are unpatentable. B. Related Matters Patent Owner asserts the ’049 Patent against Petitioner in the following litigation: VB Assets, LLC v. Amazon.com Inc., et al., Case No. 1:19-cv-01410-MN (D. Del.) (filed July 29, 2019) (the “Related District Court Litigation”). Pet. 2; Paper 5, 2. IPR2020-01346 Patent 9,015,049 B2 3 C. The ’049 Patent The ’049 Patent is directed to “a cooperative conversational model for a human to machine voice user interface.” Ex. 1001, 1:21-22. The disclosed cooperative conversational voice user interface understands free- form human utterances, freeing users from being restricted to a fixed set of commands and/or requests. Id. at 2:8-12. The ’049 Patent discloses a system that receives an input, which may include a human utterance (i.e., words, syllables, phonemes, or any other audible sound made by a human being), where the utterance includes one or more requests (i.e., command, directive, other instruction for a device, computer or other machine, to retrieve information, perform a task, or take some other action). Ex. 1001, 2:18-26. The utterance component of the input is processed by a speech recognition engine to generate one or more preliminary interpretations of the utterance. Id. at 2:28-32. 
The one or more preliminary interpretations are then provided to a conversational speech engine for further processing, where the conversational speech engine communicates with one or more databases to generate an adaptive conversational response that may then be returned to the user as an output. Id. at 2:32-37. Figure 1 of the ’049 Patent, reproduced below, depicts an embodiment of a cooperative conversational voice user interface. Ex. 1001, 7:9-10, 7:19-21. IPR2020-01346 Patent 9,015,049 B2 4 Figure 1 depicts a system architecture for implementing a cooperative conversational voice user interface. Ex. 1001, 7:19-21. The depicted system receives an input 105 from a user, where input 105 is an utterance received by an input device (e.g., a microphone), and where the utterance includes one or more requests. Id. at 7:22-25. The utterance component of input 105 is processed by speech recognition engine 110 (i.e., Automatic Speech Recognizer (“ASR”) 110) to generate one or more preliminary interpretations of the utterance. Id. at 7:36-40. The one or more preliminary interpretations generated by speech recognition engine 110 are then provided to conversational speech engine 115 for further processing. Id. at 7:49-52. Conversational speech engine 115 communicates with one or more databases 130 to generate an adaptive conversational response, which is returned to the user as output 140. Id. at 7:55-58. D. Illustrative Claim Petitioner challenges all claims (1-20) of the ’049 Patent. Of those, claims 1 and 11 are independent. Claims 2-10 depend from claim 1, and claims 12-20 depend from claim 11. IPR2020-01346 Patent 9,015,049 B2 5 Claim 1, reproduced below, is representative. 1. A computer-implemented method of facilitating conversation- based responses, the method being implemented by a computer system that includes one or more physical processors executing one or more computer program instructions which, when executed, perform the method, the method comprising: receiving, at the computer system, a natural language utterance during a conversation between a user and the computer system; identifying, by the computer system, a first model that includes short-term knowledge about the conversation, wherein the short-term knowledge is based on one or more prior natural language utterances received during the conversation; identifying, by the computer system, based on the short term knowledge, context information for the natural language utterance; determining, by the computer system, based on the context information, an interpretation of the natural language utterance; and generating, by the computer system, based on the interpretation of the natural language utterance, a response to the natural language utterance. Ex. 1001, 19:65-20:20. Independent claim 11 is similar to claim 1, except that claim 11 recites a system rather than a method. Ex. 1001, 21:7-25. IPR2020-01346 Patent 9,015,049 B2 6 E. Challenged Claims and Asserted Grounds Petitioner asserts two grounds of unpatentability (Pet. 3-4): Ground Claims Challenged 35 U.S.C. § 1 Reference(s)/Basis 1 1-6, 10-16, 20 103(a) Kennewick 2 7-9, 17-19 103(a) Kennewick, Cooper The Petition also relies on testimony from the Declaration of Padhraic Smyth, Ph.D. (Ex. 1002). Patent Owner’s Response relies on the Declaration of Dr. Anatole Gershman (Ex. 2001). The Parties also rely on other exhibits and evidence, as discussed below. II. ANALYSIS A. Principles of Law 1. 
Burden In an inter partes review, the burden of proof is on the petitioner to show that the challenged claims are unpatentable, and that burden never shifts to the patentee. See 35 U.S.C. § 316(e); In re Magnum Oil Tools Int’l, Ltd., 829 F.3d 1364, 1375 (Fed. Cir. 2016) (citing Dynamic Drinkware, LLC v. Nat’l Graphics, Inc., 800 F.3d 1375, 1378 (Fed. Cir. 2015)). 2. Obviousness under 35 U.S.C. § 103(a) A patent claim is unpatentable under 35 U.S.C. § 103(a) if the differences between the claimed subject matter and the prior art are such that the subject matter, as a whole, would have been obvious at the time the 1 The ’049 Patent issued from an application filed April 19, 2013, and claims priority to an application filed on October 16, 2006. See Ex. 1001, codes (22), (60). Thus, the pre-AIA versions of 35 U.S.C. §§ 102 and 103 apply. Leahy-Smith America Invents Act, Pub. L. No. 112-29, §3(c), 125 Stat. 284, 293 (2011) (the pre-AIA version of the Patent Act generally applies to patents with effective filing dates before March 16, 2013). IPR2020-01346 Patent 9,015,049 B2 7 invention was made to a person of ordinary skill in the art to which said subject matter pertains. KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). The question of obviousness is resolved on the basis of underlying factual determinations including: (1) the scope and content of the prior art; (2) any differences between the claimed subject matter and the prior art; (3) the level of ordinary skill in the art; and (4) when in evidence, objective evidence of nonobviousness. Graham v. John Deere Co., 383 U.S. 1, 17-18 (1966). In determining obviousness when all elements of a claim are found in various pieces of prior art, “the factfinder must further consider the factual questions of whether a person of ordinary skill in the art would be motivated to combine those references, and whether in making that combination, a person of ordinary skill would have had a reasonable expectation of success.” Dome Patent L.P. v. Lee, 799 F.3d 1372, 1380 (Fed. Cir. 2015); see also WMS Gaming, Inc. v. Int’l Game Tech., 184 F.3d 1339, 1355 (Fed. Cir. 1999) (“When an obviousness determination relies on the combination of two or more references, there must be some suggestion or motivation to combine the references.”). “Both the suggestion and the expectation of success must be founded in the prior art, not in the applicant’s disclosure.” In re Dow Chemical Co., 837 F.2d 469, 473 (Fed. Cir. 1988). B. Level of Ordinary Skill in the Art The level of skill in the art is “a prism or lens” through which we view the prior art and the claimed invention. Okajima v. Bourdeau, 261 F.3d 1350, 1355 (Fed. Cir. 2001) (“the level of skill in the art is a prism or lens through which a judge, jury, or the Board views the prior art and the claimed invention”). IPR2020-01346 Patent 9,015,049 B2 8 Petitioner asserts the person of ordinary skill in the art (a “POSITA”) “would have at least a Bachelor-level degree in computer science, computer engineering, electrical engineering, or a related field in computing technology, and two years of experience with automatic speech recognition and natural language understanding, or equivalent education, research experience, or knowledge.” Pet. 4. Patent Owner does not dispute Petitioner’s assertion. Resp. 3. Patent Owner and its declarant, Dr. Gershman, both apply Petitioner’s definition of a POSITA. Id.; Ex. 2021 ¶ 41. 
On this record and given the absence of any dispute, we adopt and apply Petitioner’s proposed formulation regarding the level of ordinary skill in the art, which we find is consistent with the level of skill reflected in the cited prior art references. See Okajima, 261 F.3d at 1355. C. Claim Construction We construe claims using the same claim construction standard that would be used to construe the claims in a civil action under 35 U.S.C. § 282(b). 37 C.F.R. § 42.100 (2019). In applying this claim construction standard, we are guided by the principle that the words of a claim “are generally given their ordinary and customary meaning,” as understood by a person of ordinary skill in the art in question at the time of the invention. Phillips v. AWH Corp., 415 F.3d 1303, 1312-13 (Fed. Cir. 2005) (en banc) (citation omitted). When construing a claim term, “we look principally to the intrinsic evidence of record, examining the claim language itself, the written description, and the prosecution history, if in evidence.” DePuy Spine, Inc. v. Medtronic Sofamor Danek, Inc., 469 F.3d 1005, 1014 (Fed. Cir. 2006) (citing Phillips, 415 F.3d at 1312-17). There is a “heavy presumption,” however, that a claim term carries its ordinary and customary IPR2020-01346 Patent 9,015,049 B2 9 meaning. CCS Fitness, Inc. v. Brunswick Corp., 288 F.3d 1359, 1366 (Fed. Cir. 2002). “[W]e need only construe terms ‘that are in controversy, and only to the extent necessary to resolve the controversy.’” Nidec Motor Corp. v. Zhongshan Broad Ocean Motor Co., 868 F.3d 1013, 1017 (Fed. Cir. 2017) (quoting Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc., 200 F.3d 795, 803 (Fed. Cir. 1999)). 1. “model” Petitioner asserts “the challenged claims should be interpreted in accordance with ‘the ordinary and customary meaning of such claim as understood by one of ordinary skill in the art and the prosecution history pertaining to the patent.’” Pet. 6-7 (citing 37 C.F.R. § 42.100(b)). Petitioner then contends that the plain and ordinary meaning of “model” encompasses at least the following IEEE definition of “model”: “An approximation, representation, or idealization of selected aspects of the structure, behavior, operation, or other characteristics of a real-world process, concept, or system.” Pet. 19; Ex. 1007 (IEEE definition of “model”). In its Response, “Patent Owner does not dispute the use of the IEEE definition of ‘model.’” Resp. 32. Accordingly, and consistent with our Institution Decision (see Institution Decision, 11), we determine that the claim limitation “model” encompasses an “approximation, representation, or idealization of selected aspects of the structure, behavior, operation, or other characteristics of a real-world process, concept, or system.” See Pet. 19.2 We decline to further construe this claim term. See Nidec, 868 F.3d at 1017. 2 We note that this construction is consistent with the agreed-upon construction in the Related District Court Litigation. See Ex. 1032, 1. IPR2020-01346 Patent 9,015,049 B2 10 2. “short-term knowledge” Petitioner contends that a dialog history made up of tagged, recognized utterances constitutes “short-term knowledge” of the type required by the ’049 Patent claims. Pet. 17-21; Ex. 1002 ¶ 73. In particular, Petitioner contends that Kennewick’s dialog history, which includes “recognize[d] words and phrases,” and tags that “tie that utterance to the correct user,” constitutes short-term knowledge. Pet. 17, 19; Ex. 1002 ¶ 73. 
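For purposes of illustration only, and without adopting any particular implementation, a tagged dialog history of the kind Petitioner describes might be represented as a simple per-conversation data structure in which each recognized utterance is stored together with tags identifying the user and the dialog. The sketch below is hypothetical; none of its names appear in Kennewick or the ’049 Patent.

    from dataclasses import dataclass, field

    @dataclass
    class TaggedUtterance:
        # Recognized words and phrases, tagged to tie the utterance
        # to the correct user and dialog.
        words: list[str]
        user_id: str
        dialog_id: str

    @dataclass
    class DialogHistory:
        # Accumulates tagged utterances during a single conversation,
        # i.e., session-level ("short-term") information.
        utterances: list[TaggedUtterance] = field(default_factory=list)

        def add(self, words: list[str], user_id: str, dialog_id: str) -> None:
            self.utterances.append(TaggedUtterance(words, user_id, dialog_id))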
Patent Owner argues that the recited “short-term knowledge” must include “knowledge about the conversation.” Resp. 25-26. According to Patent Owner, although “short-term knowledge can include ‘conversation history,’ . . . the ’049 Patent’s ‘conversation history’ is a history of interpreted utterances.” Id. at 26 (emphasis in original); see also Sur-Reply 2 (“short-term knowledge requires interpreted utterances.”). In Patent Owner’s view, “mere tagged recognized words do not constitute short-term knowledge about a conversation” because “recognized words are ‘not sufficient for an algorithm to understand the user’s goals and intent.’” Resp. 28 (quoting Ex. 1002 ¶ 42). In other words, Patent Owner contends that short-term knowledge must include some interpretation of the meaning of the recited natural language utterances. See id. at 24-31. In Patent Owner’s view, a collection of tagged and recognized words and phrases would not be “short-term knowledge” unless the words and phrases had been interpreted to understand the user’s goals, intent, or preferences. See id. at 28-29 (“Since Kennewick’s tagged recognized words do not provide the system with understanding of the user’s goals, intent, or preferences, they do not provide the system with short-term knowledge about the conversation.”) We begin our analysis with the language of the claims. Independent claims 1 and 11 recite “short-term knowledge about the conversation” that is IPR2020-01346 Patent 9,015,049 B2 11 “based on one or more prior natural language utterances received during the conversation.” Ex. 1001, 20:7-11, 21:15-18. These claims require that the short-term knowledge be “about the conversation,” but they do not recite that the recited short-term knowledge must be derived from interpreting the utterances, or that the recited short-term knowledge must provide the system with understanding of a user’s goals, intent, or preferences. The Specification of the ’049 Patent teaches that “shared knowledge may include both short-term and long-term knowledge. Short-term knowledge may accumulate during a single conversation, where input received during a single conversation may be retained.” Ex. 1001, 4:63-67. The Specification then explains that “the shared knowledge [which includes short-term knowledge] may be used to build one or more intelligent hypotheses using current and relevant information.” Id. at 5:5-7. This portion of the Specification describes interpretation (i.e., building intelligent hypotheses about the dialog) as being something performed on short-term knowledge. The Specification thus implies that short-term knowledge itself need not include interpretations of user utterances. Patent Owner and its declarant argue that the Specification of the ’049 Patent explains that “short-term knowledge is built by the Session Input Accumulator using recognized words as an input, whereupon assumptions and expectations are extracted from the utterances to build short-term knowledge.” Resp. 26-27 (citing Ex. 1001, 1:35-43, 4:63-65, 13:57, 13:60-62, 15:59-61); Ex. 2001 ¶ 63 (citing Ex. 1001, 4:58-65, 5:37-42, 13:57-62, 14:37-40). But the portions of the ’049 Patent cited by Patent Owner and its declarant do not support Patent Owner’s argument. See Ex. 
1001, 1:35-43 (not discussing short-term knowledge); 4:58-65 (indicating that a “model” may perform intelligent hypothesis building, but not IPR2020-01346 Patent 9,015,049 B2 12 discussing how short-term knowledge is built), 5:37-42 (indicating that “shared knowledge may enable a user and a voice user to share assumptions and expectations,” but not stating that short-term knowledge must include such assumptions or expectations); 13:57-62 (indicating that the “Session Input Accumulator” may build “shared knowledge models,” but not indicating that the Session Input Accumulator generates short-term knowledge); 14:37-40 (indicating that “shared knowledge” may be used to populate the “Intelligent Hypothesis Builder,” but not referencing short-term knowledge); 15:59-61 (indicating that the Hypothesis Builder may use short- term shared knowledge “from the Session Input Accumulator,” but not indicating that the Session Input Accumulator necessarily generated the short-term knowledge). In addition, each of these Specification excerpts merely describes the background of the invention or an aspect of preferred embodiment. See Ex. 1001, 1:35-43, 4:58-65, 5:37-42, 13:57-62, 14:37-40, 15:59-61. Neither Patent Owner nor its declarant has identified any lexicographic definition or disclaimer of claim scope sufficient to limit the claim term “short-term knowledge” so as to require interpretations of recognized user utterances. On this record, and consistent with the ordinary and customary meaning of the claim language as used in the Specification, we reject Patent Owner’s contention that the recited “short-term knowledge” must necessarily include interpreted utterances, or interpreted information regarding user intent, assumptions, expectations, goals, or preferences. We decline to further construe this claim term. See Nidec, 868 F.3d at 1017. IPR2020-01346 Patent 9,015,049 B2 13 D. Overview of the Asserted References 1. Kennewick Kennewick is directed to “the retrieval of online information and processing of commands through a speech interface in a vehicle environment.” Ex. 1003 ¶ 2. Kennewick discloses “a mobile interactive natural language speech system . . . that includes a speech unit.” Id. ¶ 12. The system includes a speech interface device that receives spoken natural language queries, commands and/or other utterances from a user, and a computer device or system that receives input from the speech unit, processes the input (e.g., retrieves information responsive to the query, takes action consistent with the command, etc.), and responds to the user with a natural language speech response. Id. ¶ 14. Figure 5, reproduced below, illustrates an interactive natural language speech processing system. Ex. 1003 ¶ 92. IPR2020-01346 Patent 9,015,049 B2 14 Figure 5 depicts the components of an interactive natural language speech processing system. Ex. 1003 ¶ 118. The main user interface for the system is speech unit 128. Id. ¶ 121. Speech unit 128 includes one or more microphones (e.g., array microphone 134) to receive the utterances of a user. Id. Speech unit 128 encodes the received speech via speech code 138 and transmits the coded speech via transceiver module 130 to the main unit 98. Id. The coded speech is then passed to speech coder 122 for decoding. Id. ¶ 123. The decoded speech is then processed by the speech recognition engine 120 using data in the dictionary and phrases module 112 and data received from agents 106. Id. 
The recognized words and phrases are then processed by parser 118, which transforms them into complete commands and questions using data supplied by the agents, where the agents then process the commands or questions. Id. The agents are used to determine context for the questions and commands. Id. ¶ 128. 2. Cooper Cooper is directed to “a computer-based, personal virtual assistant for managing communications and information, whose behavior can be changed by the user and whose behavior automatically adapts to the user.” Ex. 1004, 1:13-16. The virtual assistant includes “a voice user interface for inputting information into and receiving information from the virtual assistant by speech,” “a communications network,” and “a virtual assistant application running on a remote computer,” where the remote computer is “electronically coupled to the user interface via the communications network,” and where “the behavior of the virtual assistant changes responsive to user input.” Id. at 2:36-45. In one example, the virtual assistant has a politeness setting, which, when enabled, causes the virtual assistant to include words or phrases IPR2020-01346 Patent 9,015,049 B2 15 associated with polite discourse in the output from the virtual assistant (e.g., “please,” “thank you,” “thanks,” “excuse,” “pardon,” “may I,” “would you mind,” etc.). Ex. 1004, 43:48-55. In this example, if words of polite discourse are included in the input received from the user, the virtual assistant automatically enables the politeness setting. Id. at 43:55-61. In another example, the user information input in the virtual assistant is information about the user’s emotion, which is based on information about the user’s voice volume, word choice and speech rate. Id. at 43:62-65. Based on this information, the virtual assistant automatically determines the user’s emotional state (e.g., calm or angry) and includes words in the output of the virtual assistant appropriate for the user’s emotional state (e.g., words associated with submissive discourse, such as “sorry,” “regret” and “apologize,” when the user’s emotional state is angry). Id. at 43:65-44:6. E. Obviousness under 35 U.S.C. § 103 1. Objective Evidence of Nonobviousness Patent Owner does not rely on objective evidence of nonobviousness in this case. See generally Resp. (not alleging that any such objective evidence is present). 2. Patent Owner’s Argument that Dr. Smyth’s Testimony is Unreliable Patent Owner also argues that we should attribute little weight to the testimony of Petitioner’s declarant, Dr. Smyth, because “his evasiveness during his deposition undermines his credibility and because his testimony lacks independence.” Resp. 14. For example, Patent Owner contends that Dr. Smyth provided evasive answers when questioned about disclosures in Kennewick. Resp. 14-15. Patent Owner also contends that Dr. Smyth’s testimony “lacks IPR2020-01346 Patent 9,015,049 B2 16 independence” because “his declaration and the petition are essentially identical in language.” Id. at 16. Patent Owner also argues that Dr. Smyth was evasive when questioned about his discussions with counsel during a deposition break. Id. at 17. Patent Owner has not sought to exclude Dr. Smyth’s testimony; it has merely raised questions about Dr. Smyth’s credibility and the weight we should provide this evidence. We have reviewed Patent Owner’s arguments regarding Dr. Smyth’s credibility and considered them when deciding how much weight to accord Dr. Smyth’s testimony. 
Notably, however, our Decision is supported by the entirety of the record, including the teachings of the prior art references detailed below. 3. Scope and Content of the Prior Art; Differences between Claimed Subject Matter and Prior Art; Obviousness a) Ground 1: Obviousness of Claims 1-6, 10-16, and 20 over Kennewick (1) Independent Claims 1 and 11 Claim 1 Claim 11 1. A computer-implemented method of facilitating conversation- based responses, the method being implemented by a computer system that includes one or more physical processors executing one or more computer program instructions which, when executed, perform the method, the method comprising: 11. A system for facilitating conversation-based responses, the system comprising: one or more physical processors programmed with one or more computer program instructions such that, when executed, the one or more computer program instructions cause the one or more physical processors to: With respect to the preambles of claims 1 and 11 and the first sub- paragraph of claim 11, Petitioner alleges that Kennewick discloses a method of facilitating conversation-based responses using a computer that includes IPR2020-01346 Patent 9,015,049 B2 17 one or more physical processors that execute one or more sets of computer program instructions. Pet. 12-16, 43-44; Ex. 1002 ¶¶ 61-66, 103-104. In particular, Petitioner alleges that Kennewick’s “systems and methods for responding to natural language speech utterance” (Pet. 12 (citing Ex. 1003, Code (54))) are implemented on one or more “computer alternatives” such as a “fixed computer” or “handheld device” (Pet. 13 (citing Ex. 1003 ¶ 13, Fig. 3)), that contain “one or more processing units” (Pet. 14 (citing Ex. 1003 ¶¶ 111, 115)) used to execute “software that is installed onto the computer” (Pet. 15 (citing Ex. 1003 ¶ 13)). Patent Owner does not contest Petitioner’s contentions that Kennewick teaches or suggests all elements of the preambles of claims 1 and 11 and the first sub-paragraph of claim 11. See Resp. 17-40. Thus, the sufficiency of Petitioner’s contentions is not subject to dispute. See In re Nuvasive, Inc., 841 F.3d 966, 974 (Fed. Cir. 2016) (“The Board, having found the only disputed limitations together in one reference, was not required to address undisputed matters.”); see also Paper 8, 8 (emphasizing that “arguments not raised in the response may be deemed waived”). We also find that Petitioner’s contentions are fully supported by the record. See Pet. 12-16, 43-44; Ex. 1002 ¶¶ 61-66, 103-104. Accordingly, we are persuaded that Kennewick teaches or suggests all elements of the preambles of claims 1 and 11 and the first sub-paragraph of claim 11.3 Claim 1 Claim 11 receiving, at the computer system, a natural language utterance during a conversation between a user and the computer system; receive a natural language utterance during a conversation between a user and the system; 3 In view of this determination, we need not determine whether the preambles of claims 1 and 11 are limiting. IPR2020-01346 Patent 9,015,049 B2 18 With respect to the first sub-paragraph of claim 1 and the second sub- paragraph of claim 11, Petitioner contends that Kennewick “receives spoken natural language queries, commands and/or other utterances from a user,” “processes the input,” and “responds to the user with a natural language speech response.” Pet. 16 (citing Ex. 1003 ¶¶ 12, 14, 92, Fig. 5, claims 1, 28, 43), 44; Ex. 1002 ¶¶ 67-68, 105. 
Patent Owner does not contest Petitioner’s contentions regarding these claim limitations. See Resp. 17-40; Nuvasive, 841 F.3d at 974; Paper 8, 5. We also determine that Petitioner’s contentions are fully supported by the record. See Pet. 16, 44; Ex. 1002 ¶¶ 67-68, 105; Ex. 1003 ¶¶ 12, 14, 92, Fig. 5, claims 1, 28, 43. Accordingly, we are persuaded that Kennewick teaches or suggests all elements of the first sub-paragraph of claim 1 and the second sub-paragraph of claim 11. Claim 1 Claim 11 identifying, by the computer system, a first model that includes short-term knowledge about the conversation, wherein the short-term knowledge is based on one or more prior natural language utterances received during the conversation; identify a first model that includes short-term knowledge about the conversation, wherein the short-term knowledge is based on one or more prior natural language utterances received during the conversation; Petitioner contends that Kennewick discloses identifying a first model that includes short-term knowledge about the conversation. Pet. 17-21, 45; Ex. 1002 ¶¶ 69-74, 106. In particular, Petitioner contends that during a dialog, Kennewick stores each utterance along with tags that tie the utterance to the correct dialog and user. Pet. 17 (citing Ex. 1003 ¶¶ 155-156). Petitioner contends that Kennewick then employs this tagged dialog history (i.e., the utterances and associated tags) to derive a “fuzzy set of probabilities or prior possibilities” that are used to “maximize[] the probability of correct recognition at each stage of the dialog.” Pet. 17-18 (quoting Ex. 1003 ¶¶ 157, 162). Petitioner contends that Kennewick also uses this tagged dialog history to “score the relevance of the results” and “determine a context for an utterance.” Pet. 17 (quoting Ex. 1003 ¶¶ 170, 160). Petitioner contends that one of ordinary skill in the art would have understood “that the dialog history including the tags represents the claimed first model including short-term knowledge about the conversation.” Pet. 19 (citing Ex. 1002 ¶ 73). Petitioner also contends that this tagged dialog history is “the claimed short-term knowledge reflecting details about the user’s utterances during the conversation.” Id. (citing Ex. 1002 ¶ 73). Patent Owner argues in response that Kennewick does not teach or suggest “a first model that includes short-term knowledge about the conversation.” Resp. 18. In particular, Patent Owner argues that “Kennewick’s tagged recognized words are not short-term knowledge” and that “Kennewick’s tagged recognized words are not a model that includes short-term knowledge.” Id. at 24, 31. Patent Owner bases many of its arguments on its interpretation of “short-term knowledge,” which we examined above. Patent Owner does not contest Petitioner’s contentions regarding the remaining elements recited in these sub-paragraphs of claims 1 and 11. See Resp. 17-40; Nuvasive, 841 F.3d at 974; Paper 8, 5. We also determine that Petitioner’s contentions regarding the undisputed portions of these sub-paragraphs are fully supported by the record. Pet. 17-21, 45; Ex. 1002 ¶¶ 69-74, 106. We now turn to the disputed portions of these sub-paragraphs. 
Patent Owner first contends that Petitioner’s claim mapping is ambiguous because “[t]he [P]etition asserts a ‘dialog history’ is the first model and ‘dialog history information’ is the short-term knowledge, yet the [P]etition does not explain how these terms are different (if at all).” Resp. 19. Patent Owner then indicates that it will resolve this ambiguity by “analyz[ing] Petitioner’s arguments under the presumption that Petitioners assert that Kennewick’s tagged recognized words serve as both the claimed short-term knowledge and the first model including short-term knowledge.” Id. at 21. Patent Owner’s presumption is consistent with how we read Petitioner’s contentions in our Institution Decision. See Institution Decision, 16-17 (“Petitioner contends . . . ‘that the dialog history including the tags represents the claimed first model including short-term knowledge’. . . . Petitioner also contends that this tagged dialog history constitutes ‘the claimed short-term knowledge.’”). We apply the same reading of Petitioner’s contentions in this Decision. Accordingly, any ambiguity in Petitioner’s claim mapping has been resolved, and Patent Owner had sufficient notice of Petitioner’s contentions and was not prevented from responding to their substance. Next, Patent Owner discusses what it says are “components of natural language conversational systems at the time of the ’049 Patent.” Resp. 22. Pointing to declaration testimony, Patent Owner argues that prior art interactive conversational systems typically included “automatic speech recognition (‘ASR’) (which converts a user’s utterance into recognized words), natural language understanding (‘NLU’) (which interprets the recognized words), and user modeling (‘UM’) (which uses the interpreted words to create representations of knowledge about the user).” Id. (citing Ex. 2001 ¶¶ 54-56; Ex. 1002 ¶¶ 39-42). According to Patent Owner, “recognized words alone” are typically not sufficient “to understand the user’s goals and intent,” so it is “up to the NLU component to ‘interpret the meaning of the recognized words.’” Resp. 22. Addressing the “short-term knowledge” claim limitation, Patent Owner argues that “Kennewick’s dialog history, which consists of tagged recognized words, is created by an ASR component. However, the ’049 Patent’s description of models that include short-term knowledge necessitates interpretation of the words, which would be performed by NLU and UM components.” Resp. 24; see also Resp. 30 (“the ’049 Patent’s ‘conversation history,’ which is built by an NLU component, is clearly not the same as Kennewick’s tagged recognized words, which is built by an ASR component.”). Patent Owner similarly argues that “the petition fails to explain how tagged recognized words would constitute any sort of knowledge about the conversation” because Kennewick’s tagged recognized words are not “interpreted utterances.” Resp. 25-26. These arguments are not persuasive because we have rejected Patent Owner’s contention that the limitation “short-term knowledge” requires interpreted utterances of the type that Patent Owner contends would only be generated by an NLU or UM component. See supra § II.C.2. As we explained in our Institution Decision, “Kennewick’s tagged dialog history is an electronic representation of a user’s utterances. See Ex. 1003 ¶¶ 155-156. This tagged dialog history is not ‘the dialog itself’ . .
.; it is electronic information that represents the dialog and, thus, the user’s behavior. See Ex. 1003 ¶¶ 155-156.” Institution Decision, 17. The ’049 Patent Specification indicates that short-term knowledge is knowledge that “may accumulate during a single conversation, where input received during a single conversation may be retained.” Ex. 1001, 4:63-67. Patent Owner IPR2020-01346 Patent 9,015,049 B2 22 does not persuasively dispute that Kennewick’s tagged dialog history is information that is accumulated during a single conversation, or that Kennewick’s tagged dialog history is electronic information that represents a dialog, and thus, the user’s behavior. See Resp. 17-40. Accordingly, we are persuaded on this record that Kennewick’s tagged dialog history is “short- term knowledge about the conversation, wherein the short-term knowledge is based on one or more prior natural language utterances received during the conversation,” as recited in claims 1 and 11. See Pet. 17-21, 45; Ex. 1002 ¶¶ 69-74, 106; Reply 6-18. Patent Owner next argues that Kennewick’s tagged dialog history is not a “model that includes short-term knowledge” because “the ’049 Patent describes ‘models that include short-term knowledge about the conversation’ as knowledge models” and “Kennewick’s dialog history . . . is not a representation of knowledge and therefore not a model that includes short- term knowledge about the conversation.” Resp. 31; see also Resp. 34 (“[t]he ’049 patent’s specification . . . generally refers to [models] as ‘knowledge models’”), 35 (“the claimed ‘first model’” must be “some ‘approximation, representation, or idealization’ of short term knowledge about the conversation, i.e., a knowledge model or knowledge representation as described in the specification”). These arguments are not persuasive because we have rejected Patent Owner’s contention that short-term knowledge must include interpreted utterances. See supra § II.C.2. In addition, Claims 1 and 11 merely recite a “first model that includes short-term knowledge about the conversation.” They do not recite that the “first model” must be a “knowledge model” or “knowledge representation,” and Patent Owner does not identify any lexicographic definition or clear and unambiguous disclaimer that could IPR2020-01346 Patent 9,015,049 B2 23 justify limiting the ordinary and customary meaning of the claims in this manner. See Resp. 31-40. Patent Owner also argues that if we accept Petitioner’s “overly broad application of the definition of ‘model,’ the Board’s institution decision would be a model, as would a digital copy of Moby Dick, simply because they purportedly represent the author’s behavior of writing the material.” Resp. 32. But Petitioner’s application of this definition is not as broad as Patent Owner contends. Kennewick’s tagged dialog history is information generated when speech recognition engine 120 attempts to recognize words and phrases, together with tags generated when speech recognition engine 120 attempts to identify the user’s identity. See Ex. 1003 ¶ 155. This tagged dialog history bears little resemblance to words typed directly into a computer by a judge or author. Patent Owner additionally argues that “academic literature” supports its contentions that “mere tagged recognized words do not constitute representation of shared knowledge.” Resp. 37. 
But, as discussed above, Claims 1 and 11 do not require that the recited “first model” be a “representation of shared knowledge.” Moreover, this extrinsic literature evidence has little bearing on the issue of whether Kennewick’s disclosure teaches or suggests a “first model” of the type recited in claims 1 and 11. Patent Owner further argues that during prosecution, “the examiner made comments consistent with the understanding” that a model could not be “mere tagged recognized words.” Resp. 39-40 (citing Ex. 2001 ¶ 90; Ex. 1008, 106). This argument is not persuasive at least because there is no evidence that the Examiner was considering a tagged dialog history of the type taught by Kennewick, and because the agreed-to construction of “model” was not before the Examiner. See Ex. 1008, 106. IPR2020-01346 Patent 9,015,049 B2 24 The claim term “model” includes an “approximation, representation, or idealization of selected aspects of the structure, behavior, operation, or other characteristics of a real-world process, concept, or system.” See supra § II.C.2. As we found in our Institution Decision, “Kennewick’s tagged dialog history is an electronic representation of a user’s utterances.” Institution Decision, 17. This tagged dialog history contains the results of speech recognition engine 120’s attempts to recognize words and phrases (i.e., representations and approximations of spoken words and phrases), together with tags generated when speech recognition engine 120 attempts to identify the speaker of words and phrases (i.e., representations and approximations of the speaker’s identity). See Ex. 1003 ¶ 155. Kennewick’s dialog history, thus, fits squarely within the agreed definition of “model.” Accordingly, on this record, we are persuaded that Kennewick’s tagged dialog history teaches or suggests a “model that includes short-term knowledge” of the type recited in claims 1 and 11. Pet. 17-21, 45; Ex. 1002 ¶¶ 69-74, 106; Reply 3-6. IPR2020-01346 Patent 9,015,049 B2 25 Claim 1 Claim 11 identifying, by the computer system, based on the short-term knowledge, context information for the natural language utterance; determining, by the computer system, based on the context information, an interpretation of the natural language utterance; and generating, by the computer system, based on the interpretation of the natural language utterance, a response to the natural language utterance. identify, based on the short-term knowledge, context information for the natural language utterance; determine, based on the context information, an interpretation of the natural language utterance; and generate, based on the interpretation of the natural language utterance, a response to the natural language utterance. With respect to the last three sub-paragraphs of claims 1 and 11, Petitioner contends that Kennewick’s computer-implemented system and method “may determine a context for an utterance by applying prior probabilities or fuzzy possibilities” to the “dialog history” and “context stack contents” (i.e., the Petitioner-identified short-term knowledge). Pet. 21-23 (citing Ex. 1003 ¶¶ 7, 160), 45; Ex. 1002 ¶¶ 75-76, 107. Petitioner also contends that Kennewick uses this context information to determine how to interpret the natural language utterance. Pet. 23-26. For example, Kennewick’s system may use context information to determine whether the keyword “temperature” is being used to describe weather or a measurement. Pet. 23 (citing Ex. 1003 ¶ 60), 45; Ex. 1002 ¶¶ 77-78, 108. 
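Purely as an illustration of this kind of context-based disambiguation (the sketch below is hypothetical and is not drawn from Kennewick or the Petition), a system might weigh words from prior utterances in the dialog history when deciding whether “temperature” refers to the weather or to a measurement:

    def interpret_temperature(dialog_history: list[str]) -> str:
        # Hypothetical sketch: score each candidate context using simple
        # counts of cue words found in earlier utterances.
        weather_cues = {"forecast", "outside", "rain", "weather"}
        measurement_cues = {"oven", "engine", "sensor", "thermostat"}
        words = [w for utterance in dialog_history for w in utterance.lower().split()]
        weather_score = sum(1 for w in words if w in weather_cues)
        measurement_score = sum(1 for w in words if w in measurement_cues)
        return "weather" if weather_score >= measurement_score else "measurement"

    # Earlier utterances about the forecast bias the interpretation toward weather.
    print(interpret_temperature(["what is the forecast for tomorrow",
                                 "will it rain this weekend"]))  # -> weather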
Petitioner further contends that Kennewick generates natural language responses based on these context-based interpretations of natural language utterances. Pet. 26-27. For example, Kennewick uses context-based interpretations to select an appropriate agent for responding to the user’s utterance, and the IPR2020-01346 Patent 9,015,049 B2 26 selected agent develops a response that is delivered to the user by way of text-to-speech engine 124 and speaker 136. Pet. 26-27 (citing Ex. 1002 ¶¶ 124, 165, 178-82), 45; Ex. 1002 ¶¶ 82, 109. Patent Owner does not contest Petitioner’s contentions regarding these claim limitations. See Resp. 17-40; Nuvasive, 841 F.3d at 974; Paper 8, 5. We also determine that Petitioner’s contentions are fully supported by the record. Pet. 21-23, 45; Ex. 1002 ¶¶ 75-76, 107. Accordingly, we are persuaded that Kennewick teaches or suggests all elements recited in the final three sub-paragraphs of claims 1 and 11. (2) Dependent Claims 2, 4, 10, 12, 14, and 20 2. The method of claim 1, wherein determining the interpretation of the natural language utterance comprises determining, based on the context information, an interpretation of one or more recognized words of the natural language utterance. 12. The system of claim 11, wherein determining the interpretation of the natural language utterance comprises determining, based on the context information, an interpretation of one or more recognized words of the natural language utterance. 4. The method of claim 1, further comprising: identifying, by the computer system, a second model that includes long-term knowledge about one or more prior conversations between the user and the computer system, wherein determining the context information comprises determining, based on the short- term knowledge and the long- term knowledge, the context information. 14. The system of claim 11, wherein the one or more physical processors are caused to: identify a second model that includes long-term knowledge about one or more prior conversations between the user and the system, wherein determining the context information comprises determining, based on the short- term knowledge and the long- term knowledge, the context information. 10. The method of claim 1, wherein the natural language utterance and the one or more prior 20. The system of claim 11, wherein the natural language utterance and the one or more prior IPR2020-01346 Patent 9,015,049 B2 27 natural language utterances are associated with the user. natural language utterances are associated with the user. Petitioner contends that Kennewick teaches or suggests all limitations recited in dependent claims 2, 4, 10, 12, 14, and 20, and supports these contentions with declaration testimony. See Pet. 27-30, 33-37, 42-43, 45-48; Ex. 1002 ¶¶ 83-87, 93, 94, 99-102, 110, 113, 118. Patent Owner does not contest Petitioner’s contentions regarding these claim limitations. See Resp. 17-40; Nuvasive, 841 F.3d at 974; Paper 8, 5. We determine that Petitioner’s contentions regarding these dependent claims are fully supported by the record. See Pet. 27-30, 33-37, 42-43, 45-48; Ex. 1002 ¶¶ 83-87, 93, 94, 99-102, 110, 113, 118. Accordingly, we are persuaded that Kennewick teaches or suggests all elements recited in dependent claims 2, 4, 10, 12, 14, and 20. (3) Dependent Claims 3, 5, 6, 13, 15, and 16 3. 
The method of claim 1, further comprising: updating, by the computer system, based on information about the natural language utterance, during the conversation, the short-term knowledge in the first model, wherein the updated short-term knowledge is used to determine subsequent context information for one or more subsequent natural language utterances received during the conversation. 13. The system of claim 11, wherein the one or more physical processors are caused to: update, based on information about the natural language utterance, during the conversation, the short-term knowledge in the first model, wherein the updated short-term knowledge is used to determine subsequent context information for one or more subsequent natural language utterances received during the conversation. 5. The method of claim 4, further comprising: updating, by the computer system, based on information about the natural language utterance, the 15. The system of claim 14, wherein the one or more physical processors are caused to: update, based on information about the natural language IPR2020-01346 Patent 9,015,049 B2 28 long-term knowledge in the second model, wherein the updated long-term knowledge is used to determine subsequent context information for one or more subsequent natural language utterances received during the conversation. utterance, the long-term knowledge in the second model, wherein the updated long-term knowledge is used to determine subsequent context information for one or more subsequent natural language utterances received during the conversation. 6. The method of claim 5, wherein the updated long-term knowledge is used to determine subsequent context information for one or more subsequent conversations. 16. The system of claim 15, wherein the updated long-term knowledge is used to determine subsequent context information for one or more subsequent conversations. Petitioner contends that Kennewick teaches or suggests all limitations recited in dependent claims 3, 5, 6, 13, 15, and 16, and supports these contentions with declaration testimony. Pet. 30-33, 38-42, 46-48; Ex. 1002 ¶¶ 88-92, 95-98, 111-112, 115-116. Patent Owner argues in response that Kennewick does not teach or suggest “determining a subsequent context within the same conversation.” Resp. 40. Patent Owner does not contest Petitioner’s contentions regarding the other elements recited in claims 3, 5, 6, 13, 15, and 16. See Resp. 17-40; Nuvasive, 841 F.3d at 974; Paper 8, 5. We also determine that Petitioner’s contentions regarding the undisputed portions of these dependent claims are fully supported by the record. Pet. 30-33, 38-42, 46-48; Ex. 1002 ¶¶ 88-92, 95-98, 111-112, 115-116. We now turn to the disputed limitations. According to Patent Owner, claims 3 and 13 “recite multiple context determinations for multiple utterances within the same conversation,” but “the [P]etition does not cite to anything in Kennewick that discloses multiple IPR2020-01346 Patent 9,015,049 B2 29 context determinations in the same conversation, much less anything approaching the level of detail disclosed in the ’049 Patent.” Resp. 41-42. Patent Owner’s “level of detail” argument is unpersuasive because it improperly compares Kennewick to the Specification of the ’049 Patent. 
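Purely for illustration (hypothetical code, not drawn from the record or from Kennewick), the kind of per-utterance updating at issue in claims 3 and 13, in which short-term knowledge grows as the conversation proceeds and the updated knowledge informs subsequent context determinations, might be sketched as follows:

    def contexts_for_conversation(utterances: list[str]) -> list[str]:
        # Hypothetical sketch: short-term knowledge (the history) is updated
        # with each utterance, and every subsequent context determination is
        # based on the history as updated so far.
        history: list[str] = []
        contexts = []
        for utterance in utterances:
            context = "follow-up" if history else "new topic"  # toy context rule
            contexts.append(context)
            history.append(utterance)  # update during the conversation
        return contexts

    print(contexts_for_conversation(["what restaurants are nearby",
                                     "which ones are open now"]))  # -> ['new topic', 'follow-up']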
With respect to Patent Owner’s remaining arguments, we note that claims 3 and 13 recite updating “based on information about the natural language utterance, during the conversation, the short-term knowledge,” and then using the updated short-term knowledge “to determine subsequent context information for one or more subsequent natural language utterances received during the conversation.” Petitioner points out that Kennewick’s system builds a dialog history of a conversation over time by recognizing words and phrases in each utterance, and tagging the words and phrases. Pet. 30-31, 46; see also Ex. 1003 ¶ 155 (“The Speech recognition engine 120 may recognize words and phrases . . . . [and] determine the user’s identity . . . by voice and name for each utterance. Recognized words and phrases are tagged with this identity in all further processing.”). As Petitioner also notes (see Pet. 31-32), Kennewick’s “parser may determine a context for an utterance by applying prior probabilities or fuzzy possibilities to” several types of information, including “dialog history.” Ex. 1003 ¶ 160. The “probabilities and possibilities” that Kennewick’s parser uses to determine context “can be dynamically updated based on a number of criteria,” including “user dialog history.” Id. ¶ 157. It is thus apparent that Kennewick’s tagged dialog history (i.e., short-term knowledge) is updated based on information about each natural language utterance in a conversation, and that Kennewick’s parser, which determines context for each utterance, uses the updated tagged dialog history to determine context information for subsequent utterances within the same conversation. Accordingly, we are persuaded that Kennewick teaches updating “based on information about the natural language utterance, during the conversation, the short-term knowledge,” and then using the updated short-term knowledge “to determine subsequent context information for one or more subsequent natural language utterances received during the conversation,” as recited in claims 3 and 13. Pet. 30-33, 46; Ex. 1002 ¶¶ 88-92, 111; Reply 18-19. Claims 5 and 15 recite updating “based on information about the natural language utterance, the long-term knowledge” and then using the updated long-term knowledge “to determine subsequent context information for one or more subsequent natural language utterances received during the conversation.” Petitioner contends that Kennewick’s user profiles are “long-term knowledge” of the type recited in these claims, and Patent Owner does not dispute this contention. See Pet. 33-36; Resp. 17-40. Patent Owner contends that Petitioner’s analysis of claims 5, 6, 15, and 16 “is similar to the [P]etition’s analysis for claims 3 and 13,” and that Petitioner’s contentions similarly fail “because [the petition] does not identify nor explain how Kennewick discloses or renders obvious multiple context determinations.” Resp. 43. Patent Owner’s argument regarding multiple context determinations is unpersuasive because, as discussed above, Kennewick’s parser may determine a context for an utterance by applying prior probabilities or fuzzy possibilities (see Ex. 1003 ¶ 160), and these “probabilities and possibilities” may be dynamically updated with each utterance (see id. ¶¶ 155, 157). Kennewick teaches that the user profile (i.e., long-term knowledge) includes information that is updated based on each utterance. See, e.g., id.
¶ 31 (“Information in the [user] profile may be added and updated automatically as the user uses the system. . . . Examples of information in a user profile includes history of questions asked.”). Kennewick also teaches that the probabilities and possibilities that Kennewick’s parser uses to determine context for each utterance are “dynamically updated based on a number of criteria” including “the user profile.” Id. ¶¶ 155, 157. Accordingly, we are persuaded that Kennewick teaches or suggests updating “based on information about the natural language utterance, the long-term knowledge” and then using the updated long-term knowledge “to determine subsequent context information for one or more subsequent natural language utterances received during the conversation,” as recited in or required by claims 5, 6, 15, and 16. Pet. 38-42, 47-48; Ex. 1002 ¶¶ 95-97, 115-116; Reply 18-19. (4) Legal Conclusion regarding Obviousness For the reasons explained above, Petitioner has demonstrated persuasively that Kennewick teaches or suggests all limitations recited in claims 1-6, 10-16, and 20. Weighing all of the evidence and arguments offered by the parties, we determine that Petitioner has demonstrated by a preponderance of the evidence that claims 1-6, 10-16, and 20 are unpatentable over Kennewick pursuant to 35 U.S.C. § 103(a). b) Ground 2: Obviousness of Claims 7-9 and 17-19 over Kennewick and Cooper (1) Dependent Claims 7-9 and 17-19 7. The method of claim 1, further comprising: identifying, by the computer system, based on the short-term knowledge, a manner in which the natural language utterance is spoken, 17. The system of claim 11, wherein the one or more physical processors are caused to: identify, based on the short-term knowledge, a manner in which the natural language utterance is spoken, wherein generating the response comprises generating, based on the identified manner and the interpretation of the natural language utterance, the response. wherein generating the response comprises generating, based on the identified manner and the interpretation of the natural language utterance, the response. 8. The method of claim 7, further comprising: identifying, by the computer system, a second model that includes long-term knowledge about one or more prior conversations between the user and the computer system, wherein identifying the manner comprises identifying, based on the short-term knowledge and the long-term knowledge, the manner. 18. The system of claim 17, wherein the one or more physical processors are caused to: identify a second model that includes long-term knowledge about one or more prior conversations between the user and the computer system, wherein identifying the manner comprises identifying, based on the short-term knowledge and the long-term knowledge, the manner. 9. The method of claim 7, wherein generating the response comprises generating, based on a response format associated with the identified manner, the response. 19. The system of claim 17, wherein generating the response comprises generating, based on a response format associated with the identified manner, the response. Petitioner identifies portions of Kennewick and Cooper that teach or suggest each limitation recited in claims 7-9 and 17-19 (see Pet. 48-61), and Petitioner’s contentions are supported by testimony from Dr. Smyth (see Ex. 1002 ¶¶ 119-143). 
Petitioner also identifies rationales for why a person of ordinary skill in the art would have had reason to combine the teachings of Kennewick and Cooper to arrive at the claimed subject matter. Pet. 48-52, 54-61 (citing Ex. 1002 ¶¶ 119-124, 127-129, 132-143). Patent Owner does not contest Petitioner’s contentions that Kennewick and Cooper teach or suggest the limitations recited in dependent claims 7-9 and 17-19. See Resp. 44-59; Nuvasive, 841 F.3d at 974; Paper IPR2020-01346 Patent 9,015,049 B2 33 8, 5. Petitioner’s contentions regarding these claim limitations are also fully supported by the record. Pet. 48-61; Ex. 1002 ¶¶ 119-143. Patent Owner argues that claims 7-9 and 17-19 depend from independent claims 1 and 11, respectively, and that Ground 2 thus fails for the same reasons discussed above with respect to Ground 1. Resp. 44. This argument is not persuasive because Petitioner has demonstrated that Kennewick teaches or suggests all limitations of claims 1 and 11. See supra § II.E.3.a. Patent Owner also argues that “the [P]etition has not established an adequate rationale to combine Kennewick and Cooper.” See Resp. 44. We address this argument in detail below, after summarizing Petitioner’s rationales to combine. Petitioner contends that “both Kennewick and Cooper describe speech recognition technologies that generally receive speech input and provide appropriate speech responses based on that input.” Pet. 49 (citing Ex. 1002 ¶¶ 121; Ex. 1003 ¶ 11; Ex. 1004, 2:36-40). Petitioner also contends that both Kennewick and Cooper “disclose the use of context in connection with speech processing systems.” Pet. 49 (citing Ex. 1003 ¶ 118; Ex. 1004, 44:13-14). Thus, Petitioner reasons, Kennewick and Cooper are both directed to the “same field of speech processing,” and “design incentives or other market forces would have motivated a POSITA to combine the reference in predictable ways . . . to yield the claimed invention.” Pet. 49. Petitioner also contends that teachings in Kennewick would have led a POSITA to incorporate the teachings of Cooper into Kennewick’s system. Pet. 49 (citing Ex. 1002 ¶ 122). For example, Petitioner contends that Kennewick teaches a system that “may employ one or more dynamically invokeable personalities” with characteristics such as “sympathy, irritation, IPR2020-01346 Patent 9,015,049 B2 34 and helpfulness” in order to “make the user’s responses to questions and commands seem more natural.” Pet. 50 (emphases omitted); Ex. 1003 ¶ 35. According to Petitioner, “Cooper also recognizes a similar need in the prior art to adapt system behavior based on the user’s input, using examples of a polite discourse and emotional state.” Pet. 50 (citing Ex. 1004, 2:26-32); see also Ex. 1004, 44:1-6 (“If the user’s emotional state is angry, the output of the virtual assistant could automatically include words associated with submissive discourse, such as ‘sorry,’ ‘regret’ and ‘apologize.’”). Petitioner argues that “given Kennewick’s teaching that use of simulated personality characteristics is desirable to present responses in a manner most natural to the user, a POSITA would have been motivated to seek other ways to match responses to the user’s personality and manner like those found in Cooper.” Pet. 50-51. Petitioner additionally contends that Kennewick discloses that the user’s “special use of terminology” may be included in a user profile and used to interpret the user’s commands and questions. Pet. 51 (citing Ex. 1003 ¶ 143). 
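By way of illustration only (hypothetical code, not drawn from either reference), adapting the wording of a response to the manner in which the user speaks, for example by echoing polite discourse or answering an angry user with submissive wording, might be sketched as follows:

    POLITE_WORDS = {"please", "thank", "thanks", "pardon", "excuse"}

    def adapt_response(user_words: list[str], emotional_state: str, answer: str) -> str:
        # Hypothetical sketch: adjust the response wording based on the
        # user's manner of speaking (polite terminology, emotional state).
        response = answer
        if emotional_state == "angry":
            response = "Sorry, I apologize. " + response
        if any(word.lower().strip(",.!?") in POLITE_WORDS for word in user_words):
            response = "Certainly. " + response + " Thank you."
        return response

    print(adapt_response("please find a gas station".split(), "calm",
                         "The nearest gas station is two miles ahead."))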
According to Petitioner, a POSITA would have been motivated to improve Kennewick’s system by incorporating Cooper’s disclosure of tailoring the terminology used in a discourse to the user’s personality. Pet. 51 (citing Ex. 1003 ¶ 143; Ex. 1004, 44:3-4, 43:51-55).

Patent Owner argues in response that Petitioner’s “analogous art assertions are an insufficient rationale to combine.” Resp. 45. But the ’049 Patent is directed to an interface that receives and recognizes voice input and generates an appropriate response. See, e.g., Ex. 1001, 2:18-37 (“The system may receive an input, which may include a human utterance. . . . The utterance component of the input may be processed by a speech recognition engine. . . . [T]he conversational speech engine may communicate with one or more databases to generate an adaptive conversational response.”). We find on this record that Kennewick and Cooper are analogous art because both references describe speech recognition systems that receive speech input and provide appropriate spoken responses. See Pet. 49 (citing Ex. 1002 ¶ 121; Ex. 1003 ¶ 11; Ex. 1004, 2:36-40); Reply 18-22.

Patent Owner is correct in its assertion that the analogous art inquiry is “merely a threshold inquiry,” and that establishing analogousness does not demonstrate a rationale to combine. See Resp. 48-49. But, as discussed above, Petitioner has set forth at least two specific additional rationales to combine the teachings of Kennewick and Cooper. Accordingly, we are not persuaded that Petitioner is improperly relying solely on the analogous nature of Kennewick and Cooper to establish a rationale to combine.

Patent Owner also argues that Petitioner has failed “to explain how the proposed combination would be achieved and would lead to predictable results.” Resp. 49. This argument also is unpersuasive. Patent Owner argues that we should disregard Dr. Smyth’s testimony that the teachings of Cooper and Kennewick could have been combined in a modular way because Dr. Smyth “did not consider how Cooper implements the relied-upon functionality.” Id. But the test for obviousness is whether “a skilled artisan would have been motivated to combine the teachings of the prior art references to achieve the claimed invention.” Pfizer, Inc. v. Apotex, Inc., 480 F.3d 1348, 1361 (Fed. Cir. 2007). The test “is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference.” In re Keller, 642 F.2d 413, 425 (CCPA 1981). Thus, even if Dr. Smyth did not consider the specifics of how Cooper implemented the relied-upon functionality, this does not demonstrate that Dr. Smyth erred in determining that a skilled artisan would have been motivated to incorporate Cooper’s teachings regarding recognizing and responding appropriately to the user’s manner of speech into Kennewick’s system. On this record, we find that Petitioner has demonstrated that the proposed combination would have been achievable and predictable. See Pet. 48-59; Ex. 1002 ¶¶ 121-132.

Patent Owner next argues that the Petition’s “teachings, suggestions, and motivations to combine Kennewick and Cooper are flawed.” Resp. 51. In particular, Patent Owner argues that “the petition fails to explain how Cooper’s discourse constitutes a simulated, natural-sounding personality” of the type described in Kennewick. Id. at 52.
We disagree because Cooper’s teachings of tailoring responses to the user’s emotional state (see, e.g., Ex. 1004, 2:26-32, 44:1-6) would necessarily result in more natural-sounding (i.e., human-like) responses.

Patent Owner additionally argues that Kennewick’s teaching of “special use of terminology” would not have motivated a POSITA to look to Cooper’s teachings “because the petition does not explain how standard polite language constitutes ‘special use of terminology.’” Resp. 54. But as Petitioner points out, Cooper’s system looks to the types of words spoken by the user to determine the manner of response. See, e.g., Pet. 53 (citing Ex. 1002 ¶¶ 125-127; Ex. 1004, 2:26-30, 43:51-61); Reply 24 (citing Ex. 1004, 43:55-61). Dr. Smyth also testifies that words of polite discourse are a type of “terminology,” and that the use of such words is a “special use of terminology.” Pet. 51 (citing Ex. 1002 ¶ 123); Reply 24 (citing Ex. 1002 ¶ 123). Thus, Petitioner has, in fact, explained this assertion.

Patent Owner’s final argument is that a POSITA would not have been motivated “to incorporate Cooper’s primitive speech recognition teachings into the natural language system of Kennewick.” Resp. 55. In particular, Patent Owner contends that “Cooper’s system of pre-determined queries and responses is too primitive to improve the simulated personalities taught in Kennewick.” Id. But the test for obviousness is not whether Cooper’s pre-determined queries and responses could be bodily incorporated into Kennewick’s system. In re Keller, 642 F.2d at 425. Patent Owner also fails to identify any instance in which Petitioner mentioned, much less relied on, Cooper’s teaching of pre-determined queries and responses as part of an allegedly obvious combination. See Resp. 55-59.

On this record, we are persuaded that a POSITA would have had reason to combine the teachings of Kennewick and Cooper to arrive at the claimed invention. In particular, we agree with Petitioner that Kennewick teaches the desirability of simulating aspects of human personality to, for example, avoid upsetting the user. See Pet. 50-51 (citing Ex. 1002 ¶ 122; Ex. 1003 ¶¶ 24, 35). We also agree with Petitioner that this teaching of Kennewick would have motivated a POSITA to seek out other teachings, such as those in Cooper, regarding adapting system behavior to a user’s emotional state. See Pet. 50 (citing Ex. 1004, 2:26-32), 53 (citing Ex. 1002 ¶¶ 125-127; Ex. 1004, 43:51-67, 2:26-30). We further agree that Kennewick’s teaching of formulating responses based on a user’s “special use of terminology” would have motivated a POSITA to seek out teachings in other references, such as Cooper, that disclose analyzing the user’s use of terminology in order to improve the nature of responses. See Pet. 51 (citing Ex. 1002 ¶ 123; Ex. 1003 ¶ 143; Ex. 1004, 43:51-55, 44:3-4). We additionally agree with Petitioner that these rationales would have led a POSITA to combine the teachings of Kennewick and Cooper to arrive at the inventions recited in claims 7-9 and 17-19 with a reasonable expectation of success. See Pet. 48-61; Ex. 1002 ¶¶ 119-143; Reply 20-28.
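The mechanics of the combination the parties debate can likewise be sketched at a high level. The profile structure, weighting scheme, word lists, and function names in the following sketch are hypothetical illustrations only and are not drawn from Kennewick or Cooper. The sketch pairs a Kennewick-style step that weights candidate contexts using a profile of the user’s past questions with a Cooper-style step that adds submissive-discourse wording when the user appears angry.

```python
# Hypothetical illustration only; the weighting scheme and word lists are invented.
from collections import Counter

# Hypothetical candidate contexts and associated keywords.
CANDIDATE_CONTEXTS = {
    "weather": ["rain", "sunny", "forecast"],
    "sports": ["score", "game", "team"],
}


def weight_contexts(utterance: str, question_history: list) -> dict:
    """Weight candidate contexts using a profile of the user's past questions
    (a rough stand-in for Kennewick's dynamically updated context probabilities)."""
    words = Counter(" ".join(question_history + [utterance]).lower().split())
    raw = {name: 1.0 + sum(words[k] for k in keywords)
           for name, keywords in CANDIDATE_CONTEXTS.items()}
    total = sum(raw.values())
    return {name: score / total for name, score in raw.items()}


def tailor_response(answer: str, user_is_angry: bool) -> str:
    """Add submissive-discourse wording when the user appears angry
    (a rough stand-in for Cooper's emotional-state-based tailoring)."""
    if user_is_angry:
        return "I am sorry and I apologize for the trouble. " + answer
    return answer


if __name__ == "__main__":
    weights = weight_contexts("Will it rain today", ["What is the forecast"])
    best = max(weights, key=weights.get)
    print(best, tailor_response("Light rain is expected.", user_is_angry=True))
```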
(2) Legal Conclusion regarding Obviousness

For the reasons explained above, Petitioner has demonstrated persuasively that Kennewick and Cooper teach or suggest all limitations recited in claims 7-9 and 17-19. Petitioner also has set forth at least two separate rationales having rational underpinnings for why a POSITA would have combined the teachings of Kennewick and Cooper to arrive at the inventions recited in claims 7-9 and 17-19 with a reasonable expectation of success. Weighing all of the evidence and arguments offered by the parties, we determine that Petitioner has demonstrated by a preponderance of the evidence that claims 7-9 and 17-19 are unpatentable over Kennewick and Cooper pursuant to 35 U.S.C. § 103(a).

III. CONCLUSION

For the foregoing reasons, Petitioner has shown by a preponderance of the evidence that challenged claims 1-20 are unpatentable.4

In summary:

Claim(s) Challenged | 35 U.S.C. § | Reference(s) | Claims Shown Unpatentable | Claims Not Shown Unpatentable
1-6, 10-16, 20 | 103 | Kennewick | 1-6, 10-16, 20 |
7-9, 17-19 | 103 | Kennewick, Cooper | 7-9, 17-19 |
Overall Outcome | | | 1-20 |

IV. ORDER

In consideration of the foregoing, it is hereby:

ORDERED that claims 1-20 of U.S. Patent No. 9,015,049 B2 have been shown to be unpatentable; and

FURTHER ORDERED that because this is a final written decision, parties to the proceeding seeking judicial review of this decision must comply with the notice and service requirements of 37 C.F.R. § 90.2.

4 Should Patent Owner wish to pursue amendment of the challenged claims in a reissue or reexamination proceeding subsequent to the issuance of this decision, we draw Patent Owner’s attention to the April 2019 Notice Regarding Options for Amendments by Patent Owner Through Reissue or Reexamination During a Pending AIA Trial Proceeding. See 84 Fed. Reg. 16,654 (Apr. 22, 2019). If Patent Owner chooses to file a reissue application or a request for reexamination of the challenged patent, we remind Patent Owner of its continuing obligation to notify the Board of any such related matters in updated mandatory notices. See 37 C.F.R. § 42.8(a)(3), (b)(2).

FOR PETITIONER:
David Hadden
Dhadden-ptab@fenwick.com
Saina Shamilov
Sshamilov-ptab@fenwick.com
Johnson Kuncheria
jkuncheria@fenwick.com
Allen Wang
Allen.wang@fenwick.com
Brian Hoffman
bhoffman@fenwick.com

FOR PATENT OWNER:
Matthew Argenti
margenti@wsgr.com
Michael Rosato
mrosato@wsgr.com
James Yoon
jyoon@wsgr.com
Ryan Smith
rsmith@wsgr.com