Parus Holdings, Inc., IPR2020-00687 (P.T.A.B. Sep. 9, 2021) Trials@uspto.gov Paper No. 37 571-272-7822 Date: September 9, 2021 UNITED STATES PATENT AND TRADEMARK OFFICE ____________ BEFORE THE PATENT TRIAL AND APPEAL BOARD ____________ APPLE INC., Petitioner, v. PARUS HOLDINGS, INC., Patent Owner. ____________ IPR2020-00687 Patent 9,451,084 B2 ____________ Before DAVID C. MCKONE, STACEY G. WHITE, and SHELDON M. MCGEE, Administrative Patent Judges. MCGEE, Administrative Patent Judge. JUDGMENT Final Written Decision Determining No Challenged Claims Unpatentable 35 U.S.C. § 318(a) Denying Patent Owner’s Motion to Exclude 37 C.F.R. § 42.64 I. INTRODUCTION Apple Inc. (“Petitioner”) filed a Petition requesting an inter partes review of claims 1–7, 10, and 14 of U.S. Patent No. 9,451,084 B2 (Ex. 1030, “the ’084 patent”). Paper 1 (“Pet.”). Parus Holdings, Inc. (“Patent Owner”) filed a Preliminary Response to the Petition. Paper 6. We authorized Petitioner to file a Reply to Patent Owner’s Preliminary Response (Paper 7), and Patent Owner filed a Sur-reply (Paper 8). After considering these filings by both parties, we instituted an inter partes review of claims 1–7, 10, and 14 of the ’084 patent on all grounds of unpatentability alleged in the Petition. Paper 9 (“Institution Decision” or “Dec.”). After institution of trial, Patent Owner filed a Patent Owner Response. Paper 15 (“PO Resp.”). Petitioner filed a Reply. Paper 19 (“Reply”). Patent Owner filed a Sur-reply (Paper 21 (“Sur-reply”)), as well as Objections to Evidence (Paper 20 (“PO Obj. Evid.”)). Patent Owner filed a Motion to Exclude certain evidence submitted by Petitioner (Paper 29 (“Mot. Excl.”)), to which Petitioner filed an Opposition (Paper 30 (“Opp. Mot. Excl.”)). Patent Owner filed a Reply to Petitioner’s Opposition to its Motion to Exclude (styled a “Sur-reply”). Paper 32 (“Reply Mot. Excl.”).
An oral hearing was held on June 22, 2021, and a transcript of the hearing is included in the record. Paper 36 (“Tr.”). We have jurisdiction under 35 U.S.C. § 6. This Final Written Decision is issued pursuant to 35 U.S.C. § 318(a). For the reasons that follow, we determine Petitioner has not established by a preponderance of the evidence that claims 1–7, 10, and 14 of the ’084 patent are unpatentable. We also deny Patent Owner’s Motion to Exclude. A. Related Proceedings The parties identify the following district court proceedings as related to the ’084 patent: Parus Holdings Inc. v. Apple, Inc., No. 6:19-cv-00432 (W.D. Tex.) (“the Texas case”); Parus Holdings Inc. v. Amazon.com, Inc., No. 6:19-cv-00454 (W.D. Tex.); Parus Holdings Inc. v. Samsung Electronics Co., Ltd., No. 6:19-cv-00438 (W.D. Tex.); Parus Holdings Inc. v. Google LLC, No. 6:19-cv-00433 (W.D. Tex.); and Parus Holdings Inc. v. LG Electronics, Inc., No. 6:19-cv-00437 (W.D. Tex.). Pet. 68; Paper 5, 1. The parties also identify U.S. Patent No. 6,721,705 and U.S. Patent No. 7,076,431 as related to the ’084 patent, and further identify that U.S. Patent No. 7,076,431 has been asserted in the district court proceedings listed supra, and is the subject of IPR2020-00686. Pet. 68; Paper 5, 1. B. The ’084 Patent (Ex. 1030) The ’084 patent, titled “Robust Voice Browser System and Voice Activated Device Controller,” issued September 20, 2016. Ex. 1030, codes (54), (45). The ’084 patent relates to a “robust and highly reliable system that allows users to browse web sites and retrieve information by using conversational voice commands.” Id. at 1:35–38. Systems disclosed by the ’084 patent allow devices connected to a network to be controlled by conversational voice commands spoken into any voice enabled device interconnected with the network. Id. at 3:37–41.
Systems disclosed by the ’084 patent also allow users to access and browse web sites when the users do not have access to computers with Internet access, by providing users with a voice browsing system to browse web sites using conversational voice commands spoken into voice enabled devices, such as wireline or wireless telephones. Id. at 3:29–32, 3:52–59. The users’ spoken commands IPR2020-00687 Patent 9,451,084 B2 4 are converted into data messages by a speech recognition software engine, and are transmitted to the user’s desired web site over the Internet. Id. at 3:60–65. Responses sent from the web site are received and converted into audio messages via a speech synthesis engine or a pre-recorded audio concatenation application, and finally transmitted to the user’s voice enabled device. Id. at 3:65–4:3. The disclosed voice browsing system maintains a database containing a list of information sources (e.g., Internet web sites), with rank numbers assigned to the information sources. Id. at 3:17–20, 4:5– 20. The ’084 patent explains that: the voice browser system and method uses a web site polling and ranking methodology that allows the system to detect changes in web sites and adapt to those changes in real-time. This enables the voice browser system of a preferred embodiment to deliver highly reliable information to users over any voice enabled device. This ranking system also enables the present invention to provide rapid responses to user requests. Long delays before receiving responses to requests are not tolerated by users of voice-based systems, such as telephones. When a user speaks into a telephone, an almost immediate response is expected. This expectation does not exist for non-voice communications, such as email transmissions or accessing a web site using a personal computer. In such situations, a reasonable amount of transmission delay is acceptable. The ranking system . . . 
implemented by a preferred embodiment of the present invention ensures users will always receive the fastest possible response to their request. Id. at 4:4–21. Figure 1 of the ’084 patent, reproduced below, illustrates a voice browsing system. Id. at 4:29–30. IPR2020-00687 Patent 9,451,084 B2 5 Figure 1 illustrates a voice browsing system. Id. at 4:29–30. Voice browsing system 118 illustrated in Figure 1 includes media servers 106 (which may contain a speech recognition engine), database 100, web browsing servers 102, and firewalls 104 and 108. Id. at 5:10–18, 6:10– 12, 6:20–23, 20:26–34. Voice browsing system 118 connects on one side to voice-enabled device 112 (e.g., a telephone) through public switched telephone network (“PSTN”) 106, and to individual websites 114 through internet 110 on the other side. Id. at 19:56–20:38. Specifically, a user of the voice browsing system establishes a connection between voice enabled device 112 and media server 106 by, e.g., calling a telephone number associated with the voice browsing system. Id. at 19:59–62. Once the connection is established, media server 106 IPR2020-00687 Patent 9,451,084 B2 6 initiates an interactive voice response (IVR) application that plays audio messages to the user presenting a list of options, such as, “stock quotes,” “flight status,” “yellow pages,” “weather,” and “news.” Id. at 19:62–67. The user selects the desired option (e.g., “yellow pages”) by speaking the name of the option into the voice-enabled device 112. Id. at 20:4–18. The system asks the user further details of the user’s search, and the user speaks into telephone 112 the details of the user’s search (e.g., looking for “restaurants,” types of restaurants, zip codes for the restaurants). Id. Media server 106 uses the speech recognition engine to interpret the user’s speech commands; for example, media server 106 may identify keywords in the user’s speech. Id. at 6:60–7:2, 20:19–21. 
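For illustration only, the rank-ordered retrieval described above (recognized keywords selecting web site records, with lower-ranked sites tried only if higher-ranked sites fail) might be sketched as follows. Every identifier here (`WebSiteRecord`, `DATABASE`, `fetch`, `find_information`, and the sample data) is hypothetical; this is not code from the ’084 patent or the record.

```python
from dataclasses import dataclass

@dataclass
class WebSiteRecord:
    rank: int     # rank number 202 (1 = first site to try)
    url: str      # URL 204 identifying the web site
    command: str  # command 206 used to form the request

# Hypothetical database 100: a search category mapped to ranked records.
DATABASE = {
    "restaurants": [
        WebSiteRecord(1, "http://site-a.example", "GET /search?q={q}"),
        WebSiteRecord(2, "http://site-b.example", "GET /find?term={q}"),
        WebSiteRecord(3, "http://site-c.example", "GET /q/{q}"),
    ],
}

def fetch(record, query):
    """Simulated retrieval: for this sketch, pretend only site-b can answer."""
    if "site-b" in record.url:
        return f"result for {query} from {record.url}"
    return None  # information not found at this site

def find_information(category, query):
    """Access sites in rank order until the information is found."""
    for record in sorted(DATABASE.get(category, []), key=lambda r: r.rank):
        result = fetch(record, query)
        if result is not None:
            return result  # later converted into an audio message
    return None  # every listed site was accessed without success
```

In this sketch, a "restaurants" query tries the rank-1 site first and falls through to the rank-2 site, mirroring the fallback order the specification describes.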
Media server 106 then uses the recognized keywords to search website records stored in database 100, retrieves an appropriate web site record from the database, and provides the record to the web browsing server 102. Id. at 6:65–7:2, 20:20–23. Information is then retrieved from the responding web site and transmitted to media server 106, for conversion into audio messages—performed by a speech synthesis software or by selecting among a database of prerecorded voice responses contained within database 100. Id. at 20:35–46. Database 100 contains sets of records for each web site accessible by the voice browsing system. Id. at 5:17–20. Figure 2, reproduced below, illustrates an example of web site record 200 in the database. Id. at 4:31–32, 5:19–20. IPR2020-00687 Patent 9,451,084 B2 7 Figure 2 illustrates an example of a web site record in database 100. Id. at 4:31–32, 5:19–20. Each web site record 200 contains rank number 202 of the web site, associated Uniform Resource Locator (URL) 204 for the website, and command 206 that enables an extraction agent to generate proper requests to the web site and to format data received from the web site. Id. at 5:20–25. For each category searchable by a user, database 100 may list several web sites, each with a different rank number. Id. at 20:47–50. As an example, three different web sites may be listed as searchable under the category of “restaurants,” and each of those web sites will be assigned a rank number such as 1, 2, or 3. Id. at 20:50–53. The web site with the highest rank (i.e., rank=1) will be the first web site accessed by web browsing server 102. Id. at 20:53–55. If the information requested by the user cannot be found at this first web site, web browsing server 102 will then search the second ranked web site and so forth down the line, until the requested information is retrieved or no more web sites are left to be checked. Id. at 20:55–59. IPR2020-00687 Patent 9,451,084 B2 8 C. 
Illustrative Claim Petitioner challenges claims 1–7, 10, and 14, of which sole independent claim 1 is illustrative. Claim 1 is reproduced below with Petitioner’s identification of claim elements in brackets, and the dispositive claim limitation in this proceeding italicized: 1. [Preamble] A system for acquiring information from one or more sources maintaining a listing of web sites by receiving speech commands uttered by users into a voice-enabled device and for providing information retrieved from the web sites to the users in an audio form via the voice-enabled device, the system comprising: [1(a)] at least one computing device, the computing device operatively coupled to one or more networks; [1(b)] at least one speaker-independent speech-recognition device, the speaker-independent speech-recognition device operatively connected to the computing device and configured to receive the speech commands; [1(c)] at least one speech-synthesis device, the speech- synthesis device operatively connected to the computing device; [1(d)] memory operatively associated with the computing device with at least one instruction set for identifying the information to be retrieved, the instruction set being associated with the computing device, the instruction set comprising: a plurality of web site addresses for the listing of web sites, each web site address identifying a web site containing the information to be retrieved; [1(e)] at least one recognition grammar associated with the computing device, each recognition grammar corresponding to each instruction set and corresponding to a speech command, [1(f)] the speech command comprising an information request provided by the user, [1(g)] the speaker-independent speech-recognition device IPR2020-00687 Patent 9,451,084 B2 9 configured to receive the speech command from the users via the voice-enabled device and to select the corresponding recognition grammar upon receiving the speech command; [1(h)] the computing device configured to 
retrieve the instruction set corresponding to the recognition grammar provided by the speaker-independent speech-recognition device; [1(i)] the computing device further configured to access at least one of the plurality of web sites identified by the instruction set to obtain the information to be retrieved, [1(j)] wherein the computing device is further configured to periodically search via the one or more networks to identify new web sites and to add the new web sites to the plurality of web sites, [1(k)] the computing device configured to access a first web site of the plurality of web sites and, if the information to be retrieved is not found at the first web site, the computer configured to access the plurality of web sites remaining in an order defined for accessing the listing of web sites until the information to be retrieved is found in at least one of the plurality of web sites or until the plurality of web sites have been accessed; [1(l)] the speech synthesis device configured to produce an audio message containing any retrieved information from the plurality of web sites, and the speech synthesis device further configured to transmit the audio message to the users via the voice-enabled device. Ex. 1030, 24:2–59 (limitation numbering designated by Petitioner; see Pet. 70–72 (“CLAIMS LISTING APPENDIX”)) (emphasis added). D. Instituted Challenges to Patentability We instituted inter partes review of claims 1–7, 10, and 14 of the ’084 patent on the following challenges. Dec. 2, 47. IPR2020-00687 Patent 9,451,084 B2 10 Claims Challenged 35 U.S.C. § References 1–6, 10, 14 103(a)1 Ladd,2 Kurosawa,3 Goedken4 7 103(a) Ladd, Kurosawa, Goedken, Madnick5 5, 6 103(a) Ladd, Kurosawa, Goedken, Houser6 1–6, 10, 14 103(a) Ladd, Kurosawa, Goedken, Rutledge7 7 103(a) Ladd, Kurosawa, Goedken, Rutledge, Madnick 5, 6 103(a) Ladd, Kurosawa, Goedken, Rutledge, Houser Petitioner relies on two Declarations from Loren Terveen, Ph.D. Exs. 
1003, 1040.8 Patent Owner relies on a Declaration from Benedict Occhiogrosso. Ex. 2025. 1 The Leahy-Smith America Invents Act (“AIA”), Pub. L. No. 112-29, 125 Stat. 284, 287–88 (2011), amended 35 U.S.C. § 103. Because the ’084 patent was filed before March 16, 2013, the effective date of the relevant amendment, the pre-AIA version of § 103 applies. 2 Ladd et al., US 6,269,336 B1, issued July 31, 2001 (Ex. 1004). 3 Kurosawa, JP H09-311869, published December 2, 1997 (Ex. 1005). We rely on the certified translation of JP H09-311869 (Ex. 1005). 4 Goedken, US 6,393,423 B1, issued May 21, 2002 (Ex. 1006). 5 Madnick et al., US 5,913,214, issued June 15, 1999 (Ex. 1007). 6 Houser et al., US 5,774,859, issued June 30, 1998 (Ex. 1008). 7 Rutledge et al., US 6,650,998 B1, issued November 18, 2003 (Ex. 1010). 8 Portions of Exhibit 1040 are the subject of Patent Owner’s Motion to Exclude Evidence, which we deny, infra. Paper 29. IPR2020-00687 Patent 9,451,084 B2 11 II. ANALYSIS A. Claim Construction Petitioner filed its Petition on March 18, 2020. Pet. 98. Based on that filing date, we apply the same claim construction standard that is applied in civil actions under 35 U.S.C. § 282(b), which is articulated in Phillips v. AWH Corp., 415 F.3d 1303 (Fed. Cir. 2005) (en banc). See 83 Fed. Reg. 51,340 (Oct. 11, 2018) (applicable to inter partes reviews filed on or after November 13, 2018). Under Phillips, claim terms are afforded “their ordinary and customary meaning.” 415 F.3d at 1312. “[T]he ordinary and customary meaning of a claim term is the meaning that the term would have to a person of ordinary skill in the art in question at the time of the invention.” Id. at 1313. Only terms that are in controversy need to be construed, and then only to the extent necessary to resolve the controversy. Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc., 200 F.3d 795, 803 (Fed. Cir. 1999). 
In the Petition, Petitioner contended that we should give the claim terms their ordinary and customary meaning, and did not identify any claim term for construction. Pet. 13. After pre-institution briefing was completed, but before we instituted trial, the court in the Texas case issued a claim construction ruling, construing “speaker-independent speech recognition device” to mean “speech recognition device that recognizes spoken words without adapting to individual speakers or using predefined voice patterns.” Ex. 1041, 2.9 9 The court in the Texas case issued other constructions pertaining to the challenged claims, but the parties do not advance them in this proceeding IPR2020-00687 Patent 9,451,084 B2 12 The parties agree that this term at least requires a “speech recognition device that recognizes spoken words without . . . using predefined voice patterns,” but disagree as to whether it should require a device that recognizes spoken words “without adapting to individual speakers.” PO Resp. 21 (“The proper construction of ‘speaker-independent speech recognition device’ is consistent with the construction issued by the Western District of Texas, though it does not include all of that court’s construction, and requires at least ‘speech recognition device that recognizes spoken words without using predefined voice patterns.’”); Reply 2 (“For purposes of this IPR, Apple submits the Court’s construction should be applied.”). The dispute regarding whether the term should preclude adapting to individual speakers does not impact any issue in this proceeding, and Petitioner has agreed to Patent Owner’s construction in this proceeding, so long as we do not resolve the dispute over adapting to individual speakers. Tr. 12:24–13:4 (“JUDGE McKONE: So you’d be happy if we essentially adopted Parus’s construction with a footnote or some kind of note that we’re not resolving the issue of adapting to individual speakers? MS. 
BAILEY: That would be fine for purposes of this IPR, Your Honor.”). We adopt the parties’ agreed approach. For purposes of this proceeding, “speaker- independent speech recognition device” means “speech recognition device that recognizes spoken words without using predefined voice patterns.” This is consistent with the ’084 patent’s statement (relied on by both parties) that “[t]he voice browsing system recognizes naturally spoken voice commands and is speaker-independent; it does not have to be trained to recognize the and we do not find it necessary to adopt them in order to resolve the parties’ dispute. IPR2020-00687 Patent 9,451,084 B2 13 voice patterns of each individual user. Such speech recognition systems use phonemes to recognize spoken words and not predefined voice patterns.” Ex. 1030, 4:52–56; see also PO Resp. 21–22 (citing Ex. 1030, 4:47–56); Reply 2–3 (citing Ex. 1030, 4:51–56). We take no position on whether the construction also should include “without adapting to individual speakers.” Based on the record before us, we do not find it necessary to provide express claim constructions for any other terms. See Nidec Motor Corp. v. Zhongshan Broad Ocean Motor Co., 868 F.3d 1013, 1017 (Fed. Cir. 2017) (noting that “we need only construe terms ‘that are in controversy, and only to the extent necessary to resolve the controversy’”) (quoting Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc., 200 F.3d 795, 803 (Fed. Cir. 1999)). B. Level of Ordinary Skill in the Art Petitioner asserts that one of ordinary skill in the art would have had at least a Bachelor’s degree in Electrical Engineering, Computer Engineering, Computer Science, or an equivalent degree, with at least two years of experience in interactive voice response systems, automated information retrieval systems, or related technologies. Pet. 7 (citing Ex. 1003 ¶ 28). On the complete record, we adopt Petitioner’s definition of one of ordinary skill in the art for essentially three reasons. 
First, Patent Owner does not contest Petitioner’s assertion. See generally PO Resp. Also, neither party argues that the outcome of this case would differ based on our adoption of any particular definition of one of ordinary skill in the art. Moreover, the level of ordinary skill in the art is also reflected by the references themselves. See Okajima v. Bourdeau, 261 F.3d 1350, 1355 (Fed. Cir. 2001) (“[T]he absence of specific findings on the level of skill in the art does not give rise to reversible error ‘where the prior art itself reflects IPR2020-00687 Patent 9,451,084 B2 14 an appropriate level and a need for testimony is not shown.’”); In re GPAC Inc., 57 F.3d 1573, 1579 (Fed. Cir. 1995) (finding that the Board of Patent Appeals and Interferences did not err in concluding that the level of ordinary skill in the art was best determined by the references of record). C. Asserted Obviousness of Claims 1–7, 10, and 14 (Grounds 1–3) Petitioner asserts that claims 1–7, 10, and 14 would have been unpatentable under 35 U.S.C. § 103(a) as obvious over the combined teachings of Ladd, Kurosawa, and Goedken (claims 1–6, 10, and 14); Ladd, Kurosawa, Goedken, and Madnick (claim 7); and Ladd, Kurosawa, Goedken, and Houser (claims 5 and 6).10 Pet. 19–64. For support, Petitioner relies on declaration testimony from Dr. Loren Terveen (Exs. 1003, 1040). 1. Overview of Ladd Ladd discloses a voice browser for interactive services. Ex. 1004, codes (54), (57). Ladd’s system uses a markup language to provide the interactive services, by creating and using a markup language document, the markup language document having a dialog element including a plurality of markup language elements and a prompt element, including an announcement that is read to a user. Id. Ladd’s communication method selects the prompt element, and defines a voice communication / announcement in the prompt element to be read to the user. Id. Ladd’s system also accepts data inputted by the user. Id. 
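As an illustration of this structure only (Ladd defines its own markup syntax; the Python model below merely paraphrases the summarized elements, and all names are hypothetical), a dialog element containing a prompt and the user inputs the dialog accepts might be modeled as:

```python
# Hypothetical model of a markup-language document of the kind Ladd
# describes: a dialog element with a prompt (an announcement read to
# the user) and the inputs the dialog will accept.
dialog_document = {
    "dialog": {
        "prompt": "Welcome. Say 'news', 'weather', or 'traffic'.",
        "inputs": ["news", "weather", "traffic"],
    }
}

def run_dialog(document, user_input):
    """Return the announcement to read aloud and whether the user's
    input is one the dialog accepts."""
    dialog = document["dialog"]
    announcement = dialog["prompt"]          # read to the user
    accepted = user_input in dialog["inputs"]
    return announcement, accepted
```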
10 A discussion of Madnick and Houser are not necessary to our disposition of Petitioner’s assertions. IPR2020-00687 Patent 9,451,084 B2 15 With Ladd’s system, users can access information from information sources or content providers, using voice inputs and commands. Id. at 2:48– 58. For example, users can access up-to-date information, such as news updates and traffic conditions. Id. The system also allows users to perform transactions, such as order flowers, place restaurant orders, obtain bank account balances, and receive directions to various destinations. Id. Ladd’s Figure 1 is illustrative and is reproduced below. Figure 1 is a block diagram of system 100 including network access device 102, electronic network 104, and information source 106. Id. at 2:21–25. Device 102 is connected to network 104 via line 108, and information source 106 is connected to network 104 via line 110. Id. at 26–29. “The lines 108 and 110 can include, but are not limited to, a telephone line or link, an ISDN line, a coax[ial] line, a cable television line, a fiber optic line, a computer network line, a digital subscriber line, or the like.” Id. at 2:29–32. Alternatively, device 102 and information source 106 can wirelessly communicate with network 104. Id. at 2:33–35. Network 104 “can include an open, wide area network such as the Internet, the World Wide Web (WWW), and/or an on-line service,” and can also include “an intranet, an extranet, a local area network, a telephone network, (i.e., a public switched IPR2020-00687 Patent 9,451,084 B2 16 telephone network), a cellular telephone network,” and other networks. Id. at 3:27–39. Ladd’s Figure 3, reproduced below, illustrates a system including a voice browser, enabling “a user to access information from any location in the world via a suitable communication device.” Id. at 4:62–67. Figure 3 illustrates a system 200 including a voice browser 250, enabling a user to access information. Id. at 4:62–67. 
System 200 includes one or more communication devices or network access apparatuses (e.g., 201, 202, 203, 204), an electronic network (206), and one or more information sources such as content providers (e.g., 208, 209) and markup language servers. Id. at 5:12–19. A user can retrieve the information from the information sources using speech commands or DTMF (dual-tone multi-frequency signaling) tones. Id. at 5:17–19. When a user dials into electronic network 206, a communication node 212 answers the IPR2020-00687 Patent 9,451,084 B2 17 incoming call and retrieves an appropriate announcement (i.e., a welcome greeting) from a database, server, or browser. Id. at 6:3–17. In response to audio inputs from the user, communication node 212 retrieves information from a destination or database of the information sources (content providers 208 and 209 or markup language servers). Id. at 6:18–25. After communication node 212 retrieves the information, communication node provides a response to the user based upon the retrieved information. Id. Communication node 212 may include telephone switch 230, voice or audio recognition (VRU) client 232, voice recognition (VRU) server 234, and voice browser 250. Id. at 6:65–7:7. In response to voice inputs from the user or DTMF tones, voice browser 250 generates a content request (i.e., an electronic address) to navigate to a destination of one or more information sources. Id. at 11:30–34. The content request can use at least a portion of a URL, a URN, an IP, a page request, or an electronic email. Id. at 11:34–36. Voice browser 250 processes markup language that may include text, recorded sound samples, navigational controls, and input controls for voice applications. Id. at 15:60–64. The markup language enables developers of service or content providers to create application programs for instructing the voice browser to provide a desired user interactive voice service such as providing up-to-date news, weather, or traffic. Id. 
at 15:65–16:5. Content providers 208 and 209 send requested information to the voice browser. Id. at 11:60–63. The content providers include a server to operate web pages or documents in markup language. Id. at 11:55–58. Content providers 208 and 209 can also include a database, scripts, and/or markup language documents or pages, where the scripts may include images, audio, grammars, and computer programs. Id. at 11:57–61. IPR2020-00687 Patent 9,451,084 B2 18 2. Overview of Kurosawa Kurosawa discloses an Internet search server that obtains requested information from a plurality of URLs, and delivers a search report to a client. Ex. 1005, code (57). Figure 2 of Kurosawa, reproduced below, illustrates an Internet search server. Figure 2 is a functional block diagram of an Internet search server. Id. ¶ 20. Internet search server 10 illustrated in Figure 2 includes a URL database 11 having a comparison table with a plurality of keywords representing search condition elements, and URLs relating to the keywords. Id. ¶¶ 20–21. Figure 5 of Kurosawa, reproduced below, illustrates a keyword table 21 (of URL database 11) listing keywords used in a URL table 22 (of URL database 11) illustrated in Figure 6 (also reproduced IPR2020-00687 Patent 9,451,084 B2 19 below). Id. ¶ 21. Kurosawa provides that “anything that is not listed in the keyword table 21 cannot be searched for.” Id. URL table 22 illustrated in Figure 6 is a comparison table of a plurality of URLs and keywords related thereto. Id. ¶ 21. Search server 10 regularly updates the URL table 22 in URL database 11 using automatic search tools, such as Internet web crawlers. Id. ¶ 23. 
When a client sends a search request to Internet search server 10, search condition element extraction unit 13 (of server 10) extracts search condition elements from the client’s search request, and URL search unit 14 (of server 10) extracts keywords (included in the search condition elements) from keyword table 21, and selects URLs (from URL table 22) having the extracted keywords listed therein. Id. ¶¶ 26–28. A URL listing order arranging unit 15 (of server 10) determines a listing order for the selected URL addresses (URLs), based on priority conditions for efficient searching. Id. ¶ 29. Thereafter, URL listing unit 16 (of server 10) sequentially lists the URL addresses of the respective URLs in the determined order, and accesses respective webpages of the URLs. Id. ¶ 30. A URL information gathering unit 17 (of server 10) sequentially accumulates information from the URL pages, for presentation to the client. Id. ¶¶ 30–31. IPR2020-00687 Patent 9,451,084 B2 20 Figure 5 illustrates a keyword table 21 in URL database 11. Id. ¶ 21. IPR2020-00687 Patent 9,451,084 B2 21 Figure 6 illustrates a URL table 22 in URL database 11. Id. ¶ 21. 3. Overview of Goedken Goedken describes a method and apparatus for facilitating information exchange, via a network, between an information requestor/searcher and one or more information custodians, which are persons that “know[] where to locate and/or ha[ve] custody of the information that interests the searcher.” Ex. 1006, code (57), 1:41–44. The searcher creates an information request message and sends it to the apparatus, and the apparatus determines an appropriate custodian and sends a request message to that custodian. Id. at code (57). Based on the messages, the apparatus provides a final answer message to the searcher, and may also record the answer message for subsequent retrieval. Id. For example, the apparatus may record portions of final answer messages developed by information custodians, in a knowledge database. Id. 
at 19:43–48. Goedken provides that its disclosed apparatus may be incorporated into an Internet portal. Id. at 21:40–43. 4. The Parties’ Contentions Regarding Independent Claim 1 Petitioner asserts that claim 1 would have been obvious over the combined disclosures of Ladd, Kurosawa, and Goedken. Pet. 19–51. Relevant to our disposition of this proceeding, Petitioner asserts that Ladd discloses the claimed “speaker-independent speech-recognition device” in the form of “automatic speech recognition unit (ASR) 254.” Pet. 23. According to Petitioner, “Ladd expressly teaches the ASR unit is a ‘speaker-independent’ ‘speech recognition’ device” because “[t]he ASR unit 254 of the VRU server 234 provides speaker independent automatic speech recognition of speech inputs . . . .” and such “ASR unit ‘processes the speech inputs from the user to determine whether a word or a speech matter matches any of the [stored] grammars. . . .’” Id. at 23–24. Patent Owner disputes Petitioner’s assertion regarding this claim element. PO Resp. 35–38. Specifically, Patent Owner asserts that, consistent with its proffered claim construction, “[t]he ʼ084 patent disclaims the use of voice patterns to recognize a spoken command, which restricts a system to a sharply limited vocabulary.” Id. at 36 (citing Ex. 1030, 4:55–56; Ex. 2025 ¶ 82). According to Patent Owner, Ladd operates in the manner that is disclaimed in the ʼ084 patent because “its ASR recognizes a predefined speech pattern.” Id. at 37 (citing Ex. 1004, 9:36–39). Patent Owner avers that Ladd’s “express requirement . . . that the ASR 254 relies on ‘recognized voice pattern[s]’ runs directly contrary to the” claim element at issue. Id. at 38. In response, Petitioner argues that “Ladd’s determination of voice patterns occurs at a different step in the speech recognition process than [Patent Owner’s] excluded voice patterns.” Reply 4.
Specifically, according to Petitioner, speech recognition devices generally use a two-step approach to accomplish speech recognition: 1) converting a “user’s spoken words into text using various voice recognition algorithms, including analyzing voice patterns,” and 2) “[a]fter converting the spoken words into text, the content of the word, e.g., what is commanded by the word, is determined by comparing the text to a recognition grammar.” IPR2020-00687 Patent 9,451,084 B2 23 Id. at 4–5 (citing Ex. 1039, 50:21–51:8, 51:14–52:22, 54:6–55:7, 66:5– 67:17; Ex. 1030, 6:57–7:2).11 According to Petitioner, “Ladd’s speech/voice pattern is determined at the second step of identifying the instructed command in the Ladd [interactive voice response] system (i.e., identifying a key word or phrase) and subsequent to converting the speech into text.” Reply 4 (citing Ex. 1040 ¶¶ 2–9). Petitioner asserts that Patent Owner’s claim construction, which excludes the use of predefined voice patterns, applies only to the first step of the speech recognition process, i.e., when a user’s speech is converted into text. Reply 7. Petitioner urges that Ladd’s ASR “unit 254 first recognizes the words from the user’s speech input and then performs the second step of determining whether the speech inputs match any key word or phrase via comparison to a stored grammar or vocabulary.” Id. (citing Ex. 1004, 9:31– 36, 8:23–25; Ex. 1040 ¶¶ 4–8). According to Petitioner, Ladd’s “[d]etermination of a voice pattern, i.e., a key word or phrase, is performed only after the speech input is converted into text,” and “[t]hus, Ladd’s speech/voice patterns are distinct from ‘voice patterns’ that the ʼ084 Patent allegedly[12] excludes.” Id. at 7–8. Petitioner then details why it believes Ladd’s voice patterns are distinct from those excluded from Patent Owner’s claim construction. Id. at 8–12. 
Petitioner also argues that Patent Owner "makes no showing [that] Ladd's voice patterns are the excluded 'predefined' voice patterns," and thus "has not established that Ladd fails to teach a speaker-independent speech recognition device." Id. at 12.

11 We note here that Petitioner cites to Mr. Occhiogrosso's deposition testimony, which was provided on behalf of Patent Owner.

12 As noted supra, Petitioner asserts that we should adopt a claim construction that likewise excludes such voice patterns. Reply 2, 14.

In its sur-reply, Patent Owner disputes that Ladd discloses a two-step speech recognition process and asserts that "[s]peech recognition in Ladd is a single step process" which "requires a speech recognition device that is based on voice patterns, something that is expressly disclaimed by the ʼ084 Patent." Sur-reply 6. Specifically, Patent Owner alleges that "[t]here is no disclosure or teaching that [] ASR unit 254 performs a first step of converting speech to text during speech recognition." Id. at 7. Rather, according to Patent Owner, "Ladd does not convert speech to text. Instead, it performs recognition based on voice patterns." Id. at 7–8. For support, Patent Owner points out that Ladd already contains a speech-to-text or "STT" unit 256, which would be superfluous if ASR unit 254 converted speech to text. Id. at 9. Patent Owner asserts that "Ladd is unambiguous that '[w]hen the ASR unit 254 identifies a selected speech pattern of the speech inputs, the ASR unit 254 sends an output signal to implement the specific function associated with the recognized voice pattern.'" Id. at 12–13. Thus, Patent Owner argues, if Petitioner's interpretation of Ladd requiring a two-step process "is correct, Ladd would convert the audio input, which is a voice pattern, into text, then it would have to convert the text back to the voice pattern in order to match it to a particular voice pattern related to a grammar." Id. at 13.
Patent Owner also disputes that Mr. Occhiogrosso's testimony supports Petitioner's position that speech recognition necessarily requires two steps. Id. at 13–15. Rather, Patent Owner contends that many of Mr. Occhiogrosso's statements pertained to the ʼ084 patent––not Ladd––and that he distinguished between use of phonemes and voice pattern recognition. Id. Finally, Patent Owner argues that Ladd's voice patterns are identical to those that are excluded from the District Court's construction. Id. at 15–18.

5. Obviousness Analysis of Claim 1

Consistent with our claim construction of "speaker-independent speech-recognition device" discussed supra, this limitation requires a "speech recognition device that recognizes spoken words without using predefined voice patterns."

A claim is unpatentable under 35 U.S.C. § 103(a) if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious to a person of ordinary skill in the art at the time the invention was made. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). Obviousness is resolved based on underlying factual determinations, including: (1) the scope and content of the prior art; (2) any differences between the claimed subject matter and the prior art; (3) the level of ordinary skill in the art; and (4) objective evidence of nonobviousness. See Graham v. John Deere Co., 383 U.S. 1, 17–18 (1966).

Petitioner bears the burden of proving unpatentability of the challenged claims, and the burden of persuasion never shifts to Patent Owner. Dynamic Drinkware, LLC v. Nat'l Graphics, Inc., 800 F.3d 1375, 1378 (Fed. Cir. 2015). Petitioner must demonstrate obviousness by a preponderance of the evidence. 35 U.S.C. § 316(e); 37 C.F.R. § 42.1(d); see also Harmonic Inc. v. Avid Tech., Inc., 815 F.3d 1356, 1363 (Fed. Cir. 2016) (citing 35 U.S.C.
§ 312(a)(3) (requiring inter partes review petitions to identify "with particularity . . . the evidence that supports the grounds for the challenge to each claim")).

We have analyzed both parties' arguments and evidence consistent with these legal principles and, on the complete record, find Petitioner has not met its burden to establish––by a preponderance of the evidence––that Ladd's ASR unit 254 does not use predefined voice patterns to recognize spoken words.

We begin by noting that Petitioner relies solely on Ladd's automatic speech recognition (ASR) unit 254 to evince the claimed "speaker-independent speech-recognition device." Pet. 23–24. Petitioner asserts that "[t]he ASR unit 'processes the speech inputs from the user to determine whether a word or a speech matter matches any of the [stored] grammars.'" Id. at 24 (citing Ex. 1004, 9:28–44, 8:19–28). Ladd expressly teaches that

ASR unit 254 processes the speech inputs from the user to determine whether a word or a speech pattern matches any of the grammars or vocabulary stored in the database server unit 244 or downloaded from the voice browser. When the ASR unit 254 identifies a selected speech pattern of the speech inputs, the ASR unit 254 sends an output signal to implement the specific function associated with the recognized voice pattern. The ASR unit 254 is preferably a speaker independent speech recognition software package, Model No. RecServer, available from Nuance Communications. It is contemplated that the ASR unit 254 can be any suitable speech recognition unit to detect voice communications from a user.

Ex. 1004, 9:31–44.

Thus, in the Petition, Petitioner expressly relied on Ladd's descriptions of detecting or identifying speech patterns or voice patterns when Petitioner identified the disclosure in Ladd that teaches a "speaker-independent speech recognition device." Petitioner confirmed this reliance at oral argument.
Tr. 23:21–23 ("This paragraph at column 9, beginning at line 28, was relied on in the petition for supporting the speaker independent speech recognition device."). Additionally, Dr. Terveen provided expert testimony supporting the arguments in the Petition, characterizing Ladd in the same manner, and citing to essentially the same disclosure in Ladd. Ex. 1003 ¶¶ 90–91 (citing Ex. 1004, 8:19–28, 9:28–44). Petitioner appears to still rely on Ladd's statements of detecting or identifying speech or voice patterns. Tr. 22:4 ("We're still relying on the same disclosure that is in Ladd."), 23:24–24:3 ("JUDGE McKONE: Are you withdrawing your reliance on this paragraph or portions of this paragraph? MS. BAILEY: No, Your Honor. I'm not withdrawing it at all. In fact, I think it does still continue to support our position.").

Assuming that Petitioner is correct, and that Ladd's description of voice/speech patterns applies only to an unclaimed second step, and not the claimed speaker-independent speech recognition, this portion of Ladd fails to evince that ASR unit 254 recognizes spoken language without using predefined voice patterns because Ladd does not describe how it recognizes speech. Petitioner acknowledged this fact at oral argument. Tr. 47:20–24 ("I think what's going on at this discussion regarding the detection unit in Ladd is speech recognition is so well-known that the patent drafter simply said we're going to look at audio inputs and compare to the recognition grammar and didn't describe all the intermediate steps."). Moreover, Mr. Occhiogrosso testified on cross-examination that there are a variety of methods in the art for recognizing words spoken by a user, including some that use voice patterns and some (e.g., artificial intelligence or using phonemes) that do not. Ex. 1039, 54:6–55:7. Petitioner confirmed at oral argument that this is true. Tr. 15:20–16:12:

MS. BAILEY: . . . During Mr.
Occhiogrosso's deposition, we started talking about speech recognition algorithms, and he explained that voice patterns and analyzing the voice patterns is just one class of speech recognition algorithms. He talked about voice patterns. He talked about phonemes. But he said there's all kinds of speech recognition algorithms that can be used to recognize the speech and convert it into text. For example, you can --

JUDGE McKONE: Do you agree with that statement? That's not something you're challenging, is it?

MS. BAILEY: We're not challenging that there are different types of speech recognition algorithms.

JUDGE McKONE: Okay. Some that use voice patterns and some that do not.

MS. BAILEY: That is correct. And there are others beyond that. He talked about during his deposition that you could use AI, you could use neural networks, statistical analyses. There's probably even more but those are the ones that he mentioned during his deposition.

Dr. Terveen testifies that "[t]here are a number of methods by which a system may perform this first step of converting the spoken words into text, but Ladd is not specific on how it requires step one to occur." Ex. 1040 ¶ 3 (emphasis added). Thus, based on the testimony of both experts, simply saying that a device uses "speaker independent automatic speech recognition," as Ladd does (Ex. 1004, 9:28–30), does not establish that the device performs that recognition "without using predefined voice patterns," as the claim construction for limitation 1[c] requires.

On the fully developed trial record, Petitioner has not introduced persuasive evidence to fill the gap between Ladd's silence on how it performs "speaker independent automatic speech recognition" and what Petitioner would need to show in order to establish performance of speech recognition without using predefined voice patterns.
The argument and evidence submitted with the Reply do not add materially to Ladd's disclosure, and the thrust of Petitioner's Reply arguments is that Patent Owner failed to show an equivalency between the '084 patent's "voice patterns" and Ladd's "speech" and "voice" patterns. Reply 8–12. However, Petitioner bears the burden to show that Ladd performs speech recognition without using predefined speech patterns; Patent Owner bears no burden to show that Ladd performs recognition using predefined speech patterns. See 35 U.S.C. § 316(e) ("In an inter partes review instituted under this chapter, the petitioner shall have the burden of proving a proposition of unpatentability by a preponderance of the evidence."); Dynamic Drinkware, 800 F.3d at 1378.

Petitioner's Reply and Dr. Terveen's testimony do not show how Ladd's ASR 254 performs its speaker-independent automatic speech recognition, and specifically do not show that it performs recognition without using predefined speech patterns. Reply 3–12; Ex. 1003 ¶ 90; Ex. 1040 ¶¶ 2–25. During oral argument, Petitioner was unable to point to affirmative evidence supporting its position:

JUDGE McKONE: Now, does Ladd have any specific disclosure that this speaker independent automatic speaker recognition is happening without the use of voice patterns?

MS. BAILEY: Well, let me – that's actually where I was about to go. So --

JUDGE McKONE: I have read in your brief that you are saying – you're probably going to tell me that the parts you highlighted in blue do not pertain to the speech recognition. And I'd be happy to hear about that in a moment. But is there anything affirmative you want to point me to that shows that this ASR unit is performing speaker independent automatic speech recognition without the use of voice patterns?

MS. BAILEY: There is nothing affirmative that says without the use of voice patterns, using Parus's definition of the spectral energy as a function of time.
Ladd doesn't mention a voice pattern that is a spectral energy as a function of time. It has no disclosure to that respect. So Ladd doesn't have anything that says the speaker independent speech recognition device does not identify the -- or analyze the spectral energy of the utterance as a function of time.

Tr. 19:24–20:18.

Both the '084 patent and Ladd mention "Nuance" software. Ex. 1030, 6:30–35; Ex. 1004, 9:38–41. Ladd's mention of Nuance software does not provide persuasive evidence that Ladd's ASR 254 performs speaker-independent speech recognition without using predefined voice patterns. Dr. Terveen testifies that

Ladd describes using a commercially-available product from a company called Nuance to transform speech into text. This is the same commercially-available product from Nuance that the '431 and '084 Patents describe, further indicating to me that Ladd . . . teaches a speaker independent speech recognition device substantially similar to the speaker independent speech recognition device described and claimed in the '431 Patent.

Ex. 1040 ¶ 9 (citing Ex. 1001, 6:4–24; Ex. 1004, 8:23–28).

Dr. Terveen's testimony does not state the basis for his conclusion that Ladd and the '431 patent describe the same Nuance software, rather than different software made by the same company,13 and we see no evidence to support that testimony. See 37 C.F.R. § 42.65(a) ("Expert testimony that does not disclose the underlying facts or data on which the opinion is based is entitled to little or no weight."). In any case, Petitioner admitted at oral argument that the record does not include any evidence to support Dr. Terveen's opinion on the Nuance software. Tr. 46:10–14 ("JUDGE McKONE: . . . Do we have any evidence in the record as to what this Nuance software is and whether or not it is the same between Ladd and the challenged patents? MS. BAILEY: No, we do not, Your Honor.").
In light of both experts' testimony that "speaker-independent speech recognition" can be performed in a variety of ways (including using predefined voice patterns), and Petitioner's failure to offer evidence of how Ladd's ASR 254 operates, we find that, at most, Ladd is silent as to whether its "speaker independent automatic speech recognition" is performed "without using predefined voice patterns." This is insufficient for Petitioner to meet its burden of proof.14

13 The ʼ084 patent states that "[a] preferred speech recognition engine is developed by Nuance Communications," without further identifying the specific software. Ex. 1030, 6:30–32; see also id. at 22:51–52 (same). Ladd states that "[t]he voice recognition engine is preferably a RecServer software package, available from Nuance Communications." Ex. 1004, 8:25–28.

14 AC Technologies S.A. v. Amazon.com, Inc., 912 F.3d 1358 (Fed. Cir. 2019), and Sud-Chemie, Inc. v. Multisorb Technologies, Inc., 554 F.3d 1001 (Fed. Cir. 2009) do not lead to a different outcome. In AC Technologies, the Federal Circuit determined that substantial evidence supported a Board finding that prior art taught copying that was independent of an access to a computer unit, despite the lack of an express disclosure of this "negative limitation," based on inferences the Board drew from the prior art's disclosure and expert testimony explaining the prior art. 912 F.3d at 1366–67. In Sud-Chemie, the Federal Circuit affirmed a district court finding that prior art disclosed an "uncoated" film based on description in the prior art that did not describe the film as coated and did not suggest the necessity of coatings. 554 F.3d at 1004–05. In this proceeding, Petitioner does not need to show that Ladd affirmatively states a negative limitation. However, neither Petitioner nor its expert witness provides any persuasive evidence showing why we should infer from Ladd's silence that its ASR 254 performs speaker-independent speech recognition in a manner consistent with claim limitation 1[b].

And Petitioner's theory that "[s]peech recognition that determines the content of the spoken word is broadly divided into two steps: (1) converting the spoken utterance into text, i.e., words; and (2) determining the content of the recognized words, e.g., determining if words are keywords that issue a command" fares no better. Reply 5 (citing Ex. 1039, 51:14–52:22). Even assuming that Petitioner's two-step process theory is true,15 Petitioner has not demonstrated sufficiently that Ladd's ASR 254 even converts speech into text, much less that any such conversion by ASR 254 does not use predefined voice patterns. Pet. 23–24; see Ex. 1040 ¶ 3 (Dr. Terveen testifying that "[t]here are a number of methods by which a system may perform this first step of converting the spoken words into text, but Ladd is not specific on how it requires step one to occur.").

15 Patent Owner disputes Petitioner's theory. Sur-reply 6–8. We need not, and do not, determine whether Ladd's process is a two-step process as alleged by Petitioner. See Reply 4.

In sum, Petitioner fails to demonstrate by a preponderance of evidence that Ladd discloses the claimed "speaker-independent speech-recognition device." It follows that Petitioner has not shown by a preponderance of the evidence that claim 1 is unpatentable.

6. Obviousness Analysis of Claims 2–7, 10, and 14

As with claim 1, Petitioner relies on Ladd's ASR 254 to satisfy the claimed "speaker-independent speech-recognition device" incorporated into dependent claims 2–7, 10, and 14. As explained supra, Petitioner has failed to meet its burden to establish by a preponderance of evidence that Ladd's ASR unit 254 satisfies this claim element.
It follows, then, that Petitioner has failed to establish that claims 2–7, 10, and 14 are unpatentable.

D. Remaining Obviousness Grounds 4–6

Petitioner advances three additional obviousness grounds regarding claims 1–7, 10, and 14. Pet. 64–67. Each of these grounds, however, relies on the same Ladd disclosure of ASR unit 254 to evince the claimed "speaker-independent speech-recognition device" required by the challenged claims. Id. at 64. As we determined supra, Petitioner has failed to meet its burden to establish by a preponderance of evidence that Ladd's ASR unit 254 satisfies this claim element. It follows, then, that Petitioner has failed to establish that claims 1–7, 10, and 14 are unpatentable on these additional grounds.

III. MOTION TO EXCLUDE

Patent Owner moves to exclude paragraphs 2–25 of Dr. Terveen's Supplemental Declaration (Ex. 1040) for three reasons. First, Patent Owner argues that this testimony does not respond to arguments raised in the Patent Owner Response. Mot. Excl. 2–5. Second, Patent Owner argues that this testimony is an unauthorized and late submission of supplemental information under 37 C.F.R. § 42.123(b). Id. at 6–9. Third, Patent Owner argues that this testimony was improperly incorporated by reference into the Reply. Id. at 9.

Petitioner argues that Patent Owner's bases for moving to exclude are not proper and that, to challenge the scope of Dr. Terveen's declaration, Patent Owner should have filed a motion to strike. Paper 30, 1, 4–6.

A motion to exclude should be directed to the admissibility of evidence. See 37 C.F.R. § 42.64; Patent Trial and Appeal Board Consolidated Trial Practice Guide 79 (November 2019) ("TPG") ("A motion to exclude must explain why the evidence is not admissible (e.g., relevance or hearsay) but may not be used to challenge the sufficiency of the evidence to prove a particular fact.").
According to the Trial Practice Guide, a motion to exclude is not a vehicle to "address arguments or evidence that a party believes exceeds the proper scope of reply or sur-reply." TPG 79. Rather, "[i]f a party believes that a brief filed by the opposing party raises new issues, is accompanied by belatedly presented evidence, or otherwise exceeds the proper scope of reply or sur-reply, it may request authorization to file a motion to strike." Id. at 80. Patent Owner did not file a motion to strike.

Patent Owner's Motion to Exclude does not address the admissibility of Dr. Terveen's testimony. Rather, it argues that the testimony exceeds the proper scope of reply evidence, either because it does not respond to the Patent Owner Response or because Petitioner did not follow our rules in seeking to submit supplemental information. See generally Mot. Excl. Thus, we agree with Petitioner that Patent Owner's Motion to Exclude does not state a proper basis for excluding evidence and, therefore, we deny the motion for that reason.

IV. CONCLUSION16

We conclude Petitioner has not satisfied its burden of demonstrating, by a preponderance of the evidence, that the subject matter of claims 1–7, 10, and 14 of the '084 patent is unpatentable. In summary:

Claims | 35 U.S.C. § | Reference(s)/Basis | Claims Shown Unpatentable | Claims Not Shown Unpatentable
1–6, 10, 14 | 103(a) | Ladd, Kurosawa, Goedken | | 1–6, 10, 14
7 | 103(a) | Ladd, Kurosawa, Goedken, Madnick | | 7
5, 6 | 103(a) | Ladd, Kurosawa, Goedken, Houser | | 5, 6
1–6, 10, 14 | 103(a) | Ladd, Kurosawa, Goedken, Rutledge | | 1–6, 10, 14
7 | 103(a) | Ladd, Kurosawa, Goedken, Rutledge, Madnick | | 7
5, 6 | 103(a) | Ladd, Kurosawa, Goedken, Rutledge, Houser | | 5, 6

16 Should Patent Owner wish to pursue amendment of the challenged claims in a reissue or reexamination proceeding subsequent to the issuance of this decision, we draw Patent Owner's attention to the April 2019 Notice Regarding Options for Amendments by Patent Owner Through Reissue or Reexamination During a Pending AIA Trial Proceeding. See 84 Fed. Reg. 16,654 (Apr. 22, 2019). If Patent Owner chooses to file a reissue application or a request for reexamination of the challenged patent, we remind Patent Owner of its continuing obligation to notify the Board of any such related matters in updated mandatory notices. See 37 C.F.R. § 42.8(a)(3), (b)(2).

V. ORDER

In consideration of the foregoing, it is hereby:

ORDERED that claims 1–7, 10, and 14 of U.S. Patent No. 9,451,084 B2 have not been shown to be unpatentable;

FURTHER ORDERED that Patent Owner's Motion to Exclude Exhibit 1040 is denied; and

FURTHER ORDERED that, because this is a Final Written Decision, parties to the proceeding seeking judicial review of the decision must comply with the notice and service requirements of 37 C.F.R. § 90.2.

FOR PETITIONER:
Jennifer Bailey
Adam Seitz
ERISE IP, P.A.
jennifer.bailey@eriseip.com
adam.seitz@eriseip.com

FOR PATENT OWNER:
Michael McNamara
Michael Renaud
William Meunier
MINTZ, LEVIN, COHN, FERRIS, GLOVSKY AND POPEO, P.C.
mmcnamara@mintz.com
mtrenaud@mintz.com
wameunier@mintz.com