Honeywell International Inc., Patent Trial and Appeal Board, Appeal 2020-003555 (P.T.A.B. Dec. 30, 2021)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 16/133,010
FILING DATE: 09/17/2018
FIRST NAMED INVENTOR: Akash Nandi
ATTORNEY DOCKET NO.: H0065561-5602
CONFIRMATION NO.: 9082

89953 7590 12/30/2021
HONEYWELL/FOGG
Intellectual Property Services Group
855 S. Mint Street
Charlotte, NC 28202

EXAMINER: SHAH, BHARATKUMAR S
ART UNIT: 2677
NOTIFICATION DATE: 12/30/2021
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication.

Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es):
docket@fogglaw.com
eofficeaction@appcoll.com
patentservices-us@honeywell.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte AKASH NANDI and SHOWVIK CHAKRABORTY

Appeal 2020-003555
Application 16/133,010
Technology Center 2600

Before THU A. DANG, ELENI MANTIS MERCADER, and JAMES R. HUGHES, Administrative Patent Judges.

MANTIS MERCADER, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant1 appeals from the Examiner's decision to reject claims 1-20. See Final Act. 1. We have jurisdiction under 35 U.S.C. § 6(b).

We REVERSE.

1 We use the term "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42 (2018). Appellant identifies the real party in interest as Honeywell International Inc. Appeal Br. 1.

CLAIMED SUBJECT MATTER

The claimed invention is directed to at least one artificial neural network configured to: receive an audio signal for a time period; determine if at least one human voice audio spectrum is in the audio signal for the time period; identify at least one human voice audio power spectrum; for the time period, extract each of the at least one identified human voice audio power spectrum; remove artifacts from each extracted human voice audio power spectrum to synthesize an estimation of an original human voice prior to its distortion; and transmit the synthesized estimation of an original human voice. Abstract.

Claim 1, reproduced below, is illustrative of the claimed subject matter:

1. A system, comprising:
at least one artificial neural network configured to:
receive an audio signal;
determine that at least one human voice audio power spectrum is in the audio signal during a time period;
upon determining that at least one human voice audio power spectrum is in the audio signal, then identify, using known human voice audio power spectra, at least one human voice audio power spectrum in the audio signal during the time period;
extract the at least one identified human voice audio power spectrum;
remove artifacts from each of the at least one extracted human voice audio power spectrum to synthesize an estimation of an original human voice prior to its distortion; and
transmit the synthesized estimation of an original human voice.
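[Editorial illustration.] The single-network arrangement recited in claim 1 might be sketched as follows. This is a hypothetical sketch, not Appellant's disclosed implementation: the class name, the cosine-similarity test standing in for the "determine"/"identify" steps, and the simple spectral-floor operation standing in for artifact removal are all assumptions chosen only to show every recited function residing in one component.

```python
import numpy as np

class SingleVoiceANN:
    """Hypothetical sketch: one component performing every function recited in
    claim 1 (receive, determine, identify, extract, remove artifacts, transmit).
    The internal heuristics are illustrative stand-ins, not a trained network."""

    def __init__(self, known_voice_spectra, threshold=0.5):
        self.known_voice_spectra = known_voice_spectra  # known human voice power spectra
        self.threshold = threshold                      # illustrative detection threshold

    def process(self, audio_signal):
        # Receive the audio signal and form its power spectrum for the time period.
        spectrum = np.abs(np.fft.rfft(audio_signal)) ** 2

        # Determine whether a human voice power spectrum is present by comparing
        # against the known spectra (cosine similarity, purely for illustration).
        scores = [self._similarity(spectrum, ref) for ref in self.known_voice_spectra]
        if max(scores) < self.threshold:
            return None  # no human voice detected during this time period

        # Identify and extract the dominant components as the voice power spectrum.
        voice_spectrum = spectrum * (spectrum > np.median(spectrum))

        # Remove artifacts (a simple spectral floor here) to synthesize an
        # estimate of the original voice prior to its distortion.
        clean_spectrum = np.maximum(voice_spectrum - np.percentile(voice_spectrum, 10), 0.0)

        # Transmit (here, return) the synthesized estimate as a time-domain signal.
        return np.fft.irfft(np.sqrt(clean_spectrum), n=len(audio_signal))

    @staticmethod
    def _similarity(a, b):
        # Cosine similarity between two power spectra (illustrative comparison).
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```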
REFERENCES

The prior art relied upon by the Examiner is:

Name          Reference            Date
Avendano      US 2016/0078880 A1   Mar. 17, 2016
Hetherington  US 2004/0167777 A1   Aug. 26, 2004
Olabiyi       US 10,152,970 B1     Dec. 11, 2018

REJECTIONS

Claim(s) Rejected           35 U.S.C. §   Reference(s)/Basis
1-5, 8-10, 13-16, 19, 20    103           Avendano, Hetherington
6, 7, 11, 12, 17, 18        103           Avendano, Hetherington, Olabiyi

OPINION

Claims 1-5, 8-10, 13-16, 19, and 20 rejected under 35 U.S.C. § 103

Appellant argues, inter alia, that the Examiner has not demonstrated that the combination of Avendano and Hetherington teaches or suggests that all of the receiving, determining, identifying, extracting, removing, and transmitting elements of claim 1 are performed by at least one artificial neural network. See Appeal Br. 6. In pertinent part, Appellant emphasizes that claim 1 recites:

at least one artificial neural network configured to:
receive an audio signal;
determine that at least one human voice audio power spectrum is in the audio signal during a time period;
upon determining that at least one human voice audio power spectrum is in the audio signal, then identify, using known human voice audio power spectra, at least one human voice audio power spectrum in the audio signal during the time period;
extract the at least one identified human voice audio power spectrum;
remove artifacts from each of the at least one extracted human voice audio power spectrum to synthesize an estimation of an original human voice prior to its distortion; and
transmit the synthesized estimation of an original human voice.

Appeal Br. 6.

Appellant argues that Avendano uses a deep neural network (DNN) 315 to restore speech components in damaged frequency regions. Appeal Br. 7 (citing para. 38 and Fig. 3). Appellant further argues that the DNN 315 is only part of a speech restoration module 330, which is but one module in a multi-module audio processing system 210. Appeal Br. 7 (citing para. 38 and Fig. 3). Avendano does not teach using the DNN for other processing, e.g., the processing performed by the frequency analysis module 310, noise reduction module 320, and reconstruction module 340 of the audio processing system 210. Id.

Appellant argues that because Avendano teaches using a DNN to perform only speech restoration, Avendano does not teach all of the elements expressly recited in claim 1. Appellant contends that Avendano does not use the DNN 315, or any artificial neural network, in determining and identifying speech components. Appeal Br. 7 (citing Avendano, para. 36 and Fig. 3; Final Act. 3). Rather, Appellant explains that these tasks are performed only by the frequency analysis module 310, which does not include the DNN 315. Id. (citing Avendano, para. 36 and Fig. 3). Appellant further contends that an element requiring all subsets of the claimed subject matter to be performed by at least one artificial neural network is not a mere triviality, but a substantive element that must be accorded patentable weight. Id.

Appellant argues that the Examiner asserted that Avendano teaches that all of these elements are performed by at least one artificial neural network only because Avendano, in paragraph 39, teaches a deep neural network, which is artificial intelligence with multiple layers. Appeal Br. 7-8 (citing Final Act. 2). Appellant argues that this reasoning is irrelevant because the fact that an artificial neural network comprises multiple layers provides no insight as to the (claimed) functions performed by the network. Appeal Br. 8.
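[Editorial illustration.] The modular arrangement Appellant describes in Avendano's Figure 3 can be pictured as a chain of separate stages in which only the speech restoration stage contains the DNN 315. The module numbering follows the briefing's citations; the function bodies below are placeholders assumed purely for illustration and do not reproduce Avendano's actual processing.

```python
import numpy as np

def frequency_analysis(audio_signal):
    # Module 310: transforms the signal and identifies speech components.
    # Per the briefing, no neural network operates at this stage.
    return np.abs(np.fft.rfft(audio_signal)) ** 2  # placeholder sub-band representation

def noise_reduction(subbands):
    # Module 320: suppresses noise components (placeholder logic, no network).
    return np.maximum(subbands - np.median(subbands), 0.0)

def speech_restoration(subbands, dnn_315):
    # Module 330: the only stage containing DNN 315, used to restore speech
    # components in distorted frequency regions (Avendano paras. 38-39).
    return dnn_315(subbands)

def reconstruction(restored_subbands, n_samples):
    # Module 340: reassembles a time-domain output from the restored sub-bands.
    return np.fft.irfft(np.sqrt(restored_subbands), n=n_samples)

# Only speech_restoration receives the network; the other stages do not.
audio = np.random.default_rng(0).standard_normal(1024)
identity_dnn = lambda x: x  # stand-in for DNN 315
restored = speech_restoration(noise_reduction(frequency_analysis(audio)), identity_dnn)
output = reconstruction(restored, len(audio))
```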
Appellant argues that the layering of an artificial neural network refers to its structure, not its function, and that even though the DNN is taught in Avendano as a multi-layered network, the reference still lacks any teaching or suggestion that the DNN performs all the asserted claim elements. Appeal Br. 8. Appellant then argues that the Examiner has not provided a reasoned explanation, supported by evidence, of how a DNN disclosed to perform only speech restoration can be construed to perform all of the elements expressly recited in claim 1. Id.

We agree with Appellant's arguments. The Examiner finds that Avendano teaches a deep neural network, which is an artificial intelligence (AI) with multiple layers, per paragraph 39. Ans. 4 (citing Avendano para. 39). We agree with Appellant that the layering of an artificial neural network refers to its structure and not its function. See Appeal Br. 8. The only recited function is that "DNN 315 may extract learned higher-order spectro-temporal features of the clean or undamaged spectral envelopes . . . used in the speech restoration module 330 to refine predictions . . . for restoring speech components in the distorted frequency regions." Avendano para. 39.

Furthermore, we agree with Appellant's argument that, as clearly shown in Avendano's Figure 3, the DNN 315 is part of the speech restoration module 330 and, thus, the DNN 315 is not utilized for determining and identifying speech components. See Appeal Br. 7 (citing Avendano, para. 36 and Fig. 3; Final Act. 3). We agree with Appellant that these tasks are performed only by the frequency analysis module 310, which does not include the DNN 315. See id. (citing Avendano, para. 36 and Fig. 3).

Accordingly, we are constrained by the record before us to reverse the rejection of claim 1 and, for the same reasons, the rejection of claims 2-20. We note that the additional references of Hetherington and Olabiyi do not cure the above-cited deficiencies. Should there be further prosecution, the Examiner should consider whether there is a reference showing that it is known for neural networks to execute any function of interest, such as the functions of determining and identifying speech components.

CONCLUSION

The Examiner's rejections of claims 1-20 are REVERSED.

DECISION SUMMARY

Claim(s) Rejected           35 U.S.C. §   Reference(s)/Basis                Affirmed   Reversed
1-5, 8-10, 13-16, 19, 20    103           Avendano, Hetherington                       1-5, 8-10, 13-16, 19, 20
6, 7, 11, 12, 17, 18        103           Avendano, Hetherington, Olabiyi              6, 7, 11, 12, 17, 18
Overall Outcome                                                                        1-20

REVERSED