Ex parte Levi, No. 13/777,781 (P.T.A.B. Aug. 21, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte AVRAM LEVI

Appeal 2018-002374
Application 13/777,781
Technology Center 2600

Before ALLEN R. MacDONALD, JEREMY J. CURCURI, and NABEEL U. KHAN, Administrative Patent Judges.

PER CURIAM

DECISION ON APPEAL [1]

[1] Appellant indicates the real party in interest is Avaya Inc. App. Br. 2.

STATEMENT OF CASE

Appellant appeals under 35 U.S.C. § 134(a) from a final rejection of claims 1-8, 10-17, 19, and 20. Claims 9 and 18 are withdrawn from consideration. Final Act. 1. We have jurisdiction under 35 U.S.C. § 6(b).

We AFFIRM.

Illustrative Claim

Illustrative claim 1 under appeal reads as follows (emphases, formatting, and bracketed material added):

1. A method of operating a voice command system, comprising:
[A.] receiving audio information spoken by a user during a time period;
[B.] determining whether the audio information includes a voice command to request an action;
[C.] determining a first orientation of the user during the time period,
[i.] wherein the first orientation comprises a movement of the body of the user performed while the user speaks the voice command, and
[ii.] wherein the movement comprises one of a plurality of movements each associated with a corresponding sub-action of the action; and
[D.] complying with the voice command based on the first orientation by at least triggering the corresponding sub-action.

Rejections

A. The Examiner rejected claims 1, 2, 7, 8, 10, 11, 16, 17, 19, and 20 under 35 U.S.C. § 103 as being unpatentable over the combination of Byers (US 6,219,645 B1; pub. Apr. 17, 2001) and Galor et al. (US 2013/0055120 A1; pub. Feb. 28, 2013). [2]

We select claim 1 as representative. Appellant does not present arguments for claims 2, 7, 8, 10, 11, 16, 17, 19, and 20. Except for our ultimate decision, we do not address the § 103 rejection of claims 2, 7, 8, 10, 11, 16, 17, 19, and 20 further herein.

B. The Examiner rejects claims 3 and 12 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Byers, Galor, and Kim et al. (US 2013/0033643 A1; pub. Feb. 7, 2013). Final Act. 23-26.

Appellant does not present arguments for claims 3 and 12. Thus, the rejection of these claims turns on our decision as to claim 1. Except for our ultimate decision, we do not address the § 103 rejection of claims 3 and 12 further herein.

C.
The Examiner rejected claims 4-6 and 13-15 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Byers, Galor, Kim, and Parker et al. (US 2013/0307771 A1; pub. Nov. 21, 2013).

[2] Both the Examiner (Final Act. 3) and the Appellant (App. Br. 7) mistakenly list claims 5 and 6 as included in this rejection. We treat claims 5 and 6 as included in the third listed ground of rejection, as only that rejection includes an explanation of the basis for rejecting these claims. Final Act. 10-11.

We select claim 4 as representative. Appellant does not present arguments for claims 5, 6, and 13-15. Except for our ultimate decision, we do not address the § 103 rejection of claims 5, 6, and 13-15 further herein.

Issue on Appeal

Did the Examiner err in rejecting claims 1 and 4 as being obvious?

ANALYSIS

We have reviewed the Examiner's rejections in light of Appellant's arguments (Appeal Brief and Reply Brief) that the Examiner has erred. We disagree with Appellant. Except as noted below, we adopt as our own (1) the findings and reasons set forth by the Examiner in the action from which this appeal is taken and (2) the reasons set forth by the Examiner in the Examiner's Answer in response to Appellant's Appeal Brief. We concur with the conclusions reached by the Examiner. We highlight the following points.

A. Appellant raises the following argument in contending that the Examiner erred in rejecting claim 1 under 35 U.S.C. § 103(a):

Galor fails to disclose movements, performed while speaking a voice command, associated with corresponding sub-actions of an action requested by that voice command. In particular, in its response to arguments, the final Office action asserts Galor discloses that motions made by the user may provide a control input for user interaction with interface 20 and those motions may be combined with interactions in other modalities, such as voice commands (see final Office action, p. 11; citing Galor, ¶ 0028). Galor does not go into further detail regarding how movement may be used in combination with a voice command. Rather, Galor merely provides examples where the user's static orientation effects the action of the voice command. One such example is the user pointing at a particular light that should be turned on when saying "light" (see Galor, ¶¶ 0053-54). Galor does not disclose that movement is necessarily part of the pointing gesture. As such, movement would not be inherent in the examples since Galor could simply consider a static pointing position as satisfying the pointing gesture. Galor fails to disclose that the movement is performed while the user speaks the voice command, as required by claim 1. Moreover, Galor fails to disclose how movement would be related to the voice command. Specifically, Galor fails to disclose that a user's movement, performed while speaking a voice command, is one of a plurality of movements that each correspond to a sub-action of the action requested by the voice command, as required by claim 1.

App. Br. 7.

The Examiner presents the following response to Appellant's above argument:

Figure 2, items 30 & 34 and paragraphs [0030] & [0033] of Galor, which relate to receiving a series of 3D maps of the user's hand and detecting a gesture (including a pointing gesture) towards a device. Paragraphs [0034-0038] define the process of identifying a pointing gesture.
Paragraph [0036] further states a condition of identifying such a gesture is "Defining an interaction region 48 within pyramid shaped region 42 and identifying a pointing gesture upon the 3D maps indicating user 22 positioning hand 29 within region 48 and moving the hand towards the display." Paragraph [0038] states a further condition is "Defining a minimum time period (e.g., 200 milliseconds), and identifying a pointing gesture upon the 3D maps indicating user 22 pausing hand 29 for the minimum time period after extending hand 29 towards the display." Therefore, a pointing gesture is one where there is movement towards an object followed by a pause. In other words, Galor defines a "pointing gesture" as one that necessarily involves movement as an integral part of the gesture, not simply a static orientation as argued by the Appellant.

Ans. 11-12.

Since there are multiple light fixtures (see Figure 4, items 64 & 66 and paragraphs [0053-0054]), the pointing gesture to designate a particular light of the multiple lights as a target is a subaction of a plurality of possible subactions (i.e., the pointing towards each different light is a different movement resulting in a different subaction). These subactions are a part of the "turn on a light" action triggered by the voice command "Light". Therefore, under the broadest reasonable interpretation of the claim limitations, the combination of Byers and Galor renders the limitations obvious.

Ans. 13.

We agree with the Examiner's reasoning. In the Reply Brief, Appellant further argues:

[T]he movement itself [in Galor] is not associated with a corresponding subaction of an action (as opposed to another movement being associated with another subaction of the action), as required by claim 1. Rather, the direction in which the user is pointing is what is associated with a corresponding action. For instance, in the example cited by the Examiner's Answer, the user says "light" while pointing in proximity to lighting fixture 66. Aside from the user's movement being used to confirm that the pointing gesture is not a false positive, the movement is irrelevant to identification of lighting fixture 66.

Reply Br. 2 (emphasis added).

We disagree with Appellant's further argument, as Galor explicitly states that the pointing gesture is relevant to identification of the lighting fixture:

In one example, computer 26 can turn on (or off) a given one of the lighting fixtures in response to detecting, in the 3D maps, a pointing gesture directed toward the given lighting fixture.

Galor ¶ 45 (emphasis added).

B. Appellant also raises the following arguments in contending that the Examiner erred in rejecting claim 4 under 35 U.S.C. § 103(a):

Claim 4 provides that a head nod up triggers the volume to go up and a head nod down triggers the volume to go down . . . . Parker merely recites that an "interaction set may also be a gesture library of driver motions that can be used to control the satellite radio, such as hand, arm or head motions that cause the satellite radio volume or channel to change" (see Parker, ¶ 0061). Parker does not disclose any specific motions that correspond to volume up and down commands. Also, Parker fails to disclose that such motions would be performed while saying a voice command rather than independently.
In fact, one would not be motivated to add a voice command to the motions described by Parker because adding a voice command to Parker would add unnecessary complication to those driver motions.

App. Br. 8-9.

The Examiner presents the following response to Appellant's above argument:

While the Appellant is correct that no particular head gesture is tied to either the volume up or volume down command, since there are a finite number of head motions, it would be obvious to try them as it is simply choosing from a finite number of identified, predictable solutions with a reasonable expectation of success. The interaction between gesture and voice command for changing volume are taught by Kim. Parker simply replaces the volume direction specifying hand gestures of Kim with volume direction specifying head gestures.

Ans. 14.

We agree with the Examiner's reasoning. In the Reply Brief, Appellant further argues:

With respect to claim 4, the Examiner's Answer asserts that, while Parker discloses no particular head gesture tied to either volume up or down, it would be obvious to try one of a finite number of head motions. While there are not a finite number of head motions, as an infinite number of different motions (even minutely different) may be performed, even if it were obvious to try the head nod up and head nod down motions, Parker does not disclose that any voice command need be spoken while making such a motion.

Reply Br. 3 (emphasis added).

We disagree with Appellant's further arguments. First, contrary to Appellant's asserted "infinite number of different [head] motions," we conclude there are only a very limited number of head motions that are reasonably discernible as being distinct motions (and thus reasonably usable to convey information). This limited set of head motions consists of: (1) head up; (2) head down; (3) both head up and down together in sequence; (4) head right; (5) head left; and (6) both head right and left together in sequence. Of this limited set, Appellant uses (1) and (2) to convey information. Spec. ¶ 48 ("head up or down").

Second, the Examiner does not cite Parker for the argued "voice command" limitation. Rather, the Examiner cites Byers for "a voice command system" (Final Act. 3), and the Examiner cites Galor for "a movement of the body of the user performed while the user speaks the voice command" (Final Act. 5). Thus, Appellant does not address the actual reasoning of the Examiner's rejection. Instead, Appellant attacks the Parker reference singly for lacking a teaching that the Examiner relied on a combination of references to show. It is well established that one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 425 (CCPA 1981); In re Merck & Co., Inc., 800 F.2d 1091, 1097 (Fed. Cir. 1986). References must be read, not in isolation, but for what they fairly teach in combination with the prior art as a whole. Merck, 800 F.2d at 1097.

CONCLUSIONS

(1) The Examiner has not erred in rejecting claims 1-8, 10-17, 19, and 20 as being unpatentable under 35 U.S.C. § 103(a).

(2) Claims 1-8, 10-17, 19, and 20 are not patentable.

DECISION

The Examiner's rejections of claims 1-8, 10-17, 19, and 20 are affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).
AFFIRMED