Ex parte Givon et al., Appeal 2016-001132, Application No. 13/061,568 (P.T.A.B. Nov. 30, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 13/061,568
FILING DATE: 03/01/2011
FIRST NAMED INVENTOR: Dor Givon
ATTORNEY DOCKET NO.: XTR-PU-004-US1
CONFIRMATION NO.: 7243

60956 7590 12/02/2016
Professional Patent Solutions
P.O. Box 654
Herzeliya Pituach, 46105 Israel

EXAMINER: KHAN, IBRAHIM A
ART UNIT: 2692
NOTIFICATION DATE: 12/02/2016
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): office@propats.com, vsherman@propats.com, utalmi@propats.com.

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte DOR GIVON, OFER SADKA, ILYA KOTTEL, and IGOR BUNIMOVICH

Appeal 2016-001132
Application 13/061,568
Technology Center 2600

Before DEBRA K. STEPHENS, KEVIN C. TROCK, and JESSICA C. KAISER, Administrative Patent Judges.

TROCK, Administrative Patent Judge.

DECISION ON APPEAL

Introduction

Appellants¹ appeal under 35 U.S.C. § 134(a) from the Examiner's Final Rejection of claims 1–6, 10–12, and 15–21, which constitute all the claims pending in this application. We have jurisdiction under 35 U.S.C. § 6(b). Claims 7–9, 13, and 14 have been cancelled.² We AFFIRM.

¹ According to Appellants, the real party in interest is Extreme Reality Ltd. App. Br. 2.
² Final Act. 2.

Invention

The claims are directed to a system for correlating gestures, which are defined by manipulating a computerized graphic model of a human, to computer input signals. Spec. ¶ 30.

Exemplary Claim

Claim 1, reproduced below, is illustrative of the claimed subject matter with disputed limitations emphasized:

1. A system for signal converting, said system comprising:

a display;

a data storage adapted to store mapping tables comprising correlations of indications of human gestures produced by an image based human-machine interface (IBHMI) to standard mouse or keyboard output signals;

a rendering module adapted to render a graphic model of at least a portion of a human body upon said display;

a graphic user interface adapted to (1) allow a user to manipulate elements of the graphic model of the at least a portion of the human body by manipulating one or more functionally associated user input components, and (2) receive a selection of a first standard mouse or keyboard output signal, wherein the one or more user input components are not image based;

a gesture generating module adapted to define a human gesture based on positions of elements of the graphic model during a series of one or more user manipulations, using the one or more user input components, of elements of the graphic model of the at least a portion of the human body, and

a mapping module adapted to correlate an indication of the defined human gesture produced by the IBHMI to the first standard mouse or keyboard output signal and store the correlation in said mapping tables.
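Although the record contains no source code, the mapping-table arrangement recited in claim 1 lends itself to a brief illustration. The following Python sketch is purely hypothetical: none of the identifiers (Gesture, GestureMapper, the pose tuples, the signal string "KEY_RIGHT_ARROW") appears in the application or the cited art, and the sketch shows only one plausible way to correlate user-defined gestures with standard mouse or keyboard output signals and store the correlations in a mapping table.

```python
# Hypothetical sketch of the mapping-table scheme recited in claim 1.
# Every name here is an assumption made for illustration; nothing below
# is taken from the application or the prior art.
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple


@dataclass(frozen=True)
class Gesture:
    """A user-defined gesture: a named series of graphic-model poses.

    Each pose records model elements (e.g., "right_hand") with their
    (x, y) positions, captured while the user manipulated the on-screen
    model with a non-image-based input component such as a mouse.
    """
    name: str
    poses: Tuple[Tuple[Tuple[str, float, float], ...], ...]


@dataclass
class GestureMapper:
    """Stores correlations of gesture indications to output signals."""
    table: Dict[str, str] = field(default_factory=dict)

    def map_gesture(self, gesture: Gesture, output_signal: str) -> None:
        # Correlate the defined gesture with a selected standard signal
        # and store the correlation in the mapping table.
        self.table[gesture.name] = output_signal

    def convert(self, gesture_indication: str) -> Optional[str]:
        # At runtime an IBHMI would report which gesture it recognized;
        # the indication is converted into the mapped output signal.
        return self.table.get(gesture_indication)


# Usage: define a two-pose "swipe right" gesture and map it to a
# (hypothetical) keyboard output signal.
swipe = Gesture(
    name="swipe_right",
    poses=((("right_hand", 0.2, 0.9),), (("right_hand", 0.8, 0.9),)),
)
mapper = GestureMapper()
mapper.map_gesture(swipe, "KEY_RIGHT_ARROW")
assert mapper.convert("swipe_right") == "KEY_RIGHT_ARROW"
```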
Applied Prior Art

The prior art relied upon by the Examiner in rejecting the claims on appeal is:

Lee et al. ("Lee")        US 6,160,899           Dec. 12, 2000
Hillis et al. ("Hillis")  US 2008/0013826 A1     Jan. 17, 2008
Morita et al. ("Morita")  US 2008/0104547 A1     May 1, 2008

REJECTIONS

The Examiner made the following rejections:

Claims 1–6, 10, 11, 15–19, and 21 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Hillis and Lee. Final Act. 2–20.

Claims 12 and 20 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Hillis, Lee, and Morita. Id. at 20–22.

ANALYSIS

We have reviewed the Examiner's rejections and the evidence of record in light of Appellants' argument that the Examiner has erred. We disagree with Appellants' arguments and conclusions. We adopt as our own the findings and reasons set forth by the Examiner in the action from which this appeal is taken (Final Act. 2–22) and the findings and the reasons set forth in the Examiner's Answer (Ans. 2–9). We concur with the conclusions reached by the Examiner and further highlight specific findings and argument for emphasis as follows.

Independent Claims 1, 10, and 15

"a graphic model of at least a portion of a human body"

Appellants contend neither Hillis nor Lee teaches "a rendering module adapted to render a graphic model of at least a portion of a human body upon said display," as recited in claim 1 and similarly recited in claims 10 and 15. App. Br. 16–17. Specifically, Appellants argue that Lee teaches a "system in which a user sees an image of himself/herself on the display as they perform gestures," but that displayed user image is "a captured image — not a rendered graphic model." Id. at 17. Additionally, Appellants argue Lee's user image "is controlled by . . . an image based input component" and "is not used for gesture definition." Id. Additionally, Appellants argue neither Hillis' "cursor" nor "the image captured by the cameras" is a graphic model of a human. Id. at 16–17.

We are not persuaded. The Examiner finds, and we agree, Lee "render[s] a graphical model of a human body on a display." Final Act. 4 (citing Lee 1:46–52, Figs. 1–2); Ans. 6. Indeed, Lee uses "a camera 1 for capturing a user's image," and the "user's image is displayed" on a screen. Lee 2:60–67, Fig. 1. Although Appellants argue the image that Lee's "user sees . . . of himself/herself on the display" "is a captured image" and "not a rendered graphic model" (App. Br. 17), neither the claim nor the Specification precludes a captured image from being within the meaning of a graphic model. The claim does not recite limitations excluding displayed captured images from the scope of a "graphic model," and the Specification does not provide a definition excluding displayed captured images from the meaning of a "graphic model" (or, indeed, any definition of a graphic model).

We agree with the Examiner's broad, but reasonable, interpretation that "a graphic model can be a representation of the object" (Ans. 6), and, accordingly, Lee's displayed user image is a rendered graphic model because it is a representation of the user displayed on a screen (Lee Fig. 1, 2:66–67). Additionally, Appellants' arguments that Lee's user image is controlled by an image-based input component and is not used for gesture definition (App. Br. 17) are directed to features not recited in the present limitation and, moreover, are features defining how a graphic model is used rather than limiting what the graphic model is. Furthermore, Appellants' argument that Hillis does not teach a rendered graphic model (App. Br. 16–17) does not address the Examiner's findings regarding Lee's displayed user image.

Accordingly, we are not persuaded the Examiner erred in finding Lee teaches "a rendering module adapted to render a graphic model of at least a portion of a human body upon said display," as recited in claim 1 and similarly recited in claims 10 and 15.

"user input components are not image based"

Appellants contend neither Hillis nor Lee teaches "a graphic user interface adapted to (1) allow a user to manipulate elements of the graphic model . . . by manipulating one or more functionally associated user input components . . . wherein the one or more user input components are not image based," as recited in claim 1 and similarly recited in claims 10 and 15. App. Br. 18; Reply Br. 5–6. Specifically, Appellants argue the Examiner "analogiz[es] the user's hands [in Hillis and Lee] to the input components recited" (App. Br. 18), but "a human hand is clearly not functionally associated with a [graphic user interface]" and so cannot be an "input component" (Reply Br. 5–6). Appellants further argue that even if the user's hands were an input component, "this input component is clearly image based because the input is conveyed by images," i.e., Appellants contend that the hands of the user are an image-based input because the hands are captured by a camera. Reply Br. 5; App. Br. 17–18.

We are not persuaded. The Examiner finds, and we agree, Hillis teaches a "user's hand or finger is an input component because it is used to make inputs via gestures." Ans. 8 (citing Hillis ¶¶ 16–17, 34); Final Act. 4, 22. The Examiner further finds, and we agree, Lee also teaches a "gesture recognition interface system" where a user's hand or head is an input component. Final Act. 4, 22; Ans. 4 (citing Lee 3:9–32, Figs. 1–2, 5), 8–9 (citing Lee 3:40–45). Moreover, the Examiner finds, and we agree, that a user's actual, physical hand is "not image based." Ans. 7.

We are not persuaded by Appellants' argument that a user's actual hand cannot be a non-image-based input component functionally associated with a graphic user interface. Reply Br. 5–6; App. Br. 17–18. As the Examiner points out, Appellants' Specification does not preclude a user's hand from being a non-image input component (see Ans. 4–5 (citing Spec. ¶ 30, Fig. 4), 7), and we agree with the Examiner's broad, but reasonable, interpretation that the actual, physical user body parts in Hillis and Lee are input components which are not image based (Ans. 7–8; Final Act. 22). Indeed, as the Examiner points out (Final Act. 3–4), Hillis teaches that a "user's hand" is a "sensorless input object" for a graphic interface (Hillis ¶¶ 19, 22 (emphasis added); see Hillis ¶ 29, Fig. 3). Furthermore, in Lee, "[w]hen the user moves his hand . . . the user's hand image on the screen also moves" according to the user's physical hand, i.e., the manipulation of the user's physical hand is the input which manipulates the graphic model's hand. See Lee 3:9–22.
Although cameras capture the physical hand, the Examiner does not assert the camera is the recited "input component." Instead, the Examiner finds the "input component" is the actual physical hand itself, which is not image based. Ans. 4, 7–8.

Accordingly, we are not persuaded the Examiner erred in finding Hillis and Lee teach "a graphic user interface adapted to allow a user to manipulate elements of the graphic model . . . by manipulating one or more functionally associated user input components . . . wherein the one or more user input components are not image based," as recited in claim 1 and similarly recited in claims 10 and 15.

"define a human gesture"

Appellants contend Hillis does not teach "a gesture generating module adapted to define a human gesture based on positions of elements of the graphic model during a series of one or more user manipulations, using the one or more user input components, of elements of the graphic model of the at least a portion of the human body," as recited in claim 1 and similarly recited in claims 10 and 15. App. Br. 19. Specifically, Appellants argue "[t]here is no explanation how" Hillis' teaching of "defining features by performing them in front of a camera" teaches "defining gestures by manipulating a graphic model." Id. (emphasis omitted). Appellants further argue Hillis' gestures are not defined "using non-image based input elements." Id.

We are not persuaded. The Examiner finds, and we agree, Hillis defines a new gesture using a "'begin gesture sample' operation" where the user "perform[s] the new gesture [and] capture[s] the appropriate images of the new gesture." Ans. 9 (citing Hillis ¶ 35). The Examiner further finds, and we agree, Lee teaches a graphic model performing gestures, e.g., a head nod. Ans. 9 (citing Lee 3:40–45, Fig. 1). The Examiner's combination incorporates Lee's gesturing graphic model into Hillis' new-gesture-defining operation. Ans. 9; Final Act. 4.

Appellants' argument that the Examiner fails to explain how Hillis defines a gesture (App. Br. 19) is not persuasive. The Examiner explains that Hillis "define[s] a human gesture" (Ans. 9) because Hillis' gesture memory is "dynamically programmable, such that new gestures can be added" by "captur[ing] the appropriate images of the new gesture" (Hillis ¶ 35). Further, Appellants' argument directed to using a graphic model to define gestures (App. Br. 19) inappropriately attacks Hillis individually when the Examiner's rejection combines the teachings and suggestions of Hillis and Lee to "modify the gesture recognition interface of Hillis to include [Lee's] graphical model" (Final Act. 4; Ans. 9). In re Keller, 642 F.2d 413, 426 (CCPA 1981) (citation omitted). Further, Appellants' argument that the gesture definition is not defined "using non-image based input" (App. Br. 20) is not persuasive because we agree with the Examiner's finding that the user's physical body part, e.g., a hand or head, is a non-image-based input (which manipulates the user image), as discussed supra.

Accordingly, we are not persuaded the Examiner erred in finding the combination of Hillis and Lee teaches a gesture generating module adapted to define a human gesture based on positions of elements of the graphic model during a series of one or more user manipulations, using the one or more user input components, of elements of the graphic model of the at least a portion of the human body, within the meaning of claims 1, 10, and 15.
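The gesture-defining operation discussed above can likewise be illustrated with a short, purely hypothetical Python sketch. The names below (GestureRecorder, on_manipulation, define_gesture) are assumptions made for illustration only; they describe neither Hillis, nor Lee, nor Appellants' specification, but merely one way a module could define a gesture from positions of graphic-model elements sampled during a series of user manipulations.

```python
# Hypothetical sketch of a "gesture generating module"; all names are
# assumptions for illustration and do not appear in the record.
from typing import Dict, List, Tuple


class GestureRecorder:
    """Defines a gesture from graphic-model element positions sampled
    during a series of user manipulations made with a non-image-based
    input component (e.g., a mouse dragging the model's hand)."""

    def __init__(self) -> None:
        self.samples: List[Dict[str, Tuple[float, float]]] = []

    def on_manipulation(self, positions: Dict[str, Tuple[float, float]]) -> None:
        # Called whenever the user moves an element of the graphic model;
        # a snapshot of the element positions is appended to the recording.
        self.samples.append(dict(positions))

    def define_gesture(self, name: str) -> dict:
        # The recorded series of poses becomes the gesture definition,
        # which a mapping module could then correlate to an output signal.
        gesture = {"name": name, "poses": tuple(self.samples)}
        self.samples = []
        return gesture


# Usage: two manipulations of the model's head define a "nod" gesture.
recorder = GestureRecorder()
recorder.on_manipulation({"head": (0.5, 0.95)})
recorder.on_manipulation({"head": (0.5, 0.85)})
nod = recorder.define_gesture("nod")
```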
Dependent Claims 2–6, 11, 12, and 16–21

Appellants do not argue separate patentability for dependent claims 2–6, 11, 12, and 16–21, which depend from claims 1, 10, and 15. See App. Br. 20–21. For the reasons set forth above, therefore, we are not persuaded the Examiner erred in rejecting these claims. See In re Lovin, 652 F.3d 1349, 1356 (Fed. Cir. 2011) ("We conclude that the Board has reasonably interpreted Rule 41.37 to require applicants to articulate more substantive arguments if they wish for individual claims to be treated separately."). Accordingly, we sustain the Examiner's rejections of claims 2–6, 11, 12, and 16–21. See 37 C.F.R. § 41.37(c)(1)(iv).

DECISION

We AFFIRM the Examiner's 35 U.S.C. § 103 rejections of claims 1–6, 10–12, and 15–21.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED