Ex Parte Hosenpud et al, No. 14/792,844 (P.T.A.B. Dec. 27, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 14/792,844
FILING DATE: 07/07/2015
FIRST NAMED INVENTOR: Jonathan J. Hosenpud
ATTORNEY DOCKET NO.: 6430-04501
CONFIRMATION NO.: 8419
EXAMINER: NGUYEN, KIMBINH T
ART UNIT: 2612
NOTIFICATION DATE: 12/31/2018
DELIVERY MODE: ELECTRONIC

Correspondence: MEYERTONS, HOOD, KIVLIN, KOWERT & GOETZEL, P.C., P.O. BOX 398, AUSTIN, TX 78767-0398

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte JONATHAN J. HOSENPUD, ARTHUR L. BERMAN, JEROME C. TU, KEVIN D. MORISHIGE, and DAVID A. CHAVEZ

Appeal 2018-009073
Application 14/792,844
Technology Center 2600

Before CARLA M. KRIVAK, IRVIN E. BRANCH, and JOSEPH P. LENTIVECH, Administrative Patent Judges.

KRIVAK, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from a final rejection of claims 1-20. We have jurisdiction under 35 U.S.C. § 6(b). We reverse.

STATEMENT OF THE CASE

Appellants' invention is directed to "methods and systems to view and capture still and video images from a perspective related to the tip of a user input device. The user input device is used in connection with a stereoscopic display system." (Spec. ¶ 2).
Independent claim 1, reproduced below, is exemplary of the subject matter on appeal.

1. A computer-implemented method for capturing a virtual two dimensional (2D) image of a portion of a virtual three dimensional (3D) scene, comprising:
a computer performing:
rendering a virtual 3D scene on a display from a user's point of view (POV);
activating a camera mode in response to first user input via a user input device in communication with the computer, wherein the user input device comprises a stylus, wherein the camera mode enables the user input device to trigger the computer to capture virtual images within the virtual 3D scene based on a POV of the user input device;
determining the POV of the user input device, wherein the POV of the user input device is independent of the user's POV;
determining a virtual 2D frame of the virtual 3D scene based on the POV of the user input device;
capturing the virtual 2D image based on the virtual 2D frame and in response to second user input via the user input device; and
storing the virtual 2D image.

REJECTIONS AND REFERENCES

The Examiner rejected claims 1-4 under 35 U.S.C. § 103(a) based upon the teachings of Catt (US 2015/0358539 A1, published Dec. 10, 2015), Seidl (US 2014/0098186 A1, published Apr. 10, 2014), and Zhang (US 2014/0043445 A1, published Feb. 13, 2014). The Examiner rejected claims 5-20 under 35 U.S.C. § 103(a) based upon the teachings of Catt, Seidl, Zhang, and Nakayama (US 2014/0002351 A1, published Jan. 2, 2014).
ANALYSIS

Appellants contend the combination of Catt, Seidl, and Zhang fails to teach or suggest "capturing a virtual two dimensional (2D) image of a portion of a virtual three dimensional (3D) scene" where "the camera mode enables the user input device to trigger" a computer and "capture virtual images within the virtual 3D scene based on a POV [point of view] of the user input device," and where the "POV of the user input device is independent of the user's POV." App. Br. 7. Particularly, Catt teaches displaying 3D images and capturing 3D-pannable images using a camera system, and Seidl teaches a wide field of view optical imaging device configured to image at least two different viewpoints of a scene. App. Br. 10. Further, Seidl's cameras narrow the field of view of the user to navigate by head turns, all being 3D images. App. Br. 11. Thus, Appellants contend, neither Catt nor Seidl teaches "determining a virtual 2D frame of the virtual 3D scene based on the POV of the user input device" (emphases added), as claimed. Id. We agree.

The Examiner's Answer, for the most part, recites the same arguments as in the Final Office Action, and cites portions of the references that do not teach a virtual 2D image or "capturing the virtual 2D image based on the virtual 2D frame and in response to second user input via the user input device." Ans. 3-7; see App. Br. 11-12. We also note the Examiner's reliance on Seidl for teaching "determining the POV of the user input device, wherein the POV of the user input device is independent of the user's POV," is misplaced. Ans. 6. Seidl's paragraph 4 teaches an "imaging device having at least two optical imaging elements configured to image at least two different viewpoints of a scene." Ans. 6. The Examiner's reliance on Zhang's Figure 5 and paragraph 33 for teaching that the user input device is a stylus, as claimed, is also misplaced. Ans. 5 (also relying on Catt ¶¶ 25-26).
Thus, the Examiner erred in finding the combination of Catt, Seidl, and Zhang teaches or suggests the subject matter of Appellants' independent claim 1, and of independent claims 2 and 3, which recite similar limitations. We therefore do not sustain the rejection of independent claims 1-3 and of claims 4-20 dependent therefrom.

DECISION

The Examiner's decision rejecting claims 1-20 is reversed.

REVERSED