Samsung Electronics Company, Ltd., Appeal No. 2020-001658 (P.T.A.B. Jun. 30, 2021)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/856,493
FILING DATE: 09/16/2015
FIRST NAMED INVENTOR: Sajid Sadi
ATTORNEY DOCKET NO.: 082499.0124
CONFIRMATION NO.: 4789

Correspondence Address (Customer No. 121588):
Baker Botts L.L.P./Samsung
1001 Page Mill Road, Building One, Suite 200
Palo Alto, CA 94304-1007

EXAMINER: WU, MING HAN
ART UNIT: 2616
NOTIFICATION DATE: 06/30/2021
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail addresses: Cal-PTOmail@bakerbotts.com; PTOmail1@bakerbotts.com; patent@samsung.com.

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte SAJID SADI, SERGIO PERDICES-GONZALEZ, RAHUL BUDHIRAJA, BRIAN DONGWOO LEE, AYESHA MUDASSIR KHWAJA, PRANAV MISTRY, LINK HUANG, CATHY KIM, MICHAEL NOH, RANHEE CHUNG, SANGWOO HAN, JASON YEH, JUNYEON CHO, SOICHAN NGET, BRIAN HARMS, YEDAN QIAN, and RUOKAN HE

Appeal 2020-001658
Application 14/856,493
Technology Center 2600

Before MIRIAM L. QUINN, ADAM J. PYONIN, and NABEEL U. KHAN, Administrative Patent Judges.

PYONIN, Administrative Patent Judge.

DECISION ON APPEAL

Pursuant to 35 U.S.C. § 134(a), Appellant[1] appeals from the Examiner's rejection of claims 1–32. We have jurisdiction under 35 U.S.C. § 6(b). We REVERSE.

[1] Herein, "Appellant" refers to "applicant" as defined in 37 C.F.R. § 1.42(a). Appellant identifies the real party in interest as Samsung Electronics Company, Ltd. Appeal Br. 3.

STATEMENT OF THE CASE

Introduction

The Application is directed to systems with the "capability to capture and reconstruct 3-D video" (Spec. ¶ 45), in which "cameras or depth cameras may be used to pull in objects from the user's 'real-world' environment into perspective" (Spec. ¶ 179). Claims 1–32 are pending; claims 1, 25, 28, and 31 are independent. Appeal Br. 15–20. Claim 1 is reproduced below for reference (emphasis added[2]):

1.
A method comprising
    presenting to a user, on a display of a head-worn client computing device, a three-dimensional video;
    receiving, at the head-worn client computing device, an indication that an event occurred in the user's physical environment;
    determining, based on the event, to present on the display at least a portion of the environment in which the event occurred;
    detecting, by an image sensor, an image of the environment in which the event occurred;
    extracting, from the image, the portion of the environment in which the event occurred;
    determining a location of the portion of the environment relative to the user's physical location in the physical environment; and
    presenting to the user, on the display of the head-worn client computing device, the image of the portion of the environment in which the event occurred, wherein the image of the portion of the environment is integrated with the three-dimensional video and appears in the three-dimensional video in a location relative to the user's location in the three-dimensional video that corresponds to the location of the portion of the environment relative to the user's physical location in the physical environment.

[2] We refer to the emphasized portions as the "integration limitation."

References and Rejections

The Examiner relies on the following prior art:

Name        Reference             Date
Kamuda      US 9,342,929 B2       May 17, 2016
Matsumura   US 2009/0110291 A1    Apr. 30, 2009
Iwanaka     US 2012/0154555 A1    June 21, 2012
Geisner     US 2013/0083173 A1    Apr. 4, 2013
Huston      US 2013/0222369 A1    Aug. 29, 2013
McCulloch   US 2013/0286004 A1    Oct. 31, 2013
Perez       US 2013/0293468 A1    Nov. 7, 2013
Benson      US 2013/0293723 A1    Nov. 7, 2013
Kobayashi   US 2014/0191942 A1    July 10, 2014
White       US 2015/0019983 A1    Jan. 15, 2015
Fermon      US 2015/0121287 A1    Apr. 30, 2015

Claims 1–6, 11, 12, and 25–32 are rejected under 35 U.S.C. § 103 as being unpatentable over Geisner, Benson, Matsumura, and Kamuda. Final Act. 3.

Claims 7–10 are rejected under 35 U.S.C. § 103 as being unpatentable over Geisner, Benson, Matsumura, Kamuda, and McCulloch. Final Act. 16, 18.

Claims 13–15 are rejected under 35 U.S.C. § 103 as being unpatentable over Geisner, Benson, Matsumura, Kamuda, and Kobayashi. Final Act. 19.

Claim 16 is rejected under 35 U.S.C. § 103 as being unpatentable over Geisner, Benson, Matsumura, Kamuda, and Iwanaka. Final Act. 21.

Claims 17–19 are rejected under 35 U.S.C. § 103 as being unpatentable over Geisner, Benson, Matsumura, Kamuda, and Fermon. Final Act. 22.

Claims 20–22 are rejected under 35 U.S.C. § 103 as being unpatentable over Geisner, Benson, Matsumura, Kamuda, and Perez. Final Act. 24.

Claims 23 and 24 are rejected under 35 U.S.C. § 103 as being unpatentable over Geisner, Benson, Matsumura, Kamuda, and Huston. Final Act. 25.

ANALYSIS

Appellant argues that the Examiner's rejection of independent claim 1 is in error because the cited references do not teach or suggest the claimed features of "specifically how the portion of the environment in which the event occurred is integrated within the video," which require that, "from the user's perspective, in the three-dimensional video[,] the location of the integrated portion relative to the user mirrors the location of the portion in the physical environment relative to the user." Appeal Br. 7, 8.
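For concreteness, the correspondence that the integration limitation recites can be sketched in a few lines of code. The sketch below is ours, not part of the prosecution record, and is not a construction of the claim; the Vec3 type, the function name, and the rigid one-to-one offset mapping are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __sub__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x - other.x, self.y - other.y, self.z - other.z)

    def __add__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

def integrate_event_region(user_physical_pos: Vec3,
                           event_region_pos: Vec3,
                           user_virtual_pos: Vec3) -> Vec3:
    """Return where the extracted environment image appears in the 3-D video.

    The offset of the event region from the user in the physical
    environment is preserved relative to the user's position in the
    virtual scene, which is the correspondence the limitation recites.
    """
    physical_offset = event_region_pos - user_physical_pos
    return user_virtual_pos + physical_offset

# Hypothetical example: an event detected 2 m in front of and 1 m to the
# left of the wearer is composited 2 m in front of and 1 m to the left
# of the user's viewpoint inside the 3-D video.
placement = integrate_event_region(
    user_physical_pos=Vec3(0.0, 0.0, 0.0),
    event_region_pos=Vec3(-1.0, 0.0, 2.0),
    user_virtual_pos=Vec3(10.0, 0.0, 5.0),
)
print(placement)  # Vec3(x=9.0, y=0.0, z=7.0)
```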
Particularly, Appellant contends that the combination of Benson and Kamuda fails to teach or suggest the "integration limitation":

    while user 54 in Kamuda is not even viewing an image of their physical environment, even if such an image were introduced to Benson and presented on Benson's display, there is no teaching in any reference to place such image of the environment anywhere other than where Benson places its image, i.e., as a picture-in-picture display. Neither that combination nor looking at one's own environment through a transparent display, as Kamuda discloses user 54 in Kamuda is doing, discloses the integration feature specifically recited by the claims.

Id. at 13.

There is no dispute on the record before us that Benson—although displaying a portion of a physical environment to a user based on an event—does not teach displaying an environment portion in a corresponding location relative to the user, as recited in the "integration limitation." See Final Act. 7; Appeal Br. 8; Benson ¶ 90. Nor does the Examiner rely on Geisner or Matsumura for these limitations. See Final Act. 7. Thus, the issue before us is whether Kamuda teaches or suggests the missing limitation. See Final Act. 8.

We are persuaded that Kamuda fails to render obvious an image "that corresponds to the location of the portion of the environment relative to the user's," as claimed. See Appeal Br. 11. Kamuda, as cited, teaches a transparent display that enables a user "to perceive a 3D holographic image located within the physical environment 32 that the user is viewing, while also allowing the user to view physical objects in the physical environment." Kamuda 3:30–35; Final Act. 8. Kamuda's transparent display teachings relate to displaying a virtual image overlaid on the physical environment; essentially, these teachings are the inverse of the claimed display of a portion of a physical image overlaid on a virtual environment. Compare, e.g., Kamuda Fig. 4B, with Fig. 37. The Examiner does not provide reasoning or analysis to explain why one of skill in the art would modify Kamuda's transparent display, as claimed.

The Examiner further cites Kamuda's teaching of a system that can "stitch together crowd-sourced geo-located structural data items received from multiple sources . . . to generate a 3D spatial shared world model 55 of the physical environment." Kamuda 5:8–30; Final Act. 9. Although Kamuda's world model teaches displaying location images in locations that correspond to the images' placements in a physical environment, Kamuda fails to teach integrating a portion of these images in a three-dimensional video relative to the user's location, as required by claim 1. See Kamuda 5:10–45. Rather, we agree with Appellant that the world model displays other users' locations, because Kamuda is directed to creating a "more comprehensive and/or higher fidelity" immersive view of different locations or times. Kamuda 7:31–32; see also Kamuda 8:23–27; Appeal Br. 10. Accordingly, we are persuaded that Kamuda's shared world model does not teach or suggest displaying an image "relative to the user's physical location," as claimed. The Examiner, moreover, does not provide reasoning or analysis to explain how combining Kamuda's shared world model with Benson's object display would yield the disputed limitations. See, e.g., Ans. 29–45.
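The distinction drawn above between Benson's fixed picture-in-picture placement and the claimed location-preserving integration can likewise be sketched. Again, this is only an illustration under assumed simplifications: the grid model, the PIP_CORNER slot, and the identity offset mapping are hypothetical and appear in no cited reference.

```python
from typing import List, Tuple

Coord = Tuple[int, int]      # (row, col) positions on a notional display grid
PIP_CORNER: Coord = (0, 0)   # a fixed picture-in-picture slot (hypothetical)

def overlay(base: List[List[str]], layer: str, at: Coord) -> List[List[str]]:
    """Composite `layer` into a copy of `base` at grid position `at`."""
    out = [row[:] for row in base]
    out[at[0]][at[1]] = layer
    return out

# A 3x4 virtual scene rendered on the head-worn display.
virtual_scene = [["." for _ in range(4)] for _ in range(3)]

# Benson, as the panel reads it: the environment image always lands in a
# fixed corner, regardless of where the event physically occurred.
benson_view = overlay(virtual_scene, "ENV", PIP_CORNER)

# Claim 1's integration limitation: the placement tracks the physical
# offset of the event region relative to the user (identity mapping here).
event_offset: Coord = (2, 3)  # where the event sits relative to the user
integrated_view = overlay(virtual_scene, "ENV", event_offset)

# Kamuda, by contrast, is the inverse arrangement: a virtual image is
# overlaid on the directly viewed physical environment, not a physical
# image on a virtual scene, so neither call above models its display.
```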
Without sufficient underpinnings supporting the finding of obviousness, we are constrained to reverse the Examiner's rejection of independent claim 1. Independent claims 25, 28, and 31 recite similar limitations and are rejected under the same grounds as claim 1. Accordingly, we do not sustain the Examiner's obviousness rejection of the independent claims, or of the claims dependent thereon.

DECISION SUMMARY

Claim(s) Rejected    35 U.S.C. §   Reference(s)/Basis                              Affirmed   Reversed
1–6, 11, 12, 25–32   103           Geisner, Benson, Matsumura, Kamuda                         1–6, 11, 12, 25–32
7–10                 103           Geisner, Benson, Matsumura, Kamuda, McCulloch              7–10
13–15                103           Geisner, Benson, Matsumura, Kamuda, Kobayashi              13–15
16                   103           Geisner, Benson, Matsumura, Kamuda, White                  16
17–19                103           Geisner, Benson, Matsumura, Kamuda, Fermon                 17–19
20–22                103           Geisner, Benson, Matsumura, Kamuda, Perez                  20–22
23, 24               103           Geisner, Benson, Matsumura, Kamuda, Huston                 23, 24
Overall Outcome                                                                               1–32

REVERSED