UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 14/830,133
FILING DATE: 08/19/2015
FIRST NAMED INVENTOR: Masaaki OKA
ATTORNEY DOCKET NO.: SCED 25.732(100809-00718)
CONFIRMATION NO.: 8019
EXAMINER: LE, SARAH
ART UNIT: 2619
NOTIFICATION DATE: 05/19/2021

____________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________

Ex parte MASAAKI OKA
____________

Appeal 2019-005511
Application 14/830,133
Technology Center 2600
____________

Before BRADLEY W. BAUMEISTER, ERIC B. CHEN, and JAMES B. ARPIN, Administrative Patent Judges.

CHEN, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant[1] appeals from the Examiner's decision to reject claims 1–8, all of the pending claims. We have jurisdiction under 35 U.S.C. § 6(b).

We reverse.

[1] We use the word "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42(a). Appellant identifies the real party in interest as Sony Computer Entertainment, Inc. (Appeal Br. 2.)

CLAIMED SUBJECT MATTER

The claimed subject matter is directed to an image processing apparatus. (Abstract.) Claim 1, reproduced below with a disputed limitation emphasized, is illustrative:
1. An image processing apparatus comprising:

an input device;

an image information acquiring part configured to acquire two-dimensional image information including a texture image;

a polygon model information acquiring part configured to acquire polygon model information representing a three-dimensional polygon model as an object on which to map the texture image, the polygon model information including position information about a plurality of vertexes, wherein the plurality of vertexes are arranged into an upper layer model and a lower layer model;

an adjustment information acquiring part for acquiring adjustment information instructions from a user using the input device, wherein the adjustment information instructions include instructions to move one or more of the vertexes of the polygon model, wherein a first instruction to move a first vertex also moves at least one other vertex not specified by the first instruction, and wherein a second instruction to move a second vertex in the lower layer model causes all the vertexes in the upper layer to remain at fixed positions;

a polygon model information updating part configured to update the position information about at least one and other vertexes included in the polygon model information, on a basis of predetermined relations reflecting vertex movement information acquired from the adjustment information instructions; and

a mapping part configured to map the texture image after each adjustment information instruction inputted by the user on the polygon model based on the updated polygon model information, wherein the mapping part is ended by a user instruction.
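The layered-vertex behavior recited in claim 1 can be restated as a short sketch. The following Python fragment is illustrative only: the class, its names, and the weighted relations are hypothetical stand-ins for the claimed "predetermined relations," not anything disclosed in the application itself.

```python
# Minimal sketch of the claimed layered vertex adjustment. All names are
# hypothetical; the weighted relations are an assumption for illustration.

class LayeredPolygonModel:
    def __init__(self, vertices, layers, relations):
        self.vertices = dict(vertices)    # vertex_id -> (x, y, z) position information
        self.layers = dict(layers)        # vertex_id -> "upper" or "lower"
        self.relations = dict(relations)  # vertex_id -> [(other_id, weight), ...]

    def move_vertex(self, vertex_id, delta):
        """Apply one adjustment instruction: move vertex_id by delta."""
        dx, dy, dz = delta
        x, y, z = self.vertices[vertex_id]
        self.vertices[vertex_id] = (x + dx, y + dy, z + dz)
        # A first instruction also moves vertexes it did not name,
        # via the predetermined relations ...
        for other_id, weight in self.relations.get(vertex_id, []):
            # ... but an instruction aimed at the lower layer model leaves
            # every upper-layer vertex at a fixed position.
            if self.layers[vertex_id] == "lower" and self.layers[other_id] == "upper":
                continue
            ox, oy, oz = self.vertices[other_id]
            self.vertices[other_id] = (ox + weight * dx,
                                       oy + weight * dy,
                                       oz + weight * dz)

# Example: moving lower-layer vertex 2 drags related lower-layer vertex 3
# with it, while upper-layer vertex 1 stays put.
model = LayeredPolygonModel(
    vertices={1: (0.0, 0.0, 0.0), 2: (1.0, 0.0, 0.0), 3: (2.0, 0.0, 0.0)},
    layers={1: "upper", 2: "lower", 3: "lower"},
    relations={2: [(1, 0.5), (3, 0.5)]},
)
model.move_vertex(2, (0.0, 1.0, 0.0))
assert model.vertices[1] == (0.0, 0.0, 0.0)  # upper layer remains fixed
assert model.vertices[3] == (2.0, 0.5, 0.0)  # related vertex moved too
```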
REFERENCES

Petrov et al., US 2002/0050988 A1 (May 2, 2002).
Marugame, US 2006/0285758 A1 (Dec. 21, 2006).
Suzuki et al., Interactive Mesh Dragging with Adaptive Remeshing Technique, 16 The Visual Computer 159–176 (2000).
Sato et al., Realistic 3D Facial Animation Using Parameter-based Deformation and Texture Remapping, Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (2004).

REJECTIONS

The Examiner rejects claims 1–8 under 35 U.S.C. § 103 as obvious over the combined teachings of Sato, Petrov, Marugame, and Suzuki.

OPINION

We are persuaded by Appellant's arguments (Appeal Br. 7) that the Examiner improperly combined the teachings of Sato, Petrov, Marugame, and Suzuki.

The Examiner finds that Marugame's action area division unit 2, which allows a user to add expression to a face image, teaches or suggests the limitation "an adjustment information acquiring part for acquiring adjustment information instructions from a user using the input device." (Final Act. 10–11.) The Examiner concludes that "[i]t would have been obvious . . . to apply concepts of user's mouse dragging operation as seen in [Marugame] into user input instruction to map texture image to 3D polygonal model of Sato . . . to move the position of each control point to a desired position." (Final Act. 13; see also Ans. 7.)

We do not agree with the Examiner's findings and conclusions. Sato relates to "constructing 3D human facial images which maintains the characteristics of video input image." (Abstract.) In particular, Sato explains the following:

Our method combines the advantages of both the model-based and image-based techniques, and is carried out in the following steps . . .

Step 1: Orthogonal (front and profile) images of an expressionless human face are used to construct the structure of a personalized 3D facial model. Its vertices are deformable by extensively using a muscle-based model developed for facial expression recognition. The aforementioned front image is texture mapped onto the model in its initial stage. This process is carried out offline.

Step 2: Facial feature points are tracked from input video frames, captured from a camera placed in front of a person's face. The tracking results are used to modify the input video frame to reduce effects of facial position and rotation.

Step 3: Feature points located on the contours of facial components are displaced quantitatively based on actual facial muscle movements. The vertices composing the 3D facial model mentioned in Step 1 are displaced accordingly.

(§ 2, col. 2 (footnote omitted).)

Sato further explains:

Since human faces are relatively similar in shape and structure, an individual's best-fit model could be obtained by deforming the vertices of a generic face model rather than creating one from scratch. The parametric facial model created by Parke et al. was modified accordingly to create the basic 3D polygonal model, consisting of 466 vertices and 822 polygons. Additional polygons are added in eye and mouth regions by connecting the vertices located on the lip and eye contours.

(§ 3.1, col. 1 (footnotes omitted).)

Marugame relates to "a shape deforming device." (¶ 1.) Marugame's Figure 1 illustrates entire face area 1000, which includes action area 1001 subjected to deformation. (¶ 68.) Marugame explains that "[t]he shape deforming device deforms a human expressionless face image into an expressive action face image," such that "input device KEY is composed of, for example, a keyboard and a mouse, and is used for accepting various kinds of data and directions provided from the user." (¶ 80; see also fig. 2.) Moreover, Marugame explains that "the action area division unit 2 moves the position of each control point to a desired position in accordance with user's mouse dragging operation or the like, and also makes minor adjustment." (¶ 89.)

Although the Examiner proposes to modify Sato's teachings with Marugame's, the Examiner has not provided an articulated reasoning with some rational underpinning as to why one of ordinary skill in the art would modify Sato's teachings. See In re Kahn, 441 F.3d 977, 988 (Fed. Cir. 2006) ("[R]ejections on obviousness grounds cannot be sustained by mere conclusory statements; instead, there must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness"); see also KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 418 (2007). In particular, Sato is based upon a "parametric facial model created by Parke et al. [that] was modified accordingly to create the basic 3D polygonal model, consisting of 466 vertices and 822 polygons" (§ 3.1, col. 1), such that "vertices are deformable by extensively using a muscle-based model developed for facial expression recognition" (§ 2, col. 2). Thus, the Examiner has not provided sufficient articulated reasoning as to why one of ordinary skill would modify Sato's teachings to incorporate an input device to add expression manually to a human face image, when Sato already provides for a muscle-based model for facial expression recognition using a preexisting parametric facial model.
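The contrast underlying this reasoning, automatic muscle-driven displacement in Sato as against user-driven control-point dragging in Marugame, can be restated schematically. The Python sketch below is a loose illustration only; every name and signature is hypothetical and taken from neither reference.

```python
# Schematic contrast between the two deformation approaches at issue.
# All names are hypothetical. Positions are (x, y, z) tuples keyed by
# vertex id.

def _add(p, d):
    return tuple(a + b for a, b in zip(p, d))

def sato_style_update(vertices, muscle_params, muscle_model):
    """Sato-like flow: displacements come from a muscle-based model driven
    by tracked facial features; no user intervention moves the vertices."""
    return {vid: _add(pos, muscle_model(vid, muscle_params))
            for vid, pos in vertices.items()}

def marugame_style_update(vertices, control_point, drag_delta):
    """Marugame-like flow: a control point is moved to a desired position
    in accordance with the user's mouse dragging operation."""
    updated = dict(vertices)
    updated[control_point] = _add(updated[control_point], drag_delta)
    return updated

# Example with a stand-in muscle model that lifts every vertex slightly.
verts = {1: (0.0, 0.0, 0.0), 2: (1.0, 0.0, 0.0)}
auto = sato_style_update(verts, {"jaw": 0.2},
                         lambda vid, p: (0.0, p["jaw"], 0.0))
manual = marugame_style_update(verts, 1, (0.0, 0.5, 0.0))
```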
Accordingly, we are persuaded by Appellant's following argument:

As stated above, the facial tracking of the user's face is used to "modify the input video frame" and the initial wireframe is created using an image "of an expressionless human face." Further, the vertices of the wireframe are moved in the animation "by extensively using a muscle-based model developed for facial expression." Thus, Sato makes it explicit that a muscle-based model is used to modify the wire frame based on images of the user captured by the camera and requires no user intervention to modify the vertices.

(Appeal Br. 7 (emphases omitted).)

Thus, the Examiner improperly combines the teachings of Sato, Petrov, Marugame, and Suzuki to reject claims 1–8 under 35 U.S.C. § 103. Accordingly, we do not sustain the rejection of claims 1–8 under 35 U.S.C. § 103.

CONCLUSION

We reverse the Examiner's decision rejecting claims 1–8 under 35 U.S.C. § 103.

DECISION

In summary:

Claim(s) Rejected   35 U.S.C. §   Reference(s)/Basis               Affirmed   Reversed
1–8                 103           Sato, Petrov, Marugame, Suzuki              1–8

REVERSED