Ex parte Bai et al., Appeal 2014-006408, Application 12/721,801 (P.T.A.B. Apr. 26, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 12/721,801
FILING DATE: 03/11/2010
FIRST NAMED INVENTOR: Fan Bai
ATTORNEY DOCKET NO.: P009291-US-NP/RD/MJL
CONFIRMATION NO.: 1843
CORRESPONDENT: MacMillan, Sobanski & Todd, LLC - GM, One Maritime Plaza, 720 Water Street, 5th Floor, Toledo, OH 43604
EXAMINER: ANDERSON II, JAMES M
ART UNIT: 2486
MAIL DATE: 04/26/2016
DELIVERY MODE: PAPER

____________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD

____________________

Ex parte FAN BAI, WENDE ZHANG, and CEM U. SARAYDAR

____________________

Appeal 2014-006408
Application 12/721,801
Technology Center 2400

____________________

Before JEFFREY S. SMITH, JON M. JURGOVAN, and MICHAEL M. BARRY, Administrative Patent Judges.

JURGOVAN, Administrative Patent Judge.

DECISION ON APPEAL

Appellants seek review under 35 U.S.C. § 134(a) from a rejection of claims 1–21. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

STATEMENT OF THE CASE

The claims are directed to video sharing in a vehicle-to-entity communication system. (Spec. Abstract.) Claims 1 and 12, reproduced below, are illustrative of the claimed subject matter:

1. A method for scene information sharing in a vehicle-to-entity communication system, the method comprising the steps of:

capturing scene data by an image capture device of an event in a vicinity of a source entity, wherein the image capture device is associated with the source entity;

determining a spatial relationship between a location corresponding to the captured event and a location of a remote vehicle, wherein the source entity and associated image capture device is at a location remote from a location of the remote vehicle;

determining a temporal relationship between a time-stamp of the captured scene data and a current time;

determining a utility value as a function of the spatial relationship and the temporal relationship;

determining a network utilization parameter of a communication network for transmitting and receiving the scene data;

applying a selected level of compression to the captured scene data as a function of the utility value and available bandwidth; and

transmitting the compressed scene data from the source entity to the remote vehicle.

12. A vehicle-to-entity communication system having adaptive scene compression for video sharing between a source entity and a remote vehicle, the system comprising:

an image capture device of the source entity for capturing video scene data of an event in a vicinity of the source entity;

an information utility module for determining a utility value that is a function of a spatial relationship between a location corresponding to the captured event and a location of the remote vehicle and a temporal relationship between a time-stamp of the captured scene data and a current time;

a network status estimation module for determining a network utilization parameter of a communication network;

a processor for applying a selected amount of compression to the captured scene data as a function of the utility value and the network utilization parameter of the communication network; and

a transmitter for transmitting the compressed scene data to the remote vehicle.
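For context only, the compression-selection logic recited in claims 1 and 12 can be sketched in a few lines of Python. The linear distance and age weighting, the bandwidth thresholds, and every name below are illustrative assumptions; neither the claims nor the record specifies any particular formula.

    import math
    import time

    def utility_value(event_location, vehicle_location, capture_timestamp,
                      max_range_m=500.0, max_age_s=60.0):
        """Combine the spatial and temporal relationships into one utility score.

        The linear decay with distance and age is an assumed weighting; the
        claims only require the utility value to be a function of both
        relationships.
        """
        distance_m = math.dist(event_location, vehicle_location)   # spatial relationship
        age_s = max(0.0, time.time() - capture_timestamp)          # temporal relationship
        spatial = max(0.0, 1.0 - distance_m / max_range_m)
        temporal = max(0.0, 1.0 - age_s / max_age_s)
        return spatial * temporal                                   # value in [0, 1]

    def select_compression_level(utility, available_bandwidth_kbps):
        """Map the utility value and available bandwidth to a compression level.

        Level 1 is the lightest compression; level 3 is the heaviest.
        """
        if utility > 0.75 and available_bandwidth_kbps > 1000:
            return 1   # nearby, fresh scene data on an uncongested link
        if utility > 0.25:
            return 2   # moderately useful data or a moderately loaded link
        return 3       # stale or distant data, or a congested link

A transmitter built along these lines would compress stale or far-away scene data more aggressively, which is the behavior the claims describe at a functional level.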
REJECTIONS

Claims 1, 2, 6, and 7 stand rejected under 35 U.S.C. § 103(a) based on Franks (US 2006/0082730 A1; Apr. 20, 2006) and Cohen-solal (US 2003/0052911 A1; Mar. 20, 2003). (Final Act. 10–12.)

Claims 3 and 4 stand rejected under 35 U.S.C. § 103(a) based on Franks, Cohen-solal, and Fong (US 8,174,375 B2; May 8, 2012). (Final Act. 12–13.)

Claim 5 stands rejected under 35 U.S.C. § 103(a) based on Franks, Cohen-solal, Fong, and Tokoro (US 7,689,359 B2; Mar. 30, 2010). (Final Act. 14–15.)

Claim 8 stands rejected under 35 U.S.C. § 103(a) based on Franks, Cohen-solal, Williams (US 7,394,877 B2; July 1, 2008), and Zhang (US 2008/0059861 A1; Mar. 6, 2008). (Final Act. 15–16.)

Claim 9 stands rejected under 35 U.S.C. § 103(a) based on Franks, Cohen-solal, Williams, and Davis (US 2008/0317111 A1; Dec. 25, 2008). (Final Act. 16.)

Claim 10 stands rejected under 35 U.S.C. § 103(a) based on Franks, Cohen-solal, and Jannard (US 8,174,560 B2; May 8, 2012). (Final Act. 17.)

Claim 11 stands rejected under 35 U.S.C. § 103(a) based on Franks, Cohen-solal, and Kostrzewski (US 6,167,155; Dec. 26, 2000). (Final Act. 17–18.)

Claims 12–15 and 17–19 stand rejected under 35 U.S.C. § 103(a) based on Fong and Lu (US 2009/0045323 A1; Feb. 19, 2009). (Final Act. 18–22.)

Claim 16 stands rejected under 35 U.S.C. § 103(a) based on Fong, Lu, and Tokoro. (Final Act. 23.)

Claim 20 stands rejected under 35 U.S.C. § 103(a) based on Fong, Lu, Williams, and Zhang. (Final Act. 23–24.)

Claim 21 stands rejected under 35 U.S.C. § 103(a) based on Fong, Lu, and Kostrzewski. (Final Act. 24–25.)

ANALYSIS

We have reviewed the Examiner’s rejections in light of Appellants’ arguments that the Examiner has erred. We disagree with Appellants’ conclusions. We adopt as our own the findings and reasons set forth by the Examiner in the Final Office Action (2–25) from which this appeal is taken and the reasons set forth in the Examiner’s Answer (14–29) in response to Appellants’ Appeal Brief. We highlight and address specific findings and arguments for emphasis as follows.

Regarding claim 4, Appellants argue Fong only describes the basic concept of displaying images captured by the image capture device, and that the combination of Fong with the other references fails to describe the claimed feature of “applying image abstraction to the capture[d] scene data for extracting a still image.” (App. Br. 15.) We interpret claims in light of the specification. In re Am. Acad. of Sci. Tech. Ctr., 367 F.3d 1359, 1369 (Fed. Cir. 2004); In re Van Geuns, 988 F.2d 1181 (Fed. Cir. 1993). The Specification (¶ 19) discloses:

Image abstraction includes extracting a still image from either the compressed video scene data or a still scene image may be extracted directly from captured video scene data. Image abstraction may further include decreasing the resolution and compression quality of the still image.

(Emphasis added). Thus, paragraph 19 of Appellants’ Specification does not require image abstraction to do anything more than extract a still image. The Examiner finds, and we agree, that Fong teaches a video frame is captured and displayed on an LCD screen, which is “image abstraction” as described by the examples in Appellants’ Specification. In other words, a video sequence is composed of still images displayed one after the other. Extracting a still image for display necessarily entails “image abstraction” within the meaning of claim 4, when read in light of the examples given in Appellants’ Specification. Thus, we do not find Appellants’ argument persuasive of Examiner error.

Regarding claim 5, Appellants argue Tokoro’s edge detection does not involve generating a feature sketch from a still image as part of an image abstraction. (App. Br. 16.) The Specification (¶ 19) states that “a feature sketch of the extracted image may be generated through scene understanding techniques.” No further explanation of “feature sketch” or “scene understanding techniques” is given in the Specification. This fact suggests Appellants regard “feature sketch” and “scene understanding techniques” to be well known in the art. Moreover, Tokoro’s edge extraction certainly suggests to a person of ordinary skill the ability to generate a sketch or outline of objects in an image (see Ans. 21, citing Tokoro 8:1–5). Accordingly, we are not persuaded by this argument.
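To make the “image abstraction” and “feature sketch” terminology discussed above concrete, the operations paragraph 19 describes (extracting a still frame, reducing its resolution and compression quality, and deriving an edge outline) could look roughly like the following sketch. It assumes the OpenCV library; the function names, parameters, and thresholds are illustrative choices, not anything taken from the Specification or the cited references.

    import cv2  # OpenCV; assumed available for illustration only

    def abstract_frame(video_path, frame_index=0, scale=0.5, jpeg_quality=60):
        """Extract a still image from captured video and reduce its resolution
        and compression quality, in the spirit of the quoted paragraph 19."""
        capture = cv2.VideoCapture(video_path)
        capture.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
        ok, frame = capture.read()
        capture.release()
        if not ok:
            raise IOError("could not read frame %d from %s" % (frame_index, video_path))
        small = cv2.resize(frame, None, fx=scale, fy=scale)
        ok, jpeg_bytes = cv2.imencode(".jpg", small,
                                      [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
        return small, jpeg_bytes

    def feature_sketch(still_image, low_threshold=100, high_threshold=200):
        """Generate an edge-based outline ("feature sketch") of a still image."""
        gray = cv2.cvtColor(still_image, cv2.COLOR_BGR2GRAY)
        return cv2.Canny(gray, low_threshold, high_threshold)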
Regarding claim 8, Appellants argue Franks, Cohen-solal, Williams, and Zhang fail to describe that the network utilization parameter utilizes a performance history of the communication network, where the performance history is based on a function of a packet delivery ratio, a latency, a jitter, and a throughput of previous broadcast messages. (App. Br. 16–17.) While acknowledging the claimed elements are known terms, Appellants argue the Examiner used impermissible hindsight to pick and choose elements from the prior art. The Examiner cited Zhang (¶ 8), which states:

A good error resilient application system is Quality of Service (QoS) adaptive. QoS may be used to describe the overall performance of a communication system. To be QoS adaptive is to trade off different QoS requirements. For wireless video applications, QoS may be measured by the reliability, latency and bandwidth usage, which are in terms of Peak Signal to Noise Ratio (PSNR), Packet Loss Rate (PLR), Delay, Delay Variation/Jitter and Bit Throughput Rate (BTR).

The person of ordinary skill in the art would appreciate the benefit of combining this teaching of resilient communication from Zhang with the communication methods taught by the other references, yielding the predictable result of adaptive error resilience for streaming video transmission over a wireless network, as suggested by the title of Zhang. Thus, we agree with the Examiner that no impermissible hindsight was used in the rejection. See In re McLaughlin, 443 F.2d 1392, 1395 (CCPA 1971) (“[a]ny judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning, but so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made and does not include knowledge gleaned only from applicant’s disclosure, such a reconstruction is proper.”).
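For illustration, a performance history of the kind recited in claim 8, built from the packet delivery ratio, latency, jitter, and throughput of previous broadcast messages, might be tracked as a running aggregate such as the sketch below. The class name, the normalization constants, and the way the four measures are folded into a single score are assumptions made for readability; they are not drawn from the claims, Zhang, or the Examiner’s findings.

    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class NetworkPerformanceHistory:
        """Running history of previously broadcast messages."""
        delivery_ratios: list = field(default_factory=list)   # delivered / sent, in [0, 1]
        latencies_ms: list = field(default_factory=list)
        jitters_ms: list = field(default_factory=list)
        throughputs_kbps: list = field(default_factory=list)

        def record(self, delivery_ratio, latency_ms, jitter_ms, throughput_kbps):
            # One entry per previously broadcast message.
            self.delivery_ratios.append(delivery_ratio)
            self.latencies_ms.append(latency_ms)
            self.jitters_ms.append(jitter_ms)
            self.throughputs_kbps.append(throughput_kbps)

        def utilization_parameter(self):
            """Collapse the four histories into one score (higher = healthier link).

            The normalization constants are arbitrary illustrative choices.
            """
            if not self.delivery_ratios:
                return 0.0
            reliability = mean(self.delivery_ratios)
            latency_penalty = min(1.0, mean(self.latencies_ms) / 500.0)
            jitter_penalty = min(1.0, mean(self.jitters_ms) / 100.0)
            throughput = min(1.0, mean(self.throughputs_kbps) / 2000.0)
            return max(0.0, reliability * throughput
                            - 0.25 * (latency_penalty + jitter_penalty))

Such a score could then be consulted alongside the utility value when selecting a compression level, in the manner of the earlier sketch.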
Claims 14–16 and 20, though dependent from claim 12 rather than claim 1, contain features similar to those previously discussed. Accordingly, our analysis applies to these claims as well.

CONCLUSION

To support a rejection under § 103(a), “there must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness.” KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 418 (2007) (quoting In re Kahn, 441 F.3d 977, 988 (Fed. Cir. 2006)). On this record, we find the Examiner’s reasoning and underpinning adequate to support the conclusion of obviousness. Furthermore, in light of the claim scope, we find the claimed invention is no more than the predictable use of prior art elements according to their established functions. Id. at 417. Appellants have provided no persuasive evidence to show that combining features from the references was “uniquely challenging or difficult for one of ordinary skill in the art.” See Leapfrog Enters., Inc. v. Fisher-Price, Inc., 485 F.3d 1157, 1162 (Fed. Cir. 2007) (citing KSR, 550 U.S. at 419). Thus, we are not persuaded of reversible Examiner error, and we sustain the Examiner’s rejections.

DECISION

For the above reasons, the Examiner’s decision to reject claims 1–21 is affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED