Shopper Scientist LLC, Appeal No. 2019-005590 (P.T.A.B. Oct. 2, 2020)

UNITED STATES PATENT AND TRADEMARK OFFICE

Application No.: 14/473,954          Filing Date: 08/29/2014
First Named Inventor: Herb Sorensen  Attorney Docket No.: SHP14301
Confirmation No.: 5939
Examiner: KYEREME-TUAH, AKOSUA P.    Art Unit: 3623
Notification Date: 10/02/2020        Delivery Mode: ELECTRONIC

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte HERB SORENSEN

Appeal 2019-005590
Application 14/473,954
Technology Center 3600

Before DEBRA K. STEPHENS, JEFFREY S. SMITH, and JASON V. MORGAN, Administrative Patent Judges.

Opinion for the Board filed by Administrative Patent Judge STEPHENS.
Opinion dissenting filed by Administrative Patent Judge MORGAN.

STEPHENS, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant[1] appeals from the Examiner's decision to reject claims 1, 3, 8, 10, and 15–24. See Final Act. 1. We have jurisdiction under 35 U.S.C. § 6(b).

[1] We use the word Appellant to refer to "applicant" as defined in 37 C.F.R. § 1.42. Appellant identifies the real party in interest as Shopper Scientist LLC (Appeal Br. 3).

We AFFIRM.

CLAIMED SUBJECT MATTER

The claims are directed to product exposure analysis in a shopping environment. Claim 1, reproduced below, is illustrative of the claimed subject matter:
1. A method for analyzing product exposure to one or more shoppers in a physical shopping environment, the method comprising:
    at a processor of a computing device:
        developing a three-dimensional virtual reality model of the physical shopping environment using planogram data indicating a product location in the physical shopping environment of each of a plurality of products, and storing the model in non-volatile memory associated with the processor of the computing device;
        receiving a plurality of images of shoppers traveling through the physical shopping environment captured via at least two overhead cameras aimed at a shopping region, wherein the at least two overhead cameras include:
            a first overhead camera located on a first side of the shopping region; and
            a second overhead camera located on a second side of the shopping region, wherein the second side is opposite the first side, and wherein each of the shoppers within the shopping region is captured by both the first and the second overhead cameras so that a face of each shopper within the shopping region is visible to at least one of the first and second overhead cameras;
        computing an estimated field of view of each shopper within the shopping region captured in the plurality of images by:
            determining a location of a facial feature in the plurality of images, wherein the facial feature includes one or more eyes of a shopper, indicating the face of the shopper;
            determining a head pose based on the location of the facial feature, the head pose including a position and an orientation; and
            assigning the estimated field of view based on the position and the orientation of the head pose in which the estimated field of view defines a three-dimensional volume or region of the physical shopping environment that is aligned with and surrounds the head pose;
        based on the three-dimensional virtual reality model, computing that a product location associated with a target product is within the estimated field of view for each shopper; and
        generating a visibility metric for the target product at the product location based on an extent to which the product location exists within each estimated field of view, wherein the visibility metric includes an average amount of time in which the product location associated with the target product lies within the estimated field of view of each shopper.

REFERENCES

The prior art relied upon by the Examiner is:

Name             Reference             Date
Sorensen '085    US 2002/0178085 A1    Nov. 28, 2002
Kilner           US 2005/0197923 A1    Sept. 8, 2005
Gruttadauria     US 2008/0043013 A1    Feb. 21, 2008
Sorensen '756    US 2008/0306756 A1    Dec. 11, 2008
Hu               US 2012/0139832 A1    June 7, 2012

REJECTIONS

Claims Rejected        35 U.S.C. §    Reference(s)/Basis
1, 3, 8, 10, 15, 16    103            Gruttadauria, Hu, Kilner, and Sorensen '085
17–24                  103            Gruttadauria, Hu, Kilner, Sorensen '085, and Sorensen '756

We have only considered those arguments that Appellant actually raised in the Brief. Arguments Appellant could have made but chose not to make in the Brief have not been considered and are deemed to be waived (see 37 C.F.R. § 41.37(c)(1)(iv)).

OPINION

35 U.S.C. § 103 over Gruttadauria, Hu, Kilner, and Sorensen '085: Claims 1, 3, 8, 10, 15, and 16

We adopt the findings of fact made by the Examiner in the Final Action and the Examiner's Answer. We agree with the conclusions made by the Examiner for the reasons given in the Examiner's Answer. We highlight the following for emphasis.
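[Editor's note: as technical orientation only, the computation recited in claim 1 can be expressed as a short sketch. The Python below is an illustrative reading of the claim language, not the Appellant's implementation; the function names, the conical shape chosen for the field of view, and all numeric defaults are hypothetical assumptions.]

    import numpy as np

    def in_estimated_fov(head_pos, head_dir, product_pos,
                         half_angle_deg=30.0, max_range_m=5.0):
        """Test whether a product location lies inside a conical estimated
        field of view aligned with and surrounding the head pose."""
        to_product = np.asarray(product_pos, float) - np.asarray(head_pos, float)
        dist = np.linalg.norm(to_product)
        if dist == 0.0 or dist > max_range_m:
            return False
        direction = np.asarray(head_dir, float)
        direction = direction / np.linalg.norm(direction)
        # Angle between the line of sight and the product bearing, via dot product.
        return float(np.dot(to_product / dist, direction)) >= np.cos(np.radians(half_angle_deg))

    def visibility_metric(per_shopper_poses, product_pos, frame_dt_s=1.0):
        """Average time per shopper that the product location lies within the
        estimated field of view, from per-frame (position, direction) samples."""
        times = [sum(in_estimated_fov(pos, d, product_pos) for pos, d in poses) * frame_dt_s
                 for poses in per_shopper_poses]
        return sum(times) / len(times) if times else 0.0

[The per-frame head poses in this sketch would come from the facial-feature localization step the claim recites; that step itself is outside the sketch.]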
"assigning the estimated field of view . . ."

Appellant contends the cited combination of Gruttadauria, Hu, Kilner, and Sorensen '085 fails to teach or suggest the limitations of independent claims 1, 8, 15, and 16, and specifically, fails to disclose:

    assigning the estimated field of view based on the position and the orientation of the head pose in which the estimated field of view defines a three-dimensional volume or region of the physical shopping environment that is aligned with and surrounds the head pose,

as recited in independent claim 1 and as commensurately recited in independent claims 8, 15, and 16 (Appeal Br. 12).

"estimated field of view"

Appellant asserts "the estimated field of view, which is assigned based on the position and orientation of the head pose, defines a 3D volume or region of the physical shopping environment that is aligned with and surrounds the head pose" (id. at 12 (citing Spec. Fig. 4)). Appellant has not identified that any of these terms are defined explicitly in the Specification, but identifies paragraph 16 of the Specification as "describ[ing] a non-limiting example" (Appeal Br. 13 (emphasis added); see generally Appeal Br.). This paragraph describes the "estimated field of view 28" as being "represented here as a probability ellipse with a point of focus at the line of sight of the shopper, forming an elliptical cone, but it may also be a circular cone, two merged cones, a pyramid, a flat-top cone, etc." (Spec. ¶ 16; Appeal Br. 13). Thus, an "estimated field of view" is described broadly enough to encompass everything within a user's sight or only part of what is in a user's sight. Nothing in the description requires that the user be parallel to the display or that the field of view include all of the user's view; rather, the Specification describes that the field of view "may be aligned with the head pose 40" but does not require it (id. ¶ 20).

Appellant also states that Figure 4 is "an example in which an estimated field of view 28 is assigned for shopper 34," which Appellant contends is not disclosed by the prior art (Appeal Br. 12). For the reasons discussed below, Appellant has not persuaded us of error in the Examiner's findings that the combination of Gruttadauria, Hu, and Sorensen '085 teaches or suggests a field of view, as recited in the disputed claims.

Appellant further asserts "the recited 'field of view' differs from both the head pose and a person's focus" (id. at 13). We agree with Appellant that the three terms have differing meanings, but Appellant has not persuaded us the Examiner erred by interpreting these elements as though they were not three different elements.

Alleged that "field of view" necessarily known to the computer

Appellant further asserts "[i]n each of Gruttadauria and Hu, a person's field of view within a virtual environment or other computer-generated environment is necessarily known to the computer in order for the computer to display graphical content representing that environment via the display device" (id. at 14 (emphasis added)). Here, Appellant concedes that Gruttadauria and Hu each disclose the claimed "field of view," but alleges that neither Gruttadauria nor Hu determines the field of view. Thus, according to Appellant, "[b]ecause the person's field of view within the computer-generated environment is already known to the computer, the computer need not determine a person's field of view within that environment based on a head pose of that person" (id.). We do not find this argument persuasive.
More specifically, we are not persuaded the person's field of view within a virtual environment is necessarily known in either Gruttadauria or Hu. Although the graphical content of the virtual environment may be known, Appellant has not persuaded us the estimated field of view of the user would have been known absent user interaction. Indeed, the Examiner finds Gruttadauria teaches "a head mounted display . . . configured using eye-tracking technology that records what the eyes of the participant are focused on at any given point" (Ans. 4 (citing Gruttadauria ¶ 47)). Paragraph 47 of Gruttadauria further teaches:

    [s]uitable eye-tracking tools may include headgear with eye tracking features, where one camera views both the region the consumer is facing and another imaging device observes the motions of the wearer's eye to determine the direction of the eye. The data can then be assimilated to show which part of the field of view was being looked at by the wearer of the headgear.

Appellant does not rebut the Examiner's findings regarding paragraph 47 of Gruttadauria. Thus, we disagree with Appellant's contention that Gruttadauria teaches the person's field of view is already known. Rather, we find Gruttadauria teaches determining the region (field of view) and eye direction and assimilating that data to show which part of the field of view is being looked at by the user.

Similarly, we find Hu teaches selecting a region a user is looking at based on the head pose (Hu ¶ 8 (the "device can track the user's attention area based on an estimated head pose"); see also id. ¶ 45 (describing detecting the user's face based on the image data from camera 216 and determining where within display area 202 the user's face is directed, thereby estimating at which one of the selectable regions the user was looking)).

Further, the Examiner finds that Sorensen '085 determines a field of view in a shopping environment using head pose and line of sight, and that a person of ordinary skill would have modified the combined teachings of Gruttadauria, Hu, and Kilner to determine a field of view in a shopping environment as taught by Sorensen '085 for the benefit of obtaining accurate information concerning a customer's shopping habits, in order to develop a metric to effectively organize products to increase sales as taught by Sorensen '085 (Ans. 5–6 (citing Sorensen '085 Figs. 3, 4, ¶¶ 3, 6, 8, 22, 32)). Appellant does not address the Examiner's finding that Sorensen '085 determines a field of view, nor does Appellant address the combined teachings of the references.

We additionally note the Examiner relies on Kilner to teach the physical shopping environment (Final Act. 10–11). In particular, the Examiner finds Kilner teaches cameras located on different sides of a shopping region: "receiving a plurality of images of shoppers traveling through the physical shopping environment captured via at least two overhead cameras aimed at a shopping region . . . a first overhead camera located on a first side of the shopping region, and a second overhead camera located on a second side of the shopping region" (Final Act. 10–11 (citing Kilner ¶¶ 22, 33, 34, Fig. 7)).

Thus, Appellant is arguing the references individually while the Examiner is relying on the combination of references (see In re Keller, 642 F.2d 413, 426 (CCPA 1981) ("one cannot show non-obviousness by attacking references individually where . . .
the rejections are based on combinations of references")). In particular, the Examiner relies on Kilner to teach noting user responses and interactions using cameras in a physical environment (Final Act. 10–11; Kilner ¶¶ 13, 22, 33, 34, 88, Fig. 7) and Sorensen '085 to teach an "estimated field of view defines a [. . .] region of the physical shopping environment" (Final Act. 12).

    The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art.

(See In re Keller, 642 F.2d 413, 425 (CCPA 1981).) Appellant does not rebut the Examiner's finding that the combined teachings of the references would have suggested estimating the field of view. Thus, Appellant has not persuaded us the combination of Gruttadauria, Hu, Kilner, and Sorensen '085 teaches the field of view is necessarily known.

Alleged Lack of Teaching of "assigning an estimated field of view based on the position and the orientation of the head pose"

Next, Appellant argues "Hu discloses assessing a user's face and head pose so that a computer can determine which graphical element a user is looking towards to enable automatic selection of that graphical element" (Appeal Br. 15 (citing Hu ¶ 8)). According to Appellant, "Hu is concerned with determining a user's focus, rather than a field of view of the user" (id.).

Initially, we note Appellant points to Figure 4 as disclosing the recited field of view (Appeal Br. 5, 12–13, 16). "Figure 4 shows an image of the shopper with an estimated field of view" (Spec. ¶ 9). Figure 4 "shows the image 22 of the shopper 34 with the estimated field of view 28" (Spec. ¶ 19). Estimated field of view 28 is computed "using computer vision techniques to determine a location of one or more facial features 38 in the plurality of images 22, indicating a face of the shopper 34" (id.).

Appellant identifies, among others, paragraph 16 as disclosing the recited "computing an estimated field of view" (Appeal Br. 5). Paragraph 16 of the Specification incorporates by reference Sorensen '756 (Spec. ¶ 16), which was published Dec. 11, 2008, well before the filing date of the current application. Sorensen '756 discloses a view tracking system including a camera coupled with the head of a user and oriented so as to point in a direction aligned with the orientation of the user's head (Sorensen '756, Abstract, Fig. 1, ¶ 35). Sorensen '756 describes estimating a field of view around a focal point of a user's estimated line of sight (id. ¶ 45). Figure 7a illustrates an "example field of view[ ]" (id. ¶ 31): an estimated field of view defined by probability ellipse 104 surrounding focal point 106 in a captured image 100 of the shopper view tracking and analysis system 10 (id. ¶¶ 45, 49, 55). Figure 7b illustrates an "example schematic map[ ] that may be generated from the respective field[ ] of view using the view tracking and analysis system" (id. ¶ 31); it is a schematic floor plan illustrating an imputed position 120 of the shopper (Sorensen '756 ¶ 55).
Sorensen '756 describes "[t]he size of the objects in the estimated field of view 32 contained within probability ellipse 104 (shown in Figure 7a) may indicate that [a] shopper is a first distance 132 from a display shelf 134" (id.). Sorensen '756 further teaches a shopper's path 128 and estimated line of sight 124 (id. ¶ 52).

We agree with Appellant that Sorensen '756 discloses "assigning the estimated field of view based on the position and the orientation of the head pose," as claimed (Appeal Br. 5, 12–13; Spec. ¶ 16). However, Sorensen '756 is prior art because it was published well before the filing date of the current application. Thus, Appellant admits "assigning the estimated field of view based on the position and the orientation of the head pose," as recited in claim 1, was within the background knowledge possessed by an artisan of ordinary skill. This admission from Appellant's Specification is binding on Appellant for purposes of our obviousness analysis (see Koninklijke Philips v. Google, 948 F.3d 1330, 1339 (Fed. Cir. 2020) (citing PharmaStem Therapeutics, Inc. v. ViaCell, Inc., 491 F.3d 1342, 1362 (Fed. Cir. 2007) ("Admissions in the specification regarding the prior art are binding on the patentee for purposes of later inquiry into obviousness."))). Given Appellant's admission, we determine that "assigning the estimated field of view based on the position and the orientation of the head pose," as claimed, would have been known to a person of ordinary skill (see id. at 1337 (citing Randall Mfg. v. Rea, 733 F.3d 1355, 1362–63 (Fed. Cir. 2013) (determining, in an ex parte reexamination, that "[a]s KSR established, the knowledge of such an artisan is part of the store of public knowledge that must be consulted when considering whether a claimed invention would have been obvious."))). Accordingly, we are not persuaded by Appellant's contention that this limitation was unknown in the prior art.

Moreover, as cited by Appellant, Hu teaches "based on the estimated user's head pose, the selectable region nearest to the user's likely attention area is automatically activated" (Hu ¶ 8). Hu further teaches determining the user's head pose and the display area where the user's face is directed, to determine the user's likely attention area, and activating the selectable region nearest to the user's likely attention area (Hu ¶¶ 45–46). Thus, Hu teaches using the head pose to activate (assign) the selectable region (field of view). Therefore, we are not persuaded by Appellant's contention that Hu does not assign the estimated field of view based on the position and orientation of the head pose. Whether Hu is also concerned with the user's focus does not negate our finding that Hu also teaches assigning the estimated field of view as recited.

Additionally, the Examiner further points to Sorensen '085 as disclosing "head pose and rotating head to determine field of view" (Ans. 5). The Examiner finds Sorensen '085 teaches "field of view 48 is typically calculated by determining an angle θ, which represents the angular breadth of the shopper's field of view" (Ans. 5–6 (citing Sorensen '085 ¶ 32, Figs. 3, 4); Final Act. 12).
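[Editor's note: the angular-breadth calculation the Examiner attributes to Sorensen '085 can be illustrated in two dimensions. This is a hedged sketch of that plan-view geometry only; the function name, coordinate convention, and the 120° default are assumptions, not values drawn from Sorensen '085.]

    import math

    def within_angular_breadth(shopper_xy, sight_deg, product_xy, theta_deg=120.0):
        """Plan-view test: does the product's bearing from the shopper fall
        within the angular breadth theta centered on the line of sight?"""
        bearing = math.degrees(math.atan2(product_xy[1] - shopper_xy[1],
                                          product_xy[0] - shopper_xy[0]))
        offset = (bearing - sight_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        # With theta composed of equal constituent angles theta1 and theta2 on
        # either side of the line of sight, the test reduces to |offset| <= theta/2.
        return abs(offset) <= theta_deg / 2.0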
Appellant does not rebut the Examiner's findings that the combined teachings of the references would have suggested, to a person of ordinary skill in the art at the time of invention, "assigning an estimated field of view based on the position and the orientation of the head pose" as claimed. Thus, Appellant has not persuaded us the combination of Gruttadauria, Hu, and Sorensen '085 fails to teach or suggest "assigning an estimated field of view based on the position and the orientation of the head pose," as recited in the claim.

Alleged Lack of Teaching of "assigning the estimated field of view . . . in which the estimated field of view defines a three-dimensional volume or region of the physical shopping environment"

Appellant further argues "Hu does not disclose determining an estimated field of view that defines a three-dimensional volume or region of a physical shopping environment" because "Hu shows exemplary selectable regions 204, 206, and 208 as being two-dimensional" (Appeal Br. 15). However, the Examiner relies on Gruttadauria to teach "computing an estimated field of view" (Final Act. 6). The Examiner further finds Gruttadauria teaches a "three dimensional view of [a] shopping environment showing a field of view [being] looked at" (Ans. 4). Indeed, as noted by the Examiner, Gruttadauria teaches a "three-dimensional image of the virtual reality shopping environment" (Gruttadauria ¶¶ 6, 7, 26, 47). Appellant does not rebut the Examiner's finding that Gruttadauria teaches a three-dimensional region of a shopping environment. The Examiner further finds that Sorensen '085 teaches a physical shopping environment (Ans. 5–6). Appellant additionally does not rebut this finding, and Appellant does not rebut the Examiner's findings regarding the combined teachings of the references. Thus, we are not persuaded the combination of references fails to teach the "field of view defines a three-dimensional volume or region of the physical shopping environment," as recited in claim 1.

Appellant further argues "Kilner does not disclose 'assigning the estimated field of view based on the position and the orientation of the head pose in which the estimated field of view defines a three-dimensional volume or region of the physical shopping environment that is aligned with and surrounds the head pose,'" as recited in claim 1 (Appeal Br. 16). According to Appellant, "Kilner is concerned with concepts that are, at most, analogous to a person's focus or attention, rather than computing an estimated field of view" (id.). Appellant further contends Kilner's determination of a change in orientation of the detected face is based on a direct line of sight rather than a three-dimensional volume or region that is aligned with and surrounds a head pose (id.). The Examiner, however, does not rely on Kilner to teach this limitation, nor to teach "computing an estimated field of view" (Final Act. 6, 8–9; Ans. 3–6). Again, Appellant is arguing the references individually. Appellant does not rebut the Examiner's findings regarding the combined teachings of the references. Accordingly, Appellant's arguments are not persuasive.

Thus, Appellant has not persuaded us of Examiner error in rejecting independent claim 1.

Obvious to Combine Teachings

Appellant argues an ordinarily skilled artisan would not have found it obvious to combine the teachings of Kilner and Gruttadauria (Appeal Br. 17).
According to Appellant, Gruttadauria is "used to predict shoppers' responses to shopping environments that do not yet physically exist" while Kilner is "used to measure the behavior of shoppers in existing physical shopping environments" (id.). Further, Appellant contends an ordinarily skilled artisan would not have been motivated "to apply the system and methods of facial-recognition-based advertising in real-world environments as disclosed by Kilner to the predictive, virtual reality simulation of Gruttadauria" (id.). Therefore, Appellant contends, because "the disclosure of Kilner addresses a problem that does not occur when using the virtual store environment of Gruttadauria," an ordinarily skilled artisan would not have found it obvious to combine the teachings of Gruttadauria with Kilner (id. at 18).

Again, we are not persuaded by Appellant's contentions. The Examiner has articulated reasoning with a rational underpinning as to why an ordinarily skilled artisan would have been motivated to combine the teachings and suggestions of Gruttadauria, Hu, and Kilner (Final Act. 11–12). The Examiner provides additional reasons for combining the teachings of Gruttadauria, Hu, Kilner, and Sorensen '085 in the Answer (Ans. 5–7). Appellant does not rebut the Examiner's additional reasons.

Moreover, Appellant's reasoning seems to be based on incorporating Kilner as a whole into the system of Gruttadauria by an ordinarily skilled artisan. Again, this is not the proper test (Keller, 642 F.2d at 425). Appellant has proffered insufficient evidence or argument to persuade us of error in the Examiner's findings and conclusions, and specifically, in the conclusion that the claimed invention is a combination of old elements in a similar field of endeavor, each performing the same function it did separately. Indeed, both Gruttadauria and Kilner teach analyzing user response: Gruttadauria measures responses of a user interacting with an environment (Gruttadauria ¶ 47, claim 1) and Kilner determines a user's face orientation (Kilner ¶ 61). Appellant has not persuaded us an ordinarily skilled artisan would not have been motivated to combine Kilner's teachings of analyzing user responses in an environment with Gruttadauria's teachings of analyzing user responses in an environment; both environments are converted to data that is analyzed.

Accordingly, Appellant has proffered insufficient evidence or argument to persuade us an ordinarily skilled artisan would not have found it obvious to combine the teachings and suggestions of Gruttadauria, Hu, Kilner, and Sorensen '085.

Conclusion

Appellant has not persuaded us the Examiner erred in determining the combination of Gruttadauria, Hu, Kilner, and Sorensen '085 renders obvious:

    assigning the estimated field of view based on the position and the orientation of the head pose in which the estimated field of view defines a three-dimensional volume or region of the physical shopping environment that is aligned with and surrounds the head pose,

as recited in independent claim 1. Independent claims 8, 15, 16, and 20 are not separately argued, instead relying on the arguments set forth for independent claim 1. Accordingly, for the reasons set forth with respect to claim 1, we are not persuaded the Examiner erred in rejecting these claims for obviousness over Gruttadauria, Hu, Kilner, and Sorensen '085. Appellant does not separately argue dependent claims 3 and 10 (Appeal Br.
20); thus, these claims fall with claims 1 and 8, respectively.

35 U.S.C. § 103 over Gruttadauria, Hu, Kilner, Sorensen '085, and Sorensen '756: Claims 17–24

"weighting the visibility metric . . ."

Appellant argues Sorensen '756 fails to disclose "weighting the visibility metric for the target product in which the visibility metric has a greater weighting if the target product is located closer to the head pose and the visibility metric has a lesser weighting if the target product is located further from the head pose," as recited in claim 17 (Appeal Br. 22). According to Appellant, Sorensen '756 "conflates the concept of distance along a line of sight with the express language of claim 17" (id.).

We are not persuaded Sorensen '756 fails to teach determining whether the target product is closer to or further from the shopper. The Examiner relies on Sorensen '756 to teach the disputed limitation (Final Act. 44–45). Specifically, Sorensen '756 teaches determining the distance a shopper is from a display shelf:

    FIG. 7a illustrates a captured image 100 of the shopper view tracking and analysis system 10 shown in FIG. 1, and FIG. 7b is a schematic floor plan illustrating an imputed position 120 of the shopper. The size of the objects in the estimated field of view 32 contained within probability ellipse 104 may indicate that a shopper is a first distance 132 from a display shelf 134. By way of comparison, FIG. 8a illustrates another example captured image 100 in which the objects in the estimated field of view 32 contained within probability ellipse 104 are larger. Thus, it may be determined by the tracking and analysis system 10 that the shopper is a second, shorter distance 136 from the display shelf 134 as illustrated in FIG. 8b.

(Sorensen '756 ¶ 55). The cited portion of Sorensen '756 describes that the size of an object in an estimated field of view may indicate a shopper is a particular distance from a display shelf (id.).

Appellant contends Figure 4 of the Specification describes the relationship between the estimated field of view and head pose, where the estimated field of view is defined as a three-dimensional volume or region aligned with and surrounding the head pose (Appeal Br. 22). Thus, according to Appellant, "[t]he proximity of the target product to the head pose within the context of claim 17 is with respect to a plane that is orthogonal to the head pose" (id.). Appellant therefore asserts Sorensen '756's line of sight differs from the "closer to"/"further from" limitations of claim 17 (id. at 17–18).

We are not persuaded by Appellant's contention. Claim 17 recites determining the target product's distance to the head pose. Sorensen '756 describes determining the distance between the target product and the shopper (head pose) (Sorensen '756 ¶ 55). Contrary to Appellant's assertion, claim 17 does not recite that the proximity of the target product to the head pose is with respect to a plane that is orthogonal to the head pose; thus, we do not import these limitations into the claim (see, e.g., SuperGuide Corp. v. DirecTV Enters., Inc., 358 F.3d 870, 875 (Fed. Cir. 2004) ("Though understanding the claim language may be aided by the explanations contained in the written description, it is important not to import into a claim limitations that are not a part of the claim")).
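[Editor's note: for illustration, the distance-based weighting claim 17 recites (greater weight when the target product is closer to the head pose, lesser when farther) might take a form like the following minimal sketch. The inverse-distance decay is an assumed example; neither the claim nor the cited references prescribe this particular function.]

    def weighted_visibility(base_metric, distance_m, scale_m=1.0):
        """Scale a visibility metric by a weight that decreases monotonically
        with the product's distance from the head pose (assumed form)."""
        weight = 1.0 / (1.0 + distance_m / scale_m)  # 1.0 at zero distance, decaying with range
        return base_metric * weight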
Indeed, Appellant has not identified any disclosure in the Specification that explicitly defines any term so as to require the distance to be measured with respect to a plane orthogonal to the head pose. Accordingly, Appellant has not persuaded us the combination of Gruttadauria, Hu, Kilner, Sorensen '085, and Sorensen '756 fails to teach claim 17.

Alleged Teaching Away

Appellant argues Hu teaches away from combination with Sorensen '756 because Hu discloses "[w]hile techniques that use a mouse or other like user pointing device may prove easier for users selecting between several selectable regions, they can become burdensome when the display device(s) present a large GUI interface" (Appeal Br. 23 (citing Hu ¶ 6)). According to Appellant, Hu teaches "allowing the user to select an element of a GUI interface by looking at the GUI element, rather than by physically manipulating a separate pointing device," in contrast to Sorensen '756, which teaches use of a tracking device (id.). Thus, Appellant argues, "[i]f the tracking device of Sorensen '756 were used with the systems and methods disclosed by Hu, the very purpose of not requiring a pointing device . . . would not be achieved" (id.).

We are not persuaded by Appellant's argument. "[T]he fact that there may be reasons a skilled artisan would prefer one [alternative] over the other does not amount to a teaching away from the lesser preferred but still workable option" (Bayer Pharma AG v. Watson Labs., Inc., 874 F.3d 1316, 1327 (Fed. Cir. 2017); see also DePuy Spine, Inc. v. Medtronic Sofamor Danek, Inc., 567 F.3d 1314, 1327 (Fed. Cir. 2009) ("A reference does not teach away [. . .] if it merely expresses a general preference for an alternative invention")). Here, although Hu expresses a preference for not physically manipulating a large GUI interface with a separate pointing device, Hu also recognizes that the pointing device "may prove easier for users" (Hu ¶ 6). Hu thus discusses alternatives and expresses a general preference. Moreover, Appellant appears to be arguing Sorensen '756 would need to be wholly inserted into the system of Hu, while the Examiner is relying on specific teachings of Sorensen '756. Thus, we are not persuaded by Appellant's contentions.

Conclusion

Appellant has not persuaded us the Examiner erred in determining claim 17 is obvious over the combination of Gruttadauria, Hu, Kilner, Sorensen '085, and Sorensen '756. Appellant argues claims 18–24 on the basis of their respective independent claims and claim 17 (Appeal Br. 24–25). For the reasons set forth supra, Appellant has not persuaded us of error in the Examiner's rejection. Accordingly, we are not persuaded the Examiner erred in rejecting claims 17–24 for obviousness over the combination of Gruttadauria, Hu, Kilner, Sorensen '085, and Sorensen '756.

CONCLUSION

The Examiner's rejection is AFFIRMED. More specifically, we sustain the Examiner's rejection of claims 1, 3, 8, 10, and 15–24 for obviousness under 35 U.S.C. § 103.

DECISION SUMMARY

Claims Rejected        35 U.S.C. §    Reference(s)/Basis                                            Affirmed               Reversed
1, 3, 8, 10, 15, 16    103            Gruttadauria, Hu, Kilner, and Sorensen '085                   1, 3, 8, 10, 15, 16
17–24                  103            Gruttadauria, Hu, Kilner, Sorensen '085, and Sorensen '756    17–24
Overall Outcome                                                                                     1, 3, 8, 10, 15–24

TIME PERIOD FOR RESPONSE

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a).
See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED

MORGAN, Administrative Patent Judge, dissenting.

I write separately because I respectfully disagree with the Majority's decision to affirm the Examiner's 35 U.S.C. § 103 rejections of claims 1, 3, 8, 10, and 15–24. Rather, I agree with Appellant that the Examiner's findings fail to show that the combination of the cited references teaches or suggests at least some of the recitations of claim 1, including, for example, "assigning the estimated field of view based on the position and the orientation of the head pose in which the estimated field of view defines a three-dimensional volume or region of the physical shopping environment that is aligned with and surrounds the head pose," where the position and orientation of the head pose are determined "based on the location of [a] facial feature" located in a plurality of images captured by first and second overhead cameras. Appeal Br. 14–16.

The claimed assignment of an estimated field of view is illustrated in the Specification's Figure 4, which "shows an image of [a] shopper with an estimated field of view." Spec. ¶ 9. Head pose 40—which is based on facial features such as the shopper's depicted eye 38—is depicted as a vector, originating from the head of shopper 34, that indicates the shopper's line of sight. Id. ¶¶ 16, 19–20, Fig. 3. "[E]stimated field of view 28 is represented . . . as a probability ellipse with a point of focus at the line of sight, forming an elliptical cone." Id. ¶ 16.

The Examiner finds Hu teaches the assignment of an estimated field of view "based on the position and the orientation of [a] head pose." Final Act. 7; id. at 9 (citing Hu ¶¶ 10, 20, 45). The Majority agrees with the Examiner and characterizes the selectable region of Hu as teaching the claimed field of view. Slip op. at *13. Appellant contends the Examiner erred because "Hu is concerned with the user's focus, in contrast to the user's field of view." Appeal Br. 15. That is, Hu determines "a point of focus within [a] known display area rather than computing an initially unknown field of view." Id.

I find Appellant's arguments persuasive because they accord with Hu's teaching of detecting a user's face based on image data from camera 216 to determine where within display area 202 the user's face is directed. Hu ¶ 45. Hu "thereby estimate[s] which one of the selectable regions that the user was probably looking at when the image was captured by camera 216." Id. Hu's selectable regions are depicted in Hu Figure 2, which "is a block diagram depicting a system for use in assessing the pose of a person's head." Hu ¶ 19. It includes a depiction of "display area 202 having a plurality of exemplary selectable regions 204, 206 and 208." Id. ¶ 44. These exemplary selectable regions vary greatly both in shape and size. For example, rectangular selectable region 204 is approximately five times taller than elliptical selectable region 208.

I do not agree with the Majority's characterization of Hu's selectable region as representing the claimed assigned field of view. Slip op. at *13. Rather, Hu's selectable regions represent regions within the display that the user can select, not the field of view of the user.
For example, a user focusing on selectable region 208 would likely also have selectable region 204 (to the left of selectable region 208) and selectable region 206 (above selectable region 208) in the user's field of view. This accords with the common, everyday experience of a user using a device with a display area such as that found in Hu. That is, typically a user maintains the entire display area, or at least a significant portion of the display area, within his or her field of view—even while the user focuses on a smaller region of the display area. An individual with normal or correctable eyesight reading this dissenting opinion on such a display can quickly and easily ascertain just how difficult it is to focus on a particular part of the display while excluding from one's field of view, or moving to one's periphery, even a trivial portion of the rest of the display, let alone a substantial portion of the rest of the display (at least not without purposeful and unnatural contortions of one's position with the express goal of severely limiting one's field of view). Thus, Hu's selectable regions 204, 206, and 208 merely represent smaller regions of display area 202 within which a user may direct his or her focus. Hu's selectable regions do not represent the user's field of view. Compare Hu Fig. 2 with Spec. Fig. 4.

In the Final Action, the Examiner finds that Gruttadauria teaches "computing an estimated field of view of each shopper within [a] shopping region captured in [a] plurality of images by determining [the] location of a facial feature in the plurality of images wherein the facial feature includes one or more eyes of a shopper." Final Act. 6 (citing Gruttadauria ¶¶ 47, 49). In the Answer, the Examiner further relies on Gruttadauria as also teaching the claimed estimated field of view assignment. Ans. 4 (citing Gruttadauria ¶¶ 6–7, 26, 47). That is, the Examiner relies on Gruttadauria to cure the deficiencies of Hu alleged by Appellant. The Majority agrees with the Examiner, finding that Gruttadauria teaches a region representing the claimed field of view. Slip op. at *7. Appellant contends that Gruttadauria does not teach or suggest the claimed assignment of an estimated field of view because in Gruttadauria "the person's field of view within the computer-generated environment is already known to the computer, [thus] the computer need not determine a person's field of view within that environment based on a head pose of that person." Appeal Br. 14.

I do not agree that the Examiner's findings show that Gruttadauria cures the noted deficiencies of Hu. Although Gruttadauria teaches determining "the region the consumer is facing," Gruttadauria teaches making this determination using a headgear-mounted first camera, while a second camera "observes the motions of the wearer's eye." Gruttadauria ¶ 47. Relying on Gruttadauria's headgear-mounted cameras as teaching assigning an estimated field of view based on the position and orientation of a head pose is problematic given that the claimed invention relies on images captured by "overhead cameras aimed at a shopping region," not on cameras that the consumers wear. The Examiner relies on Gruttadauria's teaching that a participant may not know that monitoring is occurring as suggesting the use of overhead cameras. Final Act. 5 (citing Gruttadauria ¶¶ 39, 49).
But Gruttadauria uses covert monitoring to analyze a participant's facial response "for subtle cues (eye motion, action of various muscles in the face, etc.)." Gruttadauria ¶ 49. Thus, Gruttadauria at best suggests the use of overhead cameras as a covert "consumer response monitoring" alternative to more overt tools for monitoring "a participant's physiological responses, such as heart rate, breathing, and other factors that may provide subtle information about emotional responses." Id. ¶ 48. That is, Gruttadauria fails to teach or suggest using images from overhead cameras in assigning an estimated field of view based on the position and orientation of a head pose.

Furthermore, Gruttadauria's headgear-mounted first camera, in determining the region the consumer is facing (i.e., the purported assigned estimated field of view), also does not estimate a field of view by "determining a location of a facial feature in the plurality of images." Although the second camera observes the motions of the wearer's eyes (i.e., determines a location of a facial feature), the second camera merely "determines the direction of the eye" to be assimilated with data regarding the region the consumer is facing "to show which part of the field of view was being looked at by the wearer of the headgear." Id. Therefore, not only are neither of Gruttadauria's cameras overhead mounted as claimed, but the Examiner's findings do not show that images captured by Gruttadauria's first camera determine a location of facial features in a plurality of images or that images captured by Gruttadauria's second camera are used in computing or assigning an estimated field of view in the manner claimed.

Although not discussed by Appellant, the Examiner, or the Majority, the disclosure of Velazquez (US 2004/0212778 A1; published Oct. 28, 2004) (incorporated by reference in Gruttadauria paragraph 46) further shows that Gruttadauria's first camera does not teach determining a location of facial features. In particular, Velazquez illustrates a vision system in its Figure 4, which depicts camera 74 integrated with glasses 72. Velazquez ¶ 44. Velazquez Figure 5 is an enlarged view of Velazquez Figure 4 "illustrating in greater detail a pair of glasses that may be used in [a] vision system" (id. ¶ 27), albeit with the addition of microphone 76 (id. ¶ 42). In Velazquez, camera 74 records use of the product P by consumer 200 from the perspective of consumer 200. Id. ¶ 62, Fig. 4. Camera 74 is not oriented at, and thus does not record, the facial features of consumer 200. Id. Figs. 4, 5. In disclosing the use of a first camera mounted on headgear to determine the region a consumer is facing, Gruttadauria clearly teaches using a camera such as camera 74 of Velazquez. Gruttadauria ¶¶ 46–47. This further shows that Gruttadauria's first camera does not capture an image that includes a facial feature, the location of which is used to compute or assign an estimated field of view in the manner claimed. Therefore, Velazquez provides further evidence showing that the Examiner's findings do not show that Gruttadauria cures the noted deficiencies of Hu.
Although the Examiner did not rely on Sorensen '756 in rejecting claim 1, the Majority finds that Sorensen '756 shows that the disputed limitations were within the background knowledge possessed by an ordinarily skilled artisan. Slip op. at *12–13. I do not agree with the Majority that Sorensen '756 cures the noted deficiencies of Gruttadauria and Hu.

Similar to Gruttadauria, Sorensen '756 teaches use of a "camera coupled with a head of a shopper . . . to capture one or more images" for analysis "to determine an estimated shopper field of view." Sorensen '756, Abstract. This positioning of the camera on the shopper's head is illustrated in Figure 1 of Sorensen '756, which illustrates a view tracking and analysis system. Sorensen '756 ¶ 28. Eye camera 28 is "mounted in the body 16 of the device 12, and [is] oriented so as to point in a direction aligned with the orientation of the user's head 24." Id. ¶ 35. Therefore, like Gruttadauria's cameras, the camera of Sorensen '756 is not overhead mounted like the claimed first and second cameras.

Furthermore, like Gruttadauria's first camera, the camera of Sorensen '756 does not capture images that can be used to determine the locations of facial features in a plurality of images. Rather, the camera is pointed away from the user's facial features so as to capture images from the perspective of the user. This can be seen in, for example, Figure 2 of Sorensen '756, which illustrates a field of view captured by the head-mounted camera depicted in Figure 1. Sorensen '756 ¶ 29. "An estimated field of view 32 may be defined surrounding the estimated focal point [of the user]," center 106. Id. ¶ 45. Specifically, "the estimated field of view 32 is defined to be bounded by the area within the probability area 104," which is depicted as a "probability ellipse." Id. Sorensen '756 does not depict any facial features in the captured image of Figure 2, or in any of the captured images depicted. Sorensen '756, Figs. 3–5, 7a, 8a, 9a. Therefore, like Gruttadauria, Sorensen '756 does not teach estimating a field of view by "determining a location of a facial feature in [a] plurality of images" for use in "assigning the estimated field of view [of a shopper] based on the position and the orientation of the head pose" in the manner of claim 1.

In rejecting claim 1 as obvious, the Examiner relied on Sorensen '085 to teach "indicating a product location in [a] physical shopping environment," estimating a field of view defining a "region of the physical shopping environment," and "generating a visibility metric for the target product . . . ." Final Act. 12. Only in the Answer does the Examiner—in passing and without a specific citation—purport that Sorensen '085 "discloses head pose and rotating head [sic] to determine field of view." Ans. 5. Rather, the Examiner merely cites to specific portions of Sorensen '085 for the "physical shopping environment" recitation, which Appellant does not dispute Sorensen '085 teaches. Id. (citing Sorensen '085 ¶¶ 6, 8, 22, 32). The most seemingly pertinent disclosure of Sorensen '085 that the Examiner cites is the determination of "an angle θ, which represents the angular breadth of the shopper's field of view." Id. at 6 (quoting Sorensen '085 ¶ 32).
But Sorensen '085 does not teach the use of overhead cameras in "assigning the estimated field of view based on the position and the orientation of [a] head pose" determined "based on the location of the facial feature" in images captured by such cameras. Rather, Sorensen '085 relies on tracking the shopper path using, for example, a transmitter on a shopping cart emitting a tracking signal at regular four-second intervals. Sorensen '085 ¶¶ 25, 40, Fig. 2. This relationship between the tracked shopper path and the estimated field of view is illustrated in Figure 3 of Sorensen '085, which "is a schematic view showing the relationship between a shopper path and lines of sight from a plurality of positions along the shopper path." Id. ¶ 13. "[F]ield of view 48 is typically calculated by determining an angle θ, which represents the angular breadth of the shopper's field of view" and "is composed of constituent angles θ1 and θ2, which are typically equal." Id. ¶ 32; id. Fig. 4. Line of sight 46 is calculated to "simulate[] a direction in which a shopper may be looking as the shopper travels along shopper path 38." Id. ¶ 41 (emphasis added). The calculated line of sight 46 "is typically tangent to the shopper path and facing in the direction of a velocity vector at that point on the shopper path." Id. ¶ 31.

Thus, Sorensen '085 relies on a simplified model that assumes that the shopper is facing in the direction that the transmitter is traveling—an assumption that belies the typical shopping experience, where one looks around in directions other than strictly and directly forward, as one typically spends significant shopping time moving alongside shelves rather than moving directly toward them. Such coarse speculation as to the direction the shopper may be facing is a far cry from the claimed invention's use of images captured by overhead cameras to determine the "position and . . . orientation of [a] head pose" to assign a field of view that, while estimated, would be significantly more precise than the rough calculation of Sorensen '085.

Kilner is the only cited reference that neither the Examiner nor the Majority posits would cure alleged deficiencies in the other cited references. Slip op. at *16; Final Act. 10–12; Ans. 6 (Kilner "was used [f]or other limitations in the claim"). I agree that Kilner's display arrangement and camera teachings, even though they include detecting a human face (Kilner ¶ 44), do not teach or suggest the claimed estimated field of view assignment.

With these noted deficiencies in mind, I respectfully disagree with the Majority that the Examiner has articulated persuasive reasoning having a rational underpinning showing that the invention of claim 1 would have been obvious over the cited references. Slip op. at *17 (citing Final Act. 11–12; Ans. 5–7). Rather, I am unable to discern how, but for impermissible reliance on the claimed invention to chart a path backward to the cited prior art, the claimed invention would have been obvious to an artisan of ordinary skill. "[C]harting a path to the claimed [invention] by hindsight is not enough to prove obviousness."
"Any [invention] may look obvious once someone has made it and found it to be useful, but working backwards from that [invention], with the benefit of hindsight, once one is aware of it does not render it obvious." Sanofi-Aventis U.S., LLC v. Dr. Reddy's Labs., Inc., 933 F.3d 1367, 1375 (Fed. Cir. 2019) (quoting Amerigen Pharm. Ltd. v. UCB Pharma GmbH, 913 F.3d 1076, 1089 (Fed. Cir. 2019)).

"Rigid preventative rules that deny factfinders recourse to common sense" are unnecessary to guard against impermissible hindsight reasoning and inconsistent with binding case law. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 421 (2007). But factfinders must still be aware "of the distortion caused by hindsight bias and must be cautious of arguments reliant upon ex post reasoning." Id. (citing Graham v. John Deere Co. of Kansas City, 383 U.S. 1, 36 (1966)).

In this case, I am unable to ascertain, without reliance on impermissible hindsight, why the invention of claim 1 would have been obvious in light of the cited references. Even in light of the purported admissions and waivers by Appellant, I do not find the Examiner's findings and analysis persuasive. Accordingly, I respectfully disagree with the decision of the Majority to affirm the Examiner's rejection. Instead, I would reverse the Examiner's 35 U.S.C. § 103 rejections of claim 1, and of claims 3, 8, 10, and 15–24, which have similar recitations to claim 1.