Ex parte Lakshmanan et al., Appeal 2016-007667, Application 13/900,770 (P.T.A.B. Aug. 23, 2017)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte GEETIKA T. LAKSHMANAN and MARTIN OBERHOFER
(Applicant: INTERNATIONAL BUSINESS MACHINES CORP.)

Appeal 2016-007667
Application 13/900,770
Technology Center 3600

Before BRUCE R. WINSOR, KARA L. SZPONDOWSKI, and DAVID J. CUTITTA II, Administrative Patent Judges.

WINSOR, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from the final rejection of claims 1–20, which constitute all the claims pending in this application. We have jurisdiction under 35 U.S.C. § 6(b). We affirm and designate our affirmance as a new ground of rejection within the provisions of 37 C.F.R. § 41.50(b) (2015).

1 The real party in interest identified by Appellants is the Applicant, International Business Machines Corp. App. Br. 1.

STATEMENT OF THE CASE

Appellants describe their invention as "relat[ing] generally to workflows, and more particularly to providing a best practice workflow to aid the user in completing a project (e.g., applying for a job) that is constantly updated based on feedback from other users." Spec. ¶ 1.

Claim 1, which is representative, reads as follows:
1. A computer program product embodied in a computer readable storage medium for providing a best practice workflow to aid a user in completing a project, the computer program product comprising the programming instructions for:

receiving, by a best practice workflow system over a network, practice instances for completing said project from a first plurality of users, wherein each of said practice instances comprises a graph of nodes and directed edges, wherein each of said nodes represents a task in a process for completing said project and each of said directed edges illustrates an execution sequence between two tasks;

receiving, by said best practice workflow system over said network, crowdsourcing feedback concerning a plurality of rankings for each of said practice instances from a second plurality of users;

computing, by said best practice workflow system, a single ranking for each of said practice instances based on said received rankings from said second plurality of users; and

generating, by said best practice workflow system, said best practice workflow for completing said project based on practice instances whose single ranking exceeds a threshold, wherein said best practice workflow comprises a plurality of tasks in said process for completing said project and edges between tasks indicating control flow between said tasks.

Claims 1–20 stand rejected under 35 U.S.C. § 101 as not being directed to patent-eligible subject matter. See Final Act. 9–13; see also id. at 2–6.

Rather than repeat the arguments here, we refer to the Briefs ("App. Br." filed Feb. 29, 2016; "Reply Br." filed Aug. 9, 2016) and the Specification ("Spec." filed May 23, 2013) for the positions of Appellants, and the Final Office Action ("Final Act." mailed Nov. 12, 2015) and Examiner's Answer ("Ans." mailed June 30, 2016) for the reasoning, findings, and conclusions of the Examiner. Only those arguments actually made by Appellants have been considered in this decision. Arguments that Appellants did not make in the Briefs have not been considered and are deemed to be waived. See 37 C.F.R. § 41.37(c)(1)(iv).

ISSUE

The issue presented by Appellants' arguments is whether the Examiner errs in finding claims 1–20 are directed to a patent-ineligible abstract idea.

ANALYSIS

We review the appealed rejection for error based upon the issues identified by Appellants, and in light of the arguments and evidence produced thereon. Ex parte Frye, 94 USPQ2d 1072, 1075 (BPAI 2010) (precedential). Patent eligibility is a question of law that is reviewable de novo. Dealertrack, Inc. v. Huber, 674 F.3d 1315, 1333 (Fed. Cir. 2012). To be statutorily patentable, the subject matter of an invention must be a "new and useful process, machine, manufacture, or composition of matter, or [a] new and useful improvement thereof." 35 U.S.C. § 101.

We have reviewed the Examiner's findings, conclusions, and reasoning (Final Act. 2–6, 9–13; Ans. 3–17) in light of Appellants' arguments and contentions (App. Br. 3–18; Reply Br. 2–19). We agree with the Examiner's findings, conclusions, and reasoning and, except as set forth below, we adopt them as our own. The following discussion, findings, and conclusions are for emphasis.
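For orientation, the steps recited in claim 1 (receiving practice instances as graphs of tasks, receiving crowdsourced rankings, computing a single ranking per instance, and thresholding) can be rendered as a minimal sketch. This is purely illustrative: the claim recites no computational details, so the simple average and the graph-union merge below are assumptions made for illustration, not the method disclosed in the application.

```python
# Illustrative sketch only. Claim 1 recites no computational details, so the
# averaging step and the graph-union merge are assumptions, not the
# applicants' disclosed algorithm.
from dataclasses import dataclass, field

@dataclass
class PracticeInstance:
    # Each node is a task; each directed edge (a, b) means task a precedes task b.
    tasks: set[str]
    edges: set[tuple[str, str]]
    rankings: list[float] = field(default_factory=list)  # crowdsourced feedback

def single_ranking(instance: PracticeInstance) -> float:
    # "Computing ... a single ranking ... based on said received rankings":
    # a plain average is one obvious reading of the limitation.
    return sum(instance.rankings) / len(instance.rankings) if instance.rankings else 0.0

def best_practice_workflow(instances: list[PracticeInstance], threshold: float):
    # "Generating said best practice workflow ... based on practice instances
    # whose single ranking exceeds a threshold": here, the union of the
    # qualifying instances' tasks and control-flow edges.
    tasks: set[str] = set()
    edges: set[tuple[str, str]] = set()
    for inst in instances:
        if single_ranking(inst) > threshold:
            tasks |= inst.tasks
            edges |= inst.edges
    return tasks, edges
```

On this reading, two instances ranked {4, 5} and {2, 3}, with a threshold of 3.5, would contribute only the first instance's tasks and edges to the generated workflow.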
Statutory Class

We first look to see whether the inventions described in the claims fall into one of the four statutory classes of subject matter prescribed by § 101, i.e., "process, machine, manufacture, or composition of matter."

Claims 1–10

During prosecution, claims are to be given their broadest reasonable interpretation consistent with the Specification, In re American Academy of Science Tech Center, 367 F.3d 1359, 1364 (Fed. Cir. 2004), without importing limitations into the claims from the Specification, In re Van Geuns, 988 F.2d 1181, 1184 (Fed. Cir. 1993). Our reviewing court has held "[a] transitory, propagating signal . . . is not a 'process, machine, manufacture, or composition of matter.' [These] four categories define the explicit scope and reach of subject matter patentable under 35 U.S.C. § 101; thus, such a signal cannot be patentable subject matter." In re Nuijten, 500 F.3d 1346, 1357 (Fed. Cir. 2007). Therefore, a claim directed to computer instructions embodied in a transitory signal is not statutory under § 101. Moreover, "[a] claim that covers both statutory and non-statutory embodiments . . . embraces subject matter that is not eligible for patent protection and therefore is directed to non-statutory subject matter." MPEP § 2106(I) (emphasis added); cf. In re Lintner, 458 F.2d 1013, 1015 (CCPA 1972) ("Claims which are broad enough to read on obvious subject matter are unpatentable even though they also read on nonobvious subject matter.").

Claim 1 and, by direct or indirect reference to claim 1, each of claims 2–10 are directed to "[a] computer program product embodied in a computer readable storage medium . . . , the computer program product comprising the programming instructions for" performing a series of steps. App. Br. 20 (Claims App'x). Absent a definition in the Specification that excludes transitory signals from being "computer readable storage medi[a]," "the ordinary and customary meaning of 'computer readable storage medium' to a person of ordinary skill in the art [is] broad enough to encompass both non-transitory and transitory media." Ex parte Mewherter, 107 USPQ2d 1857, 1860 (PTAB 2013) (precedential). Although the Specification describes "computer readable medium(s)" (Spec. ¶ 26), including "a computer readable signal medium" (id.) and "a computer readable storage medium" (id.), the descriptions are, for the most part, couched in permissive or exemplary language ("may be, for example, but not limited to," "a non-exhaustive list" (id.)) rather than setting forth a definition. The Specification states "[i]n the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device." Id. (emphases added). However, a signal, which is comprised of energy, is tangible, i.e., "substantially real." Merriam-Webster's Collegiate Dictionary 1204 (def. 1.b.) (10th ed. 1999). Furthermore, "a signal with embedded data [stores the data] . . . for data can be copied and held by a transitory recording medium, albeit temporarily, for future recovery of the embedded data." Mewherter, 107 USPQ2d at 1862.
The Specification goes on to state "[a] computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device." (Spec. ¶ 27.) Although this passage excludes a computer readable storage medium from the definition of a computer readable signal medium, it does not preclude a computer readable storage medium from encompassing a transitory signal (see id.).

For the foregoing reasons, we conclude the broadest reasonable interpretation of claims 1–10 encompasses a non-statutory transitory signal. Therefore, we conclude claims 1–10 encompass subject matter that is not within any statutory class of patent-eligible subject matter and are, therefore, patent-ineligible under 35 U.S.C. § 101. Accordingly, we sustain the rejection of claims 1–10 under 35 U.S.C. § 101 for at least this reason. Because Appellants have not had the opportunity to respond, with argument or amendment, to this reason for holding claims 1–10 to be patent-ineligible, we designate our conclusion that claims 1–10 are not within any statutory class of patent-eligible subject matter to be a new ground of rejection under our authority under 37 C.F.R. § 41.50(b). The foregoing notwithstanding, in the interests of administrative and judicial economy, we will further consider the Examiner's conclusion that claims 1–10 are directed to a judicial exception to patentable subject matter, i.e., to an abstract idea.

Claims 11–20

Claim 11 and, by direct or indirect reference to claim 11, each of claims 12–20 are directed to "[a] system . . . comprising: a memory unit for storing a computer program . . . ; and a processor coupled to said memory unit, wherein the processor is configured to execute the program instructions of the computer program." App. Br. 22 (Claims App'x). "[A] machine is a 'concrete thing, consisting of parts, or of certain devices and combination of devices.' This 'includes every mechanical device or combination of mechanical powers and devices to perform some function and produce a certain effect or result.'" In re Ferguson, 558 F.3d 1359, 1364 (Fed. Cir. 2009) (quoting Nuijten, 500 F.3d at 1355). The system recited in claims 11–20 is comprised of a memory and a processor, making it a "concrete thing consisting of parts or . . . [a] combination of devices." Id. We conclude claims 11–20 are directed to a "machine," which is one of the four classes of patent-eligible subject matter specified by 35 U.S.C. § 101. We, therefore, consider the Examiner's conclusion that claims 11–20 are directed to a judicial exception to patentable subject matter, i.e., to an abstract idea.

Judicial Exceptions to Patentable Subject Matter

The Supreme Court has held that there are implicit exceptions to the categories of patentable subject matter identified in § 101, including (1) laws of nature, (2) natural phenomena, and (3) abstract ideas. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2355 (2014). Further, the Court has "set forth a framework for distinguishing patents that claim [1] laws of nature, [2] natural phenomena, and [3] abstract ideas from those that claim patent-eligible applications of those concepts." Id., citing Mayo Collaborative Services v. Prometheus Laboratories, Inc., 132 S. Ct. 1289 (2012).
The evaluation follows the two-part analysis set forth in Mayo: (1) determine whether the claim is directed to a patent-ineligible concept, e.g., an abstract idea; and (2) if an abstract idea is present in the claim, determine whether any element, or combination of elements, in the claim is sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. See Alice, 134 S. Ct. at 2355; see also Final Act. 9–10. Our reviewing court has described

the first-stage inquiry as looking at the "focus" of the claims, their "character as a whole," and the second-stage inquiry (where reached) as looking more precisely at what the claim elements add—specifically, whether, in the Supreme Court's terms, they identify an "inventive concept" in the application of the ineligible matter to which (by assumption at stage two) the claim is directed.

Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016).

Claims 1–20 are subject to a single ground of rejection. Appellants argue independent claims 1 and 11 together and do not separately argue claims 2–10 and 12–20, which depend from claims 1 and 11 respectively, with particularity. Therefore, for the purposes of the following analysis, we select claim 1 as the representative claim, pursuant to our authority under 37 C.F.R. § 41.37(c)(1)(iv).

Mayo/Alice Step One

The Examiner finds as follows:

[T]he claims are directed [to] generating a best practice workflow comprising wholly generic computers. This is a concept involving human activity relating to commercial practices. The process of computing a single ranking and generating the best practice workflow all describe the abstract idea. Thus, at the first step of the analysis, the claims at issue here are directed to a patent-ineligible concept: an abstract idea.

Final Act. 11. The Examiner further explains the claimed invention "could be performed in the human mind, or by a human using a pen and paper." Final Act. 3. The Examiner continues to explain that the claims merely "recite[] receiving and processing information. This is simply the organization and comparison of data which can be performed mentally and is an idea of itself." Id. We agree. Organizing workflow is a fundamental economic practice (see, e.g., Spec. ¶ 21, which identifies obtaining a job as an example of a workflow project) and it is axiomatically related to organizing human activity. It is also a fundamental economic practice related to organizing human activity to get advice from others as to the best way of going about accomplishing a task.

Appellants contend the claims "specifically recite that a best practice workflow system receives practice instances over a network from users. Hence, these claim limitations are not being performed in a person's mind or by using a pen and paper." App. Br. 5. Appellants additionally argue as follows:

[The claims] specifically recite that a best practice workflow system receives crowdsourcing feedback over a network from users. How can one receive feedback from an online community in one's own mind or by using a pen and paper? Hence, this claim limitation is not being performed in a person's mind or by using a pen and paper.

App. Br. 6. Appellants further contend "the claim limitations of claim[] 1 . . . is not a purely mental process that could otherwise be performed in any reasonable amount of time and with any reasonable expectation of accuracy without the use of a computer." Reply Br. 3 (citing Spec. ¶ 38).
We are not persuaded of error. The Federal Circuit has held that if a method can be performed by human thought alone, or by a human using pen and paper, it is merely an abstract idea and is not patent-eligible under § 101. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372–73 (Fed. Cir. 2011) ("[A] method that can be performed by human thought alone is merely an abstract idea and is not patent-eligible under § 101."); Gottschalk v. Benson, 409 U.S. 63, 67 (1972) ("[P]henomena of nature . . . , mental processes, and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work."). Additionally, mental processes remain unpatentable even when automated to reduce the burden on the user of what once could have been done with pen and paper. CyberSource, 654 F.3d at 1375 ("That purely mental processes can be unpatentable, even when performed by a computer, was precisely the holding of the Supreme Court in Gottschalk v. Benson."). Furthermore, "the name of the game is the claim," In re Hiniker Co., 150 F.3d 1362, 1369 (Fed. Cir. 1998) (citing Giles Sutherland Rich, Extent of Protection and Interpretation of Claims—American Perspectives, 21 Int'l Rev. Indus. Prop. & Copyright L. 497, 499 (1990)); claims are to be given their broadest reasonable interpretation consistent with the Specification, American Academy of Science Tech Center, 367 F.3d at 1364, and "limitations are not to be read into the claims from the specification," Van Geuns, 988 F.2d at 1184.

We note that, contrary to Appellants' arguments, claim 1 does not recite that the crowdsourcing involves "an online community" (App. Br. 6). As pointed out by the Examiner, the recited "network" could be met by a telephone or physical mail network (Ans. 7), and, in any case, receiving data and information over a network that interconnects computers is conventional and routine use of the computers and network (id. at 7–8). Nor does the recitation of "crowdsourcing" lead to a different conclusion. Claim 1 places no lower limit on the number of participants in the recited "crowd[]," places no lower limit on the number of practice instances, and does not recite any criteria for the ranking of practice instances. Therefore, Appellants do not persuade us that the invention recited in claim 1, when given its broadest reasonable interpretation, without importing limitations from the Specification, cannot be performed in a reasonable amount of time with a reasonable expectation of accuracy without the use of a computer.

For emphasis, we note that, although the term "crowdsourcing" is of relatively recent provenance (reportedly coined in 2005), gaining currency in the age of the Internet, the underlying concept is not. See Crowdsourcing, Wikipedia, https://en.wikipedia.org/wiki/Crowdsourcing (last edited July 31, 2017) (last visited Aug. 2, 2017) (hereinafter "Wikipedia"). Wikipedia is not itself prior art to Appellants' invention; however, it is instructive on the nature and history of the concept now referred to as "crowdsourcing." For example, the number of participants in the crowdsourced project, although usually large, is "undefined." Wikipedia ("Definitions"). Furthermore, the concept of "crowdsourcing" pre-dates the Internet by hundreds of years, as demonstrated by the British "Longitude Prize," established in 1714. Id. ("Historical examples"/"Timeline of major events").

2 Wikipedia was not previously of record in this Application.
Further, it would be evident to any reader that many of the pre-Internet examples of crowdsourcing, for example the compilation of the first Oxford English Dictionary, published in 1884, would have been performed with pen and paper and used the physical mail network for communication. Id.

Appellants next contend that the computing of a ranking for each practice instance and the generating of the best practice workflow are "not being performed in a person's mind or by using a pen and paper." App. Br. 6, 8. Appellants assert "the best practice workflow system generates the best practice workflow for completing the project based on complex operations which cannot be simply performed entirely in a human's mind." Id. at 8 (referring to Spec. ¶¶ 44–45); see Research Corp. Techs. v. Microsoft Corp., 627 F.3d 859 (Fed. Cir. 2010). However, although Appellants' claimed invention performs these operations using computers and networks, Appellants proffer no explanation, other than the general assertion that the operations are complex, as to why these operations cannot be performed in a person's mind or by using a pen and paper. We note that historical examples of crowdsourcing, such as the compilation of logs from users of 5000 copies of Wind and Current Charts, ca. 1848, were very complex operations, but were undoubtedly performed in a person's mind or by using a pen and paper. See Wikipedia ("Historical examples"/"Timeline of major events").

Appellants contend the claim does not merely recite an algorithm for solving a mathematical problem, but rather the "invention provide[s] a means for generating a workflow to aid a user in completing a project (e.g., obtaining a job) that is constantly updated based on feedback from other users thereby providing the best workflow with the best process to assist the user in completing the project." App. Br. 9 (referring to Spec. ¶ 34). The recited steps of "computing . . . a single ranking for each of said practice instances based on said received rankings . . . ; and generating . . . said best practice workflow for completing said project based on practice instances whose single ranking exceeds a threshold" (claim 1) clearly describe a mathematical algorithm, albeit without setting forth the computational details of the algorithm. Although the Specification describes the invention as constantly updating the best practices workflow (see, e.g., Spec. ¶ 1), claim 1 does not recite this limitation. See Van Geuns, 988 F.2d at 1184. Nevertheless, the mere recitation of a mathematical algorithm in a claim does not necessarily render it patent-ineligible. See, e.g., Diamond v. Diehr, 450 U.S. 175 (1981); but see Parker v. Flook, 437 U.S. 584 (1978). We need not decide whether claim 1, when considered as a whole, is directed to a mathematical algorithm, and we do not rely on a conclusion that claim 1, as a whole, is directed to a mathematical algorithm. Because we agree with, and adopt, the Examiner's other reasons for concluding claim 1 is directed to a patent-ineligible abstract idea, Appellants' argument would be unavailing even if we were to agree.

Appellants contend "claims 1 . . . [is] not simply directed to organizing and comparing data which can be performed mentally and is an idea of itself." App. Br. 13. Appellants continue, arguing that "[w]hile . . .
[the recited] steps [of claim 1] include terms, such as 'receiving' and 'generating,' surely that does not necessarily [imply] that the claims are directed to simply organizing and comparing data." App. Br. 14.

We are not persuaded of error. First of all, although Appellants assert that claim 1 is not directed to organizing and comparing data, their argument consists of quoting the language of the claim, with no explanation whatsoever as to why any of the quoted language represents more than organizing and comparing data. See id. at 13–14. Secondly, we see no principled distinction between the patent-ineligible "[m]ethod of detecting events on an interconnected electric power grid in real time over a wide area and automatically analyzing the events on the interconnected electric power grid," at issue in Electric Power, 830 F.3d at 1351, and the method recorded on the "computer readable storage medium for providing a best practice workflow to aid a user in completing a project," at issue here. Claim 1 recites receiving "practice instances for completing [a] project" and "crowdsourcing feedback concerning a plurality of rankings for each of said practice instances," which are merely steps in which information is collected without changing its character as information. See Electric Power, 830 F.3d at 1353 ("[W]e have treated collecting information, including when limited to particular content (which does not change its character as information), as within the realm of abstract ideas."). We note that workflow diagrams comprising nodes or tasks and directed edges (see Spec., Figs. 4–6) are well-known and conventional ways of presenting information about workflow organization. Similarly, the recited steps of "computing . . . a single ranking for each of said practice instances based on said received rankings" and "generating . . . [a] best practice workflow for completing said project based on practice instances whose single ranking exceeds a threshold" are merely steps in which the information is analyzed in ways that could be performed by people in their minds or using mathematical algorithms. See Electric Power, 830 F.3d at 1354 ("[W]e have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category.").

Therefore, we conclude that, like the claim in Electric Power, claim 1 is focused on a combination of abstract-idea processes and is, therefore, directed to an abstract idea for this additional reason. We note that claim 1, unlike the claim in Electric Power, does not recite any steps in which the information is displayed or presented to a user. Cf. Electric Power, 830 F.3d at 1354 ("[W]e have recognized that merely presenting the results of abstract processes of collecting and analyzing information, without more (such as identifying a particular tool for presentation), is abstract as an ancillary part of such collection and analysis.").

Based on the foregoing, we conclude claim 1 is directed to a patent-ineligible abstract idea because it claims a method of organizing human activity relating to a fundamental economic practice, and is merely a method of organizing and comparing data that can be performed in a human mind or using pen and paper.
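For a concrete sense of the point (using purely hypothetical numbers, not drawn from the record): given crowdsourced rankings of 3, 4, and 5 for a practice instance, a person could compute the single ranking as the average, (3 + 4 + 5) / 3 = 4, compare it against a threshold of, say, 3.5, and include the instance in the best practice workflow, entirely with pen and paper.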
Mayo/Alice Step Two

The Examiner finds that claim 1 recites no element, or combination of elements, that is sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. See Alice, 134 S. Ct. at 2355; see Final Act. 11–13. In particular, the Examiner finds that the recitations of computer and network technology in the claim (and similarly in claim 11) amount to no more than instructions to apply the recited abstract idea using conventional computer and network technology as a tool. See Final Act. 11–12 (citing Alice, 134 S. Ct. at 2358). The Examiner further finds that the invention recited in claim 1

do[es] not improve another technology or technology field, the claims do not effect a transformation or reduction of a particular article to a different state or thing, the claims do not add a specific limitation that is other than well-known and understood, routine, conventional in the field or add unconventional steps that confine the claim to a particular useful application and the claims do not recite other meaningful limitations beyond generally linking the use of the judicial exception (abstract idea) to a particular technological environment[.]

Final Act. 12 (citing Bancorp Servs., L.L.C. v. Sun Life Assurance Co., 687 F.3d 1266, 1278–79 (Fed. Cir. 2012)). We agree.

Appellants contend "when looking at the additional limitations as an ordered combination [of claim 1], the invention as a whole amounts to significantly more than simply organizing and comparing data as suggested by the Examiner." App. Br. 14. We disagree. Claim 1 merely recites using computers and networks in a conventional manner as tools to obtain advice from others as to the best way (sequence) of accomplishing a task, a pre-Internet task.

Appellants contend the claimed invention "generat[es] a workflow to aid a user in completing a project (e.g., obtaining a job) that is constantly updated based on feedback from other users thereby providing the best workflow with the best process to assist the user in completing the project." Id. at 15 (citing Spec. ¶ 34). However, we note that claim 1 does not recite constant updating. Appellants point to limitations related to "computing a ranking for each of the practice instances based on the rankings received from a plurality of users ('crowdsourcing')" (id.) as amounting to significantly more than the abstract idea because it is necessarily rooted in computer technology (id. (citing DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245 (Fed. Cir. 2014))). We disagree because, as discussed supra, the recited computing step and the threshold recitation in the generating step merely describe an algorithm for evaluating the advice ("feedback") received from others. Unlike the claims at issue in DDR, the invention recited in claim 1 does not change how the Internet responds to user input, see DDR, 773 F.3d at 1257. Nor does the claimed invention control or improve another process or technology, see Diamond v. Diehr, 450 U.S. 175 (1981), or improve how the computer or network themselves function, see Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016).

Based on the foregoing, we conclude claim 1 does not recite any element or combination of elements such that claim 1 amounts to significantly more than the abstract idea itself.
Summary

In view of the foregoing, we conclude the Examiner did not err in rejecting representative claim 1 as being directed to a patent-ineligible abstract idea. Accordingly, we sustain the rejection of representative claim 1 and claims 2–20, which fall with representative claim 1, under 35 U.S.C. § 101. However, in sustaining the rejection of claims 1–20, we cite to evidence not previously placed in the record, regarding which Appellants have not had an opportunity to respond, by amendment or argument. Therefore, we designate our conclusion that claims 1–20 are directed to a patent-ineligible abstract idea as a new ground of rejection.

Additional Arguments

Appellants argue that the Examiner has improperly concluded that software implemented on a generic computer can never be patentable. See App. Br. 16–18. In this vein, Appellants further point out that the number of patents allowed by the Examiner year-over-year was greatly reduced subsequent to the Supreme Court's decision in Alice. Id. at 17. Appellants argue that the Examiner has taken the position that generating a best practice workflow is not patentable "no matter what is recited in the claim limitations." Id. at 18.

This Board is empowered to "review adverse decisions of examiners upon applications for patents." 35 U.S.C. § 6(b)(1) (emphasis added); see also 37 C.F.R. § 41.31(a)(1). We have reviewed the decision of the Examiner to reject the claims before us for patent-ineligibility. We have not considered, nor can we consider, whether different claims to computer-implemented inventions, i.e., software, or different claims to inventions for generating a best practice workflow, with different limitations, would be patent-eligible. To the extent Appellants' arguments are grounded in how the Examiner conducted the examination, such arguments relate to the conduct of examination rather than the decision itself, and are reviewable by petition to the Director of the U.S. Patent and Trademark Office or as delegated by the Director. See 37 C.F.R. § 1.181 et seq.; see also MPEP § 1201 ("The Board will not ordinarily hear a question that should be decided by the Director on petition.").

CONCLUSION

On the record before us, we conclude that the Examiner did not err in rejecting claims 1–20 under 35 U.S.C. § 101. We designate our conclusion as a new ground of rejection, however, because (1) we have newly concluded that claims 1–10 are not limited to any statutory class of patent-eligible subject matter, and (2) we have introduced new evidence, i.e., Wikipedia, into the record.

DECISION

The decision of the Examiner to reject claims 1–20 is affirmed. We designate our affirmance of the rejection of claims 1–20 as a new ground of rejection. This decision contains new grounds of rejection pursuant to 37 C.F.R. § 41.50(b). Section 41.50(b) provides that "[a] new ground of rejection . . . shall not be considered final for judicial review." Section 41.50(b) also provides that Appellants, WITHIN TWO MONTHS FROM THE DATE OF THE DECISION, must exercise one of the following two options with respect to the new ground of rejection to avoid termination of the appeal as to the rejected claims:

(1) Reopen prosecution. Submit an appropriate amendment of the claims so rejected or new Evidence relating to the claims so rejected, or both, and have the matter reconsidered by the examiner, in which event the proceeding will be remanded to the examiner. . . .
(2) Request rehearing. Request that the proceeding be reheard under § 41.52 by the Board upon the same Record.

37 C.F.R. § 41.50(b). No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1). See 37 C.F.R. §§ 41.50(f), 41.52(b).

AFFIRMED
37 C.F.R. § 41.50(b)

Notice of References Cited (PTO-892)
Application/Control No. 13/900,770; Applicant(s): Lakshmanan et al.; Appeal No. 2016-007667; Art Unit 3623

U.S. PATENT DOCUMENTS:
A. US- (document number not shown in record), classification 707/5

NON-PATENT DOCUMENTS:
U. Crowdsourcing, Wikipedia, https://en.wikipedia.org/wiki/Crowdsourcing (last edited July 31, 2017) (last visited Aug. 2, 2017)

Crowdsourcing
From Wikipedia, the free encyclopedia

Crowdsourcing is a specific sourcing model in which individuals or organizations use contributions from Internet users to obtain needed services or ideas. Crowdsourcing was coined in 2005 as a portmanteau of crowd and outsourcing. This mode of sourcing, which is to divide work between participants to achieve a cumulative result, was already successful prior to the digital age (i.e., "offline"). Crowdsourcing is distinguished from outsourcing in that the work can come from an undefined public (instead of being commissioned from a specific, named group) and in that crowdsourcing includes a mix of bottom-up and top-down processes. Advantages of using crowdsourcing may include improved costs, speed, quality, flexibility, scalability, or diversity.[9][10] Crowdsourcing in the form of idea competitions or innovation contests provides a way for organizations to learn beyond what their "base of minds" of employees provides (e.g., LEGO Ideas).[11] Crowdsourcing can also involve rather tedious "microtasks" that are performed in parallel by large, paid crowds (e.g., Amazon Mechanical Turk).
Crowdsourcing has also been used for noncommercial work and to develop common goods (e.g., Wikipedia).[12]

Definitions

The term "crowdsourcing" was coined in 2005 by Jeff Howe and Mark Robinson, editors at Wired, to describe how businesses were using the Internet to "outsource work to the crowd," which quickly led to the portmanteau "crowdsourcing." Howe first published a definition for the term crowdsourcing in a companion blog post to his June 2006 Wired article, "The Rise of Crowdsourcing," which came out in print just days later:[13]

"Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers."

In a February 1, 2008, article, Daren C. Brabham, "the first [person] to publish scholarly research using the word crowdsourcing" and writer of the 2013 book Crowdsourcing, defined it as an "online, distributed problem-solving and production model."[14][15] After studying more than 40 definitions of crowdsourcing in the scientific and popular literature, Enrique Estellés-Arolas and Fernando González Ladrón-de-Guevara, researchers at the Technical University of Valencia, developed a new integrating definition:

"Crowdsourcing is a type of participative online activity in which an individual, an institution, a nonprofit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task. The undertaking of the task; of variable complexity and modularity, and; in which the crowd should participate, bringing their work, money, knowledge [and/or] experience, always entails mutual benefit.
The user will receive the satisfaction of a given type of need, be it economic, social recognition, self-esteem, or the development of individual skills, while the crowdsourcer will obtain and use to their advantage that which the user has brought to the venture, whose form will depend on the type of activity undertaken."

As mentioned by the definitions of Brabham and Estellés-Arolas and Ladrón-de-Guevara above, crowdsourcing in the modern conception is an IT-mediated phenomenon, meaning that a form of IT is always used to create and access crowds of people.[16][17] In this respect, crowdsourcing has been considered to encompass three separate, but stable, techniques: competition crowdsourcing, virtual labor market crowdsourcing, and open collaboration crowdsourcing.[10][16][18]

Henk van Ess, a college lecturer in online communications, emphasizes the need to "give back" the crowdsourced results to the public on ethical grounds. His nonscientific, noncommercial definition is widely cited in the popular press:[19] "Crowdsourcing is channeling the experts' desire to solve a problem and then freely sharing the answer with everyone."

Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public, and an open call for contributions to help solve the problem. Members of the public submit solutions that are then owned by the entity which originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or with recognition. In other cases, the only rewards may be kudos or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time, or from experts or small businesses which were previously unknown to the initiating organization.[5] Another consequence of the multiple definitions is the controversy surrounding what kinds of activities may be considered crowdsourcing.

Historical examples

While the term "crowdsourcing" was popularized on the Internet to describe Internet-based activities,[15] some examples of projects, in retrospect, can be described as crowdsourcing.

Timeline of major events

A brief timeline of events prior to 2006:

■ 1714 - The Longitude Prize: When the British government was trying to find a way to measure a ship's longitudinal position, they offered the public a monetary prize to whoever came up with the best solution.[20]
■ 1783 - King Louis XVI offered an award to the person who could 'make the alkali' by decomposing sea salt by the 'simplest and most economic method.'[20]
■ 1848 - Matthew Fontaine Maury distributed 5000 copies of his Wind and Current Charts free of charge on the condition that sailors returned a standardized log of their voyage to the U.S. Naval Observatory. By 1861, he had distributed 200,000 copies free of charge, on the same conditions.[21]
■ 1884 - Publication of the Oxford English Dictionary: 800 volunteers catalogued words to create the first fascicle of the OED.[20]
■ 1916 - Planters Peanuts contest: The Mr.
Peanut logo was designed by a 14-year-old boy who won the Planters Peanuts logo contest.[20]
■ 1957 - Jørn Utzon, winner of the design competition for the Sydney Opera House.[20]
■ 1970 - French amateur photo contest 'C'était Paris en 1970' ('This Was Paris in 1970') sponsored by the city of Paris, France-Inter radio, and the Fnac: 14,000 photographers produced 70,000 black-and-white prints and 30,000 color slides of the French capital to document the architectural changes of Paris. Photographs were donated to the Bibliothèque historique de la ville de Paris.[22]
■ 1975 - 'Manthan', a movie directed by Shyam Benegal about the story of the Amul brand, was funded by 500,000 farmers who contributed Rs. 2 each.[23]
■ 1996 - The Hollywood Stock Exchange was founded: Allowed for the buying and selling of shares.[20]
■ 1997 - British rock band Marillion raised $60,000 from their fans to help finance their U.S. tour.[20]
■ 2000 - JustGiving established: This online platform allows the public to help raise money for charities.[20]
■ 2000 - UNV Online Volunteering service launched: Connecting people who commit their time and skills over the Internet to help organizations address development challenges.[24]
■ 2000 - iStockPhoto was founded: The free stock imagery website allows the public to contribute to and receive commission for their contributions.[25]
■ 2001 - Launch of Wikipedia: "Free-access, free content Internet encyclopedia."[26]
■ 2004 - Toyota's first "Dream car art" contest: Children were asked globally to draw their 'dream car of the future.'[27]
■ 2005 - Kodak's "Go for the Gold" contest: Kodak asked anyone to submit a picture of a personal victory.[27]
■ 2006 - Jeff Howe coined the term crowdsourcing in Wired.[25]
■ 2009 - Waze, a community-oriented GPS app, allows for users to submit road info and route data based on location, such as reports of car accidents or traffic, and integrates that data into its routing algorithms for all users of the app.

Early competitions

Crowdsourcing has often been used in the past as a competition to discover a solution. The French government proposed several of these competitions, often rewarded with Montyon Prizes, created for poor Frenchmen who had done virtuous acts.[28] These included the Leblanc process, or the Alkali prize, where a reward was provided for separating the salt from the alkali, and the Fourneyron's turbine, when the first hydraulic commercial turbine was developed.[29] In response to a challenge from the French government, Nicolas Appert won a prize for inventing a new way of food preservation that involved sealing food in air-tight jars.[30] The British government provided a similar reward to find an easy way to determine a ship's longitude in the Longitude Prize. During the Great Depression, out-of-work clerks tabulated higher mathematical functions in the Mathematical Tables Project as an outreach project.[31]

One of the biggest crowdsourcing campaigns was a public design contest in 2010 hosted by the Indian government's finance ministry to create a symbol for the Indian rupee. Thousands of people sent in entries before the government zeroed in on the final symbol based on the Devanagari script using the letter Ra.[32]

In astronomy

Crowdsourcing in astronomy was used in the early 19th century by astronomer Denison Olmsted. After being awakened late one November night by a meteor shower, Olmsted noticed a pattern in the shooting stars. Olmsted wrote a brief report of this meteor shower in the local newspaper.
"As the cause of 'Falling Stars' is not understood by meteorologists, it is desirable to collect all the facts attending this phenomenon, stated with as much precision as possible," Olmsted wrote to readers, in a report subsequently picked up and pooled to newspapers nationwide. Responses came pouring in from many states, along with scientists' observations sent to the American Journal of Science and Arts.[33] These responses helped him make a series of scientific breakthroughs, the major discovery being that meteor showers are seen nationwide and fall from space under the influence of gravity. Also, they demonstrated that the showers appeared in yearly cycles, a fact that often eluded scientists. The responses allowed him to suggest a velocity for the meteors, although his estimate turned out to be too conservative. If he had just taken the responses as presented, his conjecture on the meteors' velocity would have been closer to their actual speed.

A more recent version of crowdsourcing in astronomy is NASA's photo organizing project, which asks internet users to browse photos taken from space and try to identify the location the picture is documenting.[35]

In energy system research

Energy system models require large and diverse datasets, increasingly so given the trend towards greater temporal and spatial resolution.[36] In response, there have been several initiatives to crowdsource this data. Launched in December 2009, OpenEI is a collaborative website, run by the US government, providing open energy data.[37][38] While much of its information is from US government sources, the platform also seeks crowdsourced input from around the world.[39] The semantic wiki and database Enipedia also publishes energy systems data using the concept of crowdsourced open information. Enipedia went live in March 2011.[40][41]

In genealogy research

Genealogical research was using crowdsourcing techniques long before personal computers were common. Beginning in 1942, members of The Church of Jesus Christ of Latter-day Saints encouraged members to submit information about their ancestors. The submitted information was gathered together into a single collection. In 1969, to encourage more people to participate in gathering genealogical information about their ancestors, the church started the three-generation program. In this program, church members were asked to prepare documented family group record forms for the first three generations. The program was later expanded to encourage members to research at least four generations and became known as the four-generation program.[42]

Institutes that have records of interest to genealogical research have used crowds of volunteers to create catalogs and indices to records.

In genetic genealogy research

Genetic genealogy is a combination of traditional genealogy with genetics. The rise of personal DNA testing, after the turn of the century, by companies such as Gene by Gene, FTDNA, GeneTree, 23andMe, and Ancestry.com, has led to public and semipublic databases of DNA testing which use crowdsourcing techniques. In recent years, citizen science projects have become increasingly focused on providing benefits to scientific research.[43][44][45] This includes support, organization, and dissemination of personal DNA (genetic) testing.
Similar to amateur astronomy, citizen scientists encouraged by volunteer organizations like the International Society of Genetic Genealogy[46] have provided valuable information and research to the professional scientific community.[47]

As Spencer Wells, director of the Genographic Project, puts it:

"Since 2005, the Genographic Project has used the latest genetic technology to expand our knowledge of the human story, and its pioneering use of DNA testing to engage and involve the public in the research effort has helped to create a new breed of 'citizen scientist.' Geno 2.0 expands the scope for citizen science, harnessing the power of the crowd to discover new details of human population history."[48]

In journalism

Crowdsourcing is increasingly used in professional journalism. Journalists crowdsource information from the crowd, typically fact-check the information, and then use it in their articles as they see fit. The leading daily newspaper in Sweden successfully used crowdsourcing in investigating the home loan interest rates in the country in 2013-2014, resulting in over 50,000 submissions.[49] The leading daily newspaper in Finland crowdsourced an investigation into stock short selling in 2011-2012, and the crowdsourced information led to the revelation of a sketchy tax evasion system in a Finnish bank. The bank executive was fired and policy changes followed.[50] TalkingPointsMemo in the United States asked its readers to examine 3000 emails concerning the firing of federal prosecutors in 2008. The British newspaper the Guardian crowdsourced the examination of hundreds of thousands of documents in 2009.[51]

In linguistics

Crowdsourcing strategies have been applied to estimate word knowledge and vocabulary size.[52]

In ornithology

Another early example of crowdsourcing occurred in the field of ornithology. On December 25, 1900, Frank Chapman, an early officer of the National Audubon Society, initiated a tradition dubbed the "Christmas Day Bird Census". The project called birders from across North America to count and record the number of birds in each species they witnessed on Christmas Day. The project was successful, and the records from 27 different contributors were compiled into one bird census, which tallied around 90 species of birds.[53] This large-scale collection of data constituted an early form of citizen science, the premise upon which crowdsourcing is based. In the 2012 census, more than 70,000 individuals participated across 2,369 bird count circles.[54] Christmas 2014 marked the National Audubon Society's 115th annual Christmas Bird Count.

In public policy

Crowdsourcing public policy and the production of public services is also referred to as citizen sourcing. The first conference focusing on Crowdsourcing for Politics and Policy took place at Oxford University, under the auspices of the Oxford Internet Institute, in 2014. Research has emerged since 2012[55] that focuses on the use of crowdsourcing for policy purposes.[56][57] These include the experimental investigation of the use of Virtual Labor Markets for policy assessment,[58] and an assessment of the potential for citizen involvement in process innovation for public administration.[59]

Governments across the world are increasingly using crowdsourcing for knowledge discovery and civic engagement. Iceland crowdsourced their constitution reform process in 2011, and Finland has crowdsourced several law reform processes to address their off-road traffic laws.
The Finnish government allowed citizens to go on an online forum to discuss problems and possible resolutions regarding some off-road traffic laws. The crowdsourced information and resolutions would then be passed on to legislators for them to refer to when making a decision, letting citizens more directly contribute to public policy.[60][61] The City of Palo Alto is crowdsourcing people's feedback for its Comprehensive City Plan update in a process which started in 2015.[62] The House of Representatives in Brazil has used crowdsourcing in policy reforms, and federal agencies in the United States have used crowdsourcing for several years.[63]

In seismology

The European-Mediterranean Seismological Centre (EMSC) has developed a seismic detection system by monitoring the traffic peaks on its website and by the analysis of keywords used on Twitter.[64]

In libraries

Crowdsourcing is used in libraries[65] for OCR corrections on digitized texts, for tagging, and for funding.

Modern methods

Currently, crowdsourcing has transferred mainly to the Internet, which provides a particularly beneficial venue for crowdsourcing since individuals tend to be more open in web-based projects where they are not being physically judged or scrutinized, and thus can feel more comfortable sharing. This approach ultimately allows for well-designed artistic projects because individuals are less conscious, or maybe even less aware, of scrutiny towards their work. In an online atmosphere, more attention can be given to the specific needs of a project, rather than spending as much time in communication with other individuals.[66]

According to a definition by Henk van Ess: "The crowdsourced problem can be huge (epic tasks like finding alien life or mapping earthquake zones) or very small ('where can I skate safely?'). Some examples of successful crowdsourcing themes are problems that bug people, things that make people feel good about themselves, projects that tap into niche knowledge of proud experts, subjects that people find sympathetic or any form of injustice."

Crowdsourcing can take either an explicit or an implicit route. Explicit crowdsourcing lets users work together to evaluate, share, and build different specific tasks, while implicit crowdsourcing means that users solve a problem as a side effect of something else they are doing. With explicit crowdsourcing, users can evaluate particular items like books or webpages, or share by posting products or items. Users can also build artifacts by providing information and editing other people's work. Implicit crowdsourcing can take two forms: standalone and piggyback. Standalone allows people to solve problems as a side effect of the task they are actually doing, whereas piggyback takes users' information from a third-party website to gather information.[68]

In his 2013 book, Crowdsourcing, Daren C. Brabham puts forth a problem-based typology of crowdsourcing approaches:[69]

■ Knowledge discovery and management is used for information management problems where an organization mobilizes a crowd to find and assemble information. It is ideal for creating collective resources.
■ Distributed human intelligence tasking is used for information management problems where an organization has a set of information in hand and mobilizes a crowd to process or analyze the information. It is ideal for processing large data sets that computers cannot easily do.
■ Broadcast search is used for ideation problems where an organization mobilizes a crowd to come up with a solution to a problem that has an objective, provable right answer. It is ideal for scientific problem solving.
■ Peer-vetted creative production is used for ideation problems where an organization mobilizes a crowd to come up with a solution to a problem which has an answer that is subjective or dependent on public support. It is ideal for design, aesthetic, or policy problems.

Crowdsourcing often allows participants to rank each other's contributions, e.g., in answer to the question "What is one thing we can do to make Acme a great company?" One common method for ranking is "like" counting, where the contribution with the most likes ranks first. This method is simple and easy to understand, but it privileges early contributions, which have more time to accumulate likes. In recent years several crowdsourcing companies have begun to use pairwise comparisons, backed by ranking algorithms (a minimal illustrative sketch appears at the end of the Crowdvoting subsection below). Ranking algorithms do not penalize late contributions. They also produce results faster, and have proven to be at least 10 times faster than manual stack ranking ("Crowdvoting: How Elo Limits Disruption," thevisionlab.com, May 25, 2017). One drawback, however, is that ranking algorithms are more difficult to understand than like counting.

Examples

Some common categories of crowdsourcing can be used effectively in the commercial world, including crowdvoting, crowdsolving, crowdfunding, microwork, creative crowdsourcing, crowdsource workforce management, and inducement prize contests. Although this may not be an exhaustive list, the items cover the current major ways in which people use crowds to perform tasks.[70]

Crowdvoting

Crowdvoting occurs when a website gathers a large group's opinions and judgments on a certain topic. The Iowa Electronic Market is a prediction market that gathers crowds' views on politics and tries to ensure accuracy by having participants pay money to buy and sell contracts based on political outcomes.[71]

Some of the most famous examples have made use of social media channels: Domino's Pizza, Coca-Cola, Heineken, and Sam Adams have thus crowdsourced a new pizza, bottle design, beer, and song, respectively.[72] Threadless.com selects the T-shirts it sells by having users provide designs and vote on the ones they like, which are then printed and available for purchase.[15]

The California Report Card (CRC), a program jointly launched in January 2014 by the Center for Information Technology Research in the Interest of Society[73] and Lt. Governor Gavin Newsom, is an example of modern-day crowdvoting. Participants access the CRC online and vote on six timely issues. Through principal component analysis, the users are then placed into an online "cafe" in which they can present their own political opinions and grade the suggestions of other participants. This system aims to effectively involve the greater public in relevant political discussions and highlight the specific topics with which Californians are most concerned.

Crowdvoting's value in the movie industry was shown in 2009, when a crowd accurately predicted the success or failure of a movie based on its trailer,[74][75] a feat that was replicated in 2013 by Google.[76]

On Reddit, users collectively rate web content, discussions, and comments, as well as questions posed to persons of interest in "AMA" and /r/AskScience online interviews.
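The Elo-style pairwise ranking mentioned under "Modern methods" above can be made concrete with a minimal sketch. This is illustrative only: the cited article names Elo but gives no parameters, so the K-factor of 32, the initial rating of 1000, and the contribution names below are conventional assumptions, not taken from any particular platform.

```python
# Minimal Elo-style pairwise ranking sketch. Assumptions (not from the cited
# article): K = 32 and initial rating 1000 are conventional Elo defaults, and
# the contribution names are made up.

def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    # Expected score of the winner under the standard Elo logistic model.
    expected = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected)  # an upset win moves ratings more
    return winner + delta, loser - delta

ratings = {"idea A": 1000.0, "idea B": 1000.0, "idea C": 1000.0}

# Each pairwise vote ("which contribution is better?") updates two ratings,
# so late entries are not penalized the way cumulative like-counts are.
for won, lost in [("idea A", "idea B"), ("idea C", "idea B"), ("idea A", "idea C")]:
    ratings[won], ratings[lost] = elo_update(ratings[won], ratings[lost])

print(sorted(ratings, key=ratings.get, reverse=True))  # best-ranked first
```

Because each vote is a single comparison, judgments can be aggregated incrementally as they arrive, which is one reason pairwise schemes can converge on a ranking faster than waiting for like-counts to accumulate.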
Crowdsourcing creative work

Creative crowdsourcing spans sourcing creative projects such as graphic design, crowdsourcing architecture, apparel design, movies,[77] writing, illustration, etc.[78][79] While crowdsourcing competitions have been used for decades in some creative fields (such as architecture), creative crowdsourcing has proliferated with the recent development of web-based platforms where clients can solicit a wide variety of creative work at lower cost than by traditional means.

Crowdsourcing language-related data collection

Crowdsourcing has also been used for gathering language-related data. For dictionary work, as was mentioned above, it was applied over a hundred years ago by the Oxford English Dictionary editors, using paper and postage. Much later, a call for collecting examples of proverbs on a specific topic (religious pluralism) was printed in a journal.[80] Today, as "crowdsourcing" has the inherent connotation of being web-based, such language-related data gathering is being conducted on the web by crowdsourcing in accelerating ways. Currently, a number of dictionary compilation projects are being conducted on the web, particularly for languages that are not highly academically documented, such as the Oromo language.[81] Software programs have been developed for crowdsourced dictionaries, such as WeSay.[82] A slightly different form of crowdsourcing for language data has been the online creation of scientific and mathematical terminology for American Sign Language.[83] Proverb collection is also being done via crowdsourcing on the Web, most innovatively for the Pashto language of Afghanistan and Pakistan.[84][85][86] Crowdsourcing has been extensively used to collect high-quality gold standard data for creating automatic systems in natural language processing (e.g., named entity recognition, entity linking).[87]

Crowdsolving

Crowdsolving is a collaborative, yet holistic, way of solving a problem using many people, communities, groups, or resources.

Crowdsearching

Chicago-based startup Crowdfind, formerly "crowdfynd", uses a version of crowdsourcing best termed crowdsearching, which differs from microwork in that no payment is made for taking part in the search.[88] Their platform, through geographic location anchoring, builds a virtual search party of smartphone and Internet users to find lost items, pets, or persons and return them. TrackR uses a system they call "crowd GPS" to load Bluetooth identities to a central server to track lost or stolen items.

Crowdfunding

Crowdfunding is the process of funding projects by a multitude of people contributing a small amount to attain a certain monetary goal, typically via the Internet.[89] Crowdfunding has been used for both commercial and charitable purposes.[90] The crowdfunding model that has been around the longest is rewards-based crowdfunding, in which people can prepurchase products, buy experiences, or simply donate. While this funding may in some cases go towards helping a business, funders are not allowed to invest and become shareholders via rewards-based crowdfunding.[91]

Individuals, businesses, and entrepreneurs can showcase their businesses and projects to the entire world by creating a profile, which typically includes a short video introducing their project, a list of rewards per donation, and illustrations through images. The goal is to create a compelling message towards which readers will be drawn. Funders make monetary contributions for numerous reasons:
1. They connect to the greater purpose of the campaign, such as being a part of an entrepreneurial community and supporting an innovative idea or product.[92]
2. They connect to a physical aspect of the campaign, like rewards and gains from investment.[92]
3. They connect to the creative display of the campaign's presentation.
4. They want to see new products before the public.[92]

The dilemma for equity crowdfunding in the US as of 2012 was how the Securities and Exchange Commission (SEC) would regulate the entire process. At the time, rules and regulations were being refined by the SEC, which had until January 1, 2013, to tweak the fundraising methods. The regulators were overwhelmed trying to regulate Dodd-Frank and all the other rules and regulations involving public companies and the way they trade. Advocates of regulation claimed that crowdfunding would open up the floodgates for fraud, called it the "wild west" of fundraising, and compared it to the 1980s days of penny stock "cold-call cowboys". The process allows for up to $1 million to be raised without some of the regulations being involved. Companies under the then-current proposal would have exemptions available and be able to raise capital from a larger pool of persons, which can include lower thresholds for investor criteria, whereas the old rules required that the person be an "accredited" investor. These people are often recruited from social networks, where the funds can be acquired from an equity purchase, loan, donation, or ordering. The amounts collected have become quite high, with requests of over a million dollars for software such as Trampoline Systems, which used it to finance the commercialization of their new software.

Mobile crowdsourcing

Mobile crowdsourcing involves activities that take place on smartphones or mobile platforms, frequently characterized by GPS technology.[93] This allows for real-time data gathering and gives projects greater reach and accessibility. However, mobile crowdsourcing can lead to an urban bias, as well as safety and privacy concerns.[94][95][96]

Macrowork

Macrowork tasks typically have these characteristics: they can be done independently, they take a fixed amount of time, and they require special skills. Macrotasks could be part of specialized projects or could be part of a large, visible project where workers pitch in wherever they have the required skills. The key distinguishing factors are that macrowork requires specialized skills and typically takes longer, while microwork requires no specialized skills.

Microwork

Microwork is a crowdsourcing platform model where users do small tasks for which computers lack aptitude, for low amounts of money. Amazon's popular Mechanical Turk has created many different projects for users to participate in, where each task requires very little time and offers a very small amount in payment.[5] The Chinese versions of this, commonly called Witkey, are similar and include such sites as Taskcn.com and k68.cn. When choosing tasks, since only certain users "win", users learn to submit later and pick less popular tasks to increase the likelihood of getting their work chosen.[97] An example of a Mechanical Turk project is when users searched satellite images for a boat in order to find lost researcher Jim Gray.[68] Based on an elaborate survey of participants in a microtask crowdsourcing platform, Gadiraju et al.
have proposed a taxonomy of different types of crowdsourced microtasks.[98]

Simple projects

Simple projects are those that require more time and skill than micro- and macrowork. While an example of macrowork would be writing survey feedback, simple projects instead include activities like writing a basic line of code or programming a database, which require a larger time commitment and skill level. These projects are usually not found on sites like Amazon Mechanical Turk, and are instead posted on platforms like Upwork that call for specific expertise.

Complex projects

Complex projects generally take the most time, have higher stakes, and call for people with very specific skills. These are generally "one-off" projects that are difficult to accomplish, and can include projects like designing a new product that a company hopes to patent. Such tasks are "complex" because design is a meticulous process that requires a large amount of time to perfect, and because the people doing these projects must have specialized training in design to complete them effectively. These projects usually pay the highest, yet are rarely offered.[100]

Inducement prize contests

Web-based idea competitions or inducement prize contests often consist of generic ideas, cash prizes, and an Internet-based platform to facilitate easy idea generation and discussion. An example of these competitions is an event like IBM's 2006 "Innovation Jam", attended by over 140,000 international participants and yielding around 46,000 ideas.[101][102] Another example is the Netflix Prize in 2009. The idea was to ask the crowd to come up with a recommendation algorithm more accurate than Netflix's own. It had a grand prize of US$1,000,000, which was given to the BellKor's Pragmatic Chaos team, which bested Netflix's own algorithm for predicting ratings by 10.06% (a minimal sketch of the underlying accuracy metric appears below).

Another example of competition-based crowdsourcing is the 2009 DARPA balloon experiment, where DARPA placed 10 balloon markers across the United States and challenged teams to compete to be the first to report the location of all the balloons. A collaboration of efforts was required to complete the challenge quickly, and in addition to the competitive motivation of the contest as a whole, the winning team (MIT, in less than nine hours) established its own "collaborapetitive" environment to generate participation in its team.[103] A similar challenge was the Tag Challenge, funded by the US State Department, which required locating and photographing individuals in five cities in the US and Europe within 12 hours based only on a single photograph. The winning team managed to locate three suspects by mobilizing volunteers worldwide using an incentive scheme similar to the one used in the balloon challenge.[104]

Open innovation platforms are a very effective way of crowdsourcing people's thoughts and ideas for research and development. The company InnoCentive is a crowdsourcing platform for corporate research and development where difficult scientific problems are posted for crowds of solvers to discover the answer and win a cash prize, which can range from $10,000 to $100,000 per challenge.[15] InnoCentive, of Waltham, MA, and London, England, provides access to millions of scientific and technical experts from around the world. The company claims a success rate of 50% in providing successful solutions to previously unsolved scientific and technical problems.
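The Netflix Prize mentioned above judged entries by prediction error: the winning team had to reduce root-mean-square error (RMSE) on held-out ratings by at least 10% relative to Netflix's baseline. Below is a minimal Python sketch of that comparison; the rating arrays are made-up illustrative numbers, not contest data.

import math

def rmse(predicted, actual):
    # Root-mean-square error over paired rating predictions.
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

actual = [4, 3, 5, 2, 4]                # true user ratings (1-5 stars)
baseline = [3.8, 3.4, 4.1, 2.9, 3.6]    # hypothetical incumbent predictions
challenger = [4.1, 3.1, 4.7, 2.3, 3.9]  # hypothetical improved predictions

# Fraction by which the challenger lowers RMSE relative to the baseline.
improvement = 1 - rmse(challenger, actual) / rmse(baseline, actual)
print(f"relative RMSE improvement: {improvement:.1%}")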
IdeaConnection.com challenges people to come up with new inventions and innovations, and Ninesigma.com connects clients with experts in various fields. The X Prize Foundation creates and runs incentive competitions offering between $1 million and $30 million for solving challenges. Local Motors is another example of crowdsourcing: a community of 20,000 automotive engineers, designers, and enthusiasts competes to build off-road rally trucks.[105]

Implicit crowdsourcing

Implicit crowdsourcing is less obvious because users do not necessarily know they are contributing, yet it can still be very effective in completing certain tasks. Rather than users actively participating in solving a problem or providing information, implicit crowdsourcing involves users doing another task entirely, where a third party gains information on another topic based on the users' actions.[15] A good example of implicit crowdsourcing is the ESP game, where users guess what images are and these labels are then used to tag Google images. Another popular use of implicit crowdsourcing is reCAPTCHA, which asks people to solve CAPTCHAs to prove they are human, and then provides CAPTCHAs from old books that cannot be deciphered by computers, in order to digitize them for the web. Like many tasks solved using the Mechanical Turk, CAPTCHAs are simple for humans but often very difficult for computers.[68]

Piggyback crowdsourcing can be seen most frequently on websites such as Google that data-mine a user's search history and websites to discover keywords for ads, spelling corrections, and synonyms. In this way, users are unintentionally helping to modify existing systems, such as Google's AdWords.[106]

Health-care crowdsourcing

Research has emerged that outlines the use of crowdsourcing techniques in the public health domain. The collective intelligence outcomes from crowdsourcing are being generated in three broad categories of public health care: health promotion, health research,[107] and health maintenance.[108] Crowdsourcing also enables researchers to move from small homogeneous groups of participants to large heterogeneous groups,[109] beyond convenience samples such as students or more highly educated people.

Crowdsourcing in agriculture

Crowdsourced research also reaches the field of agriculture, mainly to help farmers and experts identify different types of weeds[110] in fields and to find the best ways to remove them.

Crowdsourcing in cheating in bridge

Boye Brogeland initiated a crowdsourcing investigation of cheating by top-level bridge players that showed several players were guilty, which led to their suspension.[111]

Crowdsourcers

A number of motivations exist for businesses to use crowdsourcing to accomplish their tasks, find solutions to problems, or gather information. These include the ability to offload peak demand, access cheap labor and information, generate better results, access a wider array of talent than might be present in one organization, and undertake problems that would have been too difficult to solve internally.[112] Crowdsourcing allows businesses to submit problems on which contributors can work, on topics such as science, manufacturing, biotech, and medicine, with monetary rewards for successful solutions.
Although crowdsourcing complicated tasks can be difficult, simple work tasks can be crowdsourced cheaply and effectively.[113] Crowdsourcing also has the potential to be a problem-solving mechanism for government and nonprofit use.[114] Urban and transit planning are prime areas for crowdsourcing. One project to test crowdsourcing's public participation process for transit planning in Salt Lake City was carried out from 2008 to 2009, funded by a U.S. Federal Transit Administration grant.[115] Another notable application of crowdsourcing to government problem solving is the Peer to Patent Community Patent Review project for the U.S. Patent and Trademark Office.[116]

Researchers have used crowdsourcing systems like the Mechanical Turk to aid their research projects by crowdsourcing some aspects of the research process, such as data collection, parsing, and evaluation. Notable examples include using the crowd to create speech and language databases[117][118] and to conduct user studies.[106] Crowdsourcing systems provide these researchers with the ability to gather large amounts of data. Additionally, using crowdsourcing, researchers can collect data from populations and demographics they may not have had access to locally, but that improve the validity and value of their work.[119]

Artists have also used crowdsourcing systems. In his project called the Sheep Market, Aaron Koblin used Mechanical Turk to collect 10,000 drawings of sheep from contributors around the world.[120] Sam Brown (artist) leverages the crowd by asking visitors of his website explodingdog to send him sentences that he uses as inspirations for paintings.[121] Art curator Andrea Grover argues that individuals tend to be more open in crowdsourced projects because they are not being physically judged or scrutinized.[66] As with other crowdsourcers, artists use crowdsourcing systems to generate and collect data. The crowd also can be used to provide inspiration and to collect financial support for an artist's work.[122] Additionally, crowdsourcing from 100 million drivers is being used by INRIX to collect users' driving times to provide better GPS routing and real-time traffic updates.[123]

Demographics

The crowd is an umbrella term for the people who contribute to crowdsourcing efforts. Though it is sometimes difficult to gather data about the demographics of the crowd, a study by Ross et al. surveyed the demographics of a sample of the more than 400,000 registered crowdworkers using Amazon Mechanical Turk to complete tasks for pay. A previous study in 2008 by Ipeirotis found that users at that time were primarily American, young, female, and well-educated, with 40% earning more than $40,000 per year. In November 2009, Ross found a very different Mechanical Turk population, 36% of which was Indian. Two-thirds of Indian workers were male, and 66% had at least a bachelor's degree. Two-thirds had annual incomes less than $10,000, with 27% sometimes or always depending on income from Mechanical Turk to make ends meet.[124] The average US user of Mechanical Turk earned $2.30 per hour for tasks in 2009, versus $1.58 for the average Indian worker. While the majority of users worked less than five hours per week, 18% worked 15 hours per week or more. This is less than minimum wage in the United States (but not in India), which Ross suggests raises ethical questions for researchers who use crowdsourcing.
The demographics of Microworkers.com differ from those of Mechanical Turk in that the US and India together account for only 25% of workers; 197 countries are represented among users, with Indonesia (18%) and Bangladesh (17%) contributing the largest share. However, 28% of employers are from the US.[125]

Another study of the demographics of the crowd at iStockphoto found a crowd that was largely white, middle- to upper-class, and higher educated, worked in so-called "white-collar jobs", and had high-speed Internet access at home.[126] In a 30-day crowdsourcing diary study in Europe, the participants were predominantly higher-educated women.[109]

Studies have also found that crowds are not simply collections of amateurs or hobbyists. Rather, crowds are often professionally trained in a discipline relevant to a given crowdsourcing task and sometimes hold advanced degrees and many years of experience in the profession.[126][127][128][129] Claiming that crowds are amateurs, rather than professionals, is both factually untrue and may lead to the marginalization of crowd labor rights.[130]

G. D. Saxton et al. (2013) studied the role of community users, among other elements, through a content analysis of 103 crowdsourcing organizations. Saxton et al. developed a taxonomy of nine crowdsourcing models (intermediary model, citizen media production, collaborative software development, digital goods sales, product design, peer-to-peer social financing, consumer report model, knowledge base building model, and collaborative science project model) in which to categorize the roles of community users, such as researcher, engineer, programmer, journalist, graphic designer, etc., and the products and services developed.[131]

Motivations

Contributors

Many scholars of crowdsourcing suggest that both intrinsic and extrinsic motivations cause people to contribute to crowdsourced tasks, and that these factors influence different types of contributors.[132][133][126][127][129][134][135][136] For example, students and people employed full-time rate human capital advancement as less important than part-time workers do, while women rate social contact as more important than men do.[134]

Intrinsic motivations are broken down into two categories: enjoyment-based and community-based motivations. Enjoyment-based motivations refer to motivations related to the fun and enjoyment that contributors experience through their participation. These motivations include skill variety, task identity, task autonomy, direct feedback from the job, and pastime. Community-based motivations refer to motivations related to community participation, and include community identification and social contact. In crowdsourced journalism, the motivation factors are intrinsic: the crowd is driven by the possibility of making a social impact, contributing to social change, and helping their peers.[132]

Extrinsic motivations are broken down into three categories: immediate payoffs, delayed payoffs, and social motivations. Immediate payoffs, through monetary payment, are the immediately received compensations given to those who complete tasks. Delayed payoffs are benefits that can be used to generate future advantages, such as training skills and being noticed by potential employers. Social motivations are the rewards of behaving pro-socially,[137] such as the altruistic motivations of online volunteers.
Chandler and Kapelner found that US users of the Amazon Mechanical Turk were more likely to complete a task when told they were going to "help researchers identify tumor cells" than when they were not told the purpose of their task. However, among those who completed the task, the quality of output did not depend on the framing of the task.[138]

Motivation factors in crowdsourcing are often a mix of intrinsic and extrinsic factors.[139] In a crowdsourced law-making project, the crowd was motivated by a mix of intrinsic and extrinsic factors. Intrinsic motivations included fulfilling civic duty, affecting the law for sociotropic reasons, and deliberating with and learning from peers. Extrinsic motivations included changing the law for financial gain or other benefits. Participation in crowdsourced policy-making was an act of grassroots advocacy, whether to pursue one's own interest or more altruistic goals, such as protecting nature.[140]

Another form of social motivation is prestige or status. The International Children's Digital Library recruits volunteers to translate and review books. Because all translators receive public acknowledgment for their contributions, Kaufman and Schulz cite this as a reputation-based strategy to motivate individuals who want to be associated with institutions that have prestige. The Mechanical Turk uses reputation as a motivator in a different sense, as a form of quality control. Crowdworkers who frequently complete tasks in ways judged to be inadequate can be denied access to future tasks, providing motivation to produce high-quality work.[141]

Requesters

Using crowdsourcing through means such as Amazon Mechanical Turk can provide researchers and requesters with an already established infrastructure for their projects, allowing them to easily use a crowd and access participants from diverse cultural backgrounds. Using crowdsourcing can also help complete work for projects that would normally face geographical and population-size limitations.[142]

Participation in crowdsourcing

Despite the potential global reach of IT applications online, recent research illustrates that differences in location affect participation outcomes in IT-mediated crowds.[143]

Limitations and controversies

At least five major topics cover the limitations and controversies surrounding crowdsourcing:

1. Impact of crowdsourcing on product quality
2. Entrepreneurs contribute less capital themselves
3. Increased number of funded ideas
4. The value and impact of the work received from the crowd
5. The ethical implications of low wages paid to crowdworkers

Impact of crowdsourcing on product quality

Crowdsourcing allows anyone to participate, allowing for many unqualified participants and resulting in large quantities of unusable contributions. Companies, or additional crowdworkers, then have to sort through all of these low-quality contributions. The task of sorting through crowdworkers' contributions, along with the necessary job of managing the crowd, requires companies to hire actual employees, thereby increasing management overhead.[144] For example, susceptibility to faulty results can be caused by targeted, malicious work efforts. Since crowdworkers completing microtasks are paid per task, the financial incentive often causes workers to complete tasks quickly rather than well. Verifying responses is time-consuming, so requesters often depend on having multiple workers complete the same task to correct errors (a sketch of this approach follows below).
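A minimal Python sketch of this redundancy-based error correction, assuming a simple majority vote over repeated completions. Platforms such as Mechanical Turk leave aggregation to the requester, so the task IDs and labels here are hypothetical.

from collections import Counter

def majority_answer(answers):
    # Keep the most common answer among redundant completions of a task.
    return Counter(answers).most_common(1)[0][0]

# Three workers per task; the lone dissenting answer on task "t2" is outvoted.
responses = {
    "t1": ["cat", "cat", "cat"],
    "t2": ["dog", "cat", "dog"],
}
labels = {task: majority_answer(ans) for task, ans in responses.items()}
print(labels)  # {'t1': 'cat', 't2': 'dog'}

More elaborate schemes weight each worker's vote by an estimate of that worker's skill, as in the skill-estimation approaches mentioned later in this section, but any such scheme multiplies the number of paid completions per task.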
However, having each task completed multiple times increases time and monetary costs.[145] Crowdsourcing quality is also impacted by task design. Lukyanenko et al.[146] argue that the prevailing practice of modeling crowdsourcing data-collection tasks in terms of fixed classes (options) unnecessarily restricts quality. Their results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level (which is typically less useful to sponsor organizations, and hence less common). Further, greater overall accuracy is expected when participants can provide free-form data, compared to tasks in which they select from constrained choices.

Just as limiting, often not enough skill or expertise exists in the crowd to successfully accomplish the desired task. While this scenario does not affect "simple" tasks such as image labeling, it is particularly problematic for more complex tasks, such as engineering design or product validation. In these cases, it may be difficult or even impossible to find qualified people in the crowd, as their voices may be drowned out by consistent, but incorrect, crowd members.[147] However, if the task is even of "intermediate" difficulty, estimating crowdworkers' skills and intentions and leveraging them to infer true responses works well,[148] albeit with an additional computation cost.

Crowdworkers are a nonrandom sample of the population. Many researchers use crowdsourcing to quickly and cheaply conduct studies with larger sample sizes than would otherwise be achievable. However, due to limited access to the Internet, participation in less developed countries is relatively low. Participation in highly developed countries is similarly low, largely because the low pay is not a strong motivation for most users in these countries. These factors lead to a bias in the population pool towards users in moderately developed countries, as ranked by the human development index.[149]

The likelihood that a crowdsourced project will fail due to lack of monetary motivation or too few participants increases over the course of the project. Crowdsourcing markets are not a first-in, first-out queue. Tasks that are not completed quickly may be forgotten, buried by filters and search procedures so that workers do not see them. This results in a long-tail power-law distribution of completion times.[150] Additionally, low-paying research studies online have higher rates of attrition, with participants not completing the study once started.[119]

Even when tasks are completed, crowdsourcing does not always produce quality results. When Facebook began its localization program in 2008, it encountered some criticism for the low quality of its crowdsourced translations.[151]

One of the problems of crowdsourcing products is the lack of interaction between the crowd and the client. Usually little information is known about the final desired product, and often very limited interaction with the final client occurs. This can decrease the quality of the product, because client interaction is a vital part of the design process.[152] An additional cause of decreased product quality from crowdsourcing is the lack of collaboration tools. In a typical workplace, coworkers are organized in such a way that they can work together and build upon each other's knowledge and ideas.
Furthermore, the company often provides employees with the necessary information, procedures, and tools to fulfill their responsibilities. However, in crowdsourcing, crowdworkers are left to depend on their own knowledge and means to complete tasks.[144]

A crowdsourced project is usually expected to be unbiased by incorporating a large population of participants with diverse backgrounds. However, most crowdsourcing work is done by people who are paid or who directly benefit from the outcome (e.g., most open source projects working on Linux). In many other cases, the end product is the outcome of a single person's endeavour, with that person creating the majority of the product while the crowd only participates in minor details.[153]

Entrepreneurs contribute less capital themselves

To turn an idea into reality, the first component needed is capital. Depending on the scope and complexity of the crowdsourced project, the amount of necessary capital can range from a few thousand dollars to hundreds of thousands, if not more. The capital-raising process can take from days to months, depending on different variables including the entrepreneur's network and the amount of initial self-generated capital. The crowdsourcing process gives entrepreneurs access to a wide range of investors who can take different stakes in the project.[154] In effect, crowdsourcing simplifies the capital-raising process and allows entrepreneurs to spend more time on the project itself and on reaching milestones rather than dedicating time to getting it started. Overall, the simplified access to capital can save the time needed to start projects and potentially increase the efficiency of projects.

Opponents argue that easier access to capital through a large number of smaller investors can hurt the project and its creators. With a simplified capital-raising process involving more investors with smaller stakes, investors are more risk-seeking because they can take on an investment size with which they are comfortable.[154] This leads to entrepreneurs losing possible experience in convincing investors who are wary of potential risks in investing, because they no longer depend on one single investor for the survival of their project. Instead of being forced to assess risks and convince large institutional investors why their project can be successful, wary investors can be replaced by others who are willing to take on the risk.

There are translation companies and users of translations who purport to use crowdsourcing as a means of drastically cutting costs, instead of hiring professional translators. This situation has been systematically denounced by IAPTI and other translator organizations.[155]

Increased number of funded ideas

The raw number of ideas that get funded, and the quality of those ideas, is a large controversy over the issue of crowdsourcing. Proponents argue that crowdsourcing is beneficial because it allows niche ideas that would not survive venture capitalist or angel funding, many times the primary investors in startups, to be started. Many ideas are killed in their infancy due to insufficient support and lack of capital, but crowdsourcing allows these ideas to be started if an entrepreneur can find a community to take interest in the project.[156] Crowdsourcing allows those who would benefit from the project to fund it and become a part of it, which is one way for small niche ideas to get started.[157] However, when the raw number of projects grows, the number of possible failures can also increase.
Crowdsourcing assists niche and high-risk projects to start because of a perceived need from a select few who seek the product. With high risk and small target markets, the pool of crowdsourced projects faces a greater possible loss of capital, lower returns, and lower levels of success.[158]

Concerns

Because crowdworkers are considered independent contractors rather than employees, they are not guaranteed minimum wage. In practice, workers using the Amazon Mechanical Turk generally earn less than the minimum wage, with US users earning an average of $2.30 per hour for tasks in 2009, and users in India earning an average of $1.58 per hour, which is below minimum wage in the United States (but not in India).[124][159] Some researchers who have considered using Mechanical Turk to get participants for research studies have argued that the wage conditions might be unethical.[160] However, according to other research, workers on Amazon Mechanical Turk do not feel that they are exploited and are ready to participate in crowdsourcing activities in the future.[161] When Facebook began its localization program in 2008, it received criticism for using free labor in crowdsourcing the translation of site guidelines.[151]

Typically, no written contracts, nondisclosure agreements, or employee agreements are made with crowdworkers. For users of the Amazon Mechanical Turk, this means that requesters decide whether users' work is acceptable and reserve the right to withhold pay if it does not meet their standards.[142] Critics say that crowdsourcing arrangements exploit individuals in the crowd, and a call has been made for crowds to organize for their labor rights.[130][162]

Collaboration between crowd members can also be difficult or even discouraged, especially in the context of competitive crowdsourcing. The crowdsourcing site InnoCentive allows organizations to solicit solutions to scientific and technological problems; only 10.6% of respondents report working in a team on their submission.[127] Amazon Mechanical Turk workers collaborated with academics to create a platform, WeAreDynamo.org, that allows them to organize and create campaigns to better their work situation.[163]

Irresponsible crowdsourcing

The popular forum website Reddit came under the spotlight during the first few days after the Boston Marathon bombing, as the events showed how powerful social media and crowdsourcing could be. Reddit users were able to help many victims of the bombing, sending relief and even opening up their homes, all communicated very efficiently on the site. However, Reddit soon came under fire after users started to crowdsource information on the possible perpetrators of the bombing. While the FBI received thousands of photos from average citizens, the website also started to focus on crowdsourcing its own investigation using the information being gathered. Eventually, Reddit members claimed to have found four bombers, but all were innocent, including a college student who had committed suicide a few days before the bombing. The problem was exacerbated when the media also started to rely on Reddit as a source of information, allowing the misinformation to spread almost nationwide. The FBI has since warned the media to be more careful about where they get their information, but Reddit's investigation and its false accusations opened up questions about what should be crowdsourced and about the unintended consequences of irresponsible crowdsourcing.
See also

Citizen science
Clickworkers
Collaborative innovation network
Collective consciousness
Collective intelligence
Collective problem solving
Commons-based peer production
Crowd computing
Crowdcasting
Crowdfixing
Crowdsourcing software development
Distributed thinking
Distributed Proofreaders
Flash mob
Gamification
Government crowdsourcing
List of crowdsourcing projects
Microcredit
Open value network
Participatory democracy
Participatory monitoring
Smart mob
Social collaboration
"Stone Soup"
TrueCaller
Virtual Collective Consciousness
Virtual volunteering
Wisdom of the crowd

References

1. Safire, William (February 5, 2009). "On Language". New York Times Magazine. Retrieved May 19, 2013.
2. Schenk, Eric; Guittard, Claude (2009). Crowdsourcing: What Can Be Outsourced to the Crowd, and Why?
3. Hirth, Matthias; Hoßfeld, Tobias; Tran-Gia, Phuoc (2011). Anatomy of a Crowdsourcing Platform - Using the Example of Microworkers.com. 5th IEEE International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS 2011). doi:10.1109/IMIS.2011.89
4. Estellés-Arolas, Enrique; González-Ladrón-de-Guevara, Fernando (2012). "Towards an Integrated Crowdsourcing Definition" (PDF). Journal of Information Science 38(2): 189-200. doi:10.1177/0165551512437638
5. Howe, Jeff (2006). "The Rise of Crowdsourcing". Wired.
6. Brabham, D. C. (2013). Crowdsourcing. Cambridge, Massachusetts; London, England: The MIT Press.
7. Brabham, D. C. (2008). "Crowdsourcing as a Model for Problem Solving: An Introduction and Cases". Convergence: The International Journal of Research into New Media Technologies 14(1): 75-90. doi:10.1177/1354856507084420
8. Prpić, J.; Shukla, P. (2016). Crowd Science: Measurements, Models, and Methods. In Proceedings of the 49th Annual Hawaii International Conference on System Sciences. Kauai, Hawaii: IEEE Computer Society.
9. Buettner, Ricardo (2015). A Systematic Literature Review of Crowdsourcing Research from a Human Resource Management Perspective. 48th Annual Hawaii International Conference on System Sciences. Kauai, Hawaii: IEEE. pp. 4609-4618. ISBN 978-1-4799-7367-5. doi:10.13140/2.1.2061.1845
10. Prpić, J.; Taeihagh, A.; Melton, J. (2015). "The Fundamentals of Policy Crowdsourcing". Policy & Internet 7(3): 340-361. doi:10.1002/poi3.102
11. Schlagwein, Daniel; Bjørn-Andersen, Niels (2014). "Organizational Learning with Crowdsourcing: The Revelatory Case of LEGO". Journal of the Association for Information Systems.
12. Taeihagh, Araz (2017-06-19). "Crowdsourcing, Sharing Economies, and Development". Journal of Developing Societies. doi:10.1177/0169796x17710072
13. Howe, Jeff (June 2, 2006). "Crowdsourcing: A Definition". Crowdsourcing Blog. Retrieved January 2, 2013.
14. "Daren C. Brabham". USC Annenberg. University of Southern California. Retrieved 17 September 2014.
15. Brabham, Daren (2008). "Crowdsourcing as a Model for Problem Solving: An Introduction and Cases" (PDF). Convergence: The International Journal of Research into New Media Technologies 14(1): 75-90. doi:10.1177/1354856507084420. Archived from the original (PDF) on 2012-04-25.
16. Afuah, A.; Tucci, C. L. (2012). "Crowdsourcing as a Solution to Distant Search". Academy of Management Review 37(3): 355-375. doi:10.5465/amr.2010.0146
17. Vukovic, M. (2009). Crowdsourcing for enterprises. In Services-I, 2009 World Conference on (pp. 686-692). IEEE.
18. de Vreede, T.; Nguyen, C.; de Vreede, G. J.; Boughzala, I.; Oh, O.; Reiter-Palmon, R. (2013). A Theoretical Model of User Engagement in Crowdsourcing. In Collaboration and Technology (pp. 94-109). Springer Berlin Heidelberg.
19. Claypole, Maurice (February 14, 2012). "Learning through crowdsourcing is deaf to the language challenge". The Guardian. London.
20. "A Brief History of Crowdsourcing [Infographic]". Crowdsourcing.org. 2012-03-18. Retrieved 2015-07-02.
21. Hearn, Chester G. (2002). Tracks in the Sea. pp. 123 & 246. McGraw Hill. ISBN 0-07-136826-4.
22. "'C'était Paris en 1970'". Etudesphotographiques.revues.org. 1970-04-25. Retrieved 2015-07-02.
23. "'The Big Picture: a New Churning'". indianexpress.com. 2015-01-15. Retrieved 2015-08-19.
24. "UNV Online Volunteering Service | History". Onlinevolunteering.org. Retrieved 2015-07-02.
25. "Wired 14.06: The Rise of Crowdsourcing". Archive.wired.com. 2009-01-04. Retrieved 2015-07-02.
26. Lih, Andrew (2009). The Wikipedia Revolution: How a Bunch of Nobodies Created the World's Greatest Encyclopedia (1st ed.). New York: Hyperion. ISBN 1401303714.
27. Crowdsourcing Back-Up Timeline: Early Stories (tiki-toki.com). Archived November 29, 2014, at the Wayback Machine.
28. "Antoine-Jean-Baptiste-Robert Auget, Baron de Montyon". New Advent. Retrieved February 25, 2012.
29. "It Was All About Alkali". Today's Chemist at Work.
30. "Nicolas Appert". John Blamire.
"9 Examples of Crowdsourcing, Before ‘Crowdsourcing’ Existed" (http://memeburn.com/2011/09/9-exa mples-of-crowdsourcing-before-%E2%80%98crowdsourcing%E2%80%99-existed/), MemeBurn. 32. Pande, Shamni. "The People Know Best" (http://businesstoday.intoday.in/story/crowdsourcing-is-the-ne w-buzzword-in-communications/l/195160.html/). Business Today. India: Living Media India Limited, 33. Vergano, Dsn. "1833 Meteor Storm Started Citizen Science" (http://newswatch.nationalgeographic.eom/2 014/08/30/1833-meteor-storm-started-citizen-science/), Notional Geographic. StarStruck, Retrieved 34. "Gateway to Astronaut Photography of Earth" (http://eol.jsc.nasa.gov/). NASA. 35. McLaughlin, Elliot. "Image Overload: Help us sort it all out, NASA requests" (http://www.cnn.com/201 4/08/17/tech/nasa-earth-images-help-needed/). Cm.com. CNN. Retrieved 18 September 2014. 36. Despres, Jacques; Hadjsaid, Nouredine; Criqui, Patrick; Noirot, Isabelle (1 February 2015), “Modelling the impacts of variable renewable sources on the power sector: reconsidering the typology of energy modelling tools". Energy. 80: 486-495, ISSN 0360-5442 (https://www.worldcat.org/issn/0360-5442). doi:10.1016/j.energy.2014.12.005 (https://doi.org/10.1016%2Fj.energy.2014.12.005). 37. "OpenEI — Energy Information, Data, and other Resources" (http://en.0penei.01g). OpenEL Retrieved 2016-09-26, 38. Garvin, Peggy (12 December 2009), "New Gateway: Open Energy Info" (http://govinfo.sla.org/2009/12/ 12/new-gateway-open-energy-info/), SLA Government Information Division. Dayton, OH, USA. Retrieved 2016-09-26. 39. Brodt-Giles, Debbie (2012). WREF 2012: OpenEI — an open energy data and information exchange for international audiences (https://ases.conference-services.net/resources/252/2859/pdf/SOLAR2012_0677 _full%20paper.pdf) (PDF). Golden, CO, USA: National Renewable Energy Laboratory (NEEL). Retrieved 2016-09-24, 40. Davis, Chris; Chmiehauskas, Aifredas; Dijkema, Gerard; Nikolic, Igor, "Enipedia" (http://enipedia.tudelf t.nl). Delft, The Netherlands: Energy and Industry group, Faculty of Technology, Policy and Management, TU Delft, Retrieved 2016-16-07, 41. Davis, Chris (2012), Making sense of open data: from raw data to actionable insight — PhD thesis (htt p://enipedia.tudelft.nl/thesis/ChrisDavisPhD_MakingSenseOfOpenData.pdf) (PDF). Delft, The Netherlands: Delft University of Technology. Retrieved 2016-07-21. Chapter 9 discusses in depth the initial development of Enipedia, 42. "What Is the Four-Generation Program?" (https://www.lds.org/ensign/1972/03/what-is-the-four-generatio n-program?lang=eng). The Church of Jesus Christ of Latter-day Saints, Retrieved January 30, 2012, 43, Bonney, R. and LaBranche, M. (2004), Citizen Science: involving the Public in Research. ASTC Dimensions, May (June 2004, p, 13. 44, Baresto, Fasfovsky, D.; Sheehan, P. (2003), "A Model for integrating the Public into Scientific Research”. Journal of Geoscience Education. 50 (! }: 71-75. 45, McCaffrey, R.E. (2005). "Using Citizen Science in Urban Bird Studies”. Urban Habitats. 3 (1): 70-86, 46. King, Tori Ey Jobfing, Mark A. (2009). "Whads in a name? Y chromosomes, surnames and the genetic genealogy revolution”, Itends in Genetics, IS (8): 351-60. 
PMID 19665817. doi:10.1016/j.tig.2009.06.003. "The International Society of Genetic Genealogy (www.isogg.org) advocates the use of genetics as a tool for genealogical research, and provides a support network for genetic genealogists. It hosts the ISOGG Y-haplogroup tree, which has the virtue of being regularly updated."
47. Mendez, Fernando; et al. (28 February 2013). "An African American Paternal Lineage Adds an Extremely Ancient Root to the Human Y Chromosome Phylogenetic Tree". The American Journal of Human Genetics 92: 454-459. The American Society of Human Genetics. PMC 3591855. PMID 23453668. doi:10.1016/j.ajhg.2013.02.002. Retrieved 10 July 2013.
48. Wells, Spencer (2013). "The Genographic Project and the Rise of Citizen Science". Southern California Genealogical Society (SCGS). Archived from the original on 2013-07-10.
49. Aitamurto, Tanja (2015). "Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change and Peer-Learning". International Journal of Communication.
50. Aitamurto, Tanja (2016). "Crowdsourcing as a Knowledge-Search Method in Digital Journalism: Ruptured Ideals and Blended Responsibility". Digital Journalism 4: 280-297. doi:10.1080/21670811.2015.1034807
51. Aitamurto, Tanja. "Balancing between open and closed: co-creation in magazine journalism". Digital Journalism 1(2): 229-251. doi:10.1080/21670811.2012.750150
52. Keuleers; et al. (Feb 2015). "Word knowledge in the crowd: Measuring vocabulary size and word prevalence in a massive online experiment". Quarterly Journal of Experimental Psychology 68: 1665-1692. doi:10.1080/17470218.2015.1022560
53. "History of the Christmas Bird Count | Audubon". Birds.audubon.org. Retrieved 2015-07-02.
54. Audubon "thank you" page (audubon.org). Archived August 24, 2014, at the Wayback Machine.
55. Aitamurto, Tanja (2012). Crowdsourcing for Democracy: New Era in Policy-Making. Committee for the Future, Parliament of Finland. pp. 10-30. ISBN 978-951-53-3459-6.
56. Prpić, J.; Taeihagh, A.; Melton, J. (2014). "Crowdsourcing the Policy Cycle". Collective Intelligence 2014, MIT Center for Collective Intelligence (PDF). Humancomputation.com. Retrieved 2015-07-02.
57. Prpić, J.; Taeihagh, A.; Melton, J. (2014). "A Framework for Policy Crowdsourcing".
Oxford Internet Institute, University of Oxford - IPP 2014 - Crowdsourcing for Politics and Policy (PDF).
58. Prpić, J.; Taeihagh, A.; Melton, J. (2014). "Experiments on Crowdsourcing Policy Assessment". Oxford Internet Institute, University of Oxford - IPP 2014 - Crowdsourcing for Politics and Policy (PDF). Ipp.oii.ox.ac.uk.
59. Thapa, B.; Niehaves, B.; Seidel, C.; Plattfaut, R. (2015). "Citizen involvement in public sector innovation: Government and citizen perspectives". Information Polity. pp. 3-17. doi:10.3233/IP-150351
60. Aitamurto and Landemore. "Five design principles for crowdsourced policymaking: Assessing the case of crowdsourced off-road traffic law reform in Finland". Journal of Social Media for Organizations (1): 1-19.
61. Aitamurto, Landemore, Galli (2016). "Unmasking the Crowd: Participants' Motivation Factors, Profile and Expectations for Participation in Crowdsourced Policymaking". Information, Communication & Society.
62. Aitamurto, Chen, Cherif, Galli and Santana (2016). "Making Sense of Crowdsourced Civic Data with Big Data Tools". Academic Mindtrek — via ACM Digital Archive.
63. Aitamurto, Tanja. Crowdsourcing for Democracy: New Era in Policymaking. Committee for the Future, Parliament of Finland. ISBN 978-951-53-3459-6.
64. http://iscram2015.uia.no/wp-content/uploads/2015/05/8-9.pdf
65. Andro, M.; Saleh, I. (2017). "Digital Libraries and Crowdsourcing: A Review". pp. 135-162. In Collective Intelligence and Digital Archives: Towards Knowledge Ecosystems. Edited by Samuel Szoniecky and Nasreddine Bouhaï. Wiley / ISTE. ISBN 9781786300607. 260 p.
66. DeVun, Leah (November 19, 2009). "Looking at how crowds produce and present art". Wired News. Archived from the original on 2012-10-24. Retrieved February 26, 2012.
67. Ess, Henk van. "Crowdsourcing: how to find a crowd". ARD ZDF Akademie 2010, Berlin, p. 99.
68. Doan, A.; Ramakrishnan, R.; Halevy, A. (2011). "Crowdsourcing Systems on the World Wide Web" (PDF). Communications of the ACM 54(4): 86-96. doi:10.1145/1924421.1924442
69. Brabham, Daren C. (2013). Crowdsourcing. MIT Press. p. 45.
70. Howe, Jeff (2008). "Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business" (PDF). The International
71. Robson, John (February 24, 2012).
"IEM Demonstrates the Political Wisdom of Crowds" (http://tippie.ui owa.edu/iem/media/story.cfm?id=2793), Canoe.ca. Retrieved March 31, 2012, "4 Great Examples of Crowdsourcing through Social Media" (http://www.digitalagencymarketing.eom/2 012/03/4-great-examples-of-social-crowdsourcing/). digitaiagencyrnarketing.com, 2012. Goldberg, Ken; Newsom, Gavin, "Let's amplify California's collective intelligence" (http://citris-uc.org/le ts-amplify-californias-collective-intelligence-op-ed-ken-goldberg-gavin-newsom-california-report-card/). 74. Escoiiier, N, and B, McKelvey (2014). "Using "Crowd-Wisdom. Strategy" to Co-Create Market Value: Proof-of-Concept from the Movie industry." in International Perspective on Business innovation and Disruption in the Creative industries: Film, Video, Photography, R Wikstrom and R, DeFhiippi, eds., UK: Edward Elgar Publishing Ltd, Chap. 11. 75. Block, A. B. (2009). “How boxoffice trading could flop," The Hollywood Reporter, (April 22). 76. Chen, A. and R. Panaiigan (2013). "Quantifying movie magic with Google search." Google White Paper, industry Perspectives HJser Insights 77. Canard, C. (2010). "The Movie Research Experience gets audiences involved in filmmaking." The Daily Bruin, (July 19) 78. "Compete To Create Your Dream Home" (http://www.fastcoexist.com/1682162/a-site-that-lets-designers- compete-to-create-your-dream-home). FastCoexist.com. June 4, 2013. Retrieved 2014-02-03. 79. "Designers, clients forge ties on web" (http://bostonherald.com/business/technology/technology_news/20 12/06/designers_clients_forge_ties_web), Boston Herald. June 11, 2012. Retrieved 2014-02-03. 80. Stan Nussbaum,. 2003. Proverbial perspectives on pluralism. Connections: (he journal of the WE A Missions Committee October, pp. 30, 31, 81. "Oromo dictionary project" (http://oromodictionary.com/index.php), OromoDictionary.com, Retrieved 2014-02-03. 82, "Description of WeSay software and process" (http://scholarspace.manoa.hawaii.edu/bitstream/handle/10 125/1368/10albrightsmall.pdf) (PDF), Retrieved 2014-02-03. 83, "Developing ASL vocabulary for science and math" (http://www.washington.edu/news/2012/12/07/crow dsourcing-sit-compiles-new-sign-language-for-math-and-science/). Washington,edu. December 7, 2012. 84. "Pashto Proverb Collection project" (http://www.afghanproverbs.com/the_pashto_proverbs_project), AfghanProverbs.com. Retrieved 2014-02-03. 85. "Comparing methods of collecting proverbs" (http://www.gial.edu/images/gialens/vol8-3/Unseth_collecti ng_proverbs.pdf) (PDF), glal.edu. 86. Edward Zellem. 2014, Mataiuna: iSl Afghan Pashto Proverbs. Tampa, FL: Culture Direct, 87. "Web 2.0-based crowdsourcing for high-quality gold standard development in clinical Natural Language Processing" (http://www.jmir.Org/2013/4/e73/), Jmir.org, doi:10.2196/jmir.2426 (https://doi.org/10.219 6%2Fjmir.2426), Retrieved 2014-02-03. 88. Lombard, Amy (May 5, 2013), "Crowdfynd: The First Place to Look" (http://techland.time.com/2013/05/ 02/five-noteworthy-startups-from-techcrunch-disrupt-ny/slide/crowdfynd-the-first-place-to-look/). 89. Prive, Tanya. "What Is Crowdfunding And How Does It Benefit The Economy" (http://www.forbes.com/ sites/tanyaprive/2012/ll/27/what-is-crowdfunding-and-how-does-it-benefit-the-economy/). Forbes.com. 
90. Choy, Katherine; Schlagwein, Daniel (2016). "Crowdsourcing for a better world: On the relation between IT affordances and donor motivations in charitable crowdfunding" (PDF). Information Technology & People.
91. Barnett, Chance. "Crowdfunding Sites In 2014". Forbes.com. Retrieved 2015-07-02.
92. Agrawal, Ajay; Catalini, Christian; Goldfarb, Avi (2014). "Some Simple Economics of Crowdfunding". National Bureau of Economic Research: 63-97.
93. "Mobile Crowdsourcing". Clickworker. Retrieved 10 December 2014.
94. Thebault-Spieker; Terveen; Hecht. Avoiding the South Side and the Suburbs: The Geography of Mobile Crowdsourcing Markets.
95. Chatzimilioudis; Konstantinidis; Laoudias; Zeinalipour-Yazti. "Crowdsourcing with smartphones" (PDF).
96. MIST: Fog-based Data Analytics Scheme with Cost-Efficient Resource Provisioning for IoT Crowdsensing Applications. doi:10.1016/j.jnca.2017.01.012
97. Yang, J.; Adamic, L.; Ackerman, M. (2008). "Crowdsourcing and Knowledge Sharing: Strategic User Behavior on Taskcn" (PDF). Proceedings of the 9th ACM Conference on Electronic Commerce.
98. Gadiraju, U.; Kawase, R.; Dietze, S. (2014). "A Taxonomy of Microtasks on the Web" (PDF). Proceedings of the 25th ACM Conference on Hypertext.
99. Felstiner, Alek (August 2011). "Working the Crowd: Employment and Labor Law in the Crowdsourcing Industry" (PDF). Berkeley Journal of Employment & Labor Law.
100. "View of Crowdsourcing: Libertarian Panacea or Regulatory Nightmare?". online-shc.com. Retrieved 2017-05-26.
101. Leimeister, J.M.; Huber, M.; Bretschneider, U.; Krcmar, H. (2009). "Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition". Journal of Management Information Systems 26(1): 197-224. doi:10.2753/mis0742-1222260108
102. Ebner, W.; Leimeister, J.; Krcmar, H. (2009). "Community Engineering for Innovations: The Ideas Competition as a method to nurture a Virtual Community for Innovations". R&D Management 39(4): 342-356. doi:10.1111/j.1467-9310.2009.00564.x
103. "DARPA Network Challenge". Archived from the original on August 11, 2011. Retrieved November 28, 2011.
104. "Social media web snares 'criminals'". New Scientist. Retrieved April 4, 2012.
105. "Beyond XPrize: The 10 Best Crowdsourcing Tools and Technologies". February 20, 2012.
106. Kittur, A.; Chi, E.H.; Suh, B.
(2008), "Crowdsourcing user studies with Mechanical Turk" (http://www-u sers.cs.umn.edu/~echi/papers/2008-CHI2008/2008-02-mech-turk-online-experiments-chil049-kittur.pdf) (PDe), L>til JU08 107. van der Krieke; ei ah (2015). "MowNutsAreTheDutch (floeGeklsNL); A crowdsourcing study of mental symptoms and strengths", internadona; Journal of Methods in Psychiatric Research. 25 (2): 123-144, PMID 26395198 (https://www.ncbi.nlm.nih.gov/pubmed/26395198), doi;10.1002/mpr.l495 (https://doi.o rg/10.1002%2Fmpr,1495). 108. Prpic, J. (2015), “Health Care Crowds: Collective Intelligence in Public Health, Collective Intelligence 2015, Center for the Study of Complex Systems, University of Michigan,", Pa.oers.ssrn,com. SSRN 2570593 (https://ssrn.com/abstract=2570593)@. 109. van der Krieke, L; Blaauw, EJ; Emerencia, AC: Schenk, 1IM; Slaets, JO; Bos, EH; de Jonge, P; Jeronlmus, BP (2016). “lemporal Dynamics of Health and Well-Being: A Crowdsourcing Approach to Momentary Assessments and Automated Generation of Personalized Feedback (2016)". Psychosomatic Medicine: 1. PMID 27551988 (https://www.ncbi.nlm.nih.gov/pubmed/27551988). doi:10.1097/PSY.0000000000000378 (https://doi.org/10.1097%2FPSY.0000000000000378). 110. Rahman, Mahhubur; Blackwell, Brenna: Banerjee, Niianjan; Dharmendra, Saraswat (2015), "Smartphone-based hierarchical crowdsourcing for weed identification" (http://dl.acm.org/citation.cfmPid =2784520), Computers and Electronics in Agriculture: 14-23, retrieved 12 August 2015 111. Primarily on the Bridge Winners website (http://bridgewinners.com/article/series/2015-cheating-scandal/) 112. Noveck, Beth Simone (2009), Wiki Governme.nl:: How Technology Can Make Government Better, Democracy Stronger and Citizens More Powerful, Brookings Institution Press 113. Sarasua, Cristina; Simper!, Elena; Noy, Natalya E (2012), "Crowdsourcing Ontology Alignment with Microtasks" (http://web.stanford.edu/~natalya/papers/iswc2012_crowdmap.pdf) (PDF), Institute AIFB. 114, "Crowdfunding and Civic Society in Europe: A Profitable Partnership?" (http://www.academia.edu/3415 172/Crowdfunding_and_Civic_Society_in_Europe_A_Profitable_Partnership), Open Citizenship 115, Federal Transit Administration Public Transportation Participation Pilot Program (https://web.archive.o rg/web/20090107140521/http://www.fta.dot.gov./planning/programs/planning_environment_8711.html), U.S. Department of Transportation, archived from the original (http://www.fta.dot.gov/planning/program s/planning_environment_8711.html) on January 7, 2009 116. Peer-to-Patent Community Patent Review Project (http://www.peertopatent.org/), Peer-to-Patent Cailison-Burch, C,; Dredze, M, (2010), "Creating Speech and Language Data With Amazon’s Mechanical Turk" (http://www.aclweb.Org/anthology-new/W/W10/W10-0701.pdf) (PDF), Human McGmw, I,: Seneil, S. (2011), "Growing a Spoken Language Interface on Amazon Mechanical Turk" (htt p://people.csail.mit.edu/jrg/2011/McGraw_Interspeechll.pdf) (PDF), Imerspeech: 3057 3060 Mason, W.: Suri. S, (2010), "Conducting Behavioral Research on Amazon's Mechanical Turk", Behavior Research Methods, SSRN 1691163 (https://ssm.com/abstract=1691163)H 120. Koblin, A. (2008), "The sheep market" (http://dl.acm.oig/citation.cfm?id=1640348), Creativity and 121. "explodingdog 2015" (http://www.explodingdog.com/). Explodingdog.com. Retrieved 2015-07-02. 122. (driver, D. 
(2010), Crowdsourcing and the Evolving Relationship between Art and Artist (http://www.cro wdsourcing.org/document/crowdsourcing-and-the-evolving-relationship-between-artist-and-audience/55 15) 123. "Why" (http://www.inrix.com/companyoverview.asp). INRIX.com, 2014-09-13. Retrieved 2015-07-02, 124. Ross, J,; Irani, L„; Silberm&n, M,S,; Zaldivar, A,; Tomlinson, B, (2010), "Who are the Crowdworkers? Shifting Demographics in Mechanical Turk" (https://web.archive.Org/web/20120330170503/http://www.i cs.uci.edu/~jwross/pubs/RossEtAl-WhoAreTheCrowdworkers-altCHI2010.pdf) (PDF). CHI 2010. Archived from the original (http://www.ics.uci.edu/~jwross/pubs/RossEtAl-WhoAreTheCrowdworkers-a ltCHI2010.pdf) (PDF) on March 30, 2012, 125. Hirth, M,; HoEleld, lid Train-Gia, P (2011), Human Cloud as Emerging Internet Application - Anatomy of the Microworkers Crowdsourcing Platform (http://www3.informatik.uni-wuerzburg.de/TR/tr478.pdf) Brabham, Daren C. (2008), "Moving the Crowd at iStockphoto: The Composition of the Crowd and Motivations for Participation in a Crowdsourcing Application" (http://firstmonday.org/htbin/cgiwrap/bin/ oj s/index.php/fm/article/view/2159/1969). First Monday. Lakhani; et. al. (2007). "The Value of Openness in Scientific Problem Solving" (http://www.hbs.edu/resea rch/pdf/07-050.pdf) (PDF). Retries Brabham, Daren C, (2012). "Managing Unexpected Publics Online: The Challenge of Targeting Specific Groups with the Wide-Reaching Tool of the Internet" (http://ijoc.org/ojs/index.php/ijoc/article/view/154 Brabham, Daren C, (2010). "Moving the Crowd at Threadless: Motivations for Participation in a Crowdsourcing Application" (http://www.tandfonline.eom/doi/abs/10.1080/13691181003624090). Information, Communication & Society, 13: 1122 1145. doi:10.1080/13691181003624090 (https://doi.or g/10.1080%2F13691181003624090), Brabham, Daren C. (2012 ). "The Myth of Amateur Crowds: A Critical Discourse Analysis of Crowdsourcing Coverage" (http://www.tandfonline.com/doi/abs/10.1080/1369118X.2011.641991). Information, Communication & Society. 15: 394 410. doi:10.1080/1369118X.2011.641991 (https://doi.or g/10.1080%2F1369118X.2011.641991), Saxton, Oh, & Kishore (2013). "Rules of Crowdsourcing: Models, Issues, and Systems of Control." (htt p://www.tandfonhne.com/doi/full/10.1080/10580530.2013.739883#tabModule), Information Systems Management. 30: 2-20. doi:10.1080/10580530.2013.739883 (https://doi.org/10.1080%2F10580530.201 3.739883), Aitarnurto, Tan]a (2015). "Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change, and Peer Learning" (http://crowdsourcinginjournalism.com/2015/10/28/motivation-factors-in-cr owdsourced-journalism-social-impact-social-change-and-peer-learning/). International Journal of Aitarnurto, Landemore, Galli (2016). "Unmasking the Crowd: Participants' Motivation Factors, Profile and Expectations for Participation in Crowdsourced Policymaking" (http://thefinnishexperiment.com/201 6/09/21/motivation-factors-for-participation-m-crowdsourced-policymaking/). Information, •"V in, Ng Schulze, Tg Viet, D. (2011), "More than fun and money. Worker Motivation in Crowdsourcing - A Study on Mechanical Turk" (http://schader.bwl.uni-mannheim.de/fileadmin/files/pub likationen/Kaufmann_Schulze_Veit_2011_-_More_than_fun_and_money_Worker_motivation_m_Crowd sourcmg_-_A_Study_on_Mechamcal_Turk_AMCIS_2011.pdf) (PDF). Proceedings of the Seventeenth 135. Brabham, Daren C, (2012). 
"Motivations for Participation in a Crowdsourcing Application to Improve Public Engagement in Transit Planning" (http://www.tandfonline.eom/doFabs/10.1080/00909882.2012.6 93940). Journo/ of Applied Communication Research. 40; 307-328. doi:10.1080/00909882.2012.693940 (https://doi.org/10.1080%2F00909882.2012.693940). 136. Lieisala, Kalrk Joutsen, Atle (2007). "Hang-a-rounds and True Believers: A Case Analysis of the Roles and Motivational Factors of the Star Wreck cans". MindTrek 2007 Conference Proceedings, 137. "State of the World’s Volunteerism Report 2011" (http://www.unv.org/fileadmin/docdb/pdf/2011/SWVR/ English/SWVR2011_full.pdf) (PDF). Uav.org, Retrieved 2015-07-01. 138. Chandler, Dg Kapelner, A. (2010). "Breaking Monotony with Meaning: Motivation in Crowdsourcing Markets" (http://www.danachandler.com/files/Chandler_Kapelner_BreakingMonotonyWithMeaning.pdf) Aparicio, Mg Costa, C; Braga, A, (2012), "Proposing a system to support crowdsourcing" (https://www.r esearchgate.net/profile/Manuela_Aparicio/publication/232659015_Proposing_a_system_to_support_cro wdsourcing/links/02bfe5127344436cbf000000/Proposing-a-system-to-support-crowdsourcing.pdf?origin publication detail) (PDF). OSDOC 22 Proceedings of the Workshop on Open Source and Design of 140. Aitarnurto, Landemore, Galli (2016). "Unmasking the Crowd: Participants’ Motivation Factors, Expectations, and Profile in a Crowdsourced Law Reform." (http://thefinnishexperiment.eom/2016/09/2 1/motivation-factors-for-participation-in-crowdsourced-policymaking/). Information, Communication & Socierv. Quinn, Alexander Jg Rederson, Benjamin B. (2011), "Human ComputatiomA Survey and Taxonomy of a Growing Field, CHI 2011 [Computer Human Interaction conference], May 7-12, 2011, Vancouver, BC, Canada" (http://alexquinn.org/papers/Human%20Computation,%20A%20Survey%20and%20Taxonom y%20of%20a%20Growing%20Field%20(CHI%202011).pdf) (PDF). Retrieved ■\J : .1.42, Paolacd, G: Chandler, I; Ipeirotis, P.G. (2010). "Running experiments on Amazon Mechanical Turk" (htt p://hdl.handle.net/1765/31983). Judgment ond Decision Making. 5 (5): 411-419. 143. Prpic, J; Shnkia, !?.; Roth, Y.; Lemoine, j,F. (2015). “A Geography of Participation in IT-Mediated Crowds", F>roceedi.ngs of the Hawaii International Conference on Systems Sciences 2015. SSRN 2494537 (https://ssrn.com/abstractM494537) §. 144. Borst, Irma. "The Case For and Against Crowdsourcing: Part 2" (http://www.crowdsourcing.org/editoria l/the-case-for-and-against-crowdsourcing-part-2/2850). Retrieved 2015-02 -09. 145. Ipeirotis; Provost; Wang (2010), "Quality Management on Amazon Mechanical Turk" (http://people.ster n.nyu.edu/panos/publications/hcomp2010.pdf) (PDF ). 146. Lukyanenko. Roman; Parsons, Jeffrey; Wiersrna, Yolanda (2014), "The IQ of the Crowd: Understanding and improving information Quality in Structured User-Generated Content". Information Systems Research. 25 (4): 669 -689. doi:10“l287/isre.2014.0537 (https://doi.oig/10.1287%2Fisre.2bl4.0537). 147. Remap, Alex; Ren, Aiex J,; Papazoglou, Giannis; Gerth, Richard; Gonzalez, Richard: Papalambros, Pesos. "When Crowdsourcing Fails: A Study of Expertise on Crowdsourced Design Evaluation" (http://o de.engin.umich.edu/publications/PapalambrosPapers/2015/316J.pdf) (PDF), 148. Kurve, Adilya; Miller, David .1.; Kesidis, George (30 May 2014). "Malticategory Crowdsourcing Accounting for Variable Task Difficulty, Worker Skill, and Worker Intention", IEEE KDE (99). 149. 
Hirth; HoSfeld; Tran-Gia (2011), Human Cloud as Emerging Internet Application - Anatomy of the Microworkers Crowdsourcing Platform (http://www3.informatik.uni-wuerzburg.de/TR/tr478.pdf) (PDF) 150. Ipeirotis (2010), "Analyzing the Amazon Mechanical Turk Marketplace". ARDS; Crossroods, The ACM Magazine for Students (PDF). ACM. 17 (2), SSRN 1688194 (https://ssrn.com/abstractM688194)§, doiT0.1145/1870000/1869094 (https://doi.org/10.1145%2F1870000%2F1869094). 151. I losaka, Tomoko A. (April 2008). "Facebook asks users to translate for free" (http://www.msnbc.msn.co m/id/24205912/ns/technology_and_science-internet/), MSNBC. , Britt, Darice. "Crowdsourcing: The Debate Roars On" (http://insite.artinstitutes.edu/crowdsourcing-the-d ebate-roars-on-39739.aspx). Retrieved 2012-12-04. L Woods, Dan (28 September 2009). "The Myth of Crowdsourcing" (http://www.forbes.com/2009/09/28/cr owdsourcing-enterprise-innovation-technology-cio-network-jargonspy.html). 154. "The Promise of Idea Crowdsourcing: Benefits, Contexts, Limitations | Tanja Aitamurto" (http://www.ac ademia.edu/963662/The_Promise_of_Idea_Crowdsourcing_Benefits_Contexts_Limitations). 155. "International Translators Association Launched in Argentina" (http://www.laht.com/article.aspPArticleId =344753&CategoryId=14093). Latin American Herald Tribune. Retrieved 23 November 2016. 156. Kleeman, Frank (2008). "Un(der)paid Innovators: The Commercial Utilization of Consumer Work through Crowdsourcing" (http://www.sti-studies.de/ojs/index.php/sti/article/view/81/62). Sti~studies.de. 157. Jason (2011). "Crowdsourcing: A Million Heads is Better Than One" (http://www.crowdsourcing.org/doc ument/crowdsourcing-a-million-heads-is-better-than-one/8619). Crowdsourcing.org, Retrieved Vj 0fp 158. Dupree, Steven (2014). "Crowdfunding 101: Pros and Cons" (http://www.gsb.stanford.edu/ces/crowdfun ding-101). Gsb.starsford.edu. Retrieved 2015-07-02. 159. "Fair Labor Standards Act Advisor" (http://www.dol.gov/elaws/faq/esa/flsa/001.htm). Retrieved 28 February 2012. 160. Greg Norcie, 2011, “Ethical and practical considerations for compensation of crowdsourced research participants," CHi'WS on Ethics Logs and video lope: Ethics in Large Scale 'Trials & User Generated Content, [4] (http://www.crowdsourcing.org/document/ethical-and-practical-considerations-for-compensa tion-of-crowdsourced-research-participants/3650), accessed 30 June 2015. 161. Busarovs, Aleksejs (2013). "Ethical Aspects of Crowdsourcing, or is it a Modem Form of Exploitation" (http://www.ijeba.com/docirnients/papers/2013_l_pl.pdf) (PDF). International Journal of Economics & Business Administration. 1 (I): 3-14, Retrieved 26 November 2014. 162. The Crowdsourcing Scam (http://www.thebaffler.com/salvos/crowdsourcing-scam) (Dec. 2014), The Baffler, No. 26 163. Salem: et al, (2015). "We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers" (http://www.kristymilland.com/papers/Salehi.2015.We.Are.Dynamo.pdf) (PDF). Retrieved June 16, 2015, External links ■ J| Crowdsourcing at Wikibooks ■ || Media related to Crowdsourcing at Wikimedia Commons Retrieved from "https://en.wikipedia.Org/w/index.php?title=Crowdsourcing&oldid=793219794" ■ This page was last edited on 31 July 2017, at 11:33. ■ Text is available under the Creative Commons Attribution-Share Alike License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. 