Journal articles on the topic 'Crowdsourcing experiments'


Consult the top 50 journal articles for your research on the topic 'Crowdsourcing experiments.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ramírez, Jorge, Burcu Sayin, Marcos Baez, Fabio Casati, Luca Cernuzzi, Boualem Benatallah, and Gianluca Demartini. "On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–34. http://dx.doi.org/10.1145/3479531.

Abstract:
Crowdsourcing is being increasingly adopted as a platform to run studies with human subjects. Running a crowdsourcing experiment involves several choices and strategies to successfully port an experimental design into an otherwise uncontrolled research environment, e.g., sampling crowd workers, mapping experimental conditions to micro-tasks, or ensuring quality contributions. While several guidelines inform researchers in these choices, guidance on how and what to report from crowdsourcing experiments has been largely overlooked. If under-reported, implementation choices constitute sources of variability that can affect the experiment's reproducibility and prevent a fair assessment of research outcomes. In this paper, we examine the current state of reporting of crowdsourcing experiments and offer guidance to address the associated reporting issues. We start by identifying sensible implementation choices, relying on existing literature and interviews with experts, and then extensively analyze the reporting of 171 crowdsourcing experiments. Informed by this process, we propose a checklist for reporting crowdsourcing experiments.
2

Danilchuk, M. V. "THE POTENTIAL OF THE CROWDSOURCING AS A METHOD OF LINGUISTIC EXPERIMENT." Bulletin of Kemerovo State University, no. 4 (December 23, 2018): 198–204. http://dx.doi.org/10.21603/2078-8975-2018-4-198-204.

Abstract:
The present research considers crowdsourcing as a method of linguistic experiment. The paper features an experiment with the following algorithm: 1) problem statement, 2) development, and 3) questionnaire testing. The paper includes recommendations on crowdsourcing project organization, as well as some issues of respondents’ motivation, questionnaire design, choice of crowdsourcing platform, data export, etc. The linguistic experiment made it possible to obtain data on the potential of phonosemantic analysis in solving naming problems in marketing. The associations of the brand name designer matched those of the majority of the Internet panellists. The experiment showed that crowdsourcing proves to be a readily available method within the network society. It gives an opportunity to receive objective data and demonstrates high research capabilities. The described procedure of the crowdsourcing project can be used in various linguistic experiments.
3

Makiguchi, Motohiro, Daichi Namikawa, Satoshi Nakamura, Taiga Yoshida, Masanori Yokoyama, and Yuji Takano. "Proposal and Initial Study for Animal Crowdsourcing." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 2 (September 5, 2014): 40–41. http://dx.doi.org/10.1609/hcomp.v2i1.13185.

Abstract:
We focus on animals as a resource of processing capability in crowdsourcing and propose Animal Crowdsourcing (which we also call "Animal Cloud"), which resolves problems through cooperation between computers and humans or animals. This paper gives an overview of Animal Crowdsourcing and reports the interim results of our learning experiments using rats (Long-Evans rats) to verify the feasibility of Animal Crowdsourcing.
4

Lutz, Johannes. "The Validity of Crowdsourcing Data in Studying Anger and Aggressive Behavior." Social Psychology 47, no. 1 (January 2016): 38–51. http://dx.doi.org/10.1027/1864-9335/a000256.

Abstract:
Crowdsourcing platforms provide an affordable approach for recruiting large and diverse samples in a short time. Past research has shown that researchers can obtain reliable data from these sources, at least in domains of research that are not affectively involving. The goal of the present study was to test whether crowdsourcing platforms can also be used to conduct experiments that involve inducing aversive affective states. First, a laboratory experiment with German university students was conducted in which a frustrating task induced anger and aggressive behavior. This experiment was then replicated online using five crowdsourcing samples. The results suggest that participants in the online samples reacted to the anger manipulation very similarly to participants in the laboratory experiment. However, effect sizes were smaller in crowdsourcing samples with non-German participants, while a crowdsourcing sample with exclusively German participants yielded virtually the same effect size as the laboratory experiment.
5

Jiang, Ming, Zhiqi Shen, Shaojing Fan, and Qi Zhao. "SALICON: a web platform for crowdsourcing behavioral experiments." Journal of Vision 17, no. 10 (August 31, 2017): 704. http://dx.doi.org/10.1167/17.10.704.

6

Della Mea, Vincenzo, Eddy Maddalena, and Stefano Mizzaro. "Mobile crowdsourcing: four experiments on platforms and tasks." Distributed and Parallel Databases 33, no. 1 (October 16, 2014): 123–41. http://dx.doi.org/10.1007/s10619-014-7162-x.

7

Kandylas, Vasilis, Omar Alonso, Shiroy Choksey, Kedar Rudre, and Prashant Jaiswal. "Automating Crowdsourcing Tasks in an Industrial Environment." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 95–96. http://dx.doi.org/10.1609/hcomp.v1i1.13056.

Abstract:
Crowdsourcing-based applications are starting to gain traction in industrial environments. Crowdsourcing research has proven that it is possible to get good-quality labels at a fraction of the cost and time. However, implementing such applications at large scale requires new infrastructure. In this demo we present a system that allows the automation of crowdsourcing tasks for information retrieval experiments.
8

Ku, Chih-Hao, and Maryam Firoozi. "The Use of Crowdsourcing and Social Media in Accounting Research." Journal of Information Systems 33, no. 1 (November 1, 2017): 85–111. http://dx.doi.org/10.2308/isys-51978.

Abstract:
In this study, we investigate the use of crowdsourcing websites in accounting research. Our analysis shows that the use of crowdsourcing in accounting research is relatively low, and these websites have been mainly used to collect data through surveys and for conducting experiments. Next, we compare and discuss papers related to crowdsourcing in the accounting area with research in computer science (CS) and information systems (IS), which are more advanced in using crowdsourcing websites. We then focus on Amazon Mechanical Turk as one of the most widely used crowdsourcing websites in academic research to investigate what type of tasks can be done through this platform. Based on our task analysis, one of the areas in accounting research that can benefit from crowdsourcing websites is research on social media content. Therefore, we then discuss how research in CS, IS, and crowdsourcing websites can help researchers improve their work on social media.
9

Baba, Yukino, Hisashi Kashima, Kei Kinoshita, Goushi Yamaguchi, and Yosuke Akiyoshi. "Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 2 (October 6, 2021): 1487–92. http://dx.doi.org/10.1609/aaai.v27i2.18987.

Abstract:
Controlling the quality of tasks is a major challenge in crowdsourcing marketplaces. Most of the existing crowdsourcing services prohibit requesters from posting illegal or objectionable tasks. Operators in the marketplaces have to monitor the tasks continuously to find such improper tasks; however, it is too expensive to manually investigate each task. In this paper, we present the reports of our trial study on automatic detection of improper tasks to support the monitoring of activities by marketplace operators. We perform experiments using real task data from a commercial crowdsourcing marketplace and show that the classifier trained by the operator judgments achieves high accuracy in detecting improper tasks. In addition, to reduce the annotation costs of the operator and improve the classification accuracy, we consider the use of crowdsourcing for task annotation. We hire a group of crowdsourcing (non-expert) workers to monitor posted tasks, and incorporate their judgments into the training data of the classifier. By applying quality control techniques to handle the variability in worker reliability, our results show that the use of non-expert judgments by crowdsourcing workers in combination with expert judgments improves the accuracy of detecting improper crowdsourcing tasks.
10

Liu, Chong, and Yu-Xiang Wang. "Doubly Robust Crowdsourcing." Journal of Artificial Intelligence Research 73 (January 12, 2022): 209–29. http://dx.doi.org/10.1613/jair.1.13304.

Abstract:
Large-scale labeled datasets are the indispensable fuel that ignites the AI revolution we see today. Most such datasets are constructed using crowdsourcing services such as Amazon Mechanical Turk, which provide noisy labels from non-experts at a fair price. The sheer size of such datasets means it is only feasible to collect a few labels per data point. We formulate the problem of test-time label aggregation as a statistical estimation problem of inferring the expected voting score. By imitating workers with supervised learners and using them in a doubly robust estimation framework, we prove that the variance of estimation can be substantially reduced, even if the learner is a poor approximation. Synthetic and real-world experiments show that, by combining the doubly robust approach with adaptive worker/item selection rules, we often need a much lower labeling cost to achieve nearly the same accuracy as in the ideal world where all workers label all data points.
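The doubly robust idea sketched in this abstract can be illustrated with a small, hedged example: a learner imitates each worker, and inverse-probability-weighted residuals from the few workers actually queried correct the learner's baseline. The exact estimator form, the sampling scheme, and names such as `dr_vote_estimate` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dr_vote_estimate(pred, labels, sampled, prob):
    """Doubly robust estimate of an item's expected voting score.

    pred    : learner-imitated label for each of the m workers.
    labels  : observed label per worker (ignored where not sampled).
    sampled : boolean mask, True for the few workers actually queried.
    prob    : probability with which each worker was queried.

    The learner serves as a baseline; inverse-probability-weighted residuals
    of the observed labels correct it, which keeps the estimate reasonable
    even when the learner is a poor approximation.
    """
    pred, labels = np.asarray(pred, float), np.asarray(labels, float)
    sampled = np.asarray(sampled, bool)
    correction = np.where(sampled, (labels - pred) / prob, 0.0)
    return float(np.mean(pred + correction))

# Toy usage: 5 imitated workers, 2 of them queried with probability 0.4.
print(dr_vote_estimate(pred=[0.8, 0.7, 0.2, 0.9, 0.6],
                       labels=[1, 0, 0, 1, 0],
                       sampled=[True, False, False, True, False],
                       prob=0.4))  # 0.79
```
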
11

Ottinger, Gwen. "Crowdsourcing Undone Science." Engaging Science, Technology, and Society 3 (September 28, 2017): 560. http://dx.doi.org/10.17351/ests2017.124.

Abstract:
Could crowdsourcing be a way to get undone science done? Could grassroots groups enlist volunteers to help make sense of large amounts of otherwise unanalyzed data—an approach that has been gaining popularity among natural scientists? This paper assesses the viability of this technique for creating new knowledge about the local effects of petrochemicals by examining three recent experiments in crowdsourcing led by non-profits and grassroots groups. These case studies suggest that undertaking a crowdsourcing project requires significant resources, including technological infrastructures that smaller or more informal groups may find difficult to provide. They also indicate that crowdsourcing will be most successful when the questions of grassroots groups line up fairly well with existing scientific frameworks. The paper concludes that further experimentation in crowdsourcing is warranted, at least in cases where adequate resources and interpretive frameworks are available, and that further investment in technological infrastructures for data analysis is needed.
12

Ceschia, Sara, Kevin Roitero, Gianluca Demartini, Stefano Mizzaro, Luca Di Gaspero, and Andrea Schaerf. "Task design in complex crowdsourcing experiments: Item assignment optimization." Computers & Operations Research 148 (December 2022): 105995. http://dx.doi.org/10.1016/j.cor.2022.105995.

13

Sasaki, Kyoshiro, and Yuki Yamada. "Crowdsourcing visual perception experiments: a case of contrast threshold." PeerJ 7 (December 20, 2019): e8339. http://dx.doi.org/10.7717/peerj.8339.

Abstract:
Crowdsourcing has commonly been used for psychological research but not for studies on sensory perception. A reason is that in online experiments, one cannot ensure that the rigorous settings required for the experimental environment are replicated. The present study examined the suitability of online experiments on basic visual perception, particularly the contrast threshold. We conducted similar visual experiments in the laboratory and online, employing three experimental conditions. The first was a laboratory experiment, where a small sample of participants (n = 24; laboratory condition) completed a task with 10 iterations. The other two conditions were online experiments: participants were either presented with a task without repetition of trials (n = 285; online non-repetition condition) or one with 10 iterations (n = 166; online repetition condition). The results showed significant equivalence in the contrast thresholds between the laboratory and online repetition conditions, although a substantial amount of data needed to be excluded from the analyses in the latter condition. The contrast threshold was significantly higher in the online non-repetition condition compared with the laboratory and online repetition conditions. To make crowdsourcing more suitable for investigating the contrast threshold, ways to reduce data wastage need to be formulated.
14

Barsnes, Harald, and Lennart Martens. "Crowdsourcing in proteomics: public resources lead to better experiments." Amino Acids 44, no. 4 (February 2, 2013): 1129–37. http://dx.doi.org/10.1007/s00726-012-1455-z.

15

Majima, Yoshimasa. "The Feasibility of a Japanese Crowdsourcing Service for Experimental Research in Psychology." SAGE Open 7, no. 1 (January 2017): 215824401769873. http://dx.doi.org/10.1177/2158244017698731.

Abstract:
Recent studies have empirically validated the data obtained from Amazon’s Mechanical Turk. Amazon’s Mechanical Turk workers behaved similarly not only in simple surveys but also in tasks used in cognitive behavioral experiments that employ multiple trials and require continuous attention to the task. The present study aimed to extend these findings to data from a Japanese crowdsourcing pool, whose participants have different ethnic backgrounds from Amazon’s Mechanical Turk workers. In five cognitive experiments, such as Stroop and Flanker experiments, the reaction times and error rates of Japanese crowdsourcing workers and those of university students were compared and contrasted. The results were consistent with those of previous studies, although the students responded more quickly but more poorly than the workers. These findings suggest that the Japanese crowdsourcing sample is another eligible participant pool for behavioral research; however, further investigation is needed to address qualitative differences between student and worker samples.
16

Qarout, Rehab, Alessandro Checco, Gianluca Demartini, and Kalina Bontcheva. "Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 135–43. http://dx.doi.org/10.1609/hcomp.v7i1.5264.

Abstract:
Crowdsourcing platforms provide a convenient and scalable way to collect human-generated labels on demand. This data can be used to train Artificial Intelligence (AI) systems or to evaluate the effectiveness of algorithms. The datasets generated by means of crowdsourcing are, however, dependent on many factors that affect their quality. These include, among others, the population sample bias introduced by aspects like task reward, requester reputation, and other filters introduced by the task design. In this paper, we analyse platform-related factors and study how they affect dataset characteristics by running a longitudinal study where we compare the reliability of results collected with repeated experiments over time and across crowdsourcing platforms. Results show that, under certain conditions: 1) experiments replicated across different platforms result in significantly different data quality levels, while 2) the quality of data from repeated experiments over time is stable within the same platform. We identify some key task design variables that cause such variations and propose an experimentally validated set of actions to counteract these effects, thus achieving reliable and repeatable crowdsourced data collection experiments.
17

Wang, Shengxiang, Xiaofan Jia, and Qianqian Sang. "A Dual Privacy Preserving Algorithm in Spatial Crowdsourcing." Mobile Information Systems 2020 (June 27, 2020): 1–6. http://dx.doi.org/10.1155/2020/1960368.

Abstract:
Spatial crowdsourcing assigns location-related tasks to a group of workers (people equipped with smart devices and willing to complete the tasks), who complete the tasks according to their scope of work. Since spatial crowdsourcing usually requires workers’ location information to be uploaded to the crowdsourcing server, it inevitably risks disclosing workers’ privacy. At the same time, it is difficult to allocate tasks effectively in spatial crowdsourcing. Therefore, in order to improve task allocation efficiency when the number of tasks is large and to strengthen privacy protection for workers, this paper proposes a new algorithm that improves task allocation efficiency by disturbing the locations of workers and task requesters through k-anonymity. Experiments show that the algorithm can effectively improve the efficiency of task allocation, reduce task waiting time, improve the privacy of worker and task locations, and improve the efficiency of the spatial crowdsourcing service when facing a large number of tasks.
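As a rough illustration of the kind of location disturbance the abstract describes, the sketch below snaps exact worker coordinates to grid cells and releases only cells containing at least k workers, a simple k-anonymity-style cloaking. The grid size, the centroid release, and the function name are assumptions for illustration, not the paper's algorithm.

```python
from collections import defaultdict

def cloak_locations(points, k=3, cell=0.01):
    """Replace exact (lat, lon) points with grid-cell centroids, releasing a
    cell only if at least k workers fall inside it (k-anonymous cloaking)."""
    cells = defaultdict(list)
    for lat, lon in points:
        cells[(int(lat // cell), int(lon // cell))].append((lat, lon))
    released = {}
    for key, members in cells.items():
        if len(members) >= k:  # the cell hides each worker among >= k peers
            released[key] = ((key[0] + 0.5) * cell, (key[1] + 0.5) * cell)
    return released

workers = [(39.901, 116.401), (39.902, 116.402), (39.903, 116.403),
           (39.950, 116.500)]
print(cloak_locations(workers, k=3))  # only the 3-worker cell is released
```
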
18

Rothwell, Spencer, Steele Carter, Ahmad Elshenawy, and Daniela Braga. "Job Complexity and User Attention in Crowdsourcing Microtasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 3 (March 28, 2016): 20–25. http://dx.doi.org/10.1609/hcomp.v3i1.13265.

Abstract:
This paper examines the importance of presenting simple, intuitive tasks when conducting microtasking on crowdsourcing platforms. Most crowdsourcing platforms allow the maker of a task to present any length of instructions to crowd workers who participate in their tasks. Our experiments show, however, that most workers who participate in crowdsourcing microtasks do not read the instructions, even when they are very brief. To facilitate success in microtask design, we highlight the importance of making simple, easy-to-grasp tasks that do not rely on instructions for explanation.
19

Shimizu, Nobuyuki, Atsuyuki Morishima, and Ryota Hayashi. "A Crowdsourcing Method for Obtaining Rephrased Questions." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 3 (September 23, 2015): 32–33. http://dx.doi.org/10.1609/hcomp.v3i1.13251.

Abstract:
We propose a method for obtaining and ranking paraphrased questions from crowds to be used as a part of instructions in microtask-based crowdsourcing. With our method, we are able to obtain questions that differ in expression yet have the same semantics with respect to the crowdsourcing task. This is done by generating tasks that give hints and elicit instructions from workers. We conducted experiments with data used for a real set of gold standard questions submitted to a commercial crowdsourcing platform and compared the results with those from a direct-rewrite method.
20

Zhao, Bingxu, Yingjie Wang, Yingshu Li, Yang Gao, and Xiangrong Tong. "Task Allocation Model Based on Worker Friend Relationship for Mobile Crowdsourcing." Sensors 19, no. 4 (February 22, 2019): 921. http://dx.doi.org/10.3390/s19040921.

Abstract:
With the rapid development of mobile devices, mobile crowdsourcing has become an important research focus. Scholars have proposed many task allocation methods, but few works combine social networks with mobile crowdsourcing. To maximize the utility of a mobile crowdsourcing system, this paper proposes a task allocation model that considers the attributes of social networks. Starting from the homogeneity of human beings, the friend relationships in social networks are applied to the mobile crowdsourcing system, and a task allocation algorithm based on friend relationships is proposed. The GeoHash coding mechanism is adopted when calculating the strength of worker relationships, which effectively protects the location privacy of workers. Utilizing a synthetic dataset and the real-world Yelp dataset, the performance of the proposed task allocation model was evaluated, and comparison experiments verified the effectiveness and applicability of the proposed allocation mechanism.
21

Ak, Ali, Abhishek Goswami, Patrick Le Callet, and Frédéric Dufaux. "A Comprehensive Analysis of Crowdsourcing for Subjective Evaluation of Tone Mapping Operators." Electronic Imaging 2021, no. 9 (January 18, 2021): 262–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.9.iqsp-262.

Abstract:
Tone mapping operators (TMO) are pivotal in rendering High Dynamic Range (HDR) content on limited dynamic range media. Assessing the quality of tone-mapped images depends on several objective factors and a combination of subjective factors such as aesthetics and fidelity. Objective image quality assessment (IQA) metrics are often used to evaluate TMO quality, but they do not always reflect the ground truth. A robust alternative to objective IQA metrics is subjective quality assessment. Although subjective experiments provide accurate results, they can be time-consuming and expensive to conduct. Over the last decade, crowdsourcing experiments have become more popular for collecting large amounts of data within a shorter period of time at a lower cost. Although they provide more data with fewer resources, the lack of a controlled experimental environment results in noisy data. In this work, we propose a comprehensive analysis of crowdsourcing experiments with two different groups of participants. Our contributions include a comparative study and a collection of methods to detect unreliable participants in crowdsourcing experiments in a TMO quality evaluation scenario. These methods can be utilized by the scientific community to increase the reliability of the gathered data.
22

Liao, Zhifang, Zhi Zeng, Yan Zhang, and Xiaoping Fan. "A Data-Driven Game Theoretic Strategy for Developers in Software Crowdsourcing: A Case Study." Applied Sciences 9, no. 4 (February 19, 2019): 721. http://dx.doi.org/10.3390/app9040721.

Abstract:
Crowdsourcing has the advantages of being cost-effective and saving time, and it is a typical embodiment of collective wisdom and of community workers’ collaborative development. However, this development paradigm of software crowdsourcing has not been widely used. One important reason is that requesters have limited knowledge about crowd workers’ professional skills and qualities. Another is that crowd workers in a competition may not receive an appropriate reward, which affects their motivation. To solve this problem, this paper proposes a method of maximizing reward based on workers’ crowdsourcing ability, so that workers can choose tasks according to their own abilities to obtain appropriate bonuses. Our method includes two steps. First, it puts forward a method to evaluate crowd workers’ ability and then analyzes the intensity of competition for tasks at Topcoder.com, an open community crowdsourcing platform, on the basis of the workers’ crowdsourcing ability. Second, it follows dynamic programming ideas and builds game models under complete information in different cases, offering a reward-maximization strategy for workers by solving a mixed-strategy Nash equilibrium. This paper employs crowdsourcing data from Topcoder.com to carry out experiments. The experimental results show that the distribution of workers’ crowdsourcing ability is uneven and to some extent reflects the activity level of crowdsourcing tasks. Meanwhile, according to the reward-maximization strategy, a crowd worker can obtain the theoretically maximum reward.
23

Krivosheev, Evgeny, Fabio Casati, Valentina Caforio, and Boualem Benatallah. "Crowdsourcing Paper Screening in Systematic Literature Reviews." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 5 (September 21, 2017): 108–17. http://dx.doi.org/10.1609/hcomp.v5i1.13302.

Abstract:
Literature reviews allow scientists to stand on the shoulders of giants, showing promising directions, summarizing progress, and pointing out existing challenges in research. At the same time conducting a systematic literature review is a laborious and consequently expensive process. In the last decade, there have been several studies on crowdsourcing in literature reviews. This paper explores the feasibility of crowdsourcing for facilitating the literature review process in terms of results, time and effort, and identifies which crowdsourcing strategies provide the best results based on the budget available. In particular we focus on the screening phase of the literature review process and we contribute and assess strategies for running crowdsourcing tasks that are efficient in terms of budget and classification error. Finally, we present our findings based on experiments run on Crowdflower.
24

Wang, Le, Zhihong Tian, Zhaoquan Gu, and Hui Lu. "Crowdsourcing Approach for Developing Hands-On Experiments in Cybersecurity Education." IEEE Access 7 (2019): 169066–72. http://dx.doi.org/10.1109/access.2019.2952585.

25

Gao, Liping, Kun Dai, and Chao Lu. "Research on Optimized Online Allocation of Scope Spatial Crowdsourcing Tasks." International Journal of Cooperative Information Systems 29, no. 03 (August 17, 2020): 2050003. http://dx.doi.org/10.1142/s0218843020500033.

Abstract:
Task allocation for spatial crowdsourcing tasks is an important branch of crowdsourcing. Spatial crowdsourcing tasks not only require workers to complete a specific task at a specified time, but also require them to travel to a designated location to do so. This paper considers scope spatial crowdsourcing tasks, a kind of spatial crowdsourcing task whose work position is a region rather than a single location. Mobile crowdsourced sensing (MCS) is one of the most important platforms for publishing spatial crowdsourcing tasks, through which MCS workers can use smartphones to collect the related sensing data. When scoped tasks are assigned, the scopes of different tasks may overlap, which wastes manpower. The focus of this paper is the redundancy of task scope that occurs when MCS is used to collect scoped data with fewer workers and more tasks. The optimizing scope spatial crowdsourcing task allocation algorithm (OSSA) eliminates this redundancy by integrating and decomposing tasks, thereby increasing the number of tasks that can be assigned. Experiments on the Windows platform compare the efficiency of OSSA with the greedy algorithm and the two-phase-based global online allocation (TGOA) algorithm, further demonstrating the correctness and feasibility of the algorithm for task scope optimization.
26

Pan, Qingxian, Hongbin Dong, Yingjie Wang, Zhipeng Cai, and Lizong Zhang. "Recommendation of Crowdsourcing Tasks Based on Word2vec Semantic Tags." Wireless Communications and Mobile Computing 2019 (March 24, 2019): 1–10. http://dx.doi.org/10.1155/2019/2121850.

Abstract:
Crowdsourcing is a prime example of collective intelligence, and the key to completing crowdsourcing tasks well is allocating the appropriate task to the appropriate worker. Most crowdsourcing platforms currently rely on task search and lack individualized task recommendation. This paper proposes a tag-semantic task recommendation model based on deep learning. The similarity of word vectors is computed, and a semantic tag similarity matrix database is established based on Word2vec deep learning. A task recommendation model is then built on these semantic tags to achieve individualized recommendation of crowdsourcing tasks. By computing the similarity of tags, the relevance between task and worker is obtained, which improves the robustness of task recommendation. Comparison experiments on the Tianpeng web dataset verify the effectiveness and applicability of the proposed model.
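A minimal sketch of the tag-similarity idea: embed tags with Word2vec and score task-worker relevance as the average pairwise similarity of their tags. The toy corpus, tag sets, scoring function, and the use of gensim are illustrative assumptions, not the paper's implementation.

```python
from gensim.models import Word2Vec

# Toy corpus of tag sequences (in practice, tags attached to historical tasks).
corpus = [["python", "web", "scraping"], ["java", "android", "app"],
          ["python", "data", "cleaning"], ["web", "design", "css"]]
model = Word2Vec(corpus, vector_size=32, window=3, min_count=1)

def task_worker_relevance(task_tags, worker_tags):
    """Average pairwise tag similarity between a task and a worker profile."""
    sims = [model.wv.similarity(t, w)
            for t in task_tags for w in worker_tags
            if t in model.wv and w in model.wv]
    return sum(sims) / len(sims) if sims else 0.0

# Recommend to the worker the task whose tags are most relevant to theirs.
print(task_worker_relevance(["python", "data"], ["python", "web"]))
```
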
27

Wu, Qinyue, Duankang Fu, Beijun Shen, and Yuting Chen. "Semantic Service Search in IT Crowdsourcing Platform: A Knowledge Graph-Based Approach." International Journal of Software Engineering and Knowledge Engineering 30, no. 06 (June 2020): 765–83. http://dx.doi.org/10.1142/s0218194020400069.

Abstract:
Understanding users’ search intent on vertical websites such as IT service crowdsourcing platforms relies heavily on domain knowledge. Meanwhile, searching for services accurately on crowdsourcing platforms is still difficult, because these platforms do not contain enough information to support high-performance search. To solve these problems, we build and leverage a knowledge graph named ITServiceKG to enhance the search performance of crowdsourced IT services. The main ideas are to (1) build an IT service knowledge graph from Wikipedia, Baidupedia, CN-DBpedia, StuQ and data in IT service crowdsourcing platforms, (2) use properties and relations of entities in the knowledge graph to expand the user query and service information, and (3) apply a listwise approach with relevance features and topic features to re-rank the search results. The results of our experiments indicate that our approach outperforms traditional search approaches.
28

Zhou, Haofeng, Denys Baskov, and Matthew Lease. "Crowdsourcing Transcription Beyond Mechanical Turk." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 9–16. http://dx.doi.org/10.1609/hcomp.v1i1.13093.

Abstract:
While much work has studied crowdsourced transcription via Amazon’s Mechanical Turk, we are not familiar with any prior cross-platform analysis of crowdsourcing service providers for transcription. We present a qualitative and quantitative analysis of eight such providers: 1-888-Type-It-Up, 3Play Media, Transcription Hub, CastingWords, Rev, TranscribeMe, Quicktate, and SpeakerText. We also provide a comparative evaluation vs. three transcribers from oDesk. Spontaneous speech used in our experiments is drawn from the USC-SFI MALACH collection of oral history interviews. After informally evaluating pilot transcripts from all providers, our formal evaluation measures word error rate (WER) over 10-minute segments from six interviews transcribed by three service providers and the three oDesk transcribers. We report the WER obtained in each case and, more generally, assess tradeoffs among the quality, cost, risk, and effort of alternative crowd-based transcription options.
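Word error rate, the metric reported above, is the word-level edit distance between a reference transcript and a hypothesis, divided by the reference length. A minimal sketch (illustrative, not the authors' evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / max(len(ref), 1)

print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # ~0.33
```
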
29

Song, Yi, Bin Hu, and Heqiang Xue. "Evolution of employee opinion in a crowdsourcing logistics company: a catastrophe-embedded RA model." SIMULATION 98, no. 4 (December 11, 2021): 347–60. http://dx.doi.org/10.1177/00375497211061269.

Abstract:
Crowdsourcing logistics is a business model for the modern logistics industry. However, the employee behavior of crowdsourcing logistics remains unstable due to the dynamic nature of crowdsourcing logistics. With a small change in environmental factors, e.g., the delivery price, employee opinion may show frequent polarization or reversal that can lead to employee turnover. To explore the mechanism of sudden change in employee opinion and turnover, a cusp catastrophe model is embedded into the relative agreement (RA) model of opinion dynamics to form a catastrophe-embedded RA model. Text data about employee opinion of the crowdsourcing logistics company DaDa are collected for modeling and validation of the catastrophe-embedded RA model. Simulation experiments explore the impact of network structure and delivery price on employee opinion evolution and employee turnover. The catastrophe theory–embedded RA model extends the application of the RA model in the field of opinion dynamics with frequent polarization or reversal.
30

Hao, Hanyun, Jian Yang, and Jie Wang. "A Tripartite Evolutionary Game Analysis of Participant Decision-Making Behavior in Mobile Crowdsourcing." Mathematics 11, no. 5 (March 6, 2023): 1269. http://dx.doi.org/10.3390/math11051269.

Abstract:
With the rapid development of the Internet of Things and the popularity of numerous sensing devices, Mobile crowdsourcing (MCS) has become a paradigm for collecting sensing data and solving problems. However, most early studies focused on schemes of incentive mechanisms, task allocation and data quality control, which did not consider the influence and restriction of different behavioral strategies of stakeholders on the behaviors of other participants, and rarely applied dynamic system theory to analysis of participant behavior in mobile crowdsourcing. In this paper, we first propose a tripartite evolutionary game model of crowdsourcing workers, crowdsourcing platforms and task requesters. Secondly, we focus on the evolutionary stability strategies and evolutionary trends of different participants, as well as the influential factors, such as participants’ irrational personality, conflict of interest, punishment intensity, technical level and awareness of rights protection, to analyze the influence of different behavioral strategies on other participants. Thirdly, we verify the stability of the equilibrium point of the tripartite game system through simulation experiments. Finally, we summarize our work and provide related recommendations for governing agencies and different stakeholders to facilitate the continuous operation of the mobile crowdsourcing market and maximize social welfare.
31

Bragg, Jonathan, Andrey Kolobov, Mausam Mausam, and Daniel Weld. "Parallel Task Routing for Crowdsourcing." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 2 (September 5, 2014): 11–21. http://dx.doi.org/10.1609/hcomp.v2i1.13170.

Abstract:
An ideal crowdsourcing or citizen-science system would route tasks to the most appropriate workers, but the best assignment is unclear because workers have varying skill, tasks have varying difficulty, and assigning several workers to a single task may significantly improve output quality. This paper defines a space of task routing problems, proves that even the simplest is NP-hard, and develops several approximation algorithms for parallel routing problems. We show that an intuitive class of requesters' utility functions is submodular, which lets us provide iterative methods for dynamically allocating batches of tasks that make near-optimal use of available workers in each round. Experiments with live oDesk workers show that our task routing algorithm uses only 48% of the human labor compared to the commonly used round-robin strategy. Further, we provide versions of our task routing algorithm which enable it to scale to large numbers of workers and questions and to handle workers with variable response times while still providing significant benefit over common baselines.
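To make the submodularity point concrete, here is a minimal greedy sketch that routes a batch of workers one at a time to the task with the largest marginal utility gain. The per-task utility (probability that at least one of k independent workers with accuracy p answers correctly) and all names are illustrative assumptions, not the paper's algorithm; diminishing returns is what makes the greedy assignment near-optimal.

```python
def greedy_route(num_tasks, num_workers, p=0.7):
    """Greedy parallel routing under the diminishing-returns utility
    u(k) = 1 - (1 - p)**k for a task that has already received k workers."""
    counts = [0] * num_tasks
    gain = lambda k: (1 - (1 - p) ** (k + 1)) - (1 - (1 - p) ** k)
    for _ in range(num_workers):
        best = max(range(num_tasks), key=lambda t: gain(counts[t]))
        counts[best] += 1  # send the next worker where the marginal gain is largest
    return counts

print(greedy_route(num_tasks=4, num_workers=10))  # -> [3, 3, 2, 2]
```
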
32

Kodjiku, Seth Larweh, Yili Fang, Tao Han, Kwame Omono Asamoah, Esther Stacy E. B. Aggrey, Collins Sey, Evans Aidoo, Victor Nonso Ejianya, and Xun Wang. "ExCrowd: A Blockchain Framework for Exploration-Based Crowdsourcing." Applied Sciences 12, no. 13 (July 2, 2022): 6732. http://dx.doi.org/10.3390/app12136732.

Abstract:
Because of the rise of cryptocurrencies and decentralized apps, blockchain technology has generated a lot of interest. Among these is the emergent blockchain-based crowdsourcing paradigm, which eliminates the centralized conventional mechanism servers in favor of smart contracts for task and reward allocation. However, there are a few crucial challenges that must be resolved properly. For starters, most reputation-based systems favor high-performing employees. Secondly, the crowdsourcing platform’s expensive service charges may obstruct the growth of crowdsourcing. Finally, unequal evaluation and reward allocation might lead to job dissatisfaction. As a result, the aforementioned issues will substantially impede the development of blockchain-based crowdsourcing systems. In this study, we introduce ExCrowd, a blockchain-based crowdsourcing system that employs a smart contract as a trustworthy authority to properly select workers, assess inputs, and award incentives while maintaining user privacy. Exploration-based crowdsourcing employs the hyperbolic learning curve model based on the conduct of workers and analyzes worker performance patterns using a decision tree technique. We specifically present the architecture of our framework, on which we establish a concrete scheme. Using a real-world dataset, we implement our model on the Ethereum public test network leveraging its reliability, adaptability, scalability, and rich statefulness. The results of our experiments demonstrate the efficiency, usefulness, and adaptability of our proposed system.
33

Qiu, Guoying, Yulong Shen, Ke Cheng, Lingtong Liu, and Shuiguang Zeng. "Mobility-Aware Privacy-Preserving Mobile Crowdsourcing." Sensors 21, no. 7 (April 2, 2021): 2474. http://dx.doi.org/10.3390/s21072474.

Abstract:
The increasing popularity of smartphones and location-based services (LBS) has brought us a new experience of mobile crowdsourcing marked by network interconnection and information sharing. However, these mobile crowdsourcing applications suffer from various inferential attacks based on mobile behavioral factors, such as location semantics, spatiotemporal correlation, etc. Unfortunately, most of the existing techniques protect the participant’s location privacy according to actual trajectories. Once the protection fails, data leakage directly threatens the participant’s location-related private information. This raises the question of how to participate in mobile crowdsourcing services without revealing actual locations. In this paper, we propose a mobility-aware trajectory-prediction solution, TMarkov, for achieving privacy-preserving mobile crowdsourcing. Specifically, we introduce a time-partitioning concept into the Markov model to overcome its traditional limitations. A new transfer model is constructed to record the mobile user’s time-varying behavioral patterns. Then, an unbiased estimation is conducted with the Gibbs sampling method, because of the data incompleteness. Finally, we obtain the TMarkov model, which characterizes the participant’s dynamic mobile behaviors. With TMarkov in place, a mobility-aware spatiotemporal trajectory is predicted for the mobile user to participate in the crowdsourcing application. Extensive experiments with a real-world dataset demonstrate that TMarkov well balances the trade-off between privacy preservation and data usability.
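A rough sketch of the time-partitioning idea: keep a separate first-order transition table per time slice and predict the next region from the slice-specific counts. The slice granularity, region labels, and class name are assumptions for illustration; the Gibbs-sampling step the paper uses for incomplete data is omitted here.

```python
from collections import defaultdict

class TimePartitionedMarkov:
    """First-order Markov transitions kept separately for each time slice."""
    def __init__(self, slices=24):
        self.slices = slices
        self.counts = [defaultdict(lambda: defaultdict(int)) for _ in range(slices)]

    def fit(self, trajectory):
        """trajectory: chronological list of (hour, region) visits."""
        for (h, r1), (_, r2) in zip(trajectory, trajectory[1:]):
            self.counts[h % self.slices][r1][r2] += 1

    def predict(self, hour, region):
        """Most likely next region given the current region and time slice."""
        nxt = self.counts[hour % self.slices][region]
        return max(nxt, key=nxt.get) if nxt else None

model = TimePartitionedMarkov()
model.fit([(8, "home"), (9, "office"), (12, "cafe"), (13, "office"),
           (18, "gym"), (19, "home")])
print(model.predict(8, "home"))  # -> "office"
```
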
34

Wang, Zhenya, Xiang Cheng, Sen Su, and Longhan Wang. "Achieving Private and Fair Truth Discovery in Crowdsourcing Systems." Security and Communication Networks 2022 (March 30, 2022): 1–15. http://dx.doi.org/10.1155/2022/9281729.

Abstract:
Nowadays, crowdsourcing has witnessed increasing popularity as it can be adopted to solve many challenging question-answering tasks. One of the most significant problems in crowdsourcing is truth discovery, which aims to find reliable information from conflict answers provided by different workers. Despite the effectiveness for providing reliable aggregated results, existing works on truth discovery either fall short of preserving the workers’ privacy or fail to consider the unfairness issue in their design. In light of this deficiency, we propose a novel private and fair truth discovery approach called PFTD, which is implemented by two non-colluding cloud servers and leverages the Paillier cryptosystem. This approach not only preserves the privacy of the answers of each worker, but also addresses the unfairness issue in crowdsourcing. Extensive experiments conducted on both real and synthetic datasets demonstrate the effectiveness of our proposed PFTD approach.
35

Sun, Yuyin, Adish Singla, Tori Yan, Andreas Krause, and Dieter Fox. "Evaluating Task-Dependent Taxonomies for Navigation." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 229–38. http://dx.doi.org/10.1609/hcomp.v4i1.13286.

Abstract:
Taxonomies of concepts are important across many application domains; for instance, online shopping portals use catalogs to help users navigate and search for products. Task-dependent taxonomies, e.g., adapting the taxonomy to a specific cohort of users, can greatly improve the effectiveness of navigation and search. However, taxonomies are usually created by domain experts, and hence designing task-dependent taxonomies can be an expensive process: this often limits applications to deploying generic taxonomies. Crowdsourcing-based techniques have the potential to provide a cost-efficient solution to building task-dependent taxonomies. In this paper, we present the first quantitative study to evaluate the effectiveness of these crowdsourcing-based techniques. Our experimental study compares different task-dependent taxonomies built via crowdsourcing and generic taxonomies built by experts. We design randomized behavioral experiments on the Amazon Mechanical Turk platform for navigation tasks using these taxonomies, resembling real-world applications such as product search. We record various metrics such as the time of navigation, the number of clicks performed, and the search path taken by a participant to navigate the taxonomy to locate a desired object. Our findings show that task-dependent taxonomies built by crowdsourcing techniques can reduce navigation time by up to 20%. Our results, in turn, demonstrate the power of crowdsourcing for learning complex structures such as semantic taxonomies.
36

Xu, Sunyue, and Jing Zhang. "Crowdsourcing with Meta-Knowledge Transfer (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13095–96. http://dx.doi.org/10.1609/aaai.v36i11.21684.

Abstract:
When crowdsourced workers perform annotation tasks in an unfamiliar domain, their accuracy will dramatically decline due to the lack of expertise. Transferring knowledge from relevant domains can form a better representation for instances, which benefits the estimation of workers' expertise in truth inference models. However, existing knowledge transfer processes for crowdsourcing require a considerable number of well-collected instances in source domains. This paper proposes a novel truth inference model for crowdsourcing, where (meta-)knowledge is transferred by meta-learning and used in the estimation of workers' expertise. Our preliminary experiments demonstrate that the meta-knowledge transfer significantly reduces instances in source domains and increases the accuracy of truth inference.
37

Patterson, Genevieve, Grant Van Horn, Serge Belongie, Pietro Perona, and James Hays. "Tropel: Crowdsourcing Detectors with Minimal Training." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 3 (September 23, 2015): 150–59. http://dx.doi.org/10.1609/hcomp.v3i1.13224.

Abstract:
This paper introduces the Tropel system which enables non-technical users to create arbitrary visual detectors without first annotating a training set. Our primary contribution is a crowd active learning pipeline that is seeded with only a single positive example and an unlabeled set of training images. We examine the crowd's ability to train visual detectors given severely limited training themselves. This paper presents a series of experiments that reveal the relationship between worker training, worker consensus and the average precision of detectors trained by crowd-in-the-loop active learning. In order to verify the efficacy of our system, we train detectors for bird species that work nearly as well as those trained on the exhaustively labeled CUB 200 dataset at significantly lower cost and with little effort from the end user. To further illustrate the usefulness of our pipeline, we demonstrate qualitative results on unlabeled datasets containing fashion images and street-level photographs of Paris.
38

Li, Ziyuan, Jian Liu, Jialu Hao, Huimei Wang, and Ming Xian. "CrowdSFL: A Secure Crowd Computing Framework Based on Blockchain and Federated Learning." Electronics 9, no. 5 (May 8, 2020): 773. http://dx.doi.org/10.3390/electronics9050773.

Abstract:
Over the years, the flourishing of crowd computing has enabled enterprises to accomplish computing tasks through crowdsourcing in a large-scale and high-quality manner, and how to implement crowd computing efficiently and securely has therefore become a hot topic. Some recent work innovatively adopted a P2P (peer-to-peer) network as the communication environment for crowdsourcing. Thanks to its decentralized control, issues like single point of failure or DDoS attacks can be overcome to some extent, but the huge computing capacity and storage costs required by this scheme are often unbearable. Federated learning is a distributed machine learning approach that supports local storage of data, with clients training through the exchange of gradient values. In our work, we combine blockchain with federated learning and propose a crowdsourcing framework named CrowdSFL, with which users can implement crowdsourcing with less overhead and higher security. In addition, to protect the privacy of participants, we design a new re-encryption algorithm based on Elgamal to ensure that exchanged values and other information will not be exposed to other participants outside the workflow. Finally, we show through experiments that our framework is superior to similar work in accuracy, efficiency, and overhead.
39

Guo, Shikai, Rong Chen, Hui Li, Jian Gao, and Yaqing Liu. "Crowdsourced Web Application Testing Under Real-Time Constraints." International Journal of Software Engineering and Knowledge Engineering 28, no. 06 (June 2018): 751–79. http://dx.doi.org/10.1142/s0218194018500213.

Abstract:
Crowdsourcing carried out by cyber citizens instead of hired consultants and professionals has become an increasingly appealing solution for testing the feature-rich and interactive web. Despite the various online crowdsourced testing services, the benefits of exposure to a wider audience and harnessing the collective efforts of individuals remain uncertain, especially when quality control is problematic in an open environment. The objective of this paper is to propose a real-time collaborative testing approach (RCTA) to create productive crowdsourced testing on a dynamic Internet. We implemented a prototype crowdsourcing system, XTurk, and carried out a case study to understand crowdsourced testers' behavior, their trustworthiness, the execution time of test cases, and the accuracy of feedback. Several experiments were carried out, and the experimental results validate the quality, efficiency, and reliability of the present approach; the positive testing feedback is shown to outperform previous methods.
40

Salve, Tanay, Akash Agrahari, Tarun Sharma, Adnan Haque, and Shudhodhan Bokefode. "Crowdsourcing Platform for Website and Application Testing." YMER Digital 21, no. 04 (April 7, 2022): 104–14. http://dx.doi.org/10.37896/ymer21.04/11.

Abstract:
Crowdsourcing has gained a great deal of interest and use over recent years, and many teams have applied it to a wide range of tasks. However, despite its reputation and its use for purposes such as security testing, user-interface evaluation, and value creation, comparatively little is known about crowdsourcing intermediaries, the platforms and coordinators that manage the relationship between requesters and the crowd. The problems such intermediaries face when steering crowdsourcing projects, and the contexts they require, have not yet been adequately addressed in research. We address these issues through a case study of an intermediary called TestCloud, a platform that provides crowdsourced software testing. The analysis shows that TestCloud meets three main requirements for effective use: process management, team management, and collaboration supported by technology; for each dimension, we highlight the methods the platform uses to cope with the associated challenges. The term 'crowdsourcing' was introduced in 2006 to describe a distributed problem-solving model carried out by remote internet users. Since then it has been extensively studied and practiced to assist software engineering, particularly software testing. We therefore also provide a survey of crowdsourced software engineering and testing, covering definitions, application domains, tasks, platforms, crowdsourced activities, and the stakeholders involved, and we conclude with development trends, open problems, and opportunities for crowdsourced software engineering and software testing. Finally, we show that data collected from crowd-based testing can support automated web or mobile testing.
We also introduce POLARIS, which derives recurring usage patterns ('motifs') from crowd-based test traces, expressed as sequences of high-level actions abstracted from low-level ones. In our study, crowd workers recruited from Mechanical Turk completed 1,350 test tasks across nine popular Google Play apps, each with at least one million users.
41

GOTO, Akira. "A Study on economic game experiments with crowdsourcing: Focusing on communication structure." Joho Chishiki Gakkaishi 27, no. 2 (2017): 127–32. http://dx.doi.org/10.2964/jsik_2017_014.

42

Lee, Tak Yeon, Casey Dugan, Werner Geyer, Tristan Ratchford, Jamie Rasmussen, N. Sadat Shami, and Stela Lupushor. "Experiments on Motivational Feedback for Crowdsourced Workers." Proceedings of the International AAAI Conference on Web and Social Media 7, no. 1 (August 3, 2021): 341–50. http://dx.doi.org/10.1609/icwsm.v7i1.14428.

Abstract:
This paper examines the relationship between motivational design and its longitudinal effects on crowdsourcing systems. In the context of a company internal web site that crowdsources the identification of Twitter accounts owned by company employees, we designed and investigated the effects of various motivational features including individual / social achievements and gamification. Our 6-month experiment with 437 users allowed us to compare the features in terms of both quantity and quality of the work produced by participants over time. While we found that gamification can increase workers’ motivation overall, the combination of motivational features also matters. Specifically, gamified social achievement is the best performing design over a longer period of time. Mixing individual and social achievements turns out to be less effective and can even encourage users to game the system.
43

Hotaling, Abigail, and James P. Bagrow. "Efficient crowdsourcing of crowd-generated microtasks." PLOS ONE 15, no. 12 (December 17, 2020): e0244245. http://dx.doi.org/10.1371/journal.pone.0244245.

Abstract:
Allowing members of the crowd to propose novel microtasks for one another is an effective way to combine the efficiencies of traditional microtask work with the inventiveness and hypothesis generation potential of human workers. However, microtask proposal leads to a growing set of tasks that may overwhelm limited crowdsourcer resources. Crowdsourcers can employ methods to utilize their resources efficiently, but algorithmic approaches to efficient crowdsourcing generally require a fixed task set of known size. In this paper, we introduce cost forecasting as a means for a crowdsourcer to use efficient crowdsourcing algorithms with a growing set of microtasks. Cost forecasting allows the crowdsourcer to decide between eliciting new tasks from the crowd or receiving responses to existing tasks based on whether or not new tasks will cost less to complete than existing tasks, efficiently balancing resources as crowdsourcing occurs. Experiments with real and synthetic crowdsourcing data show that cost forecasting leads to improved accuracy. Accuracy and efficiency gains for crowd-generated microtasks hold the promise to further leverage the creativity and wisdom of the crowd, with applications such as generating more informative and diverse training data for machine learning applications and improving the performance of user-generated content and question-answering platforms.
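A minimal sketch of the cost-forecasting decision described above: forecast how many more labels each unfinished task needs, forecast the cost of a freshly proposed task, and elicit a new task only when it is expected to be cheaper to complete. The quorum-based completion rule and all names are simplifying assumptions, not the authors' algorithm.

```python
def remaining_cost(responses, quorum=3, per_label_cost=1.0):
    """Forecast: a task finishes once one answer has `quorum` matching labels;
    the remaining cost is the number of further labels the leader still needs."""
    lead = max((responses.count(a) for a in set(responses)), default=0)
    return max(quorum - lead, 0) * per_label_cost

def should_elicit_new_task(open_tasks, quorum=3, per_label_cost=1.0):
    """Elicit a new task only if a full quorum of labels for it is forecast to
    cost less than finishing the cheapest existing unfinished task."""
    new_cost = quorum * per_label_cost
    existing = [remaining_cost(r, quorum, per_label_cost) for r in open_tasks]
    existing = [c for c in existing if c > 0] or [float("inf")]
    return new_cost < min(existing)

open_tasks = [["yes", "yes", "no"], ["no"]]
print(should_elicit_new_task(open_tasks))  # False: one task needs only 1 more label
```
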
APA, Harvard, Vancouver, ISO, and other styles
44

Zhao, Bei, Siwen Zheng, and Jianhui Zhang. "Optimal policy for composite sensing with crowdsourcing." International Journal of Distributed Sensor Networks 16, no. 5 (May 2020): 155014772092733. http://dx.doi.org/10.1177/1550147720927331.

Full text
Abstract:
Mobile crowdsourcing has been widely researched and applied in recent years, driven by the popularity of smartphones. In these applications, a smartphone and its user act as a whole, which we call a composite node in this article. Since a smartphone is usually operated by its user, the user's participation cannot be excluded from the applications, yet few works have noted that humans and their smartphones depend on each other. In this article, we first model the relation between a smartphone and its user as conditional decision and sensing: the composite node performs the smartphone's sensing decision conditioned on its user's decision. We then study the performance of the composite sensing process in a scenario composed of an application server, a set of objects, and users. During composite sensing, users report their sensing results to the server, and the server returns rewards to some users so as to maximize the overall reward. Under this scenario, we map the composite sensing process to a partially observable Markov decision process and design a composite sensing solution, comprising an optimal policy and a myopic policy, to maximize the overall reward. We also provide the theoretical analysis needed to ensure the optimality of the optimal algorithm. Finally, we conduct experiments to evaluate both policies in terms of average quality, sensing ratio, successful report ratio, and approximation ratio, and we analyze the delay and progress proportion of the optimal policy. The experiments show that both policies clearly outperform a random policy.
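The following sketch illustrates only the myopic (one-step greedy) side of the reward-allocation idea, not the paper's POMDP formulation or its optimal policy; the user fields, scoring rule, and budget model are assumptions made for illustration.

def myopic_reward_allocation(users, budget):
    """One-step (myopic) policy sketch: spend the reward budget on the composite
    nodes (user + smartphone) whose reports are expected to add the most
    immediate value, approximated here as report quality times reporting
    probability."""
    ranked = sorted(users, key=lambda u: u["quality"] * u["p_report"], reverse=True)
    rewarded, spent = [], 0.0
    for u in ranked:
        if spent + u["reward_cost"] <= budget:
            rewarded.append(u["id"])
            spent += u["reward_cost"]
    return rewarded, spent

# Toy usage: three candidate reporters and a reward budget of 3.0.
users = [
    {"id": "u1", "quality": 0.9, "p_report": 0.6, "reward_cost": 2.0},
    {"id": "u2", "quality": 0.5, "p_report": 0.9, "reward_cost": 1.0},
    {"id": "u3", "quality": 0.7, "p_report": 0.4, "reward_cost": 1.5},
]
print(myopic_reward_allocation(users, budget=3.0))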
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Chunhua, Pengpeng Zhao, Victor S. Sheng, Xuefeng Xian, Jian Wu, and Zhiming Cui. "Refining Automatically Extracted Knowledge Bases Using Crowdsourcing." Computational Intelligence and Neuroscience 2017 (2017): 1–17. http://dx.doi.org/10.1155/2017/4092135.

Full text
Abstract:
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
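A small Python sketch of the rank-based selection idea in this abstract: candidate facts involved in more semantic-constraint violations are asked about first, and the rest are pruned from crowdsourcing. The scoring rule and example facts are illustrative assumptions, not the paper's exact benefit function.

from collections import defaultdict

def rank_candidates(facts, violations):
    """Rank candidate facts by how many constraint violations they participate
    in; facts involved in more conflicts are more informative to verify."""
    score = defaultdict(int)
    for conflicting_pair in violations:
        for fact in conflicting_pair:
            score[fact] += 1
    return sorted(facts, key=lambda f: score[f], reverse=True)

def select_for_crowd(facts, violations, budget):
    """Pick the top `budget` facts to send to the crowd and prune the rest."""
    return rank_candidates(facts, violations)[:budget]

# Toy usage: a functional constraint on bornIn makes the first two facts conflict.
facts = ["bornIn(Einstein, Ulm)", "bornIn(Einstein, Munich)", "type(Ulm, City)"]
violations = [("bornIn(Einstein, Ulm)", "bornIn(Einstein, Munich)")]
print(select_for_crowd(facts, violations, budget=1))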
APA, Harvard, Vancouver, ISO, and other styles
46

Maddalena, Eddy, Marco Basaldella, Dario De Nart, Dante Degl'Innocenti, Stefano Mizzaro, and Gianluca Demartini. "Crowdsourcing Relevance Assessments: The Unexpected Benefits of Limiting the Time to Judge." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 129–38. http://dx.doi.org/10.1609/hcomp.v4i1.13284.

Full text
Abstract:
Crowdsourcing has become an alternative approach to collect relevance judgments at scale thanks to the availability of crowdsourcing platforms and quality control techniques that make it possible to obtain reliable results. Previous work has used crowdsourcing to ask multiple crowd workers to judge the relevance of a document with respect to a query and studied how to best aggregate multiple judgments of the same topic-document pair. This paper addresses an aspect that has been rather overlooked so far: we study how the time available to express a relevance judgment affects its quality. We also discuss the quality loss of making crowdsourced relevance judgments more efficient in terms of time taken to judge the relevance of a document. We use standard test collections to run a battery of experiments on the crowdsourcing platform CrowdFlower, studying how much time crowd workers need to judge the relevance of a document and what effect reducing the available time to judge has on the overall quality of the judgments. Our extensive experiments compare judgments obtained under different types of time constraints with judgments obtained when no time constraints were put on the task. We measure judgment quality by different metrics of agreement with editorial judgments. Experimental results show that it is possible to reduce the cost of crowdsourced evaluation collection creation by reducing the time available to perform the judgments, with no loss in quality. Most importantly, we observed that the introduction of limits on the time available to perform the judgments improves the overall judgment quality. Top judgment quality is obtained with 25-30 seconds to judge a topic-document pair.
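A minimal sketch of how agreement with editorial judgments can be computed per time-limit condition, assuming majority-vote aggregation over redundant crowd judgments; the data layout and the simple accuracy measure are illustrative, not the specific agreement metrics used in the paper.

from collections import Counter

def majority_vote(labels):
    """Aggregate multiple crowd judgments for one topic-document pair."""
    return Counter(labels).most_common(1)[0][0]

def agreement_with_editors(crowd_judgments, editorial):
    """Fraction of topic-document pairs whose aggregated crowd judgment
    matches the editorial (gold) judgment."""
    hits = sum(
        majority_vote(labels) == editorial[pair]
        for pair, labels in crowd_judgments.items()
    )
    return hits / len(crowd_judgments)

# Toy usage: judgments collected under a 30-second time limit.
editorial = {("q1", "d1"): "relevant", ("q1", "d2"): "not_relevant"}
limited = {
    ("q1", "d1"): ["relevant", "relevant", "not_relevant"],
    ("q1", "d2"): ["not_relevant", "not_relevant", "relevant"],
}
print(agreement_with_editors(limited, editorial))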
APA, Harvard, Vancouver, ISO, and other styles
47

Ji, Yinglei, Chunxiao Mu, Xiuqin Qiu, and Yibao Chen. "A Task Recommendation Model in Mobile Crowdsourcing." Wireless Communications and Mobile Computing 2022 (July 14, 2022): 1–12. http://dx.doi.org/10.1155/2022/9191605.

Full text
Abstract:
With the development of the Internet of Things and the popularity of smart terminal devices, mobile crowdsourcing systems are receiving more and more attention. However, the information overload of crowdsourcing platforms makes workers face difficulties in task selection. This paper proposes a task recommendation model based on the prediction of workers’ mobile trajectories. A recurrent neural network is used to obtain the movement pattern of workers and predict the next destination. In addition, an attention mechanism is added to the task recommendation model in order to capture records that are similar to candidate tasks and to obtain task selection preferences. Finally, we conduct experiments on two real datasets, Foursquare and AMT (Amazon Mechanical Turk), to verify the effectiveness of the proposed recommendation model.
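A compact PyTorch sketch of the architecture outlined in this abstract: a recurrent network encodes a worker's recent location trajectory, and a dot-product attention step scores candidate tasks against that encoding. Layer sizes, embeddings, and the attention form are illustrative assumptions, not the authors' exact model.

import torch
import torch.nn as nn

class TrajectoryTaskRecommender(nn.Module):
    """GRU trajectory encoder plus attention over candidate tasks (sketch)."""

    def __init__(self, num_locations, num_tasks, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.loc_emb = nn.Embedding(num_locations, emb_dim)
        self.task_emb = nn.Embedding(num_tasks, hidden_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, trajectory, candidate_tasks):
        # trajectory: (batch, seq_len) location ids; candidate_tasks: (batch, k) task ids
        _, h = self.gru(self.loc_emb(trajectory))          # h: (1, batch, hidden)
        worker_state = h.squeeze(0).unsqueeze(1)            # (batch, 1, hidden)
        tasks = self.task_emb(candidate_tasks)               # (batch, k, hidden)
        # Dot-product attention between the worker state and candidate tasks.
        scores = torch.bmm(tasks, worker_state.transpose(1, 2)).squeeze(-1)
        return scores.softmax(dim=-1)                        # preference over candidates

# Toy usage: one worker, a 5-step trajectory, 4 candidate tasks.
model = TrajectoryTaskRecommender(num_locations=100, num_tasks=50)
traj = torch.randint(0, 100, (1, 5))
cands = torch.randint(0, 50, (1, 4))
print(model(traj, cands))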
APA, Harvard, Vancouver, ISO, and other styles
48

Matsubara, Masaki, Ria Mae Borromeo, Sihem Amer-Yahia, and Atsuyuki Morishima. "Task Assignment Strategies for Crowd Worker Ability Improvement." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–20. http://dx.doi.org/10.1145/3479519.

Full text
Abstract:
Workers are the most important resource in crowdsourcing. However, only investing in worker-centric needs, such as skill improvement, often conflicts with short-term platform-centric needs, such as task throughput. This paper studies learning strategies in task assignment in crowdsourcing and their impact on platform-centric needs. We formalize learning potential of individual tasks and collaborative tasks, and devise an iterative task assignment and completion approach that implements strategies grounded in learning theories. We conduct experiments to compare several learning strategies in terms of skill improvement, and in terms of task throughput and contribution quality. We discuss how our findings open new research directions in learning and collaboration.
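An illustrative Python sketch of the tension this abstract describes: a greedy assignment that trades off a throughput/quality proxy against learning potential via a weight alpha. The scoring function, skill and difficulty fields, and the weighting are assumptions for illustration, not the paper's formalization.

def assign_tasks(workers, tasks, alpha=0.5):
    """Greedy assignment balancing platform needs (expected accuracy) and
    worker-centric needs (learning potential, i.e. the skill gap)."""
    assignment = {}
    for w in workers:
        def score(t):
            expected_accuracy = min(w["skill"] / t["difficulty"], 1.0)
            learning_potential = max(t["difficulty"] - w["skill"], 0.0)
            return alpha * expected_accuracy + (1 - alpha) * learning_potential
        assignment[w["id"]] = max(tasks, key=score)["id"]
    return assignment

# Toy usage: a low alpha favours learning over immediate throughput.
workers = [{"id": "w1", "skill": 0.4}, {"id": "w2", "skill": 0.9}]
tasks = [{"id": "easy", "difficulty": 0.3}, {"id": "hard", "difficulty": 0.8}]
print(assign_tasks(workers, tasks, alpha=0.3))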
APA, Harvard, Vancouver, ISO, and other styles
49

Pei, Weiping, Zhiju Yang, Monchu Chen, and Chuan Yue. "Quality Control in Crowdsourcing based on Fine-Grained Behavioral Features." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–28. http://dx.doi.org/10.1145/3479586.

Full text
Abstract:
Crowdsourcing is popular for large-scale data collection and labeling, but a major challenge is on detecting low-quality submissions. Recent studies have demonstrated that behavioral features of workers are highly correlated with data quality and can be useful in quality control. However, these studies primarily leveraged coarsely extracted behavioral features, and did not further explore quality control at the fine-grained level, i.e., the annotation unit level. In this paper, we investigate the feasibility and benefits of using fine-grained behavioral features, which are the behavioral features finely extracted from a worker's individual interactions with each single unit in a subtask, for quality control in crowdsourcing. We design and implement a framework named Fine-grained Behavior-based Quality Control (FBQC) that specifically extracts fine-grained behavioral features to provide three quality control mechanisms: (1) quality prediction for objective tasks, (2) suspicious behavior detection for subjective tasks, and (3) unsupervised worker categorization. Using the FBQC framework, we conduct two real-world crowdsourcing experiments and demonstrate that using fine-grained behavioral features is feasible and beneficial in all three quality control mechanisms. Our work provides clues and implications for helping job requesters or crowdsourcing platforms to further achieve better quality control.
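A small scikit-learn sketch of the quality-prediction mechanism described here: per-unit behavioral features feed a supervised classifier that flags low-quality annotations. The feature set, toy data, and choice of random forest are illustrative assumptions; the paper's feature extraction is far richer.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row: behavioral features for one annotation unit (not a whole subtask),
# e.g. dwell time (s), mouse-move distance (px), clicks, scroll events, key presses.
X = np.array([
    [12.3,  950, 4, 2, 18],
    [ 0.8,   40, 1, 0,  0],
    [ 9.7,  720, 3, 1, 11],
    [ 1.1,   55, 1, 0,  1],
    [15.2, 1210, 5, 3, 22],
    [ 0.6,   30, 1, 0,  0],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = acceptable quality, 0 = low quality

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print("per-unit quality predictions:", clf.predict(X_test))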
APA, Harvard, Vancouver, ISO, and other styles
50

Kohler, Rachel, John Purviance, and Kurt Luther. "Supporting Image Geolocation with Diagramming and Crowdsourcing." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 5 (September 21, 2017): 98–107. http://dx.doi.org/10.1609/hcomp.v5i1.13296.

Full text
Abstract:
Geolocation, the process of identifying the precise location in the world where a photo or video was taken, is central to many types of investigative work, from debunking fake news posted on social media to locating terrorist training camps. Professional geolocation is often a manual, time-consuming process that involves searching large areas of satellite imagery for potential matches. In this paper, we explore how crowdsourcing can be used to support expert image geolocation. We adapt an expert diagramming technique to overcome spatial reasoning limitations of novice crowds, allowing them to support an expert’s search. In two experiments (n=1080), we found that diagrams work significantly better than ground-level photos and allow crowds to reduce a search area by half before any expert intervention. We also discuss hybrid approaches to complex image analysis combining crowds, experts, and computer vision.
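A minimal sketch of how crowd input can shrink an expert's search area, assuming workers vote on whether each satellite-imagery grid cell is consistent with the expert's diagram; the grid representation, vote format, and threshold are illustrative assumptions rather than the study's actual aggregation procedure.

def prune_search_area(grid_cells, votes, threshold=0.5):
    """Keep only grid cells that a sufficient fraction of crowd workers marked
    as consistent with the diagram, so the expert searches a smaller area."""
    kept = []
    for cell in grid_cells:
        yes, total = votes.get(cell, (0, 0))
        if total and yes / total >= threshold:
            kept.append(cell)
    return kept

# Toy usage: (yes votes, total votes) per cell; half the cells survive pruning.
votes = {"A1": (4, 5), "A2": (1, 5), "B1": (3, 5), "B2": (0, 5)}
print(prune_search_area(["A1", "A2", "B1", "B2"], votes))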
APA, Harvard, Vancouver, ISO, and other styles