Scientific literature on the topic "Crowdsourcing, classification, task design, crowdsourcing experiments"

Browse the thematic lists of journal articles, books, theses, conference papers, and other academic sources on the topic "Crowdsourcing, classification, task design, crowdsourcing experiments".

Journal articles on the topic "Crowdsourcing, classification, task design, crowdsourcing experiments"

1

Yang, Keyu, Yunjun Gao, Lei Liang, Song Bian, Lu Chen, and Baihua Zheng. "CrowdTC: Crowd-powered Learning for Text Classification." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–23. http://dx.doi.org/10.1145/3457216.

Abstract:
Text classification is a fundamental task in content analysis. Nowadays, deep learning has demonstrated promising performance in text classification compared with shallow models. However, almost all the existing models do not take advantage of the wisdom of human beings to help text classification. Human beings are more intelligent and capable than machine learning models in terms of understanding and capturing the implicit semantic information from text. In this article, we try to take guidance from human beings to classify text. We propose Crowd-powered learning for Text Classification (CrowdTC for short). We design and post the questions on a crowdsourcing platform to extract keywords in text. Sampling and clustering techniques are utilized to reduce the cost of crowdsourcing. Also, we present an attention-based neural network and a hybrid neural network to incorporate the extracted keywords as human guidance into deep neural networks. Extensive experiments on public datasets confirm that CrowdTC improves the text classification accuracy of neural networks by using the crowd-powered keyword guidance.
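
The abstract describes feeding crowd-extracted keywords into neural text classifiers as human guidance. As a rough, hypothetical illustration of that idea (not the CrowdTC architecture, which uses attention-based and hybrid neural networks), the sketch below simply up-weights crowd-chosen keywords in a bag-of-words scorer; the keyword sets and boost factor are invented for the example.

```python
from collections import Counter

# Hypothetical crowd output: keywords workers marked as salient for each class.
CROWD_KEYWORDS = {"sports": {"match", "score", "team"},
                  "finance": {"stock", "market", "earnings"}}

def featurize(text, keyword_boost=2.0):
    """Bag-of-words counts with crowd-extracted keywords up-weighted."""
    counts = Counter(text.lower().split())
    all_keywords = set().union(*CROWD_KEYWORDS.values())
    return {w: c * (keyword_boost if w in all_keywords else 1.0)
            for w, c in counts.items()}

def classify(text):
    """Assign the class whose crowd keywords carry the most weight in the text."""
    feats = featurize(text)
    return max(CROWD_KEYWORDS,
               key=lambda c: sum(v for w, v in feats.items() if w in CROWD_KEYWORDS[c]))

print(classify("The team won the match with a late score"))   # -> sports
print(classify("The stock market reacted to weak earnings"))  # -> finance
```
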
2

Ramírez, Jorge, Marcos Baez, Fabio Casati, and Boualem Benatallah. "Understanding the Impact of Text Highlighting in Crowdsourcing Tasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 144–52. http://dx.doi.org/10.1609/hcomp.v7i1.5268.

Abstract:
Text classification is one of the most common goals of machine learning (ML) projects, and also one of the most frequent human intelligence tasks in crowdsourcing platforms. ML has mixed success in such tasks depending on the nature of the problem, while crowd-based classification has proven to be surprisingly effective, but can be expensive. Recently, hybrid text classification algorithms, combining human computation and machine learning, have been proposed to improve accuracy and reduce costs. One way to do so is to have ML highlight or emphasize portions of text that it believes to be more relevant to the decision. Humans can then rely only on this text or read the entire text if the highlighted information is insufficient. In this paper, we investigate if and under what conditions highlighting selected parts of the text can (or cannot) improve classification cost and/or accuracy, and in general how it affects the process and outcome of the human intelligence tasks. We study this through a series of crowdsourcing experiments running over different datasets and with task designs imposing different cognitive demands. Our findings suggest that highlighting is effective in reducing classification effort but does not improve accuracy - and in fact, low-quality highlighting can decrease it.
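
As a hypothetical sketch of the kind of analysis such crowdsourcing experiments rest on, the snippet below compares accuracy and time-on-task between a "highlight" and a "no_highlight" condition; the records, field names, and numbers are invented, and a real study would add significance testing and cost modelling.

```python
from statistics import mean

# Hypothetical per-judgment records: experimental condition, whether the
# worker's label matched the gold label, and seconds spent on the task.
judgments = [
    {"condition": "highlight",    "correct": True,  "seconds": 18},
    {"condition": "highlight",    "correct": False, "seconds": 15},
    {"condition": "no_highlight", "correct": True,  "seconds": 31},
    {"condition": "no_highlight", "correct": True,  "seconds": 27},
]

def summarize(condition):
    rows = [j for j in judgments if j["condition"] == condition]
    return {"n": len(rows),
            "accuracy": mean(1.0 if j["correct"] else 0.0 for j in rows),
            "mean_seconds": mean(j["seconds"] for j in rows)}

for condition in ("highlight", "no_highlight"):
    print(condition, summarize(condition))
```
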
3

Guo, Shikai, Rong Chen, Hui Li, Tianlun Zhang, and Yaqing Liu. "Identify Severity Bug Report with Distribution Imbalance by CR-SMOTE and ELM." International Journal of Software Engineering and Knowledge Engineering 29, no. 02 (February 2019): 139–75. http://dx.doi.org/10.1142/s0218194019500074.

Abstract:
Manually inspecting bugs to determine their severity is often an enormous but essential software development task, especially when many participants generate a large number of bug reports in a crowdsourced software testing context. Therefore, boosting the capabilities of methods of predicting bug report severity is critically important for determining the priority of fixing bugs. However, typical classification techniques may be adversely affected when the severity distribution of the bug reports is imbalanced, leading to performance degradation in a crowdsourcing environment. In this study, we propose an enhanced oversampling approach called CR-SMOTE to enhance the classification of bug reports with a realistically imbalanced severity distribution. The main idea is to interpolate new instances into the minority category that are near the center of existing samples in that category. Then, we use an extreme learning machine (ELM) — a feedforward neural network with a single layer of hidden nodes — to predict the bug severity. Several experiments were conducted on three datasets from real bug repositories, and the results statistically indicate that the presented approach is robust against real data imbalance when predicting the severity of bug reports. The average accuracies achieved by the ELM in predicting the severity of Eclipse, Mozilla, and GNOME bug reports were 0.780, 0.871, and 0.861, which are higher than those of classifiers by 4.36%, 6.73%, and 2.71%, respectively.
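
The oversampling idea in the abstract, interpolating new minority-class instances near the centre of that class, can be sketched in a few lines. The function below is only an illustration of that interpolation step on invented toy data; it is not the authors' CR-SMOTE procedure, and the ELM classifier that follows it in the paper is omitted.

```python
import random

def centroid_oversample(minority, n_new, alpha_range=(0.1, 0.5), seed=0):
    """Synthesize n_new minority samples by pulling existing samples part of
    the way toward the minority-class centroid."""
    rng = random.Random(seed)
    dim = len(minority[0])
    center = [sum(x[d] for x in minority) / len(minority) for d in range(dim)]
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        a = rng.uniform(*alpha_range)               # interpolation factor
        synthetic.append([b + a * (c - b) for b, c in zip(base, center)])
    return synthetic

# Toy feature vectors for the rare severity class.
minority_reports = [[0.9, 0.1], [0.8, 0.2], [0.85, 0.05]]
print(centroid_oversample(minority_reports, n_new=2))
```
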
4

Baba, Yukino, Hisashi Kashima, Kei Kinoshita, Goushi Yamaguchi, and Yosuke Akiyoshi. "Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 2 (October 6, 2021): 1487–92. http://dx.doi.org/10.1609/aaai.v27i2.18987.

Abstract:
Controlling the quality of tasks is a major challenge in crowdsourcing marketplaces. Most of the existing crowdsourcing services prohibit requesters from posting illegal or objectionable tasks. Operators in the marketplaces have to monitor the tasks continuously to find such improper tasks; however, it is too expensive to manually investigate each task. In this paper, we present the reports of our trial study on automatic detection of improper tasks to support the monitoring of activities by marketplace operators. We perform experiments using real task data from a commercial crowdsourcing marketplace and show that the classifier trained by the operator judgments achieves high accuracy in detecting improper tasks. In addition, to reduce the annotation costs of the operator and improve the classification accuracy, we consider the use of crowdsourcing for task annotation. We hire a group of crowdsourcing (non-expert) workers to monitor posted tasks, and incorporate their judgments into the training data of the classifier. By applying quality control techniques to handle the variability in worker reliability, our results show that the use of non-expert judgments by crowdsourcing workers in combination with expert judgments improves the accuracy of detecting improper crowdsourcing tasks.
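
One standard quality-control step hinted at in the abstract is to weight each non-expert worker's vote by an estimated reliability before turning the votes into training labels. The sketch below assumes the reliabilities are already known (in practice they would be estimated, for example from agreement with operator judgments); the tasks, workers, and numbers are hypothetical.

```python
# Hypothetical worker judgments on whether a posted task is improper,
# plus assumed per-worker reliability estimates.
judgments = {
    "task_1": [("w1", True), ("w2", True), ("w3", False)],
    "task_2": [("w1", False), ("w2", False), ("w3", False)],
}
reliability = {"w1": 0.9, "w2": 0.6, "w3": 0.4}

def weighted_label(votes):
    """Aggregate boolean votes with reliability weights; ties count as proper."""
    score = sum(reliability[w] * (1 if improper else -1) for w, improper in votes)
    return score > 0

training_labels = {task: weighted_label(votes) for task, votes in judgments.items()}
print(training_labels)  # {'task_1': True, 'task_2': False}
```
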
5

Ceschia, Sara, Kevin Roitero, Gianluca Demartini, Stefano Mizzaro, Luca Di Gaspero, and Andrea Schaerf. "Task design in complex crowdsourcing experiments: Item assignment optimization." Computers & Operations Research 148 (December 2022): 105995. http://dx.doi.org/10.1016/j.cor.2022.105995.

6

Sun, Yuyin, Adish Singla, Tori Yan, Andreas Krause, and Dieter Fox. "Evaluating Task-Dependent Taxonomies for Navigation." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 229–38. http://dx.doi.org/10.1609/hcomp.v4i1.13286.

Abstract:
Taxonomies of concepts are important across many application domains, for instance, online shopping portals use catalogs to help users navigate and search for products. Task-dependent taxonomies, e.g., adapting the taxonomy to a specific cohort of users, can greatly improve the effectiveness of navigation and search. However, taxonomies are usually created by domain experts and hence designing task-dependent taxonomies can be an expensive process: this often limits the applications to deploy generic taxonomies. Crowdsourcing-based techniques have the potential to provide a cost-efficient solution to building task-dependent taxonomies. In this paper, we present the first quantitative study to evaluate the effectiveness of these crowdsourcing based techniques. Our experimental study compares different task-dependent taxonomies built via crowdsourcing and generic taxonomies built by experts. We design randomized behavioral experiments on the Amazon Mechanical Turk platform for navigation tasks using these taxonomies resembling real-world applications such as product search. We record various metrics such as the time of navigation, the number of clicks performed, and the search path taken by a participant to navigate the taxonomy to locate a desired object. Our findings show that task-dependent taxonomies built by crowdsourcing techniques can reduce the navigation time by up to 20%. Our results, in turn, demonstrate the power of crowdsourcing for learning complex structures such as semantic taxonomies.
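
As a small, hypothetical illustration of the behavioural metrics listed above (navigation time, clicks, and path length), the snippet below aggregates them per taxonomy condition; the session logs are invented, and a real analysis would control for participants and items.

```python
from statistics import mean

# Hypothetical navigation logs for two taxonomy conditions.
sessions = [
    {"taxonomy": "crowdsourced", "seconds": 42, "clicks": 5, "path": ["root", "shoes", "running"]},
    {"taxonomy": "crowdsourced", "seconds": 38, "clicks": 4, "path": ["root", "shoes", "trail"]},
    {"taxonomy": "expert", "seconds": 55, "clicks": 7, "path": ["root", "apparel", "footwear", "running"]},
    {"taxonomy": "expert", "seconds": 49, "clicks": 6, "path": ["root", "apparel", "footwear", "trail"]},
]

def report(taxonomy):
    rows = [s for s in sessions if s["taxonomy"] == taxonomy]
    return {"mean_seconds": mean(s["seconds"] for s in rows),
            "mean_clicks": mean(s["clicks"] for s in rows),
            "mean_path_length": mean(len(s["path"]) for s in rows)}

for taxonomy in ("crowdsourced", "expert"):
    print(taxonomy, report(taxonomy))
```
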
7

Lin, Christopher, Mausam Mausam, and Daniel Weld. "Dynamically Switching between Synergistic Workflows for Crowdsourcing." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 87–93. http://dx.doi.org/10.1609/aaai.v26i1.8121.

Abstract:
To ensure quality results from unreliable crowdsourced workers, task designers often construct complex workflows and aggregate worker responses from redundant runs. Frequently, they experiment with several alternative workflows to accomplish the task, and eventually deploy the one that achieves the best performance during early trials. Surprisingly, this seemingly natural design paradigm does not achieve the full potential of crowdsourcing. In particular, using a single workflow (even the best) to accomplish a task is suboptimal. We show that alternative workflows can compose synergistically to yield much higher quality output. We formalize the insight with a novel probabilistic graphical model. Based on this model, we design and implement AGENTHUNT, a POMDP-based controller that dynamically switches between these workflows to achieve higher returns on investment. Additionally, we design offline and online methods for learning model parameters. Live experiments on Amazon Mechanical Turk demonstrate the superiority of AGENTHUNT for the task of generating NLP training data, yielding up to 50% error reduction and greater net utility compared to previous methods.
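
AGENTHUNT itself is a POMDP-based controller; as a deliberately simplified, hypothetical stand-in for the underlying idea, the sketch below keeps a Bayesian belief about a binary answer and, at each step, runs whichever workflow is expected to move that belief the most per unit cost. The workflow names, accuracies, and costs are invented.

```python
# Two hypothetical workflows with assumed accuracies and costs.
WORKFLOWS = {"workflow_A": {"accuracy": 0.70, "cost": 1.0},
             "workflow_B": {"accuracy": 0.85, "cost": 3.0}}

def update(belief, observed_yes, accuracy):
    """Bayes update of P(answer is 'yes') after one worker response."""
    like_yes = accuracy if observed_yes else 1 - accuracy
    like_no = (1 - accuracy) if observed_yes else accuracy
    return belief * like_yes / (belief * like_yes + (1 - belief) * like_no)

def expected_shift(belief, accuracy):
    """Expected absolute change in the belief from one more response."""
    p_obs_yes = belief * accuracy + (1 - belief) * (1 - accuracy)
    return (p_obs_yes * abs(update(belief, True, accuracy) - belief)
            + (1 - p_obs_yes) * abs(update(belief, False, accuracy) - belief))

def pick_workflow(belief):
    return max(WORKFLOWS,
               key=lambda w: expected_shift(belief, WORKFLOWS[w]["accuracy"])
                             / WORKFLOWS[w]["cost"])

belief = 0.5
for response in (True, True, False):        # simulated worker responses
    workflow = pick_workflow(belief)
    belief = update(belief, response, WORKFLOWS[workflow]["accuracy"])
    print(workflow, round(belief, 3))
```
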
8

Rothwell, Spencer, Steele Carter, Ahmad Elshenawy, and Daniela Braga. "Job Complexity and User Attention in Crowdsourcing Microtasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 3 (March 28, 2016): 20–25. http://dx.doi.org/10.1609/hcomp.v3i1.13265.

Abstract:
This paper examines the importance of presenting simple, intuitive tasks when conducting microtasking on crowdsourcing platforms. Most crowdsourcing platforms allow the maker of a task to present any length of instructions to crowd workers who participate in their tasks. Our experiments show, however, that most workers who participate in crowdsourcing microtasks do not read the instructions, even when they are very brief. To facilitate success in microtask design, we highlight the importance of making simple, easy-to-grasp tasks that do not rely on instructions for explanation.
9

Qarout, Rehab, Alessandro Checco, Gianluca Demartini, and Kalina Bontcheva. "Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 135–43. http://dx.doi.org/10.1609/hcomp.v7i1.5264.

Abstract:
Crowdsourcing platforms provide a convenient and scalable way to collect human-generated labels on-demand. This data can be used to train Artificial Intelligence (AI) systems or to evaluate the effectiveness of algorithms. The datasets generated by means of crowdsourcing are, however, dependent on many factors that affect their quality. These include, among others, the population sample bias introduced by aspects like task reward, requester reputation, and other filters introduced by the task design. In this paper, we analyse platform-related factors and study how they affect dataset characteristics by running a longitudinal study where we compare the reliability of results collected with repeated experiments over time and across crowdsourcing platforms. Results show that, under certain conditions: 1) experiments replicated across different platforms result in significantly different data quality levels, while 2) the quality of data from repeated experiments over time is stable within the same platform. We identify some key task design variables that cause such variations and propose an experimentally validated set of actions to counteract these effects, thus achieving reliable and repeatable crowdsourced data collection experiments.
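
The core of such a repeatability study can be pictured in a few lines: run the same task design as repeated batches on each platform, score every batch against gold labels, and compare the spread within a platform to the gap between platforms. The numbers below are invented, and a real analysis would use proper significance tests.

```python
from statistics import mean, stdev

# Hypothetical accuracy (agreement with gold labels) of repeated runs of the
# same task design on two platforms over several weeks.
runs = {"platform_A": [0.81, 0.83, 0.80, 0.82],
        "platform_B": [0.71, 0.74, 0.70, 0.73]}

for platform, accuracies in runs.items():
    # Small spread within a platform suggests results are repeatable over time.
    print(platform, "mean", round(mean(accuracies), 3),
          "spread", round(stdev(accuracies), 3))

# A large gap between platform means suggests results do not replicate across platforms.
print("cross-platform gap:", round(mean(runs["platform_A"]) - mean(runs["platform_B"]), 3))
```
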
10

Fu, Donglai, and Yanhua Liu. "Fairness of Task Allocation in Crowdsourcing Workflows." Mathematical Problems in Engineering 2021 (April 23, 2021): 1–11. http://dx.doi.org/10.1155/2021/5570192.

Abstract:
Fairness plays a vital role in crowd computing by attracting its workers. The power of crowd computing stems from a large number of workers potentially available to provide high quality of service and reduce costs. An important challenge in the crowdsourcing market today is the task allocation of crowdsourcing workflows. Requester-centric task allocation algorithms aim to maximize the completion quality of the entire workflow and minimize its total cost, which are discriminatory for workers. The crowdsourcing workflow needs to balance two objectives, namely, fairness and cost. In this study, we propose an alternative greedy approach with four heuristic strategies to address such an issue. In particular, the proposed approach aims to monitor the current status of workflow execution and use heuristic strategies to adjust the parameters of task allocation. We design a two-phase allocation model to accurately match the tasks with workers. The F-Aware allocates each task to the worker that maximizes the fairness and minimizes the cost. We conduct extensive experiments to quantitatively evaluate the proposed algorithms in terms of running time, fairness, and cost by using a customer objective function on the WorkflowSim, a well-known cloud simulation tool. Experimental results based on real-world workflows show that the F-Aware, which is 1% better than the best competitor algorithm, outperforms other optimal solutions in finding the tradeoff between fairness and cost.
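
As a toy, hypothetical sketch of the fairness/cost trade-off described above (not the authors' F-Aware algorithm or its two-phase allocation model), the snippet below assigns each incoming task greedily: cheap workers are preferred, but every task a worker already holds adds a fairness penalty, which spreads the work across the crowd. The worker costs and the penalty weight are invented.

```python
workers = {"w1": {"cost": 2.0}, "w2": {"cost": 3.0}, "w3": {"cost": 2.5}}
assigned = {w: 0 for w in workers}   # tasks already given to each worker

def pick_worker(fairness_weight=1.0):
    # Lower score wins: base cost plus a penalty per task already assigned.
    return min(workers,
               key=lambda w: workers[w]["cost"] + fairness_weight * assigned[w])

for task in ["t1", "t2", "t3", "t4", "t5"]:
    worker = pick_worker()
    assigned[worker] += 1
    print(task, "->", worker)

print("final load per worker:", assigned)
```
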

Conference papers on the topic "Crowdsourcing, classification, task design, crowdsourcing experiments"

1

Kawase, Yasushi, Yuko Kuroki, and Atsushi Miyauchi. "Graph Mining Meets Crowdsourcing: Extracting Experts for Answer Aggregation." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/177.

Abstract:
Aggregating responses from crowd workers is a fundamental task in the process of crowdsourcing. In cases where a few experts are overwhelmed by a large number of non-experts, most answer aggregation algorithms such as the majority voting fail to identify the correct answers. Therefore, it is crucial to extract reliable experts from the crowd workers. In this study, we introduce the notion of "expert core", which is a set of workers that is very unlikely to contain a non-expert. We design a graph-mining-based efficient algorithm that exactly computes the expert core. To answer the aggregation task, we propose two types of algorithms. The first one incorporates the expert core into existing answer aggregation algorithms such as the majority voting, whereas the second one utilizes information provided by the expert core extraction algorithm pertaining to the reliability of workers. We then give a theoretical justification for the first type of algorithm. Computational experiments using synthetic and real-world datasets demonstrate that our proposed answer aggregation algorithms outperform state-of-the-art algorithms.
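
A heavily simplified, hypothetical version of the two-step idea above: score each worker by average agreement with the other workers (a crude proxy for the paper's exact graph-mining expert-core computation), keep the top-scoring workers, and aggregate answers by majority vote within that subset. The answer matrix and the core size are invented.

```python
from collections import Counter

# answers[worker][question] = that worker's label (hypothetical data).
answers = {
    "w1": {"q1": "A", "q2": "B", "q3": "A"},
    "w2": {"q1": "A", "q2": "B", "q3": "A"},
    "w3": {"q1": "B", "q2": "A", "q3": "B"},
    "w4": {"q1": "A", "q2": "B", "q3": "B"},
}

def agreement_score(worker):
    """Average pairwise agreement of one worker with every other worker."""
    others = [v for u, v in answers.items() if u != worker]
    per_other = [sum(answers[worker][q] == o[q] for q in answers[worker]) / len(answers[worker])
                 for o in others]
    return sum(per_other) / len(per_other)

core = sorted(answers, key=agreement_score, reverse=True)[:2]   # keep the top workers

def aggregate(question, voters):
    return Counter(answers[w][question] for w in voters).most_common(1)[0][0]

print("core:", core)
print({q: aggregate(q, core) for q in ["q1", "q2", "q3"]})
```
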
2

Gómez Barrón Sierra, José Pablo, Miguel-Ángel Manso-Callejo, and Ramón Alcarria. "Diseño de estrategias de crowdsourcing en sistemas de información geográfica voluntaria" [Design of crowdsourcing strategies in volunteered geographic information systems]. In 1st Congress in Geomatics Engineering. Valencia: Universitat Politècnica València, 2017. http://dx.doi.org/10.4995/cigeo2017.2017.6629.

Abstract:
This work addresses volunteered geographic information (VGI) as an information system that helps organizations achieve specific goals by outsourcing processes and activities to an online community. A definition of a volunteered geographic information system (VGIS) is proposed, identifying its core components (Project, Participants, Technology); crowdsourcing, the most relevant process for managing information within these types of systems, is then analysed. We analyse several types of crowdsourcing models in the context of VGIS and propose a classification built around the different ways of organizing a community, which include different levels of participation according to the use of three processes: contributory, collaborative and participatory. Based on the study of the different typologies, which are intrinsically linked to the existing levels of involvement and engagement and to the use of participants' cognitive skills, a continuum of participation is identified, presenting two opposite tendencies when designing VGI projects: crowd-based and community-driven, the latter with higher levels of collaboration or even co-creation. On this basis, a set of criteria is proposed for the design of the crowdsourcing strategy of a VGIS, as a roadmap that directs the project. This design and planning tool helps to characterize and define, in a simple way, the general requirements of the processes and activities of a VGIS that will be implemented through a crowdsourcing task, and is the first step in the interdependent design of the project, participation and technology components. The design of subsequent strategies related to the other components of the system must be aligned with the crowdsourcing strategy, and together they will guide the development of the tasks, functionalities and specific technological tools of the system.