A selection of scholarly literature on the topic "Crowdsourcing, classification, task design, crowdsourcing experiments"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Crowdsourcing, classification, task design, crowdsourcing experiments".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the corresponding details are available in the metadata.

Journal articles on the topic "Crowdsourcing, classification, task design, crowdsourcing experiments"

1

Yang, Keyu, Yunjun Gao, Lei Liang, Song Bian, Lu Chen, and Baihua Zheng. "CrowdTC: Crowd-powered Learning for Text Classification." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (2021): 1–23. http://dx.doi.org/10.1145/3457216.

Abstract:
Text classification is a fundamental task in content analysis. Nowadays, deep learning has demonstrated promising performance in text classification compared with shallow models. However, almost all the existing models do not take advantage of the wisdom of human beings to help text classification. Human beings are more intelligent and capable than machine learning models in terms of understanding and capturing the implicit semantic information from text. In this article, we try to take guidance from human beings to classify text. We propose Crowd-powered learning for Text Classification (CrowdTC …
2

Ramírez, Jorge, Marcos Baez, Fabio Casati, and Boualem Benatallah. "Understanding the Impact of Text Highlighting in Crowdsourcing Tasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 144–52. http://dx.doi.org/10.1609/hcomp.v7i1.5268.

Abstract:
Text classification is one of the most common goals of machine learning (ML) projects, and also one of the most frequent human intelligence tasks in crowdsourcing platforms. ML has mixed success in such tasks depending on the nature of the problem, while crowd-based classification has proven to be surprisingly effective, but can be expensive. Recently, hybrid text classification algorithms, combining human computation and machine learning, have been proposed to improve accuracy and reduce costs. One way to do so is to have ML highlight or emphasize portions of text that it believes to be more …
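To make the "ML highlights" idea in the abstract above concrete, here is a minimal Python sketch of one way a requester might generate machine-suggested highlights before posting a classification task: train a simple linear model and wrap its highest-weighted terms in markers. The toy documents, the ** markers, and the TF-IDF-plus-logistic-regression setup are illustrative assumptions, not the configuration studied by Ramírez et al.

    # Illustrative sketch: mark the terms a simple linear model weighs most heavily.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy labelled documents (1 = complaint, 0 = praise) standing in for real training data.
    train_texts = [
        "refund not received after cancellation",
        "package arrived damaged and late",
        "great product, works exactly as described",
        "excellent service and fast delivery",
    ]
    train_labels = [1, 1, 0, 0]

    vectorizer = TfidfVectorizer()
    clf = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

    def highlight(text, top_k=3):
        """Wrap the top_k tokens with the largest absolute model weights in ** markers."""
        tokens = vectorizer.build_analyzer()(text)
        vocab = vectorizer.vocabulary_
        weights = clf.coef_[0]
        scored = {t: abs(weights[vocab[t]]) for t in set(tokens) if t in vocab}
        keep = set(sorted(scored, key=scored.get, reverse=True)[:top_k])
        return " ".join(f"**{t}**" if t in keep else t for t in tokens)

    print(highlight("the refund was late and the package arrived damaged"))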
3

Guo, Shikai, Rong Chen, Hui Li, Tianlun Zhang, and Yaqing Liu. "Identify Severity Bug Report with Distribution Imbalance by CR-SMOTE and ELM." International Journal of Software Engineering and Knowledge Engineering 29, no. 02 (2019): 139–75. http://dx.doi.org/10.1142/s0218194019500074.

Abstract:
Manually inspecting bugs to determine their severity is often an enormous but essential software development task, especially when many participants generate a large number of bug reports in a crowdsourced software testing context. Therefore, boosting the capabilities of methods of predicting bug report severity is critically important for determining the priority of fixing bugs. However, typical classification techniques may be adversely affected when the severity distribution of the bug reports is imbalanced, leading to performance degradation in a crowdsourcing environment. In this study, we …
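As background for the class-imbalance problem described above, the Python sketch below shows plain SMOTE-style oversampling: synthetic minority-class points are created by interpolating between a minority sample and one of its nearest minority neighbours. This is not the CR-SMOTE variant proposed by Guo et al., and the toy "severe" versus "non-severe" data is invented for illustration.

    # Minimal SMOTE-style oversampling sketch (not the paper's CR-SMOTE variant).
    import numpy as np

    rng = np.random.default_rng(0)

    def smote_like(X_minority, n_synthetic, k=3):
        """Interpolate sampled minority points toward one of their k nearest minority neighbours."""
        synthetic = []
        for _ in range(n_synthetic):
            x = X_minority[rng.integers(len(X_minority))]
            d = np.linalg.norm(X_minority - x, axis=1)   # distances to all minority points
            neighbours = np.argsort(d)[1:k + 1]          # k nearest, skipping the point itself
            neighbour = X_minority[rng.choice(neighbours)]
            synthetic.append(x + rng.random() * (neighbour - x))  # random point on the segment
        return np.array(synthetic)

    # Imbalanced toy data: 50 "non-severe" reports versus 5 "severe" ones, two features each.
    X_non_severe = rng.normal(0.0, 1.0, size=(50, 2))
    X_severe = rng.normal(3.0, 1.0, size=(5, 2))
    X_severe_balanced = np.vstack([X_severe, smote_like(X_severe, n_synthetic=45)])
    print(len(X_non_severe), len(X_severe_balanced))     # 50 50 -> classes now balanced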
4

Baba, Yukino, Hisashi Kashima, Kei Kinoshita, Goushi Yamaguchi, and Yosuke Akiyoshi. "Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 2 (2013): 1487–92. http://dx.doi.org/10.1609/aaai.v27i2.18987.

Abstract:
Controlling the quality of tasks is a major challenge in crowdsourcing marketplaces. Most of the existing crowdsourcing services prohibit requesters from posting illegal or objectionable tasks. Operators in the marketplaces have to monitor the tasks continuously to find such improper tasks; however, it is too expensive to manually investigate each task. In this paper, we present the reports of our trial study on automatic detection of improper tasks to support the monitoring of activities by marketplace operators. We perform experiments using real task data from a commercial crowdsourcing marketplace …
5

Ceschia, Sara, Kevin Roitero, Gianluca Demartini, Stefano Mizzaro, Luca Di Gaspero, and Andrea Schaerf. "Task design in complex crowdsourcing experiments: Item assignment optimization." Computers & Operations Research 148 (December 2022): 105995. http://dx.doi.org/10.1016/j.cor.2022.105995.

6

Sun, Yuyin, Adish Singla, Tori Yan, Andreas Krause, and Dieter Fox. "Evaluating Task-Dependent Taxonomies for Navigation." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 229–38. http://dx.doi.org/10.1609/hcomp.v4i1.13286.

Abstract:
Taxonomies of concepts are important across many application domains, for instance, online shopping portals use catalogs to help users navigate and search for products. Task-dependent taxonomies, e.g., adapting the taxonomy to a specific cohort of users, can greatly improve the effectiveness of navigation and search. However, taxonomies are usually created by domain experts and hence designing task-dependent taxonomies can be an expensive process: this often limits the applications to deploy generic taxonomies. Crowdsourcing-based techniques have the potential to provide a cost-efficient solution …
7

Lin, Christopher, Mausam, and Daniel Weld. "Dynamically Switching between Synergistic Workflows for Crowdsourcing." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2012): 87–93. http://dx.doi.org/10.1609/aaai.v26i1.8121.

Abstract:
To ensure quality results from unreliable crowdsourced workers, task designers often construct complex workflows and aggregate worker responses from redundant runs. Frequently, they experiment with several alternative workflows to accomplish the task, and eventually deploy the one that achieves the best performance during early trials. Surprisingly, this seemingly natural design paradigm does not achieve the full potential of crowdsourcing. In particular, using a single workflow (even the best) to accomplish a task is suboptimal. We show that alternative workflows can compose synergistically …
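The redundancy-plus-aggregation baseline that the abstract refers to is often a simple majority vote over repeated worker answers, sketched below in Python. The item IDs and labels are invented; the paper itself goes further and switches dynamically between alternative workflows instead of relying on a single aggregation rule.

    # Minimal sketch of redundant labelling followed by majority-vote aggregation.
    from collections import Counter

    # Invented crowd answers: item id -> labels collected from three workers each.
    responses = {
        "item-1": ["cat", "cat", "dog"],
        "item-2": ["dog", "dog", "dog"],
        "item-3": ["cat", "dog", "bird"],
    }

    def majority_vote(labels):
        """Return the most frequent label and the share of workers who chose it."""
        label, count = Counter(labels).most_common(1)[0]
        return label, count / len(labels)

    for item, labels in responses.items():
        label, agreement = majority_vote(labels)
        print(f"{item}: {label} (agreement {agreement:.0%})")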
8

Rothwell, Spencer, Steele Carter, Ahmad Elshenawy, and Daniela Braga. "Job Complexity and User Attention in Crowdsourcing Microtasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 3 (March 28, 2016): 20–25. http://dx.doi.org/10.1609/hcomp.v3i1.13265.

Abstract:
This paper examines the importance of presenting simple, intuitive tasks when conducting microtasking on crowdsourcing platforms. Most crowdsourcing platforms allow the maker of a task to present any length of instructions to crowd workers who participate in their tasks. Our experiments show, however, most workers who participate in crowdsourcing microtasks do not read the instructions, even when they are very brief. To facilitate success in microtask design, we highlight the importance of making simple, easy to grasp tasks that do not rely on instructions for explanation.
9

Qarout, Rehab, Alessandro Checco, Gianluca Demartini, and Kalina Bontcheva. "Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 135–43. http://dx.doi.org/10.1609/hcomp.v7i1.5264.

Abstract:
Crowdsourcing platforms provide a convenient and scalable way to collect human-generated labels on-demand. This data can be used to train Artificial Intelligence (AI) systems or to evaluate the effectiveness of algorithms. The datasets generated by means of crowdsourcing are, however, dependent on many factors that affect their quality. These include, among others, the population sample bias introduced by aspects like task reward, requester reputation, and other filters introduced by the task design. In this paper, we analyse platform-related factors and study how they affect dataset characteristics …
10

Fu, Donglai, and Yanhua Liu. "Fairness of Task Allocation in Crowdsourcing Workflows." Mathematical Problems in Engineering 2021 (April 23, 2021): 1–11. http://dx.doi.org/10.1155/2021/5570192.

Abstract:
Fairness plays a vital role in crowd computing by attracting its workers. The power of crowd computing stems from a large number of workers potentially available to provide high quality of service and reduce costs. An important challenge in the crowdsourcing market today is the task allocation of crowdsourcing workflows. Requester-centric task allocation algorithms aim to maximize the completion quality of the entire workflow and minimize its total cost, which are discriminatory for workers. The crowdsourcing workflow needs to balance two objectives, namely, fairness and cost. In this study, we …
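To illustrate the fairness-versus-cost tension discussed above, the toy Python sketch below assigns tasks greedily, scoring each candidate worker by a weighted sum of price and current workload so that a larger fairness weight spreads tasks more evenly. The worker rates, the workload-based fairness proxy, and the weight value are assumptions made for illustration, not the method developed by Fu and Liu.

    # Toy greedy allocation balancing requester cost against an even spread of work.
    worker_rates = {"w1": 0.05, "w2": 0.08, "w3": 0.12}   # invented price per task (USD)
    fairness_weight = 0.5                                  # 0 = always cheapest; larger = more even

    assigned = {w: 0 for w in worker_rates}                # tasks given to each worker so far

    def pick_worker():
        """Choose the worker minimising price plus a penalty for already-high workload."""
        return min(worker_rates,
                   key=lambda w: worker_rates[w] + fairness_weight * assigned[w])

    for _ in range(9):
        assigned[pick_worker()] += 1

    print(assigned)   # with fairness_weight = 0.5 the nine tasks are split evenly, 3 per worker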