A selection of scholarly literature on the topic "Crowdsourcing experiments"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Crowdsourcing experiments".

Next to each work in the bibliography you will find an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Crowdsourcing experiments"

1

Ramírez, Jorge, Burcu Sayin, Marcos Baez, Fabio Casati, Luca Cernuzzi, Boualem Benatallah, and Gianluca Demartini. "On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–34. http://dx.doi.org/10.1145/3479531.

Abstract:
Crowdsourcing is being increasingly adopted as a platform to run studies with human subjects. Running a crowdsourcing experiment involves several choices and strategies to successfully port an experimental design into an otherwise uncontrolled research environment, e.g., sampling crowd workers, mapping experimental conditions to micro-tasks, or ensuring quality contributions. While several guidelines inform researchers in these choices, guidance on how and what to report from crowdsourcing experiments has been largely overlooked. If under-reported, implementation choices constitute sources of variability that can affect the experiment's reproducibility and prevent a fair assessment of research outcomes. In this paper, we examine the current state of reporting of crowdsourcing experiments and offer guidance to address the associated reporting issues. We start by identifying sensible implementation choices, relying on existing literature and interviews with experts, and then extensively analyze the reporting of 171 crowdsourcing experiments. Informed by this process, we propose a checklist for reporting crowdsourcing experiments.
2

Danilchuk, M. V. "THE POTENTIAL OF THE CROWDSOURCING AS A METHOD OF LINGUISTIC EXPERIMENT." Bulletin of Kemerovo State University, no. 4 (December 23, 2018): 198–204. http://dx.doi.org/10.21603/2078-8975-2018-4-198-204.

Abstract:
The present research considers crowdsourcing as a method of linguistic experiment. The paper features an experiment with the following algorithm: 1) problem statement, 2) questionnaire development, and 3) questionnaire testing. The paper includes recommendations on organizing a crowdsourcing project, as well as on issues such as respondents' motivation, questionnaire design, choice of crowdsourcing platform, and data export. The linguistic experiment made it possible to obtain data on the potential of phonosemantic analysis for solving naming problems in marketing. The associations of the brand name designer matched those of the majority of the Internet panellists. The experiment showed that crowdsourcing is a readily accessible method within the network society. It makes it possible to obtain objective data and demonstrates high research capabilities. The described procedure of the crowdsourcing project can be used in various linguistic experiments.
3

Makiguchi, Motohiro, Daichi Namikawa, Satoshi Nakamura, Taiga Yoshida, Masanori Yokoyama, and Yuji Takano. "Proposal and Initial Study for Animal Crowdsourcing." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 2 (September 5, 2014): 40–41. http://dx.doi.org/10.1609/hcomp.v2i1.13185.

Abstract:
We focus on animals as a resource of processing capability in crowdsourcing and propose Animal Crowdsourcing (which we also call "Animal Cloud"), an approach that solves problems through cooperation between computers and humans or animals. This paper gives an overview of Animal Crowdsourcing and reports interim results of our learning experiments using rats (Long-Evans rats) to verify the feasibility of Animal Crowdsourcing.
4

Lutz, Johannes. "The Validity of Crowdsourcing Data in Studying Anger and Aggressive Behavior." Social Psychology 47, no. 1 (January 2016): 38–51. http://dx.doi.org/10.1027/1864-9335/a000256.

Abstract:
Crowdsourcing platforms provide an affordable approach for recruiting large and diverse samples in a short time. Past research has shown that researchers can obtain reliable data from these sources, at least in domains of research that are not affectively involving. The goal of the present study was to test whether crowdsourcing platforms can also be used to conduct experiments that involve the induction of aversive affective states. First, a laboratory experiment with German university students was conducted in which a frustrating task induced anger and aggressive behavior. This experiment was then replicated online using five crowdsourcing samples. The results suggest that participants in the online samples reacted to the anger manipulation very similarly to participants in the laboratory experiment. However, effect sizes were smaller in crowdsourcing samples with non-German participants, while a crowdsourcing sample with exclusively German participants yielded virtually the same effect size as the laboratory.
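
The comparison above hinges on effect sizes computed separately for each sample. As a generic illustration (with invented numbers, not the study's data), Cohen's d between a provoked and a control group can be computed as follows:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d using the pooled standard deviation of the two groups."""
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                 / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Invented anger ratings (1-7 scale) after a frustrating vs. a neutral task.
frustrated = [5.1, 4.8, 6.0, 5.5, 4.9]
control = [3.2, 2.9, 3.8, 3.5, 3.1]
print(f"d = {cohens_d(frustrated, control):.2f}")
```

Computing d in each sample (laboratory, German crowd, non-German crowds) and comparing the values is one way to reproduce the kind of contrast the abstract reports.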
5

Jiang, Ming, Zhiqi Shen, Shaojing Fan, and Qi Zhao. "SALICON: a web platform for crowdsourcing behavioral experiments." Journal of Vision 17, no. 10 (August 31, 2017): 704. http://dx.doi.org/10.1167/17.10.704.

6

Della Mea, Vincenzo, Eddy Maddalena, and Stefano Mizzaro. "Mobile crowdsourcing: four experiments on platforms and tasks." Distributed and Parallel Databases 33, no. 1 (October 16, 2014): 123–41. http://dx.doi.org/10.1007/s10619-014-7162-x.

7

Kandylas, Vasilis, Omar Alonso, Shiroy Choksey, Kedar Rudre, and Prashant Jaiswal. "Automating Crowdsourcing Tasks in an Industrial Environment." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 95–96. http://dx.doi.org/10.1609/hcomp.v1i1.13056.

Abstract:
Crowdsourcing-based applications are starting to gain traction in industrial environments. Crowdsourcing research has shown that it is possible to obtain good-quality labels at a fraction of the cost and time. However, implementing such applications at large scale requires new infrastructure. In this demo we present a system that allows the automation of crowdsourcing tasks for information retrieval experiments.
8

Ku, Chih-Hao, and Maryam Firoozi. "The Use of Crowdsourcing and Social Media in Accounting Research." Journal of Information Systems 33, no. 1 (November 1, 2017): 85–111. http://dx.doi.org/10.2308/isys-51978.

Abstract:
In this study, we investigate the use of crowdsourcing websites in accounting research. Our analysis shows that the use of crowdsourcing in accounting research is relatively low, and these websites have mainly been used to collect data through surveys and to conduct experiments. Next, we compare and discuss papers related to crowdsourcing in the accounting area with research in computer science (CS) and information systems (IS), which are more advanced in using crowdsourcing websites. We then focus on Amazon Mechanical Turk, one of the most widely used crowdsourcing websites in academic research, to investigate what types of tasks can be done through this platform. Based on our task analysis, one of the areas in accounting research that can benefit from crowdsourcing websites is research on social media content. We therefore discuss how research in CS, IS, and crowdsourcing websites can help researchers improve their work on social media.
9

Baba, Yukino, Hisashi Kashima, Kei Kinoshita, Goushi Yamaguchi, and Yosuke Akiyoshi. "Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 2 (October 6, 2021): 1487–92. http://dx.doi.org/10.1609/aaai.v27i2.18987.

Abstract:
Controlling the quality of tasks is a major challenge in crowdsourcing marketplaces. Most existing crowdsourcing services prohibit requesters from posting illegal or objectionable tasks. Operators of the marketplaces have to monitor tasks continuously to find such improper tasks; however, it is too expensive to investigate each task manually. In this paper, we report on a trial study of automatically detecting improper tasks to support the monitoring activities of marketplace operators. We perform experiments using real task data from a commercial crowdsourcing marketplace and show that a classifier trained on operator judgments achieves high accuracy in detecting improper tasks. In addition, to reduce the operator's annotation costs and improve classification accuracy, we consider the use of crowdsourcing for task annotation. We hire a group of (non-expert) crowdsourcing workers to monitor posted tasks and incorporate their judgments into the training data of the classifier. By applying quality control techniques to handle variability in worker reliability, our results show that using non-expert judgments from crowdsourcing workers in combination with expert judgments improves the accuracy of detecting improper crowdsourcing tasks.
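
A stripped-down illustration of this kind of pipeline (a sketch with made-up task texts, not the authors' system) is to vectorize task descriptions and fit a classifier on operator judgments; quality-controlled crowd judgments could then be appended as extra training rows:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: task descriptions with operator judgments
# (1 = improper, 0 = acceptable).
tasks = [
    "Transcribe the attached receipt into a spreadsheet",
    "Create 50 fake accounts and post positive reviews",
    "Label each image as cat or dog",
    "Solve these CAPTCHAs as fast as possible",
]
operator_labels = [0, 1, 0, 1]

# Crowd judgments for additional tasks could be appended here after quality
# control (e.g., a majority vote over several workers), as the paper suggests.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tasks, operator_labels)

print(model.predict(["Write a short product description for this photo"]))
```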
10

Liu, Chong, and Yu-Xiang Wang. "Doubly Robust Crowdsourcing." Journal of Artificial Intelligence Research 73 (January 12, 2022): 209–29. http://dx.doi.org/10.1613/jair.1.13304.

Abstract:
Large-scale labeled datasets are the indispensable fuel that ignites the AI revolution as we see it today. Most such datasets are constructed using crowdsourcing services such as Amazon Mechanical Turk, which provide noisy labels from non-experts at a fair price. The sheer size of such datasets means that it is only feasible to collect a few labels per data point. We formulate the problem of test-time label aggregation as a statistical estimation problem of inferring the expected voting score. By imitating workers with supervised learners and using them in a doubly robust estimation framework, we prove that the variance of the estimate can be substantially reduced, even if the learner is a poor approximation. Synthetic and real-world experiments show that, by combining the doubly robust approach with adaptive worker/item selection rules, we often need a much lower labeling cost to achieve nearly the same accuracy as in the ideal world where all workers label all data points.
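
The doubly robust idea can be sketched roughly as follows (a minimal illustration under simplifying assumptions, such as a known, uniform labeling probability; it is not the authors' implementation): a supervised learner imputes the label every worker would give, and the few observed labels correct that imputation, so the estimate of the expected voting score stays reasonable even when the learner is a poor approximation.

```python
import numpy as np

def doubly_robust_vote_score(predicted, observed, labeling_prob):
    """Doubly robust estimate of one item's expected voting score.

    predicted     : array of shape (n_workers,), a learner's guess of the
                    label each worker would give (the imputation model).
    observed      : dict {worker_index: actual_label} for the few workers
                    who actually labeled the item.
    labeling_prob : probability that any given worker labeled the item
                    (assumed known and uniform in this sketch).
    """
    n_workers = len(predicted)
    # Baseline: average the imputed labels over all workers.
    estimate = predicted.mean()
    # Correction: re-weight the residuals of the observed labels.
    for w, label in observed.items():
        estimate += (label - predicted[w]) / (labeling_prob * n_workers)
    return estimate

# Toy usage: 50 workers, binary labels, only 3 of them labeled the item.
rng = np.random.default_rng(0)
predicted = rng.random(50)          # learner's per-worker predictions
observed = {3: 1, 17: 1, 42: 0}     # collected crowd labels
print(doubly_robust_vote_score(predicted, observed, labeling_prob=3 / 50))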

Dissertations on the topic "Crowdsourcing experiments"

1

Ramirez, Medina Jorge Daniel. "Strategies for addressing performance concerns and bias in designing, running, and reporting crowdsourcing experiment." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/321908.

Abstract:
Crowdsourcing involves releasing tasks on the internet for people with diverse backgrounds and skills to solve. Its adoption has come a long way, from scaling up problem-solving to becoming an environment for running complex experiments. Designing tasks to obtain reliable results is not straightforward, as it requires many design choices that grow with the complexity of crowdsourcing projects, often demanding multiple trial-and-error iterations to configure properly. These inherent characteristics of crowdsourcing, the complexity of the design space, and the heterogeneity of the crowd make quality control a major concern and an integral part of task design. Despite all the progress and guidelines for developing effective tasks, crowdsourcing is still addressed as an "art" rather than an exact science, in part due to the challenges related to task design, but also because crowdsourcing now allows more complex use cases for which the available support has not yet caught up. This leaves researchers and practitioners at the forefront, often relying on intuition instead of informed decisions. Running controlled experiments on crowdsourcing platforms is a prominent example. Despite their importance, experiments are not yet first-class citizens on these platforms, forcing researchers to build custom features to compensate for the lack of support, and pitfalls in this process may be detrimental to the experimental outcome. In this thesis, therefore, our goal is to attend to the need of moving crowdsourcing from art to science from two perspectives that interplay with each other: providing guidance on task design through experimentation, and supporting the experimentation process itself. First, we select classification problems as a use case, given their importance and pervasive nature, and aim to bring awareness, empirical evidence, and guidance to previously unexplored task design choices in order to address performance concerns. Second, we aim to make crowdsourcing accessible to researchers and practitioners from all backgrounds, reducing the need for in-depth knowledge of known biases in crowdsourcing platforms, of experimental methods, and of the programming skills required to overcome the limitations of crowdsourcing providers while running experiments. We start by proposing task design strategies that address workers' performance, quality and time, in crowdsourced classification tasks. We then distill the challenges associated with running controlled crowdsourcing experiments, propose coping strategies to address these challenges, and introduce solutions that help researchers report their crowdsourcing experiments, moving crowdsourcing toward standardized reporting.
2

Eslick, Ian S. (Ian Scott). "Crowdsourcing health discoveries : from anecdotes to aggregated self-experiments." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/91433.

Abstract:
Nearly one quarter of US adults read patient-generated health information found on blogs, forums and social media; many say they use this information to influence everyday health decisions. Topics of discussion in online forums are often poorly-addressed by existing, clinical research, so a patient's reported experiences are the only evidence. No rigorous methods exist to help patients leverage anecdotal evidence to make better decisions. This dissertation reports on multiple prototype systems that help patients augment anecdote with data to improve individual decision making, optimize healthcare delivery, and accelerate research. The web-based systems were developed through a multi-year collaboration with individuals, advocacy organizations, healthcare providers, and biomedical researchers. The result of this work is a new scientific model for crowdsourcing health insights: Aggregated Self-Experiments. The self-experiment, a type of single-subject (n-of-1) trial, formally validates the effectiveness of an intervention on a single person. Aggregated Personal Experiments enables user communities to translate anecdotal correlations into repeatable trials that can validate efficacy in the context of their daily lives. Aggregating the outcomes of multiple trials improves the efficiency of future trials and enables users to prioritize trials for a given condition. Successful outcomes from many patients provide evidence to motivate future clinical research. The model, and the design principles that support it were evaluated through a set of focused user studies, secondary data analyses, and experience with real-world deployments.
3

McLeod, Ryan Nathaniel. "A PROOF OF CONCEPT FOR CROWDSOURCING COLOR PERCEPTION EXPERIMENTS." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1269.

Abstract:
Accurately quantifying the human perception of color is an unsolved problem. There are dozens of numerical systems for quantifying colors and how we as humans perceive them, but as a whole, they are far from perfect. The ability to accurately measure color for reproduction and verification is critical to industries that work with textiles, paints, food and beverages, displays, and media compression algorithms. Because the science of color deals with the body, the mind, and the subjective study of perception, building models of color requires largely empirical data over pure analytical science. Much of this data is extremely dated, comes from small and/or homogeneous data sets, and is hard to compare. While these studies have somewhat advanced our understanding of color, making significant further progress without improved datasets has proven difficult if not impossible. I propose new methods of crowdsourcing color experiments through color-accurate mobile devices to help develop a massive, global set of color perception data to aid in creating a more accurate model of human color perception.
4

Goucher-Lambert, Kosa Kendall. "Investigating Decision Making in Engineering Design Through Complementary Behavioral and Cognitive Neuroimaging Experiments." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/910.

Abstract:
Decision-making is a fundamental process of human thinking and behavior. In engineering design, decision-making is studied from two different points of view: users and designers. User-focused design studies tend to investigate ways to better inform the design process through the elicitation of preferences or information. Designer studies are broad in nature, but usually attempt to illustrate and understand some aspect of designer behavior, such as ideation, fixation, or collaboration. Despite their power, both qualitative and quantitative research methods are ultimately limited by the fact that they rely on direct input from the research participants themselves. This can be problematic, as individuals may not be able to accurately represent what they are truly thinking, feeling, or desiring at the time of the decision. A fundamental goal in both user- and designer-focused studies is to understand how the mind works while individuals are making decisions. This dissertation addresses these issues through the use of complementary behavioral and neuroimaging experiments, uncovering insights into how the mind processes design-related decision-making and the implications of those processes. To examine user decision-making, a visual conjoint analysis (a preference modeling approach) was utilized for sustainable preference judgments. Here, a novel preference-modeling framework was employed, allowing for the real-time calculation of dependent environmental impact metrics during individual choice decisions. However, in difficult moral and emotional decision-making scenarios, such as those involving sustainability, traditional methods of uncovering user preferences have proven to be inconclusive. To overcome these shortcomings, a neuroimaging approach was used. Specifically, study participants completed preference judgments for sustainable products inside a functional magnetic resonance imaging (fMRI) scanner. Results indicated that theory of mind and moral reasoning processes occur during product evaluations involving sustainability. Designer decision-making was explored using an analogical reasoning and concept development experiment. First, a crowdsourcing method was used to obtain meaningful analogical stimuli, which were validated using a behavioral experiment. Following this, fMRI was used to uncover the neural mechanisms associated with analogical reasoning in design. Results demonstrated that analogies generally benefit designers, particularly after significant time on idea generation has taken place. Neuroimaging data helped to show two distinct brain activation networks based upon reasoning with and without analogies. We term these fixation driven external search and analogically driven internal search. Fixation driven external search shows designers during impasse, as increased activation in brain regions associated with visual processing causes them to direct attention outward in search of inspiration. Conversely, during analogically driven internal search, significant areas of activation are observed in bilateral temporal and left parietal regions of the brain. These brain regions are significant, as prior research has linked them to semantic word-processing, directing attention to memory retrieval, and insight during problem solving. It is during analogically driven internal search that brain activity shows the most effective periods of ideation by participants.
5

Andersson, David. "Diversifying Demining : An Experimental Crowdsourcing Method for Optical Mine Detection." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15813.

Abstract:

This thesis explores the concepts of crowdsourcing and the ability of diversity, applied to optical mine detection. The idea is to use the human eye and wide and diverse workforce available on the Internet to detect mines, in addition to computer algorithms.

The theory of diversity in problem solving is discussed, especially the Diversity Trumps Ability Theorem and the Diversity Prediction Theorem, and how they should be carried out for possible applications such as contrast interpretation and area reduction respectively.

A simple contrast interpretation experiment is carried out comparing the results of a crowd of laymen with those of a crowd of experts, with each crowd examining extracts from hyperspectral images and classifying the number of objects or mines and the type of terrain. Due to the poor participation rate of the expert group and an erroneous experiment introduction, the experiment does not yield any statistically significant results; therefore, no conclusion is drawn.

Experiment improvements are proposed as well as possible future applications.


6

Ichatha, Stephen K. "The Role of Empowerment in Crowdsourced Customer Service." 2013. http://scholarworks.gsu.edu/bus_admin_diss/18.

Abstract:
For decades, researchers have seen employee empowerment as the means to achieving a more committed workforce that would deliver better outcomes. The prior conceptual and descriptive research focused on structural empowerment, or workplace mechanisms for generating empowerment, and psychological empowerment, the felt empowerment. Responding to calls for intervention studies, this research experimentally tests the effects of structural empowerment changes, through different degrees of decision-making authority and access to customer-relationship information, on psychological empowerment and subsequent work-related outcomes. Using a virtual contact center simulation, crowdsourced workers responded to customer requests. Greater decision authority and access to customer-relationship information resulted in higher levels of psychological empowerment which in turn resulted in task satisfaction and task attractiveness outcomes among the crowdsourced customer service workers.
7

Shergadwala, Murtuza. "SEQUENTIAL INFORMATION ACQUISITION AND DECISION MAKING IN DESIGN CONTESTS: THEORETICAL AND EXPERIMENTAL STUDIES." Thesis, 2020.

Abstract:

The primary research question of this dissertation is: How do contestants make sequential design decisions under the influence of competition? To address this question, I study the influence of three factors, which can be controlled by the contest organizers, on the contestants' sequential information acquisition and decision-making behaviors. These factors are (i) a contestant's domain knowledge, (ii) the framing of a design problem, and (iii) information about historical contests. The central hypothesis is that by conducting controlled behavioral experiments we can acquire data on contestant behaviors that can be used to calibrate computational models of contestants' sequential decision-making behaviors, thereby enabling predictions about the design outcomes. The behavioral results suggest that (i) contestants better understand problem constraints and generate more feasible design solutions when a design problem is framed in a domain-specific context rather than a domain-independent context, (ii) contestants' efforts to acquire information about a design artifact in order to make design improvements are significantly affected by the information provided to them about the opponent who is competing to achieve the same objectives, and (iii) contestants make information acquisition decisions, such as when to stop acquiring information, based on various criteria, such as the number of resources, the target objective value, and the observed amount of improvement in their design quality. Moreover, the threshold values of such criteria are influenced by the information the contestants have about their opponent. The results imply that (i) by understanding the influence of an individual's domain knowledge and the framing of a problem, we can provide decision-support tools to contestants in engineering design contexts so that they can better acquire problem-specific information, (ii) we can enable contest designers to decide what information to share to improve the quality of the design outcomes of a contest, and (iii) from an educational standpoint, we can enable instructors to provide students with accurate assessments of their domain knowledge by understanding students' information acquisition and decision-making behaviors in their design projects. The primary contribution of this dissertation is the set of computational models of an individual's sequential decision-making process that incorporate the behavioral results discussed above in competitive design scenarios. Moreover, a framework for conducting factorial investigations of human decision making through a combination of theory and behavioral experimentation is illustrated.


Books on the topic "Crowdsourcing experiments"

1

Archambault, Daniel, Helen Purchase, and Tobias Hoßfeld, eds. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4.

2

Archambault, Daniel, Helen Purchase, and Tobias Hoßfeld. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22 – 27, 2015, ... Springer, 2017.


Book chapters on the topic "Crowdsourcing experiments"

1

Egger-Lampl, Sebastian, Judith Redi, Tobias Hoßfeld, Matthias Hirth, Sebastian Möller, Babak Naderi, Christian Keimel, and Dietmar Saupe. "Crowdsourcing Quality of Experience Experiments." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 154–90. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_7.

2

Hirth, Matthias, Jason Jacques, Peter Rodgers, Ognjen Scekic, and Michael Wybrow. "Crowdsourcing Technology to Support Academic Research." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 70–95. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_4.

3

Borgo, Rita, Bongshin Lee, Benjamin Bach, Sara Fabrikant, Radu Jianu, Andreas Kerren, Stephen Kobourov, et al. "Crowdsourcing for Information Visualization: Promises and Pitfalls." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 96–138. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_5.

4

Gadiraju, Ujwal, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault, and Brian Fisher. "Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 6–26. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_2.

5

Archambault, Daniel, Helen C. Purchase, and Tobias Hoßfeld. "Evaluation in the Crowd: An Introduction." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 1–5. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_1.

6

Martin, David, Sheelagh Carpendale, Neha Gupta, Tobias Hoßfeld, Babak Naderi, Judith Redi, Ernestasia Siahaan, and Ina Wechsung. "Understanding the Crowd: Ethical and Practical Matters in the Academic Use of Crowdsourcing." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 27–69. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_3.

7

Edwards, Darren J., Linda T. Kaastra, Brian Fisher, Remco Chang, and Min Chen. "Cognitive Information Theories of Psychology and Applications with Visualization and HCI Through Crowdsourcing Platforms." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 139–53. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_6.

8

Gadiraju, Ujwal, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault, and Brian Fisher. "Erratum to: Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, E1. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_8.

9

Abou Chahine, Ramzi, Dongjae Kwon, Chungman Lim, Gunhyuk Park, and Hasti Seifi. "Vibrotactile Similarity Perception in Crowdsourced and Lab Studies." In Haptics: Science, Technology, Applications, 255–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06249-0_29.

Abstract:
Crowdsourcing can enable rapid data collection for haptics research, yet little is known about its validity in comparison to controlled lab experiments. Furthermore, no data exists on how different smartphone platforms impact the crowdsourcing results. To answer these questions, we conducted four vibrotactile (VT) similarity perception studies on iOS and Android smartphones in the lab and through Amazon Mechanical Turk (MTurk). Participants rated the pairwise similarities of 14 rhythmic VT patterns on their smartphones or a lab device. The similarity ratings from the lab and MTurk experiments suggested a very strong correlation for iOS devices (r_s = 0.9) and a lower but still strong correlation for Android phones (r_s = 0.68). In addition, we found a stronger correlation between the crowdsourced iOS and Android ratings (r_s = 0.78) compared to the correlation between the iOS and Android data in the lab (r_s = 0.65). We provide further insights into these correlations using the perceptual spaces obtained from the four datasets. Our results provide preliminary evidence for the validity of crowdsourced VT similarity studies, especially on iOS devices.
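
The reported r_s values are Spearman rank correlations between pairwise similarity ratings. As a minimal illustration (hypothetical numbers, not the study's data):

```python
from scipy.stats import spearmanr

# Hypothetical mean similarity ratings for the same ordered pairs of
# vibrotactile patterns, collected in the lab and via crowdsourcing.
lab_ratings = [0.10, 0.35, 0.80, 0.55, 0.20, 0.90, 0.40]
crowd_ratings = [0.15, 0.30, 0.75, 0.60, 0.25, 0.85, 0.45]

r_s, p_value = spearmanr(lab_ratings, crowd_ratings)
print(f"Spearman r_s = {r_s:.2f} (p = {p_value:.3f})")
```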
10

Zallot, Camilla, Gabriele Paolacci, Jesse Chandler, and Itay Sisso. "Crowdsourcing in observational and experimental research." In Handbook of Computational Social Science, Volume 2, 140–57. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003025245-12.


Conference papers on the topic "Crowdsourcing experiments"

1

Saffo, David, Caglar Yildirim, Sara Di Bartolomeo, and Cody Dunne. "Crowdsourcing Virtual Reality Experiments using VRChat." In CHI '20: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3334480.3382829.

2

Takoulidou, Eirini, and Konstantinos Chorianopoulos. "Crowdsourcing experiments with a video analytics system." In 2015 6th International Conference on Information, Intelligence, Systems and Applications (IISA). IEEE, 2015. http://dx.doi.org/10.1109/iisa.2015.7387979.

3

Choi, Jinhan, Changhoon Oh, Bongwon Suh, and Nam Wook Kim. "VisLab: Crowdsourcing Visualization Experiments in the Wild." In CHI '21: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3411763.3451826.

4

Aljohani, Asmaa, and James Jones. "Conducting Malicious Cybersecurity Experiments on Crowdsourcing Platforms." In BDE 2021: The 2021 3rd International Conference on Big Data Engineering. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3468920.3468942.

5

Thuan, Nguyen Hoang, Pedro Antunes, and David Johnstone. "Pilot experiments on a designed crowdsourcing decision tool." In 2016 IEEE 20th International Conference on Computer Supported Cooperative Work in Design (CSCWD). IEEE, 2016. http://dx.doi.org/10.1109/cscwd.2016.7566058.

6

Ramirez, Jorge, Marcos Baez, Fabio Casati, Luca Cernuzzi, and Boualem Benatallah. "Challenges and strategies for running controlled crowdsourcing experiments." In 2020 XLVI Latin American Computing Conference (CLEI). IEEE, 2020. http://dx.doi.org/10.1109/clei52000.2020.00036.

7

Yamamoto, Ayako, Toshio Irino, Kenichi Arai, Shoko Araki, Atsunori Ogawa, Keisuke Kinoshita, and Tomohiro Nakatani. "Comparison of Remote Experiments Using Crowdsourcing and Laboratory Experiments on Speech Intelligibility." In Interspeech 2021. ISCA: ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-174.

8

Vale, Samyr. "Towards model driven crowdsourcing: First experiments, methodology and transformation." In 2014 IEEE International Conference on Information Reuse and Integration (IRI). IEEE, 2014. http://dx.doi.org/10.1109/iri.2014.7051892.

9

Abdul-Rahman, Alfie, Karl J. Proctor, Brian Duffy, and Min Chen. "Repeated measures design in crowdsourcing-based experiments for visualization." In the Fifth Workshop. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2669557.2669561.

10

Ko, Ching Yun, Rui Lin, Shu Li, and Ngai Wong. "MiSC: Mixed Strategies Crowdsourcing." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/193.

Abstract:
Popular crowdsourcing techniques mostly focus on evaluating workers' labeling quality before adjusting their weights during label aggregation. Recently, another cohort of models regard crowdsourced annotations as incomplete tensors and recover unfilled labels by tensor completion. However, mixed strategies of the two methodologies have never been comprehensively investigated, leaving them as rather independent approaches. In this work, we propose MiSC (Mixed Strategies Crowdsourcing), a versatile framework integrating arbitrary conventional crowdsourcing and tensor completion techniques. In particular, we propose a novel iterative Tucker label aggregation algorithm that outperforms state-of-the-art methods in extensive experiments.
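
The two ingredients being mixed can be pictured with a toy baseline (a simplified sketch, not the paper's iterative Tucker algorithm): conventional aggregation reduces each item's observed labels to a consensus, while the completion view treats the worker-by-item label matrix as partially observed and fills the gaps before aggregating.

```python
import numpy as np

# Toy worker-by-item matrix of binary labels; NaN marks unlabeled cells.
labels = np.array([
    [1,      0, np.nan, 1],
    [1, np.nan,      1, 0],
    [np.nan, 0,      1, 1],
], dtype=float)

# Conventional aggregation: per-item majority vote over observed labels.
majority = np.round(np.nanmean(labels, axis=0)).astype(int)
print("majority vote:", majority)

# Completion view: impute the missing cells first (here with the crude
# per-item mean), then aggregate the now-complete matrix.
item_means = np.nanmean(labels, axis=0)
completed = np.where(np.isnan(labels), item_means, labels)
print("aggregate after imputation:", np.round(completed.mean(axis=0)).astype(int))
```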

Organizational reports on the topic "Crowdsourcing experiments"

1

Gastelum, Zoe Nellie, Kari Sentz, Meili Claire Swanson, and Cristina Rinaudo. FY2017 Final Report: Power of the People: A technical, ethical, and experimental examination of the use of crowdsourcing to support international nuclear safeguards verification. Office of Scientific and Technical Information (OSTI), October 2017. http://dx.doi.org/10.2172/1408389.

