Scientific literature on the topic "Crowdsourcing experiments"
Create an accurate reference in APA, MLA, Chicago, Harvard, and various other citation styles
Contents
Consult the topical lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Crowdsourcing experiments."
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Crowdsourcing experiments"
Ramírez, Jorge, Burcu Sayin, Marcos Baez, Fabio Casati, Luca Cernuzzi, Boualem Benatallah, and Gianluca Demartini. "On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–34. http://dx.doi.org/10.1145/3479531.
Danilchuk, M. V. "The Potential of the Crowdsourcing as a Method of Linguistic Experiment." Bulletin of Kemerovo State University, no. 4 (December 23, 2018): 198–204. http://dx.doi.org/10.21603/2078-8975-2018-4-198-204.
Makiguchi, Motohiro, Daichi Namikawa, Satoshi Nakamura, Taiga Yoshida, Masanori Yokoyama, and Yuji Takano. "Proposal and Initial Study for Animal Crowdsourcing." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 2 (September 5, 2014): 40–41. http://dx.doi.org/10.1609/hcomp.v2i1.13185.
Lutz, Johannes. "The Validity of Crowdsourcing Data in Studying Anger and Aggressive Behavior." Social Psychology 47, no. 1 (January 2016): 38–51. http://dx.doi.org/10.1027/1864-9335/a000256.
Jiang, Ming, Zhiqi Shen, Shaojing Fan, and Qi Zhao. "SALICON: A Web Platform for Crowdsourcing Behavioral Experiments." Journal of Vision 17, no. 10 (August 31, 2017): 704. http://dx.doi.org/10.1167/17.10.704.
Della Mea, Vincenzo, Eddy Maddalena, and Stefano Mizzaro. "Mobile Crowdsourcing: Four Experiments on Platforms and Tasks." Distributed and Parallel Databases 33, no. 1 (October 16, 2014): 123–41. http://dx.doi.org/10.1007/s10619-014-7162-x.
Kandylas, Vasilis, Omar Alonso, Shiroy Choksey, Kedar Rudre, and Prashant Jaiswal. "Automating Crowdsourcing Tasks in an Industrial Environment." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 95–96. http://dx.doi.org/10.1609/hcomp.v1i1.13056.
Ku, Chih-Hao, and Maryam Firoozi. "The Use of Crowdsourcing and Social Media in Accounting Research." Journal of Information Systems 33, no. 1 (November 1, 2017): 85–111. http://dx.doi.org/10.2308/isys-51978.
Baba, Yukino, Hisashi Kashima, Kei Kinoshita, Goushi Yamaguchi, and Yosuke Akiyoshi. "Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 2 (October 6, 2021): 1487–92. http://dx.doi.org/10.1609/aaai.v27i2.18987.
Liu, Chong, and Yu-Xiang Wang. "Doubly Robust Crowdsourcing." Journal of Artificial Intelligence Research 73 (January 12, 2022): 209–29. http://dx.doi.org/10.1613/jair.1.13304.
Texte intégralThèses sur le sujet "Crowdsourcing experiments"
Ramirez, Medina Jorge Daniel. « Strategies for addressing performance concerns and bias in designing, running, and reporting crowdsourcing experiment ». Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/321908.
Texte intégralEslick, Ian S. (Ian Scott). « Crowdsourcing health discoveries : from anecdotes to aggregated self-experiments ». Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/91433.
Texte intégralCataloged from PDF version of thesis.
Includes bibliographical references (pages 305-315).
Nearly one quarter of US adults read patient-generated health information found on blogs, forums and social media; many say they use this information to influence everyday health decisions. Topics of discussion in online forums are often poorly-addressed by existing, clinical research, so a patient's reported experiences are the only evidence. No rigorous methods exist to help patients leverage anecdotal evidence to make better decisions. This dissertation reports on multiple prototype systems that help patients augment anecdote with data to improve individual decision making, optimize healthcare delivery, and accelerate research. The web-based systems were developed through a multi-year collaboration with individuals, advocacy organizations, healthcare providers, and biomedical researchers. The result of this work is a new scientific model for crowdsourcing health insights: Aggregated Self-Experiments. The self-experiment, a type of single-subject (n-of-1) trial, formally validates the effectiveness of an intervention on a single person. Aggregated Personal Experiments enables user communities to translate anecdotal correlations into repeatable trials that can validate efficacy in the context of their daily lives. Aggregating the outcomes of multiple trials improves the efficiency of future trials and enables users to prioritize trials for a given condition. Successful outcomes from many patients provide evidence to motivate future clinical research. The model, and the design principles that support it were evaluated through a set of focused user studies, secondary data analyses, and experience with real-world deployments.
by Ian Scott Eslick.
Ph. D.
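The aggregation step described in this abstract lends itself to a small illustration. The following is a minimal sketch in Python, assuming each completed self-experiment reports a binary outcome and scoring interventions by the posterior mean of a beta-binomial model; the data, names, and scoring rule are invented here for illustration and are not the method implemented in the thesis.

# Sketch: pooling n-of-1 (self-experiment) outcomes to rank interventions.
# The Beta(1, 1) prior and the sample data are assumptions for illustration.
from collections import defaultdict

def rank_interventions(outcomes, prior_a=1.0, prior_b=1.0):
    """outcomes: list of (intervention_name, succeeded) pairs."""
    successes = defaultdict(int)
    trials = defaultdict(int)
    for name, succeeded in outcomes:
        successes[name] += int(succeeded)
        trials[name] += 1
    # Posterior mean success rate under a Beta(prior_a, prior_b) prior.
    scores = {name: (successes[name] + prior_a) / (trials[name] + prior_a + prior_b)
              for name in trials}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

outcomes = [("elimination diet", True), ("elimination diet", True),
            ("elimination diet", False), ("meditation", True)]
print(rank_interventions(outcomes))  # meditation (0.67) ranks above the diet (0.60)

Such a ranking also suggests why aggregation helps prioritize future trials: interventions with few observations sit near the prior and invite further self-experiments.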
McLeod, Ryan Nathaniel. "A Proof of Concept for Crowdsourcing Color Perception Experiments." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1269.
Goucher-Lambert, Kosa Kendall. "Investigating Decision Making in Engineering Design Through Complementary Behavioral and Cognitive Neuroimaging Experiments." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/910.
Andersson, David. "Diversifying Demining: An Experimental Crowdsourcing Method for Optical Mine Detection." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15813.
This thesis explores the concept of crowdsourcing and the power of diversity, applied to optical mine detection. The idea is to use the human eye and the wide, diverse workforce available on the Internet to detect mines, in addition to computer algorithms.
The theory of diversity in problem solving is discussed, in particular the Diversity Trumps Ability Theorem and the Diversity Prediction Theorem, and how they could be applied to tasks such as contrast interpretation and area reduction, respectively.
A simple contrast interpretation experiment is carried out comparing the results of a lay crowd and a crowd of experts: both groups examine extracts from hyperspectral images and classify the number of objects or mines and the type of terrain. Due to the poor participation rate of the expert group and an erroneous experiment introduction, the experiment does not yield any statistically significant results, and therefore no conclusion is drawn.
Improvements to the experiment are proposed, as well as possible future applications.
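The Diversity Prediction Theorem mentioned above is an exact algebraic identity: the squared error of the crowd's average equals the average squared error of the individuals minus the variance ("diversity") of their estimates. A minimal numerical check in Python, with invented estimates of, say, the number of mines in an area:

# Diversity Prediction Theorem:
# (crowd error)^2 = average individual squared error - prediction diversity.
# The estimates and the true value below are invented for illustration.
predictions = [12.0, 15.0, 9.0, 18.0]  # individual estimates
truth = 13.0                           # actual value

n = len(predictions)
crowd = sum(predictions) / n                                # crowd average: 13.5
crowd_error = (crowd - truth) ** 2                          # 0.25
avg_error = sum((p - truth) ** 2 for p in predictions) / n  # 11.5
diversity = sum((p - crowd) ** 2 for p in predictions) / n  # 11.25

assert abs(crowd_error - (avg_error - diversity)) < 1e-9
print(crowd_error, avg_error, diversity)

The identity holds for any numbers; its practical reading is that a crowd is exactly as accurate as its average member, improved by however much its members disagree.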
Ichatha, Stephen K. "The Role of Empowerment in Crowdsourced Customer Service." 2013. http://scholarworks.gsu.edu/bus_admin_diss/18.
Texte intégral(9183527), Murtuza Shergadwala. « SEQUENTIAL INFORMATION ACQUISITION AND DECISION MAKING IN DESIGN CONTESTS : THEORETICAL AND EXPERIMENTAL STUDIES ». Thesis, 2020.
The primary research question of this dissertation is: How do contestants make sequential design decisions under the influence of competition? To address this question, I study the influence of three factors, which can be controlled by the contest organizers, on contestants' sequential information acquisition and decision-making behaviors. These factors are (i) a contestant's domain knowledge, (ii) the framing of a design problem, and (iii) information about historical contests. The central hypothesis is that by conducting controlled behavioral experiments we can acquire data on contestant behaviors that can be used to calibrate computational models of contestants' sequential decision-making behaviors, thereby enabling predictions about the design outcomes. The behavioral results suggest that (i) contestants better understand problem constraints and generate more feasible design solutions when a design problem is framed in a domain-specific context rather than a domain-independent one, (ii) contestants' efforts to acquire information about a design artifact in order to make design improvements are significantly affected by the information provided to them about the opponent who is competing to achieve the same objectives, and (iii) contestants make information acquisition decisions, such as when to stop acquiring information, based on various criteria, such as the number of resources, the target objective value, and the observed amount of improvement in their design quality; moreover, the threshold values of these criteria are influenced by the information the contestants have about their opponent. The results imply that (i) by understanding the influence of an individual's domain knowledge and the framing of a problem, we can provide decision-support tools to contestants in engineering design contexts to better acquire problem-specific information, (ii) we can enable contest designers to decide what information to share to improve the quality of the design outcomes of design contests, and (iii) from an educational standpoint, we can enable instructors to provide students with accurate assessments of their domain knowledge by understanding students' information acquisition and decision-making behaviors in their design projects. The primary contribution of this dissertation is a set of computational models of an individual's sequential decision-making process that incorporate the behavioral results discussed above in competitive design scenarios. In addition, a framework for conducting factorial investigations of human decision making through a combination of theory and behavioral experimentation is illustrated.
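The stopping criteria listed in this abstract (resource budget, target objective value, observed improvement) can be made concrete with a simple sequential rule. The sketch below is a hypothetical illustration of the general idea, not the dissertation's calibrated model; the thresholds and the random evaluate function are invented.

import random

def sequential_search(evaluate, budget=20, target=0.95, min_improvement=0.01):
    """Acquire design evaluations until one stopping criterion fires:
    the budget is exhausted, the target quality is reached, or the
    latest improvement in the best-so-far quality is too small."""
    best = float("-inf")
    for step in range(1, budget + 1):
        quality = evaluate()
        improvement = max(0.0, quality - best)
        best = max(best, quality)
        if best >= target:
            return best, step, "target reached"
        if step > 1 and improvement < min_improvement:
            return best, step, "diminishing returns"
    return best, budget, "budget exhausted"

random.seed(0)
print(sequential_search(lambda: random.random()))

Per the abstract, opponent information shifts contestants' threshold values (here, target and min_improvement), which is why the sketch exposes them as parameters.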
Books on the topic "Crowdsourcing experiments"
Archambault, Daniel, Helen Purchase, and Tobias Hoßfeld, eds. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4.
Archambault, Daniel, Helen Purchase, and Tobias Hoßfeld. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22–27, 2015, ... Springer, 2017.
Book chapters on the topic "Crowdsourcing experiments"
Egger-Lampl, Sebastian, Judith Redi, Tobias Hoßfeld, Matthias Hirth, Sebastian Möller, Babak Naderi, Christian Keimel, and Dietmar Saupe. "Crowdsourcing Quality of Experience Experiments." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 154–90. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_7.
Hirth, Matthias, Jason Jacques, Peter Rodgers, Ognjen Scekic, and Michael Wybrow. "Crowdsourcing Technology to Support Academic Research." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 70–95. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_4.
Borgo, Rita, Bongshin Lee, Benjamin Bach, Sara Fabrikant, Radu Jianu, Andreas Kerren, Stephen Kobourov, et al. "Crowdsourcing for Information Visualization: Promises and Pitfalls." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 96–138. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_5.
Gadiraju, Ujwal, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault, and Brian Fisher. "Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 6–26. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_2.
Archambault, Daniel, Helen C. Purchase, and Tobias Hoßfeld. "Evaluation in the Crowd: An Introduction." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 1–5. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_1.
Martin, David, Sheelagh Carpendale, Neha Gupta, Tobias Hoßfeld, Babak Naderi, Judith Redi, Ernestasia Siahaan, and Ina Wechsung. "Understanding the Crowd: Ethical and Practical Matters in the Academic Use of Crowdsourcing." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 27–69. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_3.
Edwards, Darren J., Linda T. Kaastra, Brian Fisher, Remco Chang, and Min Chen. "Cognitive Information Theories of Psychology and Applications with Visualization and HCI Through Crowdsourcing Platforms." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 139–53. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_6.
Gadiraju, Ujwal, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault, and Brian Fisher. "Erratum to: Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, E1. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_8.
Abou Chahine, Ramzi, Dongjae Kwon, Chungman Lim, Gunhyuk Park, and Hasti Seifi. "Vibrotactile Similarity Perception in Crowdsourced and Lab Studies." In Haptics: Science, Technology, Applications, 255–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06249-0_29.
Zallot, Camilla, Gabriele Paolacci, Jesse Chandler, and Itay Sisso. "Crowdsourcing in Observational and Experimental Research." In Handbook of Computational Social Science, Volume 2, 140–57. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003025245-12.
Conference papers on the topic "Crowdsourcing experiments"
Saffo, David, Caglar Yildirim, Sara Di Bartolomeo, and Cody Dunne. "Crowdsourcing Virtual Reality Experiments Using VRChat." In CHI '20: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3334480.3382829.
Takoulidou, Eirini, and Konstantinos Chorianopoulos. "Crowdsourcing Experiments with a Video Analytics System." In 2015 6th International Conference on Information, Intelligence, Systems and Applications (IISA). IEEE, 2015. http://dx.doi.org/10.1109/iisa.2015.7387979.
Choi, Jinhan, Changhoon Oh, Bongwon Suh, and Nam Wook Kim. "VisLab: Crowdsourcing Visualization Experiments in the Wild." In CHI '21: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3411763.3451826.
Aljohani, Asmaa, and James Jones. "Conducting Malicious Cybersecurity Experiments on Crowdsourcing Platforms." In BDE 2021: The 2021 3rd International Conference on Big Data Engineering. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3468920.3468942.
Thuan, Nguyen Hoang, Pedro Antunes, and David Johnstone. "Pilot Experiments on a Designed Crowdsourcing Decision Tool." In 2016 IEEE 20th International Conference on Computer Supported Cooperative Work in Design (CSCWD). IEEE, 2016. http://dx.doi.org/10.1109/cscwd.2016.7566058.
Ramirez, Jorge, Marcos Baez, Fabio Casati, Luca Cernuzzi, and Boualem Benatallah. "Challenges and Strategies for Running Controlled Crowdsourcing Experiments." In 2020 XLVI Latin American Computing Conference (CLEI). IEEE, 2020. http://dx.doi.org/10.1109/clei52000.2020.00036.
Yamamoto, Ayako, Toshio Irino, Kenichi Arai, Shoko Araki, Atsunori Ogawa, Keisuke Kinoshita, and Tomohiro Nakatani. "Comparison of Remote Experiments Using Crowdsourcing and Laboratory Experiments on Speech Intelligibility." In Interspeech 2021. ISCA: ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-174.
Vale, Samyr. "Towards Model Driven Crowdsourcing: First Experiments, Methodology and Transformation." In 2014 IEEE International Conference on Information Reuse and Integration (IRI). IEEE, 2014. http://dx.doi.org/10.1109/iri.2014.7051892.
Abdul-Rahman, Alfie, Karl J. Proctor, Brian Duffy, and Min Chen. "Repeated Measures Design in Crowdsourcing-Based Experiments for Visualization." In the Fifth Workshop. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2669557.2669561.
Ko, Ching Yun, Rui Lin, Shu Li, and Ngai Wong. "MiSC: Mixed Strategies Crowdsourcing." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/193.
Organizational reports on the topic "Crowdsourcing experiments"
Gastelum, Zoe Nellie, Kari Sentz, Meili Claire Swanson, and Cristina Rinaudo. FY2017 Final Report: Power of the People: A Technical, Ethical, and Experimental Examination of the Use of Crowdsourcing to Support International Nuclear Safeguards Verification. Office of Scientific and Technical Information (OSTI), October 2017. http://dx.doi.org/10.2172/1408389.