Academic literature on the topic "Crowdsourcing experiments"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference papers, and other scholarly sources on the topic "Crowdsourcing experiments".
Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication in PDF format and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Crowdsourcing experiments"
Ramírez, Jorge, Burcu Sayin, Marcos Baez, Fabio Casati, Luca Cernuzzi, Boualem Benatallah, and Gianluca Demartini. "On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices". Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–34. http://dx.doi.org/10.1145/3479531.
Danilchuk, M. V. "The Potential of the Crowdsourcing as a Method of Linguistic Experiment". Bulletin of Kemerovo State University, no. 4 (December 23, 2018): 198–204. http://dx.doi.org/10.21603/2078-8975-2018-4-198-204.
Makiguchi, Motohiro, Daichi Namikawa, Satoshi Nakamura, Taiga Yoshida, Masanori Yokoyama, and Yuji Takano. "Proposal and Initial Study for Animal Crowdsourcing". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 2 (September 5, 2014): 40–41. http://dx.doi.org/10.1609/hcomp.v2i1.13185.
Lutz, Johannes. "The Validity of Crowdsourcing Data in Studying Anger and Aggressive Behavior". Social Psychology 47, no. 1 (January 2016): 38–51. http://dx.doi.org/10.1027/1864-9335/a000256.
Jiang, Ming, Zhiqi Shen, Shaojing Fan, and Qi Zhao. "SALICON: a web platform for crowdsourcing behavioral experiments". Journal of Vision 17, no. 10 (August 31, 2017): 704. http://dx.doi.org/10.1167/17.10.704.
Della Mea, Vincenzo, Eddy Maddalena, and Stefano Mizzaro. "Mobile crowdsourcing: four experiments on platforms and tasks". Distributed and Parallel Databases 33, no. 1 (October 16, 2014): 123–41. http://dx.doi.org/10.1007/s10619-014-7162-x.
Kandylas, Vasilis, Omar Alonso, Shiroy Choksey, Kedar Rudre, and Prashant Jaiswal. "Automating Crowdsourcing Tasks in an Industrial Environment". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 95–96. http://dx.doi.org/10.1609/hcomp.v1i1.13056.
Ku, Chih-Hao, and Maryam Firoozi. "The Use of Crowdsourcing and Social Media in Accounting Research". Journal of Information Systems 33, no. 1 (November 1, 2017): 85–111. http://dx.doi.org/10.2308/isys-51978.
Baba, Yukino, Hisashi Kashima, Kei Kinoshita, Goushi Yamaguchi, and Yosuke Akiyoshi. "Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 2 (October 6, 2021): 1487–92. http://dx.doi.org/10.1609/aaai.v27i2.18987.
Liu, Chong, and Yu-Xiang Wang. "Doubly Robust Crowdsourcing". Journal of Artificial Intelligence Research 73 (January 12, 2022): 209–29. http://dx.doi.org/10.1613/jair.1.13304.
Texto completoTesis sobre el tema "Crowdsourcing experiments"
Ramirez Medina, Jorge Daniel. "Strategies for addressing performance concerns and bias in designing, running, and reporting crowdsourcing experiments". Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/321908.
Eslick, Ian S. (Ian Scott). "Crowdsourcing health discoveries : from anecdotes to aggregated self-experiments". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/91433.
Texto completoCataloged from PDF version of thesis.
Includes bibliographical references (pages 305-315).
Nearly one quarter of US adults read patient-generated health information found on blogs, forums, and social media; many say they use this information to influence everyday health decisions. Topics of discussion in online forums are often poorly addressed by existing clinical research, so a patient's reported experiences are the only evidence. No rigorous methods exist to help patients leverage anecdotal evidence to make better decisions. This dissertation reports on multiple prototype systems that help patients augment anecdote with data to improve individual decision making, optimize healthcare delivery, and accelerate research. The web-based systems were developed through a multi-year collaboration with individuals, advocacy organizations, healthcare providers, and biomedical researchers. The result of this work is a new scientific model for crowdsourcing health insights: Aggregated Self-Experiments. The self-experiment, a type of single-subject (n-of-1) trial, formally validates the effectiveness of an intervention on a single person. Aggregated Self-Experiments enable user communities to translate anecdotal correlations into repeatable trials that can validate efficacy in the context of their daily lives. Aggregating the outcomes of multiple trials improves the efficiency of future trials and enables users to prioritize trials for a given condition. Successful outcomes from many patients provide evidence to motivate future clinical research. The model and the design principles that support it were evaluated through a set of focused user studies, secondary data analyses, and experience with real-world deployments.
McLeod, Ryan Nathaniel. "A Proof of Concept for Crowdsourcing Color Perception Experiments". DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1269.
Goucher-Lambert, Kosa Kendall. "Investigating Decision Making in Engineering Design Through Complementary Behavioral and Cognitive Neuroimaging Experiments". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/910.
Texto completoAndersson, David. "Diversifying Demining : An Experimental Crowdsourcing Method for Optical Mine Detection". Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15813.
Texto completoThis thesis explores the concepts of crowdsourcing and the ability of diversity, applied to optical mine detection. The idea is to use the human eye and wide and diverse workforce available on the Internet to detect mines, in addition to computer algorithms.
The theory of diversity in problem solving is discussed, especially the Diversity Trumps Ability Theorem and the Diversity Prediction Theorem, and how they should be carried out for possible applications such as contrast interpretation and area reduction respectively.
A simple contrast interpretation experiment is carried out comparing the results of a laymen crowd and one of experts, having the crowds examine extracts from hyperspectral images, classifying the amount of objects or mines and the type of terrain. Due to poor participation rate of the expert group, and an erroneous experiment introduction, the experiment does not yield any statistically significant results. Therefore, no conclusion is made.
Experiment improvements are proposed as well as possible future applications.
Ichatha, Stephen K. "The Role of Empowerment in Crowdsourced Customer Service". 2013. http://scholarworks.gsu.edu/bus_admin_diss/18.
Shergadwala, Murtuza. "Sequential Information Acquisition and Decision Making in Design Contests: Theoretical and Experimental Studies". Thesis, 2020.
The primary research question of this dissertation is: How do contestants make sequential design decisions under the influence of competition? To address this question, I study the influence of three factors, controllable by the contest organizers, on the contestants' sequential information acquisition and decision-making behaviors. These factors are (i) a contestant's domain knowledge, (ii) the framing of a design problem, and (iii) information about historical contests. The central hypothesis is that by conducting controlled behavioral experiments we can acquire data on contestant behaviors that can be used to calibrate computational models of contestants' sequential decision-making behaviors, thereby enabling predictions about design outcomes. The behavioral results suggest that (i) contestants better understand problem constraints and generate more feasible design solutions when a design problem is framed in a domain-specific context rather than a domain-independent one, (ii) contestants' efforts to acquire information about a design artifact in order to make design improvements are significantly affected by the information provided to them about their opponent, who is competing to achieve the same objectives, and (iii) contestants make information acquisition decisions, such as when to stop acquiring information, based on various criteria, such as the number of resources, the target objective value, and the observed amount of improvement in their design quality. Moreover, the threshold values of such criteria are influenced by the information the contestants have about their opponent.
The results imply that (i) by understanding the influence of an individual's domain knowledge and the framing of a problem, we can provide decision-support tools that help contestants in engineering design contexts better acquire problem-specific information, (ii) we can enable contest designers to decide what information to share to improve the quality of the design outcomes of design contests, and (iii) from an educational standpoint, we can enable instructors to provide students with accurate assessments of their domain knowledge by understanding students' information acquisition and decision-making behaviors in their design projects. The primary contribution of this dissertation is a set of computational models of an individual's sequential decision-making process that incorporate the behavioral results discussed above in competitive design scenarios. Moreover, a framework for conducting factorial investigations of human decision making through a combination of theory and behavioral experimentation is illustrated.
Books on the topic "Crowdsourcing experiments"
Archambault, Daniel, Helen Purchase, and Tobias Hoßfeld, eds. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4.
Archambault, Daniel, Helen Purchase, and Tobias Hoßfeld. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22 – 27, 2015, ... Springer, 2017.
Book chapters on the topic "Crowdsourcing experiments"
Egger-Lampl, Sebastian, Judith Redi, Tobias Hoßfeld, Matthias Hirth, Sebastian Möller, Babak Naderi, Christian Keimel, and Dietmar Saupe. "Crowdsourcing Quality of Experience Experiments". In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 154–90. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_7.
Hirth, Matthias, Jason Jacques, Peter Rodgers, Ognjen Scekic, and Michael Wybrow. "Crowdsourcing Technology to Support Academic Research". In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 70–95. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_4.
Borgo, Rita, Bongshin Lee, Benjamin Bach, Sara Fabrikant, Radu Jianu, Andreas Kerren, Stephen Kobourov, et al. "Crowdsourcing for Information Visualization: Promises and Pitfalls". In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 96–138. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_5.
Gadiraju, Ujwal, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault, and Brian Fisher. "Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd". In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 6–26. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_2.
Archambault, Daniel, Helen C. Purchase, and Tobias Hoßfeld. "Evaluation in the Crowd: An Introduction". In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 1–5. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_1.
Martin, David, Sheelagh Carpendale, Neha Gupta, Tobias Hoßfeld, Babak Naderi, Judith Redi, Ernestasia Siahaan, and Ina Wechsung. "Understanding the Crowd: Ethical and Practical Matters in the Academic Use of Crowdsourcing". In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 27–69. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_3.
Edwards, Darren J., Linda T. Kaastra, Brian Fisher, Remco Chang, and Min Chen. "Cognitive Information Theories of Psychology and Applications with Visualization and HCI Through Crowdsourcing Platforms". In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 139–53. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_6.
Gadiraju, Ujwal, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault, and Brian Fisher. "Erratum to: Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd". In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, E1. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_8.
Abou Chahine, Ramzi, Dongjae Kwon, Chungman Lim, Gunhyuk Park, and Hasti Seifi. "Vibrotactile Similarity Perception in Crowdsourced and Lab Studies". In Haptics: Science, Technology, Applications, 255–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06249-0_29.
Zallot, Camilla, Gabriele Paolacci, Jesse Chandler, and Itay Sisso. "Crowdsourcing in observational and experimental research". In Handbook of Computational Social Science, Volume 2, 140–57. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003025245-12.
Texto completoActas de conferencias sobre el tema "Crowdsourcing experiments"
Saffo, David, Caglar Yildirim, Sara Di Bartolomeo, and Cody Dunne. "Crowdsourcing Virtual Reality Experiments using VRChat". In CHI '20: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3334480.3382829.
Takoulidou, Eirini, and Konstantinos Chorianopoulos. "Crowdsourcing experiments with a video analytics system". In 2015 6th International Conference on Information, Intelligence, Systems and Applications (IISA). IEEE, 2015. http://dx.doi.org/10.1109/iisa.2015.7387979.
Choi, Jinhan, Changhoon Oh, Bongwon Suh, and Nam Wook Kim. "VisLab: Crowdsourcing Visualization Experiments in the Wild". In CHI '21: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3411763.3451826.
Aljohani, Asmaa, and James Jones. "Conducting Malicious Cybersecurity Experiments on Crowdsourcing Platforms". In BDE 2021: The 2021 3rd International Conference on Big Data Engineering. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3468920.3468942.
Thuan, Nguyen Hoang, Pedro Antunes, and David Johnstone. "Pilot experiments on a designed crowdsourcing decision tool". In 2016 IEEE 20th International Conference on Computer Supported Cooperative Work in Design (CSCWD). IEEE, 2016. http://dx.doi.org/10.1109/cscwd.2016.7566058.
Ramirez, Jorge, Marcos Baez, Fabio Casati, Luca Cernuzzi, and Boualem Benatallah. "Challenges and strategies for running controlled crowdsourcing experiments". In 2020 XLVI Latin American Computing Conference (CLEI). IEEE, 2020. http://dx.doi.org/10.1109/clei52000.2020.00036.
Yamamoto, Ayako, Toshio Irino, Kenichi Arai, Shoko Araki, Atsunori Ogawa, Keisuke Kinoshita, and Tomohiro Nakatani. "Comparison of Remote Experiments Using Crowdsourcing and Laboratory Experiments on Speech Intelligibility". In Interspeech 2021. ISCA: ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-174.
Vale, Samyr. "Towards model driven crowdsourcing: First experiments, methodology and transformation". In 2014 IEEE International Conference on Information Reuse and Integration (IRI). IEEE, 2014. http://dx.doi.org/10.1109/iri.2014.7051892.
Abdul-Rahman, Alfie, Karl J. Proctor, Brian Duffy, and Min Chen. "Repeated measures design in crowdsourcing-based experiments for visualization". In Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (BELIV). New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2669557.2669561.
Ko, Ching Yun, Rui Lin, Shu Li, and Ngai Wong. "MiSC: Mixed Strategies Crowdsourcing". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/193.
Texto completoInformes sobre el tema "Crowdsourcing experiments"
Gastelum, Zoe Nellie, Kari Sentz, Meili Claire Swanson, and Cristina Rinaudo. FY2017 Final Report: Power of the People: A Technical, Ethical, and Experimental Examination of the Use of Crowdsourcing to Support International Nuclear Safeguards Verification. Office of Scientific and Technical Information (OSTI), October 2017. http://dx.doi.org/10.2172/1408389.