A selection of scholarly literature on the topic "Crowdsourcing experiments"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Crowdsourcing experiments".
Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication in .pdf format and read its abstract online, if these details are available in the source's metadata.
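For illustration, the automatic reference formatting described above can be sketched in a few lines of code. Everything below (the field names, the two style functions, and the way they render an entry) is invented for this example under plausible assumptions; it is not the site's actual formatting engine:

```python
# Illustrative sketch only: the dict fields and both style functions are
# assumptions made for this example, not the site's real implementation.

def format_apa(ref: dict) -> str:
    """Render a journal reference in an APA-like style."""
    return (f'{ref["authors"]} ({ref["year"]}). {ref["title"]}. '
            f'{ref["journal"]}, {ref["volume"]}, {ref["pages"]}. {ref["doi"]}')

def format_mla(ref: dict) -> str:
    """Render the same reference in an MLA-like style."""
    return (f'{ref["authors"]}. "{ref["title"]}." {ref["journal"]}, '
            f'vol. {ref["volume"]}, {ref["year"]}, pp. {ref["pages"]}.')

# One entry from the journal-article list below, as structured metadata.
ref = {
    "authors": "Lutz, Johannes",
    "year": 2016,
    "title": "The Validity of Crowdsourcing Data in Studying Anger and Aggressive Behavior",
    "journal": "Social Psychology",
    "volume": "47(1)",
    "pages": "38-51",
    "doi": "http://dx.doi.org/10.1027/1864-9335/a000256",
}

print(format_apa(ref))
print(format_mla(ref))
```

The same metadata record is rendered once per style, which is what lets a single "Add to bibliography" click serve APA, MLA, Harvard, Chicago, or Vancouver output.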
Journal articles on the topic "Crowdsourcing experiments"
Ramírez, Jorge, Burcu Sayin, Marcos Baez, Fabio Casati, Luca Cernuzzi, Boualem Benatallah, and Gianluca Demartini. "On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–34. http://dx.doi.org/10.1145/3479531.
Danilchuk, M. V. "THE POTENTIAL OF THE CROWDSOURCING AS A METHOD OF LINGUISTIC EXPERIMENT." Bulletin of Kemerovo State University, no. 4 (December 23, 2018): 198–204. http://dx.doi.org/10.21603/2078-8975-2018-4-198-204.
Makiguchi, Motohiro, Daichi Namikawa, Satoshi Nakamura, Taiga Yoshida, Masanori Yokoyama, and Yuji Takano. "Proposal and Initial Study for Animal Crowdsourcing." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 2 (September 5, 2014): 40–41. http://dx.doi.org/10.1609/hcomp.v2i1.13185.
Lutz, Johannes. "The Validity of Crowdsourcing Data in Studying Anger and Aggressive Behavior." Social Psychology 47, no. 1 (January 2016): 38–51. http://dx.doi.org/10.1027/1864-9335/a000256.
Jiang, Ming, Zhiqi Shen, Shaojing Fan, and Qi Zhao. "SALICON: a web platform for crowdsourcing behavioral experiments." Journal of Vision 17, no. 10 (August 31, 2017): 704. http://dx.doi.org/10.1167/17.10.704.
Della Mea, Vincenzo, Eddy Maddalena, and Stefano Mizzaro. "Mobile crowdsourcing: four experiments on platforms and tasks." Distributed and Parallel Databases 33, no. 1 (October 16, 2014): 123–41. http://dx.doi.org/10.1007/s10619-014-7162-x.
Kandylas, Vasilis, Omar Alonso, Shiroy Choksey, Kedar Rudre, and Prashant Jaiswal. "Automating Crowdsourcing Tasks in an Industrial Environment." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 95–96. http://dx.doi.org/10.1609/hcomp.v1i1.13056.
Ku, Chih-Hao, and Maryam Firoozi. "The Use of Crowdsourcing and Social Media in Accounting Research." Journal of Information Systems 33, no. 1 (November 1, 2017): 85–111. http://dx.doi.org/10.2308/isys-51978.
Baba, Yukino, Hisashi Kashima, Kei Kinoshita, Goushi Yamaguchi, and Yosuke Akiyoshi. "Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 2 (October 6, 2021): 1487–92. http://dx.doi.org/10.1609/aaai.v27i2.18987.
Liu, Chong, and Yu-Xiang Wang. "Doubly Robust Crowdsourcing." Journal of Artificial Intelligence Research 73 (January 12, 2022): 209–29. http://dx.doi.org/10.1613/jair.1.13304.
Повний текст джерелаДисертації з теми "Crowdsourcing experiments"
Ramirez, Medina Jorge Daniel. "Strategies for addressing performance concerns and bias in designing, running, and reporting crowdsourcing experiments." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/321908.
Eslick, Ian S. (Ian Scott). "Crowdsourcing health discoveries : from anecdotes to aggregated self-experiments." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/91433.
Повний текст джерелаCataloged from PDF version of thesis.
Includes bibliographical references (pages 305-315).
Nearly one quarter of US adults read patient-generated health information found on blogs, forums, and social media; many say they use this information to influence everyday health decisions. Topics of discussion in online forums are often poorly addressed by existing clinical research, so a patient's reported experiences are the only evidence. No rigorous methods exist to help patients leverage anecdotal evidence to make better decisions. This dissertation reports on multiple prototype systems that help patients augment anecdote with data to improve individual decision making, optimize healthcare delivery, and accelerate research. The web-based systems were developed through a multi-year collaboration with individuals, advocacy organizations, healthcare providers, and biomedical researchers. The result of this work is a new scientific model for crowdsourcing health insights: Aggregated Self-Experiments. The self-experiment, a type of single-subject (n-of-1) trial, formally validates the effectiveness of an intervention on a single person. The Aggregated Self-Experiments model enables user communities to translate anecdotal correlations into repeatable trials that can validate efficacy in the context of their daily lives. Aggregating the outcomes of multiple trials improves the efficiency of future trials and enables users to prioritize trials for a given condition. Successful outcomes from many patients provide evidence to motivate future clinical research. The model and the design principles that support it were evaluated through a set of focused user studies, secondary data analyses, and experience with real-world deployments.
by Ian Scott Eslick.
Ph. D.
McLeod, Ryan Nathaniel. "A PROOF OF CONCEPT FOR CROWDSOURCING COLOR PERCEPTION EXPERIMENTS." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1269.
Goucher-Lambert, Kosa Kendall. "Investigating Decision Making in Engineering Design Through Complementary Behavioral and Cognitive Neuroimaging Experiments." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/910.
Andersson, David. "Diversifying Demining : An Experimental Crowdsourcing Method for Optical Mine Detection." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15813.
This thesis explores the concept of crowdsourcing and the power of diversity, applied to optical mine detection. The idea is to use the human eye and the wide and diverse workforce available on the Internet to detect mines, in addition to computer algorithms.
The theory of diversity in problem solving is discussed, in particular the Diversity Trumps Ability Theorem and the Diversity Prediction Theorem, and how they could be applied to tasks such as contrast interpretation and area reduction, respectively.
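For context, the Diversity Prediction Theorem mentioned here is an algebraic identity: the crowd's squared error equals the average individual squared error minus the diversity (variance) of the individual estimates. A minimal numeric check, using made-up estimates chosen only for illustration:

```python
# Numeric check of the Diversity Prediction Theorem:
#   (crowd error)^2 = average individual squared error - prediction diversity
# The true value and the individual estimates below are invented.

truth = 10.0
estimates = [8.0, 11.5, 9.0, 13.0, 10.5]

n = len(estimates)
crowd = sum(estimates) / n                                # crowd average: 10.4
crowd_err = (crowd - truth) ** 2                          # collective squared error
avg_err = sum((x - truth) ** 2 for x in estimates) / n    # mean individual squared error
diversity = sum((x - crowd) ** 2 for x in estimates) / n  # variance of the estimates

# The identity holds for any data set: here 0.16 == 3.3 - 3.14
assert abs(crowd_err - (avg_err - diversity)) < 1e-9
print(crowd_err, avg_err, diversity)
```

Because the identity holds for any set of estimates, a diverse crowd (large variance) is guaranteed to beat its average member, which is the intuition behind aggregating many lay judgments in the demining experiment.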
A simple contrast interpretation experiment was carried out comparing the results of a lay crowd with those of an expert crowd: both groups examined extracts from hyperspectral images, classifying the number of objects or mines and the type of terrain. Due to the poor participation rate of the expert group and an erroneous experiment introduction, the experiment did not yield any statistically significant results, so no conclusion is drawn.
Improvements to the experiment are proposed, as well as possible future applications.
Multi Optical Mine Detection System
Ichatha, Stephen K. "The Role of Empowerment in Crowdsourced Customer Service." 2013. http://scholarworks.gsu.edu/bus_admin_diss/18.
(9183527), Murtuza Shergadwala. "SEQUENTIAL INFORMATION ACQUISITION AND DECISION MAKING IN DESIGN CONTESTS: THEORETICAL AND EXPERIMENTAL STUDIES." Thesis, 2020.
The primary research question of this dissertation is: How do contestants make sequential design decisions under the influence of competition? To address this question, I study the influence of three factors, controlled by the contest organizers, on contestants' sequential information acquisition and decision-making behaviors. These factors are (i) a contestant's domain knowledge, (ii) the framing of a design problem, and (iii) information about historical contests. The central hypothesis is that by conducting controlled behavioral experiments we can acquire data on contestant behaviors that can be used to calibrate computational models of contestants' sequential decision-making behaviors, thereby enabling predictions about design outcomes. The behavioral results suggest that (i) contestants better understand problem constraints and generate more feasible design solutions when a design problem is framed in a domain-specific context rather than a domain-independent one, (ii) contestants' efforts to acquire information about a design artifact in order to make design improvements are significantly affected by the information provided to them about their opponent, who is competing to achieve the same objectives, and (iii) contestants make information acquisition decisions, such as when to stop acquiring information, based on criteria such as the number of resources, the target objective value, and the observed improvement in their design quality. Moreover, the threshold values of these criteria are influenced by the information the contestants have about their opponent.
The results imply that (i) by understanding the influence of an individual's domain knowledge and the framing of a problem, we can provide decision-support tools that help contestants in engineering design contexts acquire problem-specific information, (ii) we can enable contest designers to decide what information to share in order to improve the quality of design contest outcomes, and (iii) from an educational standpoint, we can enable instructors to give students accurate assessments of their domain knowledge by understanding students' information acquisition and decision-making behaviors in their design projects. The primary contribution of this dissertation is a set of computational models of an individual's sequential decision-making process that incorporate the behavioral results discussed above in competitive design scenarios. In addition, a framework for factorial investigation of human decision making through a combination of theory and behavioral experimentation is illustrated.
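The stopping criteria named in this abstract (a resource budget, a target objective value, and the observed improvement in design quality) can be illustrated with a toy sequential loop. The function, the threshold values, and the way a new design's quality is sampled are all invented for the sketch; this is not the dissertation's actual model:

```python
import random

def run_contestant(budget=20, target=0.95, min_improvement=0.01, seed=0):
    """Acquire design evaluations until one stopping criterion fires.

    The three criteria mirror those named in the abstract: (i) the
    evaluation budget runs out, (ii) the target objective value is
    reached, or (iii) the observed improvement falls below a threshold.
    How a candidate design's quality is sampled is purely illustrative.
    """
    rng = random.Random(seed)
    best = 0.0
    for step in range(1, budget + 1):
        candidate = rng.random()              # stand-in for trying a new design
        improvement = max(0.0, candidate - best)
        best = max(best, candidate)
        if best >= target:
            return step, best, "target reached"
        if step > 1 and improvement < min_improvement:
            return step, best, "diminishing improvement"
    return budget, best, "budget exhausted"

print(run_contestant())
```

In this framing, the abstract's finding that opponent information shifts the contestants' thresholds corresponds to varying `target` or `min_improvement` between runs.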
Books on the topic "Crowdsourcing experiments"
Archambault, Daniel, Helen Purchase, and Tobias Hoßfeld, eds. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4.
Archambault, Daniel, Helen Purchase, and Tobias Hoßfeld. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22 – 27, 2015, ... Springer, 2017.
Book chapters on the topic "Crowdsourcing experiments"
Egger-Lampl, Sebastian, Judith Redi, Tobias Hoßfeld, Matthias Hirth, Sebastian Möller, Babak Naderi, Christian Keimel, and Dietmar Saupe. "Crowdsourcing Quality of Experience Experiments." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 154–90. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_7.
Hirth, Matthias, Jason Jacques, Peter Rodgers, Ognjen Scekic, and Michael Wybrow. "Crowdsourcing Technology to Support Academic Research." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 70–95. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_4.
Borgo, Rita, Bongshin Lee, Benjamin Bach, Sara Fabrikant, Radu Jianu, Andreas Kerren, Stephen Kobourov, et al. "Crowdsourcing for Information Visualization: Promises and Pitfalls." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 96–138. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_5.
Gadiraju, Ujwal, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault, and Brian Fisher. "Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 6–26. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_2.
Archambault, Daniel, Helen C. Purchase, and Tobias Hoßfeld. "Evaluation in the Crowd: An Introduction." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 1–5. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_1.
Martin, David, Sheelagh Carpendale, Neha Gupta, Tobias Hoßfeld, Babak Naderi, Judith Redi, Ernestasia Siahaan, and Ina Wechsung. "Understanding the Crowd: Ethical and Practical Matters in the Academic Use of Crowdsourcing." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 27–69. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_3.
Edwards, Darren J., Linda T. Kaastra, Brian Fisher, Remco Chang, and Min Chen. "Cognitive Information Theories of Psychology and Applications with Visualization and HCI Through Crowdsourcing Platforms." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, 139–53. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_6.
Gadiraju, Ujwal, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault, and Brian Fisher. "Erratum to: Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd." In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, E1. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66435-4_8.
Abou Chahine, Ramzi, Dongjae Kwon, Chungman Lim, Gunhyuk Park, and Hasti Seifi. "Vibrotactile Similarity Perception in Crowdsourced and Lab Studies." In Haptics: Science, Technology, Applications, 255–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06249-0_29.
Zallot, Camilla, Gabriele Paolacci, Jesse Chandler, and Itay Sisso. "Crowdsourcing in observational and experimental research." In Handbook of Computational Social Science, Volume 2, 140–57. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003025245-12.
Повний текст джерелаТези доповідей конференцій з теми "Crowdsourcing experiments"
Saffo, David, Caglar Yildirim, Sara Di Bartolomeo, and Cody Dunne. "Crowdsourcing Virtual Reality Experiments using VRChat." In CHI '20: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3334480.3382829.
Takoulidou, Eirini, and Konstantinos Chorianopoulos. "Crowdsourcing experiments with a video analytics system." In 2015 6th International Conference on Information, Intelligence, Systems and Applications (IISA). IEEE, 2015. http://dx.doi.org/10.1109/iisa.2015.7387979.
Choi, Jinhan, Changhoon Oh, Bongwon Suh, and Nam Wook Kim. "VisLab: Crowdsourcing Visualization Experiments in the Wild." In CHI '21: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3411763.3451826.
Aljohani, Asmaa, and James Jones. "Conducting Malicious Cybersecurity Experiments on Crowdsourcing Platforms." In BDE 2021: The 2021 3rd International Conference on Big Data Engineering. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3468920.3468942.
Thuan, Nguyen Hoang, Pedro Antunes, and David Johnstone. "Pilot experiments on a designed crowdsourcing decision tool." In 2016 IEEE 20th International Conference on Computer Supported Cooperative Work in Design (CSCWD). IEEE, 2016. http://dx.doi.org/10.1109/cscwd.2016.7566058.
Ramirez, Jorge, Marcos Baez, Fabio Casati, Luca Cernuzzi, and Boualem Benatallah. "Challenges and strategies for running controlled crowdsourcing experiments." In 2020 XLVI Latin American Computing Conference (CLEI). IEEE, 2020. http://dx.doi.org/10.1109/clei52000.2020.00036.
Yamamoto, Ayako, Toshio Irino, Kenichi Arai, Shoko Araki, Atsunori Ogawa, Keisuke Kinoshita, and Tomohiro Nakatani. "Comparison of Remote Experiments Using Crowdsourcing and Laboratory Experiments on Speech Intelligibility." In Interspeech 2021. ISCA: ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-174.
Vale, Samyr. "Towards model driven crowdsourcing: First experiments, methodology and transformation." In 2014 IEEE International Conference on Information Reuse and Integration (IRI). IEEE, 2014. http://dx.doi.org/10.1109/iri.2014.7051892.
Abdul-Rahman, Alfie, Karl J. Proctor, Brian Duffy, and Min Chen. "Repeated measures design in crowdsourcing-based experiments for visualization." In the Fifth Workshop. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2669557.2669561.
Ko, Ching Yun, Rui Lin, Shu Li, and Ngai Wong. "MiSC: Mixed Strategies Crowdsourcing." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/193.
Повний текст джерелаЗвіти організацій з теми "Crowdsourcing experiments"
Gastelum, Zoe Nellie, Kari Sentz, Meili Claire Swanson, and Cristina Rinaudo. FY2017 Final Report: Power of the People: A technical, ethical, and experimental examination of the use of crowdsourcing to support international nuclear safeguards verification. Office of Scientific and Technical Information (OSTI), October 2017. http://dx.doi.org/10.2172/1408389.