Academic literature on the topic 'Crowdsourced experiment'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Crowdsourced experiment.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Crowdsourced experiment"

1

Usuba, Hiroki, Shota Yamanaka, Junichi Sato, and Homei Miyashita. "Predicting touch accuracy for rectangular targets by using one-dimensional task results." Proceedings of the ACM on Human-Computer Interaction 6, ISS (November 14, 2022): 525–37. http://dx.doi.org/10.1145/3567732.

Full text
Abstract:
We propose a method that predicts the success rate in pointing to 2D rectangular targets by using 1D vertical-bar and horizontal-bar task results. The method can predict the success rates for more practical situations under fewer experimental conditions. This shortens the duration of experiments, thus saving costs for researchers and practitioners. We verified the method through two experiments: laboratory-based and crowdsourced ones. In the laboratory-based experiment, we found that using 1D task results to predict the success rate for 2D targets slightly decreases the prediction accuracy. In the crowdsourced experiment, this method scored better than using 2D task results. Thus, we recommend that researchers use the method properly depending on the situation.
APA, Harvard, Vancouver, ISO, and other styles
2

Shiraishi, Yuhki, Jianwei Zhang, Daisuke Wakatsuki, Katsumi Kumai, and Atsuyuki Morishima. "Crowdsourced real-time captioning of sign language by deaf and hard-of-hearing people." International Journal of Pervasive Computing and Communications 13, no. 1 (April 3, 2017): 2–25. http://dx.doi.org/10.1108/ijpcc-02-2017-0014.

Full text
Abstract:
Purpose – The purpose of this paper is to explore how to achieve crowdsourced real-time captioning of sign language by deaf and hard-of-hearing (DHH) people: how a system structure should be designed, how a continuous sign language captioning task should be divided into microtasks, and how many DHH people are required to maintain high-quality real-time captioning.
Design/methodology/approach – The authors first propose a system structure, including a new design of worker roles, task division, and task assignment. Then, based on an implemented prototype, they analyze the settings necessary for crowdsourced real-time captioning of sign language, test the feasibility of the proposed system, and explore its robustness and improvability through four experiments.
Findings – Experiment 1 revealed the optimal method for task division, the minimum number of groups, and the minimum number of workers per group. Experiment 2 verified the feasibility of crowdsourced real-time captioning of sign language by DHH people. Experiments 3 and 4 showed the robustness and improvability of the captioning system.
Originality/value – Although some crowdsourcing-based systems have been developed for captioning voice to text, the authors address the captioning of sign language to text, for which existing approaches do not work well due to the unique properties of sign language. Moreover, DHH people are generally considered to be recipients of support, but this proposal helps them become providers of support to others.
3

Zhang, Xuanhui, Si Chen, Yuxiang Chris Zhao, Shijie Song, and Qinghua Zhu. "The influences of social value orientation and domain knowledge on crowdsourcing manuscript transcription." Aslib Journal of Information Management 72, no. 2 (December 24, 2019): 219–42. http://dx.doi.org/10.1108/ajim-08-2019-0221.

Full text
Abstract:
Purpose – The purpose of this paper is to explore how social value orientation and domain knowledge affect cooperation levels and transcription quality in crowdsourced manuscript transcription, and to inform the recruitment of participants in such projects in practice.
Design/methodology/approach – The authors conducted a quasi-experiment using Transcribe-Sheng, a well-known crowdsourced manuscript transcription project in China, to investigate the influences of social value orientation and domain knowledge. The experiment lasted one month and involved 60 participants. ANOVA was used to test the research hypotheses. Moreover, interviews and thematic analyses were conducted on the qualitative data to provide additional insights.
Findings – The analysis confirmed that in crowdsourced manuscript transcription, social value orientation has a significant effect on participants' cooperation level and transcription quality; domain knowledge has a significant effect on participants' transcription quality, but not on their cooperation level. The results also reveal an interactive effect of social value orientation and domain knowledge on cooperation levels and transcription quality. The analysis of the qualitative data illustrated these influences in detail.
Originality/value – Researchers have paid little attention to the impacts of psychological and cognitive factors on crowdsourced manuscript transcription. This study investigated the effect of social value orientation, and the combined effect of social value orientation and domain knowledge, in this context. The findings shed light on crowdsourced transcription initiatives in the cultural heritage domain and can be used to facilitate participant selection in such projects.
4

Alyahya, Sultan. "Collaborative Crowdsourced Software Testing." Electronics 11, no. 20 (October 17, 2022): 3340. http://dx.doi.org/10.3390/electronics11203340.

Full text
Abstract:
Crowdsourced software testing (CST) uses a crowd of testers to conduct software testing. Currently, CST uses a microtasking model in which a testing task is sent to individual testers who work separately from each other. Several studies have noted that the quality of test reports produced by individuals is a drawback, because a large number of invalid defects are submitted; individual workers also tend to catch simple defects rather than those with high complexity. This research explored the effect of having pairs of collaborating testers work together to produce one final test report. We conducted an experiment with 75 workers to measure the effect of this approach in terms of (1) the total number of unique valid defects detected, (2) the total number of invalid defects reported, and (3) the possibility of detecting more difficult defects. The findings show that testers working in collaborating pairs can be as effective in detecting defects as individual workers; the differences between them are marginal. However, collaboration significantly improves the quality of the submitted test reports in two dimensions: it helps reduce the number of invalid defects and helps detect more difficult defects. These promising findings suggest that CST platforms can benefit from new mechanisms that allow the formation of two-person teams to take on testing jobs.
5

Miller, John, Yu (Marco) Nie, and Amanda Stathopoulos. "Crowdsourced Urban Package Delivery." Transportation Research Record: Journal of the Transportation Research Board 2610, no. 1 (January 2017): 67–75. http://dx.doi.org/10.3141/2610-08.

Full text
Abstract:
Crowdsourced shipping presents an innovative shipping alternative that is expected to improve shipping efficiency, increase service, and decrease cost to the customer, and such shipping promises to enhance the sustainability of the transportation system. This study collected data on behavioral responses to choose from available crowdsourced shipping jobs. The goal of the study was to measure the potential willingness of individuals to change status from pure commuters to traveler–shippers. In particular, the study quantified potential crowdsourced shippers’ value of free time, or willingness to work (WTW) in a hypothetical scenario in which crowdsourced shipping jobs were available in a variety of settings. This WTW calculation is unique compared with the traditional willingness to pay (WTP) in that it measured the trade-off of making a profit and giving up time instead of spending money to save time. This work provides a foundation to analyze the application and effectiveness of crowdsourced shipping by exploring the WTW propensity of ordinary travelers. The analysis was based on a newly developed stated preference survey and analyzed choice across three potential shipping jobs and the option to choose none of the three (i.e., the status quo). Results showed that the experiment was successful in recovering reasonable WTW values that are higher than the normal WTP metrics. The results also identified many significant sociodemographic variables that could help crowdsourced shipping companies better target potential part-time drivers.
6

Luther, Kurt, Nathan Hahn, Steven Dow, and Aniket Kittur. "Crowdlines: Supporting Synthesis of Diverse Information Sources through Crowdsourced Outlines." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 3 (September 23, 2015): 110–19. http://dx.doi.org/10.1609/hcomp.v3i1.13239.

Full text
Abstract:
Learning about a new area of knowledge is challenging for novices partly because they are not yet aware of which topics are most important. The Internet contains a wealth of information for learning the underlying structure of a domain, but relevant sources often have diverse structures and emphases, making it hard to discern what is widely considered essential knowledge vs. what is idiosyncratic. Crowdsourcing offers a potential solution because humans are skilled at evaluating high-level structure, but most crowd micro-tasks provide limited context and time. To address these challenges, we present Crowdlines, a system that uses crowdsourcing to help people synthesize diverse online information. Crowdworkers make connections across sources to produce a rich outline that surfaces diverse perspectives within important topics. We evaluate Crowdlines with two experiments. The first experiment shows that a high context, low structure interface helps crowdworkers perform faster, higher quality synthesis, while the second experiment shows that a tournament-style (parallelized) crowd workflow produces faster, higher quality, more diverse outlines than a linear (serial/iterative) workflow.
7

Lee, Tak Yeon, Casey Dugan, Werner Geyer, Tristan Ratchford, Jamie Rasmussen, N. Sadat Shami, and Stela Lupushor. "Experiments on Motivational Feedback for Crowdsourced Workers." Proceedings of the International AAAI Conference on Web and Social Media 7, no. 1 (August 3, 2021): 341–50. http://dx.doi.org/10.1609/icwsm.v7i1.14428.

Full text
Abstract:
This paper examines the relationship between motivational design and its longitudinal effects on crowdsourcing systems. In the context of a company internal web site that crowdsources the identification of Twitter accounts owned by company employees, we designed and investigated the effects of various motivational features including individual / social achievements and gamification. Our 6-month experiment with 437 users allowed us to compare the features in terms of both quantity and quality of the work produced by participants over time. While we found that gamification can increase workers’ motivation overall, the combination of motivational features also matters. Specifically, gamified social achievement is the best performing design over a longer period of time. Mixing individual and social achievements turns out to be less effective and can even encourage users to game the system.
8

Andersen, David J., and Richard R. Lau. "Pay Rates and Subject Performance in Social Science Experiments Using Crowdsourced Online Samples." Journal of Experimental Political Science 5, no. 3 (2018): 217–29. http://dx.doi.org/10.1017/xps.2018.7.

Full text
Abstract:
Mechanical Turk has become an important source of subjects for social science experiments, providing a low-cost alternative to the convenience of using undergraduates while avoiding the expense of drawing fully representative samples. However, we know little about how the rates we pay to "Turkers" for participating in social science experiments affect their participation. This study examines subject performance using two experiments – a short survey experiment and a longer dynamic process tracing study of political campaigns – that recruited Turkers at different rates of pay. Looking at demographics and using measures of attention, engagement, and evaluation of the candidates, we find no effects of pay rates on subject recruitment or participation. We conclude by discussing the implications and ethical standards of pay.
9

Bechtel, Benjamin, Matthias Demuzere, Panagiotis Sismanidis, Daniel Fenner, Oscar Brousse, Christoph Beck, Frieke Van Coillie, et al. "Quality of Crowdsourced Data on Urban Morphology—The Human Influence Experiment (HUMINEX)." Urban Science 1, no. 2 (May 9, 2017): 15. http://dx.doi.org/10.3390/urbansci1020015.

Full text
10

Ikeda, Kazushi, and Keiichiro Hoashi. "Utilizing Crowdsourced Asynchronous Chat for Efficient Collection of Dialogue Dataset." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 6 (June 15, 2018): 60–69. http://dx.doi.org/10.1609/hcomp.v6i1.13321.

Full text
Abstract:
In this paper, we design a crowd-powered system to efficiently collect data for training dialogue systems. Conventional systems assign dialogue roles to a pair of crowd workers, and record their interaction on an online chat. In this framework, the pair is required to work simultaneously, and one worker must wait for the other when he/she is writing a message, which decreases work efficiency. Our proposed system allows multiple workers to create dialogues in an asynchronous manner, which relieves workers from time restrictions. We have conducted an experiment using our system on a crowdsourcing platform to evaluate the efficiency and the quality of dialogue collection. Results show that our system can reduce the necessary time to input a message by 68% while maintaining quality.

Dissertations / Theses on the topic "Crowdsourced experiment"

1

Ichatha, Stephen K. "The Role of Empowerment in Crowdsourced Customer Service." 2013. http://scholarworks.gsu.edu/bus_admin_diss/18.

Full text
Abstract:
For decades, researchers have seen employee empowerment as the means to achieving a more committed workforce that would deliver better outcomes. The prior conceptual and descriptive research focused on structural empowerment, or workplace mechanisms for generating empowerment, and psychological empowerment, the felt empowerment. Responding to calls for intervention studies, this research experimentally tests the effects of structural empowerment changes, through different degrees of decision-making authority and access to customer-relationship information, on psychological empowerment and subsequent work-related outcomes. Using a virtual contact center simulation, crowdsourced workers responded to customer requests. Greater decision authority and access to customer-relationship information resulted in higher levels of psychological empowerment which in turn resulted in task satisfaction and task attractiveness outcomes among the crowdsourced customer service workers.

Book chapters on the topic "Crowdsourced experiment"

1

Yamanaka, Shota. "Comparing Performance Models for Bivariate Pointing Through a Crowdsourced Experiment." In Human-Computer Interaction – INTERACT 2021, 76–92. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85616-8_6.

Full text
2

Reyero Aldama, Gonzalo, and Federico Cabitza. "Experiments for a Real Time Crowdsourced Urban Design." In Lecture Notes in Computer Science, 56–63. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15168-7_8.

Full text
3

Abou Chahine, Ramzi, Dongjae Kwon, Chungman Lim, Gunhyuk Park, and Hasti Seifi. "Vibrotactile Similarity Perception in Crowdsourced and Lab Studies." In Haptics: Science, Technology, Applications, 255–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06249-0_29.

Full text
Abstract:
Crowdsourcing can enable rapid data collection for haptics research, yet little is known about its validity in comparison to controlled lab experiments. Furthermore, no data exists on how different smartphone platforms impact crowdsourcing results. To answer these questions, we conducted four vibrotactile (VT) similarity perception studies on iOS and Android smartphones in the lab and through Amazon Mechanical Turk (MTurk). Participants rated the pairwise similarities of 14 rhythmic VT patterns on their smartphones or a lab device. The similarity ratings from the lab and MTurk experiments suggested a very strong correlation for iOS devices (r_s = 0.9) and a lower but still strong correlation for Android phones (r_s = 0.68). In addition, we found a stronger correlation between the crowdsourced iOS and Android ratings (r_s = 0.78) compared to the correlation between the iOS and Android data in the lab (r_s = 0.65). We provide further insights into these correlations using the perceptual spaces obtained from the four datasets. Our results provide preliminary evidence for the validity of crowdsourced VT similarity studies, especially on iOS devices.
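The rank correlations quoted in this abstract (r_s) can be reproduced without a statistics library. Below is an illustrative sketch with invented rating lists, not the chapter's data:

```python
# Illustrative sketch only: Spearman rank correlation (r_s), the statistic
# used above to compare lab vs. crowdsourced similarity ratings.
# The rating lists below are made up, not the chapter's data.

def ranks(xs):
    """Rank values from 1..n, averaging the ranks of tied values."""
    sorted_xs = sorted(xs)
    return [sorted_xs.index(x) + (sorted_xs.count(x) + 1) / 2 for x in xs]

def spearman(x, y):
    """Spearman's r_s: the Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

lab_ratings = [1.0, 2.0, 3.0, 4.0, 5.0]    # hypothetical lab similarities
crowd_ratings = [1.1, 2.3, 2.9, 4.4, 4.8]  # hypothetical MTurk similarities
print(round(spearman(lab_ratings, crowd_ratings), 2))  # monotone data -> 1.0
```

Because Spearman's coefficient depends only on rank order, it is robust to the per-device response-scale differences one would expect between lab hardware and participants' own phones.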
4

Alexandrou, Panagiotis, Constantinos Marios Angelopoulos, Orestis Evangelatos, João Fernandes, Gabriel Filios, Marios Karagiannis, Nikolaos Loumis, et al. "A Service Based Architecture for Multidisciplinary IoT Experiments with Crowdsourced Resources." In Ad-hoc, Mobile, and Wireless Networks, 187–201. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-40509-4_13.

Full text
5

Pan, Bing, Virinchi Savanapelli, Abhishek Shukla, and Junjun Yin. "Monitoring Human-Wildlife Interactions in National Parks with Crowdsourced Data and Deep Learning." In Information and Communication Technologies in Tourism 2022, 492–97. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-94751-4_46.

Full text
Abstract:
This short paper summarizes the first research stage of applying deep learning techniques to capture human-wildlife interactions in national parks from crowdsourced data. The results from object detection, image captioning, and distance calculation are reported. We were able to categorize animal types, summarize visitor behaviors in the pictures, and calculate distances between visitors and animals with different levels of accuracy. Future development will focus on obtaining more training data and on field experiments to collect ground truth on animal types and distances to animals. More in-depth manual coding is needed to categorize visitor behavior into acceptable and unacceptable actions.
6

Silva, Catarina, Ana Madeira, Alberto Cardoso, and Bernardete Ribeiro. "Crowdsourcing Holistic Deep Approach for Fire Identification." In Advances in Forest Fire Research 2022, 130–35. Imprensa da Universidade de Coimbra, 2022. http://dx.doi.org/10.14195/978-989-26-2298-9_20.

Full text
Abstract:
Forest fires have become a global problem affecting large areas of the globe. Citizen science has begun to contribute to firefighting, namely through citizens reporting fire sightings with location and images, often from smartphones; such reports are becoming a frequent source of information for firefighting teams. Nevertheless, these contributions need validation before resources are deployed, and that validation is the focus of this work. This paper describes a novel approach to identifying forest fires in real crowdsourced images using deep learning (DL). The approach is based on YOLO networks, which are used to train optimized models that identify the real cases that need to be addressed, using a holistic methodology that considers all objects detected in each image before producing a decision. For our experiments, we used benchmark and real datasets of fire and smoke, and compared performance under different experimental setups. Our results show that the proposed crowdsourcing holistic deep approach for fire identification can be successfully used in real scenarios.
7

Baker, Joseph O., Jonathan P. Hill, and Nathaniel D. Porter. "Assessing Measures of Religion and Secularity with Crowdsourced Data from Amazon’s Mechanical Turk." In Faithful Measures. NYU Press, 2017. http://dx.doi.org/10.18574/nyu/9781479875214.003.0005.

Full text
Abstract:
This chapter examines Amazon’s Mechanical Turk (MTurk), a crowdsourcing pool of potential participants for completing online tasks, as a resource for gathering novel data on religiosity and secularity (or other topics). MTurk provides an easily accessible, cost-efficient option for piloting new measures and conducting split-ballot experiments to assess measurement effects. The authors use MTurk data to evaluate measures of religious identity, demonstrating how question format can influence the percentage of respondents classified as religiously affiliated. They also use new measures to provide descriptive and analytical information on the rationales individuals give for being either religious or secular across different religious traditions and types of secularity. They conclude by outlining the opportunities and limitations of crowdsourcing data for exploring issues of measurement—as well as substantive areas of inquiry—in religion and beyond.

Conference papers on the topic "Crowdsourced experiment"

1

Zhang, Xiang, Kaori Ikematsu, Kunihiro Kato, and Yuta Sugiura. "Evaluation of Grasp Posture Detection Method using Corneal Reflection Images through a Crowdsourced Experiment." In ISS '22: Conference on Interactive Surfaces and Spaces. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3532104.3571457.

Full text
2

Kokuryo, Daisuke, Yoshiaki Harada, Toshiya Kaihara, and Nobutada Fujii. "A Proposal of Resource Allocation Method Based on Combinatorial Double Auction Technique in Crowdsourced Manufacturing." In 2020 International Symposium on Flexible Automation. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/isfa2020-9638.

Full text
Abstract:
With the development of the IoT (Internet of Things), smart manufacturing systems with cloud services and computing techniques have gained worldwide attention. A crowdsourced manufacturing system is a production style that connects different companies and factories so they can share production resources. In such a system, it is important to distribute resources appropriately to increase productivity. In this paper, a resource allocation method based on a combinatorial double auction technique is proposed, and its characteristics are evaluated in a computational experiment.
3

Bazilinskyy, Pavlo, Dimitra Sodou, and Joost De Winter. "Crowdsourced Assessment of 227 Text-Based eHMIs for a Crossing Scenario." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002444.

Full text
Abstract:
Future automated vehicles may be equipped with external human-machine interfaces (eHMIs) capable of signaling whether pedestrians can cross the road. Industry and academia have proposed a variety of eHMIs featuring a text message. An eHMI message can refer to the action to be performed by the pedestrian (egocentric message) or the automated vehicle (allocentric message). Currently, there is no consensus on the correct phrasing of the text message. We created 227 eHMIs based on text-based eHMIs observed in the literature. A crowdsourcing experiment (N = 1438) was performed with images depicting an automated vehicle equipped with an eHMI on the front bumper. The participants indicated whether they would (not) cross the road, and response times were recorded. Egocentric messages were found to be more compelling for participants to (not) cross than allocentric messages. Furthermore, Spanish-speaking participants found Spanish eHMIs more compelling than English eHMIs. Finally, it was established that some eHMI texts should be avoided, as signified by low compellingness, long responses, and high inter-subject variability.
4

Elasmar, Rasmi. "Computer vision experiments in crowdsourced astronomy." In 2016 New York Scientific Data Summit (NYSDS). IEEE, 2016. http://dx.doi.org/10.1109/nysds.2016.7747814.

Full text
5

Huai, Mengdi, Di Wang, Chenglin Miao, Jinhui Xu, and Aidong Zhang. "Privacy-aware Synthesizing for Crowdsourced Data." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/353.

Full text
Abstract:
Although releasing crowdsourced data brings many benefits to data analyzers conducting statistical analysis, it may violate crowd users' data privacy. A potential way to address this problem is to employ traditional differential privacy (DP) mechanisms and perturb the data with some noise before releasing it. However, considering that crowdsourced data usually contain conflicts and are usually large in volume, directly using these mechanisms cannot guarantee good utility when releasing crowdsourced data. To address this challenge, in this paper we propose a novel privacy-aware synthesizing method (PrisCrowd) for crowdsourced data, with which the data collector can release users' data under strong protection of their private information while the data analyzer can still achieve good utility from the released data. Both theoretical analysis and extensive experiments on real-world datasets demonstrate the desired performance of the proposed method.
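The "traditional DP mechanism" that this abstract uses as its baseline is typically the Laplace mechanism. A hedged, generic sketch (not the paper's PrisCrowd method; function names and parameter values are illustrative):

```python
# Generic Laplace mechanism from differential privacy, sketched as a
# baseline for comparison. This is NOT the paper's PrisCrowd method.
import math
import random

def laplace_noise(scale, rng=random):
    """Draw one Laplace(0, scale) sample via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_release(value, sensitivity, epsilon, rng=random):
    """Perturb a query result with Laplace(sensitivity / epsilon) noise,
    which yields epsilon-differential privacy for that query."""
    return value + laplace_noise(sensitivity / epsilon, rng)

# Releasing a count (sensitivity 1) under a privacy budget of epsilon = 0.5:
rng = random.Random(0)
noisy_count = dp_release(42.0, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy_count)  # 42 plus Laplace noise with scale 2
```

Applying such per-value perturbation directly to large, conflicting crowdsourced datasets is exactly the utility problem the abstract says motivates a synthesis-based alternative.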
6

Madnani, Nitin, Martin Chodorow, Aoife Cahill, Melissa Lopez, Yoko Futagi, and Yigal Attali. "Preliminary Experiments on Crowdsourced Evaluation of Feedback Granularity." In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/w15-0619.

Full text
7

Stoven-Dubois, Alexis, Aziz Dziri, Bertrand Leroy, and Roland Chapuis. "Graph-based Approach for Crowdsourced Mapping: Evaluation through Field Experiments." In 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV). IEEE, 2020. http://dx.doi.org/10.1109/icarcv50220.2020.9305308.

Full text
9

Nakayama, Yu, Daisuke Hisano, Takayuki Nishio, and Kazuki Maruta. "Experimental Results on Crowdsourced Radio Units Mounted on Parked Vehicles." In 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall). IEEE, 2019. http://dx.doi.org/10.1109/vtcfall.2019.8891321.

Full text
10

Hovy, Dirk, Barbara Plank, and Anders Søgaard. "Experiments with crowdsourced re-annotation of a POS tagging data set." In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2014. http://dx.doi.org/10.3115/v1/p14-2062.

Full text
