Ready-made bibliography on the topic "Annotation via web"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Annotation via web".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read the work's abstract online, provided the relevant details are available in its metadata.
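The style conversion described above boils down to rendering one set of work metadata through different formatting templates. The sketch below is a hypothetical illustration, not the site's actual code, and real style guides carry many more rules (author initials, italics, "et al." thresholds, DOIs):

```python
# Hypothetical sketch: render one work's metadata in two citation styles.
def format_reference(meta, style):
    authors = meta["authors"]  # list of "Family, Given" strings
    if style == "APA":
        # Rough APA shape: Authors (Year). Title. Journal, Volume(Issue), Pages.
        return (f"{'; '.join(authors)} ({meta['year']}). {meta['title']}. "
                f"{meta['journal']}, {meta['volume']}({meta['issue']}), {meta['pages']}.")
    if style == "MLA":
        # Rough MLA shape: Authors. "Title." Journal, vol. V, no. N, Year, pp. Pages.
        return (f"{' and '.join(authors)}. \"{meta['title']}.\" {meta['journal']}, "
                f"vol. {meta['volume']}, no. {meta['issue']}, {meta['year']}, pp. {meta['pages']}.")
    raise ValueError(f"unsupported style: {style}")

work = {
    "authors": ["Wang, Han", "Wu, Xinxiao", "Jia, Yunde"],
    "title": "Video Annotation via Image Groups from the Web",
    "journal": "IEEE Transactions on Multimedia",
    "volume": 16, "issue": 5, "year": 2014, "pages": "1282-91",
}
print(format_reference(work, "APA"))
print(format_reference(work, "MLA"))
```

The same metadata record feeds every style, which is why one "Add to bibliography" click can emit APA, MLA, Harvard, Chicago, or Vancouver output.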
Journal articles on the topic "Annotation via web"
Islamaj, Rezarta, Dongseop Kwon, Sun Kim, and Zhiyong Lu. "TeamTat: a collaborative text annotation tool". Nucleic Acids Research 48, no. W1 (May 8, 2020): W5–W11. http://dx.doi.org/10.1093/nar/gkaa333.
Mazhoud, Omar, Anis Kalboussi, and Ahmed Hadj Kacem. "Educational Recommender System based on Learner’s Annotative Activity". International Journal of Emerging Technologies in Learning (iJET) 16, no. 10 (May 25, 2021): 108. http://dx.doi.org/10.3991/ijet.v16i10.19955.
Wang, Han, Xinxiao Wu, and Yunde Jia. "Video Annotation via Image Groups from the Web". IEEE Transactions on Multimedia 16, no. 5 (August 2014): 1282–91. http://dx.doi.org/10.1109/tmm.2014.2312251.
Ma, Zhigang, Feiping Nie, Yi Yang, Jasper R. R. Uijlings, and Nicu Sebe. "Web Image Annotation Via Subspace-Sparsity Collaborated Feature Selection". IEEE Transactions on Multimedia 14, no. 4 (August 2012): 1021–30. http://dx.doi.org/10.1109/tmm.2012.2187179.
Wei, Chih-Hsuan, Alexis Allot, Robert Leaman, and Zhiyong Lu. "PubTator central: automated concept annotation for biomedical full text articles". Nucleic Acids Research 47, no. W1 (May 22, 2019): W587–W593. http://dx.doi.org/10.1093/nar/gkz389.
Hu, Mengqiu, Yang Yang, Fumin Shen, Luming Zhang, Heng Tao Shen, and Xuelong Li. "Robust Web Image Annotation via Exploring Multi-Facet and Structural Knowledge". IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4871–84. http://dx.doi.org/10.1109/tip.2017.2717185.
Wang, Han, Xiabi Liu, Xinxiao Wu, and Yunde Jia. "Cross-domain structural model for video event annotation via web images". Multimedia Tools and Applications 74, no. 23 (July 30, 2014): 10439–56. http://dx.doi.org/10.1007/s11042-014-2175-z.
Lelong, Sebastien, Xinghua Zhou, Cyrus Afrasiabi, Zhongchao Qian, Marco Alvarado Cano, Ginger Tsueng, Jiwen Xin, et al. "BioThings SDK: a toolkit for building high-performance data APIs in biomedical research". Bioinformatics 38, no. 7 (January 10, 2022): 2077–79. http://dx.doi.org/10.1093/bioinformatics/btac017.
Park, Yeon-Ji, Min-a. Lee, Geun-Je Yang, Soo Jun Park, and Chae-Bong Sohn. "Biomedical Text NER Tagging Tool with Web Interface for Generating BERT-Based Fine-Tuning Dataset". Applied Sciences 12, no. 23 (November 24, 2022): 12012. http://dx.doi.org/10.3390/app122312012.
Barrett, Kristian, Cameron J. Hunt, Lene Lange, and Anne S. Meyer. "Conserved unique peptide patterns (CUPP) online platform: peptide-based functional annotation of carbohydrate active enzymes". Nucleic Acids Research 48, no. W1 (May 14, 2020): W110–W115. http://dx.doi.org/10.1093/nar/gkaa375.
Doctoral dissertations on the topic "Annotation via web"
Mitran, Mădălina. "Annotation d'images via leur contexte spatio-temporel et les métadonnées du Web". Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2399/.
The documents processed by Information Retrieval (IR) systems are typically indexed according to their contents: text or multimedia. Search engines based on these indexes aim to provide relevant answers to users' needs in the form of texts, images, sounds, videos, and so on. Our work is related to "image" documents. We are specifically interested in automatic image annotation systems, which automatically associate keywords to images; the keywords are subsequently used for search via textual queries. The automatic image annotation task intends to overcome the issues of manual and semi-automatic annotation, which are no longer feasible in today's context (i.e., the development of digital technologies and the advent of devices, such as smartphones, allowing anyone to take images at minimal cost). Among the different types of existing image collections (e.g., medical, satellite), our work focuses on landscape image collections, for which we identified the following challenges: What are the most discriminant features for this type of image? How should these features be modeled and merged? Which sources of information should be considered? How should scalability issues be managed? The proposed contribution is threefold. First, we use different factors that influence the description of landscape images: the spatial factor (i.e., latitude and longitude of images), the temporal factor (i.e., the time when the images were taken), and the thematic factor (i.e., tags crowdsourced and contributed to image-sharing platforms). We propose various techniques to model these factors based on tag frequency, as well as spatial and temporal similarities.
The choice of these factors is based on the following assumptions: a tag is all the more relevant for a query image when it is associated with images located in its close geographical area; a tag is all the more relevant for a query image when it is associated with images captured close in time to it; and a tag is all the more relevant when it is frequently contributed by the community (crowd-sourcing concept). Second, we introduce a new image annotation process that recommends the terms that best describe a given query image provided by a user. For each query image we rely on spatial, temporal, and spatio-temporal filters to identify similar images along with their tags. The different factors are then merged through a probabilistic model to boost the terms that best describe each query image. Third, the contributions above rely only on information extracted from photo-sharing platforms (i.e., subjective information). This raised the following research question: can information extracted from the Web provide objective terms useful for enriching the initial description of images? We tackle this question by introducing an approach relying on query expansion techniques developed in IR. As there is no standard evaluation protocol for the automatic image annotation task tailored to landscape images, we designed various evaluation protocols to validate our contributions. We first evaluated the approaches defined to model the spatial, temporal, and thematic factors. We then validated the image annotation process and showed that it yields significant improvement over two state-of-the-art baselines. Finally, we assessed the effectiveness of tag expansion through Web sources and showed its contribution to the image annotation process. These experiments are complemented by the image annotation prototype AnnoTaGT, which provides users with an operational framework for automatic image annotation.
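The spatio-temporal tag weighting described in this abstract can be sketched as follows. This is a hedged illustration of the idea only; the function names, decay weights, and bandwidth parameters (`sigma_km`, `sigma_days`) are my own, not the thesis's published model:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def tag_scores(query, tagged_images, sigma_km=5.0, sigma_days=30.0):
    # Score candidate tags for a query image: each tagged image contributes
    # its tags, weighted by spatial and temporal proximity to the query
    # (tag frequency emerges from summing contributions across images).
    scores = {}
    for img in tagged_images:
        d_km = haversine_km(query["lat"], query["lon"], img["lat"], img["lon"])
        d_days = abs(query["day"] - img["day"])
        w = math.exp(-d_km / sigma_km) * math.exp(-d_days / sigma_days)
        for tag in img["tags"]:
            scores[tag] = scores.get(tag, 0.0) + w
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: two images near the Eiffel Tower, one in New York.
query = {"lat": 48.8584, "lon": 2.2945, "day": 120}
images = [
    {"lat": 48.8580, "lon": 2.2950, "day": 118, "tags": ["eiffel-tower", "paris"]},
    {"lat": 48.8600, "lon": 2.2930, "day": 125, "tags": ["paris", "sunset"]},
    {"lat": 40.7128, "lon": -74.0060, "day": 300, "tags": ["new-york"]},
]
for tag, score in tag_scores(query, images):
    print(tag, round(score, 3))
```

Geographically and temporally distant images contribute almost nothing, so their tags sink to the bottom of the recommendation list, mirroring the abstract's first two assumptions.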
Bedoya, Ramos Daniel. "Capturing Musical Prosody Through Interactive Audio/Visual Annotations". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS698.
The proliferation of citizen science projects has advanced research and knowledge across disciplines in recent years. Citizen scientists contribute to research through volunteer thinking, often by engaging in cognitive tasks using mobile devices, web interfaces, or personal computers, with the added benefit of fostering learning, innovation, and inclusiveness. In music, crowdsourcing has been applied to gather various structural annotations. However, citizen science remains underutilized in musical expressiveness studies. To bridge this gap, we introduce a novel annotation protocol to capture musical prosody, which refers to the acoustic variations performers introduce to make music expressive. Our top-down, human-centered method prioritizes the listener's role in producing annotations of prosodic functions in music. This protocol provides a citizen science framework and experimental approach to carrying out systematic and scalable studies on the functions of musical prosody. We focus on the segmentation and prominence functions, which convey structure and affect. We implement this annotation protocol in CosmoNote, a web-based, interactive, and customizable software conceived to facilitate the annotation of expressive music structures. CosmoNote gives users access to visualization layers, including the audio waveform, the recorded notes, extracted audio attributes (loudness and tempo), and score features (harmonic tension and other markings). The annotation types comprise boundaries of varying strengths, regions, comments, and note groups. We conducted two studies aimed at improving the protocol and the platform. The first study examines the impact of co-occurring auditory and visual stimuli on segmentation boundaries. We compare differences in boundary distributions derived from cross-modal (auditory and visual) vs. unimodal (auditory or visual) information.
Distances between unimodal-visual and cross-modal distributions are smaller than between unimodal-auditory and cross-modal distributions. On the one hand, we show that adding visuals accentuates crucial information and provides cognitive scaffolding for accurately marking boundaries at the starts and ends of prosodic cues. However, they sometimes divert the annotator's attention away from specific structures. On the other hand, removing the audio impedes the annotation task by hiding subtle, relied-upon cues. Although visual cues may sometimes overemphasize or mislead, they are essential in guiding boundary annotations of recorded performances, often improving the aggregate results. The second study uses all CosmoNote's annotation types and analyzes how annotators, receiving either minimal or detailed protocol instructions, approach annotating musical prosody in a free-form exercise. We compare the quality of annotations between participants who are musically trained and those who are not. The citizen science component is evaluated in an ecological setting where participants are fully autonomous in a task where time, attention, and patience are valued. We present three methods based on common annotation labels, categories, and properties to analyze and aggregate the data. Results show convergence in annotation types and descriptions used to mark recurring musical elements across experimental conditions and musical abilities. We propose strategies for improving the protocol, data aggregation, and analysis in large-scale applications. This thesis contributes to representing and understanding performed musical structures by introducing an annotation protocol and platform, tailored experiments, and aggregation/analysis methods. The research shows the importance of balancing the collection of easier-to-analyze datasets and having richer content that captures complex musical thinking. 
Our protocol can be generalized to studies on performance decisions to improve the comprehension of expressive choices in musical performances.
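The first study's comparison of boundary distributions can be illustrated with a toy distance computation. The thesis does not specify its metric here, so the symmetric mean nearest-neighbour distance below is only an assumed stand-in, and the timestamps are invented sample data:

```python
def boundary_distance(a, b):
    # Symmetric mean nearest-neighbour distance (in seconds) between two
    # lists of boundary timestamps; a crude proxy for comparing annotation
    # distributions, not the thesis's actual metric.
    def mean_nn(xs, ys):
        return sum(min(abs(x - y) for y in ys) for x in xs) / len(xs)
    return (mean_nn(a, b) + mean_nn(b, a)) / 2

# Invented boundary timestamps for illustration only.
cross_modal     = [4.1, 12.0, 20.3]   # audio + visuals
unimodal_visual = [4.0, 11.8, 20.5]   # visuals only
unimodal_audio  = [5.5, 13.2, 19.0]   # audio only

print(boundary_distance(unimodal_visual, cross_modal))
print(boundary_distance(unimodal_audio, cross_modal))
```

With numbers shaped like the study's finding, the visual-only distribution lands closer to the cross-modal one than the audio-only distribution does.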
Wang, Sheng-Ren, and 王聖仁. "Web-based Summary Writing Learning Environment via the Model of Integrating Concept Mapping and Sharing Annotation". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/16309342212500880796.
國立臺北教育大學 (National Taipei University of Education)
教育傳播與科技研究所 (Graduate Institute of Educational Communication and Technology)
94
To address the difficulties learners face with summary writing in web-based learning environments, this study proposes a web-based summary writing model integrating concept mapping, annotation, and CSCL (computer-supported collaborative learning). Summary writing benefits learners' reading comprehension, recall, and recognition of a text's main idea, yet it remains difficult for some learners. Take the judgment of importance, for example: when reading a longer or more complicated text, many learners cannot determine what should be deleted and what should be kept in the summary. This study treats the difficulty as twofold: the judgment of importance and cognitive overload. It therefore constructs and evaluates a web-based summary writing learning environment integrating concept mapping and shared annotation, with concept mapping serving as a scaffold for grasping the main idea of the text. When learners run into problems with concept mapping in this environment, peer collaborative shared annotation is applied. Finally, learners use the completed concept map as a writing frame and proceed with summary writing. The concept mapping used in this study is a fault-detecting variant: it helps learners reduce cognitive load, recognize the main idea, and uncover main ideas they had missed. When learners annotate the text, they also help the other learners in the environment. In conclusion, this study offers one line of thinking and practice for computer-based summary writing.
Books on the topic "Annotation via web"
Anne of Green Gables: Englische Lektüre mit Audio-CD für das 1. und 2. Lernjahr. Illustrierte Lektüre mit Annotationen und Zusatztexten. Klett Sprachen GmbH, 2015.
Book chapters on the topic "Annotation via web"
Xu, Hongtao, Xiangdong Zhou, Lan Lin, Yu Xiang, and Baile Shi. "Automatic Web Image Annotation via Web-Scale Image Semantic Space Learning". In Advances in Data and Web Management, 211–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00672-2_20.
De Virgilio, Roberto, and Lorenzo Dolfi. "Web Navigation via Semantic Annotations". In Lecture Notes in Computer Science, 347–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33999-8_41.
Collmer, C. W., M. Lindeberg, and Alan Collmer. "Gene Ontology (GO) for Microbe–Host Interactions and Its Use in Ongoing Annotation of Three Pseudomonas syringae Genomes via the Pseudomonas–Plant Interaction (PPI) Web Site". In Pseudomonas syringae Pathovars and Related Pathogens – Identification, Epidemiology and Genomics, 221–28. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-6901-7_23.
Bensmann, Felix, Andrea Papenmeier, Dagmar Kern, Benjamin Zapilko, and Stefan Dietze. "Semantic Annotation, Representation and Linking of Survey Data". In Semantic Systems. In the Era of Knowledge Graphs, 53–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59833-4_4.
Koutavas, Vasileios, Yu-Yang Lin, and Nikos Tzevelekos. "From Bounded Checking to Verification of Equivalence via Symbolic Up-to Techniques". In Tools and Algorithms for the Construction and Analysis of Systems, 178–95. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99527-0_10.
Beyer, Dirk, and Martin Spiessl. "The Static Analyzer Frama-C in SV-COMP (Competition Contribution)". In Tools and Algorithms for the Construction and Analysis of Systems, 429–34. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99527-0_26.
Ciavotta, Michele, Vincenzo Cutrona, Flavio De Paoli, Nikolay Nikolov, Matteo Palmonari, and Dumitru Roman. "Supporting Semantic Data Enrichment at Scale". In Technologies and Applications for Big Data Value, 19–39. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-78307-5_2.
Wood, James, and Robert Atkey. "A Framework for Substructural Type Systems". In Programming Languages and Systems, 376–402. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99336-8_14.
Fiaidhi, Jinan, Sabah Mohammed, and Yuan Wei. "Implications of Web 2.0 Technology on Healthcare". In Healthcare and the Effect of Technology, 269–89. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-733-6.ch016.
Wong, Zoie S. Y., Yuchen Qiao, Ryohei Sasano, Hongkuan Zhang, Kenichiro Taneda, and Shin Ushiro. "Annotation Guidelines for Medication Errors in Incident Reports: Validation Through a Mixed Methods Approach". In MEDINFO 2021: One World, One Health – Global Partnership for Digital Innovation. IOS Press, 2022. http://dx.doi.org/10.3233/shti220095.
Conference abstracts on the topic "Annotation via web"
Braylan, Alexander, and Matthew Lease. "Modeling and Aggregation of Complex Annotations via Annotation Distances". In WWW '20: The Web Conference 2020. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3366423.3380250.
Yagnik, Jay, and Atiq Islam. "Learning people annotation from the web via consistency learning". In the international workshop. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1290082.1290121.
Gong, Xinyu, Yuefu Zhou, Yue Bi, Mingcheng He, Shiying Sheng, Han Qiu, Ruan He, and Jialiang Lu. "Estimating Web Attack Detection via Model Uncertainty from Inaccurate Annotation". In 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud) / 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). IEEE, 2019. http://dx.doi.org/10.1109/cscloud/edgecom.2019.00019.
Shen, Hualei, Dianfu Ma, Yongwang Zhao, and Rongwei Ye. "Collaborative annotation of medical images via web browser for teleradiology". In 2012 International Conference on Computerized Healthcare (ICCH). IEEE, 2012. http://dx.doi.org/10.1109/icch.2012.6724483.
Cao, Liangliang, Jie Yu, Jiebo Luo, and Thomas S. Huang. "Enhancing semantic and geographic annotation of web images via logistic canonical correlation regression". In the seventeen ACM international conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1631272.1631292.
Voloshina, Ekaterina, and Polina Leonova. "The Universal Database for Lexical Typology". In INTERNATIONAL CONFERENCE on Computational Linguistics and Intellectual Technologies. RSUH, 2023. http://dx.doi.org/10.28995/2075-7182-2023-22-1133-1140.
Jia, Jimin, Nenghai Yu, and Xian-Sheng Hua. "Annotating personal albums via web mining". In Proceeding of the 16th ACM international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1459359.1459421.
Pattanaik, Vishwajeet, Shweta Suran, and Dirk Draheim. "Enabling Social Information Exchange via Dynamically Robust Annotations". In iiWAS2019: The 21st International Conference on Information Integration and Web-based Applications & Services. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3366030.3366060.