Academic literature on the topic 'Annotation via web'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Annotation via web.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
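In other words, the 'Add to bibliography' button turns a stored metadata record into a style-specific reference string. The sketch below illustrates that step in Python using one of the entries listed further down (Wang, Wu, and Jia 2014); the field names and the two simplified style templates are assumptions made for illustration, not this site's actual implementation.

```python
# Toy citation formatter: illustrative only. The record fields and the
# simplified APA/MLA templates are assumptions, not the site's real code.

def format_apa(rec: dict) -> str:
    """Very rough APA-style journal citation."""
    authors = ", ".join(rec["authors"])
    return (f'{authors} ({rec["year"]}). {rec["title"]}. {rec["journal"]}, '
            f'{rec["volume"]}({rec["issue"]}), {rec["pages"]}. {rec["doi"]}')

def format_mla(rec: dict) -> str:
    """Very rough MLA-style journal citation."""
    authors = ", and ".join(rec["authors"])
    return (f'{authors}. "{rec["title"]}." {rec["journal"]}, vol. {rec["volume"]}, '
            f'no. {rec["issue"]}, {rec["year"]}, pp. {rec["pages"]}.')

record = {
    "authors": ["Wang, Han", "Wu, Xinxiao", "Jia, Yunde"],
    "year": 2014,
    "title": "Video Annotation via Image Groups from the Web",
    "journal": "IEEE Transactions on Multimedia",
    "volume": "16",
    "issue": "5",
    "pages": "1282-91",
    "doi": "http://dx.doi.org/10.1109/tmm.2014.2312251",
}

print(format_apa(record))
print(format_mla(record))
```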
Journal articles on the topic "Annotation via web"
Islamaj, Rezarta, Dongseop Kwon, Sun Kim, and Zhiyong Lu. "TeamTat: a collaborative text annotation tool." Nucleic Acids Research 48, W1 (May 8, 2020): W5–W11. http://dx.doi.org/10.1093/nar/gkaa333.
Mazhoud, Omar, Anis Kalboussi, and Ahmed Hadj Kacem. "Educational Recommender System based on Learner’s Annotative Activity." International Journal of Emerging Technologies in Learning (iJET) 16, no. 10 (May 25, 2021): 108. http://dx.doi.org/10.3991/ijet.v16i10.19955.
Wang, Han, Xinxiao Wu, and Yunde Jia. "Video Annotation via Image Groups from the Web." IEEE Transactions on Multimedia 16, no. 5 (August 2014): 1282–91. http://dx.doi.org/10.1109/tmm.2014.2312251.
Ma, Zhigang, Feiping Nie, Yi Yang, Jasper R. R. Uijlings, and Nicu Sebe. "Web Image Annotation Via Subspace-Sparsity Collaborated Feature Selection." IEEE Transactions on Multimedia 14, no. 4 (August 2012): 1021–30. http://dx.doi.org/10.1109/tmm.2012.2187179.
Wei, Chih-Hsuan, Alexis Allot, Robert Leaman, and Zhiyong Lu. "PubTator central: automated concept annotation for biomedical full text articles." Nucleic Acids Research 47, W1 (May 22, 2019): W587–W593. http://dx.doi.org/10.1093/nar/gkz389.
Hu, Mengqiu, Yang Yang, Fumin Shen, Luming Zhang, Heng Tao Shen, and Xuelong Li. "Robust Web Image Annotation via Exploring Multi-Facet and Structural Knowledge." IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4871–84. http://dx.doi.org/10.1109/tip.2017.2717185.
Wang, Han, Xiabi Liu, Xinxiao Wu, and Yunde Jia. "Cross-domain structural model for video event annotation via web images." Multimedia Tools and Applications 74, no. 23 (July 30, 2014): 10439–56. http://dx.doi.org/10.1007/s11042-014-2175-z.
Lelong, Sebastien, Xinghua Zhou, Cyrus Afrasiabi, Zhongchao Qian, Marco Alvarado Cano, Ginger Tsueng, Jiwen Xin, et al. "BioThings SDK: a toolkit for building high-performance data APIs in biomedical research." Bioinformatics 38, no. 7 (January 10, 2022): 2077–79. http://dx.doi.org/10.1093/bioinformatics/btac017.
Park, Yeon-Ji, Min-a. Lee, Geun-Je Yang, Soo Jun Park, and Chae-Bong Sohn. "Biomedical Text NER Tagging Tool with Web Interface for Generating BERT-Based Fine-Tuning Dataset." Applied Sciences 12, no. 23 (November 24, 2022): 12012. http://dx.doi.org/10.3390/app122312012.
Barrett, Kristian, Cameron J. Hunt, Lene Lange, and Anne S. Meyer. "Conserved unique peptide patterns (CUPP) online platform: peptide-based functional annotation of carbohydrate active enzymes." Nucleic Acids Research 48, W1 (May 14, 2020): W110–W115. http://dx.doi.org/10.1093/nar/gkaa375.
Dissertations / Theses on the topic "Annotation via web"
Mitran, Mădălina. "Annotation d'images via leur contexte spatio-temporel et les métadonnées du Web." Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2399/.
The documents processed by Information Retrieval (IR) systems are typically indexed according to their contents: text or multimedia. Search engines based on these indexes aim to provide relevant answers to users' needs in the form of texts, images, sounds, videos, and so on. Our work is related to "image" documents. We are specifically interested in automatic image annotation systems that automatically associate keywords to images. Keywords are subsequently used for search purposes via textual queries. The automatic image annotation task intends to overcome the issues of manual and semi-automatic annotation, which are no longer feasible in today's context (i.e., the development of digital technologies and the advent of devices, such as smartphones, allowing anyone to take images at minimal cost). Among the different types of existing image collections (e.g., medical, satellite), our work focuses on landscape image collections, for which we identified the following challenges: What are the most discriminant features for this type of image? How should these features be modeled and merged? Which sources of information should be considered? How can scalability issues be managed? The proposed contribution is threefold. First, we use different factors that influence the description of landscape images: the spatial factor (i.e., latitude and longitude of images), the temporal factor (i.e., the time when the images were taken), and the thematic factor (i.e., tags crowdsourced and contributed to image sharing platforms). We propose various techniques to model these factors based on tag frequency, as well as spatial and temporal similarities. The choice of these factors is based on the following assumptions: a tag is all the more relevant for a query-image as it is associated with images located in its close geographical area; a tag is all the more relevant for a query-image as it is associated with images captured close in time to it; and a third assumption relates to the tags contributed by users (crowdsourcing concept). Second, we introduce a new image annotation process that recommends the terms that best describe a given query-image provided by a user. For each query-image we rely on spatial, temporal, and spatio-temporal filters to identify similar images along with their tags. Then, the different factors are merged through a probabilistic model to boost the terms best describing each query-image. Third, the contributions presented above are only based on information extracted from image photo sharing platforms (i.e., subjective information). This raised the following research question: can the information extracted from the Web provide objective terms useful to enrich the initial description of images? We tackle this question by introducing an approach relying on query expansion techniques developed in IR. As there is no standard evaluation protocol for the automatic image annotation task tailored to landscape images, we designed various evaluation protocols to validate our contributions. We first evaluated the approaches defined to model the spatial, temporal, and thematic factors. Then, we validated the image annotation process and showed that it yields significant improvement over two state-of-the-art baselines. Finally, we assessed the effectiveness of tag expansion through Web sources and showed its contribution to the image annotation process.
These experiments are complemented by the image annotation prototype AnnoTaGT, which provides users with an operational framework for automatic image annotation.
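The annotation process summarized in this abstract scores candidate tags for a query image by combining spatial, temporal, and thematic (tag-frequency) evidence from similar images. The Python sketch below illustrates that general idea only; the exponential decay kernels, the equal weights, and all function and parameter names are assumptions made for illustration, not the probabilistic model described in the thesis.

```python
# Toy illustration of combining spatial, temporal, and tag-frequency
# evidence to rank candidate tags for a query image. The decay kernels
# and equal weights are assumptions, not the thesis's actual model.
import math
from collections import defaultdict

def spatial_sim(query, image, scale_km=5.0):
    """Similarity decaying with approximate geographic distance (km)."""
    dlat = (query["lat"] - image["lat"]) * 111.0   # rough km per degree of latitude
    dlon = (query["lon"] - image["lon"]) * 111.0 * math.cos(math.radians(query["lat"]))
    return math.exp(-math.hypot(dlat, dlon) / scale_km)

def temporal_sim(query, image, scale_days=30.0):
    """Similarity decaying with the gap between capture times (days)."""
    return math.exp(-abs(query["day"] - image["day"]) / scale_days)

def rank_tags(query, neighbours, w_spatial=0.5, w_temporal=0.5):
    """Score each tag by summing weighted similarities of the images carrying it."""
    scores = defaultdict(float)
    for img in neighbours:
        sim = w_spatial * spatial_sim(query, img) + w_temporal * temporal_sim(query, img)
        for tag in img["tags"]:
            scores[tag] += sim  # tag frequency enters through repeated tags
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

query = {"lat": 43.6, "lon": 1.44, "day": 120}
neighbours = [
    {"lat": 43.61, "lon": 1.45, "day": 118, "tags": ["bridge", "river"]},
    {"lat": 43.59, "lon": 1.43, "day": 300, "tags": ["river", "sunset"]},
]
print(rank_tags(query, neighbours))
```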
Bedoya Ramos, Daniel. "Capturing Musical Prosody Through Interactive Audio/Visual Annotations." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS698.
The proliferation of citizen science projects has advanced research and knowledge across disciplines in recent years. Citizen scientists contribute to research through volunteer thinking, often by engaging in cognitive tasks using mobile devices, web interfaces, or personal computers, with the added benefit of fostering learning, innovation, and inclusiveness. In music, crowdsourcing has been applied to gather various structural annotations. However, citizen science remains underutilized in musical expressiveness studies. To bridge this gap, we introduce a novel annotation protocol to capture musical prosody, which refers to the acoustic variations performers introduce to make music expressive. Our top-down, human-centered method prioritizes the listener's role in producing annotations of prosodic functions in music. This protocol provides a citizen science framework and experimental approach to carrying out systematic and scalable studies on the functions of musical prosody. We focus on the segmentation and prominence functions, which convey structure and affect. We implement this annotation protocol in CosmoNote, a web-based, interactive, and customizable software conceived to facilitate the annotation of expressive music structures. CosmoNote gives users access to visualization layers, including the audio waveform, the recorded notes, extracted audio attributes (loudness and tempo), and score features (harmonic tension and other markings). The annotation types comprise boundaries of varying strengths, regions, comments, and note groups. We conducted two studies aimed at improving the protocol and the platform. The first study examines the impact of co-occurring auditory and visual stimuli on segmentation boundaries. We compare differences in boundary distributions derived from cross-modal (auditory and visual) vs. unimodal (auditory or visual) information. Distances between unimodal-visual and cross-modal distributions are smaller than between unimodal-auditory and cross-modal distributions. On the one hand, we show that adding visuals accentuates crucial information and provides cognitive scaffolding for accurately marking boundaries at the starts and ends of prosodic cues. However, they sometimes divert the annotator's attention away from specific structures. On the other hand, removing the audio impedes the annotation task by hiding subtle, relied-upon cues. Although visual cues may sometimes overemphasize or mislead, they are essential in guiding boundary annotations of recorded performances, often improving the aggregate results. The second study uses all CosmoNote's annotation types and analyzes how annotators, receiving either minimal or detailed protocol instructions, approach annotating musical prosody in a free-form exercise. We compare the quality of annotations between participants who are musically trained and those who are not. The citizen science component is evaluated in an ecological setting where participants are fully autonomous in a task where time, attention, and patience are valued. We present three methods based on common annotation labels, categories, and properties to analyze and aggregate the data. Results show convergence in annotation types and descriptions used to mark recurring musical elements across experimental conditions and musical abilities. We propose strategies for improving the protocol, data aggregation, and analysis in large-scale applications.
This thesis contributes to representing and understanding performed musical structures by introducing an annotation protocol and platform, tailored experiments, and aggregation/analysis methods. The research shows the importance of balancing the collection of easier-to-analyze datasets and having richer content that captures complex musical thinking. Our protocol can be generalized to studies on performance decisions to improve the comprehension of expressive choices in musical performances.
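The annotation types listed in this abstract (boundaries of varying strengths, regions, comments, and note groups) can be pictured as simple structured records. The dataclasses below are an illustrative sketch under that reading only; they are not CosmoNote's actual data model or export format, and every field name is an assumption.

```python
# Illustrative sketch of prosodic annotation records of the kind the
# abstract describes (boundaries, regions, comments, note groups).
# These dataclasses are assumptions for illustration, not CosmoNote's
# real data model or export format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Boundary:
    time_s: float      # position in the recording, in seconds
    strength: int      # e.g. 1 (weak) to 4 (strong); the scale is assumed

@dataclass
class Region:
    start_s: float
    end_s: float
    label: str         # free-text description of the region

@dataclass
class NoteGroup:
    note_ids: List[int]  # indices of recorded notes grouped together
    comment: str = ""

@dataclass
class AnnotationSet:
    piece_id: str
    annotator_id: str
    boundaries: List[Boundary] = field(default_factory=list)
    regions: List[Region] = field(default_factory=list)
    note_groups: List[NoteGroup] = field(default_factory=list)

# Example: one listener marks a strong boundary and a prominent phrase.
annotations = AnnotationSet(
    piece_id="example-piece",
    annotator_id="listener-01",
    boundaries=[Boundary(time_s=12.3, strength=3)],
    regions=[Region(start_s=8.0, end_s=12.3, label="rising tension")],
)
print(annotations)
```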
Wang, Sheng-Ren, and 王聖仁. "Web-based Summary Writing Learning Environment via the Model of Integrating Concept Mapping and Sharing Annotation." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/16309342212500880796.
National Taipei University of Education
Graduate Institute of Educational Communication and Technology
Academic year 94 (2005–2006)
In order to address the difficulties learners face with summary writing in web-based learning environments, the present study proposes a web-based summary writing model integrating concept mapping, annotation, and CSCL (computer-supported collaborative learning). Summary writing is beneficial for learners' reading comprehension, recall, and recognition of a text's main idea, but it remains difficult for some learners. Take the judgment of importance as an example: when reading a longer or more complicated text, many learners cannot determine what should be deleted and what should be kept in the summary. The present study treats this difficulty as having two parts: the judgment of importance and cognitive overload. It therefore constructs and implements a web-based summary writing learning environment integrating concept mapping and shared annotation, using concept mapping as a scaffold to help learners grasp the main idea of the text. When learners encounter problems with concept mapping in this environment, peer collaborative shared annotation is applied. Finally, learners view the completed concept map as a writing frame and then proceed with summary writing. The concept map used in the present study is a fault-detecting one; it helps learners reduce cognitive load, recognize the main idea, and find main ideas they had not yet identified. When learners annotate the text, they also help the other learners in the environment. In conclusion, the present study provides one way of thinking about and practicing computer-based summary writing.
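The workflow described in this abstract (a fault-detecting concept map used as a scaffold, peer-shared annotations as hints, and the completed map serving as a writing frame) can be sketched as a small data model. The classes and helper functions below are hypothetical and only illustrate that flow; they do not reproduce the study's actual system, and all names are assumptions.

```python
# Illustrative sketch of the described workflow: a concept-map scaffold
# with faulty nodes, peer-shared annotations on the text, and the
# completed map used as a frame for summary writing. All names here are
# assumptions for illustration, not the study's system.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ConceptNode:
    label: str
    is_faulty: bool = False               # node the learner must detect and correct
    corrected_label: Optional[str] = None

@dataclass
class SharedAnnotation:
    learner_id: str
    text_span: str    # the passage the annotation points to
    note: str         # hint shared with peers

def remaining_faults(nodes: List[ConceptNode]) -> List[str]:
    """Faulty nodes the learner has not yet corrected."""
    return [n.label for n in nodes if n.is_faulty and n.corrected_label is None]

def writing_frame(nodes: List[ConceptNode]) -> List[str]:
    """Ordered list of (corrected) concepts to guide the summary draft."""
    return [n.corrected_label or n.label for n in nodes]

concept_map = [
    ConceptNode("main idea"),
    ConceptNode("wrong detail", is_faulty=True),
]
hints = [SharedAnnotation("peer-02", "paragraph 3", "this detail contradicts the text")]
print(remaining_faults(concept_map), writing_frame(concept_map))
```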
Books on the topic "Annotation via web"
Anne of Green Gables: Englische Lektüre mit Audio-CD für das 1. und 2. Lernjahr. Illustrierte Lektüre mit Annotationen und Zusatztexten. Klett Sprachen GmbH, 2015.
Book chapters on the topic "Annotation via web"
Xu, Hongtao, Xiangdong Zhou, Lan Lin, Yu Xiang, and Baile Shi. "Automatic Web Image Annotation via Web-Scale Image Semantic Space Learning." In Advances in Data and Web Management, 211–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00672-2_20.
De Virgilio, Roberto, and Lorenzo Dolfi. "Web Navigation via Semantic Annotations." In Lecture Notes in Computer Science, 347–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33999-8_41.
Collmer, C. W., M. Lindeberg, and Alan Collmer. "Gene Ontology (GO) for Microbe–Host Interactions and Its Use in Ongoing Annotation of Three Pseudomonas syringae Genomes via the Pseudomonas–Plant Interaction (PPI) Web Site." In Pseudomonas syringae Pathovars and Related Pathogens – Identification, Epidemiology and Genomics, 221–28. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-6901-7_23.
Bensmann, Felix, Andrea Papenmeier, Dagmar Kern, Benjamin Zapilko, and Stefan Dietze. "Semantic Annotation, Representation and Linking of Survey Data." In Semantic Systems. In the Era of Knowledge Graphs, 53–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59833-4_4.
Koutavas, Vasileios, Yu-Yang Lin, and Nikos Tzevelekos. "From Bounded Checking to Verification of Equivalence via Symbolic Up-to Techniques." In Tools and Algorithms for the Construction and Analysis of Systems, 178–95. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99527-0_10.
Beyer, Dirk, and Martin Spiessl. "The Static Analyzer Frama-C in SV-COMP (Competition Contribution)." In Tools and Algorithms for the Construction and Analysis of Systems, 429–34. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99527-0_26.
Ciavotta, Michele, Vincenzo Cutrona, Flavio De Paoli, Nikolay Nikolov, Matteo Palmonari, and Dumitru Roman. "Supporting Semantic Data Enrichment at Scale." In Technologies and Applications for Big Data Value, 19–39. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-78307-5_2.
Wood, James, and Robert Atkey. "A Framework for Substructural Type Systems." In Programming Languages and Systems, 376–402. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99336-8_14.
Fiaidhi, Jinan, Sabah Mohammed, and Yuan Wei. "Implications of Web 2.0 Technology on Healthcare." In Healthcare and the Effect of Technology, 269–89. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-733-6.ch016.
Wong, Zoie S. Y., Yuchen Qiao, Ryohei Sasano, Hongkuan Zhang, Kenichiro Taneda, and Shin Ushiro. "Annotation Guidelines for Medication Errors in Incident Reports: Validation Through a Mixed Methods Approach." In MEDINFO 2021: One World, One Health – Global Partnership for Digital Innovation. IOS Press, 2022. http://dx.doi.org/10.3233/shti220095.
Conference papers on the topic "Annotation via web"
Braylan, Alexander, and Matthew Lease. "Modeling and Aggregation of Complex Annotations via Annotation Distances." In WWW '20: The Web Conference 2020. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3366423.3380250.
Yagnik, Jay, and Atiq Islam. "Learning people annotation from the web via consistency learning." In the international workshop. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1290082.1290121.
Gong, Xinyu, Yuefu Zhou, Yue Bi, Mingcheng He, Shiying Sheng, Han Qiu, Ruan He, and Jialiang Lu. "Estimating Web Attack Detection via Model Uncertainty from Inaccurate Annotation." In 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/ 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). IEEE, 2019. http://dx.doi.org/10.1109/cscloud/edgecom.2019.00019.
Shen, Hualei, Dianfu Ma, Yongwang Zhao, and Rongwei Ye. "Collaborative annotation of medical images via web browser for teleradiology." In 2012 International Conference on Computerized Healthcare (ICCH). IEEE, 2012. http://dx.doi.org/10.1109/icch.2012.6724483.
Cao, Liangliang, Jie Yu, Jiebo Luo, and Thomas S. Huang. "Enhancing semantic and geographic annotation of web images via logistic canonical correlation regression." In the seventeen ACM international conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1631272.1631292.
Full textVoloshina, Ekaterina, and Polina Leonova. "The Universal Database for Lexical Typology." In INTERNATIONAL CONFERENCE on Computational Linguistics and Intellectual Technologies. RSUH, 2023. http://dx.doi.org/10.28995/2075-7182-2023-22-1133-1140.
Jia, Jimin, Nenghai Yu, and Xian-Sheng Hua. "Annotating personal albums via web mining." In Proceeding of the 16th ACM international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1459359.1459421.
Pattanaik, Vishwajeet, Shweta Suran, and Dirk Draheim. "Enabling Social Information Exchange via Dynamically Robust Annotations." In iiWAS2019: The 21st International Conference on Information Integration and Web-based Applications & Services. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3366030.3366060.