Academic literature on the topic "Annotation via web"
Browse the topical lists of articles, books, theses, conference papers, and other academic sources on the topic "Annotation via web".
Journal articles on the topic "Annotation via web"
Islamaj, Rezarta, Dongseop Kwon, Sun Kim, and Zhiyong Lu. "TeamTat: a collaborative text annotation tool". Nucleic Acids Research 48, W1 (May 8, 2020): W5–W11. http://dx.doi.org/10.1093/nar/gkaa333.
Mazhoud, Omar, Anis Kalboussi, and Ahmed Hadj Kacem. "Educational Recommender System based on Learner’s Annotative Activity". International Journal of Emerging Technologies in Learning (iJET) 16, no. 10 (May 25, 2021): 108. http://dx.doi.org/10.3991/ijet.v16i10.19955.
Wang, Han, Xinxiao Wu, and Yunde Jia. "Video Annotation via Image Groups from the Web". IEEE Transactions on Multimedia 16, no. 5 (August 2014): 1282–91. http://dx.doi.org/10.1109/tmm.2014.2312251.
Ma, Zhigang, Feiping Nie, Yi Yang, Jasper R. R. Uijlings, and Nicu Sebe. "Web Image Annotation Via Subspace-Sparsity Collaborated Feature Selection". IEEE Transactions on Multimedia 14, no. 4 (August 2012): 1021–30. http://dx.doi.org/10.1109/tmm.2012.2187179.
Wei, Chih-Hsuan, Alexis Allot, Robert Leaman, and Zhiyong Lu. "PubTator central: automated concept annotation for biomedical full text articles". Nucleic Acids Research 47, W1 (May 22, 2019): W587–W593. http://dx.doi.org/10.1093/nar/gkz389.
Hu, Mengqiu, Yang Yang, Fumin Shen, Luming Zhang, Heng Tao Shen, and Xuelong Li. "Robust Web Image Annotation via Exploring Multi-Facet and Structural Knowledge". IEEE Transactions on Image Processing 26, no. 10 (October 2017): 4871–84. http://dx.doi.org/10.1109/tip.2017.2717185.
Wang, Han, Xiabi Liu, Xinxiao Wu, and Yunde Jia. "Cross-domain structural model for video event annotation via web images". Multimedia Tools and Applications 74, no. 23 (July 30, 2014): 10439–56. http://dx.doi.org/10.1007/s11042-014-2175-z.
Lelong, Sebastien, Xinghua Zhou, Cyrus Afrasiabi, Zhongchao Qian, Marco Alvarado Cano, Ginger Tsueng, Jiwen Xin et al. "BioThings SDK: a toolkit for building high-performance data APIs in biomedical research". Bioinformatics 38, no. 7 (January 10, 2022): 2077–79. http://dx.doi.org/10.1093/bioinformatics/btac017.
Park, Yeon-Ji, Min-a. Lee, Geun-Je Yang, Soo Jun Park, and Chae-Bong Sohn. "Biomedical Text NER Tagging Tool with Web Interface for Generating BERT-Based Fine-Tuning Dataset". Applied Sciences 12, no. 23 (November 24, 2022): 12012. http://dx.doi.org/10.3390/app122312012.
Barrett, Kristian, Cameron J. Hunt, Lene Lange, and Anne S. Meyer. "Conserved unique peptide patterns (CUPP) online platform: peptide-based functional annotation of carbohydrate active enzymes". Nucleic Acids Research 48, W1 (May 14, 2020): W110–W115. http://dx.doi.org/10.1093/nar/gkaa375.
Texto completoTesis sobre el tema "Annotation via web"
Mitran, Mădălina. "Annotation d'images via leur contexte spatio-temporel et les métadonnées du Web". Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2399/.
The documents processed by Information Retrieval (IR) systems are typically indexed according to their contents, whether text or multimedia. Search engines based on these indexes aim to provide relevant answers to users' needs in the form of texts, images, sounds, videos, and so on. Our work concerns "image" documents. We are specifically interested in automatic image annotation systems, which associate keywords with images; the keywords are subsequently used for search via textual queries. Automatic image annotation aims to overcome the limits of manual and semi-automatic annotation, which are no longer feasible in today's context (i.e., the development of digital technologies and the advent of devices, such as smartphones, that allow anyone to take images at minimal cost). Among the different types of existing image collections (e.g., medical, satellite), we focus on landscape image collections, for which we identified the following challenges: What are the most discriminant features for this type of image? How should these features be modelled and merged? Which sources of information should be considered? How can scalability issues be handled?

The proposed contribution is threefold. First, we use different factors that influence the description of landscape images: the spatial factor (i.e., the latitude and longitude of images), the temporal factor (i.e., the time when the images were taken), and the thematic factor (i.e., tags crowdsourced on image-sharing platforms). We propose various techniques to model these factors based on tag frequency as well as spatial and temporal similarities. The choice of these factors rests on the following assumptions: a tag is all the more relevant for a query image as it is associated with images located in its close geographical area; a tag is all the more relevant for a query image as it is associated with images captured close in time to it; and a tag is all the more relevant as it is frequently contributed by users (the crowdsourcing concept). Second, we introduce a new image annotation process that recommends the terms that best describe a given query image provided by a user. For each query image, we rely on spatial, temporal, and spatio-temporal filters to identify similar images along with their tags; the different factors are then merged through a probabilistic model to boost the terms that best describe each query image. Third, the contributions above rely only on information extracted from photo-sharing platforms (i.e., subjective information). This raised the following research question: can information extracted from the Web provide objective terms useful for enriching the initial description of images? We tackle this question with an approach based on query expansion techniques developed in IR.

As there is no standard evaluation protocol for automatic image annotation tailored to landscape images, we designed various evaluation protocols to validate our contributions. We first evaluated the approaches defined to model the spatial, temporal, and thematic factors. We then validated the image annotation process and showed that it yields significant improvement over two state-of-the-art baselines. Finally, we assessed the effectiveness of tag expansion through Web sources and showed its contribution to the image annotation process.

These experiments are complemented by the image annotation prototype AnnoTaGT, which provides users with an operational framework for automatic image annotation.
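The fusion idea summarized in this abstract, combining spatial, temporal, and crowdsourced thematic evidence to rank candidate tags, can be pictured with a small, hypothetical Python sketch. It is not the thesis's actual probabilistic model: the function names, the Gaussian kernels, and the parameters sigma_km and sigma_days are illustrative assumptions only.

```python
import math
from collections import defaultdict


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def score_tags(query, neighbours, sigma_km=5.0, sigma_days=30.0):
    """Rank candidate tags for a query image using already-tagged neighbour images.

    query      -- dict with 'lat', 'lon', 'timestamp' (days since some epoch)
    neighbours -- list of dicts with 'lat', 'lon', 'timestamp', 'tags'
    """
    scores = defaultdict(float)
    for img in neighbours:
        # Spatial factor: geographically closer images contribute more.
        d_km = haversine_km(query["lat"], query["lon"], img["lat"], img["lon"])
        w_space = math.exp(-(d_km / sigma_km) ** 2)
        # Temporal factor: images taken around the same time contribute more.
        d_days = abs(query["timestamp"] - img["timestamp"])
        w_time = math.exp(-(d_days / sigma_days) ** 2)
        # Thematic factor: each occurrence of a tag adds crowdsourced evidence.
        for tag in img["tags"]:
            scores[tag] += w_space * w_time
    # Normalise so the scores behave like a distribution over candidate tags.
    total = sum(scores.values()) or 1.0
    return sorted(((t, s / total) for t, s in scores.items()),
                  key=lambda x: x[1], reverse=True)
```

In such a sketch, a pre-filtered neighbour set (for instance, images within a few kilometres or a few weeks of the query) would play the role of the spatial, temporal, and spatio-temporal filters described in the abstract.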
Bedoya, Ramos Daniel. "Capturing Musical Prosody Through Interactive Audio/Visual Annotations". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS698.
The proliferation of citizen science projects has advanced research and knowledge across disciplines in recent years. Citizen scientists contribute to research through volunteer thinking, often by engaging in cognitive tasks using mobile devices, web interfaces, or personal computers, with the added benefit of fostering learning, innovation, and inclusiveness. In music, crowdsourcing has been applied to gather various structural annotations. However, citizen science remains underutilized in musical expressiveness studies. To bridge this gap, we introduce a novel annotation protocol to capture musical prosody, which refers to the acoustic variations performers introduce to make music expressive. Our top-down, human-centered method prioritizes the listener's role in producing annotations of prosodic functions in music. This protocol provides a citizen science framework and experimental approach to carrying out systematic and scalable studies on the functions of musical prosody. We focus on the segmentation and prominence functions, which convey structure and affect.

We implement this annotation protocol in CosmoNote, a web-based, interactive, and customizable software conceived to facilitate the annotation of expressive music structures. CosmoNote gives users access to visualization layers, including the audio waveform, the recorded notes, extracted audio attributes (loudness and tempo), and score features (harmonic tension and other markings). The annotation types comprise boundaries of varying strengths, regions, comments, and note groups.

We conducted two studies aimed at improving the protocol and the platform. The first study examines the impact of co-occurring auditory and visual stimuli on segmentation boundaries. We compare differences in boundary distributions derived from cross-modal (auditory and visual) vs. unimodal (auditory or visual) information. Distances between unimodal-visual and cross-modal distributions are smaller than between unimodal-auditory and cross-modal distributions. On the one hand, we show that adding visuals accentuates crucial information and provides cognitive scaffolding for accurately marking boundaries at the starts and ends of prosodic cues. However, they sometimes divert the annotator's attention away from specific structures. On the other hand, removing the audio impedes the annotation task by hiding subtle, relied-upon cues. Although visual cues may sometimes overemphasize or mislead, they are essential in guiding boundary annotations of recorded performances, often improving the aggregate results.

The second study uses all of CosmoNote's annotation types and analyzes how annotators, receiving either minimal or detailed protocol instructions, approach annotating musical prosody in a free-form exercise. We compare the quality of annotations between participants who are musically trained and those who are not. The citizen science component is evaluated in an ecological setting where participants are fully autonomous in a task where time, attention, and patience are valued. We present three methods based on common annotation labels, categories, and properties to analyze and aggregate the data. Results show convergence in annotation types and descriptions used to mark recurring musical elements across experimental conditions and musical abilities. We propose strategies for improving the protocol, data aggregation, and analysis in large-scale applications.
This thesis contributes to representing and understanding performed musical structures by introducing an annotation protocol and platform, tailored experiments, and aggregation/analysis methods. The research shows the importance of balancing easier-to-analyze datasets against richer content that captures complex musical thinking. The protocol can be generalized to studies on performance decisions to improve the comprehension of expressive choices in musical performances.
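As a reading aid, the four annotation types named in the abstract (boundaries of varying strengths, regions, comments, and note groups) could be represented with a minimal data model such as the hypothetical Python sketch below; the class and field names are assumptions and do not reflect CosmoNote's actual schema or export format.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Boundary:
    """A segmentation boundary with a perceived strength (e.g. 1-4)."""
    time_sec: float
    strength: int


@dataclass
class Region:
    """A labelled time span, e.g. a passage heard as one prosodic gesture."""
    start_sec: float
    end_sec: float
    label: str = ""


@dataclass
class Comment:
    """A free-text remark anchored at a point in the recording."""
    time_sec: float
    text: str


@dataclass
class NoteGroup:
    """A set of recorded-note identifiers grouped by the annotator."""
    note_ids: List[int] = field(default_factory=list)
    label: str = ""


@dataclass
class PerformanceAnnotation:
    """One participant's annotations for one recorded performance."""
    piece_id: str
    annotator_id: str
    boundaries: List[Boundary] = field(default_factory=list)
    regions: List[Region] = field(default_factory=list)
    comments: List[Comment] = field(default_factory=list)
    note_groups: List[NoteGroup] = field(default_factory=list)
```

Aggregation across annotators, as in the second study, could then operate on shared labels, categories, and properties over collections of such records.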
Wang, Sheng-Ren and 王聖仁. "Web-based Summary Writing Learning Environment via the Model of Integrating Concept Mapping and Sharing Annotation". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/16309342212500880796.
National Taipei University of Education
Graduate Institute of Educational Communication and Technology
Academic year 94 (2005-2006)
To address the difficulties learners face when writing summaries in web-based learning environments, this study proposes a web-based summary writing model that integrates concept mapping, annotation, and CSCL (computer-supported collaborative learning). Summary writing benefits learners' reading comprehension, recall, and recognition of a text's main ideas, but it remains difficult for some learners. Consider the judgment of importance: when reading a longer or more complicated text, many learners cannot decide what should be deleted and what should go into the summary. The study treats this difficulty as having two parts, the judgment of importance and cognitive overload. It therefore constructs and deploys a web-based summary writing learning environment that integrates concept mapping with shared annotation, using concept mapping as a scaffold for grasping the main idea of the text. When learners run into problems while concept mapping, peer collaborative shared annotation is applied; finally, learners use the completed concept map as a writing frame and proceed with summary writing. The concept map used in this study is a fault-detecting one: it helps learners reduce cognitive load, recognize the main idea, and discover main ideas they had missed. When learners annotate the text, they also help other learners in the environment. In conclusion, the study offers one way of thinking about, and one concrete practice for, computer-supported summary writing.
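The "fault-detecting" concept map mentioned in the abstract can be illustrated with a small, hypothetical sketch: the learner's propositions are compared against an expert map, and missing or mislabelled links are flagged, which is the kind of situation in which the study's peer shared-annotation support would come into play. The representation and function below are illustrative assumptions, not the thesis's implementation.

```python
def find_concept_map_faults(expert_edges, learner_edges):
    """Compare a learner's concept map against an expert map.

    Each map is a set of (concept, link_label, concept) propositions.
    Returns the propositions the learner is missing and the learner
    propositions that connect the right concepts with a wrong link label.
    """
    missing = expert_edges - learner_edges
    mislabelled = {
        e for e in learner_edges
        if e not in expert_edges
        and any(e[0] == x[0] and e[2] == x[2] for x in expert_edges)
    }
    return missing, mislabelled


# Example: the learner connected "summary" and "main idea" with the wrong relation.
expert = {("summary", "captures", "main idea"),
          ("main idea", "is supported by", "details")}
learner = {("summary", "lists", "main idea")}
missing, mislabelled = find_concept_map_faults(expert, learner)
# missing holds both expert propositions; mislabelled holds the "lists" link.
```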
Books on the topic "Annotation via web"
Anne of Green Gables: Englische Lektüre mit Audio-CD für das 1. und 2. Lernjahr. Illustrierte Lektüre mit Annotationen und Zusatztexten. Klett Sprachen GmbH, 2015.
Book chapters on the topic "Annotation via web"
Xu, Hongtao, Xiangdong Zhou, Lan Lin, Yu Xiang, and Baile Shi. "Automatic Web Image Annotation via Web-Scale Image Semantic Space Learning". In Advances in Data and Web Management, 211–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00672-2_20.
De Virgilio, Roberto, and Lorenzo Dolfi. "Web Navigation via Semantic Annotations". In Lecture Notes in Computer Science, 347–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33999-8_41.
Collmer, C. W., M. Lindeberg, and Alan Collmer. "Gene Ontology (GO) for Microbe–Host Interactions and Its Use in Ongoing Annotation of Three Pseudomonas syringae Genomes via the Pseudomonas–Plant Interaction (PPI) Web Site". In Pseudomonas syringae Pathovars and Related Pathogens – Identification, Epidemiology and Genomics, 221–28. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-6901-7_23.
Bensmann, Felix, Andrea Papenmeier, Dagmar Kern, Benjamin Zapilko, and Stefan Dietze. "Semantic Annotation, Representation and Linking of Survey Data". In Semantic Systems. In the Era of Knowledge Graphs, 53–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59833-4_4.
Koutavas, Vasileios, Yu-Yang Lin, and Nikos Tzevelekos. "From Bounded Checking to Verification of Equivalence via Symbolic Up-to Techniques". In Tools and Algorithms for the Construction and Analysis of Systems, 178–95. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99527-0_10.
Beyer, Dirk, and Martin Spiessl. "The Static Analyzer Frama-C in SV-COMP (Competition Contribution)". In Tools and Algorithms for the Construction and Analysis of Systems, 429–34. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99527-0_26.
Ciavotta, Michele, Vincenzo Cutrona, Flavio De Paoli, Nikolay Nikolov, Matteo Palmonari, and Dumitru Roman. "Supporting Semantic Data Enrichment at Scale". In Technologies and Applications for Big Data Value, 19–39. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-78307-5_2.
Wood, James, and Robert Atkey. "A Framework for Substructural Type Systems". In Programming Languages and Systems, 376–402. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99336-8_14.
Fiaidhi, Jinan, Sabah Mohammed, and Yuan Wei. "Implications of Web 2.0 Technology on Healthcare". In Healthcare and the Effect of Technology, 269–89. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-733-6.ch016.
Wong, Zoie S. Y., Yuchen Qiao, Ryohei Sasano, Hongkuan Zhang, Kenichiro Taneda, and Shin Ushiro. "Annotation Guidelines for Medication Errors in Incident Reports: Validation Through a Mixed Methods Approach". In MEDINFO 2021: One World, One Health – Global Partnership for Digital Innovation. IOS Press, 2022. http://dx.doi.org/10.3233/shti220095.
Conference papers on the topic "Annotation via web"
Braylan, Alexander, and Matthew Lease. "Modeling and Aggregation of Complex Annotations via Annotation Distances". In WWW '20: The Web Conference 2020. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3366423.3380250.
Yagnik, Jay, and Atiq Islam. "Learning people annotation from the web via consistency learning". In the international workshop. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1290082.1290121.
Gong, Xinyu, Yuefu Zhou, Yue Bi, Mingcheng He, Shiying Sheng, Han Qiu, Ruan He, and Jialiang Lu. "Estimating Web Attack Detection via Model Uncertainty from Inaccurate Annotation". In 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud) / 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). IEEE, 2019. http://dx.doi.org/10.1109/cscloud/edgecom.2019.00019.
Shen, Hualei, Dianfu Ma, Yongwang Zhao, and Rongwei Ye. "Collaborative annotation of medical images via web browser for teleradiology". In 2012 International Conference on Computerized Healthcare (ICCH). IEEE, 2012. http://dx.doi.org/10.1109/icch.2012.6724483.
Cao, Liangliang, Jie Yu, Jiebo Luo, and Thomas S. Huang. "Enhancing semantic and geographic annotation of web images via logistic canonical correlation regression". In the seventeenth ACM international conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1631272.1631292.
Voloshina, Ekaterina, and Polina Leonova. "The Universal Database for Lexical Typology". In INTERNATIONAL CONFERENCE on Computational Linguistics and Intellectual Technologies. RSUH, 2023. http://dx.doi.org/10.28995/2075-7182-2023-22-1133-1140.
Jia, Jimin, Nenghai Yu, and Xian-Sheng Hua. "Annotating personal albums via web mining". In Proceedings of the 16th ACM international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1459359.1459421.
Pattanaik, Vishwajeet, Shweta Suran, and Dirk Draheim. "Enabling Social Information Exchange via Dynamically Robust Annotations". In iiWAS2019: The 21st International Conference on Information Integration and Web-based Applications & Services. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3366030.3366060.