Theses on the topic « Web-based annotation »

To see the other types of publications on this topic, follow this link: Web-based annotation.

Consult the 48 best theses for your research on the topic « Web-based annotation ».

Next to every source in the list of references there is an « Add to bibliography » button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Rodriguez, Henrry. « Designing, evaluating and exploring Web-based tools for collaborative annotation of documents ». Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3552.

Full text
Abstract:

This thesis explores the use of the World Wide Web as infrastructure for collaboration among small or middle-sized groups. A collection of Web-based tools has been developed, whose main characteristic is that they allow users to make annotations to shared documents. These Web annotations form a dialogue that is persistent and immediately accessible to the users. Special interest has been devoted to observing how collaborators make use of a common space where Web-documents as well as Web-annotations are organized and stored. This common space has been called a domain.

We have also tried a novel method for the design of collaborative Web-based systems, called “designing from inside”. It is based on communication between the users and the designer in the form of a dialogue, which is generated and presented “inside” the system that is being developed. In this way, users can make comments about their experience using the tool while in the appropriate context. Comments by the users as well as the designer's replies are shared with other users. In this way the users become involved unobtrusively in the design process of the tool.

One of the tools, DHS, has been used in longitudinal studies within courses where students also met regularly in the classroom. In one context the students used the DHS as a discussion or annotation tool for documents that they had written. Within this framework, we also explored how second-language students collaboratively made use of the tool to accomplish a task that is normally done individually (reading comprehension).

Col·lecció is the latest version of the DHS. The most important change in this tool is that users can add the Web-documents to the domain themselves. This gives a new perspective to the tools because it can work as a collective bookmark system. This system has been used in three case studies in which a distributed and co-located group discussed a collection of Web-documents.

Another system in the family is Col·laboració, which is oriented to supporting collaborative writing tasks. It focuses primarily on the communication needs co-authors might have around a shared document that is being produced. The system also allows for on-line revision and for generating versions of the document. This system has been used in 8 case studies, where we have observed the users' interaction and explored the possibilities that the Web offers to collaborative writing. For example, co-authors can use the commenting space as a “window to the Web”, as the Web provides a huge amount of information that can be relevant during the writing process.

One of the characteristics of all these tools is that they present the comments in chronological order. No threading mechanism is used, although several users have requested a threaded presentation of the comments. This design choice is based on the belief that with threading of comments, the focus of the discussion could drastically divert from its original topic, the document. In our observations, a dual discourse context is often found in the comments, referring both to a previous comment and to the shared document. To facilitate orientation in the discussions, we have also developed a visualization tool called Domain Interactivity Diagram (DID), designed to work together with the other systems.

The studies show that the Web offers a suitable infrastructure for text-based discussions in which the document can be given a prime role. It also emerged that the integration of email was appreciated by users, mainly because it was considered as a reminder of the task. In educational settings, students valued the possibility to go through many examples written by other students, in comparison with the traditional way. Also, the dialogue formed by the comments was a straightforward way to promote collaboration among students.

Keywords: WWW, discussion, annotation, design, writing, collaborative work, asynchronous communication, text-based communication.
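To make the chronological, unthreaded presentation of annotations described above concrete, here is a minimal Python sketch; the record fields and the DID-style per-author counts are illustrative assumptions, not the actual DHS or Col·lecció data model.

from dataclasses import dataclass
from datetime import datetime
from collections import Counter

@dataclass
class Annotation:
    document: str      # URL of the shared Web document in the domain
    author: str
    created: datetime
    body: str

def chronological(annotations):
    # No threading: the dialogue is simply the comments sorted by creation time.
    return sorted(annotations, key=lambda a: a.created)

def interactivity(annotations):
    # A DID-like summary: how many comments each author contributed per document.
    return Counter((a.document, a.author) for a in annotations)

notes = [
    Annotation("http://example.org/draft.html", "ana", datetime(2003, 3, 1, 10, 5), "Unclear intro."),
    Annotation("http://example.org/draft.html", "bo", datetime(2003, 3, 1, 9, 40), "Add a reference here."),
]
for a in chronological(notes):
    print(a.created.isoformat(), a.author, a.body)
print(interactivity(notes))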

2

Hatem, Muna Salman. « A framework for semantic web implementation based on context-oriented controlled automatic annotation ». Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/3207.

Full text
Abstract:
The Semantic Web is the vision of the future Web. Its aim is to enable machines to process Web documents in a way that makes it possible for the computer software to "understand" the meaning of the document contents. Each document on the Semantic Web is to be enriched with meta-data that express the semantics of its contents. Many infrastructures, technologies and standards have been developed and have proven their theoretical use for the Semantic Web, yet very few applications have been created. Most of the current Semantic Web applications were developed for research purposes. This project investigates the major factors restricting the wide spread of Semantic Web applications. We identify the two most important requirements for a successful implementation as the automatic production of the semantically annotated document, and the creation and maintenance of a semantics-based knowledge base. This research proposes a framework for Semantic Web implementation based on context-oriented controlled automatic annotation; for short, we call the framework the Semantic Web Implementation Framework (SWIF) and the system that implements this framework the Semantic Web Implementation System (SWIS). The proposed architecture provides for a Semantic Web implementation of stand-alone websites that automatically annotates Web pages before they are uploaded to the Intranet or Internet, and maintains persistent storage of Resource Description Framework (RDF) data for both the domain memory, denoted by Control Knowledge, and the meta-data of the Web site's pages. We believe that the presented implementation of the major parts of SWIS introduces a system competitive with current state-of-the-art annotation tools and knowledge management systems, because it handles input documents in the context in which they are created, in addition to the automatic learning and verification of knowledge using only the available computerized corporate databases. In this work, we introduce the concept of Control Knowledge (CK), which represents the application's domain memory, and use it to verify the extracted knowledge. Learning is based on the number of occurrences of the same piece of information in different documents. We introduce the concept of Verifiability in the context of annotation by comparing the extracted text's meaning with the information in the CK and by the use of the proposed database table Verifiability_Tab. We use the linguistic concept of Thematic Role in investigating and identifying the correct meaning of words in text documents, which helps correct relation extraction. The verb lexicon used contains the argument structure of each verb together with the thematic structure of the arguments. We also introduce a new method to chunk conjoined statements and identify the missing subject of the produced clauses. We use the semantic class of verbs that relates a list of verbs to a single property in the ontology, which helps in disambiguating the verb in the input text to enable better information extraction and annotation. Consequently, we propose the following definition for the annotated document, or what is sometimes called the 'Intelligent Document': 'The Intelligent Document is the document that clearly expresses its syntax and semantics for human use and software automation'. This work introduces a promising improvement to the quality of the automatically generated annotated document and the quality of the automatically extracted information in the knowledge base. Our approach in the area of using Semantic Web technology opens new opportunities for diverse areas of applications. E-learning applications can be greatly improved and become more effective.
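As a rough illustration of the occurrence-based learning and Control-Knowledge verification idea described in this abstract, the following Python sketch uses an assumed triple format, class name and threshold; it is not the actual SWIS design.

MIN_OCCURRENCES = 3  # assumed threshold: a fact must be seen in this many documents

class ControlKnowledge:
    """Toy stand-in for the domain memory: verified (subject, relation, object) triples."""
    def __init__(self, verified=None):
        self.verified = set(verified or [])
        self.pending = {}  # triple -> set of document ids it was extracted from

    def observe(self, triple, doc_id):
        # Learning: count in how many distinct documents the same fact occurs.
        docs = self.pending.setdefault(triple, set())
        docs.add(doc_id)
        if len(docs) >= MIN_OCCURRENCES:
            self.verified.add(triple)

    def is_verifiable(self, triple):
        # Verification: compare the extracted meaning with what the CK already holds.
        return triple in self.verified

ck = ControlKnowledge()
fact = ("ACME Ltd", "hasHeadquartersIn", "Bradford")
for doc in ("report-01", "report-07", "minutes-12"):
    ck.observe(fact, doc)
print(ck.is_verifiable(fact))  # True after three independent documents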
3

Ayuso, Anna Maria E. « Automation of Drosophila gene expression pattern image annotation : development of web-based image annotation tool and application of machine learning methods ». Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66403.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 91-92).
Large-scale in situ hybridization screens are providing an abundance of spatio-temporal patterns of gene expression data that is valuable for understanding the mechanisms of gene regulation. Drosophila gene expression pattern images have been generated by the Berkeley Drosophila Genome Project (BDGP) for over 7,000 genes in over 90,000 digital images. These images are currently hand curated by field experts with developmental and anatomical terms based on the stained regions. These annotations enable the integration of spatial expression patterns with other genomic data sets that link regulators with their downstream targets. However, the manual curation has become a bottleneck in the process of analyzing the rapidly generated data; it is therefore necessary to explore computational methods for the curation of gene expression pattern images. This thesis addresses improving the manual annotation process with a web-based image annotation tool and also enabling automation of the process using machine learning methods. First, a tool called LabelLife was developed to provide a systematic and flexible way of annotating images, groups of images, and shapes within images using terms from a controlled vocabulary. Second, machine learning methods for automatically predicting vocabulary terms for a given image based on image feature data were explored and implemented. The results of the applied machine learning methods are promising in terms of predictive ability, which has the potential to simplify and expedite the curation process, thereby increasing the rate at which biologically significant data can be evaluated and new insights can be gained.
by Anna Maria E. Ayuso.
M.Eng.
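The abstract above describes predicting controlled-vocabulary terms for an image from its feature data, which is a multi-label classification problem; a minimal scikit-learn sketch with made-up features and anatomical terms (the thesis's actual features and models may differ) is:

import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

# Toy image feature vectors (e.g. pooled descriptors of stained regions) and their curated terms.
X = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.1, 0.9, 0.7], [0.0, 0.8, 0.9]])
terms = [["ventral nerve cord"], ["ventral nerve cord"], ["brain", "midgut"], ["midgut"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(terms)  # one binary column per vocabulary term

# One binary classifier per term; any other base model could be substituted.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

new_image = np.array([[0.05, 0.85, 0.8]])
predicted = mlb.inverse_transform(clf.predict(new_image))
print(predicted)  # e.g. [('brain', 'midgut')]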
4

Al, Asswad Mohammad Mourhaf. « Semantic information systems engineering : a query-based approach for semi-automatic annotation of web services ». Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/5441.

Full text
Abstract:
There has been an increasing interest in Semantic Web services (SWS) as a proposed solution to facilitate automatic discovery, composition and deployment of existing syntactic Web services. Successful implementation and wider adoption of SWS by research and industry are, however, profoundly based on the existence of effective and easy-to-use methods for service semantic description. Unfortunately, Web service semantic annotation is currently performed by manual means. Manual annotation is a difficult, error-prone and time-consuming task, and few approaches exist that aim to semi-automate it. Existing approaches are difficult to use since they require ontology building. Moreover, these approaches employ ineffective matching methods and suffer from the Low Percentage Problem. The latter problem happens when only a small number of service elements, in comparison to the total number of elements, are annotated in a given service. This research addresses the Web services annotation problem by developing a semi-automatic annotation approach that allows SWS developers to effectively and easily annotate their syntactic services. The proposed approach does not require application ontologies to model service semantics. Instead, a standard query template is used: this template is filled with data and semantics extracted from WSDL files in order to produce query instances. The input of the annotation approach is the WSDL file of a candidate service and a set of ontologies. The output is an annotated WSDL file. The proposed approach is composed of five phases: (1) concept extraction; (2) concept filtering and query filling; (3) query execution; (4) results assessment; and (5) SAWSDL annotation. The query execution engine makes use of name-based and structural matching techniques. The name-based matching is carried out by CN-Match, a novel matching method and tool that is developed and evaluated in this research. The proposed annotation approach is evaluated using a set of existing Web services and ontologies. Precision (P), Recall (R), F-Measure (F) and the percentage of annotated elements are used as evaluation metrics. The evaluation reveals that the proposed approach is effective since, in relation to manual results, accurate and almost complete annotation results are obtained. In addition, a high percentage of annotated elements is achieved using the proposed approach because it makes use of effective ontology extension mechanisms.
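The evaluation metrics named above are standard; a small Python sketch of how they might be computed for one service follows, assuming (hypothetically) that the gold-standard and proposed annotations are given as element-to-concept mappings.

def precision_recall_f(proposed, gold):
    # proposed/gold: dicts mapping WSDL element names to ontology concept URIs
    correct = sum(1 for e, c in proposed.items() if gold.get(e) == c)
    p = correct / len(proposed) if proposed else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f

def annotated_percentage(proposed, all_elements):
    # The "Low Percentage Problem": how many of the service's elements got any annotation.
    return 100.0 * len(proposed) / len(all_elements)

gold = {"bookFlight": "ex:FlightBooking", "departureCity": "ex:City", "price": "ex:Fare"}
proposed = {"bookFlight": "ex:FlightBooking", "departureCity": "ex:Airport"}
print(precision_recall_f(proposed, gold))          # (0.5, 0.333..., 0.4)
print(annotated_percentage(proposed, ["bookFlight", "departureCity", "price", "currency"]))  # 50.0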
5

Chan, Wun Wa. « A study of social annotation tool in facilitating collaborative inquiry learning ». HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/514.

Full text
Abstract:
In twenty-first (21st) century tertiary education, undergraduate study is intended not only to teach subject knowledge through direct instruction or lecturing, but also to cultivate and foster students' skills and literacies to suit societal needs. For this reason, it is increasingly important to introduce new teaching and learning (T&L) strategies and web applications (apps) into students' undergraduate study. The introduction of collaborative inquiry learning (CIL) is intended to enhance students' communication and collaboration skills throughout their learning. In addition, by introducing social annotation (SoAn) tools, students are able to bookmark, highlight, annotate, share, discuss, and collaborate on information sources collected by students for their collaborative inquiry learning assignments (CILA). In this study, a self-developed SoAn tool known as the Web Annotation and Sharing Platform (WASP) was introduced to investigate how the SoAn tool can facilitate students' CIL. The study included 377 students (freshmen or sophomores) from three different courses at a Hong Kong university, Hong Kong Christian University. A mixed-method research approach was employed using four data collection methods. Quantitative data were collected from all participating students through a questionnaire survey, the WASP log file (students' actions on WASP), and CILA marks. Furthermore, qualitative data were gathered from selected students in individual face-to-face interviews. The study aimed to ascertain how students integrate and use the SoAn tool in their CIL. This study also investigated whether students think a SoAn tool is useful and effective for their CIL. Moreover, this study examined the correlations between students' perceptions of CIL and WASP, usage of WASP, and their CILA mark. Finally, this study examined the challenges students encountered when they integrated and used WASP in their CIL. The results reveal that the integration and usage of a SoAn tool were concentrated in the early stages of students' CIL. Furthermore, the results illustrated how the 'able other(s)' arise in the CIL group to provide information sources that initiate the discussion and collaboration among group members. Based upon the student perceptions collected in this study, the results suggested that students agreed that the WASP functions were useful and effective for CIL in courses that teach elementary Information and Communications Technology knowledge content (ICT-related courses). Moreover, student perceptions of the WASP functions correlated highly with their perceptions of CL before this study and their respective group process experiences. The results also indicated that although students' perceptions, SoAn tool usage and learning outcomes (CILA mark) are not correlated, there is a higher chance of reaching a correlation between the perceived usefulness of the WASP functions and the CILA mark in ICT-related courses. Lastly, the results suggested that low motivation for learning and using a SoAn tool, the functionality and recognition of a SoAn tool, and methods of processing, discussing, and collaborating on collected information sources were the challenges encountered when students integrated and used a SoAn tool in their CIL. The implications and limitations of this study are discussed in Chapter 8. Directions for future research and suggestions are provided, which include introducing SoAn tools in ICT-related courses and enhancing the functions of SoAn tools both for better user experiences and for research purposes.
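A minimal sketch of the kind of correlation analysis reported above, relating perception scores, logged tool usage and assignment marks; the per-student numbers are invented, not the study's data.

from scipy.stats import pearsonr

# Hypothetical per-student values: questionnaire perception score, number of WASP actions, CILA mark.
perception = [3.8, 4.2, 2.9, 4.5, 3.1, 3.9]
usage      = [12, 30, 5, 41, 9, 22]
marks      = [68, 74, 61, 80, 63, 70]

for name, series in [("perception vs usage", (perception, usage)),
                     ("usage vs mark", (usage, marks)),
                     ("perception vs mark", (perception, marks))]:
    r, p = pearsonr(*series)
    print(f"{name}: r={r:.2f}, p={p:.3f}")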
6

Vicient, Monllaó Carlos. « Moving towards the semantic web : enabling new technologies through the semantic annotation of social contents ». Doctoral thesis, Universitat Rovira i Virgili, 2015. http://hdl.handle.net/10803/285334.

Full text
Abstract:
Social Web technologies have caused an exponential growth of the documents available through the Web, making enormous amounts of textual electronic resources available. Users may be overwhelmed by such an amount of content and, therefore, the automatic analysis and exploitation of all this information is of interest to the data mining community. Data mining algorithms exploit features of the entities in order to characterise, group or classify them according to their resemblance. Data by itself does not carry any meaning; it needs to be interpreted to convey information. Classical data analysis methods did not aim to “understand” the content; the data were treated as meaningless numbers, and statistics were calculated on them to build models that were interpreted manually by human domain experts. Nowadays, motivated by the Semantic Web, many researchers have proposed semantic-grounded data classification and clustering methods that are able to exploit textual data at a conceptual level. However, they usually rely on pre-annotated inputs to be able to semantically interpret textual data such as the content of Web pages. The usability of all these methods is related to the linkage between data and its meaning. This work focuses on the development of a general methodology able to detect the most relevant features of a particular textual resource, finding out their semantics (associating them to concepts modelled in ontologies) and detecting its main topics. The proposed methods are unsupervised (avoiding the manual annotation bottleneck), domain-independent (applicable to any area of knowledge) and flexible (being able to deal with heterogeneous resources: raw text documents, semi-structured user-generated documents such as Wikipedia articles, or short and noisy tweets). The methods have been evaluated in different fields (Tourism, Oncology). This work is a first step towards the automatic semantic annotation of documents, needed to pave the way towards the Semantic Web vision.
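As a very rough, unsupervised sketch of the core step described above (linking salient terms in a text to concepts modelled in an ontology), the mini ontology and the label-matching rule below are invented for illustration; the thesis's methods are considerably more sophisticated.

import re

# Toy ontology: concept URI -> labels/synonyms (a real system would load these from an OWL/RDF model).
ontology = {
    "ex:Museum": {"museum", "gallery"},
    "ex:Beach": {"beach", "seaside"},
    "ex:Restaurant": {"restaurant", "bistro", "tapas bar"},
}

def annotate(text, ontology):
    # Normalise the text and count naive label occurrences per concept.
    text_l = " " + re.sub(r"[^a-z ]", " ", text.lower()) + " "
    matches = {}
    for concept, labels in ontology.items():
        hits = sum(text_l.count(" " + label + " ") for label in labels)
        if hits:
            matches[concept] = hits
    return matches  # candidate concepts with frequency-based relevance

tweet = "Great seaside afternoon, then dinner at a tiny tapas bar near the museum"
print(annotate(tweet, ontology))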
7

Dytrych, Jaroslav. « Sémantická anotace textu ». Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-412580.

Full text
Abstract:
This thesis deals with intelligent systems for supporting the semantic annotation of text. It discusses the motivation for creating such systems and the state of the art in the areas where they are used. The thesis also describes a newly proposed and implemented annotation system which provides advanced semantic filtering and presents annotation suggestion alternatives in a unique way. The results of the completed experiments clearly show the advantages of the proposed solution. They also prove that the user interface of annotation tools affects the annotation process. The information displayed for the task of disambiguating ambiguous entity names was optimised, and the proposed methods for speeding up annotation and increasing the quality of the created annotations were experimentally evaluated. A comparison with the general-purpose Protégé tool demonstrated the benefits of the created system for collaborative creation of ontologies that should be anchored in the text. In the conclusion, all achieved results are analysed and summarized.
8

Khan, Arshad Ali. « Exploiting Linked Open Data (LoD) and Crowdsourcing-based semantic annotation & tagging in web repositories to improve and sustain relevance in search results ». Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/428046/.

Full text
Abstract:
Online searching of multi-disciplinary web repositories is a topic of increasing importance as the number of repositories increases and the diversity of skills and backgrounds of their users widens. Earlier term-frequency based approaches have been improved by ontology-based semantic annotation, but such approaches are predominantly driven by "domain ontologies engineering first" and lack dynamicity, whereas the information is dynamic; the meaning of things changes with time; and new concepts are constantly being introduced. Further, there is no sustainable framework or method, discovered so far, which could automatically enrich the content of heterogeneous online resources for information retrieval over time. Furthermore, the methods and techniques being applied are fast becoming inadequate due to increasing data volume, concept obsolescence, and the complexity and heterogeneity of content types in web repositories. In the face of such complexities, term matching alone between a query and the indexed documents will no longer fulfil complex user needs. The ever growing gap between syntax and semantics needs to be continually bridged in order to address the above issues and ensure accurate retrieval of search results, against natural language queries, despite such challenges. This thesis investigates whether, by domain-specific expert crowd-annotation of content on top of the automatic semantic annotation (using Linked Open Data sources), the contemporary value of content in scientific repositories can be continually enriched and sustained. A purpose-built annotation, indexing and searching environment has been developed and deployed to a web repository which hosts more than 3,400 heterogeneous web documents. Based on expert crowd annotations, automatic LoD-based named entity extraction and search results evaluations, this research finds that search results retrieval having the crowd-sourced element performs better than retrieval having no crowd-sourced element. This thesis also shows that a consensus can be reached between the expert and non-expert crowd-sourced annotators on annotating and tagging the content of web repositories, using the controlled vocabulary (typology) and free-text terms and keywords.
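A toy sketch of the general idea of layering expert crowd tags on top of automatically extracted LoD entities when scoring documents against a query; the weights and field names are assumptions, not the thesis's actual ranking function.

CROWD_WEIGHT = 2.0   # assumed: a human-curated tag counts more than an automatic entity
AUTO_WEIGHT = 1.0

documents = [
    {"id": "doc-1", "auto_entities": {"sea ice", "arctic", "sensor"}, "crowd_tags": {"climate", "arctic"}},
    {"id": "doc-2", "auto_entities": {"antenna", "sensor"}, "crowd_tags": {"instrumentation"}},
]

def score(doc, query_terms):
    # Overlap-based score: automatic annotations plus (more heavily weighted) crowd annotations.
    s = AUTO_WEIGHT * len(query_terms & doc["auto_entities"])
    s += CROWD_WEIGHT * len(query_terms & doc["crowd_tags"])
    return s

query = {"arctic", "sensor"}
for doc in sorted(documents, key=lambda d: score(d, query), reverse=True):
    print(doc["id"], score(doc, query))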
9

Bedoya, Ramos Daniel. « Capturing Musical Prosody Through Interactive Audio/Visual Annotations ». Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS698.

Full text
Abstract:
The proliferation of citizen science projects has advanced research and knowledge across disciplines in recent years. Citizen scientists contribute to research through volunteer thinking, often by engaging in cognitive tasks using mobile devices, web interfaces, or personal computers, with the added benefit of fostering learning, innovation, and inclusiveness. In music, crowdsourcing has been applied to gather various structural annotations. However, citizen science remains underutilized in musical expressiveness studies. To bridge this gap, we introduce a novel annotation protocol to capture musical prosody, which refers to the acoustic variations performers introduce to make music expressive. Our top-down, human-centered method prioritizes the listener's role in producing annotations of prosodic functions in music. This protocol provides a citizen science framework and experimental approach to carrying out systematic and scalable studies on the functions of musical prosody. We focus on the segmentation and prominence functions, which convey structure and affect. We implement this annotation protocol in CosmoNote, a web-based, interactive, and customizable software conceived to facilitate the annotation of expressive music structures. CosmoNote gives users access to visualization layers, including the audio waveform, the recorded notes, extracted audio attributes (loudness and tempo), and score features (harmonic tension and other markings). The annotation types comprise boundaries of varying strengths, regions, comments, and note groups. We conducted two studies aimed at improving the protocol and the platform. The first study examines the impact of co-occurring auditory and visual stimuli on segmentation boundaries. We compare differences in boundary distributions derived from cross-modal (auditory and visual) vs. unimodal (auditory or visual) information. Distances between unimodal-visual and cross-modal distributions are smaller than between unimodal-auditory and cross-modal distributions. On the one hand, we show that adding visuals accentuates crucial information and provides cognitive scaffolding for accurately marking boundaries at the starts and ends of prosodic cues. However, they sometimes divert the annotator's attention away from specific structures. On the other hand, removing the audio impedes the annotation task by hiding subtle, relied-upon cues. Although visual cues may sometimes overemphasize or mislead, they are essential in guiding boundary annotations of recorded performances, often improving the aggregate results. The second study uses all CosmoNote's annotation types and analyzes how annotators, receiving either minimal or detailed protocol instructions, approach annotating musical prosody in a free-form exercise. We compare the quality of annotations between participants who are musically trained and those who are not. The citizen science component is evaluated in an ecological setting where participants are fully autonomous in a task where time, attention, and patience are valued. We present three methods based on common annotation labels, categories, and properties to analyze and aggregate the data. Results show convergence in annotation types and descriptions used to mark recurring musical elements across experimental conditions and musical abilities. We propose strategies for improving the protocol, data aggregation, and analysis in large-scale applications. 
This thesis contributes to representing and understanding performed musical structures by introducing an annotation protocol and platform, tailored experiments, and aggregation/analysis methods. The research shows the importance of balancing the collection of easier-to-analyze datasets and having richer content that captures complex musical thinking. Our protocol can be generalized to studies on performance decisions to improve the comprehension of expressive choices in musical performances
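A small sketch of how distances between segmentation-boundary distributions from different stimulus conditions could be compared, as in the first study above; the boundary times are invented, and the Wasserstein distance is just one reasonable choice, not necessarily the measure used in the thesis.

from scipy.stats import wasserstein_distance

# Hypothetical boundary times (in seconds) marked by annotators under three conditions.
cross_modal     = [4.1, 12.3, 20.8, 33.0, 41.2]
unimodal_audio  = [3.5, 13.9, 22.5, 35.1, 44.0]
unimodal_visual = [4.0, 12.5, 21.0, 33.4, 41.0]

d_audio  = wasserstein_distance(unimodal_audio, cross_modal)
d_visual = wasserstein_distance(unimodal_visual, cross_modal)
print(f"audio-only vs cross-modal:  {d_audio:.2f}")
print(f"visual-only vs cross-modal: {d_visual:.2f}")
# The study found distances like d_visual to be smaller than d_audio.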
10

Furno, Domenico. « Hybrid approaches based on computational intelligence and semantic web for distributed situation and context awareness ». Doctoral thesis, Universita degli studi di Salerno, 2013. http://hdl.handle.net/10556/927.

Full text
Abstract:
2011 - 2012
The research work focuses on Situation Awareness and Context Awareness topics. Specifically, Situation Awareness involves being aware of what is happening in the vicinity in order to understand how information, events, and one's own actions will impact goals and objectives, both immediately and in the near future. Thus, Situation Awareness is especially important in application domains where the information flow can be quite high and poor decision-making may lead to serious consequences. On the other hand, Context Awareness is considered a process to support user applications to adapt interfaces, tailor the set of application-relevant data, increase the precision of information retrieval, discover services, make the user interaction implicit, or build smart environments. Despite being slightly different, Situation and Context Awareness involve common problems such as: the lack of support for the acquisition and aggregation of dynamic environmental information from the field (i.e. sensors, cameras, etc.); the lack of formal approaches to knowledge representation (i.e. contexts, concepts, relations, situations, etc.) and processing (reasoning, classification, retrieval, discovery, etc.); and the lack of automated and distributed systems, with considerable computing power, to support reasoning on the huge quantity of knowledge extracted from sensor data. The thesis therefore investigates new approaches for distributed Context and Situation Awareness and proposes to apply them in order to achieve related research objectives such as knowledge representation, semantic reasoning, pattern recognition and information retrieval. The research work starts from the study and analysis of the state of the art in terms of techniques, technologies, tools and systems to support Context/Situation Awareness. The main aim is to develop a new contribution in this field by integrating techniques deriving from the fields of Semantic Web, Soft Computing and Computational Intelligence. From an architectural point of view, several frameworks are defined according to the multi-agent paradigm. Furthermore, some preliminary experimental results have been obtained in application domains such as Airport Security, Traffic Management, Smart Grids and Healthcare. Finally, future challenges lie in the following directions: semantic modeling of fuzzy control, temporal issues, automatic ontology elicitation, extension to other application domains and more experiments. [edited by author]
XI n.s.
11

Amir, Mohammad. « Semantically-enriched and semi-Autonomous collaboration framework for the Web of Things. Design, implementation and evaluation of a multi-party collaboration framework with semantic annotation and representation of sensors in the Web of Things and a case study on disaster management ». Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/14363.

Full text
Abstract:
This thesis proposes a collaboration framework for the Web of Things based on the concepts of Service-oriented Architecture and integrated with semantic web technologies to offer new possibilities in terms of efficient asset management during operations requiring multi-actor collaboration. The motivation for the project comes from the rise in disasters where effective cross-organisation collaboration can increase the efficiency of critical information dissemination. Organisational boundaries of participants, as well as their IT capability and trust issues, hinder the deployment of a multi-party collaboration framework, thereby preventing timely dissemination of critical data. In order to tackle some of these issues, this thesis proposes a new collaboration framework consisting of a resource-based data model, a resource-oriented access control mechanism and semantic technologies utilising the Semantic Sensor Network Ontology, which can be used simultaneously by multiple actors without impacting each other's networks, thus increasing the efficiency of disaster management and relief operations. The generic design of the framework enables future extensions, thus enabling its exploitation across many application domains. The performance of the framework is evaluated in two areas: the capability of the access control mechanism to scale with an increasing number of devices, and the capability of the semantic annotation process to increase in efficiency as more information is provided. The results demonstrate that the proposed framework is fit for purpose.
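A minimal sketch of annotating a single sensor observation with the SOSA core of the Semantic Sensor Network ontology using rdflib, in the spirit of the framework described above; the example sensor, property and values are invented.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/disaster/")

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

# One observation made by a water-level sensor during a flood-relief operation.
obs = EX["obs-42"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["water-level-sensor-7"]))
g.add((obs, SOSA.observedProperty, EX["waterLevel"]))
g.add((obs, SOSA.hasSimpleResult, Literal("3.2", datatype=XSD.decimal)))
g.add((obs, SOSA.resultTime, Literal("2015-06-01T10:15:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))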
12

Khalili, Ali. « A Semantics-based User Interface Model for Content Annotation, Authoring and Exploration ». Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-159956.

Full text
Abstract:
The Semantic Web and Linked Data movements, with the aim of creating, publishing and interconnecting machine-readable information, have gained traction in recent years. However, the majority of information is still contained in, and exchanged using, unstructured documents such as Web pages, text documents, images and videos. Nor can this be expected to change, since text, images and videos are the natural way in which humans interact with information. Semantic structuring of content, on the other hand, provides a wide range of advantages compared to unstructured information. Semantically-enriched documents facilitate information search and retrieval, presentation, integration, reusability, interoperability and personalization. Looking at the life-cycle of semantic content on the Web of Data, we see considerable progress on the backend side in storing structured content and linking data and schemata. Nevertheless, the currently least developed aspect of the semantic content life-cycle is, from our point of view, the user-friendly manual and semi-automatic creation of rich semantic content. In this thesis, we propose a semantics-based user interface model which aims to reduce the complexity of underlying technologies for semantic enrichment of content by Web users. By surveying existing tools and approaches for semantic content authoring, we extracted a set of guidelines for designing efficient and effective semantic authoring user interfaces. We applied these guidelines to devise a semantics-based user interface model called WYSIWYM (What You See Is What You Mean) which enables integrated authoring, visualization and exploration of unstructured and (semi-)structured content. To assess the applicability of our proposed WYSIWYM model, we incorporated the model into four real-world use cases comprising two general and two domain-specific applications. These use cases address four aspects of the WYSIWYM implementation: 1) its integration into existing user interfaces, 2) utilizing it for lightweight text analytics to incentivize users, 3) dealing with crowdsourcing of semi-structured e-learning content, 4) incorporating it for authoring of semantic medical prescriptions.
13

du, Toit Nicola. « Designing an interface to provide new functionality for the post-processing of web-based annotations ». Thesis, University of Cape Town, 2014. http://pubs.cs.uct.ac.za/archive/00000960/.

Full text
Abstract:
Systems to annotate online content are becoming increasingly common on the World Wide Web. While much research and development has been done for interfaces that allow users to make and view annotations, few annotation systems provide functionality that extends beyond this and allows users to also manage and process collections of existing annotations. Siyavula Education is a social enterprise that publishes high school Maths and Science textbooks online. The company uses annotations to collate collaborator and volunteer feedback (corrections, opinions, suggestions) about its books at various phases in the book-writing life cycle. Currently the company captures annotations on PDF versions of their books. The web-based software they use allows for some filtering and sorting of existing annotations, but the system is limited and not ideal for their rather specialised requirements. In an attempt to move away from a proprietary, PDF-based system Siyavula implemented annotator (http://okfnlabs.org/annotator/), software which allowed for the annotation of HTML pages. However, this software was not coupled with a back-end interface that would allow users to interact with a database of saved annotations. To enable this kind of interaction, a prototype interface was designed and is presented here. The purpose of the interface was to give users new and improved functionality for querying and manipulating a collection of web-based annotations about Siyavula’s online content. Usability tests demonstrated that the interface was successful at giving users this new and necessary functionality (including filtering, sorting and searching) to process annotations. Once integrated with front-end software (such as Annotator) and issue tracking software (such as GitHub) the interface could form part of a powerful new tool for the making and management of annotations on the Web.
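A tiny sketch of the filtering, sorting and searching functionality described above, over a collection of annotation records; the dictionary fields loosely mirror what an Annotator-style store returns but are simplified assumptions.

annotations = [
    {"uri": "/science/ch3", "user": "volunteer1", "created": "2014-02-10", "tags": ["typo"], "text": "Unit should be N, not kg."},
    {"uri": "/maths/ch1", "user": "editor2", "created": "2014-02-12", "tags": ["suggestion"], "text": "Add a worked example."},
    {"uri": "/science/ch3", "user": "editor2", "created": "2014-03-01", "tags": ["typo"], "text": "Figure label misspelt."},
]

def filter_by(items, **criteria):
    # e.g. filter_by(annotations, uri="/science/ch3")
    return [a for a in items if all(a.get(k) == v for k, v in criteria.items())]

def search(items, needle):
    return [a for a in items if needle.lower() in a["text"].lower()]

def sort_by_date(items, newest_first=True):
    return sorted(items, key=lambda a: a["created"], reverse=newest_first)

for a in sort_by_date(filter_by(annotations, uri="/science/ch3")):
    print(a["created"], a["user"], a["text"])
print(search(annotations, "example"))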
14

Bai, Xi. « Peer-to-peer, multi-agent interaction adapted to a web architecture ». Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/7968.

Full text
Abstract:
The Internet and Web have brought in a new era of information sharing and opened up countless opportunities for people to rethink and redefine communication. With the development of network-related technologies, a Client/Server architecture has become dominant in the application layer of the Internet. Nowadays network nodes are behind firewalls and Network Address Translations, and the centralised design of the Client/Server architecture limits communication between users on the client side. Achieving the conflicting goals of data privacy and data openness is difficult and in many cases the difficulty is compounded by the differing solutions adopted by different organisations and companies. Building a more decentralised or distributed environment for people to freely share their knowledge has become a pressing challenge and we need to understand how to adapt the pervasive Client/Server architecture to this more fluid environment. This thesis describes a novel framework by which network nodes or humans can interact and share knowledge with each other through formal service-choreography specifications in a decentralised manner. The platform allows peers to publish, discover and (un)subscribe to those specifications in the form of Interaction Models (IMs). Peer groups can be dynamically formed and disbanded based on the interaction logs of peers. IMs are published in HTML documents as normal Web pages indexable by search engines and associated with lightweight annotations which semantically enhance the embedded IM elements and at the same time make IM publications comply with the Linked Data principles. The execution of IMs is decentralised on each peer via conventional Web browsers, potentially giving the system access to a very large user community. In this thesis, after developing a proof-of-concept implementation, we carry out case studies of the resulting functionality and evaluate the implementation across several metrics. An increasing number of service providers have begun to look for customers proactively, and we believe that in the near future we will not search for services but rather services will find us through our peer communities. Our approaches show how a peer-to-peer architecture for this purpose can be obtained on top of a conventional Client/Server Web infrastructure.
15

Dong, Hai. « A customized semantic service retrieval methodology for the digital ecosystems environment ». Thesis, Curtin University, 2010. http://hdl.handle.net/20.500.11937/2345.

Full text
Abstract:
With the emergence of the Web and its pervasive intrusion on individuals, organizations, businesses etc., people now realize that they are living in a digital environment analogous to the ecological ecosystem. Consequently, no individual or organization can ignore the huge impact of the Web on social well-being, growth and prosperity, or the changes that it has brought about to the world economy, transforming it from a self-contained, isolated, and static environment to an open, connected, dynamic environment. Recently, the European Union initiated a research vision in relation to this ubiquitous digital environment, known as Digital (Business) Ecosystems. In the Digital Ecosystems environment, there exist ubiquitous and heterogeneous species, and ubiquitous, heterogeneous, context-dependent and dynamic services provided or requested by species. Nevertheless, existing commercial search engines lack sufficient semantic support: they cannot be employed to disambiguate user queries and cannot provide trustworthy and reliable service retrieval. Furthermore, current semantic service retrieval research focuses on service retrieval in the Web service field, which cannot provide requested service retrieval functions that take into account the features of Digital Ecosystem services. Hence, in this thesis, we propose a customized semantic service retrieval methodology, enabling trustworthy and reliable service retrieval in the Digital Ecosystems environment, by considering the heterogeneous, context-dependent and dynamic nature of services and the heterogeneous and dynamic nature of service providers and service requesters in Digital Ecosystems. The customized semantic service retrieval methodology comprises: 1) a service information discovery, annotation and classification methodology; 2) a service retrieval methodology; 3) a service concept recommendation methodology; 4) a quality of service (QoS) evaluation and service ranking methodology; and 5) a service domain knowledge updating, and service-provider-based Service Description Entity (SDE) metadata publishing, maintenance and classification methodology. The service information discovery, annotation and classification methodology is designed for discovering ubiquitous service information from the Web, annotating the discovered service information with ontology mark-up languages, and classifying the annotated service information by means of specific service domain knowledge, taking into account the heterogeneous and context-dependent nature of Digital Ecosystem services and the heterogeneous nature of service providers. The methodology is realized by the prototype of a Semantic Crawler, the aim of which is to discover service advertisements and service provider profiles from webpages and annotate the information with service domain ontologies. The service retrieval methodology enables service requesters to precisely retrieve the annotated service information, taking into account the heterogeneous nature of Digital Ecosystem service requesters. The methodology is presented by the prototype of a Service Search Engine. Since service requesters can be divided into the group which has relevant knowledge with regard to their service requests and the group which does not, we provide two different service retrieval modules. The module for the first group enables service requesters to directly retrieve service information by querying its attributes. The module for the second group enables service requesters to interact with the search engine to denote their queries by means of service domain knowledge, and then retrieve service information based on the denoted queries. The service concept recommendation methodology concerns the issue of incomplete or incorrect queries. The methodology enables the search engine to recommend relevant concepts to service requesters, once they find that the service concepts eventually selected cannot be used to denote their service requests. We premise that there is some extent of overlap between the selected concepts and the concepts denoting service requests, as a result of the impact of service requesters' understandings of service requests on the selected concepts through a series of human-computer interactions. Therefore, a semantic similarity model is designed that seeks semantically similar concepts based on selected concepts. The QoS evaluation and service ranking methodology is proposed to allow service requesters to evaluate the trustworthiness of a service advertisement and rank retrieved service advertisements based on their QoS values, taking into account the context-dependent nature of services in Digital Ecosystems. The core of this methodology is an extended CCCI (Correlation of Interaction, Correlation of Criterion, Clarity of Criterion, and Importance of Criterion) metrics, which allows a service requester to evaluate the performance of a service provider in a service transaction based on QoS evaluation criteria in a specific service domain. The evaluation result is then incorporated with the previous results to produce the eventual QoS value of the service advertisement in a service domain. Service requesters can rank service advertisements by considering their QoS values under each criterion in a service domain. The methodology for service domain knowledge updating, and service-provider-based SDE metadata publishing, maintenance and classification is initiated to allow: 1) knowledge users to update service domain ontologies employed in the service retrieval methodology, taking into account the dynamic nature of services in Digital Ecosystems; and 2) service providers to update their service profiles and manually annotate their published service advertisements by means of service domain knowledge, taking into account the dynamic nature of service providers in Digital Ecosystems. The methodology for service domain knowledge updating is realized by a voting system for any proposals for changes in service domain knowledge, and by assigning different weights to the votes of domain experts and normal users. In order to validate the customized semantic service retrieval methodology, we build a prototype, a Customized Semantic Service Search Engine. Based on the prototype, we test the mathematical algorithms involved in the methodology by a simulation approach and validate the proposed functions of the methodology by a functional testing approach.
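As a simplified stand-in for the CCCI-based QoS evaluation and ranking described above (not the actual CCCI formulas), the sketch below ranks service advertisements by a weighted average of per-criterion ratings; the criteria, weights and ratings are invented.

def qos_value(ratings, weights):
    # ratings/weights: criterion name -> value; importance weights need not sum to 1.
    total_w = sum(weights.values())
    return sum(weights[c] * ratings.get(c, 0.0) for c in weights) / total_w

weights = {"timeliness": 3, "accuracy": 5, "courtesy": 1}   # importance of each criterion

advertisements = {
    "TransportCo": {"timeliness": 0.9, "accuracy": 0.7, "courtesy": 0.8},
    "QuickHaul":   {"timeliness": 0.6, "accuracy": 0.9, "courtesy": 0.9},
}

ranked = sorted(advertisements.items(), key=lambda kv: qos_value(kv[1], weights), reverse=True)
for name, ratings in ranked:
    print(f"{name}: QoS = {qos_value(ratings, weights):.2f}")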
16

Khan, Imran. « Cloud-based cost-efficient application and service provisioning in virtualized wireless sensor networks ». Thesis, Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0019/document.

Full text
Abstract:
Wireless Sensor Networks (WSNs) are becoming ubiquitous and are used in diverse applications domains. Traditional deployments of WSNs are domain-specific, with applications usually embedded in the WSN, precluding the re-use of the infrastructure by other applications. This can lead to redundant deployments. Now with the advent of IoT, this approach is less and less viable. A potential solution lies in the sharing of a same WSN by multiple applications and services, to allow resource- and cost-efficiency. In this dissertation, three architectural solutions are proposed for this purpose. The first solution consists of two parts: the first part is a novel multilayer WSN virtualization architecture that allows the provisioning of multiple applications and services over the same WSN deployment. The second part of this contribution is the extended architecture that allows virtualized WSN infrastructure to interact with a WSN Platform-as-a-Service (PaaS) at a higher level of abstraction. Both these solutions are implemented and evaluated using two scenario-based proof-of-concept prototypes using Java SunSpot kit. The second architectural solution is a novel data annotation architecture for the provisioning of semantic applications in virtualized WSNs. It is capable of providing in-network, distributed, real-time annotation of raw sensor data and uses overlays as the cornerstone. This architecture is implemented and evaluated using Java SunSpot, AdvanticSys kits and Google App Engine. The third architectural solution is the enhancement to the data annotation architecture on two fronts. One is a heuristic-based genetic algorithm used for the selection of capable nodes for storing the base ontology. The second front is the extension to the proposed architecture to support ontology creation, distribution and management. The simulation results of the algorithm are presented and the ontology management extension is implemented and evaluated using a proof-of-concept prototype using Java SunSpot kit. As another contribution, an extensive state-of-the-art review is presented that introduces the basics of WSN virtualization and motivates its pertinence with carefully selected scenarios. This contribution substantially improves current state-of-the-art reviews in terms of the scope, motivation, details, and future research issues
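As a rough illustration of the node-selection problem mentioned in the last contribution, the sketch below shows how a genetic heuristic could pick which WSN nodes store the base ontology. It is not the algorithm from the thesis: the fitness function, the node attributes (residual energy, free storage) and all parameters are assumptions made purely for illustration.

    import random

    # Hypothetical node attributes: (residual_energy in %, free_storage in KB).
    NODES = [(random.uniform(10, 100), random.uniform(16, 512)) for _ in range(30)]
    K = 5               # number of storage nodes to select (assumed)
    POP, GENS = 40, 60  # GA parameters (assumed)

    def fitness(selection):
        # Reward selections whose nodes have high energy and enough storage.
        return sum(0.7 * NODES[i][0] + 0.3 * NODES[i][1] / 512 * 100 for i in selection)

    def random_individual():
        return random.sample(range(len(NODES)), K)

    def crossover(a, b):
        # Keep a random half of parent a, fill the rest from parent b.
        keep = set(random.sample(a, K // 2))
        rest = [i for i in b + a if i not in keep]
        return list(keep) + rest[:K - len(keep)]

    def mutate(ind, rate=0.1):
        if random.random() < rate:
            out = set(ind)
            out.discard(random.choice(ind))
            out.add(random.choice([i for i in range(len(NODES)) if i not in out]))
            return list(out)
        return ind

    population = [random_individual() for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    best = max(population, key=fitness)
    print("selected storage nodes:", sorted(best), "fitness:", round(fitness(best), 1))
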
Styles APA, Harvard, Vancouver, ISO, etc.
17

Khan, Imran. « Cloud-based cost-efficient application and service provisioning in virtualized wireless sensor networks ». Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0019.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
18

Du, Ming-zhang, et 杜明璋. « Personalized Annotation Management for Web Based Learning Service ». Thesis, 2004. http://ndltd.ncl.edu.tw/handle/29464948477408787074.

Texte intégral
Résumé :
Master's thesis
National Central University
Graduate Institute of Network Learning Technology
ROC academic year 92
Annotation is used to provide an interpretation of content. Existing learning standards and annotation tools only provide systematic definitions, yet readers always form their own opinions about the content, and these opinions are valuable to other people. In this thesis, we describe the design and implementation of a Personalized Annotation Management system that enables people to manage, share and reuse their annotations in an efficient way. We develop an anchoring method that associates an annotation with a precise position in an e-document, and the system provides an adaptive annotation service for different services. The proposed system also offers an interactive mechanism for discussing shared annotations among multiple users.
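The precise anchoring the abstract mentions can be pictured with a small data structure. The sketch below is a hypothetical representation (an element path plus a character range, with the quoted text as a fallback for re-anchoring), not the anchoring method actually proposed in the thesis.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Anchor:
        # One possible way to pin an annotation to a precise position:
        # a path of child indices from the document root plus a character range,
        # with the quoted text kept as a fallback for re-anchoring.
        element_path: List[int]      # e.g. [1, 0, 3] = 2nd child -> 1st child -> 4th child
        start_offset: int
        end_offset: int
        quoted_text: str

    @dataclass
    class Annotation:
        author: str
        anchor: Anchor
        body: str
        shared_with: List[str] = field(default_factory=list)

    note = Annotation(
        author="alice",
        anchor=Anchor(element_path=[0, 2], start_offset=15, end_offset=42,
                      quoted_text="annotation is used to provide"),
        body="This sentence states the motivation.",
        shared_with=["bob"],
    )
    print(note.anchor.element_path, note.body)
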
Styles APA, Harvard, Vancouver, ISO, etc.
19

Lin, Yi-Hsien, et 林易賢. « The Design and Development of Web-based Annotation System ». Thesis, 2011. http://ndltd.ncl.edu.tw/handle/7f6bby.

Texte intégral
Résumé :
Master's thesis
National Chiao Tung University
Digital Library and Information Group, In-service Master's Program, College of Computer Science
ROC academic year 99
When we read and encounter difficult subjects, we usually search for additional information to help us understand the article. If tools and systems could keep the information and comments left by previous readers, later readers could understand the article more easily and quickly. The main purpose of content markup is to describe individual content, including the reader's comprehension of and feelings about it. This research builds a system around the concepts of content markup and document annotation that provides users with document processing, knowledge structure construction, content markup, and mapping to build connections between documents. Users can read related sections through content markup and find related information more easily. Furthermore, users can write comments on article content and share them with other users, and the system also groups users into communities.
Styles APA, Harvard, Vancouver, ISO, etc.
20

Russell, Bryan C., Antonio Torralba, Kevin P. Murphy et William T. Freeman. « LabelMe : a database and web-based tool for image annotation ». 2005. http://hdl.handle.net/1721.1/30567.

Texte intégral
Résumé :
Research in object detection and recognition in cluttered scenes requires large image collections with ground truth labels. The labels should provide information about the object classes present in each image, as well as their shape and locations, and possibly other attributes such as pose. Such data is useful for testing, as well as for supervised learning. This project provides a web-based annotation tool that makes it easy to annotate images, and to instantly share such annotations with the community. This tool, plus an initial set of 10,000 images (3000 of which have been labeled), can be found at http://www.csail.mit.edu/~brussell/research/LabelMe/intro.html
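The ground-truth labels described above are essentially polygon outlines with class names attached. The record below is a simplified, hypothetical per-image structure for illustration only; it is not LabelMe's actual annotation format.

    import json

    # A simplified, hypothetical per-image annotation record: each object is a
    # class label plus the polygon (list of x, y points) outlining it.
    image_annotation = {
        "filename": "street_scene_001.jpg",
        "size": {"width": 640, "height": 480},
        "objects": [
            {"label": "car",        "polygon": [(112, 300), (220, 295), (225, 360), (110, 365)]},
            {"label": "pedestrian", "polygon": [(400, 250), (420, 250), (422, 330), (398, 332)]},
        ],
    }

    def polygon_area(points):
        # Shoelace formula: handy for filtering out degenerate (near-zero area) labels.
        n = len(points)
        s = sum(points[i][0] * points[(i + 1) % n][1] - points[(i + 1) % n][0] * points[i][1]
                for i in range(n))
        return abs(s) / 2.0

    for obj in image_annotation["objects"]:
        print(obj["label"], "area:", polygon_area(obj["polygon"]))

    print(json.dumps(image_annotation)[:80], "...")
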
Styles APA, Harvard, Vancouver, ISO, etc.
21

Jeng-Han, Hsieh. « A Minimalist Web Based Data Management Framework using Architectural Annotation ». 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-0602200611251800.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
22

Liu, Kuo-Yu, et 劉國有. « A Web-based Multimedia Annotation System for Correcting English Compositions ». Thesis, 2006. http://ndltd.ncl.edu.tw/handle/50729232584856369114.

Texte intégral
Résumé :
Doctoral dissertation
National Chi Nan University
Department of Computer Science and Information Engineering
ROC academic year 94
The integration of Web and multimedia technologies has ushered in a new era for learning. In this thesis, we aim to develop a useful hypermedia application, the Web-based Multimedia Annotation (WMA) system, for correcting English compositions. Unlike traditional hypertext-based lectures, we devised an elaborate capturing tool to record the instructor's lecturing process. The generated document consists of the instructor's narration and several types of navigation events (e.g., tele-pointer, pen strokes, and highlights) triggered by the instructor during the recording process. At the presentation stage, the recorded lectures can be presented dynamically and synchronously through multimedia synchronization techniques. In contrast to passive navigation of static hypertext documents, the recorded documents offer audiovisual features that animate the lecture presentation. We believe that the instructor's narration and guidance make the lecture more comprehensible for learners, which helps increase learners' learning efficiency and raise their writing ability. For a composite hypermedia document, media correlations provide important clues for synchronized presentation and cross-media access. Media correlations are classified into implicit relations (derived by computation) and explicit relations (recorded or pre-orchestrated by an authoring tool). We show the feasibility of constructing a vivid presentation by recording explicit relations and further exploring relations derived from them. We describe the synchronization problems in the temporal, spatial and content domains that a system may encounter when dealing with hypermedia documents. To facilitate navigation of the integrated hypermedia documents, we devised several processes for discovering media correlations so as to provide easy-to-use random access mechanisms and complete visual presentations. Our system has been deployed online to raise graduate students' English writing ability in the Department of Computer Science and Information Engineering on our campus. According to students' feedback and performance, they are satisfied with the features of the WMA system, and their writing ability improved gradually.
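The synchronization of narration with navigation events can be illustrated with a timestamped event log. The sketch below is a toy model, assuming each event carries a time offset relative to the start of the narration; the field names are invented and do not come from the WMA system.

    import bisect

    # Hypothetical recording: each event carries a time offset (seconds from the
    # start of the narration) so playback can replay it in sync with the audio.
    events = [
        {"t": 2.0,  "type": "telepointer", "x": 120, "y": 80},
        {"t": 5.5,  "type": "highlight",   "start": 34, "end": 58},
        {"t": 9.25, "type": "pen_stroke",  "points": [(10, 10), (40, 12), (80, 15)]},
    ]
    events.sort(key=lambda e: e["t"])
    timeline = [e["t"] for e in events]

    def events_up_to(playback_time):
        # Everything whose timestamp has already passed should be (re)drawn,
        # which also supports random access (seeking) in the recorded lecture.
        return events[:bisect.bisect_right(timeline, playback_time)]

    print([e["type"] for e in events_up_to(6.0)])   # -> ['telepointer', 'highlight']
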
Styles APA, Harvard, Vancouver, ISO, etc.
23

Hsieh, Jeng-Han, et 謝政翰. « A Minimalist Web Based Data Management Framework using Architectural Annotation ». Thesis, 2006. http://ndltd.ncl.edu.tw/handle/12771728714000582531.

Texte intégral
Résumé :
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
ROC academic year 94
In this paper, we present a data management framework that uses a novel annotation scheme called architectural annotation to bridge the gap between human-readable and machine-readable information by constructing generative templates that contain both the grammar and the presentation of a set of data entries. A system that utilizes the annotated information, called MetaEngine, is also presented to show how existing browsers and servers can use the annotated materials to construct data authoring interfaces automatically, and how such an annotation scheme can create a novel model of client/server interaction for web-based applications.
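The idea of a generative template carrying both the grammar and the presentation of data entries can be illustrated with a toy example. The sketch below is not the thesis's MetaEngine; the template fields, the rendering pattern and the generated form are all invented to show how one declaration can drive validation, presentation and an authoring interface.

    # Toy "generative template": the field list is the grammar (what a valid entry
    # contains) and the pattern is the presentation. The names are invented.
    template = {
        "fields": {"title": str, "year": int},
        "pattern": "<li><b>{title}</b> ({year})</li>",
    }

    def validate(entry, tpl):
        return all(isinstance(entry.get(name), typ) for name, typ in tpl["fields"].items())

    def render(entry, tpl):
        return tpl["pattern"].format(**entry)

    def authoring_form(tpl):
        # The same template can drive an automatically generated input form.
        rows = "".join(f'<label>{name}<input name="{name}"></label>' for name in tpl["fields"])
        return f"<form>{rows}</form>"

    entry = {"title": "A Minimalist Web Based Data Management Framework", "year": 2006}
    assert validate(entry, template)
    print(render(entry, template))
    print(authoring_form(template))
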
Styles APA, Harvard, Vancouver, ISO, etc.
24

Yeh, Yi-Ting. « Applying Video Annotation Technology on Web-Based Multimedia Learning Framework ». 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0020-2007200709554500.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
25

Luo, Guo-Heng, et 羅國亨. « An XML-based Multimedia Annotation schema for World Wide Web ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/60834643191701010457.

Texte intégral
Résumé :
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
ROC academic year 97
Since people now need to read and process massive amounts of information, they must grasp and summarize it very quickly. When people read a book, they highlight text or add annotations to increase their reading efficiency. Today, however, much reading happens on web pages, and readers must adapt to reading in the browser. Because of the inconvenience of annotating in a web browser, web service providers have in recent years developed ways to add annotations to web pages. As these providers keep users' annotations in their own storage, the application and exchange of annotations are restricted. This thesis proposes an XML-based multimedia annotation schema for the World Wide Web, using the standard XML format to increase the readability and exchangeability of web annotations. The standard XML format lets annotations be processed by many programming languages and makes it convenient to exchange them between client and server or between servers.
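An XML serialization of a web annotation can be sketched as follows. The element and attribute names are invented for illustration and are not the schema proposed in the thesis; the point is only that such a record can be produced and parsed by almost any language, which eases exchange between client and server.

    import xml.etree.ElementTree as ET

    # Invented element/attribute names; the point is only that an XML serialization
    # can be produced and parsed by virtually any language, easing exchange.
    ann = ET.Element("annotation", id="a42", author="reader01", created="2009-06-01T10:30:00")
    target = ET.SubElement(ann, "target", url="http://example.org/article.html")
    ET.SubElement(target, "selector", xpath="//p[3]", start="12", end="47")
    ET.SubElement(ann, "body", type="text/plain").text = "Key claim of the article."
    ET.SubElement(ann, "media", type="image/png", href="http://example.org/sketch.png")

    xml_string = ET.tostring(ann, encoding="unicode")
    print(xml_string)

    # Round-trip: the receiving side parses the same document back.
    parsed = ET.fromstring(xml_string)
    print(parsed.get("author"), "->", parsed.findtext("body"))
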
Styles APA, Harvard, Vancouver, ISO, etc.
26

Yeh, Yi-Ting, et 葉怡婷. « Applying Video Annotation Technology on Web-Based Multimedia Learning Framework ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/53938414348292261360.

Texte intégral
Résumé :
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
ROC academic year 95
With Web and hypertext technology, we can obtain many kinds of information and data on the Web. Moreover, streaming and compression technologies enable us to access multimedia data over the network more easily than before. Hypertext technology is widely used to link Web resources; however, we usually can only browse a video by following its original sequence. Our research extends the hypervideo concept and combines hypertext with video presentation to offer a different presentation style. We use current multimedia streaming, compression, and Web technologies to construct a framework for a Web-based hypervideo system that is mainly used for instruction. The system offers an easy-to-use interface for teachers to annotate and add information to videos, and the player tool designed for students re-presents the provided annotations and information. Moreover, students can configure video presentation options to navigate the video in different ways. A prototype system was built to evaluate the proposed framework. The experimental results show that the framework is feasible for this kind of application, and the prototype verifies that the user interface and data model are well designed.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Chen, Kung-Chih. « A Web-Based Object-Oriented Annotation System for English Compositions Correction ». 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0020-1807200711025400.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
28

Chen, Kung-Chih, et 陳恭志. « A Web-Based Object-Oriented Annotation System for English Compositions Correction ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/17398534557682830906.

Texte intégral
Résumé :
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
ROC academic year 95
With the ubiquity of the Internet and the importance of English ability, many multimedia technologies have been integrated for computer-aided English teaching and learning. Traditionally, it takes considerable manpower to produce English teaching materials. Nowadays, much software provides diverse, useful material-producing tools for editors, simplifying the traditional procedure of making teaching materials. However, most professional software is too complex for editors who are not specialized in computer science, and it produces materials that are too large to be browsed comfortably on a web page. Accordingly, we devised a multimedia annotation system that provides diverse presentations different from traditional static web pages. We record instructors' cursor activities as well as their voice comments. At the presentation stage, the recorded documents offer audiovisual features that animate the lecture presentation. We believe that the narration and guidance make the lecture more comprehensible for learners, which helps increase learners' learning efficiency and raise their writing ability.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Ma, Chih-Chun, et 馬治群. « A Study on Web Service Discovery based on Business Rule Annotation ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/20636803412742900071.

Texte intégral
Résumé :
Master's thesis
National Taiwan Ocean University
Department of Computer Science and Engineering
ROC academic year 100
Service-Oriented Computing (SOC) has become an important trend in software engineering, exploiting both web services and Service-Oriented Architecture (SOA) as fundamental elements to provide on-demand applications. Among the variety of SOC technologies, web service discovery is the process of locating web services that satisfy the requirements of service requesters, and as such it plays an important role in building loosely coupled service-oriented applications. Today, many service discovery mechanisms are available, including (1) UDDI-based service search, (2) semantic and rule-based service discovery, and (3) text-based service matching. However, most of these efforts do not focus on linking to actual business services in the real world and do not provide an appropriate relaxation mechanism. Therefore, this thesis proposes a service discovery approach to address these issues. Its main features are (1) describing services with the proposed business rule annotation mechanism, which covers condition rules, enumeration rules, and applied utility references; (2) filtering services through multiple filters, including a kernel property filter, a limitation property filter, and a user property filter; (3) calculating service ranking scores by consolidating QoS limitation values with the configured QoS importance ranks; and (4) assisting users in locating appropriate services through the proposed service relaxation mechanism.
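The QoS-based ranking step can be illustrated with a small calculation. The sketch below assumes three hypothetical services, hard QoS limits and importance weights chosen arbitrarily; it is not the scoring formula defined in the thesis.

    # Hypothetical QoS ranking: attribute names, weights and the normalisation are
    # assumptions, not the scheme defined in the thesis.
    candidates = [
        {"name": "PayFast",  "response_ms": 120, "availability": 0.999,  "cost": 0.02},
        {"name": "PayCheap", "response_ms": 400, "availability": 0.990,  "cost": 0.005},
        {"name": "PaySafe",  "response_ms": 250, "availability": 0.9999, "cost": 0.03},
    ]
    limits  = {"response_ms": 500, "availability": 0.98}               # hard QoS limitations
    weights = {"response_ms": 0.5, "availability": 0.3, "cost": 0.2}   # importance ranks

    def passes_limits(svc):
        return (svc["response_ms"] <= limits["response_ms"]
                and svc["availability"] >= limits["availability"])

    def score(svc):
        # Lower-is-better attributes are inverted so every term rewards good values.
        return (weights["response_ms"] * (1 - svc["response_ms"] / limits["response_ms"])
                + weights["availability"] * svc["availability"]
                + weights["cost"] * (1 - svc["cost"] / 0.05))

    ranked = sorted((s for s in candidates if passes_limits(s)), key=score, reverse=True)
    print([(s["name"], round(score(s), 3)) for s in ranked])
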
Styles APA, Harvard, Vancouver, ISO, etc.
30

Li, Wei-Hsin, et 李偉新. « Developing a Web-based Video Annotation System and Evaluating Its Suitability ». Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2q2ky3.

Texte intégral
Résumé :
Master's thesis
University of Taipei
In-service Master's Program, Department of Computer Science
ROC academic year 106
Annotation is a helpful strategy for remembering, clarifying, thinking and sharing during the learning process. Applying video-based materials in learning models such as MOOCs and the flipped classroom often encounters dilemmas, and engaging learners in viewing video-based content is an important task. Video annotation has great potential for reducing these problems. Some earlier studies designed video annotation systems or platforms using Adobe Flash or plug-in modules; however, those techniques have limitations and will not be supported in the future. HTML5 was announced by the W3C in 2014 and has become the new standard for web browsers, and the aforementioned dilemma can be handled with it. The main purpose of this study is therefore to develop a web-based video annotation system and evaluate its suitability for learning. The study adopted HTML5 to design a web-based video annotation system that is not limited by Flash or other modules. In this system, learners can make annotations by writing text or drawing geometric shapes, share and review their peers' annotations, and offer feedback. Teachers can set up online classrooms, upload video materials, and ask their students to view and annotate specified videos. Additionally, teachers can create pop-up questions for students to answer while viewing a video, in order to evaluate their learning performance. To record and analyze the students' online behaviors, this study adopted the ADL xAPI standard. The system was developed with a prototyping method, and the evaluation questionnaires were designed following the TAM model, covering suitability, perceived ease of use, perceived interest, perceived usefulness, and instructional benefits. After the system was completed, ICT experts and teaching experts tested and evaluated it, and the results revealed that the experts rated the system highly. The system was then applied in a university computer architecture course to measure learners' acceptance. Most learners rated the perceived interest and usefulness of the annotation system highly, but the user interface still needs to be improved. Keywords: annotation, video annotation, xAPI, TAM, system evaluation
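Recording viewing and annotation behaviour with xAPI boils down to emitting statements of the form actor-verb-object. The sketch below builds a simplified statement; the verb and activity identifiers are placeholders rather than official xAPI vocabulary, and a real deployment would send the statement to a Learning Record Store.

    import json, datetime

    def annotation_statement(user_email, video_id, t_seconds, note):
        # Simplified xAPI-style statement (actor / verb / object / result);
        # the verb and activity IDs below are placeholders, not official ones.
        return {
            "actor": {"mbox": f"mailto:{user_email}", "objectType": "Agent"},
            "verb": {"id": "http://example.org/verbs/annotated", "display": {"en-US": "annotated"}},
            "object": {"id": f"http://example.org/videos/{video_id}", "objectType": "Activity"},
            "result": {"response": note,
                       "extensions": {"http://example.org/ext/video-time": t_seconds}},
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }

    stmt = annotation_statement("student01@example.org", "cpu-pipelining", 93.5,
                                "The hazard example starts here.")
    print(json.dumps(stmt, indent=2))  # would be POSTed to a Learning Record Store
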
Styles APA, Harvard, Vancouver, ISO, etc.
31

Amir, Mohammad, Yim Fun Hu et Prashant Pillai. « Effective knowledge management using tag-based semantic annotation for web of things devices ». 2014. http://hdl.handle.net/10454/10583.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
32

Eng, Daniel C. « Web-based Stereo Rendering for Visualization and Annotation of Scientific Volumetric Data ». 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2008-12-226.

Texte intégral
Résumé :
Advancement in high-throughput microscopy technology such as Knife-Edge Scanning Microscopy (KESM) is enabling the production of massive amounts of high-resolution and high-quality volumetric data of biological microstructures. To fully utilize these data, they should be efficiently distributed to the scientific research community through the Internet and should be easily visualized, annotated, and analyzed. Given the volumetric nature of the data, visualizing them in 3D is important. However, since we cannot assume that every end user has high-end hardware, an approach with minimal hardware and software requirements is necessary, such as a standard web browser running on a typical personal computer. There are several web applications that facilitate the viewing of large collections of images. Google Maps and Google Maps-like interfaces such as Brainmaps.org allow users to pan and zoom 2D images efficiently; however, they do not yet support the rendering of volumetric data in their standard web interface. The goal of this thesis is to develop a light-weight volumetric image viewer using existing web technologies such as HTML, CSS and JavaScript, while exploiting the properties of stereo vision to facilitate the viewing and annotation of volumetric data. Stereograms were chosen over other techniques because they allow the raw image stacks produced by the 3D microscope to be used without any extra computation on the data. Operations to generate stereo images from 2D image stacks include distance attenuation and binocular disparity. Because HTML and JavaScript are computationally cheap, we can accomplish both tasks dynamically in a standard web browser by overlaying the images with intervening semi-opaque layers. The annotation framework has also been implemented and tested. In order for annotation to work in this environment, it should also be in the form of a stereogram and should aid the merging of stereo pairs. The current technique allows users to place a mark (dot) on one image stack, and its projected position onto the other image stack is calculated dynamically on the client side. Other metadata such as textual descriptions can be entered by the user as well. To cope with the occlusion problem caused by changes in the z direction, the structure traced by the user is displayed on the side, together with the data stacks. Using the same stereogram creation techniques, the traces made by the user are dynamically generated and shown as stereograms. We expect the approach presented in this thesis to be applicable to broader scientific domains, including geology and meteorology.
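The client-side projection of a mark onto the other half of the stereo pair can be approximated with a simple disparity model. The sketch below assumes a linear mapping from slice depth to horizontal disparity with arbitrary constants; it is an illustration, not the computation used in the thesis.

    def project_to_right_view(x_left, y_left, slice_index, num_slices,
                              max_disparity_px=20.0):
        # Toy linear disparity model: deeper slices (larger index) get a larger
        # horizontal shift between the left and right stereo half-images. The
        # constants and the linear mapping are assumptions for illustration only.
        depth = slice_index / max(num_slices - 1, 1)   # 0.0 (near) .. 1.0 (far)
        disparity = max_disparity_px * depth
        return x_left - disparity, y_left              # y is unchanged

    # A mark placed on slice 30 of a 100-slice stack in the left view:
    print(project_to_right_view(x_left=250, y_left=140, slice_index=30, num_slices=100))
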
Styles APA, Harvard, Vancouver, ISO, etc.
33

Hsien, Li Chung, et 李忠憲. « The Design of a Web-based Stream Video Annotation and Virtual Composing System ». Thesis, 2008. http://ndltd.ncl.edu.tw/handle/96330472475667910702.

Texte intégral
Résumé :
Master's thesis
Taipei Municipal University of Education
Master's Program, Department of Mathematics and Information Education
ROC academic year 96
Streaming video has been widely used for various applications; nevertheless, many difficulties arise when integrating it into instruction, above all in blended learning. Video composing and annotation are essential but tedious tasks for teachers before teaching. To reduce this dilemma, the main purpose of this study was to develop a web-based annotation and virtual composing system for streaming video materials, named "ET-Tube (Easy Teaching Tube)". The study adopted the MVC and OOP models together with PHP, AJAX, and Flash RIA technology to develop the ET-Tube system with Web 2.0 features. ET-Tube lets teachers annotate and virtually compose video-based teaching materials cooperatively; the result of the annotation and composing forms a SMIL playing list that includes text annotations (also supporting the SRT and TT formats), voice annotations (MP3), drawing annotations (SVG), and static and dynamic overlay image annotations (JPG, GIF, PNG, SWF). Users can specify the sequence of video segments, forming a virtual video. The annotations and virtual videos can be used for instructional design and be shared, searched, reused and recomposed in the web-based video-on-demand system according to teachers' needs. Applying this system to instruction, teachers can concentrate on instructional design without facing the difficulties of video processing, which saves preparation time and makes their teaching run smoothly. Teachers can also conduct video-based e-learning online, and participants can express their viewpoints and ideas after viewing the virtual video by means of the annotation function, which can enhance learning effectiveness and efficiency.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Chang, Yen-Jia, et 張晏嘉. « Applying the Revised Statistic-Based Chinese Segmentation in Real-Time Web Image Annotation ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/39615717269596825261.

Texte intégral
Résumé :
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Department of Information Management
ROC academic year 100
This research proposes a new Chinese segmentation method, Iterative Merging Chinese Segmentation, which can be applied to automatic image annotation. There are currently two approaches to image annotation: content-based image retrieval and textual content analysis. Content-based image retrieval primarily uses image features to extract objects and assign appropriate annotations; this approach is restricted to the content of the pictures and the terms already defined, and thus cannot give them deeper meaning. This research focuses on textual content analysis and proposes a new segmentation method that does not rely on any lexicon during the segmentation process, thereby avoiding the high maintenance cost and the difficulty of extracting new terms. It also improves on the time-consuming cross-matching that N-gram segmentation needs to identify new terms. Finally, this research uses news webpages from udn.com as testing data and produces the corresponding terms. The precision is 86.02% when compared with the original annotations, and in human judgment 85% of subjects agreed that the related annotations corresponded to the image. In performance testing, the average processing time is 0.006 seconds per news item, so the method performs well in both precision and speed.
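A lexicon-free, statistics-driven segmentation can be illustrated by repeatedly merging frequent adjacent tokens. The toy sketch below is not the Iterative Merging Chinese Segmentation algorithm itself; the corpus, the merging criterion (raw pair frequency) and the thresholds are all assumptions for illustration.

    from collections import Counter

    def iterative_merge(sentences, min_count=2, max_rounds=5):
        # Toy lexicon-free segmentation: start from single characters and repeatedly
        # merge the most frequent adjacent pair of tokens (a crude stand-in for the
        # statistics used in the thesis; thresholds are arbitrary).
        segmented = [list(s) for s in sentences]
        for _ in range(max_rounds):
            pairs = Counter()
            for toks in segmented:
                pairs.update(zip(toks, toks[1:]))
            if not pairs:
                break
            (a, b), count = pairs.most_common(1)[0]
            if count < min_count:
                break
            merged = a + b
            for toks in segmented:
                i, out = 0, []
                while i < len(toks):
                    if i + 1 < len(toks) and toks[i] == a and toks[i + 1] == b:
                        out.append(merged); i += 2
                    else:
                        out.append(toks[i]); i += 1
                toks[:] = out
        return segmented

    corpus = ["雲林科技大學在雲林", "雲林科技大學資訊管理系", "資訊管理系在雲林科技大學"]
    print(iterative_merge(corpus))
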
Styles APA, Harvard, Vancouver, ISO, etc.
35

劉潤身. « Development of a concept construction system for web-based learning that utilizes annotation techniques ». Thesis, 2008. http://ndltd.ncl.edu.tw/handle/02707735811980330977.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Wang, Jin-Yao, et 王勁堯. « Automatic Image Tagging and Annotation Based on Web Mining - A Case Study on Travel ». Thesis, 2011. http://ndltd.ncl.edu.tw/handle/72759209086980510871.

Texte intégral
Résumé :
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
ROC academic year 99
With the growth of Web 2.0 and the rapid development of mobile devices and applications, users are accustomed to taking photos with their smartphones on a trip and sharing them on blogs, photo albums or social network sites. Users must obviously spend time organizing and annotating photos, collecting related information, writing the text of their blogs, and finally designing the page style of their blogs or albums. Considering how users apply their smartphones on a trip, an intelligent cloud system can recommend tags related to the photo the user has just taken. After the trip, the user synchronizes the smartphone with a PC, and the cloud system automatically organizes the photo sets and collects annotations and related information for them. The user merely selects the desired photo sets and clicks on suitable words, titles, descriptions and texts for the sets; the system then automatically generates the blog page for revision. Based on these motivations, three subsystems, a Tag Recommendation System (TRS), an Annotation Recommendation System (ARS), and a Blog Template Generator (BTG), are proposed to realize the Travel Blog Generator (TBG) system. Association mining is employed to improve the effectiveness of the recommendations. Several experiments were designed to verify the feasibility and performance of the system. Finally, a User Experience (UX) test was performed; the system obtained a satisfaction rate of about 80%, and the UX test also showed that users can publish a rich-text travel blog within 13 minutes on average using the system.
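Association-based tag recommendation can be pictured as mining tag co-occurrence from past photos and ranking candidate tags by rule confidence. The sketch below uses invented data and a deliberately simple confidence measure; it is not the mining method implemented in the TRS.

    from collections import Counter
    from itertools import permutations

    # Past photo tag sets (toy data); the thesis mines real travel photos and blogs.
    history = [
        {"taipei101", "taipei", "nightview"},
        {"taipei101", "taipei", "shopping"},
        {"sunmoonlake", "nantou", "boat"},
        {"taipei", "nightmarket", "food"},
    ]

    # Count single tags and ordered tag pairs to estimate confidence(a -> b).
    single, pair = Counter(), Counter()
    for tags in history:
        single.update(tags)
        pair.update(permutations(tags, 2))

    def recommend(current_tags, top_n=3, min_conf=0.3):
        scores = Counter()
        for a in current_tags:
            for (x, b), c in pair.items():
                if x == a and b not in current_tags:
                    conf = c / single[a]           # confidence of the rule a -> b
                    if conf >= min_conf:
                        scores[b] = max(scores[b], conf)
        return scores.most_common(top_n)

    print(recommend({"taipei101"}))   # e.g. [('taipei', 1.0), ('nightview', 0.5), ...]
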
Styles APA, Harvard, Vancouver, ISO, etc.
37

Chen, Yu-Ting, et 陳毓婷. « The Effects of Web-based Inquiry-based Learning with Collaborative Reading Annotation Support on Information Literacy Instruction ». Thesis, 2018. http://ndltd.ncl.edu.tw/handle/hxq7q2.

Texte intégral
Résumé :
Master's thesis
National Chengchi University
E-learning Master's Program in Library and Information Studies
ROC academic year 106
Past studies have suggested that a lack of basic digital literacy and acuteness reduces Taiwanese students' ability to filter information when facing the vast amount of information on the Internet. As a result, establishing a mechanism for selecting and assessing information, and cultivating digital reading ability and information literacy, have been hot topics in recent years. By combining the Reading Knowledge Collaborative Annotation Tool (CAT) with web-based inquiry-based learning, this study developed the "Web-based Inquiry-based Learning Model with the Collaborative Annotation Tool", hoping to innovate information literacy instruction and find new ways to effectively improve students' information search capabilities. A quasi-experimental method was adopted, and 50 fifth-graders from two classes in an elementary school in New Taipei City were selected as research subjects for collaborative inquiry-based learning on the theme of "Internet Information Assessment and Judgment". Twenty-five students from one class were randomly assigned to the experimental group using the "Web-based Inquiry-based Learning Model with the Collaborative Annotation Tool", while 25 students from the other class were randomly assigned to the control group using the "Web-based Inquiry-based Learning Model with the Discussion Board Tool". With prior knowledge and cognitive style as background variables, the study explored the effects of the two learning models on students' learning effectiveness, cognitive load, technology acceptance, and learning satisfaction. The results show that, compared with the discussion board model, the collaborative annotation model produced much higher learning effectiveness for students with middle and low prior knowledge and for field-independent students, and neither model imposed excessive cognitive load during the learning process. As for technology acceptance and learning satisfaction, students with low prior knowledge considered the collaborative annotation model more helpful than the discussion board model and showed significantly higher learning satisfaction. Finally, based on these results, the study suggests using the advantages of the tool to develop a series of promotion courses and extending critical thinking learning to teacher instruction, and recommends long-term, in-depth studies of interactive course behavior in inquiry-based learning, transfer of learning, and related topics, to provide new directions for research on promoting information literacy instruction.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Jennwei, Kuo, et 郭振維. « A Web-Based Bioinformatics Tool for the Functional Genomics - Annotation of Gene Sequences in Databases ». Thesis, 1998. http://ndltd.ncl.edu.tw/handle/80280186606328714940.

Texte intégral
Résumé :
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information Engineering
ROC academic year 86
Information technology plays an important role in the development of genetics, modern biology and genome research. Furthermore, with the progress of the Human Genome Project and the advance of high-throughput sequencing technology, bioinformatics has become a new field with vast amounts of genetic information, and bioinformatic tools are indispensable for searching, analyzing and translating this information. In this thesis, an automated software tool package was designed and implemented for the CR technology (a molecular biology methodology) to integrate gene sequences acquired from different sources in different formats. The tool enables users to establish a CR database, to search databases automatically, and to profile individual DNA sequences. We employed Internet technology and Web browsers as the interface, implementing the software tools as Java applets and CGI programs in Java, so that users can access the databases and manipulate data over the Internet.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Wang, Sheng-Ren, et 王聖仁. « Web-based Summary Writing Learning Environment via the Model of Integrating Concept Mapping and Sharing Annotation ». Thesis, 2006. http://ndltd.ncl.edu.tw/handle/16309342212500880796.

Texte intégral
Résumé :
Master's thesis
National Taipei University of Education
Graduate Institute of Educational Communication and Technology
ROC academic year 94
To address the difficulties learners have with summary writing in a web-based learning environment, this study proposes a web-based summary writing model that integrates concept mapping, annotation, and CSCL (computer-supported collaborative learning). Summary writing benefits learners' reading comprehension, recall, and recognition of a text's main idea, but it is still difficult for some learners. Take the judgment of importance, for example: when reading a longer or more complicated text, many learners cannot determine what should be deleted and what should be kept in the summary. This study treats the difficulty as two parts: the judgment of importance and cognitive overload. The study therefore constructs and applies a web-based summary writing learning environment that integrates concept mapping and shared annotation, using concept mapping as a scaffold for learners to grasp the main idea of the text. When learners have problems with concept mapping in this environment, peer collaborative shared annotation is applied. Finally, learners view the completed concept map as a writing frame and then proceed with summary writing. The concept mapping in this study is fault-detecting: it helps learners reduce cognitive load, recognize the main idea, and discover main ideas they had missed. When learners make an annotation in the text, they also help the other learners in the environment. In conclusion, the study offers one way of thinking about, and one practical design for, computer-based summary writing.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Chen, Yung-Fu, et 陳永富. « A Study of Developing a Web-based Document Annotation System for Mobile Devices and Evaluating Its Suitability ». Thesis, 2019. http://ndltd.ncl.edu.tw/handle/s542f5.

Texte intégral
Résumé :
Master's thesis
University of Taipei
In-service Master's Program, Department of Computer Science
ROC academic year 107
Annotation is part of document processing, and it can help readers clarify the key points of a text; annotated content shared among peers is therefore effective in promoting learning achievement. Some annotation systems have been developed for educational use; nevertheless, those systems have practical limitations, such as running only on a specific operating system, depending on Flash, lacking support for mobile devices, or not offering documents ready for annotation. As a result, this study developed an annotation system for different computing devices without those limitations. The study adopted an agile software development model and new web technologies, including RWD, PHP and HTML5, so the system can be used on any mobile device with a mainstream web browser. The system offers many features, including multiple annotation types, automatic document type conversion, automatic comparison of annotated content between learners and experts, peer assessment and review, and school-based management of annotation activities in different courses. To investigate the educational suitability of the system, the study adopted the technology acceptance model (TAM) for system evaluation, with a questionnaire covering six facets: suitability of functions and interface, ease of use, usefulness, interest, willingness to use, and instructional effectiveness. ICT experts and experienced instructors evaluated the system, and a short-term learning experiment with annotation activities was conducted in a statistics course at a public university, after which the participants also evaluated the system. The results revealed that most experts and learners gave highly positive appraisals of the system's functions and of its future educational applications. Keywords: mobile device, annotation, TAM, RWD, system evaluation
Styles APA, Harvard, Vancouver, ISO, etc.
41

« XML-based Personal Web Annotations ». 2002. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0009-0112200611343142.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

曾閔棋. « XML-based Personal Web Annotations ». Thesis, 2002. http://ndltd.ncl.edu.tw/handle/19095939536360031493.

Texte intégral
Résumé :
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
ROC academic year 90
Annotating paper documents is one of the most common activities in reading; annotating Web documents, however, is not as straightforward. The first obstacle comes from the fact that both the content and the layout of a Web page may change frequently. The second obstacle is that, until recently, most annotation tools focused on collaborative rather than user-centric annotation: users could not make any personalization or extension locally, so extending or adapting the annotation functionality was system-wide. In this thesis, we present an open and extensible annotation tool called WebPAT. It addresses three issues: extensibility, openness, and user-centricity. To achieve the extensibility and openness goals, XML is used as the core technology for describing annotations and JavaScript is used for open function design. In addition, WebPAT supports user-centric functionality that can be achieved locally without interacting with remote servers. To date, a preliminary prototype has been implemented on the basis of Microsoft Internet Explorer. Though WebPAT currently lacks some fancy drawing facilities, it demonstrates an open annotation architecture that is highly extensible for future development.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Matos, Teresa Carla de Canha e. « Visualizing and Interacting with 360º Web-based Videos using Dynamic Annotations ». Master's thesis, 2018. https://repositorio-aberto.up.pt/handle/10216/111069.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Matos, Teresa Carla de Canha e. « Visualizing and Interacting with 360º Web-based Videos using Dynamic Annotations ». Dissertação, 2018. https://repositorio-aberto.up.pt/handle/10216/111069.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

Khalili, Ali. « A Semantics-based User Interface Model for Content Annotation, Authoring and Exploration ». Doctoral thesis, 2014. https://ul.qucosa.de/id/qucosa%3A13126.

Texte intégral
Résumé :
The Semantic Web and Linked Data movements with the aim of creating, publishing and interconnecting machine readable information have gained traction in the last years. However, the majority of information still is contained in and exchanged using unstructured documents, such as Web pages, text documents, images and videos. This can also not be expected to change, since text, images and videos are the natural way in which humans interact with information. Semantic structuring of content on the other hand provides a wide range of advantages compared to unstructured information. Semantically-enriched documents facilitate information search and retrieval, presentation, integration, reusability, interoperability and personalization. Looking at the life-cycle of semantic content on the Web of Data, we see quite some progress on the backend side in storing structured content or for linking data and schemata. Nevertheless, the currently least developed aspect of the semantic content life-cycle is from our point of view the user-friendly manual and semi-automatic creation of rich semantic content. In this thesis, we propose a semantics-based user interface model, which aims to reduce the complexity of underlying technologies for semantic enrichment of content by Web users. By surveying existing tools and approaches for semantic content authoring, we extracted a set of guidelines for designing efficient and effective semantic authoring user interfaces. We applied these guidelines to devise a semantics-based user interface model called WYSIWYM (What You See Is What You Mean) which enables integrated authoring, visualization and exploration of unstructured and (semi-)structured content. To assess the applicability of our proposed WYSIWYM model, we incorporated the model into four real-world use cases comprising two general and two domain-specific applications. These use cases address four aspects of the WYSIWYM implementation: 1) Its integration into existing user interfaces, 2) Utilizing it for lightweight text analytics to incentivize users, 3) Dealing with crowdsourcing of semi-structured e-learning content, 4) Incorporating it for authoring of semantic medical prescriptions.
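Binding semantic annotations to text spans, which is the core of the WYSIWYM idea described above, can be sketched as wrapping annotated spans with machine-readable attributes. The snippet below is a toy illustration using RDFa-like attributes and placeholder resource URIs; it is not the WYSIWYM implementation described in the thesis.

    import html

    def annotate_spans(text, annotations):
        # Wrap each annotated span in a <span> carrying RDFa-like attributes, a toy
        # version of enriching unstructured text with machine-readable semantics.
        out, cursor = [], 0
        for start, end, type_uri, resource in sorted(annotations):
            out.append(html.escape(text[cursor:start]))
            out.append(f'<span typeof="{type_uri}" resource="{resource}">'
                       f'{html.escape(text[start:end])}</span>')
            cursor = end
        out.append(html.escape(text[cursor:]))
        return "".join(out)

    sentence = "Aspirin may interact with warfarin."
    anns = [
        (0, 7,   "http://schema.org/Drug", "http://example.org/drug/aspirin"),
        (26, 34, "http://schema.org/Drug", "http://example.org/drug/warfarin"),
    ]
    print(annotate_spans(sentence, anns))
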
Styles APA, Harvard, Vancouver, ISO, etc.
46

Verhaart, Michael Henry. « The virtualMe : a knowledge acquisition framework : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy (Ph.D.) in Information Systems at Massey University, Palmerston North, New Zealand ». 2008. http://hdl.handle.net/10179/851.

Texte intégral
Résumé :
Throughout life, we continuously accumulate data, information and knowledge. The ability to recall much of this accumulated knowledge commonly deteriorates with time, though some forms part of what is referred to as tacit knowledge. In the context of education, students access and interact with a teacher’s knowledge in order to create their own, and may have their own data, information and knowledge that could be added to teacher’s knowledge for everyone’s benefit. The realization that students can contribute to enhancing personal knowledge is an important cornerstone in developing a mentor (teacher, tutor and facilitator) focused knowledge system. The research presented in this thesis discusses an integrated framework that manages an individual’s personal data, information and knowledge and enables it to be enhanced by others, in the context of a blended teaching and learning environment. Existing related models, structures, systems and current practices are discussed. The core outcomes of this thesis include: • the virtualMe framework that can be utilized when developing Web based teaching and learning systems; • the sniplet content model that can be used as the basis for sharing information and knowledge; • an annotation framework used to manage knowledge acquisition; and • a multimedia object (MMO) model that: o allows for related media artefacts to be intuitively grouped in a logical collection; o includes a meta-data schema that encompasses other metadata structures, and manages context and referencing; and o includes a model allowing component parts to be reaggregated if they are separated. The virtualMe framework provides the ability to retain context while transferring the content from one person to another and from one place to another. The framework retains the content’s original context and then allows the receiver to customise the content and metadata so that the content becomes that person’s knowledge. A mechanism has been created for such contextual transfer of content (context retained by the metadata).
Styles APA, Harvard, Vancouver, ISO, etc.
47

Yen, Yi-Ching, et 顏怡青. « The Effects of Annotations of Illustrations and Simplified Text on College Students’ Reading Comprehension in the Web-based Learning Environment ». Thesis, 2004. http://ndltd.ncl.edu.tw/handle/44315970136969274633.

Texte intégral
Résumé :
Master's thesis
Da-Yeh University
Graduate Institute of Applied Foreign Languages
ROC academic year 95
Researchers in computer-assisted language learning have suggested that integrating text and pictures can create an authentic and interactive environment for learning languages; however, how well non-English-major college students comprehend a reading passage through multimedia annotations deserves more attention. The purpose of the present study was to investigate the effectiveness of different annotation modes in a web-based learning environment, and to examine the participants' attitudes and perspectives toward the different multimedia annotation modes. The participants were 120 non-English-major freshmen at a university in central Taiwan, assigned to four groups. The first group was the control group, whose participants read a text in the web-based environment without any annotation. The participants in Group Two read the passage with the assistance of ten simplified text annotations, those in Group Three with ten text-related pictorial annotations, and those in Group Four with text-picture combination annotations to facilitate the reading process. A multiple-choice reading comprehension test, a recall protocol test, a questionnaire, and an interview were adopted in this study. The main findings are: (1) participants performed better on the multiple-choice reading comprehension test when reading with text-picture combination annotations in the web-based learning environment; (2) participants performed better on the recall protocol test when reading with text-picture combination annotations; and (3) most participants had positive attitudes and perspectives toward multimedia annotations for comprehending the reading text in the web-based environment. The results help explain the effectiveness of different kinds of multimedia annotations; the pedagogical implications for multimedia design are discussed further in the fifth chapter, and some suggestions for future study are also drawn.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Sung, Shan-chun, et 宋姍錞. « Exploring the effects of learning performance in web-based English activities - Using multimedia annotations tool to facilitate the English writing and speaking ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/07010088995784338434.

Texte intégral
Résumé :
Master's thesis
National Central University
Graduate Institute of Network Learning Technology
ROC academic year 95
An individual's capabilities in English listening, speaking, reading and writing are related to his or her own efforts and achievements, and all four skills should be taken seriously in the English classroom. We therefore designed seven online English activities oriented toward the productive language skills and incorporating playfulness, combining these two important concepts in the activity design. In this study we provided a Virtual Pen system (VPen) that integrates listening, speaking, reading and writing teaching tools and allows students to write and speak anywhere on the course website, using pictures in place of words to stimulate students' imagination in English learning. We hope to create a realistic environment for using English and to raise students' intrinsic motivation for learning it. The study considers four factors, perceived ease of use, perceived usefulness, perceived usefulness of the activity, and playfulness anxiety, as independent variables, analyzes their relationship with acceptance attitude, and examines the relationship between the seven online English activities with VPen and English learning achievement. The results improve our understanding of computer-assisted English learning: besides the tool itself, the material and the activity are important factors in whether a user will accept the product. In addition, writing and speaking, the productive language skills, are significantly related, and interactive exercises can elicit better writing and speaking performance.
Styles APA, Harvard, Vancouver, ISO, etc.