Selected scientific literature on the topic "Annotation de contraintes"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Annotation de contraintes".
Journal articles on the topic "Annotation de contraintes"
Mathon, Catherine, Gilles Boyé, and Anna Kupsc. "Modélisation des contraintes extralinguistiques et de leur impact sur les variations prosodiques : le cas du commentaire sportif télévisuel". SHS Web of Conferences 191 (2024): 09006. http://dx.doi.org/10.1051/shsconf/202419109006.
Feltgen, Quentin, Georgeta Cislaru, and Christophe Benzitoun. "Étude linguistique et statistique des unités de performance écrite : le cas de et". SHS Web of Conferences 138 (2022): 10001. http://dx.doi.org/10.1051/shsconf/202213810001.
Theses / dissertations on the topic "Annotation de contraintes"
Thuilier, Juliette. "Contraintes préférentielles et ordre des mots en français". PhD thesis, Université Paris-Diderot - Paris VII, 2012. http://tel.archives-ouvertes.fr/tel-00781228.
Schild, Erwan. "De l’importance de valoriser l’expertise humaine dans l’annotation : application à la modélisation de textes en intentions à l’aide d’un clustering interactif". Electronic Thesis or Diss., Université de Lorraine, 2024. http://www.theses.fr/2024LORR0024.
Usually, the task of annotation, used to train conversational assistants, relies on domain experts who understand the subject matter to be modelled. However, data annotation is known to be a challenging task due to its complexity and subjectivity. It therefore requires strong analytical skills to model the text as dialogue intentions. As a result, most annotation projects choose to train experts in analytical tasks to turn them into "super-experts". In this thesis, we decided instead to focus on the real knowledge of experts by proposing a new annotation method based on Interactive Clustering. This method involves human-machine cooperation, where the machine performs clustering to provide an initial learning base, and the expert annotates MUST-LINK or CANNOT-LINK constraints between the data to iteratively refine the proposed learning base. Such annotation has the advantage of being more intuitive, as experts can associate or differentiate data according to the similarity of their use cases, allowing them to handle the data as they would professionally on a daily basis. During our studies, we were able to show that this method significantly reduces the complexity of designing a learning base, notably by reducing the need to train the experts involved in an annotation project. We provide a technical implementation of this method (algorithms and an associated graphical interface), as well as a study of the optimal parameters for achieving a coherent learning base with minimal annotation. We have also conducted a cost study (both technical and human) to confirm that the use of such a method is realistic in an industrial context.
Finally, we provide a set of recommendations to help this method reach its full potential, including: (1) advice on framing the annotation strategy, (2) assistance in identifying and resolving differences of opinion between annotators, (3) cost-effectiveness indicators for each expert intervention, and (4) methods for analyzing the relevance of the learning base under construction. In conclusion, this thesis provides an innovative approach to designing a learning base for a conversational assistant, involving domain experts for their actual knowledge while requiring a minimum of analytical and technical skills. This work opens the way for more accessible methods for building such assistants.
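The MUST-LINK/CANNOT-LINK constraints described in this abstract are the classic pairwise constraints of semi-supervised clustering. As an illustration only (this sketch is not taken from the thesis; the function names are my own), a COP-KMeans-style assignment step can enforce such constraints by placing each point in the nearest centroid that keeps the partial assignment consistent:

```python
def violates(point, cluster, assignment, must_link, cannot_link):
    # True if placing `point` in `cluster` breaks a pairwise constraint,
    # given the partial `assignment` of already-placed points.
    for a, b in must_link:
        other = b if a == point else (a if b == point else None)
        if other is not None and other in assignment and assignment[other] != cluster:
            return True
    for a, b in cannot_link:
        other = b if a == point else (a if b == point else None)
        if other is not None and assignment.get(other) == cluster:
            return True
    return False

def constrained_assign(data, centroids, must_link, cannot_link):
    # One COP-KMeans-style assignment pass: each point goes to the
    # nearest centroid that does not violate any expert constraint.
    assignment = {}
    for i, x in enumerate(data):
        ranked = sorted(
            range(len(centroids)),
            key=lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, centroids[c])),
        )
        for c in ranked:
            if not violates(i, c, assignment, must_link, cannot_link):
                assignment[i] = c
                break
        else:
            raise ValueError(f"no feasible cluster for point {i}")
    return assignment
```

Must-link pairs end up in the same cluster and cannot-link pairs in different ones; a full interactive-clustering loop would re-estimate the centroids and ask the expert for new constraints after each pass.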
Guillaumin, Matthieu. "Données multimodales pour l'analyse d'image". PhD thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM048.
This dissertation delves into the use of textual metadata for image understanding. We seek to exploit this additional textual information as weak supervision to improve the learning of recognition models. There is a recent and growing interest in methods that exploit such data because they can potentially alleviate the need for manual annotation, which is a costly and time-consuming process. We focus on two types of visual data with associated textual information. First, we exploit news images that come with descriptive captions to address several face-related tasks, including face verification, which is the task of deciding whether two images depict the same individual, and face naming, the problem of associating faces in a data set with their correct names. Second, we consider data consisting of images with user tags. We explore models for automatically predicting tags for new images, i.e., image auto-annotation, which can also be used for keyword-based image search. We also study a multimodal semi-supervised learning scenario for image categorisation. In this setting, the tags are assumed to be present in both labelled and unlabelled training data, while they are absent from the test data. Our work builds on the observation that most of these tasks can be solved if perfectly adequate similarity measures are used. We therefore introduce novel approaches that involve metric learning, nearest-neighbour models and graph-based methods to learn, from the visual and textual data, task-specific similarities. For faces, our similarities focus on the identities of the individuals while, for images, they address more general semantic visual concepts. Experimentally, our approaches achieve state-of-the-art results on several standard and challenging data sets. On both types of data, we clearly show that learning using additional textual information improves the performance of visual recognition systems.
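The abstract mentions nearest-neighbour models for image auto-annotation, i.e. predicting tags for a new image from its most similar training images. As a hedged illustration (the function and the inverse-distance weighting below are a generic baseline, not the models of the thesis), candidate tags can be scored by distance-weighted votes of the k nearest neighbours:

```python
import math
from collections import Counter

def knn_tag_scores(query, neighbors, k=3):
    # neighbors: list of (feature_vector, tag_list) pairs.
    # Score each tag by the summed inverse-distance weight over the
    # k training images closest to `query` in feature space.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    ranked = sorted(neighbors, key=lambda nb: dist(query, nb[0]))[:k]
    scores = Counter()
    for feats, tags in ranked:
        weight = 1.0 / (1e-9 + dist(query, feats))  # closer images vote more
        for tag in tags:
            scores[tag] += weight
    return scores
```

In a real system the distance would itself be learned (metric learning over visual and textual features), which is precisely where the approaches of the thesis improve on this plain Euclidean baseline.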
Guillaumin, Matthieu. "Données multimodales pour l'analyse d'image". PhD thesis, Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00522278/en/.