Dissertations / Theses on the topic 'Language spécifique au domaine'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Language spécifique au domaine.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Pradet, Quentin. "Annotation en rôles sémantiques du français en domaine spécifique." Sorbonne Paris Cité, 2015. https://hal.inria.fr/tel-01182711/document.
Full textIn this Natural Language Processing Ph.D. thesis, we aim to perform semantic role labeling on French domain-specific texts. This task first disambiguates the sense of predicates in a given text, then annotates their child chunks with semantic roles such as Agent, Patient or Destination. The task helps many applications in domains where annotated corpora exist, but is difficult to use otherwise. We first evaluate on the FrameNet corpus an existing method based on VerbNet, which explains why the method is domain-independent. We show that substantial improvements can be obtained. We first use syntactic information by handling the passive voice. Next, we use semantic information by taking advantage of the selectional restrictions present in VerbNet. To apply this method to French, we first translate lexical resources. We first translate the WordNet lexical database. Next, we translate the VerbNet lexicon, which is organized semantically using syntactic information. We obtain its translation, Verb∋Net, by reusing two French verb lexicons (the Lexique-Grammaire and Les Verbes Français) and by manually modifying and reorganizing the resulting lexicon. Finally, once those building blocks are in place, we evaluate the feasibility of semantic role labeling of French and English in three specific domains. We study the pros and cons of using VerbNet and Verb∋Net to annotate those domains before outlining our future work.
Machlouzarides-Shalit, Antonia. "Development of subject-specific representations of neuroanatomy via a domain-specific language." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG041.
Full textWithin the field of brain mapping, we identified the need for a tool grounded in detailed knowledge of the individual variability of sulci. In this thesis, we develop a new brain mapping tool called NeuroLang, which utilises the spatial geometry of the brain. We approached this challenge from two perspectives: first, we grounded our theory firmly in classical neuroanatomy; second, we designed and implemented methods for sulcus-specific queries in the domain-specific language NeuroLang. We tested our method on 52 subjects and evaluated the performance of NeuroLang for population and subject-specific representations of neuroanatomy. We then present our novel, data-driven hierarchical organisation of sulcal stability. To conclude, we summarise the implications of our method for the current field, as well as our overall contribution to the field of brain mapping.
Cremet, Françoise. "Étude de cas sur l'anglais des relations internationales dans le domaine du transport aérien : esquisse d'un enseignement spécifique." Paris 9, 1987. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1987PA090013.
Full textHow can we improve the effectiveness of our representatives at international meetings? The working language, English, is not their mother tongue. Can this disadvantage vis-à-vis the Anglo-Saxon delegates be overcome? Which teaching approaches might be suggested for this problem? The field of air transportation provides an excellent proving ground for resolving these issues. International by definition, it has given birth to a wide variety of organizations. Have airline companies tried to settle the problems met by their delegates taking part in the numerous meetings held under the auspices of these organizations? The teaching of general English is usually covered by vocational training. It would seem, however, that little has been done so far to deal with the specific needs of international meetings. The research presented here is based primarily on observations (surveys, recordings) made during the course of meetings dealing with various subjects and held at different hierarchical levels, within ICAO, IATA or more specialized organizations. Several methods have been used to construct pedagogical principles: situational analysis, survey analysis, computerized lexical analysis, comparisons between different communicative situations, application of the system of discourse analysis developed by Sinclair and Coulthard to the language of meetings, and notional-functional analysis based upon Wilkins' taxonomy. This approach has led to proposals for specific teaching materials oriented towards three main objectives: 1- to train non-native speakers to recognize and use formal speaking procedures; 2- to teach students how to present a paper in English and to demonstrate the specificity of the language used in conferences; 3- to make learners practise both procedure and language through exercises based on authentic material.
Such a course is geared to people at an intermediate or advanced level of general English and aims to help them acquire most of the socio-linguistic skills they need.
Vallejo, Paola. "Réutilisation de composants logiciels pour l'outillage de DSML dans le contexte des MPSoC." Thesis, Brest, 2015. http://www.theses.fr/2015BRES0101/document.
Full textDesigners of domain-specific modeling languages (DSMLs) must provide all the tooling of these languages. In many cases, the features to be developed already exist, but they apply to portions or variants of the DSML. One way to simplify the implementation of these features is to reuse existing functionalities. Reuse means that DSML data must be adapted to be valid with respect to the functionality to be reused. Once the adaptation is done and the data are placed in the context of the functionality, it can be reused. The result produced by the tool remains in the context of the tool, and it must be adapted to be placed back in the context of the DSML (reverse migration). In this setting, reuse makes sense only if the migration and the reverse migration are not too expensive. The main objective of this thesis is to provide a mechanism to integrate migration, reuse and reverse migration and to apply them efficiently. The main contribution is an approach that facilitates the reuse of existing functionalities by means of model migrations. This approach eases the production of the tooling for a DSML. It allows reversible migration between two semantically close DSMLs. The user is guided during the reuse process to quickly provide the tooling of his DSML. The approach has been formalised and applied to a DSML (Orcc) in the context of MPSoC.
Gani, Kahina. "Using timed automata formalism for modeling and analyzing home care plans." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22628/document.
Full textIn this thesis we are interested in the problems underlying the design and the management of home care plans. A home care plan defines the set of medical and/or social activities that are carried out day after day at a patient's home. Such a care plan is usually constructed through a complex process involving a comprehensive assessment of the patient's needs as well as his/her social and physical environment. Specification of home care plans is challenging for several reasons: home care plans are inherently unstructured processes which involve repetitive, but irregular, activities whose specification requires complex temporal expressions. These features make home care plans difficult to model using traditional process modeling technologies. First, we present a DSL (Domain-Specific Language) based approach tailored to express home care plans using high-level, user-oriented abstractions. The DSL enables us to propose a temporalities language to specify the temporalities of home care plan activities. Then, we describe how home care plans, formalized as timed automata, can be generated from these abstractions. We propose a three-step approach which consists in (i) mapping elementary temporal specifications to timed automata called pattern automata, (ii) combining pattern automata to build the activity automata using our composition algorithm, and (iii) constructing the global care plan automaton. The resulting care plan automaton encompasses all the allowed schedules of activities for a given patient. Finally, we show how verification and monitoring of the resulting care plan can be handled using existing techniques and tools, especially the UPPAAL model checker.
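The guard-and-reset idea behind the timed automata used in this work can be illustrated with a toy sketch. This is an invented, single-clock example, far simpler than the thesis's pattern automata and UPPAAL models: each transition carries an interval guard on the clock and an optional reset.

```python
# Toy timed automaton: one clock, transitions indexed by (state, action),
# each carrying a [lo, hi] guard on the clock and an optional reset.
# Transition table: {(state, action): (guard_lo, guard_hi, reset, next_state)}

def accepts(transitions, initial, finals, timed_word):
    """Check whether a timed word of (action, absolute_time) pairs is accepted.

    The clock measures time elapsed since its last reset (initially time 0).
    """
    state, clock_start = initial, 0.0
    for action, t in timed_word:
        key = (state, action)
        if key not in transitions:
            return False  # no transition for this action in this state
        lo, hi, reset, nxt = transitions[key]
        clock = t - clock_start
        if not (lo <= clock <= hi):
            return False  # clock guard violated
        if reset:
            clock_start = t
        state = nxt
    return state in finals

# Hypothetical care-plan fragment: a nurse visit must occur within
# one hour of a meal. The meal transition resets the clock.
ta = {
    ("idle", "meal"): (0.0, 24.0, True, "fed"),   # reset clock at meal time
    ("fed", "visit"): (0.0, 1.0, False, "done"),  # visit <= 1h after meal
}
print(accepts(ta, "idle", {"done"}, [("meal", 12.0), ("visit", 12.5)]))  # True
print(accepts(ta, "idle", {"done"}, [("meal", 12.0), ("visit", 14.0)]))  # False
```

A real care plan automaton composes many such patterns and is checked with a model checker rather than by direct simulation, but the acceptance condition is the same in spirit.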
Iovene, Valentin. "Answering meta-analytic questions on heterogeneous and uncertain neuroscientific data with probabilistic logic programming." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG099.
Full textThis thesis contributes to the development of a probabilistic logic programming language specific to the domain of cognitive neuroscience, coined NeuroLang, and presents some of its applications to the meta-analysis of the functional brain mapping literature. By relying on logic formalisms such as Datalog and their probabilistic extensions, we show how NeuroLang makes it possible to combine uncertain and heterogeneous data to formulate rich meta-analytic hypotheses. We encode the Neurosynth database into a NeuroLang program and formulate probabilistic logic queries resulting in term-association brain maps and coactivation brain maps similar to those obtained with existing tools, highlighting existing brain networks. We prove the correctness of our model by using the joint probability distribution defined by the Bayesian network translation of probabilistic logic programs, showing that queries lead to the same estimations as Neurosynth. Then, we show that modeling term-to-study associations probabilistically, based on term frequency-inverse document frequency (TF-IDF) measures, results in better accuracy on simulated data and better consistency on real data for two-term conjunctive queries on smaller sample sizes. Finally, we use NeuroLang to formulate and test concrete functional brain mapping hypotheses, reproducing past results. By solving segregation logic queries combining the Neurosynth database, topic models, and the data-driven functional atlas DiFuMo, we find supporting evidence of the existence of a heterogeneous organisation of the frontoparietal control network (FPCN), and supporting evidence that the subregion of the fusiform gyrus called the visual word form area (VWFA) is recruited in attentional tasks, on top of language-related cognitive tasks.
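As a reminder of the TF-IDF measure the thesis builds its term-to-study association probabilities on, here is a generic sketch (not code from NeuroLang; the toy corpus and names are invented): a term is weighted highly in a document when it is frequent there but rare across the collection.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for each term in each document.

    docs: list of token lists. Returns a list of {term: weight} dicts.
    TF is the relative frequency of the term within the document;
    IDF is log(N / df), where df counts documents containing the term.
    """
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # each document counts a term at most once
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

# Toy "studies" tagged with cognitive terms.
docs = [["memory", "attention"], ["memory", "language"], ["language", "syntax"]]
w = tf_idf(docs)
# "attention" occurs in only one study, so in the first study it is
# weighted higher than "memory", which occurs in two studies.
```

In the thesis these weights are not used directly as scores but as the basis of probabilistic facts, so the sketch only captures the underlying measure.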
Peres, Florent. "Réseaux de Petri temporels à inhibitions / permissions : application à la modélisation et vérification de systèmes de tâches temps réel." Phd thesis, INSA de Toulouse, 2010. http://tel.archives-ouvertes.fr/tel-00462521.
Full text
Gratien, Jean-Marc. "A DSEL in C++ for lowest-order methods for diffusive problem on general meshes." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM018/document.
Full textIndustrial simulation software has to manage: the complexity of the underlying physical models, usually expressed in terms of a PDE system completed with algebraic closure laws; the complexity of the numerical methods used to solve the PDE systems; and finally the complexity of the low-level computer science services required to have efficient software on modern hardware. Nowadays, this complexity management is a key issue for the development of scientific software. Some frameworks already offer a number of advanced tools to deal with the complexity related to parallelism in a transparent way. However, these frameworks often provide only partial answers to the problem, as they deal only with hardware complexity and low-level numerical complexity such as linear algebra. High-level complexity related to discretization methods and physical models lacks tools to help physicists develop complex applications. New paradigms for scientific software must be developed to help them seamlessly handle the different levels of complexity so that they can focus on their specific domain. Generative programming, component engineering and domain-specific languages (either DSLs or DSELs) are key technologies that make the development of complex applications easier for physicists, hiding the complexity of numerical methods and low-level computer science services. These paradigms make it possible to write code in a high-level expressive language while taking advantage of the efficiency of generated code for low-level services close to hardware specificities. In the domain of numerical algorithms for solving partial differential equations, their application has so far been limited to Finite Element (FE) methods, for which a unified mathematical framework has existed for a long time. Such DSLs have been developed for finite element or Galerkin methods in projects like Freefem++, Getdp, Getfem++, Sundance, Feel++ and Fenics.
A new consistent unified mathematical framework has recently emerged and allows a unified description of a large family of lowest-order methods. As in FE methods, this framework allows the design of a high-level language inspired by the mathematical notation, which can help physicists implement their applications by writing the mathematical formulation at a high level. We propose to develop a language based on that framework, embedded in C++. Our work relies on a mathematical framework that enables us to describe a wide family of lowest-order methods, including multiscale methods based on them. We propose a DSEL developed on top of the Arcane platform, based on the concepts presented in the unified mathematical framework and on the Feel++ DSEL. The DSEL is implemented with Niebler's Boost.Proto library, a powerful framework for building a DSEL in C++. We have extended the computational framework to multiscale methods and focus on the capability of our approach to handle complex methods. Our approach is also extended to the runtime system layer, providing an abstraction that enables our DSEL to generate efficient code for heterogeneous architectures. We validate the design of this layer by benchmarking multiscale methods. These methods provide a great amount of independent computations and are therefore the kind of algorithm that can efficiently take advantage of new hybrid hardware technology. Finally, we benchmark various complex applications and study the performance of their implementations with our DSEL.
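The core mechanism of such an embedded DSL, building an expression tree from ordinary operator syntax so that a later pass can evaluate or compile it, can be sketched as follows. This is shown in Python for brevity and is only an analogy: the thesis itself relies on C++ expression templates via Boost.Proto.

```python
# Deferred expression trees via operator overloading: writing the
# "mathematical formulation" builds a tree, not a value; a generator
# or evaluator then traverses that tree.

class Expr:
    def __add__(self, other): return BinOp("+", self, wrap(other))
    def __mul__(self, other): return BinOp("*", self, wrap(other))
    def __rmul__(self, other): return BinOp("*", wrap(other), self)

class Var(Expr):
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]
    def __repr__(self): return self.name

class Const(Expr):
    def __init__(self, value): self.value = value
    def eval(self, env): return self.value
    def __repr__(self): return repr(self.value)

class BinOp(Expr):
    def __init__(self, op, lhs, rhs): self.op, self.lhs, self.rhs = op, lhs, rhs
    def eval(self, env):
        a, b = self.lhs.eval(env), self.rhs.eval(env)
        return a + b if self.op == "+" else a * b
    def __repr__(self): return f"({self.lhs} {self.op} {self.rhs})"

def wrap(x):
    """Lift plain numbers into the expression tree."""
    return x if isinstance(x, Expr) else Const(x)

u, v = Var("u"), Var("v")
e = 2 * u + v * v          # builds a tree, nothing is computed yet
print(e)                   # ((2 * u) + (v * v))
print(e.eval({"u": 3, "v": 4}))  # 22
```

In C++, expression templates capture the same tree at compile time, which is what lets the generated code stay efficient while the source reads like the mathematical formulation.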
Bourgeois, Florent. "Système de Mesure Mobile Adaptif Qualifié." Thesis, Mulhouse, 2018. http://www.theses.fr/2018MULH8953/document.
Full textMobile devices offer measuring capabilities through embedded or connected sensors, and they are more and more used in measuring processes. These processes are critical: the performed measurements must be reliable, because they may be used in rigorous contexts. Despite a real demand, there are relatively few applications assisting users with measuring processes that use those sensors. Such an assistant should propose methods to visualise and execute measuring procedures, while using communication functions to handle connected sensors or to generate reports. This scarcity of applications arises from the knowledge required to define correct measuring procedures. That knowledge comes from metrology and measurement theory and is rarely found in software development teams. Moreover, every user has specific measuring activities depending on his field of work. That implies many quality application developments, which could require expert certification. These premises lead to the research question the presented work answers: what approach enables the design of applications suitable for specific measurement procedures, considering that the measurement procedures can be configured by the final user? The presented work proposes a platform for the development of measuring assistant applications. The platform ensures the conformity of measuring processes without involving metrology experts. It is built upon metrology, model-driven engineering and first-order logic concepts. A study of metrology shows the need for expert evaluation of an application's measuring process. This evaluation encompasses terms and rules that ensure the integrity and coherence of the process. A conceptual model of the metrology domain is proposed. That model is then employed in the development process of applications. It is encoded into a first-order logic knowledge scheme of the metrology concepts. That scheme makes it possible to verify that metrology constraints hold in a given measuring process.
The verification is performed by confronting measuring processes with the knowledge scheme in the form of requests. Those requests are described with a request language proposed by the scheme. Measuring assistant applications must propose to the user a measuring process that sequences measuring activities. This implies describing a measuring process, and also defining interactive interfaces and a sequencing mechanism. An application editor is proposed. That editor uses a domain-specific language dedicated to the description of measuring assistant applications. The language is built upon concepts, formalisms and tools proposed by the metamodeling environment Diagrammatic Predicate Framework (DPF). The language encompasses syntactical constraints that prevent construction errors at the software level while reducing the semantic gap between the software architect using it and a potential metrology expert. Mobile platforms then need to execute a behaviour conforming to the one described in the editor. An implementation modelling language is proposed. This language describes measuring procedures as sequences of activities. Activities measure, compute and present values. Quantities are all abstracted by numerical values, which eases their computation and the use of sensors. The implementation model is made up of software agents. A mobile application is also proposed, built upon a framework of agents, an agent network composer and a runtime system. The application is able to take an implementation model and build the corresponding agent network in order to propose a behaviour matching the end user's needs. This makes it possible to answer any user's needs, provided he can access the implementation model, without downloading several applications.
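The idea of confronting a measuring process with a knowledge scheme can be caricatured as follows. This is a deliberately tiny, hypothetical sketch with invented names: the thesis uses a full first-order logic scheme and a request language, whereas here a single constraint (every measurement step must use a declared unit) is checked directly.

```python
# One metrology-style constraint, checked against a measuring process
# represented as a list of steps. Step fields and units are invented
# for illustration only.

def violations(process, known_units):
    """Return the ids of measurement steps whose unit is not declared."""
    return [
        step["id"]
        for step in process
        if step["kind"] == "measure" and step["unit"] not in known_units
    ]

process = [
    {"id": "s1", "kind": "measure", "unit": "kg"},
    {"id": "s2", "kind": "compute", "unit": None},      # not a measurement
    {"id": "s3", "kind": "measure", "unit": "furlong"}, # undeclared unit
]
print(violations(process, {"kg", "m", "s"}))  # ['s3']
```

In the platform described above, such checks are not hand-coded one by one: they follow from the logic encoding of the metrology domain model, so the same scheme validates any user-configured procedure.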
Ridene, Youssef. "Ingéniérie dirigée par les modèles pour la gestion de la variabilité dans le test d'applications mobiles." Thesis, Pau, 2011. http://www.theses.fr/2011PAUU3010/document.
Full textMobile applications have increased substantially in volume with the emergence of smartphones. Ensuring high quality and a successful user experience is crucial to the success of such applications. Only an efficient test procedure allows developers to meet these requirements. In the context of embedded mobile applications, testing is costly and repetitive, mainly because of the large number of different mobile devices. In this thesis, we describe MATeL, a Domain-Specific Modeling Language (DSML) for designing test scenarios for mobile applications. Its abstract syntax, i.e. a metamodel and OCL constraints, enables the test designer to manipulate mobile application testing concepts such as the tester, the mobile device, or outcomes and results. It also enables him/her to enrich these scenarios with variability points in the spirit of software product line engineering, which can specify variations in the test according to the characteristics of one mobile device or a set of them. The concrete syntax of MATeL, inspired by UML sequence diagrams, and its Eclipse-based environment allow the user to easily develop scenarios. MATeL is built upon an industrial platform (a test bed) in order to be able to run scenarios on several different phones. The approach is illustrated in this thesis through use cases and experiments that led us to verify and validate our contribution.
Křikava, Filip. "Langage de modélisation spécifique au domaine pour les architectures logicielles auto-adaptatives." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00935083.
Full textMonthe, Djiadeu Valéry Marcial. "Développement des systèmes logiciels par transformation de modèles : application aux systèmes embarqués et à la robotique." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0113/document.
Full textWith the construction of increasingly complex robots, the growth of robotic software architectures and the explosion of an ever greater diversity of applications and robot missions, the design, development and integration of the software entities of robotic systems constitute a major problem for the robotics community. Indeed, robotic software architectures and software development platforms for robotics are numerous, and they depend on the type of robot (service, collaborative, agricultural, medical, etc.) and its usage mode (in a cage, outdoors, environments with obstacles, etc.). The maintenance effort for these platforms and their development cost are therefore considerable. Roboticists are thus asking themselves a fundamental question: how to reduce the development costs of robotic software systems, while increasing their quality and preserving the specificity and independence of each robotic system? This question induces several others: on the one hand, how to describe and encapsulate the various functions that the robot must provide, in the form of a set of interactive software entities? And on the other hand, how to give these software entities properties of modularity, portability, reusability, interoperability, etc.? In our opinion, one of the most promising answers to this question is to raise the level of abstraction in the definition of the software entities that make up robotic systems. To do this, we turn to model-driven engineering, specifically the design of Domain-Specific Modeling Languages (DSMLs). In this thesis, we first carry out a comparative study of the modeling languages and methods used in the development of embedded real-time systems in general. The objective of this first work is to see whether any of them can answer the aforementioned questions of the roboticists.
This study not only shows that these approaches are not adapted to the definition of robotic software architectures, but mainly results in a framework, which we propose, that helps to choose the method(s) and/or modeling language(s) best suited to the needs of the designer. Subsequently, we propose a DSML called Robotic Software Architecture Modeling Language (RsaML), for the definition of robotic software architectures with real-time properties. To do this, a metamodel is proposed, built from the concepts that roboticists are used to for defining their applications; it constitutes the abstract syntax of the language. Real-time properties are identified and included in the relevant concepts. Semantic rules of the robotics field are then defined as OCL constraints and integrated into the metamodel, to allow non-functional and real-time property checks to be performed on the constructed models. The Eclipse Modeling Framework has been used to implement an editor that supports the RsaML language. The rest of the work done in this thesis involved defining model transformations and then using them to implement generators. These generators make it possible, from a RsaML model, to produce its documentation and source code in the C language. These contributions are validated through a case study describing a scenario based on the Khepera III robot.
Koussaifi, Maroun. "Modélisation centrée utilisateur pour la configuration logicielle en environnement ambiant." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30212.
Full textAmbient intelligence aims to provide human users with applications and services that are personalized and adapted to the current situation. The ambient environment which surrounds the human consists of a set of connected objects and software components that are bricks used for the construction of applications by composition. The availability of these components can vary dynamically, in case of mobility for example, and their appearance or disappearance is usually unanticipated. Moreover, in these dynamic and open environments, the user's needs are neither stable nor always well defined. To build these applications and provide the user with "the right applications at the right time", our team explores an original approach called "opportunistic software composition": the idea is to build applications on the fly by assembling software components present in the environment at the time, without relying on explicit user needs or predefined application models. Here, it is the availability of the components that opportunistically triggers the on-the-fly building of applications. It is controlled by an intelligent system, called the opportunistic composition engine, which decides on the "right" compositions to be made without user input. In this way, applications "emerge" dynamically from the ambient environment, and emerging applications can be unexpected or unknown to the user. The user, at the center of the system, must be informed of these applications. On the one hand, he/she must be able to control them, i.e., accept or reject them, and, given the required skills, modify them or even build applications himself/herself by assembling software components present in the ambient environment. In these control tasks, the user must be assisted as much as possible. On the other hand, in order for the opportunistic composition engine to build relevant assemblies in the absence of explicit needs, it must receive information from the user.
In this thesis, we propose an approach based on Model-Driven Engineering (MDE) in order to put the user "at the center of the loop". The objective is to present the emerging applications to the user, to assist him in his interventions, and to extract useful feedback data to provide to the "intelligent" composition engine. Our solution is based on a metamodel for assembling software components, on different domain-specific languages (DSLs) that support application descriptions, and on a graphical editor for editing applications and capturing user feedback. Different model transformations are used to generate structural and semantic application descriptions for different users, from the application models built by the intelligent engine. In addition, the descriptions can easily be adjusted to a particular human by changing or adapting the DSL and the model transformations to the user's profile. Unlike the traditional use of MDE, where tools and techniques are used by engineers to develop software and generate code, the focus in our approach is on the end users. The entire solution has been implemented and works coupled with the engine. That is to say, our solution is able to intercept the application models built by the engine, transform them into presentable models that can be understood and modified by the user, and finally capture the user feedback and give it back to the engine to update its knowledge.
Touil, Amara. "Vers un langage de modélisation spécifique au domaine des systèmes de télécontrôle ubiquitaire." Brest, 2011. http://www.theses.fr/2011BRES2041.
Full textInformation and Communication Technologies (ICT) allow a widespread use of computers, intelligent systems, communication networks, etc. Potentially, we are able to access any communicating object and exchange information with it. In this context, which can be described as ubiquitous, we would be able to act remotely (telecontrol) on communicating objects. In this work we combine telecontrol and ubiquity to provide a framework for defining ubiquitous telecontrol systems and identifying some of their concerns in terms of modelling and analysis. To this end, we propose a Domain-Specific Modelling Language (DSML) for these systems and an approach for analysing their structure and behaviour. The DSML is built within the context of systems engineering by adopting the Model-Driven Engineering (MDE) paradigm. This method allowed us, first, to capitalize on knowledge and terminology in the field of ubiquitous telecontrol and, second, to develop an approach for structural and behavioural analysis and to test it on example systems. The dependability of ubiquitous telecontrol systems is part of their modelling and analysis. In the proposed DSML, dependability properties are integrated with the guide for start and stop modes and QoS (GEMMA-Q) in order to take into account the system's dynamicity and behaviour. This thesis also includes a methodology for building a library of reusable components according to the concepts defined for ubiquitous telecontrol.
Yang, Tong. "Constitution et exploitation d’une base de données pour l’enseignement/apprentissage des phrasèmes NAdj du domaine culinaire français auprès d’apprenants non-natifs." Thesis, Paris 3, 2019. http://www.theses.fr/2019PA030049.
Full textThis thesis project studies the teaching of French for Specific Purposes (FOS) to foreign cooks who come to work in French restaurants or who have chosen catering as a specialty. The objective of our research is therefore to teach culinary NAdj phrasemes to foreign A2-level learners. The teaching/learning of phraseology is required in specialty languages, and the high frequency of NAdj phrasemes caught our attention. Several questions are then addressed: where to find this specific lexicon? How to extract it? With which approach do we teach the selected phrasemes? To answer these questions, we built our own corpus, Cuisitext (written and oral), and then used NooJ to extract the NAdj phrasemes from the corpus. Finally, we have proposed three approaches to the use of corpora for the teaching/learning of NAdj phrasemes: the guided inductive approach, the deductive approach, and the pure inductive approach.
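The NAdj pattern the thesis extracts with NooJ can be approximated by a naive sketch over POS-tagged tokens. This is illustrative only, with an invented minimal tag set; real extraction also handles agreement, lemmatisation and longer patterns.

```python
from collections import Counter

def nadj_bigrams(tagged_tokens):
    """Collect Noun+Adjective bigrams from a POS-tagged token stream.

    tagged_tokens: list of (word, tag) pairs with tags "N" and "ADJ"
    (other tags break the pattern). French places most adjectives after
    the noun, hence the noun-then-adjective order.
    """
    counts = Counter()
    for (w1, t1), (w2, t2) in zip(tagged_tokens, tagged_tokens[1:]):
        if t1 == "N" and t2 == "ADJ":
            counts[f"{w1} {w2}"] += 1
    return counts

# Tiny hand-tagged culinary fragment (invented example).
tagged = [
    ("crème", "N"), ("fouettée", "ADJ"), ("et", "CONJ"),
    ("sauce", "N"), ("tomate", "N"), ("fraîche", "ADJ"),
]
counts = nadj_bigrams(tagged)
print(counts)  # Counter({'crème fouettée': 1, 'tomate fraîche': 1})
```

Frequency counts over a full corpus like Cuisitext would then let a teacher rank candidate phrasemes before manual selection.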
Tazine, Camal. "Modélisation statistique du langage pour un domaine spécifique en reconnaissance automatique de la parole." Avignon, 2005. http://www.theses.fr/2005AVIG0137.
Full textSaboia, Aragao Karoline. "Études structure-fonction de lectines (Discl et Discll) de Dictyostelium discoideum." Grenoble 1, 2008. http://www.theses.fr/2008GRE10299.
Full textLectins are sugar-binding proteins which recognise saccharides with high specificity, in a reversible manner. The amoeba Dictyostelium discoideum is a eukaryotic model organism used in the study of many biological processes such as phagocytosis, cell death and cell differentiation. When the amoebas adopt a cohesive stage upon starvation, they produce Discoidin I and II, two proteins able to bind galactose and N-acetyl-galactosamine. DiscI and DiscII present a sequence identity of 48%, form a trimer in solution, and exhibit a similar domain organisation. The N-terminal domain, or discoidin domain (DS), is involved in cellular adhesion processes, while the C-terminal domain, or lectin domain, belongs to the H-type family. The lectin domain presents similarities with the snail lectin HPA. This GalNAc-specific lectin is used extensively in histopathology as a marker of tumour cells with strong metastatic potential. The research developed in this thesis concerns the structural and functional study of the interaction of DiscI and DiscII with the sugars Gal/GalNAc, following a multidisciplinary approach. These two lectins were cloned and expressed in recombinant form in Escherichia coli before being purified. Their specificity and affinity were determined using printed arrays and titration microcalorimetry. The determination of their 3D structures in native or complexed form by X-ray crystallography has allowed the analysis of the interactions at the molecular level. The comparison of the binding sites of the Discoidins and HPA has allowed a better understanding of their recognition mechanisms, specificity and affinity at the molecular level.
Ouraiba, El Amine. "Scénarisation pédagogique pour des EIAH ouverts : Une approche dirigée par les modèles et spécifique au domaine métier." Phd thesis, Université du Maine, 2012. http://tel.archives-ouvertes.fr/tel-00790696.
Full text
Guillou, Pierre. "Compilation efficace d'applications de traitement d'images pour processeurs manycore." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM022/document.
Full text
Many mobile devices now integrate optical sensors; smartphones, tablets and drones foreshadow an impending Internet of Things (IoT). New image processing applications (filters, compression, augmented reality) take advantage of these sensors under strong constraints of speed and energy efficiency. Modern architectures, such as manycore processors or GPUs, offer good performance, but are hard to program. This thesis examines the adequacy between the image processing domain and these modern architectures: reconciling programmability, portability and performance is still a challenge today. Typical image processing applications feature strong inherent parallelism, which can potentially be exploited by the various levels of hardware parallelism inside current architectures. We focus here on image processing based on mathematical morphology, and validate our approach using the manycore architecture of the Kalray MPPA processor. We first show that integrated compilation chains, composed of compilers, libraries and run-time systems, make it possible to exploit various hardware accelerators from high-level languages. We especially focus on manycore processors, through several programming models: OpenMP, a data-flow language, OpenCL, and message passing. Three of the four compilation chains have been developed and are available to applications written in domain-specific languages (DSLs) embedded in C or Python. They greatly improve the portability of applications, which can now be executed on a large panel of target architectures. These compilation chains then allowed us to perform comparative experiments on a set of seven image processing applications. We show that the MPPA processor is on average more energy-efficient than competing hardware accelerators, especially with the data-flow programming model. We also show that compiling a DSL embedded in Python down to a DSL embedded in C improves both the portability and the performance of Python-written applications. Our compilation chains thus form a complete software environment dedicated to image processing application development, able to efficiently target several hardware architectures, among them the MPPA processor, and offering interfaces in high-level languages.
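To make the application domain concrete: the mathematical morphology kernels such compilation chains target reduce to simple neighbourhood operations. Below is a minimal, illustrative NumPy sketch of binary erosion; the thesis itself relies on dedicated DSLs and the MPPA toolchain, not on this code.

```python
import numpy as np

def erode(img, se):
    """Binary erosion: a pixel stays 1 only if the structuring element,
    centered on it, fits entirely inside the foreground."""
    h, w = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k, constant_values=0)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * k + 1, x:x + 2 * k + 1]
            out[y, x] = np.all(window[se == 1] == 1)
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 1                      # a 3x3 white square
se = np.ones((3, 3), dtype=np.uint8)   # full 3x3 structuring element
print(erode(img, se))                  # only the centre pixel survives
```

A production version would of course vectorize or tile these loops, which is precisely the kind of transformation the compilation chains above automate.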
Campagne, Sébastien. "Déterminants structuraux de la reconnaissance spécifique de l'ADN par le domaine THAP de hTHAP1 et implications dans la dystonie DYT6." Toulouse 3, 2010. http://thesesups.ups-tlse.fr/901/.
Full text
The THAP protein family is characterized by the presence of a protein motif designated the THAP domain. The THAP domain of hTHAP1 defines a new C2CH zinc-coordination motif responsible for the DNA binding essential to the transcription factor function of hTHAP1, a protein implicated in the regulation of cell proliferation. At the structural level, the THAP domain is characterized by an atypical fold including C2CH zinc coordination, and the long insertion between the two pairs of zinc ligands adopts a βαβ fold. Its specific DNA-binding mode has been structurally characterized using Nuclear Magnetic Resonance. This domain binds to the 5'-TXXGGGCA-3' consensus DNA target, establishing base-specific contacts through its N-terminal loop, its β-sheet, its loop L3 and its loop L4. The solution structure of the THAP-DNA complex explains how the THAP domain binds specifically to DNA, the first step of the transcriptional regulation mediated by hTHAP1. Recently, mutations in the hTHAP1 gene have been genetically linked to the development of dystonia DYT6, a neurodegenerative disease. Some of these mutations disrupt the function of the THAP domain of hTHAP1, highlighting that the DNA-binding activity of hTHAP1 is essential to maintain motor neuronal pathways.
Gutmann, Bernard. "Etude d'un nouveau type de RNase P spécifique des eucaryotes chez Arabidopsis thaliana." Thesis, Strasbourg, 2012. http://www.theses.fr/2012STRAJ133/document.
Full text
RNase P is involved in the maturation of tRNA precursors by cleaving their 5' leader sequences. Until recently this enzyme was considered to occur universally as a ribonucleoprotein complex. The breakthrough from the existing model came with the identification of protein-only RNase P in human mitochondria as well as in plants. These proteins, which we called PRORP (PROteinaceous RNase P), have three paralogs in Arabidopsis, localised in organelles (PRORP1) and nuclei (PRORP2 and 3). We have shown that PRORP proteins have RNase P activity in vitro as single proteins. In vivo, the functions of PRORP proteins are essential, and the functions of PRORP2 and 3 are redundant. PRORP down-regulation mutants show that PRORP proteins have a variety of other substrates and that RNase MRP, another ribonucleoprotein, is not involved in tRNA maturation. Our results indicate that PRORP proteins are likely the only enzymes responsible for RNase P activity in Arabidopsis.
Chevallier, Sylvie. "Relations structure-fonction de l'oligopeptidase proline-spécifique (EC 3.4.21.26) de Flavobacterium meningosepticum." Grenoble 1, 1993. http://www.theses.fr/1993GRE10076.
Full text
Choi, Mi-Kyung. "La cotraduction : domaine littéraire coréen-français." Thesis, Paris 3, 2014. http://www.theses.fr/2014PA030025/document.
Full text
Our aim was to study the issue of literary translation when it is carried out into the B language of the translator (here from Korean into French) with the assistance of a co-translator who is a native speaker of the target language, and to identify the conditions of its success. In the field of literary translation from Korean into French, this kind of teamwork aims at overcoming the lack of French translators able to produce literary-standard translations by themselves. When the translation is done by a Korean translator working into his B language, a thorough work of editing and rewriting is required by the formal demands of literary writing. An increasing number of Korean novels and short stories are translated by such dual teams and published in France. We analyze the different steps of the translation process, from understanding to reformulation, in the light of the Interpretive Theory of Translation, trying to show why its key concept of "de-verbalization" makes co-translation justifiable, translation being defined as an operation from text to text, not from language to language. The first part of our research is devoted to the theoretical aspects of our study (definition of some key notions, mainly from the Interpretive Theory of Translation) and its practical aspects (a general survey of Korean literary works translated into French). In the second part, we analyze a large number of our translation samples with the objective of showing the kind of dialogue the translator and her co-translator maintain, underlining the nature of the guidance the former offers to the latter and the contribution of the latter. Our conclusion is that, thanks to this dual-team process, often wrongly considered a lesser evil, quality literary translation can be produced, provided that the method implemented takes into account the conditions we describe here.
Neifar, Wafa. "Méthodes d'acquisition terminologique en arabe : Application au domaine médical." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS085/document.
Full text
The goal of this thesis is to reduce the lack of available resources and NLP tools for the Arabic language in specialised domains by proposing methods for extracting terms from texts in Modern Standard Arabic. In this context, we first constructed an English-Arabic parallel corpus in a specific domain: a set of medical texts produced by the US National Library of Medicine (NLM). Thereafter, we proposed terminological acquisition methods, to extract terms or acquire relations between these terms, for Arabic, based on: i) the adaptation of an existing terminology extractor for French or English, ii) the transliteration of English terms into Arabic characters, and iii) cross-lingual transfer. Applied at the terminological level, transfer aims to implement a process of term extraction or relation acquisition between terms in the texts of a source language (here, French or English) and then to transfer the extracted information to texts in a target language (in this case, Modern Standard Arabic), thereby identifying the same type of terminological information. We evaluated the monolingual and bilingual term lists obtained in our experiments using a transparent, direct and semi-automatic method: the extracted term candidates are confronted with a reference terminology before being validated manually. This evaluation follows a protocol that we proposed.
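As an illustration of point ii), a naive transliteration step can be sketched as greedy longest-match rewriting over a Latin-to-Arabic character table. The rule set below is invented and heavily simplified; the thesis builds a far more careful mapping.

```python
# Toy Latin-to-Arabic transliteration by greedy longest-match rewriting.
# The rule table is a hypothetical, simplified stand-in for a real one.
RULES = [  # multi-letter patterns must come first
    ("sh", "ش"), ("th", "ث"),
    ("a", "ا"), ("b", "ب"), ("d", "د"), ("i", "ي"), ("k", "ك"),
    ("l", "ل"), ("m", "م"), ("n", "ن"), ("o", "و"), ("r", "ر"),
    ("s", "س"), ("t", "ت"), ("u", "و"),
]

def transliterate(term):
    """Rewrite a lowercase Latin-script term letter group by letter group."""
    out, i = [], 0
    while i < len(term):
        for src, dst in RULES:
            if term.startswith(src, i):
                out.append(dst)
                i += len(src)
                break
        else:
            i += 1  # skip characters with no rule
    return "".join(out)

print(transliterate("insulin"))  # one Arabic letter per Latin letter here
```

A real system would also handle vowel omission, hamza placement and context-dependent spellings, which is exactly why the thesis treats transliteration as a method in its own right.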
Girerd, Nicolas. "Apport des méthodes d'inférence causale et de la modélisation additive de survie pour l’évaluation d’un effet thérapeutique dans le domaine spécifique de la cardiologie et de la chirurgie cardiaque." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10329.
Full text
Clinical trials are difficult to conduct in the field of cardiac surgery. As a consequence, relatively few clinical trials are available in this field and most of the available data is observational. Yet surgical techniques are very dependent on patients' characteristics, which translates into a large amount of attribution bias. We studied the impact of the type of surgical revascularization (complete or incomplete) on long-term survival using 4:1 propensity-score matching. We identified a significant interaction on a relative scale between the type of revascularization and age with regard to long-term all-cause mortality: the treatment effect was weaker in older patients. We then studied the interaction between age and treatment effect on both an additive scale and a relative scale using additive and multiplicative hazard models. We identified a significant sub-multiplicative interaction, whereas we did not identify a noteworthy additive interaction. This result indicates that the treatment effect is additive in this illustration. We also measured risk differences from the multiplicative hazard model in several age subsets. The risk differences extracted from the multiplicative model were similar across age subsets, which confirmed a constant treatment effect on an additive scale despite a weaker treatment effect on a multiplicative scale in older patients. Our work encourages the evaluation of treatment effects on both an additive and a relative scale in propensity-score-based analyses of observational data.
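The 4-to-1 propensity-score matching mentioned above can be sketched as greedy nearest-neighbour selection on pre-computed scores. This is a simplified stand-in: the thesis does not publish its exact algorithm, and real analyses estimate the scores with a regression model rather than taking them as given.

```python
def match_4_to_1(treated, controls, caliper=0.05):
    """treated/controls: lists of (patient_id, propensity_score).
    Returns {treated_id: [control_ids]} using greedy nearest-neighbour
    matching without replacement, within an absolute caliper."""
    pool = dict(controls)               # id -> score, controls still available
    matches = {}
    # match patients with the highest scores first
    for tid, ps in sorted(treated, key=lambda t: -t[1]):
        chosen = []
        for _ in range(4):
            if not pool:
                break
            cid = min(pool, key=lambda c: abs(pool[c] - ps))
            if abs(pool[cid] - ps) > caliper:
                break                   # no acceptable control left
            chosen.append(cid)
            del pool[cid]               # matching without replacement
        matches[tid] = chosen
    return matches

treated = [("T1", 0.62), ("T2", 0.35)]
controls = [("C1", 0.60), ("C2", 0.63), ("C3", 0.36), ("C4", 0.33),
            ("C5", 0.61), ("C6", 0.58), ("C7", 0.34), ("C8", 0.37)]
print(match_4_to_1(treated, controls))
```

Matching without replacement and the caliper are the standard safeguards against reusing controls and against pairing dissimilar patients.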
Mercadal, Julien. "Approche langage au développement logiciel : application au domaine des systèmes d’informatique ubiquitaire." Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14315/document.
Full text
The sheer size and complexity of today's software systems pose challenges for both their programming and verification, making it critical to raise the level of abstraction of software development beyond the code. However, the use of high-level descriptions in the development process still remains rudimentary, improving and guiding this process marginally. This thesis proposes a new approach to making software development simpler and safer. This approach is based on the use of domain-specific languages and a tight coupling between a specification and architecture layer and an implementation layer. It consists of describing functional and non-functional aspects of a software system at a high level of abstraction, using the specification and architecture layer. These high-level descriptions are then analyzed and used to customize the implementation layer, greatly facilitating the programming and verification of the software system. We have validated our approach in the domain of pervasive computing systems development. From a complete domain analysis, we have introduced two domain-specific languages, Pantaxou and Pantagruel, dedicated to the orchestration of networked smart devices.
Alawadhi, Hamid Ali Motea. "La difficulté en traduction approche théorique et pratique dans le domaine de la traduction français-arabe : thèse pour obtenir le grade de docteur de l'Université de Paris III, discipline, traductologie, présentée et soutenue publiquement /." Villeneuve d'Ascq : Presses universitaires du Septentrion, 2001. http://books.google.com/books?id=iiZcAAAAMAAJ.
Full text
Jin, Gan. "Système de traduction automatique français-chinois dans le domaine de la sécurité globale." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA1006.
Full text
In this thesis, in addition to our research results for a French-Chinese machine translation system, we present the theoretical contributions of the SyGULAC theory and of the micro-systemic theory with its calculations, as well as the methodologies developed, aimed at a secure and reliable application in the context of machine translation. The application covers critical safety areas such as aerospace, medicine and civil security. After presenting the state of the art in the field of machine translation in China and France, we explain the reasons for choosing the micro-systemic theory and the SyGULAC theory. We then describe the problems encountered during our research. Ambiguity, the major obstacle to the understandability and translatability of a text, is present at all language levels: syntactic, morphological, lexical, nominal and verbal. The identification of the units of a sentence is also a preliminary step towards global understanding, whether for human beings or for a translation system. We present an inventory of the divergences between the French and the Chinese language with a view to building a machine translation system. We observe the verbal, nominal and vocabulary structure levels in order to understand their interconnections and interactions. We also define the obstacles to this research, from a theoretical point of view but also by studying our corpus. The chosen formalism starts from a thorough study of the language used in security protocols. A language is suitable for automatic processing only if it is formalized. Therefore, an analysis of several French/Chinese bilingual corpora, as well as monolingual ones, from civil security agencies was conducted. The goal is to identify and present the linguistic characteristics (lexical, syntactic...) which characterize the language of security in general, and to identify all the syntactic structures used by this language. After presenting the formalization of our system, we describe the recognition, transfer and generation processes.
Rivière, Gwladys. "Étude par RMN de la créatine kinase musculaire et d'un nouveau domaine de liaison à l'ubiquitine dans la protéine STAM2." Phd thesis, Université Claude Bernard - Lyon I, 2011. http://tel.archives-ouvertes.fr/tel-00861128.
Full text
Gosme, Julien. "Énumération exhaustive et détection spécifique des analogies : étude pour les modèles de langue et la traduction automatique." Phd thesis, Université de Caen, 2012. http://tel.archives-ouvertes.fr/tel-00700559.
Full text
Lebeaupin, Benoit. "Vers un langage de haut niveau pour une ingénierie des exigences agile dans le domaine des systèmes embarqués avioniques." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC078/document.
Full text
Systems are becoming more and more complex because, to stay competitive, the companies which design them seek to add more and more functionalities. This competition also implies that the design of systems needs to be reactive, so that a system is able to evolve during its conception and follow the needs of the market. This capacity to design complex systems flexibly is hindered or even prevented by various elements, one of them being the system specifications. In particular, the use of natural language to specify systems has several drawbacks. First, natural language is inherently ambiguous, which can lead to non-conformity if the customer and the supplier of a system disagree on the meaning of its specification. Additionally, natural language is hard to process automatically: for example, it is hard to determine, using only a computer program, that two natural-language requirements contradict each other. However, natural language is currently unavoidable in the specifications we studied, because it remains very practical and is the most common way to communicate. We aim to complete these natural-language requirements with elements which make them less ambiguous and facilitate automatic processing. These elements can be parts of models (architectural models, for example) and serve to define the vocabulary and the syntax of the requirements. We experimented with the proposed principles on real industrial specifications and developed a software prototype for testing a specification enhanced with these vocabulary and syntax elements.
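One simple automatic check that a defined requirements vocabulary enables can be sketched as follows. This is a hedged illustration, not the thesis prototype: the controlled vocabulary and the example sentences below are invented.

```python
import re

# Flag requirement sentences that use words outside a controlled vocabulary,
# the kind of check a coupled vocabulary/syntax layer makes possible.
VOCABULARY = {"the", "system", "shall", "send", "a", "status", "message",
              "every", "second", "to", "ground", "station"}

def check_requirement(sentence):
    """Return the set of words not in the controlled vocabulary."""
    words = re.findall(r"[a-z]+", sentence.lower())
    return {w for w in words if w not in VOCABULARY}

ok = "The system shall send a status message every second."
bad = "The system shall quickly send a status message."
print(check_requirement(ok))    # set() -> fully within the vocabulary
print(check_requirement(bad))   # {'quickly'} -> flagged for review
```

Flagged words like the adverb here are exactly the kind of imprecise wording (how quickly?) that such a layer forces the specifier to replace with a measurable term.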
Rigaut, Olivier. "Nouveaux concepts pour les matrices de bolomètres destinées à l’exploration de l’Univers dans le domaine millimétrique." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112076/document.
Full text
Since its discovery in 1964, the study of the Cosmic Microwave Background (CMB) at millimetre wavelengths has become a major stake of experimental research in cosmology, in particular its temperature anisotropies, measured for the first time by the COBE satellite and then more finely by the WMAP experiment and the Planck satellite. The predicted polarization anisotropies of the Cosmic Microwave Background are currently a privileged field of CMB experimentation. Indeed, proving the existence of the B-modes of polarization, the unique signature of primordial gravitational waves, is the object of intensive experimental research, notably by means of the BICEP2 instrument, which in 2014 reported a detection corresponding to a tensor-to-scalar ratio r = 0.2. The QUBIC project is part of these experiments intended to reveal the B-modes of polarization, thanks to an instrument based on bolometric interferometry and the development of bolometer arrays, calling for a broad field of investigation including, among other things, solid-state physics, low-temperature physics and cosmology. The thesis presented here lies within this framework, with the objective of producing a bolometer array whose performance and optimization should provide the sensitivity necessary for the observation of the B-mode polarization. The various experimental techniques acquired at the CSNSM in Orsay indeed make it possible to consider the optimization of the key elements of the bolometer array, relying in particular on the amorphous NbxSi1-x alloy for the making of an optimized thermal sensor, and on an innovative material, a titanium-vanadium alloy, for the development of an efficient superconducting radiation absorber whose low specific heat should make it possible to reach a detector response time of about ten milliseconds, the value necessary for an effective reading of the Cosmic Microwave Background signal. This thesis develops the physical principles necessary to this field of investigation and proposes the various elements of a bolometer, combining an optimized thermal sensor with a low-specific-heat radiation absorber, making it possible to consider an optimized bolometer array within the framework of the QUBIC project, whose observation campaign was planned for 2015 at Dome C in Antarctica.
Rigaut, Olivier. "Nouveaux concepts pour les matrices de bolomètres destinées à l'exploration de l'Univers dans le domaine millimétrique." Phd thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01023011.
Full text
Chanier, Thierry. "Compréhension de textes dans un domaine technique : le système Actes ; application des grammaires d'unification et de la théorie du discours." Paris 13, 1989. http://www.theses.fr/1989PA132015.
Full text
Kouniali, Samy Habib. "Désambigüisation de groupes nominaux complexes en conformité avec les connaissances du domaine : application a la traduction automatique." Vandoeuvre-les-Nancy, INPL, 1993. http://www.theses.fr/1993INPL096N.
Full text
Foucault, Nicolas. "Questions-Réponses en domaine ouvert : sélection pertinente de documents en fonction du contexte de la question." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00944622.
Full text
Benamar, Alexandra. "Évaluation et adaptation de plongements lexicaux au domaine à travers l'exploitation de connaissances syntaxiques et sémantiques." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG035.
Full text
Word embeddings have established themselves as the most popular representation in NLP. To achieve good performance, they require training on large data sets, mainly from the general domain, and are frequently fine-tuned for specialty data. However, fine-tuning is a resource-intensive practice and its effectiveness is controversial. In this thesis, we evaluate the use of word embedding models on specialty corpora and show that the proximity between the vocabularies of the training and application data plays a major role in the representation of out-of-vocabulary terms. We observe that this is mainly due to the initial tokenization of words, and we propose a measure to compute the impact of the tokenization of words on their representation. To solve this problem, we propose two methods for injecting linguistic knowledge into representations generated by Transformers: one at the data level and the other at the model level. Our research demonstrates that adding syntactic and semantic context can improve the application of self-supervised models to specialty domains, both for vocabulary representation and for NLP tasks. The proposed methods can be used for any language with linguistic information or external knowledge available. The code used for the experiments has been published to facilitate reproducibility, and measures have been taken to limit the environmental impact by reducing the number of experiments.
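The tokenization effect described above can be illustrated with a toy greedy longest-match subword tokenizer. The vocabulary and the medical term below are chosen for illustration; real models such as BERT use learned WordPiece/BPE vocabularies of tens of thousands of pieces.

```python
def greedy_subword(word, vocab):
    """Greedy longest-match segmentation, WordPiece-style: split a word
    into the longest vocabulary pieces, marking continuations with '##'."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest candidate first
            piece = word[i:j] if i == 0 else "##" + word[i:j]
            if piece in vocab:
                pieces.append(piece)
                i = j
                break
        else:
            return ["[UNK]"]                # no vocabulary piece matches
    return pieces

# A general-domain vocabulary: a common word survives whole...
vocab = {"heart", "rate", "card", "##io", "##gram", "##myo", "##pathy"}
print(greedy_subword("heart", vocab))           # ['heart']
# ...but a specialty term is shattered into several pieces:
print(greedy_subword("cardiomyopathy", vocab))  # ['card', '##io', '##myo', '##pathy']
```

A domain term represented as four loosely related fragments starts from a much poorer embedding than a word with its own vector, which is the representation gap the thesis measures and then mitigates.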
Atcero, Milburga. "Les technologies de l'information et de la communication (TIC) et le développement de l'expression orale en français sur objectif spécifique (FOS) dans le contexte ougandais." Thesis, Paris 3, 2013. http://www.theses.fr/2013PA030049/document.
Full text
The initial objective of this study, which lies within the field of language teaching, and especially of the role of Information and Communication Technology (ICT), is to investigate the potential of ICT for triggering oral language development in learners of French for Specific Purposes (FSP) at Makerere University Business School. This study adopts action research focusing on the role of technologies deployed in oral technical presentations of macro-tasks, such as the use of MS Office. The aim is to enhance French learners' skills in French for Specific Purposes. The social constructivist, or cultural, hypotheses posit that social interaction plays an important role in L2 acquisition (French in this case) in FSP classes, here through a hybrid environment based on macro-tasks performed at a distance and presented in class. The action research project involved identifying and putting into place a learning system for learners of FSP who experienced several difficulties with their spoken French. It further posits that learners construct the new language through socially mediated interaction. Subsequently, this involved establishing whether the use of PowerPoint presentations (PPP) would engage learners of FSP in collective actions both in the classroom and in real-world activities. In addition, there was an attempt to establish whether relevant WebQuest materials were likely to enhance oral language acquisition and prompt learners to take responsibility for their own learning.
Richa, Elie. "Qualification des générateurs de code source dans le domaine de l'avionique : le test automatisé des chaines de transformation de modèles." Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0082/document.
Full text
In the avionics industry, Automatic Code Generators (ACGs) are increasingly used to produce parts of the embedded software. Since the generated code is part of critical software, safety standards require a thorough verification of the ACG, called qualification. In this thesis, in collaboration with AdaCore, we seek to reduce the cost of testing activities through automatic and effective methods. The first part of the thesis addresses the topic of unit testing, which ensures exhaustiveness but is difficult to achieve for ACGs. We propose a method that guarantees the same level of exhaustiveness using only integration tests, which are easier to carry out. First, we propose a formalization, in the theory of Algebraic Graph Transformation, of the ATL language in which the ACG is defined. We then define a translation of postconditions expressing the exhaustiveness of unit testing into equivalent preconditions that ultimately support the production of integration tests providing the same level of exhaustiveness. Finally, we optimize the complex algorithm of our analysis using simplification strategies that we assess experimentally. The second part of the work addresses the oracles of ACG tests, i.e. the means of validating the code generated by the ACG during a test. We propose a language for the specification of textual constraints able to automatically check the validity of the generated code. This approach is experimentally deployed at AdaCore for a Simulink® to Ada/C ACG called QGen.
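The oracle idea in the second part can be sketched as a checker that matches textual constraints against generated code. Plain regexes and the two sample constraints below are hypothetical stand-ins; the constraint language defined in the thesis is richer.

```python
import re

# Hedged sketch of a test oracle for generated code: each constraint is a
# (description, pattern, must_match) triple checked against the code text.
CONSTRAINTS = [
    ("entry point is generated", re.compile(r"\bint\s+main\s*\("), True),
    ("no dynamic allocation",    re.compile(r"\bmalloc\s*\("),     False),
]

def check_generated_code(code):
    """Return the list of violated constraint descriptions."""
    violations = []
    for description, pattern, must_match in CONSTRAINTS:
        found = bool(pattern.search(code))
        if found != must_match:
            violations.append(description)
    return violations

generated = "int main(void) { int buf[16]; return 0; }"
print(check_generated_code(generated))  # [] -> all constraints hold
```

The value of such an oracle is that the same constraint file validates the output of every test run, so adding a test case costs nothing on the verification side.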
Richa, Elie. "Qualification des générateurs de code source dans le domaine de l'avionique : le test automatisé des chaines de transformation de modèles." Electronic Thesis or Diss., Paris, ENST, 2015. http://www.theses.fr/2015ENST0082.
Full text
Rivière, Gwladys. "Étude par RMN de la créatine kinase musculaire et d’un nouveau domaine de liaison à l’ubiquitine dans la protéine STAM2." Thesis, Lyon 1, 2011. http://www.theses.fr/2011LYO10285/document.
Full text
In this thesis, we study two proteins by NMR: the muscular creatine kinase (CK-MM), and the SH3 domain of the STAM2 protein in its free and complexed forms. CK-MM is an active homodimeric enzyme which belongs to the guanidino phosphagen kinase family and is involved in energy processes in the cell. The aim of this study is to elucidate the functional mechanism of CK-MM. For this purpose, we measured R1 and R2 relaxation rates and performed chemical shift perturbation experiments on substrate-free CK-MM, the CK-MM/MgADP complex, and the inhibitory ternary complex CK-MM/MgADP-creatine-nitrate. The experiments show that the 320s loop, responsible for specific recognition of the substrates, displays fast dynamics in the absence of substrates (on the nanosecond-picosecond timescale) and slower dynamics in the presence of creatine, MgADP and nitrate ions. The binding of the substrates in the two active sites induces a significant conformational modification of CK-MM. The STAM2 protein consists of two ubiquitin-binding domains (VHS and UIM) and an SH3 domain which interacts with the deubiquitinating enzymes AMSH and UBPY. This protein is involved in the lysosomal degradation pathway. The aim of this study is the characterization of the interaction between the SH3 domain of STAM2 and ubiquitin. For this, we recorded R1, R2 and nOe relaxation experiments and chemical shift perturbation experiments on the UIM-SH3/ubiquitin complex. These experiments show that the SH3 and UIM domains each interact with a single ubiquitin, with affinities of the order of a hundred micromolar. The interface between these UBDs and ubiquitin mainly involves hydrophobic and conserved amino acids.
Thomauske, Nathalie saskia. "Des constructions de "speechlessness" : une étude comparative Allemagne-France sur les rapports sociaux langagiers de pouvoir dans le domaine de l'éducation de la petite enfance." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCD050/document.
Full text
Germany and France face similar challenges concerning questions of immigration. Both countries are nation states in which the majority society is convinced that the people should be unified through speaking a common language. This conception of the nation state is nevertheless strongly contested by plurilingual people (of Color). The aim of the thesis is to analyze how discrimination against plurilingual children is constructed and legitimized in daily life in the domain of early childhood education. To this end, focus group discussions with practitioners and parents were conducted and analyzed following a constructionist "grounded theory" approach. Findings show, among other things, that practitioners do not know, or are insecure about, how to deal with children who do not speak the target language. Some of them react by expecting children to adapt and to learn the language on their own through "language submersion". The "Other" languages of the children and their parents are relegated to the private context, and their speakers are silenced in the ECEC setting. Other practitioners criticize these de facto language policies and describe how they contribute to supporting children in expressing themselves in their favorite language(s).
Hadjem, Abdelhamid. "Analyse de l' influence de la morphologie sur le SAR induit dans les tissus de tête d' enfant." Paris 6, 2005. http://www.theses.fr/2005PA066411.
Full text
Silly-Carette, Jessica. "Modélisation avancée de l'absorption des ondes électromagnétiques dans les tissus biologiques : schémas en temps, approches adjointe et stochastique." Paris 6, 2008. http://www.theses.fr/2008PA066368.
Full text
Ngwaba, Chidinma. "Les termes de la gynécologie obstétrique en igbo : enquête sur un domaine tabou dans une langue sans documents écrits." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE2107/document.
Full text
This study focuses on gynaecology-obstetrics terminology in Igbo. Our main objective is to take an inventory of Igbo terms in the area of gynaecology-obstetrics and classify them. This enables us to examine and evaluate the adequacy of Igbo terms in relation to English and French terms. A second objective is to set out the methodology used in Igbo term creation in the gynaecology-obstetrics domain. In our research we noticed that gaps exist in the gynaecology-obstetrics vocabulary of the Igbo language when compared with English and French. We tried to fill these gaps, thereby validating the idea that the Igbo language, like all other languages, is capable of naming concepts in any area. Our research specifically aims at collecting Igbo terms from the domain of gynaecology-obstetrics in a way that enables us to explain or give information on the method of collection of such terms. The method used in the terminological study of the gynaecology-obstetrics domain in Igbo should be suitable for studying a taboo domain of a language without written documents. Oral documentary research therefore became necessary: the Igbo terms were compiled by means of oral documentation research, using techniques that helped us bypass the hesitation or reluctance of many Igbo speakers to express themselves on our area of research. To constitute the nomenclature of the domain, we carried out fieldwork, involving observation of and interviews with Igbo speakers, namely traditional doctors, orthodox doctors, midwives (both traditional and orthodox), nurses, local chiefs and elderly persons. We interviewed 57 resource persons and experts: 20 doctors, 3 nurses and 10 midwives (for the orthodox medicine component), and 15 traditional doctors, 5 traditional midwives, 2 local chiefs and 2 elderly persons (for the traditional medicine component).
We were equally inspired by socioterminology as set out by Gaudin (2003, 2005) and by the research methodology outlined in Halaoui (1990, 1991), from which we borrowed the methodology for terminological research in African languages. Examining our fieldwork results, we noticed terminological gaps, which we tried to fill using proposals from the people we interviewed. The work also involved the creation of terms for concepts and objects not already named in Igbo. This naming drew principally on the method described in Diki-Kidiri (2008). An analysis of the process underlying each coinage is included. The result obtained is a clear indication that the Igbo language can be used to name new concepts. This work also proposes a trilingual English-French-Igbo glossary. The glossary covers such areas as: anatomy of the female pelvis and the external genitalia, anatomy of the internal female genital organs, anatomy of the male reproductive system, physiology of the reproductive system, development of the embryo, physiology and nutrition in pregnancy and lactation, foetal surveillance, labour, the newborn infant, infections of the reproductive organs, infections of the reproductive tract, sexually transmitted diseases, structural anomalies, cancers of the reproductive system, and disorders of the urinary system. Our work comprises three parts. Part 1, "The Igbo Language of Nigeria", consists of three chapters: Chapter 1, "Nigeria, a Land of Ethnic and Linguistic Diversity"; Chapter 2, "Description of the Igbo Language"; and Chapter 3, "Problems of Igbo Terminology". Part 2, "A Distinctive Terminological Domain: Medicine", is made up of two chapters: Chapter 4, "Sickness and Health among the Igbo", and Chapter 5, "Practicing Medicine in Nigeria". Part 3 comprises two chapters: Chapter 6, "Fieldwork", and Chapter 7, "Creating Terms in Igbo: the Gynaecology-Obstetrics Domain".
Nnyọcha anyị a dabere n’ihe gbasara amụmamụ maka ọmụmụ nwa na nwa ohụụ n’asụsụ igbo. Ebum n’obi anyị nke mbụ bụ ịchọpụta ma hazie aha dị iche iche e nwere n’asụsụ igbo gbasara ọmụmụ nwa na nwa ohụụ na ngalaba amụmamụ maka ọmụmụ nwa na nlekọta nwa ohụụ. Nke a ga-eme ka anyị nwalee aha ndịa e nwere n’asụsụ igbo na ngalaba amụmamụ maka ọmụmụ nwa na nlekọta nwa ohụụ na aha ndi e nwere na olu bekee m’obụ frenchi. Ebum n’obi anyị nke abụọ bụ ikwupụta otu anyị si nwete ma depụta aha gbasara ọmụmụ nwa na nlekọta nwa ohụụ n’asụsụ igbo. Anyị kwadoro usoro mkpụrụ edemede nke igbo izugbe. Mgbe anyi n’eme nnyocha a, anyị chọpụtara n’oghere dị n’asụsụ igbo n’ihe metutara mkpọpụta aha ihe. Nke a mere n’enwere ọtụtụ ihe ndi n’enweghị aha n’asụsụ igbo na ngalaba amụmamụ maka ọmụmụ nwa na nlekọta nwa ohụụ. Ihe ndia nwechara aha n’asụsụ ndi ọzọ. Anyị gbalịrị ịfachisi oghere ndia dị n’asụsụ igbo iji gosi n’asụsụ a bụ asụsụ igbo nwekwara ike ịkpọpụta aha ihe ndi ha aka akpọbeghị aha. Usoro anyị kwesiri ịgbaso mgbe anyị na-amụ gbasara mkpọ aha n’asụsụ igbo na ngalaba amụmamụ maka ọmụmụ nwa na nlekọta nwa ohụụ, kwesiri ka ọ bụrụ nke ga-adaba na ọmụmụ ihe gbasara asụsụ n’enweghị ihe ndeda gbasara ngalaba amụmamụ a na kwa ngalaba nwere ọtụtụ nsọ ala. Nke a mere oji dị mkpa na anyị gara mee nchọpụta n’obodo jụọ ajụjụ ọnụ iji mata aha ndi a n’agbanyeghị na ọ dịghịrị ndi mmadụ mfe ikwu maka ngalaba ihe ọmụmụ a. Ndi anyị gakwuru maka ajụjụ ọnụ a bụ ndi dibịa bekee, ndi nọọsụ, ndi dibịa ọdịnala, ndi ọghọ nwa, ndi nchịkọta obodo na ndi okenye. N’ihe niile, anyị na ihe dịka mmadụ 57 kparịtara ụka. Nke a gụnyere ndi ọkachamara. N’ime ha e nwere ndi dibịa bekee 20, ndi nọọsụ 3 na ndi ọghọ nwa bekee 10 n’otu akụkụ. N’akụkụ nke ọzọ, e nwere ndi dibịa ọdịnala 15, ndi ọghọ nwa ọdịnala 5, ndi nchịkọta obodo 2 na ndi okenye 2. Anyị dabekwara na sosioteminọlọjị nke Gaudin (2003, 2005) na kwa usoro Halaoui (1990, 1991).
Usoro a gbasara ịjụ ndi igbo ụfọdụ ajụjụ ọnụ na iso ha nọrọ mgbe ha na-arụ ọrụ. Nchọcha anyị gụnyekwara ịkpọpụta aha dị iche iche n’asụsụ igbo nke sistemu njiamụnwa nke nwoke na nwaanyị, aha gbasara nwa e bu n’afọ na nke nwa a mụrụ ọhụụ. Anyị gbasoro usoro Diki-Kidiri (2008) maka mkpọpụta aha. Anyị mekwara nkọwa iji gosipụta otu anyị si kpọọ aha ndịa. N’ikpe azụ anyị depụtara aha ndi niile anyị ji rụọ ọrụ na asụsụ bekee, frenchi na kwa igbo. Aha ndi anyị depụtara gbasara: Amụmamụ ọkpụkpụ ukwu nwaanyị na njiamụnwa, Amụmamụ ime njiamụnwa kenwaanyị, Amụmamụ ọganụ njiamụnwa kenwoke, Fiziọlọjị sistemu njiamụnwa, Ntolite nwa nọ n’afọ, Fiziọlọjị kenri na mmiriara n’afọ ime, Nledo nwa nọ n’afọ na kwa nwaọhụụ, Imeomume, Mbido ndụ nwaọhụụ, Ọrịa ọganụ njiamụnwa, Ọrịa nwaanyị, Nkwarụ, Kansa njiamụnwa na kwa Ọrịa akpamamịrị.
El, Boukkouri Hicham. "Domain adaptation of word embeddings through the exploitation of in-domain corpora and knowledge bases." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG086.
Full text
There are, at the basis of most NLP systems, numerical representations that enable the machine to process, interact with and, to some extent, understand human language. These “word embeddings” come in different flavours but can generally be categorised into two distinct groups: on one hand, static embeddings, which learn and assign a single definitive representation to each word; and on the other, contextual embeddings, which instead learn to generate word representations on the fly, according to the current context. In both cases, training these models requires large amounts of text. This often leads NLP practitioners to compile and merge texts from multiple sources, mixing different styles and domains (e.g. encyclopaedias, news articles, scientific articles, etc.), in order to produce corpora that are sufficiently large for training good representations. These so-called “general domain” corpora are today the basis on which most word embeddings are trained, greatly limiting their use in more specific areas. In fact, “specialized domains” like the medical domain usually manifest enough lexical, semantic and stylistic idiosyncrasies (e.g. use of acronyms and technical terms) that general-purpose word embeddings are unable to encode them effectively out of the box. In this thesis, we explore how different kinds of resources may be leveraged to train domain-specific representations or further specialise pre-existing ones. Specifically, we first investigate how in-domain corpora can be used for this purpose. In particular, we show that both corpus size and domain similarity play an important role in this process, and we propose a way to leverage a small corpus from the target domain to achieve improved results in low-resource settings. Then, we address the case of BERT-like models and observe that the general-domain vocabularies of these models may not be suited for specialized domains.
However, we show evidence that models trained using such vocabularies can be on par with fully specialized systems using in-domain vocabularies, which leads us to accept re-training general-domain models as an effective approach for constructing domain-specific systems. We also propose CharacterBERT, a variant of BERT that is able to produce word-level open-vocabulary representations by consulting a word's characters. We show evidence that this architecture leads to improved performance in the medical domain while being more robust to misspellings. Finally, we investigate how external resources in the form of knowledge bases may be leveraged to specialise existing representations. In this context, we propose a simple approach that consists in constructing dense representations of these knowledge bases and then combining these knowledge vectors with the target word embeddings. We generalise this approach and propose Knowledge Injection Modules, small neural layers that incorporate external representations into the hidden states of a Transformer-based model. Overall, we show that these approaches can lead to improved results; however, we intuit that the final performance ultimately depends on whether the knowledge relevant to the target task is available in the input resource. All in all, our work shows evidence that both in-domain corpora and knowledge bases may be used to construct better word embeddings for specialized domains. In order to facilitate future research on similar topics, we open-source our code and share pre-trained models whenever appropriate.
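The general idea of combining knowledge vectors with word embeddings, as described in this abstract, can be sketched in a few lines. The dimensions, the projection matrix `W`, and the additive combination are illustrative assumptions for the sketch, not the thesis's actual Knowledge Injection Module.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_knowledge(word_vecs, kg_vecs, W):
    """Add a projected knowledge vector to each word embedding.

    word_vecs: (n, d) static or contextual word embeddings
    kg_vecs:   (n, k) dense vectors for the KG concept linked to each word
    W:         (k, d) learned projection from KG space to embedding space
    """
    return word_vecs + kg_vecs @ W

# Toy dimensions: 4 tokens, 8-dim embeddings, 5-dim KG concept vectors.
h = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 5))
W = rng.normal(size=(5, 8)) * 0.1   # small init keeps the text signal dominant

h_injected = inject_knowledge(h, k, W)
print(h_injected.shape)  # (4, 8)
```

In a Transformer, such a layer would sit between hidden states rather than on input embeddings, with `W` learned jointly with the rest of the model.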
Coupat, Raphaël. "Méthodologie pour les études d’automatisation et la génération automatique de programmes Automates Programmables Industriels sûrs de fonctionnement. Application aux Equipements d’Alimentation des Lignes Électrifiées." Thesis, Reims, 2014. http://www.theses.fr/2014REIMS019/document.
Full text
The research project presented in this thesis was carried out in collaboration with the Engineering Department of the SNCF and the CReSTIC laboratory of the University of Reims Champagne-Ardenne. Its goal is to contribute to improving the control studies of the electrification projects carried out by design engineers. The project must meet human, economic and technical objectives expressed by the SNCF in the field of the Power Supply Equipment of Electrified Lines (EALE in French). To address these problems, a methodology for automation studies is proposed, integrating two research orientations. The first axis is the automatic generation of deliverables (code, documents, diagrams, etc.). It is based on standardization and modelling of the "work"; the MDD (Model Driven Development) and DSM (Domain Specific Modeling) approaches suggest solutions based on the use of "work templates". However, it is essential to generate quality deliverables and safe PLC (Programmable Logic Controller) code. The second research orientation therefore concerns safe control. Three control-synthesis approaches (Supervisory Control Theory (SCT), algebraic synthesis, and control by logical constraints) that can in principle reach these safety objectives are presented and discussed. The major advantage of control by logical constraints is that it separates the safety part (which is verified formally offline by model checking) from the functional part. It can be used with existing PLC programs, thus leaving the design engineers' working methodology unchanged.
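The principle of control by logical constraints, filtering the functional controller's proposed outputs through formally verified safety predicates before they reach the actuators, can be sketched as follows. The signal names and the interlock are invented for illustration and are not taken from the SNCF application.

```python
def apply_safety_constraints(outputs, constraints):
    """Force every output that would violate a safety constraint to its
    safe (de-energized) value before it is written to the actuators.

    outputs:     dict mapping output names to booleans, as computed by
                 the functional (non-safe) part of the PLC program
    constraints: list of predicates; each returns True when the proposed
                 output vector is safe
    """
    safe = dict(outputs)
    for name in safe:
        if not all(c(safe) for c in constraints):
            safe[name] = False  # fall back to the safe value for this output
    return safe

# Hypothetical interlock: two breakers must never be closed at the same time.
mutual_exclusion = lambda o: not (o["breaker_A"] and o["breaker_B"])

proposed = {"breaker_A": True, "breaker_B": True}
filtered = apply_safety_constraints(proposed, [mutual_exclusion])
print(filtered)  # {'breaker_A': False, 'breaker_B': True}
```

The appeal of the approach is visible even in this toy: the functional logic that computed `proposed` is untouched, while the safety filter alone guarantees that no constraint-violating output vector is ever emitted.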
Peixoto, Paul. "Ciblage de l'ADN par de molécules antitumorales et modulation de l'activité des partenaires protéiques." PhD thesis, Université du Droit et de la Santé - Lille II, 2008. http://tel.archives-ouvertes.fr/tel-00322954.
Full text
The DB derivatives selected for this study are molecules synthesized by Profs David Boykin and David Wilson (Atlanta) that bind to specific sequences in the minor groove of DNA. The diphenyl-furan compound DB75 binds as a monomer to AT-rich sequences, whereas the phenyl-furan-benzimidazole derivative DB293 recognizes the 5'-ATGA sequence as a dimer. This "sequence-specific" recognition could specifically target DNA-transcription factor interactions involved in gene regulation and cell proliferation.
With this in mind, our work was to determine the DNA-binding specificity of new molecules and to assess their cellular distribution, so as to be able, in a second step, to study the modulation of transcription factor DNA binding after the derivatives have bound.
We therefore first examined the cellular distribution of the DB compounds derived from DB75 and DB293 to determine whether they penetrate the cell efficiently and reach nuclear DNA. The results obtained confirm those published previously (Lansiaux et al., 2002a, 2002b) and show the weak impact of modifications of the polycyclic core on the distribution of DB compounds. Thus, diphenyl-furan compounds substituted on the furan ring penetrate the nucleus efficiently, whereas only certain substitutions on the phenyl groups prevent the molecule from entering the nucleus. The cellular consequences are surprising, since these latter compounds show very good cytotoxicity. We then studied the specificity and affinity of these derivatives for DNA, and thereby discovered new ligands of AT-rich sites and of ATGA sequences, which extended our knowledge of the structure/affinity relationships of these compounds.
To determine whether this family of ligands can selectively modulate the DNA-binding activity of transcription factors, a competitive screen of the activity of 54 transcription factors was carried out with DB293. For this we used an innovative approach based on the macroarray principle: TranSignal Protein/DNA Array I membranes. We demonstrated inhibition of the binding of Pit-1 and Brn-3, two POU-domain transcription factors whose consensus sites contain both an AT-rich site and the 5'-ATGA sequence. However, the mere presence of a 5'-ATGA site does not by itself predict inhibition of DNA-binding activity by DB293, since the transcription factor IRF-1, whose consensus site also contains a 5'-ATGA site, is not inhibited by DB293. We showed that DB293 binds as a dimer to the 5'-ATGA sequences of the Brn-3 and Pit-1 consensus sites but not to that of IRF-1.
In parallel, targeting transcription factors that are preferentially associated with a given cancer and that bind the same type of sequence as the DB compounds directed our study towards the PBX/HoxA9 transcription complex. This protein complex binds a nucleotide sequence that is both AT-rich and contains an ATGA site. Members of this complex are overexpressed or translocated in many acute myeloid and lymphoid leukaemias and in myeloproliferative syndromes. Protein/DNA binding assays allowed us to select compounds that prevent HoxA9 from binding its target DNA sequence. The first promising cellular results show toxicity induced by our derivatives in cells whose proliferation depends on HoxA9 activity, whereas this toxicity is lower in cells that do not express HoxA9. Clonogenic assays on bone-marrow cells transformed by HoxA9 show an antiproliferative effect of the selected compound that is greater on the progenitors least committed to the haematopoietic differentiation pathways.
In addition, the screening of new series of dicationic molecules identified three new compounds, the derivatives DB1255, DB1242 and RT-29, which bind original GC-rich sequences. The compound DB1255 efficiently inhibits the binding of the transcription factor Erg to its EBS consensus site.
This study demonstrates for the first time the inhibition of transcription factor binding by DB compounds, and thus opens many promising avenues for the specific inhibition of transcription factors involved in tumorigenesis.
Piat, Guilhem Xavier. "Incorporating expert knowledge in deep neural networks for domain adaptation in natural language processing." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG087.
Full text
Current state-of-the-art Language Models (LMs) are able to converse, summarize, translate, solve novel problems, reason, and use abstract concepts at a near-human level. However, to achieve such abilities, and in particular to acquire "common sense" and domain-specific knowledge, they require vast amounts of text, which are not available in all languages or domains. Additionally, their computational requirements are out of reach for most organizations, limiting their potential for specificity and their applicability in the context of sensitive data. Knowledge Graphs (KGs) are sources of structured knowledge which associate linguistic concepts through semantic relations. These graphs are sources of high-quality knowledge which pre-exist in a variety of otherwise low-resource domains, and they are denser in information than typical text. By allowing LMs to leverage these information structures, we could remove the burden of memorizing facts from LMs, reducing the amount of text and computation required to train them, and we could update their knowledge with little to no additional training by updating the KGs, thereby broadening their scope of applicability and making them more widely accessible. Various approaches have succeeded in improving Transformer-based LMs using KGs. However, most of them unrealistically assume that the problem of Entity Linking (EL), i.e. determining which KG concepts are present in the text, is solved upstream. This thesis covers the limitations of handling EL as an upstream task. It goes on to examine the possibility of learning EL jointly with language modeling, and finds that while this is a viable strategy, it does little to decrease the LM's reliance on in-domain text. Lastly, this thesis covers the strategy of using KGs to generate text in order to leverage LMs' linguistic abilities, and finds that even naïve implementations of this approach can yield measurable improvements in in-domain language processing.
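The kind of entity linking this abstract treats as an "upstream" step is often approximated in practice by dictionary matching against a KG alias table. A minimal sketch follows; the alias table and concept identifiers are hypothetical, and real EL systems must additionally disambiguate between candidate concepts.

```python
# Hypothetical alias table mapping surface forms to KG concept identifiers.
KG_ALIASES = {
    "myocardial infarction": "C0027051",
    "heart attack": "C0027051",
    "aspirin": "C0004057",
}

def link_entities(text, aliases=KG_ALIASES):
    """Greedy longest-match entity linking over a lowercase alias table.

    Returns (surface form, concept id, character offset) triples.
    """
    text = text.lower()
    mentions = []
    # Try longer aliases first so multi-word terms win over substrings.
    for surface, concept in sorted(aliases.items(), key=lambda kv: -len(kv[0])):
        start = text.find(surface)
        while start != -1:
            mentions.append((surface, concept, start))
            start = text.find(surface, start + 1)
    return mentions

print(link_entities("Aspirin is given after a heart attack."))
```

Even this naive linker illustrates why the upstream assumption is fragile: it misses paraphrases and misspellings entirely, which is one motivation for learning EL jointly with language modeling as the thesis does.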
Xue, Lin. "Aspects évolutifs de l’agir professoral dans le domaine de l’enseignement des langues : une étude à travers les discours de verbalisation de six enseignants de français langue étrangère et de chinois langue étrangère." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCA148/document.
Full text
This dissertation is devoted to reconstructing the dynamics of foreign-language teachers' thinking through a multimodal and longitudinal approach. Situated in applied linguistics, and in teacher cognition research in particular, the present work draws on a multidisciplinary theoretical framework combining social constructivism and emergentism. The study involved six teachers of French as a foreign language (FFL) and Chinese as a foreign language (CFL) working in China and in France, each followed for one semester through classroom observation and different kinds of interviews (semi-directive interviews and stimulated recall). Their verbalizations were then analyzed with a mixed approach combining content analysis and discourse analysis. Besides an unstable self-image characterized by multiple identities, each teacher's discourse reveals a knowledge and belief system marked by historicity, subjectivity, contextuality and contradiction. The validity of activity theory is confirmed by a division of labour based on the learner profiles that the teacher typifies. The importance of embodied action depends on the expected outcome: teachers wish not only to complete their teaching activity but also to achieve an effect that is an integral part of their thinking patterns. The non-linearity of context change explains the updating of teachers' thinking and practice. The complexity of teacher cognition is structured around a dynamic between intentionality, embodied action and situational constraints. The suspension of reflexivity during action, observed in neuroscience and corroborated here with a methodology from the human and social sciences, constitutes the key contribution of this work.