Theses on the topic "Ontologie génique"
Consult the top 24 dissertations (master's and doctoral theses) on research related to the topic "Ontologie génique".
Bedhiafi, Walid. "Sciences de l'information pour l'étude des systèmes biologiques (exemple du vieillissement du système immunitaire)". Electronic Thesis or Diss., Paris 6, 2017. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2017PA066139.pdf.
High-throughput experimental approaches for gene expression studies involve several processing steps for the quantification, annotation and interpretation of the results. The i3 lab and the LGIPH apply these approaches in various experimental setups. However, limitations have been observed when using conventional approaches for annotating gene expression signatures. The main objective of this thesis was to develop an alternative annotation approach to overcome this problem. The approach we have developed is based on the contextualization of genes and their products, followed by biological pathway modeling, to produce a knowledge base for the study of gene expression. We define a gene expression context as follows: cell population + anatomical compartment + pathological condition. To produce gene contexts, we opted for massive screening of the literature. We developed a Python package that annotates texts according to three ontologies chosen according to our definition of context, and we show here that it ensures better performance for text annotation than the reference tool. We used our package to screen a text corpus on the aging immune system; the results are presented here. To model the biological pathways, we developed, in collaboration with the LIPAH lab, a modeling method based on a genetic algorithm that combines semantic-proximity results obtained with the Biological Process ontology with interaction data from db-string. We were able to find networks with an error rate of 0.47
Bedhiafi, Walid. "Sciences de l'information pour l'étude des systèmes biologiques (exemple du vieillissement du système immunitaire)". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066139/document.
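The record above mentions a Python package that annotates texts against three ontologies to build gene expression contexts (cell population + anatomical compartment + pathological condition). The package itself is not shown in the record; the sketch below only illustrates the contextualization idea with hand-picked term lists and a toy corpus, all of which are hypothetical.

```python
from itertools import product

# Toy term lists standing in for three ontologies (cell type, anatomical
# compartment, pathological condition); real work would load full ontologies.
CELL_TERMS = {"naive T cell", "memory B cell", "macrophage"}
COMPARTMENT_TERMS = {"spleen", "bone marrow", "lymph node"}
CONDITION_TERMS = {"aging", "inflammation", "infection"}
GENES = {"CD28", "IL7R", "TP53"}

def find_terms(sentence, vocabulary):
    """Return vocabulary terms present in the sentence (case-insensitive)."""
    lowered = sentence.lower()
    return {term for term in vocabulary if term.lower() in lowered}

def contexts(sentence):
    """Build (gene, cell population, compartment, condition) context tuples."""
    genes = find_terms(sentence, GENES)
    cells = find_terms(sentence, CELL_TERMS)
    comps = find_terms(sentence, COMPARTMENT_TERMS)
    conds = find_terms(sentence, CONDITION_TERMS)
    return list(product(genes, cells, comps, conds))

corpus = [
    "CD28 expression drops in naive T cell populations of the spleen during aging.",
    "IL7R is discussed here without any anatomical context.",
]
for sentence in corpus:
    for gene, cell, comp, cond in contexts(sentence):
        print(f"{gene}: {cell} + {comp} + {cond}")
```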
Garcia, del Rio Diego Fernando. "Studying protein complexes for assessing the function of ghost proteins (Ghost in the Cell)". Electronic Thesis or Diss., Université de Lille (2022-....), 2023. https://pepite-depot.univ-lille.fr/ToutIDP/EDBSL/2023/2023ULILS115.pdf.
Ovarian cancer (OvCa) has the highest mortality rate among female reproductive cancers worldwide. OvCa is often referred to as a stealth killer because it is commonly diagnosed late or misdiagnosed. Once diagnosed, OvCa treatment options include surgery or chemotherapy. However, chemotherapy resistance is a significant obstacle. Therefore, there is an urgent need to identify new targets and develop novel therapeutic strategies to overcome therapy resistance. In this context, the ghost proteome is a potentially rich source of biomarkers. The ghost proteome, also known as the alternative proteome, consists of proteins translated from alternative open reading frames (AltORFs). These AltORFs originate from different start codons within mRNA molecules, such as the coding DNA sequence (CDS) in frameshifts (+1, +2), the 5'-UTR, the 3'-UTR, and possible translation products from non-coding RNAs (ncRNA). Studies on alternative proteins (AltProts) are often limited due to their case-by-case occurrence and complexity. Obtaining functional protein information for AltProts requires complex and costly biomolecular studies. However, their functions can be inferred by profiling their interaction partners, known as "guilty by association" approaches. Indeed, assessing AltProts' protein-protein interactions (PPIs) with reference proteins (RefProts) can help identify their function and set them as research targets. Since there is a lack of antibodies against AltProts, crosslinking mass spectrometry (XL-MS) is an appropriate tool for this task. Additionally, bioinformatic tools that link protein functional information through networks and gene ontology (GO) analysis are also powerful. These tools enable the visualization of signaling pathways and the grouping of RefProts based on their biological process, molecular function, or cellular localization, thus enhancing our understanding of cellular mechanisms. In this work, we developed a methodology that combines XL-MS and subcellular fractionation. The key step of subcellular fractionation allowed us to reduce the complexity of the samples analyzed by liquid chromatography tandem mass spectrometry (LC-MS/MS). To assess the validity of crosslinked interactions, we performed molecular modeling of the 3D structures of the AltProts, followed by docking studies and measurement of the corresponding crosslink distances. Network analysis indicated potential roles for AltProts in biological functions and processes. The advantages of this workflow include non-targeted AltProt identification and subcellular identification. Additionally, a proteogenomic analysis was performed to investigate the proteomes of two ovarian cancer cell lines (PEO-4 and SKOV-3 cells) in comparison to a normal ovarian epithelial cell line (T1074 cells). Using RNA-seq data, customized protein databases for each cell line were generated. Differential expression of several proteins, including AltProts, was identified between the cancer and normal cell lines. The expression of some RefProts and their transcripts was associated with cancer-related pathways. Moreover, the XL-MS methodology described above was used to identify PPIs in the cancerous cell lines. This work highlights the significant potential of proteogenomics in uncovering new aspects of ovarian cancer biology. It enables us to identify previously unknown proteins and variants that may have functional significance. The use of customized protein databases and the crosslinking approach have shed light on the "ghost proteome", an area that has remained unexplored until now.
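The "guilty by association" step described above can be illustrated with a minimal enrichment sketch: given the RefProts crosslinked to one AltProt, test whether a GO term is over-represented among them with a hypergeometric test. The annotation table, the partner list and the term-to-protein assignments below are invented for illustration and do not reproduce the thesis workflow.

```python
from scipy.stats import hypergeom

# Hypothetical GO annotation table: protein -> set of GO terms.
# Term IDs and assignments are illustrative only, not real GO annotations.
go_annotations = {
    "VIM":   {"GO:TERM_A"},
    "LMNA":  {"GO:TERM_A", "GO:TERM_B"},
    "ACTB":  {"GO:TERM_C"},
    "TUBB":  {"GO:TERM_C"},
    "HSPA8": {"GO:TERM_D"},
}
background = set(go_annotations)          # all identified RefProts
partners = {"VIM", "LMNA", "ACTB"}        # RefProts crosslinked to one AltProt

def enrichment_pvalue(term):
    """One-sided hypergeometric p-value for over-representation of `term`
    among the AltProt's crosslinked partners."""
    M = len(background)                                      # population size
    n = sum(term in go_annotations[p] for p in background)   # annotated in population
    N = len(partners)                                        # sample size
    k = sum(term in go_annotations[p] for p in partners)     # annotated in sample
    return hypergeom.sf(k - 1, M, n, N)

for term in ("GO:TERM_A", "GO:TERM_D"):
    print(term, round(enrichment_pvalue(term), 3))
```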
Lalloz, Jean-Pierre. "Génie et vérité. Ontologie de la vérité et métaphysique des créateurs". Lille 3, 1991. http://www.theses.fr/1991LIL30004.
Every practice and every theory presupposes an implicit conception of truth, which must be questioned not simply as to its existence but, because of the juridical nature of the notion, as to the invention of its essence and the legitimacy of that invention. This yields a definition of genius, so that the questions of truth and of genius meet, not as a positive reality but as a juridical necessity. After an introduction stating that philosophy must create the definition of truth of which it will be the explication, and must therefore itself be of genius, as its history attests, we study the nature of what is true through the creations of genius, the works. These works are questioned about the worldly vanity they assume and about the legitimacy of their existence, whose institution is the next problem: genius must reconcile the juridical reality of its notion with the ontological nature of truth, which is by definition substantial. The second part is devoted to the development of the knowledge necessarily involved in genius: the creator of genius must possess the answer to the most fundamental questions of ontology and metaphysics. We clarify each question and bring out the answer implied by the notion of an absolutely legitimate speaker. The conclusion concerns the notion of the person, a subject of right and not of fact, as entirely constituted by the "subconscious" supposition of what is true, and leads to a "psychoanalysis of right"
Benabderrahmane, Sidahmed. "Prise en compte des connaissances du domaine dans l'analyse transcriptomique : Similarité sémantique, classification fonctionnelle et profils flous : application au cancer colorectal". Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00653169.
Laadhar, Amir. "Local matching learning of large scale biomedical ontologies". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30126.
Although a considerable body of research work has addressed the problem of ontology matching, few studies have tackled the large ontologies used in the biomedical domain. We introduce a fully automated local matching learning approach that breaks down a large ontology matching task into a set of independent local sub-matching tasks. This approach integrates a novel partitioning algorithm as well as a set of matching learning techniques. The partitioning method is based on hierarchical clustering and does not generate isolated partitions. The matching learning approach employs different techniques: (i) local matching tasks are independently and automatically aligned using their local classifiers, which are based on local training sets built from element level and structure level features, (ii) resampling techniques are used to balance each local training set, and (iii) feature selection techniques are used to automatically select the appropriate tuning parameters for each local matching context. Our local matching learning approach generates a set of combined alignments from each local matching task, and experiments show that a multiple local classifier approach outperforms conventional, state-of-the-art approaches: these use a single classifier for the whole ontology matching task. In addition, focusing on context-aware local training sets based on local feature selection and resampling techniques significantly enhances the obtained results
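A rough sketch of the local matching learning idea described above, assuming scikit-learn: the ontology is partitioned by hierarchical clustering on concept-label features, and one classifier per partition is trained on local candidate-pair features. The concept labels, pair features and training labels are invented; the thesis's own partitioning algorithm, feature sets, resampling and feature selection are not reproduced.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical concept labels from a large source ontology.
concepts = ["heart valve", "mitral valve", "aortic valve",
            "femur", "tibia", "fibula"]

# 1) Partition the ontology with hierarchical clustering on label features,
#    yielding one local sub-matching task per cluster.
features = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(concepts)
partitions = AgglomerativeClustering(n_clusters=2).fit_predict(features.toarray())

# 2) Train one local classifier per partition on (source, target) candidate pairs.
#    Pair features and labels are invented stand-ins for element-level and
#    structure-level features.
local_training = {
    0: (np.array([[0.9, 0.8], [0.2, 0.1], [0.7, 0.9]]), np.array([1, 0, 1])),
    1: (np.array([[0.1, 0.3], [0.8, 0.7], [0.3, 0.2]]), np.array([0, 1, 0])),
}
local_classifiers = {
    part: RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    for part, (X, y) in local_training.items()
}

# 3) Each local matcher aligns its own candidate pairs; alignments are then combined.
print(partitions)
print(local_classifiers[0].predict([[0.85, 0.9]]))   # likely a match in partition 0
```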
Fortineau, Virginie. "Contribution à une modélisation ontologique des informations tout au long du cycle de vie du produit". Phd thesis, Ecole nationale supérieure d'arts et métiers - ENSAM, 2013. http://pastel.archives-ouvertes.fr/pastel-01064598.
Pham, Tuan Anh. "OntoApp : une approche déclarative pour la simulation du fonctionnement d’un logiciel dès une étape précoce du cycle de vie de développement". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4075/document.
In this thesis, we study several models of collaboration between software engineering and the Semantic Web. Based on the state of the art, we propose an approach to using ontologies in the business layer of applications. The main objective of our work is to provide the developer with tools to design, in a declarative manner, an "executable" business layer of an application in order to simulate its operation and thus show the compliance of the application with the customer requirements defined at the beginning of the software life cycle. Another advantage of this approach is to allow the developer to share and reuse the business layer description of a typical application in a domain by means of an ontology. This typical application description is called an "Application Template". The reuse of the business layer description of an application is an interesting aspect of software engineering, and it is the key point we want to address in this thesis. In the first part of the thesis, we deal with the modeling of the business layer. We first present an ontology-based approach to represent business processes and business rules, and show how to verify the consistency of a business process and of a set of business rules. We then present a mechanism for automatically checking the compliance of a business process with a set of business rules. The second part of the thesis is devoted to defining a methodology, called personalization, for creating an application from an Application Template. This methodology allows the user to create his own application from an Application Template while avoiding deadlocks and semantic errors. At the end of this part we describe an experimental platform that illustrates the feasibility of the mechanisms proposed in the thesis; this platform is built on a relational DBMS. Finally, a last chapter presents the conclusions, perspectives and other related work developed during this thesis
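A minimal sketch of the compliance-checking idea mentioned above, assuming business rules can be reduced to precedence constraints over process activities; the thesis itself relies on an ontology-based representation of processes and rules, which is not reproduced here.

```python
# Hypothetical business process: an ordered list of activities.
process = ["receive_order", "check_stock", "ship_order", "invoice"]

# Hypothetical business rules as precedence constraints:
# each (before, after) pair means `before` must occur before `after`.
rules = [("check_stock", "ship_order"),
         ("ship_order", "invoice"),
         ("validate_payment", "ship_order")]   # deliberately violated: activity missing

def check_compliance(process, rules):
    """Report every rule that the process violates (missing or misordered activity)."""
    position = {activity: i for i, activity in enumerate(process)}
    violations = []
    for before, after in rules:
        if before not in position or after not in position:
            violations.append((before, after, "missing activity"))
        elif position[before] >= position[after]:
            violations.append((before, after, "wrong order"))
    return violations

for violation in check_compliance(process, rules):
    print(violation)
```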
Benabderrahmane, Sidahmed. "Prise en compte des connaissances du domaine dans l'analyse transcriptomique : Similarité sémantique, classification fonctionnelle et profils flous : application au cancer colorectal". Electronic Thesis or Diss., Nancy 1, 2011. http://www.theses.fr/2011NAN10097.
Bioinformatic analyses of transcriptomic data aim to identify genes whose expression level varies between tissue samples, for example tissues from healthy versus sick patients, and to characterize these genes on the basis of their functional annotation. In this thesis, I present four contributions for taking domain knowledge into account in these methods. Firstly, I define a new semantic and functional similarity measure that optimally exploits functional annotations from the Gene Ontology (GO). Then, I show, thanks to a rigorous evaluation method, that this measure is efficient for the functional classification of genes. In the third contribution, I propose a differential approach with fuzzy assignment for building differential expression profiles (DEPs). I define an algorithm for analyzing overlaps between functional clusters and reference sets, such as the DEPs here, in order to point out genes that have both similar functional annotations and similar variations in expression. This method is applied to experimental data produced from samples of healthy tissue, colorectal tumor tissue and a cancerous cultured cell line. Finally, the IntelliGO similarity measure is generalized to other structured vocabularies organized, like GO, as rooted directed acyclic graphs, with an application concerning the semantic reduction of attributes before mining
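A small illustration of a node-based semantic similarity over a GO-like DAG, using Resnik's information-content measure as a stand-in; the IntelliGO measure itself and the real GO graph are not reproduced, and the toy terms and annotation counts below are invented.

```python
import math

# Toy GO-like DAG: child term -> set of parent terms.
parents = {
    "GO:B": {"GO:A"}, "GO:C": {"GO:A"},
    "GO:D": {"GO:B"}, "GO:E": {"GO:B", "GO:C"},
}

# Hypothetical annotation counts used to derive information content (IC).
annotation_counts = {"GO:A": 100, "GO:B": 40, "GO:C": 30, "GO:D": 10, "GO:E": 5}
total = annotation_counts["GO:A"]

def ancestors(term):
    """Return the term together with all of its ancestors in the DAG."""
    seen, stack = {term}, [term]
    while stack:
        for parent in parents.get(stack.pop(), set()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def ic(term):
    """Information content: rarer terms are more informative."""
    return -math.log(annotation_counts[term] / total)

def resnik_similarity(t1, t2):
    """IC of the most informative common ancestor (a node-based measure)."""
    common = ancestors(t1) & ancestors(t2)
    return max(ic(t) for t in common) if common else 0.0

print(round(resnik_similarity("GO:D", "GO:E"), 3))   # both descend from GO:B
```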
Perry, Nicolas. "INDUSTRIALISATION DES CONNAISSANCES : APPROCHES D'INTEGRATION POUR UNE UTILISATION OPTIMALE EN INGENIERIE (CAS DE L'EVALUATION ECONOMIQUE)". Habilitation à diriger des recherches, Université de Nantes, 2007. http://tel.archives-ouvertes.fr/tel-00204563.
We thus develop environments intended to be interactive, for decision support (DSS).
From then on, and whatever the domain addressed, the knowledge of domain experts becomes a resource that one would like to share as widely as possible, or even operate semi-automatically. Yet this knowledge is intangible, often unformalized, and exchanged through information flows whose quantity, quality and veracity are not always guaranteed or certified.
It is therefore important to look at knowledge management approaches in order to specify virtual environments that are as rich as possible (semantically speaking) and into which part of the experts' knowledge can be integrated. Certain engineering activities, whose processes and rules have been formalized and then modeled, thus become partially automatable, leaving it to the user to decide in situations not handled by the automated expertise, and to criticize, revoke or evolve the proposed choices and alternatives.
A significant portion of the studies and research carried out in information and communication sciences gives pride of place to artificial intelligence, but the application domains often remain far removed from engineering use cases (mechanical engineering in particular).
It therefore matters to us to absorb existing approaches, methods and tools, appropriating them while making them evolve to suit our needs. We must master these methods and their technical solutions so as to drive their evolution and draw these research themes toward our own problems.
Consequently, my work has followed this path of appropriating knowledge management and automation approaches, which falls within the field of knowledge engineering. We have therefore sought to develop knowledge-based environments supporting engineering processes.
This position (knowledge engineer) sits at the interface between the expert (receptacle and source of the knowledge to be shared), the computer scientist (who will develop the knowledge-based virtual environment) and the user (who will use the tool and keep it alive).
Different points of view then confront each other, and it becomes just as important to master the technical aspects (existing methods and solutions for formalizing and modeling knowledge, developing the applications, validating the integrity and consistency of the elements integrated or to be integrated, maintaining the knowledge bases, etc.) as the project management aspects, since it remains very difficult to assess the direct return on investment of such projects. A failure generally leads to the abandonment of this type of approach, which is long, heavy and often opaque to the expert or the user.
This first part of my research activities has thus contributed to enriching aspects of modeling (of enterprise objects: information, knowledge, etc.), of structuring these models (modeling method) and of their integration into the software application development cycle.
Analysis of mechanical engineering practices (in the broad sense) showed that we keep deepening and refining models and computations for the technical solutions that answer customer functions and needs, whereas the cost or value criterion remains barely assessed or addressed. Yet in today's global and competitive context, it is important to balance the technical and economic functions of products, which have become increasingly complex.
We therefore try to address performance through economic aspects along the product life cycle. This gives us an application domain for our work on integrating and making available expertise from economics and management sciences within engineering processes.
This work initially addressed the fit between economic evaluation tools and methods and the product life cycle; problems of partial information or of moving between models arise at every transition stage.
Moreover, our reflections and exchanges with researchers in management sciences led us to consider the concept of value, which encompasses cost. It then becomes possible to evaluate alternative value chains and to envisage the joint design of products and processes with controlled added value. The decision-support environment thus completes the decision contexts by integrating the various stakeholders' points of view and their visions of the value to be taken into account.
All of this work and reflection led me to the neologism "Industrialisation des Connaissances" (Knowledge Industrialization), in the sense of making company experts' knowledge available in a reliable, evolving and objectified way. This work has been fed by 4 defended theses (one in progress), 7 DEA and Master's theses, involvement in one FP5 Thematic Network and two FP6 Networks of Excellence, as well as a sustained collaboration since 2003 with a South African research team.
Ayllón-Benítez, Aarón. "Development of new computational methods for a synthetic gene set annotation". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0305.
The revolution in new sequencing technologies, by strongly improving the production of omics data, is leading to new understandings of the relations between genotype and phenotype. To interpret and analyze data grouped according to a phenotype of interest, methods based on statistical enrichment have become a standard in biology. However, these methods synthesize the biological information by selecting a priori the over-represented terms, and they focus on the most studied genes, which may represent a limited coverage of the annotated genes within a gene set. During this thesis, we explored different methods for annotating gene sets. In this frame, we developed three studies allowing the annotation of gene sets and thus improving the understanding of their biological context. First, visualization approaches were applied to represent the annotation results provided by enrichment analysis for a gene set or a repertoire of gene sets. In this work, a visualization prototype called MOTVIS (MOdular Term VISualization) was developed to provide an interactive representation of a repertoire of gene sets, combining two visual metaphors: a treemap view that provides an overview and also displays detailed information about gene sets, and an indented tree view that can be used to focus on the annotation terms of interest. MOTVIS has the advantage of overcoming the limitations of each visual metaphor when used individually. This illustrates the interest of using different visual metaphors to facilitate the comprehension of biological results by representing complex data. Secondly, to address the issues of enrichment analysis, a new method for analyzing the impact of using different semantic similarity measures on gene set annotation was proposed. To evaluate the impact of each measure, two relevant criteria were considered for characterizing a "good" synthetic gene set annotation: (i) the number of annotation terms has to be drastically reduced while maintaining a sufficient level of detail, and (ii) the number of genes described by the selected terms should be as large as possible. Thus, nine semantic similarity measures were analyzed to identify the best possible compromise between both criteria. Using GO to annotate the gene sets, we observed better results with node-based measures, which use the terms' characteristics, than with edge-based measures, which use the relations between terms. The annotation of the gene sets achieved with the node-based measures did not exhibit major differences regardless of the characteristics of the terms used. We then developed GSAn (Gene Set Annotation), a novel gene set annotation web server that uses semantic similarity measures to synthesize a priori GO annotation terms. GSAn includes the interactive visualization MOTVIS, dedicated to visualizing the representative terms of gene set annotations. Compared to enrichment analysis tools, GSAn has shown excellent results in terms of maximizing gene coverage while minimizing the number of terms. At last, the third work consisted in enriching the annotation results provided by GSAn. Since the knowledge described in GO may not be sufficient for interpreting gene sets, other biological information, such as pathways and diseases, may be useful to provide a wider biological context. Thus, two additional knowledge resources, namely Reactome and the Disease Ontology (DO), were integrated within GSAn. In practice, GO terms were mapped to terms of Reactome and DO, before and after applying the GSAn method. The integration of these resources improved the results in terms of gene coverage without significantly affecting the number of involved terms. Two strategies were applied to find mappings (generated or extracted from the web) between each new resource and GO. We showed that performing the mapping before computing the GSAn method made it possible to obtain a larger number of inter-relations between the two knowledge resources
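The two criteria used above to characterize a "good" synthetic annotation (high gene coverage, few terms) can be computed directly; the gene set and candidate terms below are invented for illustration and the GSAn method itself is not reproduced.

```python
# Hypothetical gene set and a candidate synthetic annotation:
# each selected term covers (annotates) a subset of the genes.
gene_set = {"TP53", "BRCA1", "BRCA2", "ATM", "CHEK2", "MDM2"}
annotation = {
    "term_1 (DNA repair-like)":          {"BRCA1", "BRCA2", "ATM", "CHEK2"},
    "term_2 (damage response-like)":     {"TP53", "ATM", "CHEK2"},
}

def coverage_and_size(gene_set, annotation):
    """Return (gene coverage, number of terms) for a candidate annotation:
    coverage should be as high as possible, the term count as low as possible."""
    covered = set().union(*annotation.values()) & gene_set
    return len(covered) / len(gene_set), len(annotation)

coverage, n_terms = coverage_and_size(gene_set, annotation)
print(f"gene coverage = {coverage:.2f} with {n_terms} terms")   # 0.83 with 2 terms
```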
Guo, Jing. "Serious Games pour la e-Santé : application à la formation des médecins généralistes". Phd thesis, Toulouse 3, 2016. http://oatao.univ-toulouse.fr/17813/1/the%CC%80se_GUO.pdf.
Hudelot, Céline. "Towards a Cognitive Vision Platform for Semantic Image Interpretation; Application to the Recognition of Biological Organisms". Phd thesis, Université de Nice Sophia-Antipolis, 2005. http://tel.archives-ouvertes.fr/tel-00328034.
Semantic image interpretation is a complex problem that can be split into three sub-problems that are easier to solve independently: (1) semantic interpretation, (2) visual data management for matching the high-level abstract representations of the scene with the image data coming from the sensors, and (3) image processing. We propose a distributed architecture based on the cooperation of three knowledge-based systems (KBSs).
Each KBS is specialized in one of the sub-problems of image interpretation. For each KBS we proposed a generic model by formalizing the knowledge and dedicated reasoning strategies. In addition, we propose to use two ontologies to facilitate knowledge acquisition and to allow interoperability between the three KBSs.
The cognitive vision platform was implemented with the LAMA knowledge-based system development platform designed by the ORION team.
The proposed solutions were validated on a concrete and difficult application: the early diagnosis of plant diseases, in particular diseases of greenhouse rose plants. This work was carried out in cooperation with INRA (Institut National de la Recherche Agronomique).
Bouhours, Cédric. "Détection, Explications et Restructuration de défauts de conception : les patrons abîmés". Phd thesis, Toulouse 3, 2010. http://tel.archives-ouvertes.fr/tel-00457623.
Qasim, Lara. "System reconfiguration : A Model based approach; From an ontology to the methodology bridging engineering and operations Model-Based System Reconfiguration: A Descriptive Study of Current Industrial Challenges Towards a reconfiguration framework for systems engineering integrating use phase data An overall Ontology for System Reconfiguration using Model-Based System Engineering An Ontology for System Reconfiguration: Integrated Modular Avionics IMA Case Study A model-based method for System Reconfiguration". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST031.
System evolutions have to be managed to ensure system effectiveness and efficiency throughout the whole lifecycle, particularly for complex systems that take years of development and decades of usage. System reconfiguration is key in complex systems management, as it is an enabler of system flexibility and adaptability with respect to system evolutions. System reconfiguration ensures operational effectiveness and increases system qualities (e.g., reliability, availability, safety, and usability). This research has been conducted in the context of a large international aerospace, space, ground transportation, defense, and security company, and aims at supporting system reconfiguration during operations. First, we conducted a descriptive study based on a field study and a literature review to identify the industrial challenges related to system reconfiguration. The main issue lies in the development of reconfiguration support; more specifically, challenges related to data identification and integration were identified. In this thesis, we present the OSysRec ontology, which captures and formalizes reconfiguration data. The ontology synthesizes the structure, dynamics, and management aspects necessary to support the system reconfiguration process in an overall manner. Furthermore, we present a model-based method (MBSysRec) that integrates system reconfiguration data and bridges the engineering and operational phases. MBSysRec is a multidisciplinary method that combines combinatorial configuration generation with a multi-criteria decision-making method for configuration evaluation and selection. This thesis is a step towards a model-based approach for the reconfiguration of evolving systems, ensuring their flexibility and adaptability
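A minimal sketch of the configuration generation and selection idea described above: enumerate admissible configurations combinatorially, then rank them with a simple weighted-sum score standing in for the multi-criteria decision-making step of MBSysRec. The subsystems, options, scores and weights below are all invented.

```python
from itertools import product

# Hypothetical component options for three subsystems of a reconfigurable system.
options = {
    "sensor":   ["radar_A", "radar_B"],
    "computer": ["cpu_low", "cpu_high"],
    "link":     ["bus_1553", "bus_ethernet"],
}

# Hypothetical per-option scores: (availability, cost).
scores = {
    "radar_A": (0.90, 3), "radar_B": (0.70, 1),
    "cpu_low": (0.60, 1), "cpu_high": (0.95, 4),
    "bus_1553": (0.80, 2), "bus_ethernet": (0.85, 3),
}
weights = (0.7, 0.3)   # decision-maker preference: availability over cost

def weighted_score(configuration):
    """Simple weighted sum: reward availability, penalize cost."""
    availability = sum(scores[c][0] for c in configuration) / len(configuration)
    cost = sum(scores[c][1] for c in configuration)
    return weights[0] * availability - weights[1] * cost / 10

# 1) Generate every admissible configuration, 2) evaluate, 3) select the best one.
configurations = list(product(*options.values()))
best = max(configurations, key=weighted_score)
print(best, round(weighted_score(best), 3))
```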
Bennaceur, Amel. "Synthèse dynamique de médiateurs dans les environnements ubiquitaires". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00849402.
Gravier, Christophe. "Vers la généralisation de manipulations distantes et collaboratives d'instruments de haute technologie". Phd thesis, Université Jean Monnet - Saint-Etienne, 2007. http://tel.archives-ouvertes.fr/tel-00287637.
Blondet, Gaëtan. "Système à base de connaissances pour le processus de plan d'expériences numériques". Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2363/document.
In order to improve industrial competitiveness, product design relies more and more on numerical tools, such as numerical simulation, to develop better and cheaper products faster. Numerical Design of Experiments (NDoE) is increasingly used to take variabilities into account during simulation processes, in order to design more robust, reliable and optimized products earlier in the product development process. Nevertheless, an NDoE process may be too expensive to be applied to a complex product, because of the high computational cost of the model and the high number of required experiments. Several methods exist to decrease this computational cost, but they require expert knowledge to be applied efficiently. In addition, an NDoE process produces a large amount of data which must be managed. The aim of this research is to propose a solution to define, as fast as possible, an efficient NDoE process that produces as much useful information as possible with a minimal number of simulations, for complex products. The objective is to shorten both the process definition and execution steps. A knowledge-based system is proposed, based on a specific ontology and a Bayesian network, to capitalize, share and reuse knowledge and data in order to predict the best NDoE process definition for a new product. This system is validated on a product from the automotive industry
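A toy stand-in for the prediction step described above, assuming the learnt Bayesian network can be summarized as a conditional probability table over a few discrete variables; the variables, states and probabilities below are invented, and the ontology layer is omitted.

```python
# Hypothetical conditional probability table P(best DoE strategy | model cost, dimension),
# standing in for a Bayesian network learnt from capitalized past studies.
cpt = {
    ("cheap", "low"):      {"full_factorial": 0.70, "lhs": 0.20, "adaptive_kriging": 0.10},
    ("cheap", "high"):     {"full_factorial": 0.10, "lhs": 0.70, "adaptive_kriging": 0.20},
    ("expensive", "low"):  {"full_factorial": 0.10, "lhs": 0.30, "adaptive_kriging": 0.60},
    ("expensive", "high"): {"full_factorial": 0.05, "lhs": 0.25, "adaptive_kriging": 0.70},
}

def recommend(model_cost, dimension):
    """Return the most probable NDoE strategy for a new product's simulation model."""
    posterior = cpt[(model_cost, dimension)]
    return max(posterior, key=posterior.get), posterior

method, posterior = recommend("expensive", "high")
print(method, posterior[method])   # adaptive_kriging 0.7
```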
Nzetchou, Stéphane. "Méthodologie d'enrichissement sémantique de la CAO dans un environnement de continuité numérique". Thesis, Compiègne, 2021. http://www.theses.fr/2021COMP2642.
The digital transition in the manufacturing industry is characterised by a legacy of three or even four decades. Some CAO models or digital mock-ups accumulated during this period are frozen, i.e. 3D models without a construction tree, characterised by missing geometries due to software changes or versions of 3D formats that have not been updated. Reverse engineering activities on CAO models aim at obtaining semantically rich 3D models, i.e. parametric and modifiable models made up of construction operations, carrying attributes and metadata, with geometric rules and constraints, etc., thanks to the use of engineering tools such as CATIA, or to approaches based on point clouds coming from a scan, for example. However, this is still not satisfactory, because at the end of the reverse engineering activities we often obtain a solid with a weak semantic representation or an absent construction tree. This leads us to propose, in the framework of this thesis work, a methodology for managing the information linked to CAO models in order to integrate into them expert information that we call semantics. The frozen CAO models handled are usually in low-level formats such as STL, IGES or STEP AP203. They are used as input data for our methodology and can be associated with product definition data, such as a product drawing or documents. The processing of CAO models requires a solution able to manage the digital models and the information they could possibly integrate, and also to cope with the incompleteness of some CAO models, which is linked to the 3D format or to the limits of the technology used to obtain the CAO model (e.g. software limits, or 3D formats that only cover geometric representation and cannot carry a construction tree or graphically represent geometric dimensions and tolerances, etc.). Finally, the relevance of the non-geometric information integrated into the CAO model during the semantic overlay phase should make it possible, in certain cases, to produce parameterised CAO models specific to the activity of the application domain. The state of the art concerning the representation of the information contained in CAO models and the management of this information makes it possible to identify techniques and approaches that help the semantic enrichment of CAO models at various levels of granularity. This thesis proposes a methodology named Vaquero For CAO Semantic Enrichment (VFCSE), made of three steps: access, identification and annotation. The aim of this methodology is to integrate missing, standardised information of a non-geometric nature, such as product specifications, tolerances, geometric dimensions, etc., into frozen CAO models. This information is derived from the needs of users working on the CAO model and comes from a semantically rich standard, in order to be useful for many operations related to the product life cycle. Enrichment based on this semantically rich standard allows the information to be perpetuated and CAO model information to be reused efficiently. To do this, a CAO model is retrieved from a PDM (Product Data Management) system following a user request. It is visualised in a CAO viewer supporting the STL, IGES and STEP AP203 formats. Then follows a step of identifying the components of the CAO model; these components can be parts or assemblies. The identified components are annotated based on the STEP AP242 format, which serves as the semantically rich standard. These annotations are stored in a standardised ontology, which serves as a minimal basis for carrying all the semantics to be integrated into the CAO model, in order to make the model durable and reusable. The scientific contribution of this work is mainly based on the possibility of reverse engineering by using ontologies to annotate 3D models, according to the needs of the user who has the CAO model at his disposal
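A minimal sketch of the annotation step, assuming a component-level annotation record loosely inspired by the kind of information (tolerances, dimensions, specifications) carried by STEP AP242; the classes and field names below are hypothetical and do not follow the actual AP242 data model or the VFCSE ontology.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentAnnotation:
    """Non-geometric, expert information attached to one component of a frozen model."""
    component_id: str
    material: str = ""
    tolerances: dict = field(default_factory=dict)
    specifications: list = field(default_factory=list)

@dataclass
class AnnotatedModel:
    """A frozen CAO model (e.g. STL/IGES/STEP AP203 file) plus its annotations."""
    source_file: str
    annotations: list = field(default_factory=list)

    def annotate(self, annotation: ComponentAnnotation):
        self.annotations.append(annotation)

model = AnnotatedModel(source_file="bracket.stp")
model.annotate(ComponentAnnotation(
    component_id="hole_1",
    material="AlSi10Mg",
    tolerances={"diameter": "Ø8 H7"},
    specifications=["deburr edges"],
))
print(model.annotations[0])
```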
Carloni, Olivier. "Introduction de raisonnement dans un outil industriel de gestion des connaissances". Phd thesis, Montpellier 2, 2008. http://www.theses.fr/2008MON20101.
Carloni, Olivier. "Introduction de raisonnement dans un outil industriel de gestion des connaissances". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2008. http://tel.archives-ouvertes.fr/tel-00387017.
Sarcevic, Lara. "Aporie du second degré : la forme à la quête d'une nouvelle autonomie : réflexions sur le rôle et le statut de la discursivité théorique dans l'art contemporain de la fin des années soixante à nos jours". Thesis, Lille 3, 2014. http://www.theses.fr/2014LIL30008.
Whether produced by artists, museums or economic actors, a new discursivity is now added to the traditional critical production of historians, theoreticians and philosophers of art, and almost systematically accompanies the plastic works of contemporary artists. This discursive growth also answers a proliferation of the functions of theoretical discourse, which is used, in some contemporary artistic practices, as the very plastic material of the work. Marked by a singular plasticity, contemporary art works extend the expressive possibilities to the point that the work can manifest itself in any form, or even completely disappear. This extreme diversity results from adherence to a new artistic principle, the "second degree", or the self-reflexivity of the artwork, which attributes to the idea of the work the same importance as to its material formulation. What has become undetermined in the plastic form seems, in return, to be offset by the requirement for greater conceptual determination. In our research, we were committed to highlighting the origin of this discursive primacy in the new paradigmatic situation of the visual arts. Not limited to the mediation and social legitimization of art works, discursivity also opens a new creative and hermeneutic space, involving theoretical discourse as a creative act complementary to plastic form
Benaicha, Mohamed. "Identification des concepts pour la ré-ingénierie des ontologies". Thèse, 2017. http://constellation.uqac.ca/4231/1/Benaicha_uqac_0862N_10328.pdf.
Boily, Gino. "Identification de transcrits modulés par ETV6 : un gène candidat suppresseur de tumeur". Thèse, 2005. http://hdl.handle.net/1866/15576.