Theses on the topic "Données biomédicales"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the 33 best theses for your research on the topic "Données biomédicales".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organise your bibliography correctly.
BARRA, Vincent. "Modélisation, classification et fusion de données biomédicales". Habilitation à diriger des recherches, Université Blaise Pascal - Clermont-Ferrand II, 2004. http://tel.archives-ouvertes.fr/tel-00005998.
Choquet, Rémy. "Partage de données biomédicales : modèles, sémantique et qualité". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00824931.
Personeni, Gabin. "Apport des ontologies de domaine pour l'extraction de connaissances à partir de données biomédicales". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0235/document.
The Semantic Web proposes standards and tools to formalize and share knowledge on the Web in the form of ontologies. Biomedical ontologies and their associated data represent a vast collection of complex, heterogeneous and linked knowledge. The analysis of such knowledge presents great opportunities in healthcare, for instance in pharmacovigilance. This thesis explores several ways to make use of this biomedical knowledge in the data mining step of a knowledge discovery process. In particular, we propose three methods in which several ontologies cooperate to improve data mining results. A first contribution of this thesis describes a method based on pattern structures, an extension of formal concept analysis, to extract associations between adverse drug events from patient data. In this context, a phenotype ontology and a drug ontology cooperate to allow a semantic comparison of these complex adverse events, leading to the discovery of associations between such events at varying degrees of generalization, for instance at the drug or drug-class level. A second contribution uses a numeric method based on semantic similarity measures to classify different types of genetic intellectual disabilities, characterized by both their phenotypes and the functions of their linked genes. We study two different similarity measures, applied with different combinations of phenotypic and gene-function ontologies. In particular, we investigate the influence of each domain of knowledge represented in each ontology on the classification process, and how they can cooperate to improve that process. Finally, a third contribution uses the data component of the Semantic Web, the Linked Open Data (LOD), together with linked ontologies, to characterize genes responsible for intellectual disabilities. We use Inductive Logic Programming (ILP), a method well suited to mining relational data such as LOD while exploiting domain knowledge from ontologies through reasoning mechanisms. Here, ILP makes it possible to extract from LOD and ontologies a descriptive and predictive model of genes responsible for intellectual disabilities. These contributions illustrate the possibility of having several ontologies cooperate to improve various data mining processes.
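The idea of comparing adverse drug events at varying degrees of generalization can be sketched as climbing a drug hierarchy to the concepts that generalize both terms. This is a toy illustration with made-up labels, not the thesis's actual ontologies or its pattern-structure machinery:

```python
# Sketch: comparing two terms through their shared ancestors in a small,
# hand-made drug hierarchy (labels are hypothetical).
def ancestors(term, parents):
    """Return the term together with all of its ancestors in a child -> parents DAG."""
    seen = {term}
    stack = [term]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def common_generalizations(t1, t2, parents):
    """Concepts that generalize both terms (shared ancestors, including the terms themselves)."""
    return ancestors(t1, parents) & ancestors(t2, parents)

# Hypothetical fragment of a drug-class hierarchy.
PARENTS = {
    "amoxicillin": ["penicillin"],
    "ampicillin": ["penicillin"],
    "penicillin": ["beta-lactam antibiotic"],
    "beta-lactam antibiotic": ["drug"],
}

print(common_generalizations("amoxicillin", "ampicillin", PARENTS))
# two distinct penicillins meet at the 'penicillin' class level
```

The most specific shared ancestor ("penicillin" here) is the level at which an association between two otherwise distinct events can be generalized.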
Personeni, Gabin. "Apport des ontologies de domaine pour l'extraction de connaissances à partir de données biomédicales". Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0235.
Seitz, Ludwig. "Conception et mise en oeuvre de mécanismes sécurisés d'échange de données confidentielles : application à la gestion de données biomédicales dans le cadre d'architectures de grilles de calcul / données". Lyon, INSA, 2005. http://theses.insa-lyon.fr/publication/2005ISAL0055/these.pdf.
Grid computing allows users to share multiple heterogeneous resources, such as computing power, storage capacity and data, and provides an architecture for interoperation of these resources that is transparent from the user's point of view. An upcoming application for Grids is healthcare. Even more than for the first applications of Grids (e.g. particle physics, terrestrial observation), security is a major issue for medical applications. Conventional data protection mechanisms are of limited use, due to the novel security challenges posed by Grids. To respond to these challenges we propose an access control system that is decentralized and in which the owners of data remain in control of the permissions concerning their data. Furthermore, as data may be needed at very short notice, the access control system must support delegation of rights that is effective immediately. Grid users also need delegation mechanisms to give rights to processes that act on their behalf. As these processes may spawn sub-processes, multi-step delegation must be possible. In addition to these usability requirements, the transparent storage and replication mechanisms of Grids make it necessary to implement additional protection mechanisms for confidential data: access control can be circumvented by attackers with access to the physical storage medium. We therefore need encrypted storage mechanisms to enhance the protection of data stored on a Grid. In this thesis we propose a comprehensive architecture for the protection of confidential data on Grids. This architecture includes an access control system and an encrypted storage scheme.
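The multi-step delegation requirement can be illustrated with a minimal chain-of-grants check: a chain is valid only if it is rooted at the data owner and every intermediate link is allowed to delegate further. This is a logic-only sketch with hypothetical names; signature verification and the thesis's actual protocol are deliberately out of scope:

```python
# Sketch of multi-step delegation: owner -> service -> job.
# Each Grant passes one right on; re-delegation requires delegable=True.
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    issuer: str
    subject: str
    right: str       # e.g. "read"
    delegable: bool  # may the subject delegate this right further?

def chain_is_valid(owner, chain):
    holder = owner
    for i, g in enumerate(chain):
        if g.issuer != holder:
            return False          # link not issued by the current rights holder
        if i < len(chain) - 1 and not g.delegable:
            return False          # attempted re-delegation of a dead-end grant
        holder = g.subject
    return True

chain = [
    Grant("alice", "hospital-portal", "read", delegable=True),
    Grant("hospital-portal", "analysis-job", "read", delegable=False),
]
print(chain_is_valid("alice", chain))  # True: owner -> portal -> job
```

A real system would additionally sign each grant and check the delegated right is a subset of the issuer's right; the chain walk above is only the authorization skeleton.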
Rivault, Yann. "Analyse de trajectoires de soins à partir de bases de données médico-administratives : apport d'un enrichissement par des connaissances biomédicales issues du Web des données". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1B003/document.
Reusing healthcare administrative databases for public health research is relevant and opens new perspectives. In pharmacoepidemiology, it makes it possible to study diseases as well as care consumption at the scale of a population. Nevertheless, reusing these information systems, which were initially designed for accounting purposes and whose interoperability is limited, raises new challenges in terms of representation, integration, exploration and analysis. This thesis deals with the joint use of healthcare administrative databases and biomedical knowledge for the study of patient care trajectories. This includes both (1) the exploration and identification, through queries, of relevant care pathways in voluminous data flows, and (2) the analysis of the retained trajectories. Semantic Web technologies and biomedical ontologies from the Linked Data made it possible to identify care trajectories containing a drug interaction or a potential contraindication between a prescribed drug and the patient's state of health. In addition, we developed the R package queryMed to enable public health researchers to carry out such studies while overcoming the difficulties of using Semantic Web technologies and ontologies. After identifying potentially interesting trajectories, knowledge from biomedical nomenclatures and ontologies also enriched existing methods for analysing care trajectories, so as to better take into account the complexity of the data. This notably resulted in the integration of semantic similarities between medical concepts. Semantic Web technologies were also used to explore the obtained results.
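The contraindication check at the heart of such trajectory screening can be illustrated with a toy triple set and a pure-Python filter. The drug and condition labels below are made up, and the real queryMed work goes through SPARQL endpoints and actual biomedical ontologies; this only shows the matching idea:

```python
# Toy knowledge set: (drug, relation, condition) triples, hypothetical labels.
TRIPLES = {
    ("aspirin", "contraindicatedWith", "peptic_ulcer"),
    ("ibuprofen", "contraindicatedWith", "renal_failure"),
}

def flag_trajectory(prescriptions, conditions):
    """Return (drug, condition) pairs where a prescribed drug is contraindicated
    by one of the patient's recorded conditions."""
    return sorted(
        (d, c)
        for (d, rel, c) in TRIPLES
        if rel == "contraindicatedWith" and d in prescriptions and c in conditions
    )

alerts = flag_trajectory({"aspirin", "paracetamol"}, {"peptic_ulcer"})
print(alerts)  # [('aspirin', 'peptic_ulcer')]
```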
Nikiema, Jean. "Intégration de connaissances biomédicales hétérogènes grâce à un modèle basé sur les ontologies de support". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0179/document.
In the biomedical domain, there are almost as many knowledge resources in health as there are application fields. These knowledge resources, described according to different representation models and for different contexts of use, raise the problem of the complexity of their interoperability, especially for current public health problems such as personalized medicine, translational medicine and the secondary use of medical data. Indeed, these knowledge resources may represent the same notion in different ways, or represent different but complementary notions. To be able to use knowledge resources jointly, we studied three processes that can overcome semantic conflicts (difficulties encountered when relating distinct knowledge resources): alignment, integration and the semantic enrichment of the integration. Alignment consists in creating a set of equivalence or subsumption mappings between entities from knowledge resources. Integration aims not only to find mappings but also to organize all the knowledge resources' entities into a unique and coherent structure. Finally, the semantic enrichment of the integration consists in finding all the required mapping relations between entities of distinct knowledge resources (equivalence, subsumption, transversal and, failing that, disjunction relations). In this framework, we first performed the alignment of laboratory test terminologies: LOINC and the local terminology of the Bordeaux hospital. We pre-processed the noisy labels of the local terminology to reduce the risk of naming conflicts, then suppressed erroneous mappings (confounding conflicts) using the structure of LOINC. Secondly, we integrated RxNorm into SNOMED CT. We constructed formal definitions for each entity in RxNorm by using their definitional features (active ingredient, strength, dose form, etc.) according to the design patterns proposed by SNOMED CT, and then integrated the constructed definitions into SNOMED CT. The obtained structure was classified, and the inferred equivalences generated between RxNorm and SNOMED CT were compared to morphosyntactic mappings. Our process resolved some cases of naming conflicts but was confronted with confounding and scaling conflicts, which highlights the need to improve RxNorm and SNOMED CT. Finally, we performed a semantically enriched integration of ICD-10 and ICD-O3 using SNOMED CT as support. As ICD-10 describes diagnoses and ICD-O3 describes this notion according to two different axes (i.e., histological lesions and anatomical structures), we used the SNOMED CT structure to identify transversal relations between their entities (resolution of open conflicts). During the process, the structure of SNOMED CT was also used to suppress erroneous mappings (naming and confounding conflicts) and to disambiguate multiple mappings (scaling conflicts).
Courilleau, Nicolas. "Visualisation et traitements interactifs de grilles régulières 3D haute-résolution virtualisées sur GPU. Application aux données biomédicales pour la microscopie virtuelle en environnement HPC". Thesis, Reims, 2019. http://www.theses.fr/2019REIMS013.
Data visualisation is an essential aspect of scientific research in many fields. It helps to understand observed or even simulated phenomena and to extract information from them for purposes such as experimental validation or project review. The focus of this thesis is on the visualisation of volume data in medical and biomedical imaging. The acquisition devices used to acquire the data generate scalar or vector fields represented in the form of regular 3D grids. The increasing accuracy of acquisition devices implies an increasing size of the volume data, and therefore requires adapting the visualisation algorithms to manage such volumes. Moreover, visualisation mostly relies on GPUs because they are well suited to such problems; however, they possess a very limited amount of memory compared to the generated volume data. The question then arises of how to dissociate the computation units, which enable visualisation, from those of storage. Algorithms based on the so-called "out-of-core" principle are the solution for managing large volume datasets. In this thesis, we propose a complete GPU-based pipeline allowing real-time visualisation and processing of volume data that are significantly larger than the CPU and GPU memory capacities. The interest of the pipeline comes from its GPU-based approach to an out-of-core addressing structure, allowing data virtualisation, which is well suited to volume data management. We validate our approach using different real-time applications of visualisation and processing. First, we propose an interactive virtual microscope allowing 3D auto-stereoscopic visualisation of stacks of high-resolution images. Then, we verify the adaptability of our structure to all data types with a multimodal virtual microscope. Finally, we demonstrate the multi-role capabilities of our structure through a concurrent real-time visualisation and processing application.
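At the core of an out-of-core addressing structure is the mapping from a voxel coordinate to a brick identifier plus an in-brick offset, which a page table then resolves to resident or non-resident storage. A minimal sketch (brick size and page-table contents here are arbitrary, and the thesis's GPU structure is of course far more elaborate):

```python
# Sketch: virtualized volume addressing with fixed-size bricks and a page table.
def voxel_to_brick(x, y, z, brick=32):
    bx, by, bz = x // brick, y // brick, z // brick   # brick grid coordinate
    ox, oy, oz = x % brick, y % brick, z % brick      # offset inside the brick
    return (bx, by, bz), (ox, oy, oz)

# Page table: which bricks are currently resident (a plain dict stands in
# for the GPU-side structure).
page_table = {(1, 0, 0): "resident", (0, 0, 0): "on disk"}

brick_id, offset = voxel_to_brick(40, 5, 70, brick=32)
print(brick_id, offset, page_table.get(brick_id, "not loaded"))
```

A miss ("not loaded") is what triggers the streaming of a brick from CPU memory or disk in a real out-of-core renderer.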
Chevaillier, Béatrice. "Analyse de données d'IRM fonctionnelle rénale par quantification vectorielle". Electronic Thesis or Diss., Metz, 2010. http://www.theses.fr/2010METZ005S.
Dynamic Contrast-Enhanced Magnetic Resonance Imaging has great potential for renal function assessment but has to be evaluated on a large scale before its clinical application. Registration of image sequences and segmentation of internal renal structures are mandatory in order to exploit the acquisitions. We propose a reliable and user-friendly tool to partially automate these two operations. Statistical registration methods based on mutual information are tested on real data. Segmentation of the cortex, medulla and cavities is performed using the time-intensity curves of renal voxels in a two-step process. Classifiers are first built with pixels of the slice that contains the largest proportion of renal tissue: two vector quantization algorithms, namely K-means and the Growing Neural Gas with targeting, are used here. These classifiers are first tested on synthetic data. For real data, as no ground truth is available for result evaluation, a manual anatomical segmentation is considered as a reference. Discrepancy criteria such as overlap, extra pixels and a similarity index are computed between this segmentation and the functional one. The same criteria are also evaluated between the reference and another manual segmentation. Results are comparable for the two types of comparison. Voxels of the other slices are then sorted with the optimal classifier. Generalization theory makes it possible to bound the classification error for this extension. The main advantages of the functional methods are the following: considerable time savings, easy manual intervention, and good robustness and reproducibility.
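The vector-quantization step can be sketched with a minimal K-means over toy time-intensity curves (deterministic seeding for reproducibility). This is only the principle; the thesis's actual pipeline and the Growing Neural Gas variant are more elaborate:

```python
# Minimal K-means vector quantization of time-intensity curves (pure Python).
# Deterministic init: the first k curves serve as seeds.
def kmeans(curves, k, iters=50):
    centroids = [list(c) for c in curves[:k]]
    labels = None
    for _ in range(iters):
        # assign each curve to the nearest centroid (squared Euclidean distance)
        new = [
            min(range(k), key=lambda j: sum((a - b) ** 2 for a, b in zip(c, centroids[j])))
            for c in curves
        ]
        if new == labels:
            break                      # assignments stable: converged
        labels = new
        # recompute each centroid as the mean of its assigned curves
        for j in range(k):
            members = [c for c, l in zip(curves, labels) if l == j]
            if members:
                centroids[j] = [sum(v) / len(members) for v in zip(*members)]
    return labels

# two well-separated families of toy enhancement curves
curves = [[0, 1, 0], [0.1, 1.1, 0], [5, 9, 4], [5.2, 9.1, 4.1]]
print(kmeans(curves, 2))  # -> [0, 0, 1, 1]
```

Each cluster then plays the role of a tissue class (e.g. cortex vs. cavities), with the centroid acting as the class's prototype curve.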
Coupier, Jérôme. "Contribution à la modélisation des doigts longs et développement d’un protocole clinique d’évaluation de la mobilité de la main". Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/229442.
Doctorate in Biomedical and Pharmaceutical Sciences (Medicine)
Chevaillier, Béatrice. "Analyse de données d'IRM fonctionnelle rénale par quantification vectorielle". PhD thesis, Université de Metz, 2010. http://tel.archives-ouvertes.fr/tel-00557235.
Chicheportiche, Alexandre. "Données de base des ions atomiques et moléculaires de l'hélium et de l'argon pour l'optimisation des jets de plasmas froids utilisés dans le domaine biomédical". Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2437/.
The use of cold plasma jets at atmospheric pressure (AP) for biomedical applications is a hot research topic. Such devices produce many active species (photons, radicals, charged particles, electric fields, etc.) that are very useful for biomedical applications. The challenge for the plasma physics community is to tune such plasma devices to abundantly or selectively produce active species identified beforehand for their biological effects. To reach this goal, physico-chemical models have been developed, but they require as input data the transport coefficients (not always available in the literature) of the ions affecting the kinetics of the plasma jet. In this thesis work we are interested in helium and argon plasma jets. Thus, the transport coefficients of He+ and He2+ ions, as well as of Ar+ and Ar2+ ions, have been calculated in their parent gas. The originality of the work concerns the molecular ions (He2+ and Ar2+), which play the main role in the plasma jet dynamics since they are overwhelmingly present at AP. The transport coefficients are closely related to the collision cross sections and thus to the ion-neutral interaction potential curves. For the He+/He interaction system, a 1D quantum method without approximation has been used for the collision cross section calculation, and an optimized Monte Carlo code allowed us to obtain transport coefficients within the experimental error bars. On the other hand, for the molecular ion He2+, two calculation methods have been considered: a 1D quantum method and a hybrid method mixing classical and quantum formulations. A compromise between these two methods finally allowed us to obtain reduced mobilities with a mean relative deviation from experiments of 5% and to extend the latter to higher electric fields. Diffusion coefficients and reaction rates, not available in the literature, have also been calculated. For the argon plasma jet, the transport coefficients for atomic ions in the ground 2P3/2 state and the metastable 2P1/2 state have been obtained, using quantum collision cross sections, up to 1500 Td (1 Td = 10⁻¹⁷ V·cm²), with a mean relative deviation from measurements below 0.2%. Finally, for Ar2+ ions, the hybrid method allowed us to obtain reduced mobilities with a mean relative deviation of 2% from experiments and to calculate the diffusion coefficients and reaction rates not available in the literature.
Lossio-Ventura, Juan Antonio. "Towards the French Biomedical Ontology Enrichment". Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS220/document.
Big Data in the biomedical domain faces a major issue: the analysis of large volumes of heterogeneous data (e.g. video, audio, text, image). Ontologies, conceptual models of reality, can play a crucial role in biomedicine to automate data processing, querying, and the matching of heterogeneous data. Various English-language resources exist, but considerably fewer are available in French, and there is a strong lack of related tools and services to exploit them. Initially, ontologies were built manually. In recent years, a few semi-automatic methodologies have been proposed. Semi-automatic construction/enrichment of ontologies is mostly induced from texts using natural language processing (NLP) techniques. NLP methods have to take into account the lexical and semantic complexity of biomedical data: (1) lexical complexity refers to the complex phrases to take into account; (2) semantic complexity refers to the induction of the sense and context of the terminology. In this thesis, we propose methodologies for the enrichment/construction of biomedical ontologies based on two main contributions, in order to tackle the previously mentioned challenges. The first contribution concerns the automatic extraction of specialized biomedical terms (lexical complexity) from corpora. New ranking measures for single- and multi-word term extraction methods have been proposed and evaluated. In addition, we present the BioTex software, which implements the proposed measures. The second contribution concerns concept extraction and the semantic linkage of the extracted terminology (semantic complexity). This work seeks to induce the semantic concepts of new candidate terms, and to find their semantic links, i.e. the relevant location of new candidate terms in an existing biomedical ontology. We propose a methodology that positions new candidate terms within the MeSH ontology. The experiments conducted on real data highlight the relevance of the contributions.
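One classical ranking measure for multi-word term candidates is the C-value, which rewards longer terms and penalizes candidates that appear nested inside more specific ones. A sketch with made-up frequencies (the measures actually proposed in the thesis differ; this only shows the family of computation):

```python
# Sketch of C-value ranking for multi-word term candidates.
# freqs maps candidate terms to (made-up) corpus counts.
from math import log2

def contains(longer, shorter):
    """True if `shorter` occurs as a contiguous word sequence inside `longer`."""
    return f" {shorter} " in f" {longer} "

def c_value(term, freqs):
    words = term.split()
    nested_in = [t for t in freqs if t != term and contains(t, term)]
    if not nested_in:
        return log2(len(words)) * freqs[term]
    # subtract the average frequency of the longer terms that contain this one
    penalty = sum(freqs[t] for t in nested_in) / len(nested_in)
    return log2(len(words)) * (freqs[term] - penalty)

freqs = {"adverse drug event": 4, "drug event": 6}
print(c_value("drug event", freqs))  # nested term, penalized -> 2.0
print(c_value("adverse drug event", freqs))
```

Terms that mostly occur inside longer candidates are pushed down the ranking, which is exactly the behaviour wanted when extracting specialized terminology.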
Robert, Jean-Jacques. "Vues conceptuelles sur des bases d'information biomédicale : contribution au projet ARIANE". Aix-Marseille 3, 1997. http://www.theses.fr/1997AIX30022.
Pham, Cong Cuong. "Multi-utilisation de données complexes et hétérogènes : application au domaine du PLM pour l’imagerie biomédicale". Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2365/document.
The emergence of Information and Communication Technologies (ICT) in the early 1990s, especially the Internet, made it easy to produce data and disseminate it to the rest of the world. The power of new Database Management Systems (DBMS) and the reduction in storage costs have led to an exponential increase in data volume within enterprise information systems. The large number of correlations (visible or hidden) between data makes them more intertwined and complex. The data are also heterogeneous, as they can come from many sources and exist in many formats (text, image, audio, video, etc.) or at different levels of structuring (structured, semi-structured, unstructured). All companies now have to deal with data sources that are increasingly massive, complex and heterogeneous. The data may either have different denominations or may lack verifiable provenance. Consequently, these data are difficult for other actors to interpret and access; they remain unexploited, or not maximally exploited, for the purpose of sharing and reuse. Data access (or data querying) is, by definition, the process of extracting information from a database using queries to answer a specific question. Extracting information is an indispensable function for any information system, yet it is never easy and always represents a major bottleneck for organizations (Soylu et al. 2013). In an environment of multi-use of complex and heterogeneous data, providing all users with easy and simple access to data becomes more difficult for two reasons. (1) Lack of technical skills: in order to correctly formulate a query, a user must know the structure of the data, i.e. how the data are organized and stored in the database. When the data are large and complex, it is not easy to have a thorough understanding of all the dependencies and interrelationships between data, even for information system technicians. Moreover, this understanding is not necessarily linked to domain competences, and it is therefore very rare that end users have such skills. (2) Different user perspectives: in a multi-use environment, each user introduces their own point of view when adding new data and technical information. Data can be named in very different ways, and data provenance is not sufficiently recorded. Consequently, the data become difficult for other actors to interpret and access, since they do not have a sufficient understanding of the data semantics. The thesis work presented in this manuscript aims to improve the multi-use of complex and heterogeneous data by expert business actors by providing them with semantic and visual access to the data. We find that, although the initial design of databases takes into account the logic of the domain (using the entity-association model, for example), it is common practice to modify this design in order to adapt it to specific technical needs. As a result, the final design often diverges from the original conceptual structure, and there is a clear distinction between the technical knowledge needed to extract data and the knowledge that the expert actors have to interpret, process and produce data (Soylu et al. 2013). Based on bibliographic studies of data management tools, knowledge representation, visualization techniques and Semantic Web technologies (Berners-Lee et al. 2001), and in order to provide easy data access to different expert actors, we propose to use a comprehensive and declarative representation of the data that is semantic, conceptual and integrates domain knowledge close to the expert actors.
Pellay, François-Xavier. "Méthodes d'estimation statistique de la qualité et méta-analyse de données transcriptomiques pour la recherche biomédicale". Thesis, Lille 1, 2008. http://www.theses.fr/2008LIL10058/document.
To understand the biological phenomena taking place in a cell under physiological or pathological conditions, it is essential to know the genes that it expresses. Measuring gene expression can be done with DNA chip technology, in which thousands of probes laid out on a chip measure the relative abundance of the genes expressed in the cell. So-called pangenomic microarrays are supposed to cover all existing protein-coding genes, that is to say currently around thirty thousand for human beings. The measurement, analysis and interpretation of such data pose a number of problems, and the analytical methods used determine the reliability and accuracy of the information obtained with microarray technology. The aim of this thesis is to define methods to control the measurements, improve the analysis and deepen the interpretation of microarrays in order to optimize their use, and to apply these methods to the transcriptome analysis of juvenile myelomonocytic leukemia patients, to improve diagnosis and understand the biological mechanisms behind this rare disease. We thereby developed and validated, through several independent studies, a quality control program for microarrays, ace.map QC, a software tool that improves the biological interpretation of microarray data based on gene ontologies, and a visualization tool for the global analysis of signaling pathways. Finally, combining the different approaches described, we developed a method to obtain reliable biological signatures for diagnostic purposes.
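Gene-ontology-based interpretation of microarray results typically rests on an over-representation test: given N genes in total, K annotated with a term, and k of the n selected genes carrying that annotation, how surprising is k? A minimal hypergeometric sketch with made-up counts (the thesis's tools are of course richer than this one formula):

```python
# Sketch of the hypergeometric over-representation test used in
# ontology-based enrichment analysis.
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): probability of drawing at
    least k annotated genes when sampling n genes from N, K of which are annotated."""
    return sum(comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)) / comb(N, n)

# made-up numbers: 5 of 10 genes carry the annotation, 3 of the 4 selected do
print(hypergeom_pvalue(N=10, K=5, n=4, k=3))  # 55/210 ≈ 0.262
```

A small p-value suggests the ontology term is enriched in the selected gene list rather than present by chance.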
Savinaud, Mickaël. "Recalage de flux de données cinématiques pour l'application à l'imagerie optique". PhD thesis, Ecole Centrale Paris, 2010. http://tel.archives-ouvertes.fr/tel-00545424.
Texto completoZanella, Calzada Laura A. "Biomedical Event Extraction Based on Transformers and Knowledge Graphs". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0235.
Biomedical event extraction can be divided into three main subtasks: (1) biomedical event trigger detection, (2) biomedical argument identification, and (3) event construction. In this work, for the first subtask, we analyze a set of transformer language models commonly used in the biomedical domain to evaluate and compare their capacity for event trigger detection. We fine-tune the models using seven manually annotated corpora to assess their performance in different biomedical subdomains. SciBERT emerged as the highest-performing model, presenting a slight improvement over the baseline models. For the second subtask, we construct a knowledge graph (KG) from the biomedical corpora and integrate its KG embeddings into SciBERT to enrich its semantic information. We demonstrate that adding the KG embeddings to the model improves argument identification performance by around 20%, and by around 15% compared to two baseline models. For the third subtask, we use a generative model, ChatGPT, driven by prompts, to construct the final set of extracted events. Our results suggest that fine-tuning a transformer model pre-trained from scratch on biomedical and general data makes it possible to detect event triggers and identify arguments across different biomedical subdomains, thereby improving generalization. Furthermore, the integration of KG embeddings into the model can significantly improve the performance of biomedical event argument identification, outperforming the baseline models.
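KG embeddings of the kind integrated into a transformer are often trained with translation-based models such as TransE, where a triple (h, r, t) scores well when vector(h) + vector(r) is close to vector(t). A toy sketch with hand-picked two-dimensional vectors and hypothetical entity names (real embeddings are learned, and the thesis does not necessarily use TransE):

```python
# Sketch of TransE-style knowledge-graph embedding scoring:
# score(h, r, t) = -||h + r - t||, higher (closer to 0) is more plausible.
def score(h, r, t, emb):
    return -sum((a + b - c) ** 2 for a, b, c in zip(emb[h], emb[r], emb[t])) ** 0.5

emb = {
    "GeneX": [1.0, 0.0],
    "associatedWith": [0.0, 1.0],
    "IntellectualDisability": [1.0, 1.0],  # ≈ GeneX + associatedWith
    "Headache": [3.0, -2.0],
}

candidates = ["IntellectualDisability", "Headache"]
best = max(candidates, key=lambda t: score("GeneX", "associatedWith", t, emb))
print(best)  # -> IntellectualDisability
```

In an architecture like the one described above, the entity vector (here 2-dimensional, in practice hundreds of dimensions) would simply be concatenated to the token's contextual embedding before the classification layer.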
Pierret, Jean-Dominique. "Méthodologie et structuration d'un outil de découverte de connaissances basé sur la littérature biomédicale : une application basée sur l'exploitation du MeSH". Toulon, 2006. http://tel.archives-ouvertes.fr/tel-00011704.
The information available in bibliographic databases is dated, validated by a long process, and therefore not very innovative. Usually, bibliographic databases are consulted through Boolean queries, and the result of a query is a set of already-known references that bring no additional novelty. In 1985, Don Swanson proposed an original method to draw innovative information out of bibliographic databases. His reasoning is based on the systematic use of the biomedical literature to uncover the latent connections between different well-established pieces of knowledge. He demonstrated the unsuspected potential of bibliographic databases for knowledge discovery. The value of his work did not lie in the nature of the available information but in the methodology he used. This general methodology was mainly applied to validated and structured information, that is, bibliographic information. We propose to test the robustness of Swanson's theory by implementing methods inspired by it. These methods led to the same conclusions as Don Swanson's. We then explain how we developed a knowledge discovery system based on the literature available from public biomedical information sources.
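Swanson's reasoning is often summarized as the ABC model: if the literature links A with B and B with C, but A and C never co-occur, then A–C is a candidate hidden connection. A minimal sketch over undirected co-occurrence pairs (the pairs echo Swanson's classic fish-oil/Raynaud example; a real system would mine them from MeSH-indexed records):

```python
# Sketch of Swanson's ABC literature-based discovery over co-occurrence pairs.
def neighbours(pairs, term):
    """Terms that co-occur with `term` in at least one article."""
    return {y for x, y in pairs if x == term} | {x for x, y in pairs if y == term}

def abc_candidates(pairs, a):
    """C-terms reachable through some intermediate B, minus direct neighbours of A."""
    direct = neighbours(pairs, a)
    cands = set()
    for b in direct:
        cands |= neighbours(pairs, b)
    return cands - direct - {a}

pairs = {
    ("fish oil", "blood viscosity"),
    ("blood viscosity", "raynaud syndrome"),
    ("fish oil", "platelet aggregation"),
}
print(abc_candidates(pairs, "fish oil"))  # -> {'raynaud syndrome'}
```

The candidate link (here fish oil and Raynaud's syndrome, never directly co-mentioned in this toy corpus) is exactly the kind of hypothesis Swanson's method surfaces for expert validation.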
Lindberg, Arvid. "Development of rigid polarimetric endoscope for early detection of cancer in vivo". Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX010.
Texto completoEarly diagnosis of a cancerous lesion and complete surgical resection of the diseased areas are both crucial points in order to greatly improve the chances for recovery of a patient. However, early detection of cancer is a very difficult task. It relies on random biopsies of suspicious areas which are not easy to identify at this stage of the disease using conventional imaging techniques (visible imaging, ultrasound, magnetic resonance, X-ray scanner, positron emission tomography). In addition, the correct evaluation of surgical resection margins remains often very difficult or even impossible in some cases.Polarimetric imaging is a promising technique for the early detection of cancerous lesions on the surface of the organs and for a better definition of the resection limits during surgery. Biomedical research activity, conducted within the 'Applied Optics and Polarimetry' team of the LPICM, focuses on the development of Mueller polarimetric imaging systems for improving the management of epithelial cancers, also known as carcinomas, which represent 80-90% of all cancers. In this regard the LPICM leads a project funded by the “Institut National du Cancer (INCa)”, on the use of Mueller polarimetric imaging for improving the management of cervical cancer at different stages of its evolution. At present an extensive series of ex vivo measurements is in progress in three different hospitals of Paris (Institut Gustave Roussy, Kremlin Bicêtre and Institut Mutualiste Montsouris). The final goal of this study is to evaluate the performance of Mueller polarimetric imaging technique in terms of sensitivity and specificity, while using an interpretation of corresponding histology slides by pathologists as a “golden standard” of cancer diagnostics. Ex vivo measurements provide a precise knowledge of the systematic effects which can negatively affect image quality. 
Hence, the results of this study represent a good starting point for in vivo applications of the polarimetric imaging technique. Within the framework of the INCa project, in vivo analysis of the uterine cervix is planned, using a classical colposcope modified to acquire Mueller polarimetric images. The endoscope is another medical instrument also used to detect cancerous or precancerous lesions in the internal cavities of the human body (esophagus, colon, rectum, etc.). The proposed thesis subject consists in developing a Mueller polarimetric rigid endoscope and evaluating its performance in terms of sensitivity and specificity. The PhD student's work will involve optical instrumentation, data acquisition, signal processing and statistical evaluation of the technique's performance. Thus, the subject of this thesis lies at the interface between physics and medical diagnostics and shows strong potential for industrial development with a significant societal impact.
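The Mueller formalism behind the imaging systems described above treats an optical element (or a tissue sample) as a 4×4 matrix that maps an input Stokes vector to an output Stokes vector. A minimal sketch; the ideal-polarizer matrix used here is a textbook example, not a matrix taken from this thesis:

```python
def apply_mueller(M, S):
    """Multiply a 4x4 Mueller matrix M by a 4-element Stokes vector S."""
    return [sum(M[i][j] * S[j] for j in range(4)) for i in range(4)]

# Textbook Mueller matrix of an ideal linear polarizer (transmission axis at 0 degrees).
POLARIZER = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

unpolarized = [1.0, 0.0, 0.0, 0.0]  # fully unpolarized light of unit intensity
out = apply_mueller(POLARIZER, unpolarized)
# Half the intensity is transmitted and the output is fully linearly polarized.
```

A full Mueller polarimeter estimates all 16 matrix entries of the sample by probing it with several input polarization states and analyzing the corresponding outputs.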
Letendre-Jauniaux, Mathieu. "Ajout de degrés de liberté à un appareil d'imagerie optique pour acquisition de données destinées à la reconstruction 3D par tomographie optique diffuse". Mémoire, Université de Sherbrooke, 2013. http://hdl.handle.net/11143/6191.
Brancotte, Bryan. "Agrégation de classements avec égalités : algorithmes, guides à l'utilisateur et applications aux données biologiques". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112184/document.
The rank aggregation problem is to build a consensus among a set of rankings (ordered elements). Although this problem has numerous applications (consensus among user votes, consensus between results ordered differently by different search engines, ...), computing an optimal consensus is rarely feasible in real applications (the problem is NP-hard). Many approximation algorithms and heuristics have therefore been designed. However, their performance (running time and quality of the produced consensus) varies widely and depends on the datasets to be aggregated. Several studies have compared these algorithms, but they have generally not considered the case (yet common in real datasets) where elements can be tied in rankings (elements at the same rank). Choosing a consensus algorithm for a given dataset is therefore a particularly important issue to study (many applications), and it is an open problem in the sense that none of the existing studies address it. More formally, a consensus ranking is a ranking that minimizes the sum of the distances between itself and the input rankings. Like much of the state of the art, our studies consider the generalized Kendall-Tau distance and its variants. Specifically, this thesis makes three contributions. First, we propose new complexity results for cases encountered in real data, where rankings may be incomplete and multiple elements may be tied. We isolate the different "features" that can explain variations in the results produced by the aggregation algorithms (for example, using the generalized Kendall-Tau distance or its variants, or pre-processing the datasets with unification or projection). We propose a guide that characterizes the context and needs of users in order to guide them in choosing both a pre-processing of their datasets and the distance used to compute the consensus. Finally, we adapt existing algorithms to this new context.
Second, we evaluate these algorithms on a large and varied set of datasets, both real and synthetic, reproducing real features such as similarity between rankings, the presence of ties and different pre-processings. This large evaluation comes with the proposal of a new method to generate synthetic data with similarities, based on Markov-chain modeling. The evaluation led to the isolation of dataset features that impact the performance of the aggregation algorithms, and to the design of a guide that characterizes users' needs and advises them in choosing the algorithm to use. A web platform to replicate and extend these analyses is available (rank-aggregation-with-ties.lri.fr). Finally, we demonstrate the value of the rank aggregation approach in two use cases. We provide a tool for reformulating free-text user queries through biomedical terminologies, querying biological databases with each reformulation, and ultimately producing a consensus of the results obtained (conqur-bio.lri.fr). We compare the results to the reference platform and show a clear improvement in result quality. We also compute consensus between lists of workflows established by experts in the context of similarity between scientific workflows. We note that the computed consensus agrees with the experts in a very large majority of cases.
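The generalized Kendall-Tau distance with ties that this thesis studies counts, for each pair of elements, whether two rankings order them in strictly opposite ways (cost 1) or tie them in exactly one of the two rankings (cost p). A minimal sketch, assuming rankings are given as element-to-bucket maps and an illustrative tie penalty p = 0.5; the "pick-a-perm" heuristic shown (return the best input ranking, a classic 2-approximation) is a standard baseline, not necessarily an algorithm from the thesis:

```python
from itertools import combinations

def kendall_tau_ties(r1, r2, p=0.5):
    """Generalized Kendall-Tau distance between two rankings with ties.
    r1, r2: dicts over the same elements, mapping element -> bucket index
    (smaller index = ranked higher; equal indices = tied)."""
    dist = 0.0
    for x, y in combinations(r1, 2):
        a, b = r1[x] - r1[y], r2[x] - r2[y]
        if a * b < 0:                # strictly opposite orders
            dist += 1.0
        elif (a == 0) != (b == 0):   # tied in exactly one ranking
            dist += p
    return dist

def pick_a_perm(rankings, p=0.5):
    """2-approximation of the consensus: return the input ranking that
    minimizes the sum of distances to all input rankings."""
    return min(rankings,
               key=lambda c: sum(kendall_tau_ties(c, r, p) for r in rankings))
```

For example, {'a': 0, 'b': 0, 'c': 1} ties a and b, so comparing it with the strict order a < b < c costs only the tie penalty p.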
Qin, Yingying. "Early breast anomalies detection with microwave and ultrasound modalities". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG058.
Imaging of the breast for early detection of tumors is studied by combining microwave (MW) and ultrasound (US) data. No registration is enforced, since a freely hanging breast is considered. A first approach uses prior information on tissue boundaries obtained from US reflection data. The regularization incorporates the assumption that two neighboring pixels should exhibit similar MW properties when not separated by a boundary, while a jump is allowed otherwise. This is enforced in the distorted Born iterative and contrast source inversion methods. A second approach involves deterministic edge-preserving regularization via auxiliary variables indicating whether or not a pixel is on an edge, the edge markers being shared by the MW and US parameters. These are jointly optimized from the latest parameter profiles and guide the next optimization as regularization-term coefficients. Alternating minimization updates the US contrast, the edge markers and the MW contrast in turn. A third approach involves convolutional neural networks. The estimated contrast current and scattered field are the inputs. A multi-stream structure is employed to feed in the MW and US data. The network outputs the maps of MW and US parameters, enabling real-time operation. Beyond the regression task, a multi-task learning strategy is used with a classifier that associates each pixel with a tissue type to yield a segmentation image. A weighted loss assigns a higher penalty to pixels in tumors when wrongly classified. A fourth approach involves a Bayesian formalism in which the joint posterior distribution is obtained via Bayes' rule; this true distribution is then approximated by a free-form separable law for each set of unknowns to obtain the estimate sought. All these solution methods are illustrated and compared on a wealth of simulated data, on simple synthetic models and on 2D cross-sections of anatomically realistic, MRI-derived numerical breast phantoms in which small artificial tumors are inserted.
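The weighted classification loss described above can be sketched as a per-pixel binary cross-entropy in which tumor pixels carry a larger weight; the weight value of 5.0 and the function name are illustrative assumptions, not parameters from the thesis:

```python
import math

def weighted_pixel_loss(probs, labels, tumor_weight=5.0):
    """Binary cross-entropy averaged over pixels. probs[i] is the predicted
    tumor probability of pixel i; labels[i] is 1 for tumor, 0 otherwise.
    Errors on tumor pixels are penalized tumor_weight times more heavily."""
    total = 0.0
    for p, y in zip(probs, labels):
        if y == 1:
            total += -tumor_weight * math.log(p)
        else:
            total += -math.log(1.0 - p)
    return total / len(probs)

# A missed tumor pixel costs far more than a false alarm of equal confidence.
miss = weighted_pixel_loss([0.1], [1])   # tumor pixel predicted with prob 0.1
alarm = weighted_pixel_loss([0.9], [0])  # background pixel predicted as tumor
```

This asymmetry pushes the classifier toward high tumor sensitivity, which is the clinically important direction for margin assessment.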
Maschino, Emeric. "Fusion de modèles neuroanatomiques tridimensionnels". Paris 6, 2005. http://www.theses.fr/2005PA066528.
Cassar, Quentin. "Terahertz radiations for breast tumour recognition". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0032.
The failure to accurately delineate breast tumor margins during breast-conserving surgery results in a 20% re-excision rate. Consequently, there is a clear need for an operating-room device that can precisely define breast tumor margins intraoperatively in a simple, fast and inexpensive manner. This manuscript reports investigations into the ability of terahertz radiation to recognize malignant breast lesions within freshly excised breast volumes. Preliminary work on terahertz far-field spectroscopy highlighted a contrast in refractive index of about 8% between healthy fibrous tissue and breast tumors over a spectral window spanning from 300 GHz to 1 THz. The origin of the contrast was explored; the results seem to indicate that the dynamics of quasi-free water molecules may be a key factor for demarcation. On this basis, different methods for tissue segmentation based on refractive-index maps were investigated. A cancer sensitivity of 80% was reported while preserving a specificity of 82%. Finally, these pilot studies guided the design of a BiCMOS-compatible, near-field, resonator-based imager operating at 560 GHz, sensitive to permittivity changes over the breast tissue surface.
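Sensitivity and specificity figures like those quoted above come from a pixel-wise comparison of a segmentation against histology ground truth. A minimal sketch, where the simple refractive-index threshold and the toy values are illustrative assumptions (the thesis investigated richer segmentation methods):

```python
def threshold_segment(n_map, threshold):
    """Label a pixel as tumor when its refractive index reaches the threshold."""
    return [n >= threshold for n in n_map]

def sensitivity_specificity(pred, truth):
    """pred/truth: per-pixel booleans, True = tumor."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum(not p and not t for p, t in zip(pred, truth))
    fn = sum(not p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)

# Toy refractive-index map: tumor tissue has a ~8% higher index than fibrous tissue.
n_map = [1.95, 2.12, 2.10, 1.94, 1.96]
truth = [False, True, True, False, False]
sens, spec = sensitivity_specificity(threshold_segment(n_map, 2.05), truth)
```

Sweeping the threshold trades sensitivity against specificity, which is how operating points such as 80%/82% are selected.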
Curé, Olivier. "Siam : système intéractif d'automédication multimédia". Paris 5, 1999. http://www.theses.fr/1999PA05S019.
Boudaoud, Sofiane. "Analyse de la variabilité de forme des signaux : Application aux signaux électrophysiologiques". Phd thesis, Université de Nice Sophia-Antipolis, 2006. http://tel.archives-ouvertes.fr/tel-00377428.
In Chapter 2, we address the objective characterization of tinnitus, a phantom auditory sensation. Indeed, a major problem is the absence of an objective criterion to characterize it. To this end, we study the composite spontaneous activity (CSA) of the auditory nerve and the evoked potentials (EPs) of auditory relays in the presence of salicylate, a tinnitus generator, in the guinea pig. The first part of the work presents a model of CSA generation. This model is used to test, in simulation, possible scenarios of neurosensory alterations in the presence of salicylate. In addition to the spectral index described in the literature, we propose a similarity criterion on the amplitude distribution of the CSA to measure these alterations. The second part of the chapter studies the temporal variability of the EPs across several auditory relays in the presence of salicylate.
In Chapter 3, we present applications of pathology detection based on the shape analysis of a specific ECG component, the P wave. The pathologies concerned are atrial fibrillation and sleep apnea.
Minard, Anne-Lyse. "Extraction de relations en domaine de spécialité". Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00777749.
Benferhat, Djamel. "Conception d'un système de communication tolérant la connectivité intermittente pour capteurs mobiles biométriques - Application à la supervision médicale de l'activité cardiaque de marathoniens". Phd thesis, Université de Bretagne Sud, 2013. http://tel.archives-ouvertes.fr/tel-00904627.
Gautier, Isabelle. "Evolution quantitative et qualitative des protocoles d'essais cliniques présentés devant un comité d'éthique français". Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0660.
Methodological quality in clinical research is mandatory to ensure the reliability of medical experiments, with benefits for both practitioners and patients. This PhD thesis aims at measuring the quality of therapeutic trials submitted to the Ethics Committee of the South-East Region II, and its evolution over several years. Two comprehensive cross-sectional studies were conducted. The first explores the field of pediatric research and aims at measuring the impact of the introduction of the European Pediatric Regulation in 2007 on the evolution of the quantity and quality of trials in this field, given the low number of studies in this population. The second analyzes the quality of randomized controlled trials using the Jadad score and seeks to identify the elements that influence it. These studies were conducted using the protocols submitted to the Ethics Committee, not a literature analysis. The concept of quality was first studied on the basis of ethical and scientific reliability. The various assessment tools proposed by experts to measure quality were appraised, which allowed the selection of the scale most methodologically adapted to this study. Conclusion: we show that the level of quality observed for pediatric trials is high but was not influenced by the introduction of the European Regulation, which could, on the other hand, have led to an important increase in the number of pediatric trials. Regarding randomized controlled trials, a multivariate analysis identified two statistically significant markers associated with a high protocol quality score: the multicentric character of the research, and drug trials.
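The Jadad score used in the second study is a standard 0-5 scale for randomized controlled trials: one point each for reported randomization, double blinding, and an account of withdrawals and dropouts, one extra point each for an appropriate randomization or blinding method, and one deducted for each inappropriate method. A minimal sketch of that scoring rule; the function and parameter names are mine, not from the thesis:

```python
def jadad_score(randomized, rand_method_appropriate,
                double_blind, blind_method_appropriate,
                withdrawals_described):
    """Compute a Jadad score (0-5).
    *_appropriate takes True (appropriate), False (inappropriate),
    or None (method not described in the protocol)."""
    score = 0
    if randomized:
        score += 1
        if rand_method_appropriate is True:
            score += 1
        elif rand_method_appropriate is False:
            score -= 1
    if double_blind:
        score += 1
        if blind_method_appropriate is True:
            score += 1
        elif blind_method_appropriate is False:
            score -= 1
    if withdrawals_described:
        score += 1
    return max(score, 0)  # the scale does not go below zero
```

For instance, a trial described as randomized and reporting withdrawals, but with no randomization method given and no blinding, would score 2.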
Neumann, Markus. "Automatic multimodal real-time tracking for image plane alignment in interventional Magnetic Resonance Imaging". Phd thesis, Université de Strasbourg, 2014. http://tel.archives-ouvertes.fr/tel-01038023.
Tassé, Anne Marie. "La recherche internationale en génétique et l’utilisation secondaire des données : entre dissociation et harmonisation". Thèse, 2015. http://hdl.handle.net/1866/15850.
The study of polymorphisms and the multifactorial aspects of health determinants draws many researchers toward population-based research in genetics and genomics. The research method accompanying this field, however, requires the collection and analysis of a large number of biological samples and associated data, which fosters the development of biobanks. Biobanks, which contain the personal and health data of thousands of participants, are therefore an essential resource for studying the complex etiology of multifactorial diseases and for increasing the speed and reliability of results. To optimize the use of these resources, many researchers now combine information from different biobanks to create "virtual" mega-cohorts of research participants. Thus, any attempt to share data for international research depends on the legal and ethical right to use such data. However, the right to use the personal, medical and genetic data of participants in the context of international research is subject to complex and comprehensive legal and ethical frameworks. This complexity is exacerbated when research participants are deceased. Based on a review of the individualistic interpretation of the notion of informed consent and a constructivist approach to trust and autonomy, this thesis sits at the crossroads of research, law and ethics. It aims to propose a model promoting the legal and ethical harmonization of data for international genetic research.
Bélaise, Colombe. "Estimation des forces musculaires du membre supérieur humain par optimisation dynamique en utilisant une méthode directe de tir multiple". Thèse, 2018. http://hdl.handle.net/1866/21856.