Theses on the topic "Domaines de données"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Browse the 50 best theses for your research on the topic "Domaines de données".
Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Alia, Mourad. "Canevas de domaines pour l'intégration de données". PhD thesis, Grenoble INPG, 2005. http://tel.archives-ouvertes.fr/tel-00010341.
Lenfant, Nicolas. "L'interactome des domaines PDZ de Caenorhabditis elegans". Thesis, Aix-Marseille 2, 2010. http://www.theses.fr/2010AIX22038/document.
PDZ domains allow the organization of molecular networks responsible for cellular functions essential to multicellularity, such as polarization or the transduction of extracellular signals. Exploration of this network by two-hybrid screening revealed a functional diversity among the ligands of Caenorhabditis elegans's PDZ domains. New putative functions were observed through GO terms, and an unexpected proportion of internal ligands appeared, confirmed by co-immunoprecipitation. We then functionally validated in silico the groups of co-expressed interactions forming our interactome by integrating expression-profile data. Finally, this work enabled the construction of an exploratory tool, PIPE (PDZ Interacting Protein Explorer), which allows all PDZ domains to be screened for interactions with a protein of interest and has already revealed many additional interactions between PDZ domains and ligands.
Nasser, Bassem. "Organisation virtuelle : gestion de politique de contrôle d'accès inter domaines". Toulouse 3, 2006. http://www.theses.fr/2006TOU30286.
Information technology offers a flexible support on which new organisational and collaboration structures, called Virtual Organisations (VO), can be built. Contrary to a classical organisation, the VO has no physical presence; its boundaries are flexible and even fuzzy, defined by its constituent members. These boundaries are defined within each organisation according to its strategy on how its services should be supplied. The deployment of a Virtual Organisation requires the definition of a security policy that indicates "who can do what" at the user-resource level as well as at the administration level. This research work treats access control issues within the VO, mainly addressing how to define a trans-organisational access control policy, how to specify a collaboration access control policy where entities (users and resources) are managed by independent partner organisations, and how to dissociate the partner's internal structure from the VO structure to support multiple VOs simultaneously. For an unambiguous specification of the access control policy, formal security models are of particular interest, since formal tools may serve to reason about and verify the policy's coherence. We argue that OrBAC (Organisation-Based Access Control) is an appropriate model for the VO environment. The major contribution of this thesis is a new access control and administration model for virtual organisations built on top of OrBAC. A prototype is implemented to validate the proposal; it integrates the "Identity Federation" notion (using Shibboleth) and an authorization infrastructure (using a modified PERMIS) to enforce access control within a Virtual Organisation.
Li, Yubing. "Analyse de vitesse par migration quantitative dans les domaines images et données pour l’imagerie sismique". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM002/document.
Active seismic experiments are widely used to characterize the structure of the subsurface. Migration Velocity Analysis techniques aim at recovering the background velocity model controlling the kinematics of wave propagation. The first step consists of obtaining the reflectivity images by migrating observed data in a given macro velocity model. The estimated model is then updated, assessing the quality of the background velocity model through the image coherency or focusing criteria. Classical migration techniques, however, do not provide a sufficiently accurate reflectivity image, leading to incorrect velocity updates. Recent investigations propose to couple the asymptotic inversion, which can remove migration artifacts in practice, to velocity analysis in the subsurface-offset domain for better robustness. This approach requires large memory and cannot be currently extended to 3D. In this thesis, I propose to transpose the strategy to the more conventional common-shot migration based velocity analysis. I analyze how the approach can deal with complex models, in particular with the presence of low velocity anomaly zones or discontinuous reflectivities. Additionally, it requires less memory than its counterpart in the subsurface-offset domain. I also propose to extend Inversion Velocity Analysis to the data-domain, leading to a more linearized inverse problem than classic waveform inversion. I establish formal links between data-fitting principle and image coherency criteria by comparing the new approach to other reflection-based waveform inversion techniques. The methodologies are developed and analyzed on 2D synthetic data sets.
Leprettre, Benoit. "Reconnaissance de signaux sismiques d'avalanches par fusion de données estimées dans les domaines temps, temps-fréquence et polarisation". Université Joseph Fourier (Grenoble ; 1971-2015), 1996. http://www.theses.fr/1996GRE10182.
Texto completoChaix, Christophe. "Climatologie hivernale des versants alpins (Savoie) : types de temps, température et vents : analyse des données météorologiques des domaines skiables". Chambéry, 2007. http://www.theses.fr/2007CHAML028.
In mountainous areas, the variability of climatic parameters is still not well known at small scales. Indeed, systematic measurements of temperature, humidity and wind are often limited by difficult climatic conditions, especially in winter. But since the production of artificial snow in winter sports resorts began, it has become possible to use the very dense meteorological network of anemometers and probes installed for that purpose. This PhD thesis exploits data from selected French alpine sites in the Savoie area (Les Menuires, Val Thorens, Aussois, Valloire). A statistical and exploratory data analysis addresses the recurring questions of mountain winter climatology, mainly the influence of large-scale meteorology and of mountain topography on the small-scale spatial and temporal variability of temperature and wind. This research deals with hourly and winter-mean temperature behaviour, temperature inversions, and thermal gradients according to the weather-type classification of the Savoie area. A new model of the evolution of winter thermal breezes is proposed, after identifying previously unknown mechanisms of diurnal katabatic breezes. Lastly, we propose practical applications for the management of ski areas and artificial snow.
Raynaud, Jean-Louis. "Exploitation simultanée des données spatiales et fréquentielles dans l'identification modale linéaire et non-linéaire". Besançon, 1986. http://www.theses.fr/1987BESA2013.
Texto completoAlborzi, Seyed Ziaeddin. "Automatic Discovery of Hidden Associations Using Vector Similarity : Application to Biological Annotation Prediction". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0035/document.
This thesis presents: 1) the development of a novel approach to find direct associations between pairs of elements linked indirectly through various common features, 2) the use of this approach to directly associate biological functions to protein domains (ECDomainMiner and GODomainMiner) and to discover domain-domain interactions, and finally 3) the extension of this approach to comprehensively annotate protein structures and sequences. ECDomainMiner and GODomainMiner are two applications that discover new associations of EC numbers and GO terms, respectively, with protein domains. They find a total of 20,728 non-redundant EC-Pfam and 20,318 GO-Pfam associations, with F-measures of more than 0.95 with respect to a "Gold Standard" test set extracted from InterPro. Compared to around 1,500 manually curated associations in InterPro, ECDomainMiner and GODomainMiner infer a 13-fold increase in the number of available EC-Pfam and GO-Pfam associations. These function-domain associations are then used to annotate thousands of protein structures and millions of protein sequences whose domain composition is known but which currently lack experimental functional annotations. Using the inferred function-domain associations and taxonomy information, thousands of annotation rules have been generated automatically. These rules have then been used to annotate millions of protein sequences in the TrEMBL database.
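The core idea above, scoring candidate domain-function pairs by the similarity of the feature profiles they share, can be sketched with plain cosine similarity (a toy illustration: the protein, domain and EC identifiers below are invented, and the actual ECDomainMiner scoring is more elaborate):

```python
import numpy as np

# Binary profiles over hypothetical proteins P1..P4: which proteins
# carry each Pfam domain, and which are annotated with each EC number.
domain_profiles = {"PF_A": np.array([1, 1, 0, 0]),
                   "PF_B": np.array([0, 0, 1, 1])}
ec_profiles = {"EC_1": np.array([1, 1, 0, 0]),   # co-occurs with PF_A
               "EC_2": np.array([0, 1, 1, 1])}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Score every candidate domain-function pair by profile similarity.
scores = {(d, e): cosine(dv, ev)
          for d, dv in domain_profiles.items()
          for e, ev in ec_profiles.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # ('PF_A', 'EC_1') 1.0
```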
Nguyen, Huy Hoang. "Equations de Navier-Stokes dans des domaines non bornés en dimension trois et problèmes elliptiques à données dans L1". Pau, 2008. http://www.theses.fr/2008PAUU3018.
In this thesis, we deal with problems that are directly or indirectly related to fluid mechanics, using weighted Sobolev spaces. The first part contains three chapters mainly concerned with the regularity of solutions of the stationary Navier-Stokes equations for incompressible viscous fluids in three-dimensional exterior domains or in the whole three-dimensional space, with additional results on the Oseen equations, the characterization of the kernel of the Laplace operator with Dirichlet boundary conditions in n-dimensional exterior domains, and the characterization of the kernel of the Oseen system in three-dimensional exterior domains. In the second part, we deal with certain properties of the gradient, divergence and rotational operators, with applications to some elliptic problems in the whole space and in the half-space with L1 data.
Nguyen, Quoc-Hung. "Théorie non linéaire du potentiel et équations quasilinéaires avec données mesures". PhD thesis, Université François Rabelais - Tours, 2014. http://tel.archives-ouvertes.fr/tel-01063365.
Texto completoDe, Moliner Anne. "Estimation robuste de courbes de consommmation électrique moyennes par sondage pour de petits domaines en présence de valeurs manquantes". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK021/document.
In this thesis, we address the problem of robust estimation of mean or total electricity consumption curves by sampling in a finite population, for the entire population and for small areas. We are also interested in estimating mean curves by sampling in the presence of partially missing trajectories. Indeed, many studies carried out in the French electricity company EDF, for marketing or power grid management purposes, are based on the analysis of mean or total electricity consumption curves at a fine time scale, for different groups of clients sharing common characteristics. Because of privacy issues and financial costs, it is not possible to measure the electricity consumption curve of each customer, so these mean curves are estimated using samples. In this thesis, we extend the work of Lardin (2012) on mean curve estimation by sampling, focusing on specific aspects of this problem such as robustness to influential units, small area estimation, and estimation in the presence of partially or totally unobserved curves. In order to build robust estimators of mean curves, we adapt the unified approach to robust estimation in finite populations proposed by Beaumont et al. (2013) to the context of functional data. To that purpose we propose three approaches: application of the usual method for real variables on discretised curves, projection on functional spherical principal components or on a wavelet basis, and functional truncation of conditional biases based on the notion of depth. These methods are tested and compared on real datasets, and mean squared error estimators are also proposed. Secondly, we address the problem of small area estimation for functional means or totals.
We introduce three methods: a unit-level linear mixed model applied to the scores of a functional principal components analysis or to wavelet coefficients, functional regression, and aggregation of individual curve predictions by functional regression trees or functional random forests. Robust versions of these estimators are then proposed, following the approach to robust estimation based on conditional bias presented before. Finally, we suggest four estimators of mean curves by sampling in the presence of partially or totally unobserved trajectories. The first is a reweighting estimator whose weights are determined by temporal non-parametric kernel smoothing adapted to the context of finite populations and missing data; the others rely on imputation of missing data. Missing parts of the curves are determined either by the smoothing estimator presented before, by nearest-neighbour imputation adapted to functional data, or by a variant of linear interpolation that takes into account the mean trajectory of the entire sample. Variance approximations are proposed for each method, and all the estimators are compared on real datasets for various missing-data scenarios.
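The simplest of the imputation schemes mentioned above, linear interpolation of the missing parts of a curve, can be sketched as follows (toy data; the thesis variant additionally adjusts for the mean trajectory of the sample, which is omitted here):

```python
import numpy as np

def impute_linear(curve):
    """Fill NaN gaps in a sampled curve by linear interpolation
    between the nearest observed points."""
    curve = np.asarray(curve, dtype=float)
    t = np.arange(len(curve))
    missing = np.isnan(curve)
    curve[missing] = np.interp(t[missing], t[~missing], curve[~missing])
    return curve

print(impute_linear([1.0, np.nan, 3.0, np.nan, np.nan, 9.0]))
# [1. 2. 3. 5. 7. 9.]
```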
Marchand, Morgane. "Domaines et fouille d'opinion : une étude des marqueurs multi-polaires au niveau du texte". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112026/document.
In this thesis, we study the adaptation of a text-level opinion classifier across domains. People express their opinions differently depending on the subject of the conversation: the same word in two different domains can refer to different objects or carry another connotation, and if such words are not detected they lead to classification errors. We call these words or bigrams "multi-polarity markers": their presence in a text signals a polarity that differs according to the domain of the text. Their study is the subject of this thesis. These markers are detected using a chi-squared test when labels exist in both targeted domains. We also propose a semi-supervised detection method for the case where labels exist in only one domain, using a collection of automatically filtered pivot words in order to ensure a stable polarity across domains. We have also checked the linguistic interest of the selected words with a manual evaluation campaign. The validated words can be: a word of context, a word expressing an opinion, a word explaining an opinion, or a word referring to the evaluated object. Our study also shows that the causes of changing polarity are of three kinds: changing meaning, changing object, or changing use. Finally, we have studied the influence of multi-polarity markers on opinion classification at the text level in three different cases: adaptation from a source domain to a target domain, multi-domain corpora, and open-domain corpora. The results of our experiments show that the potential improvement is bigger when the initial transfer was difficult. In favorable cases, we improve accuracy by up to five points.
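The chi-squared detection step described above can be illustrated roughly as follows (the counts and threshold are invented for the sketch; the thesis works on real labelled corpora):

```python
from scipy.stats import chi2_contingency

# Toy contingency table for one candidate marker:
# rows are domains A and B, columns are positive / negative documents
# containing the word. A marker whose polarity flips across domains
# yields a significant chi-squared statistic.
def is_multi_polarity(pos_a, neg_a, pos_b, neg_b, alpha=0.05):
    table = [[pos_a, neg_a], [pos_b, neg_b]]
    _, p_value, _, _ = chi2_contingency(table)
    return bool(p_value < alpha)

# A hypothetical word praised in domain A but criticised in domain B:
print(is_multi_polarity(80, 20, 15, 85))  # True
print(is_multi_polarity(50, 50, 50, 50))  # stable polarity: False
```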
Mareuil, Fabien. "DaDiModO un algorithme génétique pour l'étude de protéines à domaines à l'aide de données de RMN et de SAXS : application à la protéine ribosomale S1 d'Escherichia Coli". Paris 7, 2008. http://www.theses.fr/2008PA077191.
To increase our knowledge of the biological properties of macromolecules, especially proteins, it is necessary to know their three-dimensional structures. About one thousand different domains are sufficient to build most proteins, and it is estimated that half of these domain structures have been determined (Koonin et al. 2002). Eventually, it will be possible to obtain close models of protein domain structures; however, information concerning the relative position of the domains will still be missing. Hence, a tool that finds the relative position of domains using experimental data that is easy to obtain is a major issue. For that purpose, we have developed an algorithm that uses NMR and SAXS data to position the domains of a multi-domain protein. The main advantage of this tool is that it leaves the user free to choose the deformability of the domains. We validated our method on two test cases, showing that when the definition of the domains is accurate enough and the experimental data are of fairly good quality, our program can approach the structural solution with an error of less than 1 Å. We then applied our method to the structural study of two fragments of the ribosomal protein S1, which is composed of six repetitions of the S1 domain. This study focused on the fragments made of domains 3-4 and 4-5. The structure of domain 4 was determined by NMR; domains 3 and 5 were obtained by homology modelling. Our study allowed us to validate a biologically relevant model of the fragment 3-5.
Mondet, Jean. "Etude des paramètres de surface de la calotte polaire antarctique, dans les domaines spectraux du visible et du proche infrarouge, à partir des données de l'instrument de télédétection POLDER". Phd thesis, Grenoble 1, 1999. http://tel.archives-ouvertes.fr/tel-00766029.
Texto completoOropeza, Alip. "Sur une classe de problèmes elliptiques quasilinéaires avec conditions de Robin non linéaires et données L1 : existence et homogénéisation". Rouen, 2016. http://www.theses.fr/2016ROUES043.
Texto completoGhouzam, Yassine. "Nouvelles approches pour l'analyse et la prédiction de la structure tridimensionnelle des protéines". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC217.
This thesis deals with three complementary themes in the field of structural bioinformatics. The first is the characterization of a new level of description of protein structure, Protein Units, intermediate between secondary structures and protein domains. The second part focuses on the development of a new method for predicting protein structures, called ORION, which boosts the detection of remote protein homologs by taking structural information into account in the form of a structural alphabet (Protein Blocks). A second, improved version was made available to the scientific community through a web interface: http://www.dsimb.inserm.fr/ORION/. The last part of this thesis describes the collaborative development of new tools for predicting and assessing the orientation of proteins in the membrane. The two methods developed (ANVIL and MAIDEN) were made available to the scientific community through a web interface called OREMPRO: http://www.dsimb.inserm.fr/OREMPRO
Exibard, Léo. "Automatic synthesis of systems with data". Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0312.
We often interact with machines that react in real time to our actions (robots, websites, etc.). They are modelled as reactive systems, which continuously interact with their environment. The goal of reactive synthesis is to automatically generate a system from the specification of its behaviour, so as to replace the error-prone low-level development phase by a high-level specification design. In the classical setting, the set of signals available to the machine is assumed to be finite. However, this assumption is not realistic for modelling systems that process data from a possibly infinite set (e.g. a client id, a sensor value, etc.). The goal of this thesis is to extend reactive synthesis to the case of data words. We study a model that is well suited to this more general setting and examine the feasibility of its synthesis problems. We also explore the case of non-reactive systems, where the machine does not have to react immediately to its inputs.
Carel, Léna. "Analyse de données volumineuses dans le domaine du transport". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLG001/document.
The aim of this thesis is to apply new methodologies to public transportation data. Indeed, we are more and more surrounded by sensors and computers generating huge amounts of data. In the field of public transportation, smart cards generate data about our purchases and our travels every time we use them. In this thesis, we use this data for two purposes. First, we want to detect passenger groups with similar temporal habits. To that end, we begin by using Non-negative Matrix Factorization as a pre-processing tool for clustering, and then introduce the NMF-EM algorithm, which allows simultaneous dimension reduction and clustering on a multinomial mixture model. The second purpose is to apply regression methods to this data in order to forecast the number of check-ins on a network and give a range of likely check-in counts. We also use this methodology to detect anomalies on the network.
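The first step described above, NMF as a pre-processing tool before clustering temporal profiles, can be sketched as follows (synthetic check-in data and a fixed number of components, both invented for the illustration; the NMF-EM algorithm itself is not reproduced here):

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy passenger-by-hour check-in matrix (24 hourly bins): one synthetic
# group travels mostly in the morning, the other mostly in the evening.
morning = rng.poisson(np.r_[np.full(12, 5.0), np.full(12, 0.5)], size=(50, 24))
evening = rng.poisson(np.r_[np.full(12, 0.5), np.full(12, 5.0)], size=(50, 24))
X = np.vstack([morning, evening]).astype(float)

# Reduce each temporal profile to a few non-negative components,
# then cluster the low-dimensional representations.
W = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(W)

# The two synthetic habit groups should land in different clusters.
print(np.bincount(labels))
```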
Mokhtarian, Hossein. "Modélisation intégrée produit-process à l'aide d'une approche de métamodélisation reposant sur une représentation sous forme de graphes : Application à la fabrication additive". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAI013/document.
Additive manufacturing (AM) has created a paradigm shift in the product design and manufacturing sector thanks to its unique capabilities. However, the integration of AM technologies into mainstream production faces the challenge of ensuring reliable production and repeatable part quality. Toward this end, modeling and simulation play a significant role in enhancing the understanding of the complex multi-physics nature of AM processes. A central issue in modeling AM technologies is the integration of different models and the concurrent consideration of the AM process and the part to be manufactured. Hence, the ultimate goal of this research is to present and apply a modeling approach for integrated modeling in additive manufacturing. Accordingly, the thesis surveys the product development process and presents the Dimensional Analysis Conceptual Modeling (DACM) framework to model products and manufacturing processes at the design stages of product development. The framework aims at providing simulation capabilities and a systematic search for weaknesses and contradictions in the models, for the early evaluation of solution variants. The methodology is applied in multiple case studies to present models integrating AM processes and the parts to be manufactured. The results show that the proposed framework is able not only to model the product and the manufacturing process concurrently, but also to integrate existing theoretical and experimental models. The DACM framework contributes to design for additive manufacturing and helps the designer anticipate the limitations of the AM process and part design earlier in the design stage. In particular, it enables the designer to make informed decisions on potential design alterations, AM machine redesign, and optimized part design or process parameter settings.
The DACM framework also shows potential as a metamodeling approach for additive manufacturing.
Lassoued, Khaoula. "Localisation de robots mobiles en coopération mutuelle par observation d'état distribuée". Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2289/document.
In this work, we study cooperative localization issues for mobile robotic systems that interact with each other without using relative measurements (e.g. bearings and relative distances). The considered localization technologies are based on beacons or satellites that provide radio-navigation measurements. Such systems often lead to offsets between real and observed positions. These systematic offsets (i.e., biases) are often due to inaccurate beacon positions or to differences between the real electromagnetic wave propagation and the observation models. The impact of these biases on robot localization should not be neglected. Cooperation and data exchange (estimates of biases, estimates of positions and proprioceptive measurements) reduce systematic errors significantly. However, cooperative localization based on sharing estimates is subject to data incest problems (i.e., reuse of identical information in the fusion process) that often lead to over-convergence. When position information is used in a safety-critical context (e.g. close navigation of autonomous robots), one should check the consistency of the localization estimates. In this context, we aim at characterizing reliable confidence domains that contain the robots' positions with high reliability. Hence, set-membership methods are considered as efficient solutions. This kind of approach enables the information to be merged adequately even when it is reused several times, and it provides reliable domains. Moreover, the use of non-linear models does not require any linearization. The modeling of a cooperative system of nr robots with biased beacon measurements is first presented. Then, we perform an observability study. Two cases regarding the localization technology are considered; observability conditions are identified and demonstrated. We then propose a set-membership method for cooperative localization.
Cooperation is performed by sharing estimated positions, estimated biases and proprioceptive measurements. Sharing bias estimates reduces the estimation error and the uncertainty of the robots' positions. The feasibility of the algorithm is validated through simulation when the observations are beacon distance measurements with several robots; the cooperation provides better performance than a non-cooperative method. Afterwards, the cooperative set-membership algorithm is tested using real data with two experimental vehicles. Finally, we compare the performance of the interval method with a sequential Bayesian approach based on covariance intersection. Experimental results indicate that the interval approach provides more accurate vehicle positions with smaller confidence domains that remain reliable. The comparison is performed in terms of accuracy and uncertainty.
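The covariance-intersection fusion underlying the Bayesian baseline mentioned above can be sketched as follows (toy 2-D estimates; the mixing weight is grid-searched to minimise the trace of the fused covariance, which is one common choice):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-correlation by covariance
    intersection, picking the weight that minimises the fused trace."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid)[1:-1]:  # skip degenerate endpoints
        info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (x, P)
    return best

# Two toy position estimates, each accurate along a different axis.
x, P = covariance_intersection(np.array([0.0, 0.0]), np.diag([1.0, 4.0]),
                               np.array([1.0, 1.0]), np.diag([4.0, 1.0]))
print(round(float(np.trace(P)), 2))  # 3.2 (both priors have trace 5.0)
```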
Wechman, Christophe. "Intégration de méthodes de data mining dans le domaine de l'olfaction". Orléans, 2005. http://www.theses.fr/2005ORLE2047.
Texto completoAbbas, Karine. "Système d'accès personnalisé à l'information : application au domaine médica". Lyon, INSA, 2008. http://theses.insa-lyon.fr/publication/2008ISAL0092/these.pdf.
This thesis addresses a central problem: personalised access to information. With the considerable growth of data, the heterogeneity of roles and needs, and the widespread development of mobile systems, it has become important to propose a personalised system for accessing relevant information. Such a system provides the user with relevant, adapted information, taking into account the user's characteristics as well as the contextual situations that influence his or her behaviour during the information access process. The personalised access system we propose is based on profile management. A generic profile model is defined to cover all facets of personalisation; it collects information on the user and his or her context of use and represents all personalisation needs. The personalised system is mainly founded on three elements: profiles, context and services. Profiles are containers of knowledge about users. The context defines a set of parameters characterising the user's environment when the system is used. Services are autonomous programs able to execute the personalisation tasks. The personalisation process starts when the user sends a request, which triggers the extraction of the data relevant to it.
Elisabeth, Erol. "Fouille de données spatio-temporelles, résumés de données et apprentissage automatique : application au système de recommandations touristique, données médicales et détection des transactions atypiques dans le domaine financier". Thesis, Antilles, 2021. http://www.theses.fr/2021ANTI0607.
Data mining is one of the components of Customer Relationship Management (CRM), widely deployed in companies. It is the process of extracting interesting, non-trivial, implicit, unknown and potentially useful knowledge from data. This process relies on algorithms from various scientific disciplines (statistics, artificial intelligence, databases) to build models from data stored in data warehouses. The objective is to determine models, built from clusters, that improve customer knowledge in the generic sense, predict customer behavior, and optimize the proposed offer. Since these models are intended for users who are specialists in the field of data, researchers in health economics and management sciences, or professionals in the sector studied, this research work emphasizes the usability of data mining environments. This thesis is concerned with spatio-temporal data mining. It particularly highlights an original approach to data processing with the aim of enriching practical knowledge in the field. The work includes an applied component in four chapters, corresponding to four systems developed:
- a model for setting up a recommendation system based on the collection of GPS positioning data,
- a data summarization tool optimized for fast responses to queries from the medicalization of information systems program (PMSI),
- a machine learning tool for fighting money laundering in the financial system,
- a model for predicting activity in weather-dependent very small enterprises (tourism, transport, leisure, commerce, etc.).
The problem here is to identify classification algorithms and neural networks for data analysis aimed at adapting the company's strategy to economic changes.
Leclère-Vanhoeve, Annette. "Interprétation des données SEASAT dans l'Atlantique sud : Implications sur l'évolution du domaine caraïbe". Brest, 1988. http://www.theses.fr/1988BRES2032.
Chbeir, Richard. "Modélisation de la description d'images : application au domaine médical". Lyon, INSA, 2001. http://theses.insa-lyon.fr/publication/2001ISAL0065/these.pdf.
The management of images remains a complex task and currently motivates a large body of research. In this work, we are interested in the problem of image retrieval in medical databases, a problem mainly related to the complexity of image description or representation. In the literature, three paradigms are proposed: 1- the context-oriented paradigm, which describes the context of the image without considering its content; 2- the content-oriented paradigm, which considers the physical characteristics of the image such as colors, textures, shapes, etc.; 3- the semantic-oriented paradigm, which tries to provide an interpretation of the image using keywords, legends, etc. In this thesis, we propose an original model able to describe all image characteristics. This model is structured in two spaces: 1- an external space containing factual information associated with the image, such as the patient name, the acquisition date, the image type, etc.; 2- an internal space considering the physical characteristics (color, texture, etc.), the spatial characteristics (shape, position), and the semantics (scene, interpretation, etc.) of the image content. The model is elaborated with several levels of granularity that consider characteristics of the whole image and/or its salient objects. We also provide a referential module and a rules module that maintain coherence between the description spaces. We further propose a meta-model of relations, whose purpose is to provide, in a precise way, the several types of relations between two objects as a function of their common characteristics (shape, color, position, etc.). This meta-model contributes to defining a powerful indexing mechanism. In order to validate our approach, we developed a prototype named MIMS (Medical Image System management) with a user-friendly interface for storage and retrieval of images based on icons and hypermedia. MIMS is web-accessible at http://mims.myip.org
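The two-space description structure summarized in this abstract can be sketched as a simple data model. The field names below are illustrative assumptions, not the schema actually used in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class ExternalSpace:
    """Factual (context) information attached to the image."""
    patient_name: str
    acquisition_date: str
    image_type: str

@dataclass
class InternalSpace:
    """Content description: physical, spatial and semantic characteristics."""
    physical: dict = field(default_factory=dict)   # e.g. color, texture
    spatial: dict = field(default_factory=dict)    # e.g. shape, position
    semantic: dict = field(default_factory=dict)   # e.g. scene, interpretation

@dataclass
class ImageDescription:
    external: ExternalSpace
    internal: InternalSpace

# Hypothetical record for one medical image:
desc = ImageDescription(
    ExternalSpace(patient_name="DOE", acquisition_date="2001-05-12", image_type="MRI"),
    InternalSpace(physical={"color": "grayscale"}, spatial={"shape": "ellipse"}),
)
```

Such a separation keeps context-oriented queries (external space) independent of content-oriented ones (internal space), in the spirit of the paradigms listed above.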
Folch, Helka. "Articuler les classifications sémantiques induites d'un domaine". Paris 13, 2002. http://www.theses.fr/2002PA132015.
Chikhi, Yasmina. "Réutilisation de structures de données dans le domaine des réseaux électriques". Paris 6, 1998. http://www.theses.fr/1998PA066068.
Coulibaly, Ibrahim. "La protection des données à caractère personnel dans le domaine de la recherche scientifique". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00798112.
Texto completoSirgue, Laurentf1975. "Inversion de la forme d'onde dans le domaine fréquentiel de données sismiques grands offsets". Paris 11, 2003. http://www.theses.fr/2003PA112088.
The standard imaging approach in exploration seismology relies on a decomposition of the velocity model by spatial scales: the determination of the low wavenumbers of the velocity field is followed by the reconstruction of the high wavenumbers. However, for models presenting a complex structure, the recovery of the high wavenumbers may be significantly improved by the determination of intermediate wavenumbers. These can potentially be recovered by local, non-linear waveform inversion of wide-angle data. Waveform inversion is limited, however, by the non-linearity of the inverse problem, which is in turn governed by the minimum frequency in the data and the starting model. For very low frequencies, below 7 Hz, the problem is reasonably linear, so that waveform inversion may be applied using a starting model obtained from traveltime tomography. The frequency domain is then particularly advantageous, as the inversion from the low to the high frequencies is very efficient. Moreover, it is possible to discretise the frequencies with a much larger sampling interval than dictated by the sampling theorem and still obtain a good imaging result. A strategy for selecting frequencies is developed in which the number of input frequencies can be reduced when a range of offsets is available: the larger the maximum offset, the fewer frequencies are required. Real seismic data unfortunately do not contain very low frequencies, and waveform inversion at higher frequencies is likely to fail due to convergence into a local minimum. Preconditioning techniques must hence be applied to the gradient vector and the data residuals in order to enhance the efficacy of waveform inversion starting from realistic frequencies. Smoothing the gradient vector and inverting early arrivals significantly improve the chance of convergence into the global minimum. The efficacy of preconditioning methods is, however, limited by the accuracy of the starting model.
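The offset-dependent frequency selection described in this abstract can be sketched as follows. The decimation rule used here (successive frequencies scaled by an offset-to-depth factor) is a common formulation of such a strategy and is stated as an assumption, not as the thesis' exact formula.

```python
import math

def select_frequencies(f_min, f_max, z, h_max):
    """Pick a sparse set of inversion frequencies between f_min and f_max.

    z is a target depth and h_max the maximum half-offset; the wider the
    offsets relative to depth, the smaller alpha and the fewer frequencies
    are needed (sketch of an offset-based selection strategy)."""
    alpha = z / math.sqrt(z**2 + h_max**2)   # in (0, 1): wider offsets -> smaller
    freqs = [f_min]
    while freqs[-1] / alpha <= f_max:
        freqs.append(freqs[-1] / alpha)      # geometric progression of frequencies
    return freqs

# Wider offsets allow a much coarser frequency sampling:
few = select_frequencies(5.0, 20.0, z=2000.0, h_max=8000.0)    # long offsets
many = select_frequencies(5.0, 20.0, z=2000.0, h_max=2000.0)   # short offsets
```

With long offsets the 5-20 Hz band is covered by far fewer discrete frequencies than the sampling theorem would suggest, which is the efficiency argument made above.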
Soudani, Mohamed Tahar Amine. "Techniques de traitement des données sismiques OBC dans le domaine (τ, p) 2D-3D". Grenoble INPG, 2006. https://tel.archives-ouvertes.fr/tel-00204530.
This PhD thesis deals with methods of water-layer multiple attenuation in OBC (Ocean Bottom Cable) data. These multiples are created by the reverberation of primary arrivals in the water column and have a strong negative impact on the final structural image obtained from OBC processing. In this document, we propose a new methodology for multiple attenuation based on a new PZ summation algorithm in the (τ,p) domain. We start by expressing the hydrophone and geophone measurements in the plane-wave domain and show that these measurements can be expressed in terms of primary and water-layer multiple arrivals. These expressions allow us to establish a new algorithm based on the physics of wave propagation in elastic media. The new algorithm also takes into account the properties of OBC acquisitions, such as geophone coupling and orientation, the impulse response of the sensors and noise characteristics. The new algorithm was first validated on synthetic data and then applied to a real 2D dataset as one step of a processing workflow. This processing sequence attenuates water-layer multiples and noise, thus improving image quality in comparison with standard processing approaches. Finally, we extend the processing methodology to 3D datasets through the 3D (τ,p) transform. This application is not straightforward and necessitates additional steps in the workflow because, in this context, 3D data interpolation becomes crucial. The final results of the 3D methodology show an important improvement of data quality in comparison with standard processing sequences.
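The physical idea behind PZ summation can be illustrated in its simplest form: the up-going wavefield has the same polarity on the hydrophone (P) and the vertical geophone (Z), while the down-going water-layer reverberation has opposite polarity, so a scaled sum cancels it. The calibration scalar and the synthetic traces below are illustrative; the thesis derives the actual operator in the (τ,p) plane-wave domain with sensor responses taken into account.

```python
import numpy as np

def pz_summation(p_trace, z_trace, s=1.0):
    """Combine hydrophone and (calibrated) geophone traces; the down-going
    energy, which has opposite sign on the two sensors, cancels out."""
    return 0.5 * (p_trace + s * z_trace)

up = np.array([0.0, 1.0, 0.0, 0.0])      # up-going primary arrival
down = np.array([0.0, 0.0, -0.5, 0.0])   # down-going water-layer multiple

p = up + down   # hydrophone records both wavefields with the same sign
z = up - down   # vertical geophone flips the sign of the down-going energy
out = pz_summation(p, z)                 # the multiple is attenuated
```

In this idealized setting `out` equals the up-going (primary) wavefield exactly; on real data, coupling, orientation and noise make the calibration step the hard part, which is precisely what the thesis addresses.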
Personeni, Gabin. "Apport des ontologies de domaine pour l'extraction de connaissances à partir de données biomédicales". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0235/document.
The semantic Web proposes standards and tools to formalize and share knowledge on the Web, in the form of ontologies. Biomedical ontologies and their associated data represent a vast collection of complex, heterogeneous and linked knowledge, whose analysis presents great opportunities in healthcare, for instance in pharmacovigilance. This thesis explores several ways to make use of this biomedical knowledge in the data mining step of a knowledge discovery process. In particular, we propose three methods in which several ontologies cooperate to improve data mining results. A first contribution describes a method based on pattern structures, an extension of formal concept analysis, to extract associations between adverse drug events from patient data. In this context, a phenotype ontology and a drug ontology cooperate to allow a semantic comparison of these complex adverse events, leading to the discovery of associations between such events at varying degrees of generalization, for instance at the drug or drug-class level. A second contribution uses a numeric method based on semantic similarity measures to classify different types of genetic intellectual disabilities, characterized by both their phenotypes and the functions of their linked genes. We study two different similarity measures, applied with different combinations of phenotypic and gene-function ontologies. In particular, we investigate the influence of each domain of knowledge represented in each ontology on the classification process, and how they can cooperate to improve that process. Finally, a third contribution uses the data component of the semantic Web, the Linked Open Data (LOD), together with linked ontologies, to characterize genes responsible for intellectual disabilities. We use Inductive Logic Programming (ILP), a method suitable for mining relational data such as LOD while exploiting domain knowledge from ontologies through reasoning mechanisms. Here, ILP allows the extraction, from LOD and ontologies, of a descriptive and predictive model of genes responsible for intellectual disabilities. These contributions illustrate the possibility of having several ontologies cooperate to improve various data mining processes.
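The idea of comparing entities through ontology-based semantic similarity can be illustrated with a toy is-a hierarchy. The mini-ontology and the Jaccard measure over ancestor sets below are simplifying assumptions of ours, not the ontologies or the similarity measures actually studied in the thesis.

```python
# Hypothetical mini-ontology: child -> parent links of an is-a hierarchy.
parents = {
    "seizure": "neurological",
    "ataxia": "neurological",
    "neurological": "phenotype",
    "short_stature": "growth",
    "growth": "phenotype",
}

def ancestors(term):
    """Return the term together with all of its ancestors in the hierarchy."""
    out = {term}
    while term in parents:
        term = parents[term]
        out.add(term)
    return out

def similarity(a, b):
    """Jaccard index of ancestor sets: shared ancestry means high similarity."""
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)
```

Two neurological phenotypes score higher with each other than with a growth phenotype, which is the kind of semantic comparison that ontology-aware classification exploits.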
Castano, Eric. "Conception et installation d'un système de veille technologique : application au domaine pétrolier". Aix-Marseille 3, 1994. http://www.theses.fr/1994AIX30040.
Hébert, Céline. "Extraction et usages de motifs minimaux en fouille de données, contribution au domaine des hypergraphes". Phd thesis, Université de Caen, 2007. http://tel.archives-ouvertes.fr/tel-00253794.
Bascol, Kevin. "Adaptation de domaine multisource sur données déséquilibrées : application à l'amélioration de la sécurité des télésièges". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES062.
Bluecime has designed a camera-based system to monitor the boarding stations of chairlifts in ski resorts, aiming to increase the safety of all passengers. This already successful system does not use any machine learning component and requires an expensive configuration step. Machine learning is a subfield of artificial intelligence concerned with studying and designing algorithms that can learn and acquire knowledge from examples for a given task. Such a task could be classifying safe or unsafe situations on chairlifts from examples of images already labeled with these two categories, called the training examples. The machine learning algorithm learns a model able to predict one of these two categories on unseen cases. Since 2012, deep learning models have been shown to be the machine learning models best suited to image classification problems when many training data are available. In this context, this PhD thesis, funded by Bluecime, aims at improving both the cost and the effectiveness of Bluecime's current system using deep learning.
Pham, Cong Cuong. "Multi-utilisation de données complexes et hétérogènes : application au domaine du PLM pour l’imagerie biomédicale". Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2365/document.
The emergence of Information and Communication Technologies (ICT) in the early 1990s, especially the Internet, made it easy to produce data and disseminate them to the rest of the world. The power of new Database Management Systems (DBMS) and the reduction of storage costs have led to an exponential increase in the volume of data within enterprise information systems. The large number of correlations (visible or hidden) between data makes them more intertwined and complex. The data are also heterogeneous, as they can come from many sources and exist in many formats (text, image, audio, video, etc.) or at different levels of structuring (structured, semi-structured, unstructured). All companies now have to face data sources that are increasingly massive, complex and heterogeneous. The data may either have different denominations or lack verifiable provenances; consequently, they are difficult for other actors to interpret and access, and remain unexploited or under-exploited for the purposes of sharing and reuse. Data access (or data querying) is, by definition, the process of extracting information from a database using queries to answer a specific question. Extracting information is an indispensable function for any information system, yet it is never easy and always represents a major bottleneck for organizations (Soylu et al. 2013). In an environment of multi-use of complex and heterogeneous data, providing all users with easy and simple access to data becomes difficult for two reasons: - Lack of technical skills: in order to formulate a query correctly, a user must know the structure of the data, i.e. how they are organized and stored in the database. When data are large and complex, it is not easy to have a thorough understanding of all the dependencies and interrelationships between data, even for information system technicians. 
Moreover, this understanding is not necessarily linked to domain competences, and it is therefore very rare that end users have such skills. - Different user perspectives: in a multi-use environment, each user introduces their own point of view when adding new data and technical information. Data can be named in very different ways and data provenances are not sufficiently recorded; consequently, they become difficult for other actors to interpret and access, since these actors do not have a sufficient understanding of the data semantics. The thesis work presented in this manuscript aims to improve the multi-use of complex and heterogeneous data by expert business actors by providing them with semantic and visual access to the data. We find that, although the initial design of databases takes into account the logic of the domain (using the entity-association model, for example), it is common practice to modify this design to adapt it to specific technical needs. As a result, the final design often diverges from the original conceptual structure, and there is a clear distinction between the technical knowledge needed to extract data and the knowledge that expert actors have to interpret, process and produce data (Soylu et al. 2013). Based on bibliographical studies of data management tools, knowledge representation, visualization techniques and Semantic Web technologies (Berners-Lee et al. 2001), and in order to provide easy data access to different expert actors, we propose to use a comprehensive and declarative representation of the data that is semantic, conceptual and integrates domain knowledge close to the expert actors.
Temal, Lynda. "Ontologie de partage de données et d'outils de traitement dans le domaine de la neuroimagerie". Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/temal.pdf.
D'Orangeville, Vincent. "Analyse automatique de données par Support Vector Machines non supervisés". Thèse, Université de Sherbrooke, 2012. http://hdl.handle.net/11143/6678.
Chbeir, Richard Flory André Amghar Youssef. "Modélisation de la description d'images : application au domaine médical /". Villeurbanne : Doc'INSA, 2005. http://docinsa.insa-lyon.fr/these/pont.php?id=chbeir.
Boutayeb, Samy. "Les concepts lexicalisés dans le domaine des techniques documentaires". Paris 13, 1995. http://www.theses.fr/1995PA131023.
The study of documentation techniques is carried out by combining terminological and associated data based on a textual corpus, which then undergoes a terminological analysis. A terminological analysis tool, based on a linguistic representation of knowledge model, is modelled and experimented with so as to highlight the aspects concerned by specialization: language, discourse, texts, vocabularies, knowledge and language users. This tool comprises a terminological database whose compilation allows us to establish regularities about lexicalised concepts, the core category in this study. This category makes it possible, on the one hand, to bring out the conceptualisation-symbolisation dynamics by semiotising a conceptual representation; on the other hand, it contributes to the comprehension of the denomination mechanism, a characteristic of languages for special purposes. These properties of lexicalised concepts relate to the units themselves. Moreover, lexicalised concepts are defined by the relations they share: conceptual, morphological and syntagmatic relations. The model of linguistic representation of knowledge allows us to bring out terminological and associated data, and as such stands out as a terminological analysis tool of great interest for the comprehension and production of specialized discourses.
Hanf, Matthieu. "Valorisation des données libres en épidémiologie : intérêt des études écologiques dans le domaine des maladies infectieuses". Thesis, Antilles-Guyane, 2011. http://www.theses.fr/2011AGUY0482/document.
Ecological studies are now considered promising because of their ability to integrate both individual and population-level factors in the same model. The recent open data movement could play an important role in the sustainability of such multidisciplinary approaches. The studies developed in this thesis show that combining ecological methods with open data can give original results on infectious disease issues. In French Guiana, ecological time-series methods, coupled with open climate data, have contributed to a better understanding of the role of climate in the dynamics of malaria, cutaneous leishmaniasis and disseminated histoplasmosis. The use of ecological methods with open data from the scientific literature on toxoplasmosis seroprevalence in human populations made it possible to identify the main factors influencing the level of overall seroprevalence and, indirectly, to estimate the associated risk of congenital toxoplasmosis. Combining UN data with ecological methods has shown that a high prevalence of ascariasis is associated with a 10-fold reduction in the incidence of malaria, and that corruption has a significant impact on child mortality and tuberculosis resistance. The various studies developed in this thesis show that combining ecological methods with public data sheds new light on infectious disease issues. This type of study provides the flexibility to study the complex interactions of many determinants of health.
Melzi, Fateh. "Fouille de données pour l'extraction de profils d'usage et la prévision dans le domaine de l'énergie". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1123/document.
Nowadays, countries are called upon to take measures aimed at a better rationalization of electricity resources with a view to sustainable development. Smart metering solutions have been implemented and now allow a fine-grained reading of consumption. The massive spatio-temporal data collected can thus help to better understand consumption behaviours, forecast them and manage them precisely. The aim is to ensure an "intelligent" use of resources so as to consume less and consume better, for example by reducing consumption peaks or by using renewable energy sources. The thesis work takes place in this context and aims to develop data mining tools in order to better understand electricity consumption behaviours and to predict solar energy production, thus enabling intelligent energy management. The first part of the thesis focuses on the classification of typical electricity consumption behaviours at the scale of a building and then of a territory. In the first case, typical daily power consumption profiles were identified using the functional K-means algorithm and a Gaussian mixture model. On a territorial scale and in an unsupervised context, the aim is to identify typical electricity consumption profiles of residential users and to link these profiles to contextual variables and metadata collected on users. An extension of the classical Gaussian mixture model has been proposed, which allows exogenous variables such as the type of day (Saturday, Sunday, working day, etc.) to be taken into account in the classification, leading to a parsimonious model. The proposed model was compared with classical models and applied to an Irish database including both electricity consumption data and user surveys. An analysis of the results over a monthly period made it possible to extract a reduced set of user groups that are homogeneous in terms of their electricity consumption behaviours. 
We have also endeavoured to quantify the regularity of users in terms of consumption, as well as the temporal evolution of their consumption behaviours during the year. These two aspects are necessary to evaluate the potential for changes in consumption behaviour required by a demand-response policy (a shift in peak consumption, for example) set up by electricity suppliers. The second part of the thesis concerns the forecasting of solar irradiance over two time horizons: short and medium term. To do this, several approaches have been developed, including autoregressive statistical approaches for modelling time series and machine learning approaches based on neural networks, random forests and support vector machines. In order to take advantage of the different models, a hybrid model combining them was proposed. An exhaustive evaluation of the different approaches was conducted on a large database including four locations (Carpentras, Brasilia, Pamplona and Reunion Island), each characterized by a specific climate, as well as weather parameters, both measured and predicted by Numerical Weather Prediction (NWP) models. The results obtained show that the hybrid model improves photovoltaic production forecasts for all locations.
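The clustering of daily load curves into typical profiles, described in this abstract, can be sketched with a plain K-means on synthetic 24-hour profiles. The thesis itself uses a functional K-means and an extended Gaussian mixture model with exogenous day-type variables, which this toy example does not reproduce.

```python
import numpy as np

# Synthetic daily consumption profiles (24 hourly values): 20 households with
# a morning peak and 20 with an evening peak, plus small random noise.
rng = np.random.default_rng(0)
hours = np.arange(24)
morning = np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)
evening = np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)
profiles = np.vstack(
    [morning + 0.05 * rng.standard_normal(24) for _ in range(20)]
    + [evening + 0.05 * rng.standard_normal(24) for _ in range(20)]
)

def kmeans(X, init_idx, iters=20):
    """Plain K-means with a deterministic choice of initial centers."""
    centers = X[init_idx].copy()
    for _ in range(iters):
        # Assign each profile to its nearest center (squared L2 distance).
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(len(init_idx)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(profiles, init_idx=[0, 20])  # two typical behaviours
```

The recovered centers are the "typical daily profiles"; in the territorial setting the mixture-model extension additionally conditions cluster membership on the day type.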
Legeay, Marc. "Étude de la régulation anti-sens par l’analyse différentielle de données transcriptomiques dans le domaine végétal". Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0021/document.
A challenging task in bioinformatics is to decipher cell regulation mechanisms. The objective of this thesis is to study gene networks from apple data, with the particularity of integrating anti-sense transcription data. Anti-sense transcripts are mostly non-coding RNAs, and their different roles in the cell are still not well known. In our study, to explore the role of anti-sense transcripts, we first propose a differential functional analysis that highlights the interest of integrating anti-sense data into a transcriptomic analysis. Then, regarding gene networks, we propose to focus on the inference of a core network, and we introduce a new differential analysis method that makes it possible to compare a sense network with a sense and anti-sense network. We thus introduce the notion of AS-impacted genes, which identifies genes that are highly co-expressed with anti-sense transcripts. We analysed apple data related to the ripening of fruits kept in cold storage; the biological interpretation of the results of our differential analysis provides promising leads for a more targeted experimental study of genes or pathways whose role could be underestimated without the integration of anti-sense data.
Maaroufi, Meriem. "Interopérabilité des données médicales dans le domaine des maladies rares dans un objectif de santé publique". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066275/document.
The digitization of healthcare is under way, and multiple e-health projects are constantly emerging. In the context of rare diseases, a field that has become a public health policy priority in France, e-health could be a solution to improve rare disease epidemiology and to offer better care for patients. The national data bank for rare diseases (BNDMR) centralizes the conduct of these epidemiological studies for all rare diseases and all affected patients followed in the French healthcare system. The BNDMR must grow in a dense and heterogeneous digital landscape. Developing the interoperability of the BNDMR is the objective of this thesis. How to identify patients, including fetuses? How to federate patients' identities to avoid the creation of duplicates? How to link patients' data to allow studies to be conducted? In response to these questions, we propose a universal method for patient identification that meets the requirements of health data protection. Which data should be collected in the national data bank? How to improve and facilitate the development of interoperability between these data and those of the wide range of existing systems? In response to these questions, we first propose the collection of a standardized minimum data set for all rare diseases. The implementation of international standards provides a first step toward interoperability. We then propose to move towards the discovery of mappings between heterogeneous data sources. Minimizing human intervention by adopting automated alignment techniques, and making the results of these alignments reliable and exploitable, were the main motivations of our proposal.
Maissa, Sandrine. "Accès intuitif à l'information technico-règlementaire via une interface immersive : Application au domaine du bâtiment". Paris, ENSAM, 2003. http://www.theses.fr/2003ENAM0002.
Riffaud, Sébastien. "Modèles réduits : convergence entre calcul et données pour la mécanique des fluides". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0334.
Texto completoThe objective of this thesis is to significantly reduce the computational cost associated with numerical simulations governed by partial differential equations. For this purpose, we consider reduced-order models (ROMs), which typically consist of a training stage, in which high-fidelity solutions are collected to define a low-dimensional trial subspace, and a prediction stage, where this data-driven trial subspace is then exploited to achieve fast or real-time simulations. The first contribution of this thesis concerns the modeling of gas flows in both hydrodynamic and rarefied regimes. In this work, we develop a new reduced-order approximation of the Boltzmann-BGK equation, based on Proper Orthogonal Decomposition (POD) in the training stage and on the Galerkin method in the prediction stage. We investigate the simulation of unsteady flows containing shock waves, boundary layers and vortices in 1D and 2D. The results demonstrate the stability, accuracy and significant computational speedup factor delivered by the ROM with respect to the high-fidelity model. The second topic of this thesis deals with the optimal transport problem and its applications to model order reduction. In particular, we propose to use the optimal transport theory in order to analyze and enrich the training database containing the high-fidelity solution snapshots. Reproduction and prediction of unsteady flows, governed by the 1D Boltzmann-BGK equation, show the improvement of the accuracy and reliability of the ROM resulting from these two applications. Finally, the last contribution of this thesis concerns the development of a domain decomposition method based on the Discontinuous Galerkin method. In this approach, the ROM approximates the solution where a significant dimensionality reduction can be achieved while the high-fidelity model is employed elsewhere. 
The Discontinuous Galerkin method for the ROM offers a simple way to recover the global solution by linking local solutions through numerical fluxes at cell interfaces. The proposed method is evaluated for parametric problems governed by the quasi-1D and 2D Euler equations. The results demonstrate the accuracy of the proposed method and the significant reduction of the computational cost with respect to the high-fidelity model.
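The training stage described in this abstract (collecting high-fidelity snapshots and extracting a low-dimensional trial subspace by POD) can be sketched with a plain SVD. The snapshot data here are synthetic, and the Galerkin prediction stage is not reproduced.

```python
import numpy as np

# Snapshot matrix: each column is one "high-fidelity" state (synthetic here).
x = np.linspace(0.0, 1.0, 100)
snapshots = np.column_stack(
    [np.sin(np.pi * x * (1.0 + 0.1 * t)) for t in range(20)]
)

# POD basis = leading left singular vectors of the snapshot matrix.
U, sv, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # modes capturing 99.99% energy
basis = U[:, :r]                               # low-dimensional trial subspace

# Projecting the snapshots onto the POD subspace reconstructs them closely.
recon = basis @ (basis.T @ snapshots)
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
```

The reduced dimension `r` is typically far smaller than the number of snapshots, which is what makes the subsequent Galerkin projection of the governing equations fast.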
Lenart, Marcin. "Sensor information scoring for decision-aid systems in railway domain". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS157.
In this thesis, the problem of assessing the quality of information produced by sensors is investigated. Sensors, usually used in networks, do not always provide correct information, and a scoring of this information is needed. An approach is proposed that deals with some of the major limitations in the literature by providing a model designed to be sensor-generic, independent of ground truth and dependent only on easy-to-access meta-information, exploiting only attributes shared by the majority of sensors. The proposed model is called ReCLiC, from the four dimensions that it considers: Reliability, Competence, Likelihood and Credibility. The thesis discusses in depth the requirements of these dimensions and proposes motivated definitions for each of them. Furthermore, it proposes an implementation of the generic ReCLiC definition for a real case, a specific sensor in the railway signalling domain: the form of the four dimensions for this case is discussed, and a formal and experimental study of the information scoring behaviour is performed, analysing each dimension separately. The proposed implementation of the ReCLiC model is experimentally validated using realistic simulated data, based on an experimental protocol that allows various quality issues, as well as their quantity, to be controlled. Finally, the ReCLiC model is used to analyse a real dataset, applying a new visualisation method that, in addition, allows the notion of trust dynamics to be studied.
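Purely as an illustration of scoring along the four named dimensions, the weighted combination below is an assumption of ours; the thesis defines the dimensions and their fusion in its own, richer way.

```python
def reclic_score(reliability, competence, likelihood, credibility, weights=None):
    """Illustrative fusion of the four ReCLiC dimensions into one score.

    Each dimension is assumed to lie in [0, 1]; the weighted sum and the
    default equal weights are hypothetical choices, not the thesis' model."""
    dims = [reliability, competence, likelihood, credibility]
    if not all(0.0 <= d <= 1.0 for d in dims):
        raise ValueError("each dimension must lie in [0, 1]")
    weights = weights or [0.25] * 4   # equal weights by default
    return sum(w * d for w, d in zip(weights, dims))

# A sensor that is reliable but whose current report is unlikely scores lower
# than one that is strong on every dimension.
score = reclic_score(0.9, 0.8, 0.7, 0.6)
```

Keeping the dimensions separate until a final fusion step is what allows the per-dimension analysis mentioned in the abstract.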
Carpentier, Anne-Sophie. "Le transcriptome : un domaine d'application pour les statistiques, de nouveaux horizons pour la biologie". Evry-Val d'Essonne, 2006. http://www.theses.fr/2006EVRY0005.
Analysing transcriptome data requires statistical methods in order to provide reliable findings. Among the enormous number of methods available, biologists may have difficulty choosing the most appropriate one for their needs. The existing criteria for comparing different methods are either incomplete or not biologically relevant. The organisation of bacterial genomes offers a biologically relevant criterion, the operons, with which to compare the methods independently of the goal of the experiment. We have developed a protocol based on this criterion and compared some classical methods: PCA, ICA, t-test and ANOVA. Furthermore, meta-analyses of transcriptome data are currently being developed. These meta-analyses allow the study of new biological fields such as the chromosomal organisation of gene expression. We have analysed three bacteria, B. subtilis, E. coli and S. meliloti, and have revealed long-range correlations of expression in all three organisms, whatever the genes studied.
Larbre, David. "Les échanges de données personnelles entre l’union européenne et les tiers dans le domaine de la sécurité". Thesis, Paris 10, 2014. http://www.theses.fr/2014PA100174.
Personal data exchanges between the European Union and third parties in the security field lead one to reflect on the related legal framework and the safeguards regarding data protection. As states are at the origin of police networks and judicial cooperation, the emergence of the EU and its agencies in sovereign spheres has been astonishing. For the EU, respecting the conditions of such exchanges requires adequate guarantees from third states. To better understand this, one should first analyze to which extent these exchanges have gradually become an instrument servicing the area of freedom, security and justice (AFSJ; "security" here covers the fight against terrorism, organized crime and illegal immigration). This thesis aims to detect, analyze and highlight the rules governing the exchanges of personal data and the protection attached to them. Its goal is to understand the function of the EU and the role of member states in these exchanges, to assess the guarantees provided by the EU or its partners, and to lead to the emergence of a system that could provide adequate protection. The first part determines the modalities of cooperation between the EU and third parties in the field of security-related personal data exchanges, identifying the existence of safety data exchange networks before looking into the international dimension of the fight against terrorism and organized crime. A focus on external standards in the EU will lead the reader to grasp how safety within third-party data exchange networks may be structured, and to understand the role of international organizations such as the UN (or the extraterritorial jurisdiction of third countries such as the USA). As the EU has developed its cooperation regarding safety data exchanges, its foreign policy in terms of AFSJ gives an overview of safety data exchange networks and their diversity, but also shows the limits of their extension. 
These different forms of cooperation are grounded in the EU's constituent treaties, yet they face legal and democratic issues as far as EU legitimacy is concerned. The EU integration process, on which security-related data exchanges with third parties are based, is also studied; if this integration is a success overall, sovereignty issues have also brought their share of alterations to safety data protection. The second part of this thesis focuses on the guarantees related to safety data exchanges, the protection of fundamental rights regarding this personal data, and the need for adequate protection when transferring data to third parties. The adequacy of "normative" protection must be analyzed in global terms, that is to say within an international framework. The study of normative protection is followed by a thorough examination of its effective protection. The reader will see how transparency in data exchange security enables people to exercise their right both to access data and to challenge decisions taken on the basis of such exchanges. Effective protection leads to the identification of responsibilities related to safety data exchanges, the mechanisms of which may reveal breaches by the EU or third parties of their obligations.
Mazauric, Cyril. "Assimilation de données pour les modèles d'hydraulique fluviale : estimation de paramètres, analyse de sensibilité et décomposition de domaine". Phd thesis, Université Joseph Fourier (Grenoble), 2003. http://tel.archives-ouvertes.fr/tel-00004632.