Scientific literature on the topic "Data warehouses – Medicine"
Consult the thematic lists of journal articles, books, theses, conference proceedings and other academic sources on the topic "Data warehouses – Medicine".
Journal articles on the topic "Data warehouses – Medicine"
Piarroux, R., F. Batteux, S. Rebaudet, and P. Y. Boelle. "Les indicateurs d'alerte et de surveillance de la Covid-19." Annales françaises de médecine d'urgence 10, no. 4-5 (September 2020): 333–39. http://dx.doi.org/10.3166/afmu-2020-0277.
Bellakhdar, Jamal. "Le naskaphthon de Dioscoride et le bunk de la médecine arabo-islamique, un seul et même simple. Partie I : étude analytique des textes anciens pour un essai de détermination." Revue d'histoire de la pharmacie 108, no. 412 (2021): 509–26. http://dx.doi.org/10.3406/pharm.2021.24479.
Pitarch, Yoann, Cécile Favre, Anne Laurent, and Pascal Poncelet. "Généralisation contextuelle de mesures dans les entrepôts de données. Application aux entrepôts de données médicales." Ingénierie des systèmes d'information 16, no. 6 (December 30, 2011): 67–90. http://dx.doi.org/10.3166/isi.16.6.67-90.
Schöpfel, Joachim. "Éditorialisation des données de recherche : le rôle des professionnels de l'information." I2D - Information, données & documents 2, no. 2 (November 17, 2020): 82–84. http://dx.doi.org/10.3917/i2d.202.0082.
Garcelon, N. "Des données médicales à la connaissance : entrepôts et fouilles de données." Annales de Dermatologie et de Vénéréologie 142, no. 12 (December 2015): S389–S390. http://dx.doi.org/10.1016/j.annder.2015.10.171.
Herbert, J., C. Salpetrier, L. Godillon, F. Fourquet, E. Laurent, and L. Grammatico-Guillon. "Entrepôts de données cliniques, outil du pilotage de crise." Revue d'Épidémiologie et de Santé Publique 70 (March 2022): S8. http://dx.doi.org/10.1016/j.respe.2022.01.069.
Bouattour, Soumia, Omar Boussaid, Hanene Ben Abdallah, and Jamel Feki. "Modélisation et analyse dans les entrepôts de données actifs." Techniques et sciences informatiques 30, no. 8 (October 28, 2011): 975–94. http://dx.doi.org/10.3166/tsi.30.975-994.
Bimonte, Sandro. "Des entrepôts de données, l'analyse en ligne et l'information géographique." Journal of Decision Systems 17, no. 4 (January 2008): 463–86. http://dx.doi.org/10.3166/jds.17.463-486.
Bimonte, Sandro, and François Pinet. "Conception des entrepôts de données : de l'implémentation à la restitution." Journal of Decision Systems 21, no. 1 (January 2012): 1–2. http://dx.doi.org/10.1080/12460125.2012.678677.
Riou, C., M. Cuggia, and N. Garcelon. "Comment assurer la confidentialité dans les entrepôts de données biomédicaux ?" Revue d'Épidémiologie et de Santé Publique 60 (March 2012): S19–S20. http://dx.doi.org/10.1016/j.respe.2011.12.116.
Theses on the topic "Data warehouses – Medicine"
Assele Kama, Ariane. "Interopérabilité sémantique et entreposage de données cliniques." Paris 6, 2013. http://www.theses.fr/2013PA066359.
In medicine, data warehouses integrate various data sources for decision-support analysis. The integrated data often come from distributed, heterogeneous sources, so as to provide an overview of the information to analysts and decision-makers. Clinical data warehousing raises the issue of representing constantly evolving medical knowledge, which requires new methodologies to integrate the semantic dimension of the study domain. The storage problem is related to the complexity of the field to be described and modelled but, more importantly, to the need to combine domain knowledge with data. One of the research topics in the field of data warehouses is therefore the cohabitation of knowledge and data, and the role of ontologies in data warehouse modelling, data integration and data mining. This work, carried out in an INSERM research laboratory specialized in health knowledge engineering (UMRS 872 EQ20), is part of the issue of modelling, sharing and using clinical data within a semantic interoperability platform. To address this issue, we defend the thesis that: (i) integrating a standardized information model with a knowledge model makes it possible to implement semantic data warehouses and to optimize the use of data; (ii) terminological and ontological resources help interconnect distributed, heterogeneous resources; (iii) the way data are represented affects their exploitation and helps optimize decision-support systems (e.g., monitoring tools). Using innovative methods and Semantic Web tools, we optimized the integration and exploitation of clinical data to implement a monitoring system assessing the evolution of bacterial resistance to antibiotics in Europe. As a first step, we defined the multidimensional model of a semantic data warehouse based on existing standards such as HL7. We then articulated these data with domain knowledge about infectious diseases. To do so, we represented the data, through their structure, vocabulary and semantics, in a "data definition ontology", and mapped them to the domain ontology via mapping rules. We proposed a method for the semi-automatic generation of the data definition ontology from a database schema, reusing existing tools and project results. Finally, the data warehouse and the semantic resources are accessed and used through a semantic interoperability system developed in the framework of the European project DebugIT (Detecting and Eliminating Bacteria UsinG Information Technology), which we tested at the Georges Pompidou University Hospital (HEGP, France).
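To make the "data definition ontology" idea concrete, here is a minimal illustrative sketch, not taken from the thesis: every table, column and URI name below is invented. It derives schema-level triples from relational metadata and links them to a domain ontology through hand-written mapping rules:

```python
# Illustrative sketch only: derive a minimal "data definition ontology"
# from relational metadata, then map it to domain concepts via rules.
# All names (tables, columns, URIs) are invented for the example.

schema = {  # toy relational schema: table -> columns
    "lab_result": ["patient_id", "organism", "antibiotic", "resistant"],
    "patient": ["patient_id", "birth_date", "ward"],
}

def data_definition_ontology(schema):
    """Represent each table as a class and each column as a property."""
    triples = []
    for table, columns in schema.items():
        triples.append((f"ddo:{table}", "rdf:type", "owl:Class"))
        for col in columns:
            triples.append((f"ddo:{table}.{col}", "rdf:type", "owl:DatatypeProperty"))
            triples.append((f"ddo:{table}.{col}", "rdfs:domain", f"ddo:{table}"))
    return triples

# Hand-written mapping rules linking schema elements to a domain ontology.
mapping_rules = {
    "ddo:lab_result.organism": "domain:Bacterium",
    "ddo:lab_result.antibiotic": "domain:Antibiotic",
}

triples = data_definition_ontology(schema)
triples += [(s, "skos:exactMatch", o) for s, o in mapping_rules.items()]
for t in triples:
    print(t)
```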
Loizillon, Sophie. "Deep learning for automatic quality control and computer-aided diagnosis in neuroimaging using a large-scale clinical data warehouse." Electronic thesis or dissertation, Sorbonne Université, 2024. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2024SORUS258.pdf.
A patient's hospitalisation generates data about their health, which is essential to ensure that they receive the best possible care. Over the last decade, clinical data warehouses (CDWs) have been created to exploit this vast amount of clinical information for research purposes. CDWs offer remarkable potential for research by bringing together a huge amount of real-world data of diverse natures (electronic health records, imaging data, pathology and laboratory tests...) from up to millions of patients. Access to such large clinical routine datasets, which closely reflect what is acquired daily in clinical practice, is a major advantage for developing and deploying powerful artificial intelligence models in clinical routine. Currently, most computer-aided diagnosis models are limited by training performed only on research datasets, with patients meeting strict inclusion criteria and data acquired under highly standardised research protocols that differ considerably from the realities of clinical practice. This gap between research and clinical data prevents AI systems from generalising well in clinical practice. This thesis examined how to leverage brain MRI data from a clinical data warehouse for research purposes. Because images gathered in a CDW are highly heterogeneous, especially regarding their quality, we first focused on developing an automated solution capable of effectively identifying corrupted images. We improved the initial automated quality control of 3D T1-weighted brain MRI developed by Bottani et al. (2021) by proposing an innovative transfer learning method that leverages artefact simulation. In the second work, we extended our automatic quality control from T1-weighted MRI to another common anatomical sequence, 3D FLAIR. As machine learning models are sensitive to distribution shifts, we proposed a semi-supervised domain adaptation framework. Our automatic quality control tool was able to identify images that are not proper 3D FLAIR brain MRIs and to assess overall image quality with only a limited number of newly annotated FLAIR images. Lastly, we conducted a feasibility study assessing the potential of variational autoencoders for unsupervised anomaly detection. We obtained promising results showing a correlation between Fazekas scores and the volumes of the lesions segmented by our model, as well as the robustness of the method to image quality. Nevertheless, we still observed failure cases in which no lesion at all was detected in lesional cases, which prevents this type of model from being used in clinical routine for now. Although clinical data warehouses are an incredible research ecosystem, enabling a better understanding of the health of the general population and, in the long term, contributing to the development of predictive and preventive medicine, their use for research purposes is not without difficulties.
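As a purely illustrative aside, not the thesis code, artefact simulation for pretraining a quality-control classifier can be sketched as below. The corruption model, random phase shifts applied to a fraction of k-space lines, is a common simplified stand-in for motion ghosting; all parameters are invented:

```python
# Illustrative sketch only (not the thesis code): simulate a simple
# motion-ghosting artefact in k-space so that a quality-control
# classifier can be pre-trained on (clean, corrupted) pairs.
import numpy as np

def simulate_motion_ghosting(image, corrupted_fraction=0.1, max_shift=4.0):
    """Corrupt a 2D slice by adding random phase shifts to a fraction
    of k-space lines, mimicking inter-line patient motion."""
    kspace = np.fft.fft2(image)
    n_lines = image.shape[0]
    n_corrupt = int(corrupted_fraction * n_lines)
    rng = np.random.default_rng(0)
    lines = rng.choice(n_lines, size=n_corrupt, replace=False)
    ky = np.fft.fftfreq(image.shape[1])
    for line in lines:
        shift = rng.uniform(-max_shift, max_shift)  # apparent motion, in pixels
        kspace[line, :] *= np.exp(-2j * np.pi * ky * shift)
    return np.abs(np.fft.ifft2(kspace))

clean = np.random.rand(64, 64)          # stand-in for an MRI slice
corrupted = simulate_motion_ghosting(clean)
dataset = [(clean, 0), (corrupted, 1)]  # labels: 0 = good, 1 = artefacted
```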
El Malki, Mohammed. "Modélisation NoSQL des entrepôts de données multidimensionnelles massives." Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20139/document.
Decision support systems occupy a large space in companies and large organizations, enabling analyses dedicated to decision-making. With the advent of big data, the volume of analysed data reaches critical sizes, challenging conventional data warehousing approaches, whose current solutions are mainly based on R-OLAP databases. With the emergence of major Web platforms such as Google, Facebook, Twitter, and Amazon, many solutions for processing big data have been developed, known as "Not Only SQL" (NoSQL). These new approaches are an interesting attempt to build multidimensional data warehouses capable of handling large volumes of data. Questioning the R-OLAP approach requires revisiting the principles of multidimensional data warehouse modelling. In this manuscript, we propose implementation processes for multidimensional data warehouses with NoSQL models. We define four processes for each of two models: a column-oriented NoSQL model and a document-oriented model. Each of these processes favours a specific type of processing. Moreover, the NoSQL context complicates the computation of the effective pre-aggregates that are typically set up in the R-OLAP context (the aggregate lattice). We therefore extend our implementation processes to take the construction of the lattice into account in both retained models. As it is difficult to choose a single NoSQL implementation that effectively supports all the applicable treatments, we also propose two translation processes: the first concerns intra-model processes, i.e., rules for passing from one implementation to another within the same NoSQL logical model, while the second defines the rules for transforming an implementation of one logical model into an implementation of another logical model.
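A minimal sketch may help picture the two logical models discussed in the abstract; field names and values are invented, not taken from the thesis. The same fact is laid out once as an embedded document and once as a row with column families, and a toy roll-up computes one node of the aggregate lattice:

```python
# Illustrative sketch only: one fact expressed in the two NoSQL
# logical models discussed above. Names and values are invented.

# Document-oriented model: the fact embeds its dimensions in one document.
fact_document = {
    "_id": "f1",
    "measures": {"admissions": 12, "avg_stay_days": 3.4},
    "dim_time": {"day": "2016-03-01", "month": "2016-03", "year": 2016},
    "dim_hospital": {"name": "CHU A", "region": "Occitanie"},
}

# Column-oriented model: one row key, with dimensions and measures
# grouped into column families (simulated here as nested dicts).
fact_row = {
    "row_key": "f1",
    "cf_measures": {"admissions": "12", "avg_stay_days": "3.4"},
    "cf_time": {"day": "2016-03-01", "month": "2016-03", "year": "2016"},
    "cf_hospital": {"name": "CHU A", "region": "Occitanie"},
}

def roll_up(facts, dim, level, measure):
    """Toy aggregation (one lattice node): sum a measure by a dimension level."""
    acc = {}
    for f in facts:
        key = f[dim][level]
        acc[key] = acc.get(key, 0) + f["measures"][measure]
    return acc

print(roll_up([fact_document], "dim_time", "year", "admissions"))
```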
Benitez-Guerrero, Edgard. "Infrastructure adaptable pour l'évolution des entrepôts de données." Université Joseph Fourier (Grenoble), 2002. http://tel.archives-ouvertes.fr/tel-00010335.
Sautot, Lucile. "Conception et implémentation semi-automatique des entrepôts de données : application aux données écologiques." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS055/document.
This thesis concerns the semi-automatic design of data warehouses and the associated OLAP cubes for analysing ecological data. The biological sciences, including ecology and agronomy, generate data whose collection requires an important effort: several years are often needed to obtain a complete data set. Moreover, the objects and phenomena studied by these sciences are complex, and many parameters must be recorded for them to be understood. Finally, collecting complex data over a long time increases the risk of inconsistency. These sciences thus generate numerous, heterogeneous and possibly inconsistent data. It is therefore worthwhile to offer scientists working in the life sciences information systems able to store and restore their data, particularly when those data have a significant volume. Among existing tools, business intelligence tools, including On-Line Analytical Processing (OLAP) systems, particularly caught our attention, because they are data analysis processes working on large historical collections (i.e., a data warehouse) to support decision-making. Business intelligence offers tools that allow users to explore large volumes of data in order to discover patterns and knowledge within the data, and possibly to confirm their hypotheses. However, OLAP systems are complex information systems whose implementation requires advanced skills in business intelligence. Thus, although they have interesting features for managing and analysing multidimensional data, their complexity makes them difficult to handle for potential users who are not computer scientists. In the literature, several studies have examined automatic multidimensional design, but the examples provided in these works concern traditional data. Other articles address multidimensional modelling adapted to complex data (inconsistency, heterogeneous data, spatial objects, texts, images within a warehouse...), but the proposed methods are rarely automatic. The aim of this thesis is to provide an automatic design method for data warehouses and OLAP cubes that is able to take into account the inherent complexity of biological data. To test the prototypes proposed in this thesis, we prepared a data set concerning bird abundance along the Loire. This data set is structured as follows: (1) a census of 213 bird species (described with a set of qualitative factors, such as diet) at 198 points along the river, over 4 census campaigns; (2) each of the 198 points is described by a set of environmental variables from different sources (land surveys, satellite images, GIS). These environmental variables raise the most important issues in terms of multidimensional modelling: they come from different sources, sometimes independent of the bird census campaigns, and are inconsistent in time and space. Moreover, they are heterogeneous: qualitative factors, quantitative variables or spatial objects. Finally, these environmental data include a large number of attributes (158 selected variables) (...)
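As an illustration of what automatic multidimensional design means in its simplest possible form (the thesis method is far richer), one can infer candidate measures and dimensions from column types; the field names and values below are invented:

```python
# Illustrative sketch only: a crude version of automatic multidimensional
# design, inferring candidate measures and dimensions from column types.

records = [
    {"site": "P001", "species": "Alcedo atthis", "count": 3, "ndvi": 0.62},
    {"site": "P002", "species": "Ardea cinerea", "count": 1, "ndvi": 0.55},
]

def propose_star_schema(records):
    """Numeric columns become candidate measures, the rest dimensions."""
    measures, dimensions = set(), set()
    for rec in records:
        for field, value in rec.items():
            if isinstance(value, (int, float)) and not isinstance(value, bool):
                measures.add(field)
            else:
                dimensions.add(field)
    return {"measures": sorted(measures), "dimensions": sorted(dimensions)}

print(propose_star_schema(records))
# {'measures': ['count', 'ndvi'], 'dimensions': ['site', 'species']}
```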
Bouchakri, Rima. "Conception physique statique et dynamique des entrepôts de données." Thesis, Chasseneuil-du-Poitou, École nationale supérieure de mécanique et d'aérotechnique, 2015. http://www.theses.fr/2015ESMA0012/document.
Data warehouses store a huge amount of data in a single location. They are queried by complex decisional queries called star join queries. To optimize such queries, several works propose algorithms for selecting optimization techniques, such as binary join indexes and horizontal partitioning, during the physical design of the data warehouse (DW). However, these works propose static algorithms, select optimization techniques in an isolated way, and focus on a single objective, query performance. Our main contribution in this thesis is a new vision of optimization technique selection. Our first contribution is an incremental selection that continuously updates the optimization scheme implemented on the DW, to ensure that queries remain optimized over time. To deal with the increasing complexity of queries, our second contribution is a joint incremental selection of two optimization techniques, which covers the optimization of a maximum number of queries while respecting the optimization constraints. Finally, we note that incremental selection generates a maintenance cost for updating the optimization schemes. Our third proposition is therefore to formulate and solve a multi-objective selection problem for optimization techniques, with two objectives to optimize: query performance and the maintenance cost of the DW.
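A hedged sketch of the general flavour of such selection algorithms follows; it is not the thesis algorithm, and the benefits, sizes and cost model are invented. It greedily chooses techniques under a storage constraint:

```python
# Illustrative sketch only: a greedy flavour of selecting optimization
# techniques (e.g., bitmap join indexes) under a storage constraint.
# Benefits and sizes are invented numbers, not the thesis cost model.

candidates = {          # technique -> (total query-cost saving, size in GB)
    "bji_customer": (120.0, 4.0),
    "bji_date": (80.0, 1.0),
    "partition_sales_by_year": (200.0, 0.0),  # partitioning needs no extra space
    "bji_product": (60.0, 3.0),
}

def greedy_selection(candidates, storage_budget_gb):
    chosen, used = [], 0.0
    # Pick the best saving-per-GB first (space-free techniques rank highest).
    ranked = sorted(candidates.items(),
                    key=lambda kv: kv[1][0] / (kv[1][1] + 1e-9),
                    reverse=True)
    for name, (saving, size) in ranked:
        if used + size <= storage_budget_gb:
            chosen.append(name)
            used += size
    return chosen

print(greedy_selection(candidates, storage_budget_gb=5.0))
```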
Boly, Aliou. "Fonctions d'oubli et résumés dans les entrepôts de données." Paris, ENST, 2006. http://www.theses.fr/2006ENST0049.
The amount of data stored in data warehouses grows so quickly that they become saturated. To overcome this problem, the usual solution is to archive older data when new data arrive and no space is left. This solution is not satisfactory, because data mining analyses based on long-term historical data become impossible: such analyses cannot be run on archived data without reloading them into the data warehouse, and the cost of loading back a large archived dataset is too high to be incurred for a single analysis. Archived data must therefore be considered lost as far as data mining applications are concerned. In this thesis, we propose a solution to this problem: a language for specifying forgetting functions on older data. The specifications include the definition of summaries of the deleted data, stating what data should be present in the data warehouse at each point in time. These summaries, which are kept in the data warehouse, consist of aggregates and samples of the deleted data. The goal of the forgetting functions is to control the size of the data warehouse; this control is provided both for the aggregate summaries and for the samples. The specification language for forgetting functions is defined in the context of relational databases. Once forgetting functions have been specified, the data warehouse is automatically updated in order to follow the specifications. This thesis presents the specification language, the structure of the summaries, the algorithms for updating the data warehouse, and the possibilities for performing interesting analyses of historical data.
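The mechanism described here lends itself to a short illustration. The following sketch, with an invented row layout and invented thresholds (it is not the specification language of the thesis), replaces detail rows older than a threshold with an aggregate plus a random sample:

```python
# Illustrative sketch only: a "forgetting function" that, past a given
# age, replaces detail rows by an aggregate plus a small random sample.
import random

def forget(rows, max_age_days, sample_size, today):
    """Split rows into kept details and a summary of the forgotten ones."""
    old = [r for r in rows if today - r["day"] > max_age_days]
    kept = [r for r in rows if today - r["day"] <= max_age_days]
    if not old:
        return kept, None
    summary = {
        "count": len(old),
        "sum_amount": sum(r["amount"] for r in old),
        "sample": random.sample(old, min(sample_size, len(old))),
    }
    return kept, summary

rows = [{"day": d, "amount": d * 1.5} for d in range(1, 101)]
details, summary = forget(rows, max_age_days=30, sample_size=5, today=100)
print(len(details), summary["count"], summary["sum_amount"])
```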
Badri, Mohamed. "Maintenance des entrepôts de données issus de sources hétérogènes." Paris 5, 2008. http://www.theses.fr/2008PA05S006.
This work belongs to the field of data warehouses (DWs). DWs are at the core of decision-making information systems and support decision-making tools (OLAP, data mining, reporting). A DW is a living entity whose content is continuously fed and refreshed. Updating the aggregates of a DW is crucial for decision-making, which is why DW maintenance has a strategic place in the decision-support process and also serves as a performance criterion for a DW system. As communication technologies, especially the Internet, grow steadily, data are becoming more and more heterogeneous and distributed; they can be classified into three categories: structured data, semi-structured data and unstructured data. In this work, we first present a modelling approach aimed at integrating all these data. On the basis of this approach, we then propose a process that ensures incremental maintenance of the warehouse data and aggregates. We also propose a tree structure for managing aggregates, together with algorithms that ensure its evolution. Given the context of heterogeneity, all our proposals are independent of the warehouse model and of its management system. To validate our contribution, the Heterogeneous Data Integration and Maintenance (HDIM) prototype has been developed and some experiments performed.
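As an illustrative sketch of incremental aggregate maintenance with a tree structure (the hierarchy and fact layout below are invented; the thesis proposal is model-independent and more general), each new fact updates the totals along its path instead of triggering a recomputation from scratch:

```python
# Illustrative sketch only: incremental maintenance of an aggregate tree.
# Each new fact updates every node on its path (year -> month -> day).

class AggNode:
    def __init__(self):
        self.total = 0.0
        self.children = {}

    def insert(self, path, value):
        self.total += value                      # maintain this level
        if path:                                 # descend the hierarchy
            child = self.children.setdefault(path[0], AggNode())
            child.insert(path[1:], value)

root = AggNode()
for fact in [("2008", "01", "15", 10.0), ("2008", "01", "16", 5.0),
             ("2008", "02", "01", 7.0)]:
    *path, amount = fact
    root.insert(path, amount)

print(root.total)                                  # 22.0: grand total
print(root.children["2008"].children["01"].total)  # 15.0: January 2008
```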
Aouiche, Kamel. "Techniques de fouille de données pour l'optimisation automatique des performances des entrepôts de données." Lyon 2, 2005. http://theses.univ-lyon2.fr/documents/lyon2/2005/aouiche_k.
With the development of databases in general and data warehouses in particular, it becomes very important to reduce the administration function. The aim of self-administering systems is to administer and adapt themselves automatically, without loss, or even with a gain, in performance. The idea of using data mining techniques to extract, from the data themselves, knowledge useful for administration has been in the air for some years; however, as far as we know, no such research had actually been carried out. It nevertheless remains a very promising approach, notably in the field of data warehousing, where queries are very heterogeneous and cannot be interpreted easily. The aim of this thesis is to study self-administration techniques in databases and data warehouses, mainly performance optimization techniques such as indexing and view materialization, and to look for a way of extracting, from the stored data themselves, the knowledge needed to apply these techniques. We designed a tool that finds an index and view configuration that optimizes data access time: it searches for frequent itemsets in a given workload and clusters the query workload to compute this configuration. Finally, we extended this performance optimization to XML data warehouses, for which we proposed an indexing technique that precomputes joins between XML facts and dimensions, and adapted our materialized view selection strategy to XML materialized views.
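A minimal sketch of the workload-mining step follows (an invented toy workload, not the thesis implementation): it counts frequently co-occurring attributes, which then become candidates for indexes or materialized views:

```python
# Illustrative sketch only: mining frequently co-occurring attributes in
# a query workload, a first step toward choosing indexes or materialized
# views. Queries are toy attribute sets, not a real SQL parser's output.
from itertools import combinations
from collections import Counter

workload = [
    {"store", "date", "amount"},
    {"store", "date"},
    {"store", "product", "amount"},
    {"store", "date", "product"},
]

def frequent_itemsets(workload, min_support=2, max_size=2):
    counts = Counter()
    for attrs in workload:
        for size in range(1, max_size + 1):
            for combo in combinations(sorted(attrs), size):
                counts[combo] += 1
    return {c: n for c, n in counts.items() if n >= min_support}

# Attribute sets used together often are candidates for indexing.
for itemset, support in sorted(frequent_itemsets(workload).items()):
    print(itemset, support)
```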
Khrouf, Kaïs. "Entrepôts de documents : de l'alimentation à l'exploitation." Toulouse 3, 2004. http://www.theses.fr/2004TOU30109.
In this thesis, we propose the concept of a document warehouse, which consists in storing heterogeneous, selected and filtered documents and classifying them according to generic logical structures (structures common to a set of documents). Such an organization facilitates the exploitation of the integrated documentary information through several complementary techniques: information retrieval, which restores document granules in response to a query formulated with keywords (free language); data interrogation, which restores factual data (structure or content) using a declarative language; and multidimensional analysis, which manipulates warehouse information according to dimensions that are not predefined. To validate our proposals, we developed a support tool, DOCWARE (DOCument WAREhouse), for the integration and analysis of documents.
Books on the topic "Data warehouses – Medicine"
Rode, Gilles. Handicap, médecine physique et réadaptation, guide pratique. Montrouge: Édition Xavier Montauban, 2003.
Inmon, W. H. Building the Data Warehouse. New York: John Wiley & Sons, Ltd., 2005.
Burkey, Roxanne E., and Charles V. Breakfield, eds. Designing a total data solution: Technology, implementation, and deployment. Boca Raton, FL: Auerbach, 2001.
Silvers, Fon. Building and Maintaining a Data Warehouse. London: Taylor and Francis, 2008.
Sanders, Roger E. DB2 universal database application programming interface (API) developer's guide. New York: McGraw-Hill, 2000.
Aiken, Peter, ed. Building corporate portals using XML. New York: McGraw-Hill, 2000.
Gupta, Ashish, and Inderpal Singh Mumick, eds. Materialized views: Techniques, implementations, and applications. Cambridge, Mass.: MIT Press, 1998.
Kimball, Ralph. The data warehouse toolkit: Practical techniques for building dimensional data warehouses. New York: John Wiley & Sons, 1996.
Katcher, Brian S. MEDLINE: A guide to effective searching in PubMed and other interfaces. 2nd ed. San Francisco: Ashbury Press, 2006.
Trouver le texte intégralChapitres de livres sur le sujet "Entrepôts de données – Médecine"
Goffinet, F., N. Lelong, A. C. Thieulin, V. Vodovar, L. Faure, T. Andrieu, and B. Khoshnood. "Évaluation en population du dépistage prénatal des cardiopathies congénitales : Données du registre des malformations congénitales de Paris et de la cohorte EPICARD." In 41es Journées nationales de la Société Française de Médecine Périnatale (Grenoble 12–14 octobre 2011), 141–56. Paris: Springer Paris, 2011. http://dx.doi.org/10.1007/978-2-8178-0257-2_14.
Wack, Maxime. "Entrepôts de données cliniques." In Intégration de données biologiques, 9–31. ISTE Group, 2022. http://dx.doi.org/10.51926/iste.9030.ch1.
Rebouillat, Violaine, and Joachim Schöpfel. "Le dispositif d'entrepôt de données de recherche." In Partage et valorisation des données de la recherche, 7–37. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9073.ch1.
Hahnel, Mark. "Figshare : une place pour les résultats de la recherche scientifique ouverte." In Partage et valorisation des données de la recherche, 193–216. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9073.ch10.
Schöpfel, Joachim. "Enjeux et perspectives des entrepôts de données de recherche." In Partage et valorisation des données de la recherche, 231–50. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9073.ch12.
Godeau, Emmanuelle, Marlène Monégat, and Dibia Pacoricona Alfaro. "Données épidémiologiques en santé." In Médecine et Santé de L'adolescent, 29–34. Elsevier, 2019. http://dx.doi.org/10.1016/b978-2-294-75919-2.00003-5.
Brochériou, I. "Données Anatomopathologiques en Pathologie Vasculaire." In Traité de médecine vasculaire, 5–28. Elsevier, 2010. http://dx.doi.org/10.1016/b978-2-294-70917-3.50001-0.
Schöpfel, Joachim. "Le paysage des entrepôts de données de recherche en France." In Partage et valorisation des données de la recherche, 39–55. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9073.ch2.
Weisweiler, Nina, and Gabriele Kloska. "COREF : un projet pour le développement de re3data." In Partage et valorisation des données de la recherche, 217–30. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9073.ch11.
Haak, Wouter, Juan García Morgado, Jennifer Rutter, Alberto Zigoni, and David Tucker. "Mendeley Data." In Partage et valorisation des données de la recherche, 167–91. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9073.ch9.
Texte intégralActes de conférences sur le sujet "Entrepôts de données – Médecine"
Fricain, J. C. "Mucites : une prise en charge basée sur la preuve." In 66ème Congrès de la SFCO. Les Ulis, France: EDP Sciences, 2020. http://dx.doi.org/10.1051/sfco/20206601008.
Organization reports on the topic "Data warehouses – Medicine"
McAdams-Roy, Kassandra, Philippe Després, and Pierre-Luc Déziel. La gouvernance des données dans le domaine de la santé : Pour une fiducie de données au Québec ? Observatoire international sur les impacts sociétaux de l'intelligence artificielle et du numérique, February 2023. http://dx.doi.org/10.61737/nrvw8644.
Catherine, Hugo. Étude comparative des services nationaux de données de recherche : facteurs de réussite. Ministère de l'enseignement supérieur et de la recherche, January 2021. http://dx.doi.org/10.52949/6.
Vaillancourt, François, Brahim Boudarbat, and Feriel Grine. Le rendement privé et social de la scolarité postsecondaire professionnelle, collégiale et universitaire au Québec : résultats pour 2020. CIRANO, November 2024. http://dx.doi.org/10.54932/zzbr5677.
Rousseau, Henri-Paul. Gutenberg, L'université et le défi numérique. CIRANO, December 2022. http://dx.doi.org/10.54932/wodt6646.