Academic literature on the topic 'Données massives – Prise de décision'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Données massives – Prise de décision.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Données massives – Prise de décision"
Fujiki, Kenji, and Mélanie Laleau. "Une approche géographique pour spatialiser les besoins en hébergements d'urgence en situation de crise : une étude appliquée au cas d'une évacuation massive provoquée par une crue majeure de la Seine en région francilienne." La Houille Blanche, no. 3-4 (October 2019): 75–83. http://dx.doi.org/10.1051/lhb/2019043.
Jullian-Desayes, Ingrid, Marie Joyeux-Faure, Sébastien Baillieul, Rita Guzun, Renaud Tamisier, and Jean-Louis Pepin. "Quelles perspectives pour le syndrome d’apnées du sommeil et la santé connectée ?" L'Orthodontie Française 90, no. 3-4 (September 2019): 435–42. http://dx.doi.org/10.1051/orthodfr/2019019.
Somé, Mwinisso Géraldine, Hervé Hien, Ziemlé Clément Méda, Bernard Ilboudo, Ali Sié, Mamadou Bountogo, Moubassira Kagoné, et al. "Utilisation des données probantes pour la prise de décision en santé publique au Burkina Faso." Santé Publique 36, no. 6 (December 18, 2024): 99–108. https://doi.org/10.3917/spub.246.0099.
IFOURAH, Hocine, and Tayeb CHABI. "information comptable et financière face à la prise de décision dans les entreprises Algériennes." Journal of Academic Finance 11, no. 1 (June 30, 2020): 104–21. http://dx.doi.org/10.59051/joaf.v11i1.394.
Temprado, Jean-Jacques. "Prise de décision en sport : modalités d'études et données actuelles." STAPS 10, no. 19 (1989): 53–67. http://dx.doi.org/10.3406/staps.1989.1523.
CHENARD, Pierre. "L’utilisation de l’information par les cégépiens du secteur général pour leur orientation vers l’université, une étude de sociologie institutionnelle." Sociologie et sociétés 20, no. 1 (September 30, 2002): 71–82. http://dx.doi.org/10.7202/001135ar.
Légaré, France. "Le partage des décisions en santé entre patients et médecins." Recherche 50, no. 2 (September 21, 2009): 283–99. http://dx.doi.org/10.7202/037958ar.
Besse, Philippe, Céline Castets-Renard, and Aurélien Garivier. "L’IA du Quotidien peut-elle être Éthique ?" Statistique et société 6, no. 3 (2018): 9–31. https://doi.org/10.3406/staso.2018.1083.
Gnoumou Thiombiano, Bilampoa. "Genre et prise de décision au sein du ménage au Burkina Faso." Articles 43, no. 2 (January 9, 2015): 249–78. http://dx.doi.org/10.7202/1027979ar.
Cozic, Mikaël. "Anti-réalisme, rationalité limitée et théorie expérimentale de la décision." Social Science Information 48, no. 1 (March 2009): 35–56. http://dx.doi.org/10.1177/0539018408099636.
Full textDissertations / Theses on the topic "Données massives – Prise de décision"
Belghaouti, Fethi. "Interopérabilité des systèmes distribués produisant des flux de données sémantiques au profit de l'aide à la prise de décision." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL003.
Internet is an infinite source of data coming from sources such as social networks or sensors (home automation, smart city, autonomous vehicle, etc.). These heterogeneous and increasingly large data can be managed through semantic web technologies, which propose to homogenize and link these data and reason above them, and through data flow management systems, which mainly address the problems related to volume, volatility and continuous querying. The alliance of these two disciplines has seen the growth of semantic data stream management systems, also called RSP (RDF Stream Processing systems). The objective of this thesis is to allow these systems, via new approaches and "low cost" algorithms, to remain operational, and even more efficient, for large input data volumes and/or with limited system resources. To reach this goal, our thesis is mainly focused on the issue of "processing semantic data streams in a context of computer systems with limited resources". It directly contributes to answering the following research questions: (i) How to represent semantic data streams? And (ii) How to deal with input semantic data when their rates and/or volumes exceed the capabilities of the target system? As a first contribution, we propose an analysis of the data in semantic data streams in order to consider a succession of star graphs instead of just a succession of independent triples, thus preserving the links between the triples. By using this approach, we significantly improved the quality of responses of some well-known sampling algorithms for load-shedding. The analysis of the continuous query allows the optimisation of this solution by selecting the irrelevant data to be load-shed first. In the second contribution, we propose an algorithm for detecting frequent RDF graph patterns in semantic data streams. We called it FreGraPaD, for Frequent RDF Graph Patterns Detection. It is a one-pass, memory-oriented and "low-cost" algorithm. It uses two main data structures: a bit-vector to build and identify the RDF graph pattern, thus providing memory space optimization; and a hash-table for storing the patterns. The third contribution of our thesis consists of a deterministic load-shedding solution for RSP systems, called POL (Pattern Oriented Load-shedding for RDF Stream Processing systems). It uses very low-cost boolean operators, applied to the binary patterns built from the data and the continuous query, in order to determine which data is irrelevant and can be ejected upstream of the system. It guarantees a recall of 100%, reduces the system load and improves response time. Finally, in the fourth contribution, we propose Patorc (Pattern Oriented Compression for RSP systems), an online compression tool for RDF streams. It is based on factorizing the frequent patterns present in RDF data streams. It is a lossless compression solution that still allows querying without any need for decompression. This thesis provides solutions that extend existing RSP systems and make them able to scale in a big data context. These solutions allow RSP systems to deal with one or more semantic data streams arriving at different speeds, without losing response quality while ensuring availability, even beyond their physical limitations. The conducted experiments and the obtained results show that extending existing systems with the new solutions improves their performance: they illustrate the considerable decrease in the engines' response time, increasing their processing rate threshold while optimizing the use of system resources.
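As an illustration of the general idea summarized in this abstract (a one-pass, memory-bounded detection of frequent patterns using a bit-vector signature and a hash table), here is a minimal sketch in Python. It is not the thesis's FreGraPaD implementation; the predicate vocabulary, the grouping of triples into star graphs by subject, and the support threshold are assumptions made only for the example.

```python
from collections import defaultdict

# Hypothetical predicate vocabulary; each position in the bit-vector
# corresponds to one predicate (assumption for this sketch).
PREDICATES = ["type", "name", "locatedIn", "temperature"]
PRED_INDEX = {p: i for i, p in enumerate(PREDICATES)}

def star_signature(triples_of_one_subject):
    """Encode the star graph of a subject as a bit-vector (an int):
    bit i is set when predicate i occurs in the star."""
    sig = 0
    for _subj, pred, _obj in triples_of_one_subject:
        sig |= 1 << PRED_INDEX[pred]
    return sig

def detect_frequent_patterns(stream_of_stars, min_support=2):
    """One pass over star graphs; count signatures in a hash table
    and return those reaching the support threshold."""
    counts = defaultdict(int)
    for star in stream_of_stars:
        counts[star_signature(star)] += 1
    return {sig: c for sig, c in counts.items() if c >= min_support}

# Tiny usage example on a toy stream of star graphs.
stream = [
    [("s1", "type", "Sensor"), ("s1", "temperature", "21")],
    [("s2", "type", "Sensor"), ("s2", "temperature", "19")],
    [("s3", "type", "City"), ("s3", "name", "Paris")],
]
print(detect_frequent_patterns(stream))  # the {type, temperature} pattern is frequent
```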
Conort, Paul. "Le Big Data au service de la création : Au-Delà des tensions, le knowledge brokering pour gérer la co-création de valeur à partir des données utilisateur." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX126.
For many companies, effectively leveraging Big Data to generate value remains a challenge, especially in creative industries. This thesis by articles explores the impact of Big Data on creative projects within the video game industry and examines how insights from Big Data can be integrated. Drawing on two main streams of literature, Big Data and knowledge brokering, it explores how Big Data influences decision-making processes and value creation, highlighting knowledge brokering (KB) as a mechanism that facilitates the creation and dissemination of Big Data insights among project stakeholders. The research framework is based on four years of observations and 57 semi-structured interviews within the creative projects of a video game company. The first article explores the uses of Big Data in creative projects and the resulting tensions. Three uses of Big Data are distinguished: decision support, exploration tool, and negotiation artifact. Eight organizational tensions are identified around three foci: control, coordination, and decision-making. These tensions negatively impact creativity, underscoring the delicate balance between data utilization and maintaining creativity. The second article describes the process of integrating Big Data analyses in three stages: anticipation, analysis, and alignment. The anticipation stage involves updating analysis needs based on the evolving environment and project requirements. The analysis stage reformulates stakeholders' questions and prioritizes them before applying an appropriate reasoning mode. Finally, the alignment stage allows stakeholders to informally converge on a common interpretation of the analyses (through the exchange of tacit knowledge) and disseminate a narrative of the data-driven decisions. Intermediaries emerge to facilitate relationships between stakeholders. The third article examines the conditions for the emergence of KB and its effects on collaboration among Big Data stakeholders. Three main challenges are identified: attention management, information retrieval, and processing. The establishment of KB structures and the arrival of coordinators promote the integration of data into projects. Alters (analysts, designers, project managers) agree to participate in this intermediation process because they find benefits in it: access to information, development of their expertise, and creation of new shared knowledge. Thus, the creation of value from Big Data in creative projects involves creating user knowledge, which requires informal exchanges among many actors, including the user, the analyst, and the designer. The emergence of KB in this context creates the necessary spaces and times for these exchanges to result in new user knowledge, which will be used by creative projects. The thesis makes several contributions: clarifying the link between Big Data and value creation for creative projects, identifying the tensions generated by Big Data integration, and proposing KB as a mechanism that can moderate them. It also reveals factors for the emergence of knowledge brokers and reasons that motivate alters to participate in the knowledge brokering process. Managerial implications suggest that integrating Big Data brings a paradigm shift in which the user becomes central, but it also generates tensions. A three-phase process (anticipation, analysis, alignment) is proposed to foster knowledge creation, and it is suggested to identify intermediaries and to support their coordination activities in this process.
Vazquez, Llana Jordan Diego. "Environnement big data et prise de décision intuitive : le cas du Centre d'Information et de Commandement (CIC) de la Police nationale des Bouches du Rhône (DDSP 13)." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE3063.
Godé and Vazquez have previously demonstrated that French Police teams operate in extreme contexts (Godé & Vazquez, 2017), simultaneously marked by high levels of change, uncertainty and risks that are mainly vital, material and legal (Godé, 2016), but also technological. In this context, the notion of a big data environment can affect the police decision-making process. The research question of this thesis is: "What is the status of intuition in the decision-making process in a big data environment?" We explain how the growth of available information volumes, the great diversity of their sources (social networks, websites, connected objects), their speed of diffusion (in real time or near real time) and their unstructured nature (Davenport & Soulard, 2014) introduce new decision-making challenges for National Police forces.
Nicol, Olivier. "Data-driven evaluation of contextual bandit algorithms and applications to dynamic recommendation." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10211/document.
The context of this thesis work is dynamic recommendation. Recommendation is the action, for an intelligent system, of supplying a user of an application with personalized content so as to enhance what is referred to as the "user experience", e.g. recommending a product on a merchant website or an article on a blog. Recommendation is considered dynamic when the content to recommend or the users' tastes evolve rapidly, e.g. news recommendation. Many applications that are of interest to us generate a tremendous amount of data through the millions of online users they have. Nevertheless, using this data to evaluate a new recommendation technique, or even to compare two dynamic recommendation algorithms, is far from trivial. This is the problem we consider here. Some approaches have already been proposed. Nonetheless, they were not studied very thoroughly, either from a theoretical point of view (unquantified bias, loose convergence bounds...) or from an empirical one (experiments on private data only). In this work we start by filling many blanks within the theoretical analysis. Then we comment on the results of an experiment of unprecedented scale in this area: a public challenge we organized. This challenge, along with some complementary experiments, revealed an unexpected source of huge bias: time acceleration. The rest of this work tackles this issue. We show that a bootstrap-based approach allows us to significantly reduce this bias and, more importantly, to control it.
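As background to the data-driven evaluation problem described in this abstract, the sketch below shows the standard replay estimator for offline evaluation of a contextual bandit policy from logged interactions, with a simple bootstrap over the log. It assumes uniformly random logged actions and toy data; it illustrates the general approach only, not the thesis's evaluation protocol or the challenge it mentions.

```python
import random

def replay_evaluate(policy, logged_events):
    """Replay estimator: keep only the logged events whose logged action
    matches the policy's choice, and average their rewards.
    Assumes actions in the log were chosen uniformly at random."""
    matched_rewards = [
        reward
        for context, logged_action, reward in logged_events
        if policy(context) == logged_action
    ]
    return sum(matched_rewards) / len(matched_rewards) if matched_rewards else 0.0

def bootstrap_replay(policy, logged_events, n_resamples=100, seed=0):
    """Bootstrap the log to get a rough dispersion of the replay estimate."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_resamples):
        resample = [rng.choice(logged_events) for _ in logged_events]
        estimates.append(replay_evaluate(policy, resample))
    return estimates

# Toy log: (context, action actually shown, observed reward).
log = [({"hour": h % 24}, h % 3, 1.0 if h % 3 == 0 else 0.0) for h in range(300)]
always_zero = lambda context: 0
print(replay_evaluate(always_zero, log))                 # close to 1.0 on this toy log
print(sum(bootstrap_replay(always_zero, log)) / 100)     # average over bootstrap resamples
```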
Duarte, Kevin. "Aide à la décision médicale et télémédecine dans le suivi de l’insuffisance cardiaque." Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0283.
This thesis is part of the "Handle your heart" project, aimed at developing a drug prescription assistance device for heart failure patients. In a first part, a study was conducted to highlight the prognostic value of an estimation of plasma volume, or of its variations, for predicting major short-term cardiovascular events. Two classification rules were used, logistic regression and linear discriminant analysis, each preceded by a stepwise variable selection. Three indices measuring the improvement in discrimination ability obtained by adding the biomarker of interest were used. In a second part, in order to identify patients at short-term risk of dying or of being hospitalized for progression of heart failure, a short-term event risk score was constructed by an ensemble method, using two classification rules, logistic regression and linear discriminant analysis of mixed data, bootstrap samples, and a random selection of predictors. We define an event risk measure by an odds-ratio and a measure of the importance of variables and groups of variables using standardized coefficients. We show a property of linear discriminant analysis of mixed data. This methodology for constructing a risk score can be implemented as part of online learning, using stochastic gradient algorithms to update the predictors online. We address the problem of sequential multidimensional linear regression, particularly in the case of a data stream, using a stochastic approximation process. To avoid the phenomenon of numerical explosion that can be encountered, and to reduce computing time so that a maximum of arriving data can be taken into account, we propose to use a process with online standardized data instead of raw data, and to use several observations per step or all observations up to the current step. We define three processes and study their almost sure convergence: one with a variable step size, an averaged process with a constant step size, and a process with a constant or variable step size that uses all observations up to the current step without storing them. These processes are compared to classical processes on 11 datasets. The third defined process, with constant step size, typically yields the best results.
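To make the stochastic-approximation idea in this abstract concrete, here is a minimal sketch of sequential linear regression by stochastic gradient descent on online-standardized features, with a constant step size and Polyak-Ruppert averaging. It processes one observation per step and uses a running mean and variance for standardization; the step size, initialization and toy data are assumptions for the example, and the sketch is not the thesis's exact processes.

```python
import numpy as np

def averaged_sgd_regression(stream, dim, step=0.01):
    """Sequential least squares by SGD on online-standardized features,
    with Polyak-Ruppert averaging of the iterates."""
    theta = np.zeros(dim)
    theta_bar = np.zeros(dim)
    mean = np.zeros(dim)
    m2 = np.ones(dim)          # running sum of squared deviations (init avoids /0)
    for t, (x, y) in enumerate(stream, start=1):
        # Online update of mean and variance (Welford), then standardize x.
        delta = x - mean
        mean += delta / t
        m2 += delta * (x - mean)
        std = np.sqrt(m2 / t)
        z = (x - mean) / std
        # One stochastic gradient step on the squared error, then average.
        grad = (z @ theta - y) * z
        theta -= step * grad
        theta_bar += (theta - theta_bar) / t
    return theta_bar  # coefficients are expressed on the standardized scale

# Toy usage: features on very different scales, linear target plus noise.
rng = np.random.default_rng(0)
stream = []
for _ in range(2000):
    x = rng.normal(size=3) * np.array([1.0, 10.0, 0.1])
    y = x[0] - 0.2 * x[1] + rng.normal() * 0.1
    stream.append((x, y))
print(averaged_sgd_regression(stream, dim=3))
```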
Gingras, François. "Prise de décision à partir de données séquentielles." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0019/NQ56697.pdf.
Haddad, Raja. "Apprentissage supervisé de données symboliques et l'adaptation aux données massives et distribuées." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED028/document.
This thesis proposes new supervised methods for Symbolic Data Analysis (SDA) and extends this domain to Big Data. We start by creating a supervised method called HistSyr, which automatically converts continuous variables into the most discriminant histograms for classes of individuals. We also propose a new symbolic decision tree method that we call SyrTree. SyrTree accepts many types of input and target variables and can use all the symbolic variables describing the target to construct the decision tree. Finally, we extend HistSyr to Big Data by creating a distributed method called CloudHistSyr. Using the Map/Reduce framework, CloudHistSyr creates the most discriminant histograms for data too big for HistSyr. We tested CloudHistSyr on Amazon Web Services. We show the efficiency of our method on simulated data and on actual car traffic data in Nantes. We conclude on the overall utility of CloudHistSyr which, through its results, allows the study of massive data using existing symbolic analysis methods.
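The basic ingredient behind the HistSyr idea summarized above is the construction of per-class histograms of a continuous variable so the class distributions can be compared. The following sketch shows only that ingredient, with equal-width bins over shared edges; the binning rule, the bin count and the toy data are assumptions for illustration, not the method's actual criterion for selecting the most discriminant histograms.

```python
import numpy as np

def per_class_histograms(values, labels, n_bins=5):
    """Build one normalized histogram per class over shared bin edges,
    so the class distributions of a continuous variable can be compared."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    hists = {}
    for cls in np.unique(labels):
        counts, _ = np.histogram(values[labels == cls], bins=edges)
        hists[cls] = counts / counts.sum()
    return edges, hists

# Toy usage: speeds recorded for two classes of road segments.
speeds = [32, 35, 38, 40, 70, 75, 80, 85, 90, 95]
classes = ["urban"] * 4 + ["highway"] * 6
edges, hists = per_class_histograms(speeds, classes, n_bins=4)
print(edges)
print(hists["urban"], hists["highway"])
```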
Albert-Lorincz, Hunor. "Contributions aux techniques de prise de décision et de valorisation financière." Lyon, INSA, 2007. http://theses.insa-lyon.fr/publication/2007ISAL0039/these.pdf.
This thesis investigates and develops tools for financial decision making. Our first contribution is aimed at the extraction of frequent sequential patterns from, for example, discretized financial time series. We introduce well-partitioned constraints that allow a hierarchical structuring of the search space for increased efficiency. In particular, we look at the conjunction of a minimal frequency constraint and a regular expression constraint. It becomes possible to build adaptive strategies that find a good balance between the pruning based on the anti-monotonic frequency constraint and the pruning based on the regular expression constraint, which is generally neither monotonic nor anti-monotonic. Then, we develop two financial applications. At first, we use frequent patterns to characterise market configurations by means of signatures, in order to improve some technical indicator functions for automated trading strategies. Then, we look at the pricing of Bermudan options, i.e., financial derivative products that allow an agreement between two parties to be terminated at a set of pre-defined dates. This requires computing double conditional expectations at a high computational cost. Our new method, neighbourhood Monte Carlo, can be up to 20 times faster than the traditional methods.
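The first contribution above combines an anti-monotonic minimum-frequency constraint with a regular-expression constraint over discretized series. Here is a deliberately naive sketch of filtering candidate patterns by both constraints; the symbol alphabet, the exhaustive candidate enumeration and the regex are assumptions chosen for illustration, and the sketch does not reproduce the thesis's hierarchical structuring of the search space or its pruning strategies.

```python
import re
from itertools import product

def frequent_regex_patterns(sequences, alphabet, max_len, min_support, regex):
    """Naive enumeration: keep patterns that occur as substrings in at least
    min_support sequences AND fully match the regular expression."""
    pattern_re = re.compile(regex)
    results = {}
    for length in range(1, max_len + 1):
        for candidate in ("".join(p) for p in product(alphabet, repeat=length)):
            support = sum(candidate in seq for seq in sequences)
            if support >= min_support and pattern_re.fullmatch(candidate):
                results[candidate] = support
    return results

# Toy usage: u = up, d = down, s = stable (discretized daily returns).
series = ["uudsuud", "uuddss", "suudus", "ddssuu"]
print(frequent_regex_patterns(series, "uds", max_len=3, min_support=2, regex="u+d*"))
```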
Books on the topic "Données massives – Prise de décision"
Hammami-Abid, Ines. Systèmes d'information et performance de la prise de décision: Étude théorique et étude expérimentale dans le cas des systèmes à base d'entrepôt de données. Grenoble: A.N.R.T, Université Pierre Mendes France (Grenoble II), 2001.
Kovalerchuk, Boris, and James Schwing, eds. Visual and spatial analysis: Advances in data mining, reasoning, and problem solving. Dordrecht: Springer, 2004.
Krider, Robert E., ed. Customer and business analytics: Applied data mining for business decision making using R. Boca Raton, FL: CRC Press, 2012.
Liu, Sifeng, and Yi Lin, eds. Hybrid rough sets and applications in uncertain decision-making. Boca Raton: Auerbach Publications, 2010.
Bhatia, Surbhi, Parul Gandhi, and Kapal Dev. Data Driven Decision Making Using Analytics. Taylor & Francis Group, 2021.
Bhatia, Surbhi, Parul Gandhi, and Kapal Dev. Data Driven Decision Making Using Analytics. CRC Press LLC, 2021.
Big Data Analytics Using Multiple Criteria Decision-Making Models. Taylor & Francis Group, 2017.
Ravindran, A. Ravi, Ramakrishnan Ramanathan, and Muthu Mathirajan. Big Data Analytics Using Multiple Criteria Decision-Making Models. Taylor & Francis Group, 2017.
Find full textBook chapters on the topic "Données massives – Prise de décision"
"Une prise de décision basée sur des données probantes." In Communauté d'apprentissage professionnelle, 169–79. Presses de l'Université du Québec, 2012. http://dx.doi.org/10.1515/9782760535329-014.
Sullivan, Terrence, Margo Orchard, and Muriah Umoquit. "Compétences en leadership pour une prise de décision éclairée par les données probantes." In Améliorer le leadership dans les services de santé au Canada, 77–92. McGill-Queen's University Press, 2012. http://dx.doi.org/10.1515/9780773587526-009.
RAHHAL, Anabelle, Samia BEN RAJEB, and Pierre LECLERCQ. "Enjeux collaboratifs de la numérisation de l’information en construction : le cas du BIM." In Les activités cognitives de conception en architecture, 209–39. ISTE Group, 2025. https://doi.org/10.51926/iste.9204.ch6.
Full textReports on the topic "Données massives – Prise de décision"
Cheng, Yeeva, and Cara Kraus-Perrotta. L’élaboration de la liste de contrôle pour les politiques A3. Population Council, 2022. http://dx.doi.org/10.31899/sbsr2022.1017.
Cheng, Yeeva, and Cara Kraus-Perrotta. Guide d’utilisation de la liste de contrôle de la politique A3. Population Council, 2022. http://dx.doi.org/10.31899/sbsr2022.1021.
Goerzen, C., H. Kao, R. Visser, R. M. H. Dokht, and S. Venables. A comprehensive earthquake catalogue for northeastern British Columbia, 2021 and 2022. Natural Resources Canada/CMSS/Information Management, 2024. http://dx.doi.org/10.4095/332532.
Élaboration du tableau de bord A3 des indicateurs relatifs aux adolescents et du tableau de bord des écarts entre les sexes. Population Council, 2022. http://dx.doi.org/10.31899/sbsr2022.1015.