Dissertations / Theses on the topic 'Données massives – Prise de décision'
Consult the top 50 dissertations / theses for your research on the topic 'Données massives – Prise de décision.'
Belghaouti, Fethi. "Interopérabilité des systèmes distribués produisant des flux de données sémantiques au profit de l'aide à la prise de décision." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL003.
Full text
The Internet is an infinite source of data coming from sources such as social networks or sensors (home automation, smart city, autonomous vehicles, etc.). These heterogeneous and increasingly large data can be managed through semantic web technologies, which propose to homogenize and link these data and reason over them, and through data stream management systems, which mainly address the problems related to volume, volatility and continuous querying. The alliance of these two disciplines has seen the growth of semantic data stream management systems, also called RSP (RDF Stream Processing) systems. The objective of this thesis is to allow these systems, via new approaches and "low-cost" algorithms, to remain operational, and even more efficient, for large input data volumes and/or with limited system resources. To reach this goal, our thesis is mainly focused on the issue of "processing semantic data streams in a context of computer systems with limited resources". It directly contributes to answering the following research questions: (i) How to represent semantic data streams? And (ii) How to deal with input semantic data when their rates and/or volumes exceed the capabilities of the target system? As a first contribution, we propose an analysis of the data in semantic data streams in order to consider a succession of star graphs instead of just a succession of independent triples, thus preserving the links between the triples. By using this approach, we significantly improved the quality of responses of some well-known sampling algorithms for load-shedding. The analysis of the continuous query allows the optimisation of this solution by selecting the irrelevant data to be load-shedded first. In the second contribution, we propose an algorithm for detecting frequent RDF graph patterns in semantic data streams. We called it FreGraPaD, for Frequent RDF Graph Patterns Detection. It is a one-pass, memory-oriented, "low-cost" algorithm. It uses two main data structures: a bit-vector to build and identify the RDF graph pattern, thus providing memory space optimization; and a hash table for storing the patterns. The third contribution of our thesis consists of a deterministic load-shedding solution for RSP systems, called POL (Pattern Oriented Load-shedding for RDF Stream Processing systems). It uses very low-cost boolean operators, applied to the binary patterns built from the data and the continuous query, in order to determine which data are not relevant and should be ejected upstream of the system. It guarantees a recall of 100%, reduces the system load and improves response time. Finally, in the fourth contribution, we propose Patorc (Pattern Oriented Compression for RSP systems). Patorc is an online compression tool for RDF streams. It is based on the frequent patterns present in RDF data streams, which it factorizes. It is a lossless compression solution that still allows querying without any need for decompression. This thesis provides solutions that allow the extension of existing RSP systems and make them able to scale in a big data context. Thus, these solutions allow RSP systems to deal with one or more semantic data streams arriving at different speeds, without losing their response quality while ensuring their availability, even beyond their physical limitations. The conducted experiments, supported by the obtained results, show that extending existing systems with the new solutions improves their performance. They illustrate the considerable decrease in the engines' response time, increasing their processing-rate threshold while optimizing the use of their system resources.
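The bit-vector and hash-table idea behind FreGraPaD can be illustrated with a minimal sketch: each star graph (a subject together with its set of predicates) is encoded as a bit-vector over a fixed predicate vocabulary and counted in a hash table in a single pass. The vocabulary, the triple format and the support threshold below are illustrative assumptions, not the thesis's actual implementation.

```python
from collections import defaultdict

def star_signature(predicates, vocab):
    """Encode the predicate set of a star graph as a bit-vector (one integer)."""
    sig = 0
    for p in predicates:
        if p in vocab:
            sig |= 1 << vocab[p]
    return sig

def frequent_star_patterns(triples, vocab, min_support):
    """One pass over (s, p, o) triples: group them by subject into star graphs,
    hash each bit-vector signature and count its occurrences."""
    stars = defaultdict(set)
    for s, p, o in triples:
        stars[s].add(p)
    counts = defaultdict(int)
    for preds in stars.values():
        counts[star_signature(preds, vocab)] += 1
    return {sig: c for sig, c in counts.items() if c >= min_support}

# toy usage with an assumed predicate vocabulary
vocab = {"type": 0, "name": 1, "locatedIn": 2}
triples = [("s1", "type", "Sensor"), ("s1", "name", "t1"),
           ("s2", "type", "Sensor"), ("s2", "name", "t2"),
           ("s3", "locatedIn", "Paris")]
print(frequent_star_patterns(triples, vocab, min_support=2))
```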
Conort, Paul. "Le Big Data au service de la création : Au-Delà des tensions, le knowledge brokering pour gérer la co-création de valeur à partir des données utilsateur." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX126.
Full textFor many companies, effectively leveraging Big Data to generate value remains a challenge, especially in creative industries. This thesis by articles explores the impact of Big Data on creative projects within the video game industry and examines how insights from Big Data can be integrated. Drawing on two main streams of literature—Big Data and knowledge brokering—it explores how Big Data influences decision-making processes and value creation, highlighting knowledge brokering (KB) as a mechanism that facilitates the creation and dissemination of Big Data insights among project stakeholders. The research framework is based on four years of observations and 57 semi-structured interviews within the creative projects of a video game company.The first article explores the uses of Big Data in creative projects and the resulting tensions. Three uses of Big Data are distinguished: decision support, exploration tool, and negotiation artifact. Eight organizational tensions are identified around three foci: control, coordination, and decision-making. These tensions negatively impact creativity, underscoring the delicate balance between data utilization and maintaining creativity.The second article describes the process of integrating Big Data analyses in three stages: anticipation, analysis, and alignment. The anticipation stage involves updating analysis needs based on the evolving environment and project requirements. The analysis stage reformulates stakeholders’ questions and prioritizes them before applying an appropriate reasoning mode. Finally, the alignment stage allows stakeholders to informally converge on a common interpretation of the analyses (through the exchange of tacit knowledge) and disseminate a narrative of the data-driven decisions. Intermediaries emerge to facilitate relationships between stakeholders.The third article examines the conditions for the emergence of KB and its effects on collaboration among Big Data stakeholders. Three main challenges are identified: attention management, information retrieval, and processing. The establishment of KB structures and the arrival of coordinators promote the integration of data into projects. Alters (analysts, designers, project managers) agree to participate in this intermediation process because they find benefits: access to information, development of their expertise, and creation of new shared knowledge.Thus, the creation of value from Big Data in creative projects involves creating user knowledge, which requires informal exchanges among many actors, including the user, analyst, and designer. The emergence of KB in this context creates the necessary spaces and times for these exchanges to result in new user knowledge, which will be used by creative projects. The thesis makes several contributions: clarifying the link between Big Data and value creation for creative projects, identifying the tensions generated by Big Data integration, and proposing KB as a mechanism that can moderate them. It also reveals factors for the emergence of knowledge brokers and reasons that motivate alters to participate in the knowledge brokering process.Managerial implications suggest that integrating Big Data brings a paradigm shift where the user becomes central, but it also generates tensions. A three-phase process (anticipation, analysis, alignment) is proposed to foster knowledge creation, and it is suggested to identify intermediaries to support their coordination activities in this process
Belghaouti, Fethi. "Interopérabilité des systèmes distribués produisant des flux de données sémantiques au profit de l'aide à la prise de décision." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL003.
Full textInternet is an infinite source of data coming from sources such as social networks or sensors (home automation, smart city, autonomous vehicle, etc.). These heterogeneous and increasingly large data can be managed through semantic web technologies, which propose to homogenize, link these data and reason above them, and data flow management systems, which mainly address the problems related to volume, volatility and continuous querying. The alliance of these two disciplines has seen the growth of semantic data stream management systems also called RSP (RDF Stream Processing Systems). The objective of this thesis is to allow these systems, via new approaches and "low cost" algorithms, to remain operational, even more efficient, even for large input data volumes and/or with limited system resources.To reach this goal, our thesis is mainly focused on the issue of "Processing semantic data streamsin a context of computer systems with limited resources". It directly contributes to answer the following research questions : (i) How to represent semantic data stream ? And (ii) How to deal with input semantic data when their rates and/or volumes exceed the capabilities of the target system ?As first contribution, we propose an analysis of the data in the semantic data streams in order to consider a succession of star graphs instead of just a success of andependent triples, thus preserving the links between the triples. By using this approach, we significantly impoved the quality of responses of some well known sampling algoithms for load-shedding. The analysis of the continuous query allows the optimisation of this solution by selection the irrelevant data to be load-shedded first. In the second contribution, we propose an algorithm for detecting frequent RDF graph patterns in semantic data streams.We called it FreGraPaD for Frequent RDF Graph Patterns Detection. It is a one pass algorithm, memory oriented and "low-cost". It uses two main data structures : A bit-vector to build and identify the RDF graph pattern, providing thus memory space optimization ; and a hash-table for storing the patterns.The third contribution of our thesis consists of a deterministic load-shedding solution for RSP systems, called POL (Pattern Oriented Load-shedding for RDF Stream Processing systems). It uses very low-cost boolean operators, that we apply on the built binary patterns of the data and the continuous query inorder to determine which data is not relevant to be ejected upstream of the system. It guarantees a recall of 100%, reduces the system load and improves response time. Finally, in the fourth contribution, we propose Patorc (Pattern Oriented Compression for RSP systems). Patorc is an online compression toolfor RDF streams. It is based on the frequent patterns present in RDF data streams that factorizes. It is a data lossless compression solution whith very possible querying without any need to decompression.This thesis provides solutions that allow the extension of existing RSP systems and makes them able to scale in a bigdata context. Thus, these solutions allow the RSP systems to deal with one or more semantic data streams arriving at different speeds, without loosing their response quality while ensuring their availability, even beyond their physical limitations. The conducted experiments, supported by the obtained results show that the extension of existing systems with the new solutions improves their performance. 
They illustrate the considerable decrease in their engine’s response time, increasing their processing rate threshold while optimizing the use of their system resources
Vazquez, Llana Jordan Diego. "Environnement big data et prise de décision intuitive : le cas du Centre d'Information et de Commandement (CIC) de la Police nationale des Bouches du Rhône (DDSP 13)." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE3063.
Full text
Godé and Vazquez have previously demonstrated that French Police teams operate in extreme contexts (Godé & Vazquez, 2017), simultaneously marked by high levels of change, uncertainty and mainly vital, material and legal risks (Godé, 2016), but also technological ones. In this context, the notion of a big data environment can affect the police decision-making process. The question addressed in this thesis is: "What is the status of intuition in the decision-making process in a big data environment?". We explain how the growth of available information volumes, the great diversity of their sources (social networks, websites, connected objects), their speed of diffusion (in real time or near real time) and their unstructured nature (Davenport & Soulard, 2014) introduce new decision-making challenges for National Police forces.
Nicol, Olivier. "Data-driven evaluation of contextual bandit algorithms and applications to dynamic recommendation." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10211/document.
Full text
The context of this thesis work is dynamic recommendation. Recommendation is the action, for an intelligent system, of supplying a user of an application with personalized content so as to enhance what is referred to as the "user experience", e.g. recommending a product on a merchant website or an article on a blog. Recommendation is considered dynamic when the content to recommend or user tastes evolve rapidly, e.g. news recommendation. Many applications of interest to us generate a tremendous amount of data through the millions of online users they have. Nevertheless, using this data to evaluate a new recommendation technique, or even to compare two dynamic recommendation algorithms, is far from trivial. This is the problem we consider here. Some approaches have already been proposed, but they were not studied very thoroughly, either from a theoretical point of view (unquantified bias, loose convergence bounds...) or from an empirical one (experiments on private data only). In this work we start by filling many blanks within the theoretical analysis. Then we comment on the results of an experiment of unprecedented scale in this area: a public challenge we organized. This challenge, along with some complementary experiments, revealed an unexpected source of huge bias: time acceleration. The rest of this work tackles this issue. We show that a bootstrap-based approach allows this bias to be significantly reduced and, more importantly, controlled.
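The data-driven evaluation problem described in this abstract is commonly approached with a replay-style estimator: the logged interactions are scanned once and only the events where the evaluated policy agrees with the logged action contribute to the estimate; resampling the log then gives a handle on its variability. The sketch below assumes a log collected under a uniformly random logging policy and only illustrates the general idea, not the bias correction developed in the thesis.

```python
import random

def replay_evaluate(policy, log):
    """Offline 'replay' estimate of a policy's average reward from a log of
    (context, logged_action, reward) tuples collected under uniform random logging."""
    total, matched = 0.0, 0
    for context, logged_action, reward in log:
        if policy(context) == logged_action:      # keep only matching events
            total += reward
            matched += 1
    return total / matched if matched else 0.0

def bootstrap_replay(policy, log, n_resamples=200, seed=0):
    """Bootstrap the log to obtain a distribution of replay estimates."""
    rng = random.Random(seed)
    size = len(log)
    return [replay_evaluate(policy, [log[rng.randrange(size)] for _ in range(size)])
            for _ in range(n_resamples)]
```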
Duarte, Kevin. "Aide à la décision médicale et télémédecine dans le suivi de l’insuffisance cardiaque." Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0283.
Full textThis thesis is part of the "Handle your heart" project aimed at developing a drug prescription assistance device for heart failure patients. In a first part, a study was conducted to highlight the prognostic value of an estimation of plasma volume or its variations for predicting major short-term cardiovascular events. Two classification rules were used, logistic regression and linear discriminant analysis, each preceded by a stepwise variable selection. Three indices to measure the improvement in discrimination ability by adding the biomarker of interest were used. In a second part, in order to identify patients at short-term risk of dying or being hospitalized for progression of heart failure, a short-term event risk score was constructed by an ensemble method, two classification rules, logistic regression and linear discriminant analysis of mixed data, bootstrap samples, and by randomly selecting predictors. We define an event risk measure by an odds-ratio and a measure of the importance of variables and groups of variables using standardized coefficients. We show a property of linear discriminant analysis of mixed data. This methodology for constructing a risk score can be implemented as part of online learning, using stochastic gradient algorithms to update online the predictors. We address the problem of sequential multidimensional linear regression, particularly in the case of a data stream, using a stochastic approximation process. To avoid the phenomenon of numerical explosion which can be encountered and to reduce the computing time in order to take into account a maximum of arriving data, we propose to use a process with online standardized data instead of raw data and to use of several observations per step or all observations until the current step. We define three processes and study their almost sure convergence, one with a variable step-size, an averaged process with a constant step-size, a process with a constant or variable step-size and the use of all observations until the current step without storing them. These processes are compared to classical processes on 11 datasets. The third defined process with constant step-size typically yields the best results
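A bagging-style construction of the risk score described above can be sketched as follows: each base classifier is fitted on a bootstrap sample and a random subset of predictors, and the ensemble's averaged event probability serves as the score. The sketch keeps only the logistic-regression half of the ensemble (the discriminant analysis of mixed data, the odds-ratio risk measure and the variable-importance computation are omitted), and the parameter values are arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_risk_score(X, y, n_models=50, n_features=5, seed=0):
    """Toy ensemble risk score: bootstrap samples plus random predictor subsets.
    X is a NumPy array of predictors, y a binary event indicator."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        rows = rng.integers(0, len(X), size=len(X))                   # bootstrap sample
        cols = rng.choice(X.shape[1], size=min(n_features, X.shape[1]), replace=False)
        if len(np.unique(y[rows])) < 2:                               # skip degenerate resamples
            continue
        models.append((cols, LogisticRegression(max_iter=1000).fit(X[rows][:, cols], y[rows])))

    def score(X_new):
        """Averaged predicted event probability over the ensemble."""
        return np.mean([m.predict_proba(X_new[:, cols])[:, 1] for cols, m in models], axis=0)

    return score
```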
Duarte, Kevin. "Aide à la décision médicale et télémédecine dans le suivi de l’insuffisance cardiaque." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0283/document.
Full textThis thesis is part of the "Handle your heart" project aimed at developing a drug prescription assistance device for heart failure patients. In a first part, a study was conducted to highlight the prognostic value of an estimation of plasma volume or its variations for predicting major short-term cardiovascular events. Two classification rules were used, logistic regression and linear discriminant analysis, each preceded by a stepwise variable selection. Three indices to measure the improvement in discrimination ability by adding the biomarker of interest were used. In a second part, in order to identify patients at short-term risk of dying or being hospitalized for progression of heart failure, a short-term event risk score was constructed by an ensemble method, two classification rules, logistic regression and linear discriminant analysis of mixed data, bootstrap samples, and by randomly selecting predictors. We define an event risk measure by an odds-ratio and a measure of the importance of variables and groups of variables using standardized coefficients. We show a property of linear discriminant analysis of mixed data. This methodology for constructing a risk score can be implemented as part of online learning, using stochastic gradient algorithms to update online the predictors. We address the problem of sequential multidimensional linear regression, particularly in the case of a data stream, using a stochastic approximation process. To avoid the phenomenon of numerical explosion which can be encountered and to reduce the computing time in order to take into account a maximum of arriving data, we propose to use a process with online standardized data instead of raw data and to use of several observations per step or all observations until the current step. We define three processes and study their almost sure convergence, one with a variable step-size, an averaged process with a constant step-size, a process with a constant or variable step-size and the use of all observations until the current step without storing them. These processes are compared to classical processes on 11 datasets. The third defined process with constant step-size typically yields the best results
Gingras, François. "Prise de décision à partir de données séquentielles." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0019/NQ56697.pdf.
Full text
Haddad, Raja. "Apprentissage supervisé de données symboliques et l'adaptation aux données massives et distribuées." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED028/document.
Full text
This thesis proposes new supervised methods for Symbolic Data Analysis (SDA) and extends this domain to Big Data. We start by creating a supervised method called HistSyr that automatically converts continuous variables into the most discriminant histograms for classes of individuals. We also propose a new method of symbolic decision trees that we call SyrTree. SyrTree accepts many types of input and target variables and can use all symbolic variables describing the target to construct the decision tree. Finally, we extend HistSyr to Big Data by creating a distributed method called CloudHistSyr. Using the Map/Reduce framework, CloudHistSyr creates the most discriminant histograms for data too big for HistSyr. We tested CloudHistSyr on Amazon Web Services. We show the efficiency of our method on simulated data and on actual car traffic data in Nantes. We conclude on the overall utility of CloudHistSyr which, through its results, allows the study of massive data using existing symbolic analysis methods.
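The central idea of turning a continuous variable into class-discriminant histograms can be illustrated with a small sketch: build per-class relative-frequency histograms on shared bins and keep the bin count that maximizes a separation measure between classes. The L1 distance criterion and the candidate bin counts below are assumptions for illustration, not the actual HistSyr criterion.

```python
import numpy as np

def class_histograms(x, labels, n_bins):
    """Per-class relative-frequency histograms of one continuous variable on shared bins.
    x and labels are NumPy arrays of equal length."""
    edges = np.histogram_bin_edges(x, bins=n_bins)
    hists = {}
    for c in np.unique(labels):
        counts, _ = np.histogram(x[labels == c], bins=edges)
        hists[c] = counts / max(counts.sum(), 1)
    return edges, hists

def most_discriminant_bins(x, labels, candidate_bins=(3, 5, 8, 12)):
    """Pick the bin count whose class histograms differ the most (pairwise L1 distance),
    a crude stand-in for a discriminant-histogram criterion."""
    best, best_score = None, -1.0
    for b in candidate_bins:
        _, hists = class_histograms(x, labels, b)
        classes = list(hists)
        score = sum(np.abs(hists[a] - hists[c]).sum()
                    for i, a in enumerate(classes) for c in classes[i + 1:])
        if score > best_score:
            best, best_score = b, score
    return best
```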
Albert-Lorincz, Hunor. "Contributions aux techniques de prise de décision et de valorisation financière." Lyon, INSA, 2007. http://theses.insa-lyon.fr/publication/2007ISAL0039/these.pdf.
Full text
This thesis investigates and develops tools for financial decision making. Our first contribution is aimed at the extraction of frequent sequential patterns from, for example, discretized financial time series. We introduce well-partitioned constraints that allow a hierarchical structuring of the search space for increased efficiency. In particular, we look at the conjunction of a minimal frequency constraint and a regular expression constraint. It becomes possible to build adaptive strategies that find a good balance between the pruning based on the anti-monotonic frequency constraint and the pruning based on the regular expression constraint, which is generally neither monotonic nor anti-monotonic. Then, we develop two financial applications. First, we use frequent patterns to characterise market configurations by means of signatures, in order to improve some technical indicator functions for automated trading strategies. Then, we look at the pricing of Bermudan options, i.e., a financial derivative product which allows an agreement between two parties to be terminated at a set of pre-defined dates. This requires computing double conditional expectations at a high computational cost. Our new method, neighbourhood Monte Carlo, can be up to 20 times faster than the traditional methods.
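A naive version of mining under the conjunction of a minimum-frequency constraint and a regular-expression constraint can be sketched as follows: candidates are grown level by level, the anti-monotonic frequency constraint prunes extensions, and the regular expression filters which frequent patterns are reported. The single-character event alphabet and the parameters are illustrative assumptions; the adaptive balancing between the two prunings studied in the thesis is not reproduced here.

```python
import re

def support(pattern, sequences):
    """Number of sequences containing `pattern` as a (possibly non-contiguous) subsequence."""
    def contains(seq, pat):
        it = iter(seq)
        return all(sym in it for sym in pat)
    return sum(contains(seq, pattern) for seq in sequences)

def mine(sequences, alphabet, min_freq, regex, max_len=4):
    """Level-wise enumeration: extend only frequent prefixes (anti-monotonic pruning),
    then keep the frequent patterns that match the regular expression."""
    results, frontier = [], [()]
    while frontier:
        next_frontier = []
        for prefix in frontier:
            if len(prefix) == max_len:
                continue
            for sym in alphabet:
                cand = prefix + (sym,)
                if support(cand, sequences) >= min_freq:
                    next_frontier.append(cand)
                    if re.fullmatch(regex, "".join(cand)):   # assumes one-character events
                        results.append(cand)
        frontier = next_frontier
    return results

# toy usage on discretized "up/down/stable" series
seqs = ["uudsu", "udusu", "ssuud", "uuddu"]
print(mine(seqs, "uds", min_freq=3, regex="u+d"))
```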
Castellanos-Paez, Sandra. "Apprentissage de routines pour la prise de décision séquentielle." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM043.
Full textIntuitively, a system capable of exploiting its past experiences should be able to achieve better performance. One way to build on past experiences is to learn macros (i.e. routines). They can then be used to improve the performance of the solving process of new problems. In automated planning, the challenge remains on developing powerful planning techniques capable of effectively explore the search space that grows exponentially. Learning macros from previously acquired knowledge has proven to be beneficial for improving a planner's performance. This thesis contributes mainly to the field of automated planning, and it is more specifically related to learning macros for classical planning. We focused on developing a domain-independent learning framework that identifies sequences of actions (even non-adjacent) from past solution plans and selects the most useful routines (i.e. macros), based on a priori evaluation, to enhance the planning domain.First, we studied the possibility of using sequential pattern mining for extracting frequent sequences of actions from past solution plans, and the link between the frequency of a macro and its utility. We found out that the frequency alone may not provide a consistent selection of useful macro-actions (i.e. sequences of actions with constant objects).Second, we discussed the problem of learning macro-operators (i.e. sequences of actions with variable objects) by using classic pattern mining algorithms in planning. Despite the efforts, we find ourselves in a dead-end with the selection process because the pattern mining filtering structures are not adapted to planning.Finally, we provided a novel approach called METEOR, which ensures to find the frequent sequences of operators from a set of plans without a loss of information about their characteristics. This framework was conceived for mining macro-operators from past solution plans, and for selecting the optimal set of macro-operators that maximises the node gain. It has proven to successfully mine macro-operators of different lengths for four different benchmarks domains and thanks to the selection phase, be able to deliver a positive impact on the search time without drastically decreasing the quality of the plans
Buitrago, Hurtado Alex Fernando. "Aide à la prise de décision stratégique : détection de données pertinentes de sources numériques sur Internet." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENG002/document.
Full text
Our research area is strategic decision-making within organizations; more precisely, supporting strategic decisions by detecting information useful for them. On the one hand, the 'information from the field' gathered through contacts between individuals, business meetings, etc. is always essential for managers. On the other hand, national and international newspapers can provide a considerable volume of data that can be regarded as raw data. Besides these classical sources, information gathering has changed dramatically with the advent of information technology, and particularly the internet, which is the focus of our research. We chose to study the acquisition of 'information from the field' provided by national daily newspapers, namely the Colombian newspapers covered by our empirical study. In order to detect, from internet sources, weak signals of potential issues that help managers discover and understand their environment, we conducted a study of the "Action Design Research" type, designing, building and testing an artifact to obtain the required information. The artifact was designed and built in two phases, drawing on theoretical concepts about data overload, environmental scanning (in particular the "anticipatory and collective environmental scanning model", VAS-IC®) and the desirable characteristics of strategic decision-making support systems. After its construction, the artifact was applied in a real experiment that allowed us to evaluate its effectiveness. This improved our knowledge about the relevance of digital data in the decision-making process. The results show that the decision makers involved were able to integrate these new practices into their information needs.
Dossa, Maximilien. "Aide à la modélisation et au traitement de données massives : proposition d'un guide méthodologique." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTD030.
Full text
The world of corporations has been revolutionized under the impact of the Big Data phenomenon. Truly a technological Big Bang, Big Data has opened many doors for research and development because of the analysis and processing it requires. Big Data has always been recognized as having highly competitive potential; however, today it appears that this potential is difficult to control. The reason is a number of problems linked to the scale of this revolution: traditional methods are becoming obsolete and less effective. This research aims to propose a contribution that eases the transition between classical and innovative analysis. Following the methodology of the Science of Design, we propose creating an artifact that takes the form of a methodological guide. It is composed of a set of machine learning solutions rooted in data science, made available to companies to facilitate the access to, comprehension of, and use of Big Data.
Denis, Marie-Chantal. "Conception et réalisation d'un entrepôt de données institutionnel dans une perspective de support à la prise de décision." Thèse, Université du Québec à Trois-Rivières, 2008. http://depot-e.uqtr.ca/1267/1/030077904.pdf.
Full text
Ratté, Stéphane. "Étude comparative randomisée de l’efficacité et de l’impact sur la prise de décision clinique en médecine familiale de deux moteurs de recherche médicaux : InfoClinique et TRIP Database." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/28993/28993.pdf.
Full textFicheur, Grégoire. "Réutilisation de données hospitalières pour la recherche d'effets indésirables liés à la prise d'un médicament ou à la pose d'un dispositif médical implantable." Thesis, Lille 2, 2015. http://www.theses.fr/2015LIL2S015/document.
Full text
Introduction: The adverse events associated with drug administration or the placement of an implantable medical device should be sought systematically after the beginning of commercialisation. Studies conducted in this phase are observational studies that can be performed from hospital databases. The objective of this work is to study the interest of re-using hospital data for the identification of such adverse events. Materials and methods: Two hospital databases covering the years 2007 to 2013 were re-used: the first contains 171 million inpatient stays including diagnostic codes, procedures and demographic data, linked with a single patient identifier; the second contains the same kinds of information for 80,000 stays, plus the laboratory results and drug administrations for each inpatient stay. Four studies were conducted on these data to identify adverse drug events and adverse events following the placement of an implantable medical device. Results: The first study demonstrates the ability of a set of detection rules to automatically identify adverse drug events involving hyperkalaemia. The second study describes the variation of laboratory results associated with the presence of a frequent sequential pattern composed of drug administrations and laboratory results. The third piece of work led to a web tool enabling the user to explore on the fly the reasons for rehospitalisation of patients with an implantable medical device. The fourth and final study estimates the thrombotic and bleeding risks following a total hip replacement. Conclusion: The re-use of hospital data in a pharmacoepidemiological perspective allows the identification of adverse events associated with drug administration or the placement of an implantable medical device. The value of these data lies in the statistical power they bring as well as in the types of associations they allow to be analysed.
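The kind of detection rule evaluated in the first study can be illustrated with a toy example: flag an inpatient stay when a potassium-raising drug administration is followed by a potassium result above a hyperkalaemia threshold. The field names, the drug list and the 5.5 mmol/L cut-off are illustrative assumptions, not the rules actually used in the thesis.

```python
HYPERKALAEMIA_THRESHOLD = 5.5                 # mmol/L, assumed cut-off
K_RAISING_DRUGS = {"spironolactone", "potassium chloride", "enalapril"}   # illustrative list

def flag_possible_ade(stay):
    """stay = {"administrations": [(time, drug)], "labs": [(time, test, value)]};
    returns True when a potassium-raising drug precedes a hyperkalaemia result."""
    drug_times = [t for t, drug in stay["administrations"] if drug in K_RAISING_DRUGS]
    for t_lab, test, value in stay["labs"]:
        if test == "potassium" and value >= HYPERKALAEMIA_THRESHOLD:
            if any(t_drug <= t_lab for t_drug in drug_times):   # drug given before the lab result
                return True
    return False
```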
Poirier, Canelle. "Modèles statistiques pour les systèmes d'aide à la décision basés sur la réutilisation des données massives en santé : application à la surveillance syndromique en santé publique." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1B019.
Full text
Over the past few years, the Big Data concept has developed widely. In order to analyse and explore all this data, it was necessary to develop new methods and technologies. Today, Big Data also exists in the health sector: hospitals in particular are involved in data production through the adoption of electronic health records. The objective of this thesis was to develop statistical methods reusing these data in order to contribute to syndromic surveillance and to provide decision-making support. This study has four major axes. First, we showed that hospital Big Data were highly correlated with signals from traditional surveillance networks. Secondly, we showed that hospital data made it possible to obtain more accurate real-time estimates than web data, and that SVM and Elastic Net models had similar performance. Then, we applied methods developed in the United States, reusing hospital data, web data (Google and Twitter) and climatic data, to predict influenza incidence rates for all French regions up to two weeks ahead. Finally, the methods developed were applied to the three-week forecast of gastroenteritis cases at the national, regional and hospital levels.
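A regularized regression of the kind compared in this work can be set up in a few lines: weekly predictors (hospital counts, web query volumes, climate variables) are standardized and fed to an Elastic Net whose target is the incidence rate shifted by the forecast horizon. The column names, the horizon and the regularization settings below are assumptions for illustration only.

```python
import pandas as pd
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_forecaster(df, target="incidence", horizon=2):
    """Fit an Elastic Net predicting the incidence rate `horizon` weeks ahead
    from the current week's predictors (all other columns of `df`)."""
    y = df[target].shift(-horizon)              # incidence `horizon` weeks later
    X = df.drop(columns=[target])
    mask = y.notna()                            # drop the last rows without a target
    model = make_pipeline(StandardScaler(), ElasticNet(alpha=0.1, l1_ratio=0.5))
    return model.fit(X[mask], y[mask])
```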
Chu, Junfei. "Méthodes d’amélioration pour l'évaluation de l'enveloppement des données évaluation de l'efficacité croisée." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC096/document.
Full text
Data envelopment analysis (DEA) cross-efficiency evaluation has been widely applied for efficiency evaluation and ranking of decision-making units (DMUs). However, two issues still need to be addressed: non-uniqueness of the optimal weights attached to the inputs and outputs, and non-Pareto optimality of the evaluation results. This thesis proposes alternative methods to address these issues. We first point out that the cross-efficiency targets for the DMUs in the traditional secondary goal models are not always feasible. We then give a model which can always provide feasible cross-efficiency targets for all the DMUs. New benevolent and aggressive secondary goal models and a neutral model are proposed. A numerical example is further used to compare the proposed models with the previous ones. Then, we present a DEA cross-efficiency evaluation approach based on Pareto improvement. This approach contains two models and an algorithm. The models are used, respectively, to estimate whether a given set of cross-efficiency scores is Pareto optimal and to improve the cross-efficiency scores if possible. The algorithm is used to generate a set of Pareto-optimal cross-efficiency scores for the DMUs. The proposed approach is finally applied to R&D project selection and compared with the traditional approaches. Additionally, we give a cross-bargaining game DEA cross-efficiency evaluation approach which addresses both of the issues mentioned above. A cross-bargaining game model is proposed to simulate the bargaining between each pair of DMUs in the group so as to identify a unique set of weights to be used in each other's cross-efficiency calculation. An algorithm is then developed to solve this model by solving a series of linear programs. The approach is illustrated by applying it to green supplier selection. Finally, we propose a DEA cross-efficiency evaluation approach based on satisfaction degree. We first introduce the concept of the satisfaction degree of each DMU with the optimal weights selected by the other DMUs. Then, a max-min model is given to select the set of optimal weights for each DMU which maximizes all the DMUs' satisfaction degrees. Two algorithms are given to solve the model and to ensure the uniqueness of each DMU's optimal weights, respectively. Finally, the proposed approach is used in a case study on technology selection.
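The basic computation underlying cross-efficiency evaluation can be sketched with the standard CCR multiplier model: each DMU's optimal input/output weights are found by a linear program, then every DMU is re-evaluated with every other DMU's weights. The sketch below uses SciPy's linprog, assumes strictly positive input/output data, and does not implement the secondary-goal, bargaining or satisfaction-degree models proposed in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_weights(X, Y, d):
    """Weights of DMU d from the CCR multiplier model:
    max u.y_d  s.t.  v.x_d = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0.
    X: (n_dmus, n_inputs) inputs, Y: (n_dmus, n_outputs) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[d], np.zeros(m)])                     # minimize -u.y_d
    A_ub = np.hstack([Y, -X])                                    # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[d]]).reshape(1, -1)    # v.x_d = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return res.x[:s], res.x[s:]                                  # (u, v)

def cross_efficiency_matrix(X, Y):
    """E[d, j]: efficiency of DMU j evaluated with DMU d's optimal weights."""
    n = X.shape[0]
    E = np.zeros((n, n))
    for d in range(n):
        u, v = ccr_weights(X, Y, d)
        E[d] = (Y @ u) / (X @ v)
    return E
```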
Ben, Taleb Romain. "Modélisation et optimisation des actifs pour l'aide à la prise de décision stratégique dans les entreprises." Electronic Thesis or Diss., Ecole nationale des Mines d'Albi-Carmaux, 2024. http://www.theses.fr/2024EMAC0001.
Full textThe tools and methods used to assist in strategic decision-making, particularly in SMEs, face several limitations. It is observed that they are primarily deterministic, based on past data, and are framed by an approach that is almost exclusively accounting and financial. However, strategic decisions in a company are activities aimed towards the future, highly subject to uncertainty, which aim to maximize the generated value of the company, whether it is financial or not. In this context, the research question addressed in this thesis is how to assist business leaders in making prospective strategic decisions in a context subject to uncertainty ? In terms of contributions, we first propose a conceptual framework based on a meta-model that allows representing a company according to a logic of assets and value. This modeling is then enriched with a causality diagram that establishes the existing dynamics between the assets that create value. To illustrate the applicability of this conceptual framework, an approach is proposed using experimental design based on a simulation model on one hand, and an optimization model in Mixed Integer Programming on the other hand. A set of experiments validates the relevance of the proposal, notably identifying the consequences of the decisions made on each asset in terms of generated value for the company
Khelifi, Mohammed Nedjib. "Méthode de conception d'un système d'information par ébauche systémique et aide à la décision." Paris 8, 1993. http://www.theses.fr/1993PA080795.
Full text
The systemic preliminary model has become an established way to design a system intended for change. In this research work, it is presented as a pedagogic aid for future managers in various fields such as communication, economics, business, sociology, etc. It is intended to improve communication behaviours during individual interviews and the collection of information, in order to reach the goal in the most efficient way. Our aim is to give the survey manager tools, arguments, rules, criteria and parameters that enable him, with the assistance of a programmer, to carry a project through to completion. The concept of information has improved and grows more precise from day to day thanks to the development of programming techniques, keeping in mind that modelling applies to every representation or abstract transcription of a concrete reality and, at the scientific level, plays an essential part in research through its various representations. Theoretical research capable of solving the problem of the complete modelling of the information system would make it possible to foresee a design methodology for the complete information system, together with the design and structuring of its database.
Hammami, Inès. "Systèmes d'information et performance de la prise de décision : étude théorique et étude expérimentale dans le cas des systèmes à base d'entrepôt de données." Nice, 2001. http://www.theses.fr/2001NICE0033.
Full text
Lopez, Orozco Francisco. "Modélisation cognitive computationnelle de la recherche d'information utilisant des données oculomotrices." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENS013/document.
Full text
This computer science thesis presents computational cognitive modeling work using the eye movements of people faced with different information search tasks on textual material. We studied everyday situations in which people seek information in a newspaper or on a web page. People must judge whether or not a piece of text is semantically related to a goal expressed by a few words. Because time is quite often a constraint, texts may not be entirely processed before the decision occurs. More specifically, we analyzed eye movements during two information search tasks: reading a paragraph with the task of quickly deciding (i) whether or not it is related to a given goal and (ii) whether it is better related to a given goal than another paragraph processed previously. One model is proposed for each of these situations. Our simulations are done at the level of eye fixations and saccades. In particular, we predicted the time at which participants would decide to stop reading a paragraph because they have enough information to make their decision. The models make predictions at the level of the words that are likely to be fixated before a paragraph is abandoned. Human semantic judgments are mimicked by computing the semantic similarities between sets of words using Latent Semantic Analysis (LSA) (Landauer et al., 2007). We followed a statistical parametric approach in the construction of our models, which are based on a Bayesian classifier. We proposed a two-variable linear threshold to account for the decision to stop reading a paragraph, based on the rank of the fixation and (i) the semantic similarity (Cos) between the paragraph and the goal and (ii) the difference of semantic similarities (Gap) between each paragraph and the goal. For both models, the performance results showed that we are able to replicate, on average, people's behavior in the information search tasks studied throughout the thesis. The thesis includes two main parts: 1) designing and carrying out psychophysical experiments in order to acquire eye movement data, and 2) developing and testing the computational cognitive models.
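The decision rule described above, which combines fixation rank with an LSA-style semantic similarity, can be illustrated with a minimal sketch. The cosine computation is standard; the linear weights and the threshold below are placeholders, not the parameters fitted in the thesis.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two semantic vectors (e.g. LSA vectors of word sets)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def stop_reading(rank, cos_sim, w_rank=0.03, w_cos=1.0, threshold=0.6):
    """Two-variable linear threshold: the paragraph is abandoned once the weighted
    combination of fixation rank and goal similarity crosses the decision boundary.
    Weights and threshold are illustrative placeholders."""
    return w_rank * rank + w_cos * cos_sim >= threshold
```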
Meunier, François. "Prédiction de phénomènes géologiques pour l'aide à la décision lors de la prise de permis d'exploitation." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS351.
Full text
Machine learning, which is considered an integral part of artificial intelligence and should ultimately make computers "smart", continues to grow over time and opens up unsuspected horizons. More and more complex structures tend to be studied in this way, raising the available information to the level of exploitable knowledge. This doctoral work proposes to exploit a particular type of data, 3D objects (structures) built from meshes, by empirically demonstrating the contribution of extracting sub-parts from them. This objective is achieved by solving a forecasting problem with a new supervised classification approach for information recommendation. Beyond the expected result, a justification is also provided in the form of a visualization of the most discriminant extracted sub-parts, thus allowing interpretation by the specialist. In the Total Exploration division, this classification need initially applies to large 3D structures such as geo-models of geological basins, whose relevant elements belong to sub-parts. When studying a subsoil, geologists try to understand it by using 3D data reconstructed from acoustic waves. This understanding can be helped by providing a way to detect certain types of shapes within these structures. To answer this problem, we propose a classification system for these 3D structures. Thanks to an adaptation of time series shapelets and feature selection methods, it is possible to select only the parts most relevant to the targeted classification. To summarize, the main idea is to randomly extract a certain number of sub-surfaces from each 3D object of the learning set, to study their relevance with respect to the expected classification, and then to use the most relevant ones for a more traditional learning based on the degree to which each extract belongs to each object. In industrial companies, the lack of justification of results tends to make machine learning techniques look like a black box. The proposed method corrects this problem and allows the result of the decision support provided by the built classifier to be understood. Indeed, in addition to presenting slightly better forecast results than those provided by the state of the art, it offers a visualization of the sub-parts of the 3D objects that are most discriminating within the implemented classification model, and therefore of the areas that contributed most to classifying the data. Subsequently, we propose to improve this method along two main paths: the first is an adaptation of knowledge transfer (or transfer learning) applied to the previously proposed algorithm; the second is an innovative attribute selection method, based on tools derived from fuzzy subset theory, which proves to be potentially applicable to any type of attribute selection challenge in supervised classification. These multiple results confirm the general potential of random selection of candidate attributes, especially in the context of large amounts of data.
Dantan, Jérôme. "Une approche systémique unifiée pour l’optimisation durable des systèmes socio-environnementaux : ingénierie des systèmes de décision en univers incertain." Electronic Thesis or Diss., Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1045.
Full text
Nowadays, the sustainability of human activities is a major worldwide concern. The challenge is to evaluate such activities not only in terms of efficiency and productivity, but also in terms of their economic, social and environmental durability. For this, the experts of these areas need to work collaboratively. In this context, human societies face several major challenges, such as: (1) processing a large amount of information whose volume increases exponentially ("big data"), (2) living in a real world that is both dynamic and imperfect, and (3) predicting and assessing the future states of their activities. The research conducted in this thesis contributes in particular to the domain of decision systems engineering under uncertainty. We have chosen the field of socio-environmental systems in general as our subject of study, and particularly the multidisciplinary field of agriculture. We propose a systemic approach for the sustainable optimization of socio-environmental systems: (1) the meta-modeling of socio-environmental systems, (2) the generic representation of the data imperfection flowing in such systems, associated with a decision model in an uncertain environment, and finally (3) the simulation and assessment of such systems in a dynamic environment for the purpose of decision making by experts, which we have illustrated with both a service-oriented architecture model and case studies applied to the agriculture domain.
Leitzelman, Mylène. "Mise en place d'un système d'informations stratégiques multicritères facilitant l'intégration des ressources régionales et la prise de décision dans le domaine de l'environnement Application à la ville de Marseille." Aix-Marseille 3, 1998. http://www.theses.fr/1998AIX30078.
Full text
Gay, Antonin. "Pronostic de défaillance basé sur les données pour la prise de décision en maintenance : Exploitation du principe d'augmentation de données avec intégration de connaissances à priori pour faire face aux problématiques du small data set." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0059.
Full text
This CIFRE PhD is a joint project between ArcelorMittal and the CRAN laboratory, with the aim of optimizing industrial maintenance decision-making through the exploitation of the available sources of information, i.e. industrial data and knowledge, under the industrial constraints presented by the steel-making context. The current maintenance strategy on steel lines is based on regular preventive maintenance. The evolution of preventive maintenance towards a dynamic strategy is achieved through predictive maintenance, which has been formalized within the Prognostics and Health Management (PHM) paradigm as a seven-step process. Among these PHM steps, this PhD's work focuses on decision-making and prognostics. The Industry 4.0 context puts emphasis on data-driven approaches, which require large amounts of data that industrial systems cannot systematically supply. The first contribution of the PhD consists in proposing an equation linking prognostics performance to the number of available training samples. This contribution makes it possible to predict the prognostics performance that could be obtained with additional data when dealing with small datasets. The second contribution focuses on evaluating and analyzing the performance of data augmentation when applied to prognostics on small datasets. Data augmentation leads to an improvement of prognostics performance of up to 10%. The third contribution consists in the integration of expert knowledge into data augmentation. Statistical knowledge integration proved effective in avoiding the performance degradation caused by data augmentation under some unfavorable conditions. Finally, the fourth contribution consists in integrating prognostics into maintenance decision-making cost modeling and evaluating the impact of prognostics on maintenance decision cost. It demonstrates that (i) the implementation of predictive maintenance reduces maintenance cost by up to 18-20% and (ii) the 10% prognostics improvement can reduce maintenance cost by an additional 1%.
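Data augmentation for a small prognostics training set can be as simple as jittering: each degradation trajectory is duplicated with small Gaussian perturbations so that the learner sees more, slightly different, samples. This is only a generic illustration of the principle; the knowledge-integrated augmentation developed in the thesis and the noise level used here (sigma) are assumptions.

```python
import numpy as np

def augment_with_jitter(X, y, n_copies=3, sigma=0.01, seed=0):
    """Grow a small training set by adding noisy copies of each sample.
    X: (n_samples, n_features) array, y: labels or remaining-useful-life targets."""
    rng = np.random.default_rng(seed)
    X_aug = [X] + [X + rng.normal(0.0, sigma, size=X.shape) for _ in range(n_copies)]
    y_aug = [y] * (n_copies + 1)                 # targets are simply repeated
    return np.concatenate(X_aug), np.concatenate(y_aug)
```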
Lopez, orozco Francisco. "Modélisation cognitive computationnelle de la recherche d'information utilisant des données oculomotrices." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00910178.
Full text
Debèse, Nathalie. "Recalage de la navigation par apprentissage sur les données bathymètriques." Compiègne, 1992. http://www.theses.fr/1992COMPD538.
Full textEkhteraei, Toussi Mohammad Massoud. "Analyse et reconstitution des décisions thérapeutiques des médecins et des patients à partir des données enregistrées dans les dossiers patient informatisés." Paris 13, 2009. http://www.theses.fr/2009PA132029.
Full textThis thesis deals with the study of the agreement between the therapeutic decisions and the recommendations of best practice. We propose three methods for the analysis and the reconstruction of physicians’ and patients’ therapeutic decisions through the information available in patient records. Our first method involves the analysis of the agreement between physicians’ prescriptions and the recommendations of best practice. We present a typology of drug therapy, applicable to chronic disease, allowing to formalize both prescriptions and recommendations and to compare them in three levels of detail: the type of treatment, pharmaco-therapeutic class, and the dose of each medication. Our second method involves the extraction of physicians’ therapeutic decisions through patient records when the guidelines do not offer recommendations. We first present a method for discovering knowledge gaps in clinical practice guidelines. Then we apply a machine learning algorithm (C5. 0 Quinlan) to a database of patient records to extract new rules that we graft to the decision tree of the original guideline. Our third method involves the analysis of compliance of patients’ therapeutic decisions with regard to the physicians’ recommendations concerning insulin dose adjustment. We present five indicators useful for the verification of the level of patient compliance: absolute agreement (AA) and the relative agreement (RA) show an acceptable compliance, extreme disagreement (ED) shows a dangerous behavior, over-treatment (OT) and under-treatment (UT) show that the administered dose was respectively too high or too low
Lomet, Aurore. "Sélection de modèle pour la classification croisée de données continues." Compiègne, 2012. http://www.theses.fr/2012COMP2041.
Full text
Pastor, Josette. "Ronsart : représentation objet du raisonnement dans un système d'aide à la surveillance de processus." Toulouse 3, 1990. http://www.theses.fr/1990TOU30246.
Full textOuedraogo, Boukary. "Système de surveillance épidémiologique au Burkina Faso : contribution à la mise en place d'un dispositif informatisé de remontée des données du paludisme et analyses géo-épidémiologiques pour la prise de décision." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0675/document.
Full textOur work has shown that an epidemiological surveillance system (SEpi), such as those based on mobile phones, must be integrated into the national system. Often external actors impose and decide on the implementation of an information system (IS), without real consultation with users and managers, without integration into the national system, without long-term reflection on the functioning, costs and developments. Users and managers must take ownership of the system, both in its implementation and in its maintenance, development and analysis. The example of the spatial and temporal variation of malaria has shown that non-health factors, in this case environmental factors, have an impact on the occurrence of epidemics. It is therefore necessary, in order to have a vision of the epidemiological situation in a national decision-making context, to integrate these factors to optimize the analysis and the SEpi. It is essential, for a useful analysis of an epidemiological situation, to have, in real time, very fine spatial and temporal scales. The success of IS development depends mainly on the involvement of authorities at each hierarchical level. Without an SIS policy decided at the highest level, structured and actively coordinated, any implementation of a new tool (tablet, mobile phone, etc.) is doomed to failure, regardless of the budget allocated.It is necessary to move away from the tradition of annual review/reporting, which only analyses past aggregated information, disconnected from the national IS, to enter SEpi 2.0 in real time, reactive, integrated into a structured and nationally coordinated SIS
Romero, Aquino Nelson Marcelo. "A smart assessment of business processes for enterprises decision support." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0184.
Full text
The challenges faced by enterprises on a daily basis, such as regulatory compliance, novel technology adoption, or cost optimisation, drive them to implement improvement initiatives. As a first step towards implementing those initiatives, there is a need to perform assessments to understand the As-Is state of the enterprise considering different aspects such as maturity, agility, performance, or readiness for digitisation. However, assessments are expensive in terms of time and resources, especially qualitative appraisals such as maturity or capability assessments, since they often demand the participation of one or more human assessors to review documents, perform interviews, etc. Therefore, means to automate or semi-automate the assessment process are essential, since they could reduce the effort needed to perform it. In this sense, the objective of this thesis is to propose the Smart Assessment Framework (SAF), a conceptual framework grounded in the concepts of smart systems and able to provide efficient support for the development of systems that perform automated assessments in enterprises. The capabilities of smart systems that are considered as the basis for the definition of the SAF are extracted from the scientific literature through a systematic literature review and the application of natural language processing techniques. The SAF is instantiated to design and develop systems able to perform business process capability assessment using two types of assessment evidence: text data and enterprise models. To treat the text data, a hybrid Long Short-Term Memory network and ontology-based system is defined. To treat enterprise models, a Graph Neural Network-based approach is devised.
Brahimi, Lahcene. "Données de tests non fonctionnels de l'ombre à la lumière : une approche multidimensionnelle pour déployer une base de données." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0009/document.
Full text
Choosing appropriate database management systems (DBMS) and/or execution platforms for a given database (DB) is complex and tends to be time- and effort-intensive, since this choice has an important impact on the satisfaction of non-functional requirements (e.g., temporal performance or energy consumption). Indeed, a large number of tests have been performed to assess the quality of developed DBs. This assessment often involves metrics associated with non-functional requirements, which leads to a mine of tests covering all life-cycle phases of DB design. Tests and their environments are usually published in scientific articles or on dedicated websites such as that of the Transaction Processing Council (TPC). Therefore, this thesis takes a special interest in the capitalization and reuse of performed tests, in order to reduce and master the complexity of the DBMS/platform selection process. By analyzing the tests accurately, we identify that they concern: the data set, the execution platform, the addressed non-functional requirements, the queries used, etc. Thus, we propose an approach for conceptualizing and persisting all these dimensions as well as the test results. Consequently, this thesis leads to the following contributions: (1) a design model based on descriptive, prescriptive and ontological concepts to capture the different dimensions; (2) the development of a multidimensional repository to store the test environments and their results; (3) the development of a decision-making methodology based on a recommender system for DBMS and platform selection.
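The recommender idea behind contribution (3) can be sketched very simply: describe each capitalized test by its context (workload type, data set size, targeted non-functional requirement, etc.), then rank the repository by similarity to a new deployment context and return the DBMS/platform of the closest tests. The dictionary structure, the similarity measure and the field names below are assumptions for illustration, not the thesis's actual repository schema.

```python
def similarity(ctx_a, ctx_b):
    """Crude similarity between two test contexts: fraction of matching key/value pairs."""
    keys = set(ctx_a) | set(ctx_b)
    return sum(ctx_a.get(k) == ctx_b.get(k) for k in keys) / max(len(keys), 1)

def recommend(new_context, repository, k=3):
    """Rank capitalized tests by context similarity and return the DBMS/platform of the
    k closest ones together with their recorded metric (e.g. energy or latency)."""
    ranked = sorted(repository, key=lambda t: similarity(new_context, t["context"]), reverse=True)
    return [(t["dbms"], t["platform"], t["metric"]) for t in ranked[:k]]

# illustrative usage with an assumed repository structure
repo = [{"context": {"workload": "OLAP", "size": "100GB"}, "dbms": "PostgreSQL",
         "platform": "cluster-A", "metric": 125.0}]
print(recommend({"workload": "OLAP", "size": "1TB"}, repo))
```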
Chakhchoukh, Mehdi. "Visualization to Support Multi-Criteria Decision-making in Agronomy." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG085.
Full textThe increasing complexity of agricultural systems necessitates sophisticated decision-making tools that can handle multiple criteria and accommodate complex trade-off analysis tasks. This thesis develops visualizations that facilitate decision-making processes in agronomy. This work has three main contributions: (i) understanding how provenance can support trade-off analysis, (ii) articulating high-level design and visualization needs to support group comparison in trade-off scenarios, and (iii) understanding how different visualizations can affect comparisons and decision-making in trade-off analysis. After an introductory chapter and a chapter on related work, the thesis details these three main contributions. The 3rd chapter of the thesis investigates how analytic provenance mechanisms can assist experts in recalling and tracking complex trade-off analyses. We developed VisProm, a web-based system integrating in-visualization provenance views to help experts track trade-offs and their objectives when exploring complex simulation results. Observation sessions with groups of experts revealed eight key tasks supported by our designs, highlighting new opportunities for provenance-driven trade-off analysis, such as monitoring trade-off space coverage and tracking alternative scenarios. One key outcome was the need to consider conflicting objectives and compare how different solutions or trade-off spaces fare under these objectives. Building on this, the 4th chapter explores the needs and challenges experts face when comparing trade-off spaces (often expressed as groups of data points, e.g., groups of simulation results) that optimize different objectives. Through workshops with domain experts and visualization designers, we identified high-level design and visualization needs to support group comparison in trade-off scenarios. This chapter lays the groundwork for developing effective visualization techniques for comparing groups that represent different trade-offs in terms of which objectives they optimize. These needs led to the implementation of a visualization prototype that visually encodes a variety of trade-off metrics. These metrics encode and visually communicate experts' priorities in terms of objectives, the notion of ideal solutions, and how far current groups of solutions are from those ideals. The 5th chapter focuses on the evaluation of visualization techniques for comparing groups of points (solutions) when they represent different trade-offs. Motivated by the visualization needs and design requirements of the previous chapter, we selected three promising tabular-based visualization techniques to study. These techniques encode trade-off priorities and ideal solutions in different ways, coupling or decoupling the trade-off metrics and presenting them visually. We conducted a user study to understand how the visualizations affected comparison decisions and the quality of decision explanations. The findings highlight that techniques that visually separate the encoding of priorities and ideal solutions lead to higher mental load and lower self-reported trust, but may support more varied decision strategies than integrated visualizations. Nevertheless, they were always preferred over a baseline visualization. We conclude the thesis with a discussion and perspectives on future directions stemming from the results of this work.
Ben Jeddou, Roukaya. "Football Selection Optimization through the Integration of Management Theories, AI and Multi-criteria Decision Making." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCG009.
Full textThe research outlined in this thesis falls within the context of professional football club management, where establishing a balance between human and financial aspects is essential for the long-term viability of sports organizations. In football management, traditional methods of player selection have historically guided decision-making processes within clubs. This strategic decision-making process, which is often subjective and uncertain, can have a significant impact on the club's financial, economic and sporting situation. As football is increasingly becoming a data-driven sport, there is a growing recognition that traditional approaches need to be complemented by scientific methods based on artificial intelligence (AI) and multi-criteria decision making (MCDM) approaches to optimize player selection and improve both sporting and financial performance. It is becoming increasingly important to find an optimal balance between sporting success and financial performance to optimize the results of a specific entity: the football club. In this respect, the main purpose of this thesis is to propose a model that combines machine learning techniques with multi-criteria analysis methods to improve the efficiency and objectivity of the football player selection process, while taking into account financial and managerial considerations. Our first contribution is to prioritize the physical, technical, tactical, and behavioral criteria of players using Random Forest, Entropy, and CRITIC (CRiteria Importance Through Intercriteria Correlation) algorithms. The second contribution is to rank players based on their performance using the TOPSIS method. To validate these contributions, we designed a decision support system that assists the sports decision maker by proposing players in order of performance. Our model does not aim to replace coaches but rather to integrate subjective and objective evaluations to provide a thorough understanding of the factors influencing sporting and managerial performance, thereby improving the accuracy of player selection. As football moves towards more data-oriented approaches, the combination of AI and MCDM can further optimize player selection processes by leveraging the benefits of objective data analysis and subjective expertise. The results obtained show the effectiveness of our approach in improving the performance of football teams, especially when supported and promoted by emotional intelligence, which refers to the manager's ability to recognize the emotional state of the players.
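For illustration, a minimal NumPy sketch of the TOPSIS ranking step mentioned above; the player scores, criteria and weights are fabricated, and the weighting stage (Random Forest/Entropy/CRITIC) is not reproduced here.

```python
# Minimal TOPSIS ranking sketch (NumPy); scores and weights are illustrative.
import numpy as np

def topsis_rank(scores, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) by closeness to the ideal."""
    norm = scores / np.linalg.norm(scores, axis=0)           # vector normalisation
    weighted = norm * weights                                 # apply criteria weights
    ideal = np.where(benefit, weighted.max(0), weighted.min(0))
    anti = np.where(benefit, weighted.min(0), weighted.max(0))
    d_plus = np.linalg.norm(weighted - ideal, axis=1)         # distance to ideal
    d_minus = np.linalg.norm(weighted - anti, axis=1)         # distance to anti-ideal
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness

# Three players evaluated on physical, technical and tactical criteria (higher is better).
players = np.array([[7.5, 8.0, 6.0], [8.2, 6.5, 7.0], [6.8, 7.2, 8.5]])
order, scores = topsis_rank(players, weights=np.array([0.4, 0.35, 0.25]),
                            benefit=np.array([True, True, True]))
```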
Devillers, Rodolphe. "Conception d'un système multidimensionnel d'information sur la qualité des données géospatiales." PhD thesis, Université de Marne la Vallée, 2004. http://tel.archives-ouvertes.fr/tel-00008930.
Full textChuchuk, Olga. "Optimisation de l'accès aux données au CERN et dans la Grille de calcul mondiale pour le LHC (WLCG)." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4005.
Full textThe Worldwide LHC Computing Grid (WLCG) offers an extensive distributed computing infrastructure dedicated to the scientific community involved with CERN's Large Hadron Collider (LHC). With storage that totals roughly an exabyte, the WLCG addresses the data processing and storage requirements of thousands of international scientists. As the High-Luminosity LHC phase approaches, the volume of data to be analysed will increase steeply, outpacing the expected gain through the advancement of storage technology. Therefore, new approaches to effective data access and management, such as caches, become essential. This thesis delves into a comprehensive exploration of storage access within the WLCG, aiming to enhance the aggregate science throughput while limiting the cost. Central to this research is the analysis of real file access logs sourced from the WLCG monitoring system, highlighting genuine usage patterns. In a scientific setting, caching has profound implications. Unlike more commercial applications such as video streaming, scientific data caches deal with varying file sizes, ranging from a few bytes to multiple terabytes. Moreover, the inherent logical associations between files considerably influence user access patterns. Traditional caching research has predominantly revolved around uniform file sizes and independent reference models; in contrast, scientific workloads exhibit wide variance in file sizes, and logical file interconnections significantly influence user access patterns. My investigations show how the LHC's hierarchical data organization, particularly its compartmentalization into datasets, impacts request patterns. Recognizing the opportunity, I introduce innovative caching policies that emphasize dataset-specific knowledge, and compare their effectiveness with traditional file-centric strategies. Furthermore, my findings underscore the "delayed hits" phenomenon triggered by limited connectivity between computing and storage locales, shedding light on its potential repercussions for caching efficiency. Acknowledging the long-standing challenge of predicting data popularity in the High Energy Physics (HEP) community, especially with the upcoming HL-LHC era's storage conundrums, my research integrates Machine Learning (ML) tools. Specifically, I employ the Random Forest algorithm, known for its suitability with Big Data. By harnessing ML to predict future file reuse patterns, I present a dual-stage method to inform cache eviction policies. This strategy combines the power of predictive analytics and established cache eviction algorithms, thereby devising a more resilient caching system for the WLCG. In conclusion, this research underscores the significance of robust storage services, suggesting a direction towards stateless caches for smaller sites to alleviate complex storage management requirements and open the path to an additional level in the storage hierarchy. Through this thesis, I aim to navigate the challenges and complexities of data storage and retrieval, crafting more efficient methods that resonate with the evolving needs of the WLCG and its global community.
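As a hedged illustration of a dataset-aware caching policy of the kind discussed above (not the thesis's actual algorithm): the sketch below evicts files belonging to the dataset that has been idle the longest, an assumption chosen purely to make the idea concrete.

```python
# Minimal dataset-aware cache sketch: eviction prefers files whose whole dataset
# has been idle the longest (illustrative policy, not the thesis's exact one).
from collections import OrderedDict

class DatasetAwareCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = {}                         # file_id -> (size, dataset)
        self.dataset_last_use = OrderedDict()   # dataset -> logical clock of last access
        self.clock = 0

    def access(self, file_id, size, dataset):
        self.clock += 1
        hit = file_id in self.files
        if not hit:
            while self.used + size > self.capacity and self.files:
                self._evict_one()
            self.files[file_id] = (size, dataset)
            self.used += size
        self.dataset_last_use[dataset] = self.clock
        self.dataset_last_use.move_to_end(dataset)
        return hit

    def _evict_one(self):
        # Victim: any file belonging to the least recently used dataset.
        coldest = next(iter(self.dataset_last_use))
        victim = next((f for f, (_, ds) in self.files.items() if ds == coldest), None)
        if victim is None:                      # dataset no longer cached; drop its entry
            self.dataset_last_use.pop(coldest)
            return
        size, _ = self.files.pop(victim)
        self.used -= size
```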
Es soufi, Widad. "Modélisation et fouille des processus en vue d'assister la prise de décisions dans le contexte de la conception et la supervision des systèmes." Thesis, Paris, ENSAM, 2018. http://www.theses.fr/2018ENAM0067/document.
Full textData sets are growing rapidly for two reasons: first, the fourth industrial revolution, which aims to transform factories into smart entities in which cyber-physical systems monitor the factory's physical processes; second, the need to innovate in order to achieve and maintain competitiveness. Due to this huge volume of data (Big Data), (i) design and supervision processes are becoming chaotic, (ii) data within organizations is increasingly difficult to exploit and (iii) engineers are increasingly lost when making decisions. Indeed, several issues are identified in industry: (i) when researching, visualizing and exchanging information, (ii) when making decisions and (iii) when managing contextual changes. Through this research work, we propose an Intelligent and modular Decision Support System (IDSS), where each of its four modules solves one of the identified issues. The process modelling and traceability modules aim to model processes and capture how they are actually executed. The decision support module proposes the process patterns that best fit the decision context, as well as their most significant activity parameters. The contextual change management module continuously updates the decision-making module in order to handle the dynamic aspect of the decision context. The proposed system is fully verified and partially validated in the context of the Gontrand project, which aims at intelligent, real-time supervision of gas networks favoring the injection of green gas. In order to be fully validated, the performance of the system must be analyzed after integrating and exploiting it in a real industrial environment.
Dantan, Jérôme. "Une approche systémique unifiée pour l’optimisation durable des systèmes socio-environnementaux : ingénierie des systèmes de décision en univers incertain." Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1045/document.
Full textNowadays, the sustainability of human activities is a major worldwide concern. The challenge is to evaluate such activities not only in terms of efficiency and productivity, but also in terms of their economic, social and environmental durability. For this, experts from these areas need to work collaboratively. In this context, human societies face several major challenges, such as: (1) processing a large amount of information whose volume increases exponentially (“big data”), (2) living in a real world that is both dynamic and imperfect, and (3) predicting and assessing the future states of their activities. The research conducted in this thesis contributes in particular to the domain of decision systems engineering under uncertainty. We have chosen the field of socio-environmental systems in general as our subject of study, particularly the multidisciplinary field of agriculture. We propose a systemic approach for the sustainable optimization of socio-environmental systems: (1) the meta-modeling of socio-environmental systems, (2) the generic representation of the data imperfection flowing through such systems, associated with a decision model in an uncertain environment, and finally (3) the simulation and assessment of such systems in a dynamic environment for decision making by experts, which we illustrate with both a service-oriented architecture model and case studies applied to the agriculture domain.
Jaffré, Marc-Olivier. "Connaissance et optimisation de la prise en charge des patients : la science des réseaux appliquée aux parcours de soins." Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2445/document.
Full textIn France, the streamlining of the resources allocated to hospitals results in a concentration of resources and a growing complexity of healthcare facilities. Piloting and planning them turns out to be all the more difficult, leading to optimization problems. The use of the massive data produced by these systems, in association with network science, offers an alternative approach for analyzing and improving decision-making support in healthcare. Method: Various preexisting optimization approaches are first highlighted, based on observations in operating theaters chosen as experimental sites. An analysis of the merger of two hospitals follows, as an example of an optimization method by massification. These two steps make it possible to argue for an alternative approach that combines the use of big data, network science and data visualization techniques. Two sets of patient data in orthopedic surgery from the former Midi-Pyrénées region in France are used to create a network of all sequences of care. The whole is displayed in a visual environment developed in JavaScript allowing dynamic mining of the graph. Results: Visualizing healthcare sequences in the form of node-link graphs has been set out. The graphs provide an additional perception of the redundancies of the healthcare pathways. The dynamic character of the graphs also allows their direct mining. The initial visual approach is supplemented by a series of objective measures from network science. Conclusion: Healthcare facilities produce massive data valuable for their analysis and optimization. Data visualization together with a framework such as network science gives preliminary encouraging indicators uncovering redundant healthcare pathway patterns. Further experimentation with varied and larger data sets is required to validate and strengthen these observations and methods.
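To make the node-link representation concrete, here is a small sketch (using networkx) that builds a directed graph from fabricated care sequences and computes simple network-science indicators; the care steps and pathways are illustrative assumptions, not the study's data.

```python
# Sketch: build a directed graph of care sequences and measure simple network
# properties; the care steps and patient pathways below are fabricated examples.
import networkx as nx

pathways = [
    ["consultation", "imaging", "surgery", "rehabilitation"],
    ["consultation", "surgery", "rehabilitation"],
    ["consultation", "imaging", "surgery", "follow-up"],
]

graph = nx.DiGraph()
for steps in pathways:
    for src, dst in zip(steps, steps[1:]):
        # Edge weight counts how many pathways share this transition (redundancy).
        w = graph.get_edge_data(src, dst, {"weight": 0})["weight"]
        graph.add_edge(src, dst, weight=w + 1)

# Network-science indicators: most central care steps and most shared transitions.
print(nx.degree_centrality(graph))
print(sorted(graph.edges(data="weight"), key=lambda e: -e[2]))
```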
Nallapu, Bhargav Teja. "A closed loop framework of decision-making and learning in primate prefrontal circuits." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0300.
Full textThis thesis attempts to build a computational systems-level framework that helps develop an understanding of the organization of the prefrontal cortex (PFC) and basal ganglia (BG) systems and their functional interactions in decision-making and goal-directed behaviour in humans. A videogame environment, Minecraft, is used to design experiments to test the framework in an environment that can be made more complex and realistic if necessary. The framework, along with the virtual experimentation, forms a closed-loop architecture for studying high-level animal behavior. The neural systems framework in this work rests on the network dynamics between the subsystems of the PFC and BG. The PFC is believed to play a crucial role in executive functions like planning, attention and goal-directed behavior. The BG are a group of sub-cortical nuclei that have been extensively studied in the field of motor control and action selection. Different regions in the PFC and structures within the BG are anatomically organized in parallel, segregated loops (each referred to as a CBG loop). These loops can, at a high level, be divided into three kinds: limbic loops, associative loops and sensori-motor loops. First, a comprehensive framework with the above-mentioned parallel loops is implemented. The emphasis rests on the limbic loops; the associative and sensori-motor loops are therefore modeled algorithmically, drawing on the experimentation platform for motor control. As for the limbic loops, the orbitofrontal cortex (OFC) is part of a loop for preferences, and the anterior cingulate cortex (ACC) of a loop for internal needs. These loops are formed through their limbic counterpart in the BG, the ventral striatum (VS). The VS has been widely studied and reported to encode various substrates of value, forming an integral part of value-based decision making. Simple scenarios are designed in the virtual environment using the agent, objects and appetitive rewards. The limbic loops are implemented according to existing computational models of decision making in the BG. Thus the framework and the experimental platform stand as a testbed for computational models of specific processes that have to fit into a bigger picture. Next, we use this framework to study more closely the role of the OFC in value-guided decision making and goal-directed behavior. As part of this thesis, several outstanding observations about the role of the OFC in behavior are summarized by consolidating numerous experimental findings and reviews. Lastly, to explain the findings of different roles for the lateral and medial regions of the OFC, the existing computational architecture of CBG loops, Pavlovian learning in the amygdala and multiple lines of evidence on amygdala-OFC-VS interactions are put together into a single model. The reinforcement learning rules are adapted to accommodate appropriate credit assignment (the correct outcome to the correct chosen stimulus) and the value difference between the choice options. As a result, several findings from animal experiments studying these separable roles were replicated, among them the difference in choice impairments depending on the value difference between the best and second-best options. Dissociable roles in Pavlovian Instrumental Transfer were also observed. The investigations into the evidence around the OFC offer great insight into the very processes of decision-making and value computation in general.
By venturing into bio-inspired adaptive learning in an embodied virtual agent, and by describing the principles of motivation, goal selection and self-evaluation, this work highlights that the fields of reinforcement learning and artificial intelligence have a lot to gain from studying the role of prefrontal systems in decision-making.
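As an illustrative aside, a minimal sketch of a value-learning update with explicit credit assignment of the outcome to the chosen stimulus; the rule shown is a generic Rescorla-Wagner/Q-learning style update with fabricated stimuli and learning rate, not the thesis's amygdala-OFC-VS model.

```python
# Generic value-learning update with explicit credit assignment: only the chosen
# stimulus's value is updated toward the obtained outcome (illustrative sketch).
import random

values = {"stimulus_A": 0.0, "stimulus_B": 0.0}
alpha = 0.1  # learning rate

def choose(vals, epsilon=0.1):
    """Epsilon-greedy choice between stimuli based on current values."""
    if random.random() < epsilon:
        return random.choice(list(vals))
    return max(vals, key=vals.get)

def update(vals, chosen, reward):
    """Credit the outcome to the chosen stimulus only (Rescorla-Wagner style)."""
    vals[chosen] += alpha * (reward - vals[chosen])

for _ in range(100):
    chosen = choose(values)
    reward = 1.0 if chosen == "stimulus_A" and random.random() < 0.8 else 0.0
    update(values, chosen, reward)
```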
Sene, Serigne Kosso. "Du modèle à l’aide à la décision par la modélisation : application à l’irrigation déficitaire des végétaux en ville, contexte France et Sénégal." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS579.
Full textUrban green space management is constantly evolving: green spaces are transitioning from mere ornamental elements to ecosystems that provide a wide range of essential services to both humans and the environment. However, these services are not without cost, particularly in terms of water use. In this context, agronomic models, while numerous, do not constitute comprehensive decision support tools for urban green space professionals. To address this challenge, an approach that integrates decision theory and decision modeling has been developed. This approach aims to combine scientific knowledge with context-specific operational constraints. The goal is to provide precise and tailored scenarios for each situation, taking into account operational constraints and stakeholder expectations. A concrete example of this approach is the SARa agronomic model developed in this study, which assesses root activity in trees using tensiometric measurements. This model allows for continuous monitoring of tree establishment after transplantation, offering a proactive rather than reactive view of their survival. SARa parameters are then integrated into a decision support tool for deficit irrigation. In addition to the SARa agronomic model, the study also utilized the TeadXpert model to provide data on several parameters, such as the useful reserve at the roots (RUR). These data were crucial for informing the decision-making process regarding urban green space irrigation. The integration of these models, including SARa and TeadXpert, together with the approach based on decision theory and decision modeling, constitutes a holistic approach aimed at improving the management of urban green spaces while optimizing water use. Moreover, the use of an ontology was explored to formalize knowledge in the field of deficit irrigation of urban green spaces, facilitating communication among stakeholders in the field. This study highlights the importance of integrating complex system models and operational constraint data in the development of decision support tools for urban green space professionals. This enables the consideration of operational realities while harnessing scientific advances for more efficient and sustainable management of vegetation in urban environments.
Al Hage, Joelle. "Fusion de données tolérante aux défaillances : application à la surveillance de l’intégrité d’un système de localisation." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10074/document.
Full textInterest in research on multi-sensor data fusion is growing because of its many application sectors. In particular, in the field of robotics and localization, the use of information from different sensors is a vital step in ensuring reliable position estimation. In this context of multi-sensor data fusion, we consider diagnosis, leading to the identification of the cause of a failure, and the sensor fault tolerance aspect, which has been discussed in only limited work in the literature. We chose to develop an approach based on a purely informational formalism: the information filter on the one hand and tools from information theory on the other. Residuals based on the Kullback-Leibler divergence are developed; these residuals allow faulty sensors to be detected and excluded through optimized thresholding methods. This theory is tested in two applications. The first is the fault-tolerant collaborative localization of a multi-robot system. The second is localization in outdoor environments using a tightly coupled GNSS/odometer fusion with a fault-tolerant aspect.
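For illustration, a small sketch of how a Kullback-Leibler divergence between two Gaussian state estimates could serve as a fault-detection residual; the estimates, covariances and threshold below are fabricated, and the thesis's optimized thresholding is not reproduced.

```python
# Sketch: Kullback-Leibler divergence between two Gaussian estimates used as a
# fault-detection residual; the estimates and the threshold are illustrative.
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL(N0 || N1) for multivariate Gaussians."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff
                  - k + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Predicted pose vs. pose updated with one sensor's measurement (fabricated values).
predicted_mu, predicted_cov = np.array([1.0, 2.0]), np.diag([0.2, 0.2])
updated_mu, updated_cov = np.array([1.6, 2.1]), np.diag([0.1, 0.15])

residual = kl_gaussian(updated_mu, updated_cov, predicted_mu, predicted_cov)
THRESHOLD = 2.0  # illustrative; the thesis derives optimized thresholds
if residual > THRESHOLD:
    print("Sensor flagged as potentially faulty:", residual)
```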
Bouillot, Flavien. "Classification de textes : de nouvelles pondérations adaptées aux petits volumes." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS167.
Full textEvery day, classification is omnipresent and unconscious. For example, when making a decision about something (an object, an event, a person), we instinctively think of similar elements in order to adapt our choices and behaviors. This placement into a particular category is based on past experience and on the characteristics of the element: the larger and more accurate the experience, the more relevant the decision. It is the same when we need to categorize a document based on its content, for example detecting whether it is a children's story or a philosophical treatise. This treatment is of course more effective if we have a large number of works in these two categories and if the books contain a large number of words. In this thesis we address the problem of decision making precisely when we have few training documents and when the documents contain a limited number of words. For this, we propose a new approach based on new term weightings, which enables us to accurately determine the weight to be given to the words that compose a document. To optimize treatment, we propose a configurable approach: five parameters make our approach adaptable, regardless of the classification problem at hand. Numerous experiments have been conducted on various types of documents, in different languages and in different configurations. Depending on the corpus, they show that our proposal achieves superior results compared with the best approaches in the literature for small-dataset problems. The use of parameters adds complexity, since it is then necessary to determine optimal values. Detecting the best settings and the best algorithms is a complicated task whose difficulty is formalized by the No-Free-Lunch theorem. We treat this second problem by proposing a new meta-classification approach based on the concepts of distance and semantic similarity. Specifically, we propose new meta-features for document classification. This original approach allows us to achieve results similar to the best approaches in the literature while providing additional features. In conclusion, the work presented in this manuscript has been integrated into various technical implementations: one in the Weka software, one in an industrial prototype and a third in the product of the company that funded this work.
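As a hedged illustration of term weighting in general (not the thesis's proposed weights or its five parameters): a minimal TF-IDF-style weighting with one smoothing parameter, applied to a toy corpus.

```python
# Minimal term-weighting sketch in the TF-IDF family, with one tunable smoothing
# parameter; illustrative only, not the thesis's actual weighting scheme.
import math
from collections import Counter

def term_weights(document, corpus, smoothing=0.5):
    """Weight each term of `document` by frequency x smoothed inverse document frequency."""
    tf = Counter(document)
    n_docs = len(corpus)
    weights = {}
    for term, count in tf.items():
        df = sum(1 for doc in corpus if term in doc)
        idf = math.log((n_docs + smoothing) / (df + smoothing))
        weights[term] = (count / len(document)) * idf
    return weights

corpus = [["wolf", "forest", "child"], ["ethics", "reason", "virtue"], ["wolf", "child", "house"]]
print(term_weights(["wolf", "child", "forest", "wolf"], corpus))
```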
O'Connor, Daniel. "Étude sur les perspectives des omnipraticiens du Québec quant à leur rôle-conseil concernant l'utilisation des médecines alternatives et complémentaires (MAC)." Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/3975.
Full textHamdan, Hani. "Développement de méthodes de classification pour le contrôle par émission acoustique d'appareils à pression." Compiègne, 2005. http://www.theses.fr/2005COMP1583.
Full textThis PhD thesis deals with real-time computer-aided decision making for the acoustic emission-based control of pressure equipment. The problem addressed is taking into account the location uncertainty of acoustic emission signals in mixture model-based clustering. Two new algorithms (EM and CEM for uncertain data) are developed. These algorithms are based only on uncertainty-zone data, and they are derived by optimizing new likelihood criteria adapted to this kind of data. In order to speed up processing when the data size becomes very large, we have also developed a new method for the discretization of uncertainty-zone data. This method is compared with the traditional one applied to imprecise data. An experimental study using simulated and real data shows the efficiency of the various approaches developed.
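For context, a sketch of the classical EM algorithm for a one-dimensional Gaussian mixture; the thesis's EM/CEM variants for uncertainty-zone data modify the likelihood and are not reproduced here. The data and component count are illustrative.

```python
# Classical EM for a 1-D Gaussian mixture (illustrative baseline; the thesis's
# variants for uncertainty-zone data adapt the likelihood, not shown here).
import numpy as np

def em_gmm(x, k=2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

signals = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(5, 0.5, 100)])
print(em_gmm(signals))
```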
Douplat, Marion. "Les décisions de limitations et d'arrêts des thérapeutiques à l'épreuve de la temporalité des urgences : enjeux éthiques." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0756.
Full textDecisions to withhold or withdraw life-sustaining treatments are frequent in emergency departments, where death is an everyday reality. These decisions lead to ethical dilemmas for physicians, nurses, and relatives with regard to the temporality of emergency departments and the evolution of French legislation, in particular the law on the rights of patients and persons nearing the end of life. Moreover, caregivers should follow an ethical thinking process in order to respect ethical principles such as autonomy, beneficence, non-maleficence and justice. There seems to be a conflict between the emergency situation and the complexity of the decision-making process. There are few data concerning decisions to withhold or withdraw life-sustaining treatments, especially about the modalities of these decisions and their implications for caregivers and relatives. Because of this lack of data, we decided to explore physicians’ experience of the decision-making process in emergency departments and relatives’ perception after the decision to withhold or withdraw life-sustaining treatment. Our study consists of three parts: a study evaluating physicians’ experience during the decision-making process; a study describing relatives’ perception of decisions to withhold or withdraw life-sustaining treatments; and finally a study on the involvement of general practitioners in the decision-making process.
De Carvalho Gomes, Fernando. "Utilisation d'algorithmes stochastiques en apprentissage." Montpellier 2, 1992. http://www.theses.fr/1992MON20254.
Full textBouba, Fanta. "Système d'information décisionnel sur les interactions environnement-santé : cas de la Fièvre de la Vallée du Rift au Ferlo (Sénégal)." Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066461.
Full textOur research is part of the QWeCI European project (Quantifying Weather and Climate Impacts on Health in Developing Countries, EU FP7), in partnership with UCAD, the CSE and the IPD, around the theme of environmental health, with the practical case of vector-borne diseases in Senegal and particularly Rift Valley Fever (RVF). The health of human and animal populations is often strongly influenced by the environment. Moreover, research on the spread factors of vector-borne diseases such as RVF considers this issue in both its physical and socio-economic dimensions. First reported in 1912-1913 in Kenya, RVF is a viral anthropo-zoonosis widespread in tropical regions which mainly concerns animals, but humans can also be affected. In Senegal, the risk area mainly concerns the Senegal River Valley and the sylvopastoral areas of Ferlo. With a Sahelian climate, the Ferlo has several ponds that are sources of water supply for humans and livestock, but also breeding sites for potential vectors of RVF. Controlling RVF, which lies at the crossroads of three (03) large systems (agro-ecological, pathogen, economic/health/social), necessarily entails considering several parameters, first to understand the emergence mechanisms but also to work on risk modeling. Our work focuses on the decision-making process to quantify the use of health and environmental data in impact assessment for RVF monitoring. The research teams involved produce data during their field investigations and laboratory analyses. This growing flood of data should be stored and prepared for correlated studies with new storage techniques such as data warehouses. Regarding data analysis, it is not enough to rely only on conventional techniques such as statistics. Indeed, the contribution to this issue is moving towards predictive analysis, combining aggregate storage techniques and processing tools. Thus, to discover information, it is necessary to move towards data mining. Furthermore, the evolution of the disease is strongly linked to the environmental spatio-temporal dynamics of the different actors (vectors, viruses, and hosts), which is why we rely on spatio-temporal patterns to identify and measure interactions between environmental parameters and the actors involved. With this decision-making process, we obtained several results: (i) following the formalization of multidimensional modeling, we built an integrated data warehouse that includes all the objects involved in managing the health risk; this model can be generalized to other vector-borne diseases; (ii) despite a very wide variety of mosquitoes, Culex neavei, Aedes ochraceus and Aedes vexans are the potential vectors of RVF; they are the most present in the study area during the rainy season, which is the period most prone to suspected cases, and the risk period remains the month of October; (iii) the analyzed ponds have almost the same behavior, but significant variations exist at some points. This research shows once again the interest of discovering relationships between environmental data and RVF with data mining methods for the spatio-temporal monitoring of the risk of emergence.