Selected scientific literature on the topic "Optimisation dirigée par les données"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources on the topic "Optimisation dirigée par les données".
Next to every source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the work's abstract online, if it is available in the metadata.
Journal articles on the topic "Optimisation dirigée par les données"
Beaudoin, Carl. "Perceptions des enseignants et des garçons à l’égard de la relation enseignant-élève au secondaire : quand les stéréotypes de genre s’immiscent en classe". Canadian Journal of Education/Revue canadienne de l'éducation 44, no. 3 (September 20, 2021): 848–74. http://dx.doi.org/10.53967/cje-rce.v44i3.4825.
Carignan, Isabelle. "La mobilisation de stratégies de lecture sur trois formes de documents en 3e secondaire". Nouveaux cahiers de la recherche en éducation 12, no. 2 (July 30, 2013): 161–78. http://dx.doi.org/10.7202/1017465ar.
Pogačnik, Vladimir. "Analyse linguistique et approches de l'oral - Recueil d'études offert en hommage à Claire Blanche-Benveniste (M. Bilger - K. van den Eynde - F. Gadet, éds.; Leuven/Paris, 1988: Peeters, Orbis/supplementa)". Linguistica 38, no. 2 (December 1, 1998): 212–13. http://dx.doi.org/10.4312/linguistica.38.2.212-213.
Eichholzer and Camenzind. "Übergewicht, Adipositas und Untergewicht in der Schweiz: Resultate der Nutri-Trend-Studie 2000". Praxis 92, no. 18 (April 1, 2003): 847–58. http://dx.doi.org/10.1024/0369-8394.92.18.847.
Proulx-Boucher, Karène, Mylène Fernet, Martin Blais, Joseph Josy Lévy, Joanne Otis, Jocelyne Thériault, Johanne Samson, Guylaine Morin, Normand Lapointe and Germain Trottier. "Bifurcations biographiques : l’expérience du dévoilement du diagnostic du point de vue d’adolescents infectés par le VIH en période périnatale". Enfances, Familles, Générations, no. 21 (July 22, 2014): 197–215. http://dx.doi.org/10.7202/1025966ar.
Vuillaume, M. L., F. Kwiatkowski, N. Uhrhammer, Y. Bidet and Y. J. Bignon. "Analyse de données d’expression transcriptomiques rythmées par des gènes-horloge : approche méthodologique et optimisation". Pathologie Biologie 61, no. 5 (October 2013): e89–e95. http://dx.doi.org/10.1016/j.patbio.2010.12.001.
Salami, Bukola, Benjamin Denga, Robyn Taylor, Nife Ajayi, Margot Jackson, Msgana Asefaw and Jordana Salma. "L’accès des jeunes Noirs de l’Alberta aux services en santé mentale". Promotion de la santé et prévention des maladies chroniques au Canada 41, no. 9 (September 2021): 271–80. http://dx.doi.org/10.24095/hpcdp.41.9.01f.
Bouchard-Valentine, Vincent. "fonofone pour iPad et iPhone : cadrage historique et curriculaire d’une application québécoise conçue pour la création sonore en milieu scolaire". Les Cahiers de la Société québécoise de recherche en musique 17, no. 1 (April 17, 2018): 11–24. http://dx.doi.org/10.7202/1044666ar.
Lafaye, Marie Christine, Georges Louis and Antoine Wiedemann. "Qualité des données : conception du schéma de la base de données en utilisant l’ingénierie dirigée par les modèles. Un outil de conception de base de données relationnelle utilisant les métamodèles de l'OMG". Ingénierie des systèmes d'information 16, no. 5 (October 30, 2011): 109–42. http://dx.doi.org/10.3166/isi.16.5.109-142.
Cloutier, Frédéric, Guillaume Jalby, Paul Lessard and Peter A. Vanrolleghem. "Modélisation dynamique du comportement des métaux lourds dans des stations d’épuration". Revue des sciences de l'eau 22, no. 4 (October 22, 2009): 461–71. http://dx.doi.org/10.7202/038325ar.
Testo completoTesi sul tema "Optimisation dirigée par les données"
Bouarar, Selma. "Vers une conception logique et physique des bases de données avancées dirigée par la variabilité". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2016. http://www.theses.fr/2016ESMA0024/document.
The evolution of computer technology has strongly impacted the database design process, which henceforth requires more time and resources to encompass the diversity of DB applications. Designers rely on their talent and knowledge, which have proven insufficient to face the increasing diversity of design choices, raising the problem of the reliability and completeness of this knowledge. This problem is well known as variability management in software engineering. While there exist some works on managing the variability of the physical and conceptual phases, very few have focused on logical design. Moreover, these works address the design phases separately and thus ignore their interdependencies. In this thesis, we first present a methodology to manage the variability of the whole DB design process using the technique of software product lines, so that (i) interdependencies between design phases can be considered, (ii) a holistic vision is provided to the designer, and (iii) process automation is increased. Given the scope of the study, we proceed step by step in implementing this vision, by studying a case that shows: (i) the importance of logical design variability, (ii) its impact on physical design (multi-phase management), and (iii) the evaluation of logical design and the impact of logical variability on physical design (materialized view selection) in terms of non-functional requirements: execution time, energy consumption, and storage space.
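Since the abstract describes managing design-choice variability with software product lines, a minimal sketch may make the idea concrete. The feature names and the cross-phase constraint below are invented for illustration; they are not taken from the thesis.

```python
from itertools import product

# Invented design choices at two phases of the DB design process.
FEATURES = {
    "logical_model": ["relational", "object_relational"],
    "storage_layout": ["row", "column"],
}

def is_valid(config):
    # Invented cross-phase interdependency: a column storage layout is
    # only compatible with a relational logical model.
    if config["storage_layout"] == "column":
        return config["logical_model"] == "relational"
    return True

def enumerate_configurations():
    """Enumerate every valid end-to-end design configuration."""
    keys = list(FEATURES)
    configs = []
    for values in product(*(FEATURES[k] for k in keys)):
        config = dict(zip(keys, values))
        if is_valid(config):
            configs.append(config)
    return configs
```

Here 3 of the 4 feature combinations survive the constraint; on a realistic design space the same enumeration (or a SAT-based feature-model reasoner) would expose which choices at one phase remain compatible with choices at another.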
Djilani, Zouhir. "Donner une autre vie à vos besoins fonctionnels : une approche dirigée par l'entreposage et l'analyse en ligne". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0012/document.
Functional and non-functional requirements represent the first step in the design of any application, software, system, etc. All the issues associated with requirements are analyzed in the Requirements Engineering (RE) field. The RE process consists of several steps: discovering, analyzing, validating, and evolving the requirements related to the functionalities of the system. The RE community has proposed a well-defined life-cycle for the requirements process that includes the following phases: elicitation, modeling, specification, validation, and management. Once the requirements are validated, they are archived or stored in repositories in companies. With the continuous storage of requirements, companies accumulate an important amount of requirements information that needs to be analyzed in order to reproduce previous experiences and the acquired know-how, by reusing and exploiting these requirements for new projects. Proposing to these companies a warehouse in which all requirements are stored represents an excellent opportunity to analyze them for decision-making purposes. Recently, the Business Process Management (BPM) community expressed the same needs for processes. In this thesis, we want to exploit the success of data warehouses and replicate it for functional requirements. The issues encountered in the design of data warehouses are almost identical in the case of functional requirements. Requirements are often heterogeneous, especially in the case of large companies such as Airbus, where each partner has the freedom to use its own vocabulary and formalism to describe the requirements. To reduce this heterogeneity, using ontologies is necessary. In order to ensure the autonomy of each partner, we assume that each source has its own ontology. This requires matching efforts between ontologies to ensure the integration of functional requirements.
An important feature related to the storage of requirements is that they are often expressed using semi-formal formalisms, such as UML use cases with an important textual part. In order to get as close as possible to our contributions in data warehousing, we proposed a pivot model factorizing three well-known semi-formalisms. This pivot model is used to define the multidimensional model of the requirements warehouse, which is then populated with the source requirements using an ETL (Extract, Transform, Load) algorithm. Using reasoning mechanisms offered by ontologies and matching metrics, we cleaned up our requirements warehouse. Once the warehouse is deployed, it is exploited using OLAP analysis tools. Our methodology is supported by a tool covering all design phases of the requirements warehouse.
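As a rough illustration of the pivot-model-plus-ETL idea described above (not the thesis's actual algorithm), the sketch below normalizes heterogeneous requirement records from several sources into one canonical schema. The record fields and the toy vocabulary mapping are assumptions standing in for the ontology-based alignment.

```python
# Toy stand-in for ontology alignment: map each partner's vocabulary
# variants onto one canonical modal verb.
VOCAB = {"shall": "must", "should": "must"}

def extract(sources):
    # Extract: iterate over the records of every source.
    for source in sources:
        yield from source

def transform(record):
    # Transform: lowercase and align vocabulary onto the pivot model.
    text = record["text"].lower()
    for variant, canonical in VOCAB.items():
        text = text.replace(variant, canonical)
    return {"id": record["id"], "text": text, "partner": record["partner"]}

def load(records, warehouse):
    # Load: index the cleaned records by requirement id.
    for r in records:
        warehouse[r["id"]] = r
    return warehouse

sources = [
    [{"id": "R1", "text": "The system shall log in users", "partner": "A"}],
    [{"id": "R2", "text": "The system should encrypt data", "partner": "B"}],
]
warehouse = load((transform(r) for r in extract(sources)), {})
```

After loading, both partners' requirements share one vocabulary, which is what makes cross-source OLAP-style analysis of the warehouse meaningful.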
Loger, Benoit. "Modèles d’optimisation basés sur les données pour la planification des opérations dans les Supply Chain industrielles". Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2023. http://www.theses.fr/2023IMTA0389.
With the increasing complexity of supply chains, automated decision-support tools become necessary in order to apprehend the multiple sources of uncertainty that may impact them, while maintaining a high level of performance. To meet these objectives, managers rely more and more on approaches capable of improving the resilience of supply chains by proposing robust solutions that remain valid despite uncertainty, to guarantee both quality of service and control of the costs induced by the production, storage, and transportation of goods. As data collection and analysis become central to defining the strategy of companies, properly using this information to characterize these uncertainties and their impact on operations more precisely is becoming a major challenge for optimizing modern production and distribution systems. This thesis addresses these new challenges by developing different mathematical optimization methods based on historical data, with the aim of proposing robust solutions to several supply and production planning problems. To validate the practical relevance of these new techniques, numerical experiments on various applications compare them with several classical approaches from the literature. The results demonstrate the value of these contributions, which offer comparable average performance while reducing its variability in an uncertain context. In particular, the solutions remain satisfactory when confronted with extreme scenarios whose probability of occurrence is low. Finally, the computational times of the procedures developed remain competitive, making them suitable for industrial-scale applications.
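To make the idea of data-driven robust planning concrete, here is a deliberately tiny sketch (not the thesis's models): historical demand observations stand in for an uncertainty set, and the order quantity is chosen to minimize the worst-case cost over those scenarios. The cost parameters and demand history are invented.

```python
HOLDING_COST = 1.0   # cost per unit of unsold stock (invented)
SHORTAGE_COST = 4.0  # cost per unit of unmet demand (invented)

def cost(quantity, demand):
    # Newsvendor-style cost of ordering `quantity` in one demand scenario.
    over = max(quantity - demand, 0)
    under = max(demand - quantity, 0)
    return HOLDING_COST * over + SHORTAGE_COST * under

def robust_quantity(demand_history):
    """Order quantity minimising the worst-case cost over observed scenarios."""
    candidates = range(min(demand_history), max(demand_history) + 1)
    return min(candidates, key=lambda q: max(cost(q, d) for d in demand_history))

history = [80, 95, 100, 110, 120]
q = robust_quantity(history)
```

The robust choice sits where the worst holding cost and worst shortage cost balance; replacing the inner `max` by an average would recover the non-robust, expected-cost solution, which performs better on average but worse on the low-probability extreme scenarios the abstract mentions.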
Ikken, Sonia. "Efficient placement design and storage cost saving for big data workflow in cloud datacenters". Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0020/document.
Typical cloud big data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale data-intensive applications. Data generated by such systems are huge, valuable, and stored at multiple geographical locations for reuse. Indeed, workflow systems, composed of jobs using collaborative task-based models, present new dependency and intermediate data exchange needs. This gives rise to new issues when selecting distributed data and storage resources, so that the execution of tasks or jobs is on time and resource usage is cost-efficient. Furthermore, the performance of task processing is governed by the efficiency of the intermediate data management. In this thesis we tackle the problem of intermediate data management in cloud multi-datacenters by considering the requirements of the workflow applications generating them. To this end, we design and develop models and algorithms for the big data placement problem in the underlying geo-distributed cloud infrastructure, so that the data management cost of these applications is minimized. The first problem addressed is the study of the intermediate data access behavior of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyzes spill file sequentiality through a prediction algorithm. Secondly, this thesis deals with storage cost minimization of intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters hosting the generated intermediate data dependencies of pairs of files. The proposed algorithm takes into account scientific user requirements, data dependency, and data size. Finally, a more generic problem is addressed in this thesis that involves two variants of the placement problem: splittable and unsplittable intermediate data dependencies.
The main goal is to minimize the operational data cost according to inter- and intra-job dependencies.
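The exact placement idea can be illustrated at toy scale by exhaustively searching unsplittable assignments of files to datacenters; brute force stands in for the ILP solver, and the costs, sizes, and dependency structure below are invented for illustration.

```python
from itertools import product

DATACENTERS = ["dc1", "dc2"]
STORAGE_COST = {"dc1": 2.0, "dc2": 3.0}  # per GB stored (invented)
TRANSFER_COST = 1.0                       # per GB moved between DCs (invented)
FILES = {"f1": 10, "f2": 4}               # file -> size in GB (invented)
DEPENDENCIES = [("f1", "f2")]             # file pairs exchanged between jobs

def total_cost(placement):
    # Storage cost at each file's datacenter, plus a transfer penalty
    # whenever two dependent files are placed apart.
    c = sum(STORAGE_COST[placement[f]] * size for f, size in FILES.items())
    for a, b in DEPENDENCIES:
        if placement[a] != placement[b]:
            c += TRANSFER_COST * min(FILES[a], FILES[b])
    return c

def best_placement():
    """Exact search over every unsplittable file-to-DC assignment."""
    files = list(FILES)
    best = min(
        (dict(zip(files, assignment))
         for assignment in product(DATACENTERS, repeat=len(files))),
        key=total_cost,
    )
    return best, total_cost(best)

placement, placement_cost = best_placement()
```

With these numbers, co-locating the dependent pair in the cheaper datacenter wins; at real scale the search space explodes, which is why the thesis formulates the problem as an ILP instead.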
Atigui, Faten. "Approche dirigée par les modèles pour l’implantation et la réduction d’entrepôts de données". Thesis, Toulouse 1, 2013. http://www.theses.fr/2013TOU10044/document.
Our work deals with decision-support systems based on multidimensional Data Warehouses (DW). A Data Warehouse is a huge amount of data, often historical, used for complex and sophisticated analysis. It supports the business processes within an organization. The data relevant to the decision-making process are collected from data sources by means of software processes commonly known as ETL (Extraction-Transformation-Loading) processes. The study of existing systems and methods shows two major limitations. When building a DW, the designer deals with two major issues: the first concerns the DW's design, whereas the second addresses the design of the ETL processes. Current frameworks provide partial solutions that focus either on the multidimensional structure or on the ETL processes, yet both could benefit from each other. However, few studies have considered these issues in a unified framework and provided solutions to automate all of these tasks. From its creation onward, the DW holds a large amount of data, mainly due to historical data. Looking at decision makers' analyses over time, we can see that they are usually less interested in old data. To overcome these shortcomings, this thesis aims to formalize the development of a time-varying (with a temporal dimension) DW from its design to its physical implementation. We use Model Driven Engineering (MDE), which automates the process and thus significantly reduces development costs and improves software quality. The contributions of this thesis are summarized as follows: 1. To formalize and automate the development of a time-varying DW within a model-driven approach that provides: - A set of unified (conceptual, logical and physical) metamodels that describe data and transformation operations. - An OCL (Object Constraint Language) extension that aims to conceptually formalize the transformation operations.
- A set of transformation rules that map the conceptual model to the logical and physical models. - A set of transformation rules that generate the code. 2. To formalize and automate historical data reduction within a model-driven approach that provides: - A set of (conceptual, logical and physical) metamodels that describe the reduced data. - A set of reduction operations. - A set of transformation rules that implement these operations at the physical level. In order to validate our proposals, we have developed a prototype composed of three parts. The first part performs the transformation of models to lower-level models. The second part transforms the physical model into code. The last part carries out the DW reduction.
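A minimal sketch of the model-to-code idea, in the MDE spirit described above, maps a conceptual star-schema model to relational DDL through one transformation rule per model element. The metamodel fields, naming rules, and SQL types are illustrative assumptions, not the thesis's metamodels.

```python
# Invented conceptual model: one fact with measures, plus dimensions.
CONCEPTUAL_MODEL = {
    "fact": {"name": "Sales", "measures": ["amount", "quantity"]},
    "dimensions": [
        {"name": "Time", "attributes": ["day", "month", "year"]},
        {"name": "Product", "attributes": ["sku", "category"]},
    ],
}

def to_ddl(model):
    """Transformation rules: each dimension becomes a table with a surrogate
    key; the fact becomes a table referencing every dimension."""
    statements = []
    for dim in model["dimensions"]:
        cols = [f"{dim['name'].lower()}_id INT PRIMARY KEY"]
        cols += [f"{a} VARCHAR(255)" for a in dim["attributes"]]
        statements.append(f"CREATE TABLE {dim['name']} ({', '.join(cols)});")
    fact = model["fact"]
    cols = [f"{d['name'].lower()}_id INT REFERENCES {d['name']}"
            for d in model["dimensions"]]
    cols += [f"{m} NUMERIC" for m in fact["measures"]]
    statements.append(f"CREATE TABLE {fact['name']} ({', '.join(cols)});")
    return statements

ddl = to_ddl(CONCEPTUAL_MODEL)
```

In a full MDE chain this physical-level generation would be the last of several rule sets (conceptual to logical, logical to physical, physical to code), each expressible in the same rule-per-element style.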
Mahéo, Yves. "Environnements pour la compilation dirigée par les données : supports d'exécution et expérimentations". PhD thesis, Université Rennes 1, 1995. http://tel.archives-ouvertes.fr/tel-00497580.
Maheo, Yves. "Environnement pour la compilation dirigée par les données : supports d'exécution et expérimentations". Rennes 1, 1995. http://www.theses.fr/1995REN10059.
Beneyton, Thomas. "Évolution dirigée et biopile enzymatique : étude de la laccase CotA et optimisation par évolution dirigée en microfluidique digitale". Strasbourg, 2011. https://publication-theses.unistra.fr/public/theses_doctorat/2011/BENEYTON_Thomas_2011.pdf.
Enzymatic biofuel cells have recently been developed to create miniature renewable electricity sources. However, this new technology is still limited in terms of power and lifetime compared to classical fuel cells. Although it has rarely been used so far, one strategy to improve these performances is to optimize the catalytic and stability properties of the enzymes. This PhD work describes the development of a droplet-based microfluidic platform for the directed evolution of the CotA laccase from Bacillus subtilis for enzymatic biofuel cell applications. This work demonstrates the possibility of using an extremophilic enzyme inside an enzymatic biofuel cell. The efficiency of CotA as a biocatalyst for O2 reduction has been evaluated for the first time by developing biocathodes and complete glucose/O2 biofuel cells. A droplet-based microfluidic high-throughput screening platform for CotA directed evolution has also been developed and validated. This platform allows the encapsulation of E. coli cells expressing the protein in aqueous droplets of a few picoliters, the incubation of the droplets, the addition of the substrate by picoinjection, and then the detection and sorting of CotA enzymatic activity at very high throughput (1 million clones in only 4 hours). The platform can be directly used for the screening of mutant libraries. Optimized selected mutants could lead to the creation of a new, more efficient generation of enzymatic biofuel cells. This universal droplet-based microfluidic screening platform is a very powerful tool for the directed evolution of proteins.
Ait, Brahim Amal. "Approche dirigée par les modèles pour l'implantation de bases de données massives sur des SGBD NoSQL". Thesis, Toulouse 1, 2018. http://www.theses.fr/2018TOU10025/document.
The English abstract was not provided by the author.
Conference proceedings on the topic "Optimisation dirigée par les données"
Hascoet, E., G. Valette, G. Le Toux and S. Boisramé. "Proposition d’un protocole de prise en charge implanto-portée de patients traités en oncologie tête et cou suite à une étude rétrospective au CHRU de Brest". In 66ème Congrès de la SFCO. Les Ulis, France: EDP Sciences, 2020. http://dx.doi.org/10.1051/sfco/20206602009.
Reports by organizations on the topic "Optimisation dirigée par les données"
Enria, Luisa. Ethnographie citoyenne dans la réponse aux épidémies : orientation pour l’établissement de réseaux de chercheurs. SSHAP, May 2022. http://dx.doi.org/10.19088/sshap.2022.032.