Doctoral dissertations on the topic "Exploration sous contrainte"
Browse the 16 best scientific doctoral dissertations on the topic "Exploration sous contrainte".
Dorval, Valérie. "Planification des activités chirurgicales sous contrainte de capacité". Thesis, Valenciennes, Université Polytechnique Hauts-de-France, 2019. http://www.theses.fr/2019UPHF0004.
Surgical services face difficulties in meeting demand, and patients face long waiting lists for treatment. To improve services, maximum deadlines have been set for certain types of surgery, but this adds a constraint to an already overloaded system. Moreover, surgeries are frequently cancelled for lack of beds in intensive care and on the care units, creating a bottleneck in patient flow. In this context, the objective of this thesis is to propose and validate a surgical activity planning procedure that takes into account the capacity of post-operative care units, with the aim of improving the use of hospital beds and thus increasing patient flow through the system. The thesis proposes a decision support tool that formalizes the surgical activity planning process at the tactical/operational level and accounts for the availability of hospital beds and the variability of patients' length of stay according to different factors. The tool takes into account the current functioning of the system and its surrounding context to ensure the feasibility of implementation. First, a model for predicting patients' length of stay is designed by combining a data classification method, classification and regression trees, with a method for estimating the data distribution, phase-type distributions. A validation step compares the model results with empirical data. Second, a surgical activity planning tool is developed using integer linear programming, incorporating the length-of-stay component to control hospital bed occupancy in addition to operating room occupancy. Finally, a simulator is developed and used to evaluate different strategies and criteria for scheduling activities and to take into account the inherent variability of the problem. At this point, the length-of-stay prediction model developed at the beginning of the project can be integrated.
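The interaction between operating-room time and downstream bed occupancy described in this abstract can be made concrete with a toy model. The sketch below uses entirely hypothetical numbers and exhaustive search rather than the thesis's integer linear program; it assigns surgeries to days while respecting both daily OR hours and ward-bed capacity:

```python
from itertools import product

# Hypothetical instance (not from the thesis):
# each surgery has a name, an OR duration (hours) and a post-op length of stay (days).
surgeries = [("hip", 2, 3), ("knee", 1, 2), ("cardiac", 4, 4), ("hernia", 1, 1)]
DAYS = 3       # planning horizon
OR_HOURS = 5   # operating-room capacity per day
BEDS = 5       # ward beds available each day

def feasible(assign):
    """Check OR-time and bed-occupancy constraints for a day assignment.
    A patient operated on day a occupies a bed on days a .. a+stay-1
    (occupancy beyond the horizon is ignored in this toy)."""
    for d in range(DAYS):
        if sum(s[1] for s, a in zip(surgeries, assign) if a == d) > OR_HOURS:
            return False
        occupied = sum(1 for s, a in zip(surgeries, assign) if a <= d < a + s[2])
        if occupied > BEDS:
            return False
    return True

def best_plan():
    """Enumerate all assignments; keep a feasible one finishing earliest."""
    plans = [a for a in product(range(DAYS), repeat=len(surgeries))
             if feasible(a)]
    return min(plans, key=max) if plans else None

plan = best_plan()
```

An ILP solver replaces this enumeration in practice; the point is only that bed occupancy couples decisions across days, which is what makes the planning problem hard.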
Garcelon, Evrard. "Constrained Exploration in Reinforcement Learning". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAG007.
A major application of machine learning is to provide personalized content to different users. In general, the algorithms powering these recommendations are supervised learning algorithms; that is, the data used to train them are assumed to be sampled from the same distribution. However, the data are generated through interactions between the users and the recommendation algorithms, so recommendations made to a user at time t can affect the set of relevant recommendations at a later time. It is therefore necessary to take these interactions into account. This setting is reminiscent of online learning. Among online learning algorithms, Reinforcement Learning (RL) algorithms look the most promising to replace supervised learning algorithms in applications requiring a certain degree of personalization. Deploying RL algorithms in production raises challenges such as guaranteeing a certain level of performance during exploration phases, or guaranteeing the privacy of the data collected by RL algorithms. In this thesis, we consider different constraints limiting the use of RL algorithms and provide both empirical and theoretical results on the impact of these constraints on the learning process.
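The idea of exploring only while a performance guarantee holds can be illustrated by a small conservative-bandit sketch. This is a simplification of the constrained-exploration setting, not the thesis's algorithm: the arm means, the budget rule, and the known baseline mean are all illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical two-arm setting: arm 0 is a trusted baseline policy,
# arm 1 is a new candidate whose mean reward is unknown to the learner.
TRUE_MEANS = [0.5, 0.7]
ALPHA = 0.1  # tolerated performance loss relative to the baseline

def pull(arm):
    """Bernoulli reward with the arm's true mean."""
    return 1.0 if random.random() < TRUE_MEANS[arm] else 0.0

def conservative_run(steps=2000):
    """Play the candidate arm only while cumulative reward stays above
    the conservative budget (1 - ALPHA) * baseline_mean * t; otherwise
    fall back to the safe baseline arm."""
    total = 0.0
    counts = [0, 0]
    for t in range(1, steps + 1):
        budget_ok = total >= (1 - ALPHA) * TRUE_MEANS[0] * t
        arm = 1 if budget_ok else 0
        total += pull(arm)
        counts[arm] += 1
    return total, counts

total, counts = conservative_run()
```

The constraint shapes exploration: early on, the learner is forced onto the baseline until enough reward has accumulated to "pay" for trying the unknown arm.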
Aklil, Nassim. "Apprentissage actif sous contrainte de budget en robotique et en neurosciences computationnelles. Localisation robotique et modélisation comportementale en environnement non stationnaire". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066225/document.
Decision-making is a highly researched field, be it in neuroscience to understand the processes underlying animal decision-making, or in robotics to model efficient and rapid decision-making processes in real environments. In neuroscience, this problem is addressed online with sequential decision-making models based on reinforcement learning. In robotics, the primary objective is efficiency, in order to be deployable in real environments. However, what can be called the budget, meaning the limitations inherent to the hardware, such as computation time, the limited actions available to the robot, or the lifetime of the robot's battery, is currently often not taken into account. In this thesis we propose to introduce the notion of budget as an explicit constraint in robotic learning processes applied to a localization task, by implementing a model based on work developed in statistical learning that processes data under explicit constraints, limiting the input of data or imposing a more explicit time constraint. In order to discuss an online operation of this type of budgeted learning algorithm, we also discuss possible inspirations from computational neuroscience. In this context, the alternation between retrieving information for localization and deciding to move may be indirectly linked to the exploration-exploitation trade-off. We present our contribution to the modeling of this trade-off in animals in a non-stationary task involving different levels of uncertainty, and we make the link with multi-armed bandit methods.
Leleu, Marion. "Extraction de motifs séquentiels sous contraintes dans des données contenant des répétitions consécutives". Lyon, INSA, 2004. http://theses.insa-lyon.fr/publication/2004ISAL0001/these.pdf.
This PhD thesis concerns a particular data mining field, the extraction of sequential patterns from event sequence databases (e.g. customer transaction sequences, web logs, DNA). Among existing algorithms, those based on an in-memory representation of pattern locations (called occurrence lists) lose efficiency when the sequences contain consecutive repetitions. This thesis proposes efficient solutions to sequential pattern extraction in such a context (constraints and repetitions), based on a condensation of the information contained in the occurrence lists, without any loss for the extraction process. This new representation leads to new sequential pattern extraction algorithms (GoSpade and GoSpec) particularly well adapted to the presence of consecutive repetitions in the datasets. These algorithms have been proved sound and complete, and experiments on both real and synthetic datasets show that the gains in memory space and execution time are significant and increase with the number of consecutive repetitions in the datasets. Finally, a financial application was carried out to build a condensed representation of market trends by means of frequent sequential patterns.
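The condensation of occurrence lists under consecutive repetitions is, at its core, a run-length idea: adjacent positions collapse into a single interval. The following minimal sketch illustrates that principle only; it is not the actual GoSpade data structure.

```python
def condense(positions):
    """Collapse consecutive positions into (start, length) runs,
    mimicking the condensation of an occurrence list when an event
    repeats consecutively in a sequence."""
    runs = []
    for p in sorted(positions):
        if runs and p == runs[-1][0] + runs[-1][1]:
            # p extends the current run of consecutive positions
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((p, 1))
    return runs

# An event occurring at positions 2,3,4,5 and 9,10 is stored as two runs:
assert condense([2, 3, 4, 5, 9, 10]) == [(2, 4), (9, 2)]
```

The memory saving grows with the number of consecutive repetitions, which matches the experimental trend the abstract reports.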
Leleu, Marion, and Jean-François Boulicaut. "Extraction de motifs séquentiels sous contraintes dans des données contenant des répétitions consécutives". Villeurbanne : Doc'INSA, 2005. http://docinsa.insa-lyon.fr/these/pont.php?id=leleu.
Prost-Boucle, Adrien. "Génération rapide d'accélerateurs matériels par synthèse d'architecture sous contraintes de ressources". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT039/document.
In the field of high-performance computing, FPGA circuits are very attractive for their performance and low power consumption. However, their presence remains marginal, mainly because of the limitations of current development tools. These limitations force users to have expert knowledge of numerous technical concepts. They also have to steer the synthesis process manually in order to obtain solutions that are both fast and satisfy the hardware constraints of the targeted platforms. A novel generation methodology based on high-level synthesis is proposed to push back these limits. The design space exploration consists in iteratively applying transformations to an initial circuit, progressively increasing its speed and its resource consumption. The rapidity of this process, along with its convergence under resource constraints, is thus guaranteed. The exploration is also guided towards the most relevant solutions by detecting the most critical sections of the applications to synthesize for the targeted execution context. This information can be refined with an execution scenario specified by the user. A demonstration tool for this methodology, AUGH, has been built. Experiments have been conducted with several applications well known in the field of high-level synthesis. With very different sizes, these applications confirm the pertinence of the proposed methodology for the fast and automatic generation of complex hardware accelerators under strict resource constraints. The proposed methodology is very close to the compilation process for microprocessors, which enables it to be used even by users who are not experts in digital circuit design. This work constitutes significant progress towards a broader adoption of FPGAs as general-purpose hardware accelerators, making computing machines both faster and more energy-efficient.
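The exploration loop the abstract describes, applying transformations that each trade resources for speed until the budget is reached, can be sketched as a greedy selection. The transformation names, speedup factors, and LUT costs below are invented for illustration; AUGH's actual selection heuristics are more elaborate.

```python
def explore(budget, transforms):
    """Greedy design-space exploration: repeatedly apply the transformation
    with the best speedup-per-resource ratio that still fits the budget.
    Resource use only grows, so the loop is guaranteed to terminate."""
    used, speedup, applied = 0, 1.0, []
    candidates = list(transforms)
    while True:
        fitting = [t for t in candidates if used + t[2] <= budget]
        if not fitting:
            return speedup, used, applied
        best = max(fitting, key=lambda t: t[1] / t[2])
        name, gain, cost = best
        used += cost
        speedup *= gain
        applied.append(name)
        candidates.remove(best)

# Hypothetical transformations: (name, speedup factor, LUT cost)
ts = [("unroll_x2", 1.8, 400), ("pipeline", 1.5, 150), ("inline", 1.2, 50)]
```

Because each step strictly consumes resources, convergence under the resource constraint is guaranteed by construction, which is the property the abstract emphasizes.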
Prost-Boucle, A. "Génération rapide d'accélérateurs matériels par synthèse d'architecture sous contraintes de ressources". Phd thesis, Université de Grenoble, 2014. http://tel.archives-ouvertes.fr/tel-01071661.
Turbatte, Hervé-Claude. "Conception d'architectures d'instrumentation sous contraintes d'observabilité et de fiabilité : application à des plates-formes pétrolières". Vandoeuvre-les-Nancy, INPL, 1992. http://www.theses.fr/1992INPL087N.
For a given process, our aim is, on the one hand, to assess the availability of the information necessary for process control and, on the other hand, to define the sensor locations so that the variables necessary for control remain observable when one or more sensors fail, subject to reliability, location and cost constraints. Two approaches to estimating the reliability of a measurement system are presented. In the first, the reliability is simply calculated by searching for the sensor breakdown configurations for which the system remains observable. In the second, we calculate the unreliability function using the fault tree method. To find the most reliable measurement system, four methods are presented. These methods are based on comparing the MTTF of different measurement systems, on the structural analysis of the process graph, on the analysis of the reliability evaluation over time, or on the study of cycles in the process graph. The proposed methods are successfully applied to an offshore platform of the French petroleum company Elf Aquitaine.
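The first approach, enumerating sensor-failure configurations that keep the system observable, can be sketched directly. The sensor/variable coverage map and the failure probability below are hypothetical, and "observable" is reduced to "every variable keeps at least one working sensor", a simplification of the structural observability analysis the thesis performs.

```python
from itertools import combinations

# Hypothetical setup: each variable is measurable through a set of sensors.
coverage = {"flow": {"s1", "s2"}, "pressure": {"s2", "s3"}, "level": {"s3"}}
sensors = sorted({s for group in coverage.values() for s in group})

def observable(working):
    """System is observable if every variable keeps a working sensor."""
    return all(group & working for group in coverage.values())

def reliability(p_sensor):
    """Sum the probability of every sensor configuration that leaves the
    system observable, assuming independent sensors that each work with
    probability p_sensor."""
    total = 0.0
    for k in range(len(sensors) + 1):
        for alive in combinations(sensors, k):
            if observable(set(alive)):
                total += p_sensor ** k * (1 - p_sensor) ** (len(sensors) - k)
    return total
```

In this toy instance the level variable depends on a single sensor, so that sensor dominates the system reliability, exactly the kind of weakness such an analysis is meant to expose.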
Soulet, Arnaud. "Un cadre générique de découverte de motifs sous contraintes fondées sur des primitives". Phd thesis, Université de Caen, 2006. http://tel.archives-ouvertes.fr/tel-00123185.
…knowledge discovery in databases. This thesis deals with the extraction of local patterns under constraints. We shed new light on the problem with a framework combining monotone primitives to define arbitrary constraints. The variety of these constraints expresses precisely the archetype of the patterns the user is looking for within a database. We then propose two types of automatic and generic extraction approaches, despite the algorithmic difficulties inherent in this task. Their efficiency relies mainly on the use of necessary conditions to approximate the variations of the constraint. On the one hand, relaxation methods allow the reuse of the many standard algorithms of the field. On the other hand, we design direct extraction methods dedicated to itemset patterns for large or correlated data by exploiting equivalence classes. Finally, the use of our methods has enabled the discovery of local phenomena in industrial and medical applications.
Merabet, Massinissa. "Solutions optimales des problèmes de recouvrement sous contraintes sur le degré des nœuds". Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20138/document.
The work conducted in this thesis focuses on minimum spanning problems in graphs under constraints on vertex degrees. As a spanning tree covers the vertices of a connected graph with a minimum number of links, it is generally proposed as a solution for this kind of problem. However, for some applications, such as routing in optical networks, the solution is not necessarily a sub-graph. In this thesis, we assume that the degree constraints are due to a limited instantaneous capacity of the vertices and that the only relevant requirement on the spanning structure is its connectivity. In that case, the solution may differ from a tree. We propose a reformulation of this kind of spanning problem. To find the optimal coverage of the vertices, an extension of the tree concept called a hierarchy is proposed. Our main purpose is to show its interest compared with the tree in terms of feasibility and cost of the coverage. We consider two types of degree constraints: either an upper bound on the degree of the vertices, or an upper bound on the number of branching vertices. In both cases we search for a minimum-cost spanning hierarchy. We also illustrate the applicability of hierarchies by studying a problem that better reflects the reality of optical routing. For all these NP-hard problems, we show the interest of the spanning hierarchy for both the cost of optimal solutions and the performance guarantees of approximate solutions. These results are confirmed by several experiments on random graphs.
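The effect of a degree bound on spanning-tree feasibility, which motivates the hierarchy concept, can be seen on a toy instance. The brute-force check below is only an illustration of why degree constraints can make the tree infeasible; the thesis's hierarchy algorithms are not reproduced here.

```python
from itertools import combinations

def spanning_tree_exists(n, edges, max_deg):
    """Brute-force test: is there a spanning tree on n vertices whose
    vertex degrees all stay within max_deg? (toy instances only)"""
    for subset in combinations(edges, n - 1):
        deg = [0] * n
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x
        acyclic = True
        for u, v in subset:
            deg[u] += 1
            deg[v] += 1
            ru, rv = find(u), find(v)
            if ru == rv:          # cycle: not a tree
                acyclic = False
                break
            parent[ru] = rv
        if acyclic and max(deg) <= max_deg:
            return True
    return False

# In a star on 4 vertices every spanning tree must use all three edges,
# so a degree bound of 2 on the center makes the tree problem infeasible;
# a hierarchy, which may visit a vertex several times, can relax this.
star = [(0, 1), (0, 2), (0, 3)]
```

This is the feasibility gap the abstract refers to: once the tree becomes infeasible under the degree bound, a spanning hierarchy may still exist.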
Lasbouygues, Adrien. "Exploration robotique de l’environnement aquatique : les modèles au coeur du contrôle". Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS078/document.
Underwater robots can nowadays operate in complex environments in a broad range of missions where the use of human divers is difficult for cost or safety reasons. However, the complexity of aquatic environments requires giving the robotic vector sufficient autonomy to perform its mission while preserving its integrity. This requires designing control laws according to application requirements. They are built on knowledge from several scientific fields, underlining the interdisciplinarity inherent to robotics. Once a control law is designed, it must be implemented as control software running on a real-time software architecture. Nonetheless, the current conception of control laws as "monolithic" blocks makes it difficult to adapt a control law from one application to another and to integrate knowledge from various scientific fields that control engineers often do not fully master. It also penalizes the implementation of control on software architectures, or at least its modularity and evolution. To solve these problems, we seek a proper separation of knowledge so that each knowledge item can be easily used and its role precisely defined, and we want to reify the interactions between them. Moreover, this allows a more efficient projection onto the software architecture. We thus propose a new formalism describing control laws as a modular composition of basic entities named Atoms, used to encapsulate the knowledge items. We also aim at building a better synergy between control and software engineering based on shared concerns such as temporal constraints and stability. Hence, we extend the definition of our Atoms with constraints carrying information related to their temporal behaviour. We also propose a methodology relying on our formalism to guide the implementation of control on a real-time middleware, focusing on the ContrACT middleware developed at LIRMM. Finally, we illustrate our approach on several robotic functionalities that can be used during the exploration of aquatic environments, especially for wall avoidance during the exploration of a karst aquifer.
Ouali, Abdelkader. "Méthodes hybrides parallèles pour la résolution de problèmes d'optimisation combinatoire : application au clustering sous contraintes". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC215/document.
Combinatorial optimization problems have become the target of much scientific research, given their importance in solving academic problems and real problems encountered in engineering and industry. Solving these problems with exact methods is often intractable because of the exorbitant processing time these methods would require to reach the optimal solution(s). In this thesis, we were interested in both the algorithmic context and the modeling context of solving combinatorial problems. At the algorithmic level, we explored hybrid methods, which excel in their ability to make exact and approximate methods cooperate in order to rapidly produce solutions of the best quality. At the modeling level, we worked on the specification and exact resolution of complex problems in pattern set mining, in particular by studying scaling issues in large databases. On the one hand, we proposed a first parallelization of the DGVNS algorithm, called CPDGVNS, which explores the clusters of the tree decomposition in parallel, sharing the best overall solution in a master-worker model. Two other strategies, called RADGVNS and RSDGVNS, were then proposed, which improve the frequency of exchanging intermediate solutions between the different processes. Experiments carried out on difficult combinatorial problems show the effectiveness of our parallel methods. On the other hand, we proposed a hybrid approach combining techniques from both Integer Linear Programming (ILP) and pattern mining. Our approach is comprehensive and takes advantage of the general ILP framework (providing a high level of flexibility and expressiveness) and of specialized heuristics for data mining (improving computing time). In addition to the general framework for pattern set mining, two problems were studied: conceptual clustering and the tiling problem. The experiments carried out show the contribution of our proposal compared with constraint-based approaches and specialized heuristics.
Jacquemont, Stéphanie. "Contributions de l'inférence grammaticale à la fouille de données séquentielles". Phd thesis, Université Jean Monnet - Saint-Etienne, 2008. http://tel.archives-ouvertes.fr/tel-00366358.
In this context, we showed that the raw exploitation not only of the original sequences but also of the probabilistic automata inferred from them does not necessarily guarantee the extraction of relevant knowledge. In this thesis we bring several contributions, in the form of minimal bounds and statistical constraints, to ensure a fruitful exploitation of sequences and probabilistic automata. Moreover, thanks to our model, we provide an effective solution to certain applications involving problems of preserving individuals' privacy.
Blanchard, Julien. "Un système de visualisation pour l'extraction, l'évaluation, et l'exploration interactives des règles d'association". Phd thesis, Université de Nantes, 2005. http://tel.archives-ouvertes.fr/tel-00421413.
…knowledge representation in cognitive science. In data mining, the main rule-based technique is association rule extraction, which has given rise to much research.
The major limitation of association rule extraction algorithms is that they commonly produce large quantities of rules, many of which turn out to be of no interest to the user. This is explained by the unsupervised nature of these algorithms: considering no endogenous variable, they consider all possible combinations of variables in the rules. In practice, the user cannot exploit the results directly as they come out of the algorithms. A post-processing step consisting of a second mining operation proves indispensable to validate the volumes of rules and discover useful knowledge. However, while data mining is performed automatically by combinatorial algorithms, rule mining is a laborious task left to the user.
The thesis develops two approaches to assist the user in the post-processing of association rules:
– measuring rule quality with numerical indices,
– supervising the post-processing with interactive visualization.
Regarding the first approach, we formalize the notion of rule quality index and carry out a novel classification of the many indices in the literature, helping the user choose the indices relevant to their needs. We also present three new indices with original properties: the probabilistic index of deviation from equilibrium, entropic implication intensity, and the informational rate. Regarding the second approach, we propose a visualization methodology for the interactive exploration of rules. It is designed to ease the task of a user faced with large sets of rules by taking into account their information-processing capacities. In this methodology, the user drives knowledge discovery through suitable navigation operators while visualizing successive sets of rules described by quality indices.
Both approaches are integrated into the visualization tool ARVis (Association Rule Visualization) for the interactive exploration of association rules. ARVis implements our methodology by means of a 3D representation, novel in rule visualization, that highlights the quality indices. Moreover, ARVis relies on a specific constraint-based extraction algorithm that generates rules interactively as the user navigates. Thus, by exploring the rules, the user drives both the extraction and the post-processing of knowledge.
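The index-based evaluation of association rules can be illustrated with the classic support/confidence/lift trio. The thesis's three new indices are more elaborate and are not implemented here; the transactions below are toy data.

```python
def rule_indices(transactions, antecedent, consequent):
    """Compute classic quality indices for the rule antecedent -> consequent
    over a list of transactions (each transaction is a set of items)."""
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)
    ab = sum(1 for t in transactions if antecedent | consequent <= t)
    b = sum(1 for t in transactions if consequent <= t)
    support = ab / n
    confidence = ab / a if a else 0.0
    lift = confidence / (b / n) if b else 0.0
    return support, confidence, lift

# Toy market-basket data
data = [{"bread", "butter"}, {"bread", "jam"},
        {"bread", "butter", "jam"}, {"jam"}]
s, c, l = rule_indices(data, {"bread"}, {"butter"})
```

Ranking rules by such indices, and letting the user navigate the ranked sets visually, is exactly the post-processing workload that a tool like ARVis is designed to support.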
Fiot, Céline. "Extraction de séquences fréquentes : des données numériques aux valeurs manquantes". Phd thesis, Montpellier 2, 2007. http://www.theses.fr/2007MON20056.
Fiot, Céline. "Extraction de séquences fréquentes : des données numériques aux valeurs manquantes". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00179506.