Doctoral dissertations on the topic "Infrastructures à large échelle"
Consult the 50 best doctoral dissertations on the topic "Infrastructures à large échelle".
Quesnel, Flavien. "Vers une gestion coopérative des infrastructures virtualisées à large échelle : le cas de l'ordonnancement". PhD thesis, Ecole des Mines de Nantes, 2013. http://tel.archives-ouvertes.fr/tel-00821103.
Rais, Issam. "Discover, model and combine energy leverages for large scale energy efficient infrastructures". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN051/document.
Energy consumption is a growing concern on the verge of Exascale computing: a machine performing 10^18 operations per second, ten times more than the best current public supercomputers. Data centers consume about 7% of total electricity demand and are responsible for 2% of global carbon emissions. With the multiplication of connected devices per person around the world, reducing the energy consumption of large-scale computing systems is a mandatory step toward building a sustainable digital society. Several techniques, which we call leverages, have been developed to lower the electrical consumption of computing facilities, at multiple levels: infrastructure, hardware, middleware, and application. Using these leverages is mandatory for better energy efficiency, and many are available in large-scale computing centers. In spite of their potential gains, users and administrators do not fully use them, or do not use them at all; moreover, using these techniques, alone or combined, can be complicated and counterproductive if not applied wisely. This thesis defines and investigates the discovery, understanding and smart usage of the leverages available in a large-scale data center or supercomputer. We first study and characterize various individual leverages; we then combine them and propose a generic solution for the dynamic usage of combined leverages.
Moise, Diana Maria. "Optimizing data management for MapReduce applications on large-scale distributed infrastructures". Thesis, Cachan, Ecole normale supérieure, 2011. http://www.theses.fr/2011DENS0067/document.
Data-intensive applications are nowadays widely used in various domains to extract and process information, to design complex systems, to perform simulations of real models, etc. These applications exhibit challenging requirements in terms of both storage and computation. Specialized abstractions like Google's MapReduce were developed to efficiently manage the workloads of data-intensive applications. The MapReduce abstraction has revolutionized the data-intensive community and has rapidly spread to various research and production areas. An open-source implementation of Google's abstraction was provided by Yahoo! through the Hadoop project. This framework is considered the reference MapReduce implementation and is currently heavily used for various purposes and on several infrastructures. To achieve high-performance MapReduce processing, we propose a concurrency-optimized file system for MapReduce frameworks. As a starting point, we rely on BlobSeer, a framework designed to efficiently store data generated by data-intensive applications running at large scale. We have built the BlobSeer File System (BSFS), with the goal of providing high throughput under heavy concurrency to MapReduce applications. We also study several aspects of intermediate data management in MapReduce frameworks, investigating the requirements of MapReduce intermediate data at two levels: inside the same job, and during the execution of pipelined applications. Finally, we show how BSFS can enable extensions to the de facto MapReduce implementation, Hadoop, such as support for the append operation. This work also comprises the evaluation and the results obtained in the context of grid and cloud environments.
Pastor, Jonathan. "Contributions à la mise en place d'une infrastructure de Cloud Computing à large échelle". Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0240/document.
The continuous increase of computing power needs has favored the triumph of the Cloud Computing model. Customers asking for computing power are supplied via the Internet by resources hosted by providers of Cloud Computing infrastructures. To achieve economies of scale, Cloud Computing infrastructures are increasingly large and concentrated in a few attractive places, leading to problems such as energy supply, fault tolerance, and the fact that these infrastructures are far from most of their end users. During this thesis we studied the implementation of a fully distributed and decentralized IaaS system operating a network of micro data centers deployed in the Internet backbone, using a modified version of OpenStack that leverages non-relational databases. A prototype was experimentally validated on Grid'5000, showing interesting results, yet limited by the fact that OpenStack does not take advantage of geographically distributed operation. We therefore focused on adding support for network locality, to improve the performance of Cloud Computing services by favoring collaborations between close nodes. A prototype of the DVMS algorithm, working with an unstructured topology based on the Vivaldi algorithm, was validated on Grid'5000; it won first prize at the large-scale challenge of the Grid'5000 spring school in 2014. Finally, the work on DVMS enabled us to participate in the development of the VMPlaceS simulator.
Esteves, José Jurandir Alves. "Optimization of network slice placement in distributed large-scale infrastructures : from heuristics to controlled deep reinforcement learning". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS325.
This PhD thesis investigates how to optimize Network Slice Placement in distributed large-scale infrastructures, focusing on online heuristics and Deep Reinforcement Learning (DRL) based approaches. First, we rely on Integer Linear Programming (ILP) to propose a data model enabling on-Edge and on-Network Slice Placement. Contrary to most studies related to placement in the NFV context, the proposed ILP model considers complex Network Slice topologies and pays special attention to the geographic location of Network Slice users and its impact on End-to-End (E2E) latency. Extensive numerical experiments show the relevance of taking user location constraints into account. Then, we rely on an approach called the "Power of Two Choices" (P2C) to propose an online heuristic algorithm for the problem, adapted to support placement on large-scale distributed infrastructures while integrating Edge-specific constraints. The evaluation results show the good performance of the heuristic, which solves the problem in a few seconds under a large-scale scenario. The heuristic also improves the acceptance ratio of Network Slice Placement Requests when compared against a deterministic online ILP-based solution. Finally, we investigate the use of ML methods, more specifically DRL, for increasing the scalability and automation of Network Slice Placement, considering a multi-objective optimization approach to the problem. We first propose a DRL algorithm for Network Slice Placement which relies on the Advantage Actor Critic algorithm for fast learning and on Graph Convolutional Networks for feature extraction. Then, we propose an approach we call Heuristically Assisted Deep Reinforcement Learning (HA-DRL), which uses heuristics to control the learning and execution of the DRL agent. We evaluate this solution through simulations under stationary, cycle-stationary and non-stationary network load conditions. The evaluation results show that heuristic control is an efficient way of speeding up the learning process of DRL; it achieves a substantial gain in resource utilization, reduces performance degradation, and is more reliable under unpredictable changes in network load than non-controlled DRL algorithms.
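The "Power of Two Choices" principle the thesis builds on is generic enough to sketch in a few lines: sample two candidate hosts at random and place the request on the less loaded one. The snippet below is purely illustrative; the host list, unit demands and load metric are assumptions, not the thesis's actual placement model:

```python
import random

def p2c_place(loads, demand, rng=random):
    """Power-of-two-choices placement: sample two distinct hosts at
    random and place the demand on the less loaded of the two."""
    a, b = rng.sample(range(len(loads)), 2)
    best = a if loads[a] <= loads[b] else b
    loads[best] += demand
    return best

# Toy run: 10 hosts, 1000 unit-demand requests.
loads = [0.0] * 10
for _ in range(1000):
    p2c_place(loads, 1.0)
print(max(loads) - min(loads))  # spread stays small versus purely random placement
```

Even this toy version exhibits the well-known load-balancing effect: comparing just two random candidates keeps the maximum load close to the average, at constant cost per request.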
Tsafack, Chetsa Ghislain Landry. "Profilage système et leviers verts pour les infrastructures distribuées à grande échelle". PhD thesis, Ecole normale supérieure de Lyon - ENS LYON, 2013. http://tel.archives-ouvertes.fr/tel-00925320.
Capizzi, Sirio <1980>. "A tuple space implementation for large-scale infrastructures". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/914/1/Tesi_Capizzi_Sirio.pdf.
Quinson, Martin. "Méthodologies d'expérimentation pour l'informatique distribuée à large échelle". Habilitation à diriger des recherches, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00927316.
Griesner, Jean-Benoit. "Systèmes de recommandation de POI à large échelle". Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0037.
The task of point-of-interest (POI) recommendation has become an essential feature in location-based social networks. However, it remains a challenging problem because of the specific constraints of these networks. In this thesis I investigate new approaches to solve the personalized POI recommendation problem. Three main contributions are proposed in this work. The first contribution is a new matrix factorization model that integrates geographical and temporal influences, based on a specific processing of geographical data. The second contribution is a novel solution to the implicit feedback problem, which corresponds to the difficulty of distinguishing, among unvisited POIs, the actually "unknown" ones from the "negative" ones. Finally, the third contribution of this thesis is a new method to generate recommendations over large-scale datasets. In this approach I propose to combine a new geographical clustering algorithm with users' implicit social influences in order to define local and global mobility scales.
Ludinard, Romaric. "Caractérisation locale de fautes dans les systèmes large échelle". Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S065/document.
The Internet is a global system of interconnected computer networks that carries lots of services consumed by users. Unfortunately, each element of this system may exhibit failures. A failure can be perceived by a varying range of users, depending on the location of the failure source. This thesis proposes a set of contributions that aims at determining, from a user's perception, whether a failure is perceived by a small number of users (isolated failure) or, in contrast, by many of them (massive failure). We formalize failures with respect to their impact on the services consumed by users. We show that it is impossible to determine with certainty, from the user's point of view, whether a user perceives a local or a massive failure. Nevertheless, it is possible to determine for each user whether it perceives a local failure, a massive one, or whether no determination can be made. This characterization is optimal and can be run in parallel. We then propose a self-organizing architecture for fault characterization: entities of the system organize themselves in a two-layered overlay that gathers together entities with similar perceptions, which allows us to successfully apply our characterization. Finally, a probabilistic evaluation of the resilience of this architecture to dynamism and malicious behavior is performed.
Vouzoukidou, Despoina. "Evaluation de requêtes top-k continues à large-échelle". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066659/document.
In this thesis, we are interested in efficient evaluation techniques for continuous top-k queries over text and feedback streams, featuring generalized scoring functions that capture dynamic ranking aspects. As a first contribution, we generalize state-of-the-art continuous top-k query models by introducing a general family of non-homogeneous scoring functions combining query-independent item importance, query-dependent content relevance, and continuous score decay reflecting information freshness. Our second contribution consists in the definition and implementation of efficient in-memory data structures for indexing and evaluating this new family of continuous top-k queries. Our experiments show that our solution is scalable and outperforms existing state-of-the-art solutions when restricted to homogeneous functions. Going a step further, in the second part of this thesis we consider the problem of incorporating dynamic feedback signals into the original scoring function and propose a new general real-time query evaluation framework, with a family of new algorithms for efficiently processing continuous top-k queries with dynamic feedback scores in a real-time web context. Finally, putting together the outcomes of these works, we present MeowsReader, a real-time news ranking and filtering prototype which illustrates how a general class of continuous top-k queries offers a suitable abstraction for modelling and implementing continuous online information filtering applications combining keyword search and real-time web activity.
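As a concrete illustration of a non-homogeneous scoring function of the kind described above, the toy snippet below combines a query-independent importance term with a content-relevance term damped by exponential freshness decay, and ranks items with a heap. The item tuples, weights and half-life are assumptions made for the example, not the scoring model actually defined in the thesis:

```python
import heapq

def score(importance, relevance, age, half_life=3600.0):
    """Non-homogeneous score: static item importance plus content
    relevance attenuated by an exponential freshness decay."""
    decay = 0.5 ** (age / half_life)  # halves every `half_life` seconds
    return importance + relevance * decay

def top_k(items, now, k=3):
    # items: tuples (importance, relevance, publication_time, id)
    return heapq.nlargest(
        k, items, key=lambda it: score(it[0], it[1], now - it[2]))

now = 10_000.0
items = [
    (0.2, 0.9, now - 60,   "fresh-news"),    # recent, highly relevant
    (0.8, 0.1, now - 60,   "major-source"),  # important, weakly relevant
    (0.2, 0.9, now - 7200, "stale-news"),    # relevant but two hours old
]
print([it[3] for it in top_k(items, now)])
```

The decay term is what makes the query continuous: the same item set reordered at a later `now` would demote stale items without any re-indexing.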
Gueye, Modou. "Gestion de données de recommandation à très large échelle". Electronic Thesis or Diss., Paris, ENST, 2014. http://www.theses.fr/2014ENST0083.
In this thesis, we address the scalability problem of recommender systems and propose accurate and scalable algorithms. We first consider the case of matrix factorization techniques in a dynamic context, where new ratings are continuously produced. In such a case, it is not possible to have an up-to-date model, due to the incompressible time needed to compute it; this holds even if a distributed technique is used for matrix factorization, since at least the ratings produced during the model computation will be missing. Our solution reduces the loss of recommendation quality over time by introducing stable biases which track deviations in users' behavior. These biases are continuously updated with the new ratings, in order to maintain the quality of recommendations at a high level for a longer time. We also consider the context of online social networks and tag recommendation. We propose an algorithm that takes into account the popularity of tags and the opinions of a user's neighborhood. Unlike common nearest-neighbor approaches, our algorithm does not rely on a fixed number of neighbors when computing a recommendation: we use a heuristic that bounds the network traversal in a way that allows faster computation of the recommendations while preserving their quality. Finally, we propose a novel approach that improves the accuracy of recommendations for top-k algorithms. Instead of a fixed list size, we adjust the number of items to recommend in a way that optimizes the likelihood that all the recommended items will be chosen by the user, and find the best candidate sub-list to recommend.
Gattoni, Gaia. "Analysis of the infrastructures to build immersive visit at large scale". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.
Tsafack, Chetsa Ghislain Landry. "System Profiling and Green Capabilities for Large Scale and Distributed Infrastructures". PhD thesis, Ecole normale supérieure de Lyon - ENS LYON, 2013. http://tel.archives-ouvertes.fr/tel-00946583.
Pełny tekst źródłaKeriven, Nicolas. "Apprentissage de modèles de mélange à large échelle par Sketching". Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S055/document.
Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. Furthermore, new challenges arise from modern database architectures, such as the requirement for learning methods to be amenable to streaming, parallel and distributed computing. In this context, an increasingly popular approach is to first compress the database into a representation called a linear sketch, which satisfies all the mentioned requirements, then learn the desired information using only this sketch; this can be significantly faster than using the full data if the sketch is small. In this thesis, we introduce a generic methodology to fit a mixture of probability distributions on the data using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, namely kernel mean embeddings and Random Features expansions. It is seen to correspond to linear measurements of the underlying probability distribution of the data, and the estimation problem is thus analyzed under the lens of Compressive Sensing (CS), in which a (traditionally finite-dimensional) signal is randomly measured and recovered. We extend CS results to our infinite-dimensional framework, give generic conditions for successful estimation, and apply this analysis to many problems, with a focus on mixture model estimation. We base our method on the construction of random sketching operators such that a Restricted Isometry Property (RIP) condition holds in the Banach space of finite signed measures with high probability. In a second part we introduce a flexible greedy heuristic algorithm to estimate mixture models from a sketch. We apply it on synthetic and real data for three problems: the estimation of centroids from a sketch, for which it is seen to be significantly faster than k-means; Gaussian Mixture Model estimation, for which it is more efficient than Expectation-Maximization; and the estimation of mixtures of multivariate stable distributions, for which, to our knowledge, it is the only algorithm capable of performing such a task.
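The sketch construction described above, a kernel mean embedding approximated by Random Features, can be illustrated in a few lines of NumPy: the sketch of a dataset is the empirical mean of random Fourier features, and datasets drawn from similar distributions yield nearby sketches. The sketch size, frequency distribution and test data below are arbitrary choices for the example, not the settings of the thesis:

```python
import numpy as np

def sketch(X, W, b):
    """Linear sketch of the empirical distribution of X: the mean of
    random Fourier features cos(Wx + b), i.e. an empirical kernel
    mean embedding under the Gaussian kernel."""
    return np.cos(X @ W.T + b).mean(axis=0)

rng = np.random.default_rng(0)
d, m = 2, 64                              # data dimension, sketch size
W = rng.normal(size=(m, d))               # random frequencies ~ N(0, I)
b = rng.uniform(0, 2 * np.pi, size=m)     # random phases

X1 = rng.normal(0, 1, size=(5000, d))     # two samples of the same distribution
X2 = rng.normal(0, 1, size=(5000, d))
X3 = rng.normal(3, 1, size=(5000, d))     # a shifted distribution
same = np.linalg.norm(sketch(X1, W, b) - sketch(X2, W, b))
diff = np.linalg.norm(sketch(X1, W, b) - sketch(X3, W, b))
print(same < diff)  # matching distributions give much closer sketches
```

Note that each dataset of 5,000 points is compressed to 64 numbers, and the sketches of several data chunks can be averaged, which is what makes the approach streamable and distributable.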
Reis, Valentin. "Apprentissage pour le contrôle de plateformes parallèles à large échelle". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM045/document.
Providing the computational infrastructure needed to solve complex problems arising in modern society is a strategic challenge. Organisations usually address this problem by building extreme-scale parallel and distributed platforms. High Performance Computing (HPC) vendors race for more computing power and storage capacity, leading to sophisticated specific Petascale platforms, soon to be Exascale platforms. These systems are centrally managed using dedicated software solutions called Resource and Job Management Systems (RJMS). A crucial problem addressed by this software layer is the job scheduling problem, where the RJMS chooses when and on which resources computational tasks will be executed. This manuscript provides ways to address this scheduling problem. No two platforms are identical: the infrastructure, user behavior and the organization's goals all change from one system to the other. We therefore argue that scheduling policies should adapt to the system's behavior. In this manuscript, we provide multiple ways to achieve this adaptivity. Through an experimental approach, we study various tradeoffs between the complexity of the approach, the potential gain, and the risks taken.
Sellami, Sana. "Méthodologie de matching à large échelle pour des schémas XML". Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0088/these.pdf.
Nowadays, Information Technology domains (semantic web, deep web, e-business, digital libraries, life science, biology, etc.) abound with a large variety of database schemas, XML schemas and ontologies, stored in many heterogeneous databases and information sources. In e-business applications, for example, one commonly observes schemas with several thousand elements, expressed in different formats. This raises a hard problem: solving semantic heterogeneity in the large and integrating such heterogeneous collections of schemas and ontologies. Matching techniques are solutions to automatically find correspondences between these schemas/ontologies in order to allow their integration in information systems. More precisely, matching is an operation that takes schemas as input (e.g. XML schemas, ontologies, relational database schemas) and returns the semantic similarity values of their elements. Even if matching has found considerable interest in both research and practice "in the small", it still represents a laborious process "in the large": standard approaches that try to match the complete input schemas often suffer degraded performance. Various schema matching systems have been developed to solve the problem semi-automatically, and since schema matching is a semi-automatic task, efficient implementations are required to support interactive user feedback. In this context, scalable matching becomes a hard problem to solve. A number of approaches and principles have been developed for matching small or medium schemas and ontologies (50-100 components), whereas in practice, real-world schemas and ontologies are voluminous (hundreds or thousands of components). Consequently, matching algorithms face more complicated contexts, and many problems appear: performance decreases when the matching algorithms deal with large schemas/ontologies, their complexity becomes exponential, human effort increases, and poor quality of matching results is observed. A major challenge that is still largely to be tackled is thus to scale up semantic matching along two facets: a large number of schemas to be aligned or matched, and very large schemas. While the former is primarily addressed in the database area, the latter has been addressed by researchers in schema and ontology matching. Based on this observation, we propose a new scalable methodology for schema matching. Our methodology supports i) a hybrid approach addressing the two facets, based on the combination of pair-wise and holistic strategies and deployed in three phases (pre-matching, matching and post-matching); and ii) a decomposition strategy to divide large XML schemas into small ones using a tree mining technique. Our methodology has been evaluated and implemented in the PLASMA (Platform for LArge Schema MAtching) prototype, specifically developed to this aim. Our experiments on real-world schemas show that PLASMA offers good matching quality and that the proposed decomposition approach improves the performance of schema matching.
Dang, Quang Vinh. "Évaluation de la confiance dans la collaboration à large échelle". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0002/document.
Large-scale collaborative systems, wherein a large number of users collaborate to perform a shared task, attract a lot of attention from both academia and industry. Trust is an important factor for the success of a large-scale collaboration, but it is difficult for end-users to manually assess the trust level of each partner. We study the trust assessment problem and aim to design a computational trust model for collaborative systems, focusing on three research questions. 1. What is the effect of deploying a trust model and showing partners' trust scores to users? We designed and organized a user experiment based on the trust game, a well-known money-exchange lab-controlled protocol, into which we introduced user trust scores. Our comprehensive analysis of user behavior showed that: (i) showing trust scores to users encourages collaboration between them significantly, at a level similar to showing nicknames, and (ii) users follow the trust score in decision-making. The results suggest that a trust model can be deployed in collaborative systems to assist users. 2. How to calculate trust scores between users that have experienced a collaboration? We designed a trust model for the repeated trust game that computes user trust scores based on their past behavior. We validated our trust model against: (i) simulated data, (ii) human opinion, and (iii) real-world experimental data. We extended our trust model to Wikipedia, based on user contributions to the quality of the edited Wikipedia articles, and proposed three machine learning approaches to assess the quality of Wikipedia articles: the first based on random forests with manually designed features, the other two based on deep learning methods. 3. How to predict trust relations between users that did not interact in the past? Given a network in which the links represent trust/distrust relations between users, we aim to predict future relations. We proposed an algorithm that takes into account the time at which links were established in the network to predict future user trust/distrust relationships. Our algorithm outperforms state-of-the-art approaches on real-world signed directed social network datasets.
Le, Merrer Erwan. "Protocoles décentralisés pour la gestion de réseaux logiques large-échelle". Rennes 1, 2007. ftp://ftp.irisa.fr/techreports/theses/2007/lemerrer.pdf.
We focus on large-scale distributed and dynamic systems, and are interested in methods that extract information from the network for monitoring and administration purposes. After surveying related work on techniques that ensure service maintenance, we present four protocols that measure key characteristics of the overlay. We introduce a uniform sampling method based on a random walk. We then present two techniques for estimating the size of a system: the first relies on a random walk, and the second uses the reversed birthday paradox. A comparative study is conducted, and the best technique is compared with other techniques from the related work. We also worked on the replica placement problem for potentially heavily used services. Finally, we introduce, to the best of our knowledge, the first distributed method for estimating the arrival and departure dynamics of the network.
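The "reversed birthday paradox" size estimator mentioned above admits a compact illustration: draw uniform node samples until the first repeat; if the first collision occurs after K draws on average, the population size is close to 2K²/π, since E[K] ≈ √(πn/2) for a population of size n. The code below is a self-contained toy in which a simple callable stands in for the overlay's uniform node sampler (the real protocol obtains uniform samples via random walks):

```python
import math
import random

def estimate_size(sample, trials=200):
    """Estimate population size via the reversed birthday paradox:
    average, over several trials, the number of uniform draws K
    needed to see the first repeated value, then invert
    E[K] ~ sqrt(pi * n / 2) to get n ~ 2 * K^2 / pi."""
    total = 0
    for _ in range(trials):
        seen, k = set(), 0
        while True:
            k += 1
            x = sample()
            if x in seen:       # first collision ends the trial
                break
            seen.add(x)
        total += k
    mean_k = total / trials
    return 2 * mean_k ** 2 / math.pi

# Toy population of 10,000 node identifiers, sampled uniformly.
rng = random.Random(42)
estimate = estimate_size(lambda: rng.randrange(10_000))
print(round(estimate))  # a rough estimate of the true size, 10,000
```

The appeal of the method is its cost: each trial touches only O(√n) nodes, so the estimate is obtained without enumerating the network.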
Yahiaoui, Houssame. "Simulation à large échelle des instabilités du routage inter-domaine". Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0056.
In this thesis, we discuss the study and resolution of inter-domain routing instabilities using large-scale simulation. For over fifteen years now, the inter-domain infrastructure has suffered from serious, still unsolved problems of instability and reliability. We propose a new environment for simulating inter-domain routing instabilities that allows analyzing the causes of instability as well as experimenting with BGP improvement methods. The combination of a large-scale simulator of the BGP protocol with topologies and routing policies inferred from real inter-domain neighborhood data makes it possible to qualitatively reproduce some real-life instabilities in a controlled environment. This environment provides a field of study and testing faithful to reality, since it can reproduce the three main characteristics of the inter-domain routing infrastructure: large-scale topologies, persistent instability and network heterogeneity. We also studied the effects of certain pathological changes in user traffic on inter-domain routing. By modeling the effects of certain malicious code spreading on BGP routers, we could quantify these effects. This model could be used to reproduce worm-induced load changes in the proposed simulation environment, in order to measure their impact on routing instability.
Moise, Diana Maria. "Optimisation de la gestion des données pour les applications MapReduce sur des infrastructures distribuées à grande échelle". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00696062.
Moise, Diana. "Optimisation de la gestion des données pour les applications MapReduce sur des infrastructures distribuées à grande échelle". PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00653622.
Kammouh, Omar. "Resilience assessment of physical infrastructures and social systems of large scale communities". Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2735173.
Rodrigues, Preston. "Interoperabilité à large échelle dans le contexte de l'Internet du future". PhD thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00920457.
Legrand, Contes Virginie. "Une approche à composant pour l'orchestration de services à large échelle". PhD thesis, Université Nice Sophia Antipolis, 2011. http://tel.archives-ouvertes.fr/tel-00710427.
Sarr, Idrissa. "Routage des transactions dans les bases de données à large échelle". Paris 6, 2010. http://www.theses.fr/2010PA066330.
Maggiori, Emmanuel. "Approches d'apprentissage pour la classification à large échelle d'images de télédétection". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4041/document.
The analysis of airborne and satellite images is one of the core subjects in remote sensing. In recent years, technological developments have facilitated the availability of large-scale sources of data covering significant extents of the earth's surface, often at impressive spatial resolutions. Besides the evident computational complexity issues that arise, one of the current challenges is to handle the variability in the appearance of objects across different geographic regions. For this, it is necessary to design classification methods that go beyond the analysis of individual pixel spectra, introducing higher-level contextual information in the process. In this thesis, we first propose a method to perform classification with shape priors, based on the optimization of a hierarchical subdivision data structure. We then delve into the use of the increasingly popular convolutional neural networks (CNNs) to learn deep hierarchical contextual features. We investigate CNNs from multiple angles, in order to address the different points required to adapt them to our problem. Among other subjects, we propose different solutions to output high-resolution classification maps and we study the acquisition of training data. We also created a dataset of aerial images over dissimilar locations and assess the generalization capabilities of CNNs. Finally, we propose a technique to polygonize the output classification maps, so as to integrate them into operational geographic information systems, thus completing the typical processing pipeline observed in a wide number of applications. Throughout this thesis, we experiment on hyperspectral, satellite and aerial images, with scalability, generalization and applicability goals in mind.
Nzekwa, Russel. "Construction flexible des boucles de contrôles autonomes pour les applications à large échelle". Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2013. http://tel.archives-ouvertes.fr/tel-00843874.
Kermarrec, Anne-Marie. "Réseaux logiques collaboratifs pour la recherche décentralisée dans les systèmes à large échelle". Rennes 1, 2007. ftp://ftp.irisa.fr/techreports/theses/2007/riviere.pdf.
It is necessary to propose system-level mechanisms, adapted to dynamism and scale shifts, to help deploy large-scale distributed applications. One such service of primordial importance is the search mechanism. These systems are based upon overlay structures, linking application data elements in a logical network whose structure supports decentralized search. This thesis first investigates the support of instant query mechanisms, and presents VoroNet and RayNet, two systems that natively support search with high expressivity and full exhaustiveness. It then investigates the support of the publish/subscribe communication paradigm in a fully decentralized way. Two self-organizing overlays are presented: Rappel efficiently supports RSS/Atom feed dissemination by leveraging both network and semantic proximities, while Sub-2-Sub is a fully distributed content-based publish/subscribe system.
Ghamri-Doudane, Samir. "Une approche pair à pair pour la découverte de ressources à large échelle". Paris 6, 2008. http://www.theses.fr/2008PA066450.
Fellus, Jérôme. "Algorithmes décentralisés et asynchrones pour l'apprentissage statistique large échelle et application à l'indexation multimédia". Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0899/document.
With the advent of the "data era", the amount of computational resources required by information processing systems has exploded, largely exceeding the technological evolution of modern processors. Specifically, contemporary machine learning applications necessarily resort to massively distributed computation. Distributed algorithmics borrows most of its concepts from classical centralized and sequential algorithmics, where the system's behavior is defined as a sequence of instructions, executed one after the other. The importance of communication between computation units is generally neglected and pushed back to implementation details. Yet, as the number of units grows, the impact of local operations vanishes behind the emergent effects related to the large network of units. To preserve the desirable properties of centralized algorithmics such as stability, predictability and programmability, distributed computational paradigms must encompass this graph-theoretical dimension. This thesis proposes an algorithmic framework for large-scale machine learning that avoids two major drawbacks of classical methods, namely centralization and synchronization. We introduce several new algorithms based on decentralized and asynchronous Gossip protocols, for solving clustering, density estimation, dimension reduction, classification and general convex optimization problems, while offering an appreciable speed-up on large networks with a very low communication cost. These practical advantages are mathematically supported by a theoretical convergence analysis. We finally illustrate the relevance of the proposed methods on multimedia indexing applications and real image classification tasks.
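The decentralized averaging that underpins such Gossip protocols can be illustrated with a minimal sketch; the pairwise push-pull scheme, node count and round budget below are illustrative assumptions, not the algorithms of the thesis:

```python
import random

def gossip_average(values, rounds=500, seed=0):
    """Pairwise push-pull gossip averaging (illustrative sketch).

    Each node holds a local value; at every step, two random nodes
    exchange values and both keep the average. Without any central
    coordinator, all nodes converge to the global mean.
    """
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(x)), 2)  # two nodes gossip
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Ten nodes holding 0..9 all converge toward the global mean 4.5
state = gossip_average(range(10))
```

In expectation, the spread of local values decays geometrically with the number of gossip rounds, which is what makes such protocols attractive on large networks with a low communication cost.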
Creus, Tomàs Jordi. "ROSES : Un moteur de requêtes continues pour l'agrégation de flux RSS à large échelle". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00771539.
Creus, Tomas Jordi. "Roses : Un moteur de requêtes continues pour l’aggrégation de flux RSS à large échelle". Paris 6, 2012. http://www.theses.fr/2012PA066658.
RSS and Atom are generally less known than the HTML web format, but they are omnipresent in many modern web applications for publishing highly dynamic web contents. Nowadays, news sites publish thousands of RSS/Atom feeds, often organized into general topics like politics, economy, sports, culture, etc. Weblog and microblogging systems like Twitter use the RSS publication format, and even more general social media like Facebook produce an RSS feed for every user and trending topic. This vast number of continuous data sources can be accessed through general-purpose feed aggregator applications like Google Reader, desktop clients like Firefox or Thunderbird, and RSS mash-up applications like Yahoo! pipes, Netvibes or Google News. Today, RSS and Atom feeds represent a huge stream of structured text data whose potential is still not fully exploited. In this thesis, we first present ROSES (Really Open Simple and Efficient Syndication), a data model and continuous query language for RSS/Atom feeds. ROSES allows users to create new personalized feeds from existing real-world feeds through a simple, yet complete, declarative query language and algebra. The ROSES algebra has been implemented in a complete scalable prototype system capable of handling and processing ROSES feed aggregation queries. The query engine has been designed to scale in terms of the number of queries. In particular, it implements a new cost-based multi-query optimization approach based on query normalization and shared filter factorization. We propose two different factorization algorithms: (i) STA, an adaptation of an existing approximate algorithm for finding minimal directed Steiner trees [CCC+98a], and (ii) VCA, a greedy approximation algorithm based on efficient heuristics that outperforms the previous one with respect to optimization cost. Our optimization approach has been validated by extensive experimental evaluation on real-world data collections.
Rihawi, Omar. "Modelling and simulation of distributed large scale situated multi-agent systems". Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10148/document.
This thesis aims to design distributed large-scale MAS simulations. When the number of agents reaches several millions, it is necessary to distribute the MAS simulation, which raises several issues: agent allocation, interactions across machines, time management, etc. When we distribute a MAS simulation over different machines, agents must be partitioned between these machines while still being able to produce their normal behaviours. Our distribution covers all agents' perceptions during the simulation and allows all agents to interact normally. Moreover, in large-scale simulations the main observations are made at the macroscopic level. In this thesis, we study two main aspects of distributing large-scale simulations. The first is the strategy used to distribute MAS concepts (agents and environment); we propose two efficient distribution approaches: agent distribution and environment distribution. The second is the relaxation of synchronization constraints in order to speed up the execution of large-scale simulations. Relaxing this constraint can induce incoherent interactions, which do not exist in a synchronized context, but in some applications this does not affect the macroscopic level. Our experiments on different categories of MAS applications show that some applications can be distributed more efficiently with one approach than with the other. In addition, we have studied the impact of incoherent interactions on the emerging behaviour of different applications, and evidenced situations in which unsynchronized simulations still produced the expected macroscopic behaviour.
Rawat, Subhandu. "Dynamique cohérente de mouvements turbulents à grande échelle". Thesis, Toulouse, INPT, 2014. http://www.theses.fr/2014INPT0116/document.
My thesis work focused on a dynamical-systems understanding of the large-scale dynamics in fully developed turbulent shear flows. In plane Couette flow, large-eddy simulation (LES) is used to model small-scale motions and resolve only the large-scale motions, in order to compute nonlinear traveling waves (NTW) and relative periodic orbits (RPO). Artificial over-damping has been used to quench an increasing range of small-scale motions and prove that the large-scale motions are self-sustained. The lower-branch traveling-wave solutions that lie on the laminar-turbulent basin boundary are obtained for these over-damped simulations and further continued in parameter space to upper-branch solutions. This approach would not have been possible if, as conjectured in some previous investigations, large-scale motions in wall-bounded shear flows were forced by a mechanism based on the existence of active structures at smaller scales. In Poiseuille flow, relative periodic orbits with shift-reflection symmetry on the laminar-turbulent basin boundary are computed using DNS. We show that the found RPO are connected to a pair of traveling-wave (TW) solutions via a global bifurcation (saddle-node infinite-period bifurcation). The lower branch of this TW solution evolves into a spanwise-localized state when the spanwise domain is increased. The upper-branch solution develops multiple streaks with spanwise spacing consistent with large-scale motions in the turbulent regime.
Braun, Johannes [Verfasser], Johannes [Akademischer Betreuer] Buchmann i Max [Akademischer Betreuer] Mühlhäuser. "Maintaining Security and Trust in Large Scale Public Key Infrastructures / Johannes Braun. Betreuer: Johannes Buchmann ; Max Mühlhäuser". Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2015. http://d-nb.info/1111113351/34.
Babbar, Rohit. "Machine Learning Strategies for Large-scale Taxonomies". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM064/document.
In the era of Big Data, we need efficient and scalable machine learning algorithms which can perform automatic classification of terabytes of data. In this thesis, we study the machine learning challenges for classification in large-scale taxonomies. These challenges include the computational complexity of training and prediction and the performance on unseen data. In the first part of the thesis, we study the underlying power-law distribution in large-scale taxonomies. This analysis then motivates the derivation of bounds on the space complexity of hierarchical classifiers. Exploiting the study of this distribution further, we design a classification scheme that leads to better accuracy on large-scale power-law distributed categories. We also propose an efficient method for model selection when training multi-class versions of classifiers such as Support Vector Machines and Logistic Regression. Finally, we address another key model selection problem in large-scale classification concerning the choice between flat and hierarchical classification from a learning-theoretic perspective. The presented generalization-error analysis provides an explanation for empirical findings in many recent studies on large-scale hierarchical classification. We further exploit the developed bounds to propose two methods for adapting the given taxonomy of categories into output taxonomies that yield better test accuracy when used in a top-down setup.
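The rank-size power-law shape referred to above can be checked with a short sketch; the log-log least-squares fit and the synthetic category sizes are illustrative assumptions (large-scale analyses often prefer maximum-likelihood estimators):

```python
import math

def powerlaw_exponent(category_sizes):
    """Estimate the rank-size power-law exponent (illustrative sketch).

    Sorts category sizes in decreasing order and fits a least-squares
    line to log(size) versus log(rank); for size ~ rank^(-b), the
    returned value approximates b.
    """
    sizes = sorted(category_sizes, reverse=True)
    xs = [math.log(r) for r in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic sizes drawn exactly from size = 1000 * rank^(-1)
sizes = [1000.0 / r for r in range(1, 101)]
```

On real taxonomy data the fitted exponent quantifies how strongly a few head categories dominate the long tail of rare ones.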
Madeira, De Campos Velho Pedro Antonio. "Evaluation de précision et vitesse de simulation pour des systèmes de calcul distribué à large échelle". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00625497.
Sakka, Mohamed Amin. "Contributions à la modélisation et la conception des systèmes de gestion de provenance à large échelle". Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00762641.
Madeira, de Campos Velho Pedro Antonio. "Evaluation de précision et vitesse de simulation pour des systèmes de calcul distribué à large échelle". Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM027/document.
Large-Scale Distributed Computing (LSDC) systems are in production today to solve problems that require huge amounts of computational power or storage. Such systems are composed of a set of computational resources sharing a communication infrastructure. In such systems, as in any computing environment, specialists need to conduct experiments to validate alternatives and compare solutions. However, due to the distributed nature of the resources, performing experiments in LSDC environments is hard and costly. In such systems, the execution flow depends on the order of events, which is likely to change from one execution to another; consequently, experiments are hard to reproduce, hindering the development process. Moreover, resources are very likely to fail or go off-line. LSDC architectures are also shared, and interference among different applications, or even among processes of the same application, affects the overall application behavior. Last, LSDC applications are time-consuming, so conducting many experiments with several parameters is often unfeasible. For all these reasons, experiments in LSDC often rely on simulations. Today we find many simulation approaches for LSDC. Most of them target specific architectures, such as cluster, grid or volunteer computing, and each simulator claims to be better adapted to a particular research purpose. Nevertheless, those simulators must address the same problems: modeling the network and managing computing resources. Moreover, they must satisfy the same requirements, providing fast, accurate, scalable and repeatable simulations. To meet these requirements, LSDC simulations use models to approximate the system behavior, neglecting some aspects to focus on the desired phenomena. However, models may be wrong, and when this is the case, trusting them leads to unreliable conclusions.
In other words, we need evidence that the models are accurate before accepting the conclusions supported by simulated results. Although many simulators exist for LSDC, studies of their accuracy are rarely found. In this thesis, we are particularly interested in analyzing and proposing accurate models that respect the requirements of LSDC research. To this end, we propose an accuracy evaluation study to verify common and new simulation models. Throughout this document, we propose model improvements to mitigate the simulation error of LSDC simulation, using SimGrid as a case study. We also evaluate the effect of these improvements on scalability and speed. As a main contribution, we show that intuitive models have better accuracy, speed and scalability than other state-of-the-art models. These better results are achieved by performing a thorough and systematic analysis of problematic situations. This analysis reveals that many small yet common phenomena had been neglected in previous models and had to be accounted for to design sound models.
Sakka, Mohamed Amin. "Contributions à la modélisation et la conception des systèmes de gestion de provenance à large échelle". Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0023/document.
Provenance is a key metadata for assessing the trustworthiness of electronic documents: it allows proving the quality and reliability of their content. With the maturation of service-oriented technologies and Cloud computing, more and more data is exchanged electronically, and dematerialization becomes one of the key concepts for cost reduction and efficiency improvement. Although most applications exchanging and processing documents on the Web or in the Cloud become provenance-aware and provide heterogeneous, decentralized and non-interoperable provenance data, most Provenance Management Systems (PMSs) are either dedicated to a specific application (workflow, database, ...) or a specific data type. Those systems were not conceived to support provenance over distributed and heterogeneous sources, which implies that end-users are faced with different provenance models and different query languages. For these reasons, modeling, collecting and querying provenance across heterogeneous distributed sources is considered today a challenging task, as is designing scalable PMSs providing these features. In the first part of our thesis, we focus on provenance modelling. We present a new provenance modelling approach based on semantic Web technologies. Our approach allows importing provenance data from heterogeneous sources and enriching it semantically to obtain a high-level representation of provenance. It provides syntactic interoperability between those sources based on a minimal domain model (MDM), and supports the construction of rich domain models, which allows high-level representations of provenance while keeping semantic interoperability. Our modelling approach also supports semantic correlation between different provenance sources and allows the use of a high-level semantic query language. In the second part of our thesis, we focus on the design, implementation and scalability issues of provenance management systems.
Based on our modelling approach, we propose a centralized logical architecture for PMSs. Then, we present a mediator-based architecture for PMSs aiming to preserve the distribution of provenance sources. Within this architecture, the mediator has a global vision of all provenance sources and possesses query processing and distribution capabilities. The validation of our modelling approach was performed in a document archival context within Novapost, a company offering SaaS services for document archiving. We also propose a non-functional validation aiming to test the scalability of our architecture. This validation is based on two implementations of our PMS: the first uses an RDF triple store (Sesame) and the second a NoSQL DBMS coupled with the map-reduce parallel model (CouchDB). The tests we performed show the limits of Sesame in storing and querying large amounts of provenance data, whereas the PMS based on CouchDB showed good performance and linear scalability.
Cernay, Charles. "Identifier des légumineuses à graines productives en Europe par synthèses quantitatives de données à large échelle". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLA014.
Several studies have stressed the importance of increasing grain legume production in Europe. To date, no quantitative data syntheses have been conducted to compare the productive (and environmental) performances of different grain legumes in this region. The objective of the PhD thesis was to identify grain legume species displaying high productivity levels in Europe. Three data sources were used on a large scale: statistical data, experimental data across Europe and other world regions, and food and feed composition data for grain legumes. In total, 29 species were compared on the basis of their productivity levels and their effects on the yields of the subsequent cereals. We estimated the interannual variability in grain legume yields across Europe and the Americas. Results show that grain legume yields are significantly more variable than non-legume yields in Europe; these differences are smaller in the Americas. We built a global experimental dataset including 173 published articles, 41 countries, and 8,581 crop observations. A first meta-analysis was conducted using this experimental dataset. Results show that soybean (Glycine max), narrow-leafed lupin (Lupinus angustifolius), and faba bean (Vicia faba) generally display productivity levels similar to, and sometimes higher than, those of pea (Pisum sativum) in Europe. Based on the results of this meta-analysis, we estimated that replacing 25% of the area currently under pea (Pisum sativum) with faba bean (Vicia faba), narrow-leafed lupin (Lupinus angustifolius), or soybean (Glycine max) would increase protein production in Europe by +3%, +4%, and +28%, respectively. A second meta-analysis was conducted using the same experimental dataset.
Results show that the yields of cereals cultivated after grain legumes are, on average, significantly higher (+29%) than the yields of cereals cultivated after cereals; this positive effect is significant for 13 of 16 grain legume species. The effect of the preceding grain legume decreases as a function of the nitrogen (N) fertilization rate applied to the subsequent cereal, and becomes negligible when the mean nitrogen fertilization rate exceeds 150 kg N ha-1. Based on the results of this meta-analysis, we estimated that the expected relative decrease in cereal production, resulting from an increase in the proportion of a grain legume in a cereal monoculture, is partially mitigated by the positive effect of the grain legume on the yield of the subsequent cereal under low nitrogen input conditions. Globally, the PhD thesis identifies faba bean (Vicia faba) as an interesting candidate species in Europe, followed by pea (Pisum sativum), soybean (Glycine max), and lupins (Lupinus spp.). Lentil (Lens culinaris), chickpea (Cicer arietinum), and kidney bean (Phaseolus vulgaris) display low productivity levels. However, these species are often promoted for their nutritional benefits for the human diet. Based on comparative insight gained from experiments in North America and Oceania, we suggest assessing the productivity levels of several vetches and lupins (i.e., Lathyrus, Lupinus, and Vicia species excluding Vicia faba) in future field experiments in Europe.
Gougeaud, Sebastien. "Simulation générique et contribution à l'optimisation de la robustesse des systèmes de données à large échelle". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV011/document.
The capacity of data storage systems does not cease to increase, currently reaching the exabyte scale. This observation has a real impact on storage system robustness: the more disks a system contains, the greater the probability of a failure, and the time needed to reconstruct a disk is proportional to its size. Simulation is an appropriate technique to test new mechanisms in almost-real conditions and predict their behavior. We propose a new software tool we call the Open and Generic data Storage system Simulation tool (OGSSim). It handles the heterogeneity and the large size of these modern systems. Its modularity permits treating each storage technology, placement scheme or computation model as a brick which can be added and combined to optimally configure the simulation. Robustness is a critical issue for these systems. We use declustered RAID to distribute data reconstruction in case of a failure, and propose the Symmetric Difference of Source Sets (SD2S) algorithm, which uses data block shifting to achieve the placement scheme. The shifting offset comes from the computation of the distance between logical source sets of physical disk blocks. To evaluate the efficiency of SD2S, we compared it to the CRUSH method without replicas. SD2S achieves faster placement scheme creation in both normal and failure modes, with a significantly reduced memory space cost (null without failure). Furthermore, SD2S ensures the partial, if not total, reconstruction of data in case of multiple failures.
Joulin, Pierre-Antoine. "Modélisation à fine échelle des interactions entre parcs éoliens et météorologie locale". Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0135.
The development of wind energy, encouraged by the French Multiannual Energy Program, raises new questions. Some parks will be located on mountainous and offshore terrains. To forecast the energy production and try to optimize it, a better understanding of the flow within wind farms on that type of terrain is needed. In addition, modern offshore wind turbines are getting larger and will interact more strongly with the local weather, so it seems important to characterize these interactions. To respond to this industrial and environmental challenge, a new numerical tool was created during this thesis work. The first part of this manuscript focuses on the concepts and theoretical models of the Atmospheric Boundary Layer (ABL) and wind turbines. In particular, the Meso-NH meteorological model, used in the Large-Eddy Simulation (LES) framework, and simplified models of wind turbines have been investigated: the Actuator Disk (AD), with and without rotation, and the Actuator Line (AL). The second part is devoted to the development and validation of the coupled tool. By implementing the AD and AL methods within Meso-NH, it becomes possible to simulate the presence of wind turbines in a realistic atmospheric boundary layer. A first validation step is based on a wind tunnel experiment, involving five wind turbines on a hill, to analyze the coupling with the non-rotating Actuator Disk. A second focuses on the MexNext experiment on a small wind turbine, to study the coupling with the Actuator Line. All the results obtained are very satisfactory. The third part focuses on the potential impact of wind farms on the local weather. The ability of the tool to reproduce complex meteorological interactions has been demonstrated by simulating the case of the Horns Rev 1 photographs. The cloud development obtained by the coupled system demonstrates the potential of the developed tool.
In order to characterize the impact of future offshore parks on the local meteorology, large wind turbines immersed in a thin atmospheric boundary layer were simulated. A clear-weather case and a cloudy one were examined; additional studies will be needed to complement these preliminary results. Thus, new Meso-NH parameterizations now make it possible to represent wind turbines in a realistic atmosphere, widening the scope of possible CFD simulations for wind farms.
Bellassen, Valentin. "Gestion forestière et cycle du carbone : apports de la modélisation à large échelle et de la télédétection". Paris 6, 2010. http://www.theses.fr/2010PA066361.
Castel, David. "Inférence du réseau génétique d'Id2 dans les kératinocytes humains par intégration de données génomiques à large échelle". Evry-Val d'Essonne, 2007. http://www.biblio.univ-evry.fr/theses/2007/interne/2007/2007EVRY0026.pdf.
We report in the present study the characterization of the genetic regulatory network of Id2, a dominant negative regulator of bHLH factors, to further understand its role in the control of the proliferation/differentiation balance in human keratinocytes. To identify Id2 gene targets, we first used gene expression profiling in cells exhibiting Id2 overexpression or knock-down. At the same time, we screened an siRNA library using an siRNA microarray approach to characterize Id2 transcriptional regulators. These results, together with additional phenotypic observations, show that Id2 exerts a key role in the control of keratinocyte commitment to differentiation or proliferation. Furthermore, we unravel new functions of Id2 in anaphase promotion and DNA recombination control. Overall, our results allowed a first description of the topology of the Id2 genetic regulatory network.
Emery, Charlotte. "Contribution de la future mission altimétrique à large fauchée SWOT pour la modélisation hydrologique à grande échelle". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30034/document.
The scientific objective of this PhD work is to improve the estimation of water fluxes on continental surfaces, at interannual and interseasonal scales (from a few years to a decade). More specifically, it studies the contribution of remotely-sensed measurements to improving hydrological models. Notably, this work focuses on the upcoming SWOT mission (Surface Water and Ocean Topography, launch scheduled for 2021) for the study of the continental water cycle at global scale, using the ISBA-TRIP land surface model. In this PhD work, I explore the potential of satellite data to correct both the input parameters of the TRIP river routing scheme and its state variables. To do so, a data assimilation platform has been set up to assimilate SWOT virtual observations as well as discharge estimated from real nadir altimetry data. Beforehand, it was necessary to perform a sensitivity analysis of the TRIP model with respect to its parameters, in order to highlight which parameters have the most impact on SWOT-observed variables and therefore select the ones to correct via data assimilation. The sensitivity analysis (ANOVA) has been conducted on TRIP's main parameters over the Amazon basin. The results showed that the simulated water levels are sensitive exclusively to local geomorphological parameters. On the other hand, the simulated discharges are sensitive to upstream parameters (according to the TRIP river routing network) and more particularly to the groundwater time constant. Finally, water anomalies present sensitivities similar to those of the water levels but with more pronounced temporal variations. These results, which have been published, also led me to make choices in the implementation of the assimilation scheme. In the second part of my PhD, I therefore focused on developing a data assimilation platform based on an Ensemble Kalman Filter (EnKF), which can either correct the model input parameters or directly its state.
A series of twin experiments is used to test and validate the parameter estimation module of the platform. SWOT virtual observations of water heights and anomalies along SWOT tracks are assimilated to correct the river Manning coefficient, with the possibility of easily extending to other parameters. First results show that the platform is able to recover the "true" Manning distribution by assimilating SWOT-like water heights and anomalies. In the state estimation mode, daily assimilation cycles are performed to correct the initial state of TRIP river water storage by assimilating ENVISAT-based discharge. Those observations are derived from ENVISAT water elevation measurements, using rating curves from the MGB-IPH hydrological model (calibrated over the Amazon using in situ gauge discharges). Using such observations allows going beyond idealized twin experiments and also tests the contribution of a remotely-sensed discharge product, which could prefigure the SWOT discharge product. The results show that discharges after assimilation are globally improved: the root-mean-square error between the analysis ensemble mean and in situ discharges is reduced by 28% compared to the error of the free run (the RMSE of the free run and of the analysis are respectively 2.79 × 10³ m³/s and 1.98 × 10³ m³/s).
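The analysis step of an Ensemble Kalman Filter can be sketched for a directly-observed scalar state; this toy update is not the ISBA-TRIP implementation, and the Gaussian forecast ensemble, observation value and error standard deviation below are invented for illustration:

```python
import random
import statistics

def enkf_update_scalar(ensemble, obs, obs_err_std, rng):
    """One stochastic EnKF analysis step for a scalar state observed
    directly (illustrative sketch).

    Each member is pulled toward a perturbed copy of the observation,
    weighted by the Kalman gain K = P / (P + R), where P is the
    forecast ensemble variance and R the observation-error variance.
    """
    P = statistics.variance(ensemble)
    R = obs_err_std ** 2
    K = P / (P + R)  # Kalman gain
    return [x + K * (obs + rng.gauss(0.0, obs_err_std) - x)
            for x in ensemble]

# A forecast ensemble around 5.0 assimilates an observation of 2.0:
rng = random.Random(0)
forecast = [rng.gauss(5.0, 1.0) for _ in range(200)]
analysis = enkf_update_scalar(forecast, obs=2.0, obs_err_std=0.5, rng=rng)
```

The analysis ensemble mean moves from the forecast mean toward the observation, and the ensemble spread shrinks, mirroring the error reduction reported above.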
Ductor, Sylvain. "Mécanismes de coordination pour l'allocation dynamique de ressources dans des systèmes multi-agents large-échelle et ouverts". Paris 6, 2013. http://www.theses.fr/2013PA066036.
MAS offer a paradigm adapted to solving distributed constraint optimisation problems. Nowadays, more and more applications must handle such problems, notably in domains like cloud computing or ubiquitous computing. In those domains, different agents, which may have potentially conflicting objectives, must coordinate in order to find a common solution. The aim is to optimise the agents' utilities while respecting the problem constraints. We are interested in large-scale, open and dynamic applications. Welfare engineering has recently proposed a solid theoretical and experimental framework for this kind of problem: iterated consensual negotiation. This domain studies the relations between the agents' rationalities, the coordination mechanism and the social objective. However, as far as we know, no study in this domain has formalised and designed coordination mechanisms. This thesis is about designing operational mechanisms in the context of welfare engineering. We first contribute to this domain by elaborating a formal model of coordination mechanisms, and we then develop an abstract architecture for agent negotiation. We propose five mechanisms that are applicable to large-scale, dynamic and open applications; four of them consider the restricted context of resource allocation. Finally, an experimental validation has been conducted, comparing the mechanisms to a parallel and a distributed approach.