Dissertations on the topic "Réseau du Edge"
Format your source citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 48 dissertations for your research on the topic "Réseau du Edge".
Next to every entry in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are available in the record's metadata.
Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.
Abderrahim, Mohamed. "Conception d’un système de supervision programmable et reconfigurable pour une infrastructure informatique et réseau répartie." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0119/document.
Cloud offers compute, storage and network as services. To reduce the cost of this offer, operators tend to rely on centralized and massive infrastructures. However, such a configuration hinders the satisfaction of the latency and bandwidth requirements of new-generation applications. The Edge aims to rise to this challenge by relying on massively distributed resources. To satisfy the operators and the users of the Edge, management services similar to the ones that made the success of the Cloud should be designed. In this thesis, we focus on the monitoring service. We design a framework to establish a holistic monitoring service. This framework determines a peer-to-peer deployment architecture for the observation, processing, and exposition of measurements. It verifies that this architecture satisfies the functional and quality-of-service constraints of the users. For this purpose, it relies on a description of user requirements and a description of the Edge infrastructure. The expression of these two elements can be unified with two languages offered by the framework. The deployment architecture is determined with the aim of minimizing the compute and network footprint of the monitoring service. For this purpose, the functions are mutualized as much as possible among the different users. Our tests showed the relevance of our proposal for reducing the monitoring footprint, with gains of 28% for compute and 24% for network usage.
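The footprint reduction reported above comes from mutualizing observation and processing functions among users whose requirements overlap. A minimal sketch of that sharing idea, with hypothetical requirement tuples standing in for the framework's description languages:

```python
from collections import defaultdict

# Hypothetical user requirements: (metric, aggregation, period_s) per user.
requirements = {
    "sp-a": [("cpu", "mean", 10), ("net_rx", "p95", 30)],
    "sp-b": [("cpu", "mean", 10), ("ram", "max", 60)],
    "sp-c": [("net_rx", "p95", 30)],
}

# Mutualize: deploy one observation/processing function per distinct requirement,
# shared by every user that asked for it.
shared = defaultdict(list)
for user, reqs in requirements.items():
    for req in reqs:
        shared[req].append(user)

for (metric, agg, period), users in shared.items():
    print(f"deploy 1 function: {agg}({metric}) every {period}s -> serves {users}")

# Without mutualization, one function per (user, requirement) pair would be deployed.
naive = sum(len(r) for r in requirements.values())
print(f"functions deployed: {len(shared)} instead of {naive}")
```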
Minerva, Roberto. "Will the Telco survive to an ever changing world ? Technical considerations leading to disruptive scenarios." Thesis, Evry, Institut national des télécommunications, 2013. http://www.theses.fr/2013TELE0011/document.
The telecommunications industry is going through a difficult phase because of profound technological changes, mainly originating from the development of the Internet. These changes have a major impact on the telecommunications industry as a whole and, consequently, on the future deployment of new networks, platforms and services. The evolution of the Internet has a particularly strong impact on telecommunications operators (Telcos). In fact, the telecommunications industry is on the verge of major changes due to many factors, such as the gradual commoditization of connectivity, the dominance of web services companies (Webcos), and the growing importance of software-based solutions that introduce flexibility (compared to the static systems of telecom operators). This thesis develops, proposes and compares plausible future scenarios based on future solutions and approaches that will be technologically feasible and viable. The identified scenarios cover a wide range of possibilities: 1) Traditional Telco; 2) Telco as Bit Carrier; 3) Telco as Platform Provider; 4) Telco as Service Provider; 5) Telco Disappearance. For each scenario, a viable platform (from the point of view of telecom operators) is described, highlighting the enabled service portfolio and its potential benefits.
Renoust, Benjamin. "Analysis and Visualisation of Edge Entanglement in Multiplex Networks." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00942358.
Aouadj, Messaoud. "AirNet, le modèle de virtualisation « Edge-Fabric » comme plan de contrôle pour les réseaux programmables." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30138/document.
The work of this thesis falls within the general context of software-defined networking (SDN). This new paradigm is one of the most significant initiatives to enable network programmability or, in other words, to make current networks easier to configure, test, debug and evolve. Within an SDN ecosystem, the northbound interface is used by network administrators to define policies and to program the control plane; it thus represents a major challenge. Ideally, this northbound interface should allow administrators to describe, as simply as possible, network services and their interactions, rather than specifying how and on what physical device they need to be deployed. Current related works show that this can be partly achieved through virtualization solutions and high-level domain-specific languages (DSLs). The objective of this thesis is to propose a new northbound interface which will, on the one hand, rely on network virtualization and, on the other hand, expose its services as a domain-specific programming language. Currently, several languages that include network virtualization solutions exist. Nevertheless, we believe that the abstract models they use to build virtual networks remain inadequate to ensure simplicity, modularity and flexibility of virtual topologies and control programs. In this context, we propose a new network control language named AirNet. Our language is built on top of an abstraction model whose main feature is to provide a clear separation between edge and core network devices. This concept is a well-known and accepted idea within the network designer community. The originality of our contribution is to lift this concept up to the virtual control plane, not limiting it solely to the physical plane. Thus, logical boundaries between different types of policies exist (control and data functions vs. transport functions), ensuring modularity and reusability of the control program. Moreover, in the proposed approach, the definition of the virtual network and policies is totally dissociated from the target physical infrastructure, promoting the portability of control applications. An implementation of the AirNet language has also been carried out. This prototype includes in particular a library that implements the primitives and operators of the language, and a hypervisor that achieves the composition of the control policies on the virtual network and their mapping onto the physical infrastructure. In order to rely on existing SDN controllers, the hypervisor includes integration modules for the POX and Ryu controllers. An experimental validation has also been conducted on different use cases (filtering, load balancing, dynamic authentication, bandwidth throttling, etc.), whose results demonstrate the feasibility of our solution. Finally, performance measurements have shown that the additional cost brought by this new abstraction layer is perfectly acceptable.
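The edge/fabric separation described above can be illustrated with a toy pipeline in which control and data functions live only at the edge, while the fabric exposes pure transport between edge identifiers. This is a hypothetical Python sketch of the concept, not AirNet's actual syntax or API:

```python
# Toy virtual topology: two edges connected through an abstract fabric.
def edge_policy_in(packet):
    # Control/data function at the ingress edge: drop telnet, tag the rest.
    if packet.get("dst_port") == 23:
        return None                      # filtered at the edge
    packet["via"] = "fabric0"            # hand over to the transport fabric
    return packet

def fabric_transport(packet):
    # The fabric only forwards between edge identifiers; no per-flow logic here.
    return {"edge-in": "edge-out"}.get(packet["ingress"]), packet

def edge_policy_out(packet):
    # Egress edge function, e.g. monitoring or rewriting.
    packet["monitored"] = True
    return packet

def virtual_pipeline(packet):
    packet["ingress"] = "edge-in"
    packet = edge_policy_in(packet)
    if packet is None:
        return None
    egress, packet = fabric_transport(packet)
    packet["egress"] = egress
    return edge_policy_out(packet)

print(virtual_pipeline({"dst_port": 80}))   # forwarded and monitored
print(virtual_pipeline({"dst_port": 23}))   # dropped by the ingress edge policy
```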
Mekki, Mohamed. "Enabling Zero-Touch Cloud Edge Computing Continuum Management." Electronic Thesis or Diss., Sorbonne université, 2024. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2024SORUS231.pdf.
The maturation of cloud computing and edge computing infrastructure provisioning and management has led to the emergence of the Cloud Edge Computing Continuum (CECC). CECC enables seamless deployment and migration of applications between centralized cloud infrastructures and decentralized edge infrastructures. This evolution has driven new use cases across industries, including the Industrial Internet of Things (IIoT), autonomous vehicles, and augmented reality, all benefiting from this distributed architecture. These use cases require the scalability and storage of the massive data centers typical of traditional cloud computing, as well as the low latency and high bandwidth offered by edge computing infrastructures. Several factors enable the development and deployment of applications that fully leverage CECC: advancements in application deployment technologies like virtualization and containerization, a shift in application architecture and development methodology towards microservices architectures, and innovations in networking technologies such as 5G mobile networks. Efficiently orchestrating applications within the CECC framework is crucial for meeting performance requirements and optimizing infrastructure resource utilization. This thesis proposes solutions for zero-touch management of CECC, focusing on automated monitoring, profiling, and decision-making processes. These solutions aim to automate application management, facilitating seamless orchestration and resource optimization. In the first contribution, a novel monitoring system for multi-domain services is proposed, utilizing a unified structure for Key Performance Indicators (KPIs) to abstract the underlying technologies. This scalable system monitors end-to-end network slices, including the Radio Access Network (RAN), Core Network (CN), and Cloud/Edge domains. The second contribution presents results from an experimental study aiming to detect whether a tenant's configuration allows its service to run optimally. The study provides insights into detecting and correcting performance degradation due to misconfiguration of service resources. Moving towards the decision-making of a CECC manager, the third contribution proposes a Zero-Touch Service Management (ZSM) framework featuring a fine-granular computing resource scaler in a cloud-native environment. The scaler uses AI/ML models to predict microservice performance, with an XAI module conducting root-cause analysis of service degradation. The proposed framework then scales only the needed resources (i.e., CPU or memory) to overcome the service degradation. Finally, in the last contribution, an architecture for a CECC application orchestrator is proposed, leveraging application and infrastructure profiling for efficient management. These profiles represent current and future application requirements, guiding decision-making processes (placement, resource scaling, migration) to minimize carbon footprint and deployment costs.
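The fine-granular scaler mentioned in the third contribution acts on a single resource rather than on the whole pod. A minimal sketch of that decision step, with hypothetical predicted utilizations standing in for the framework's AI/ML performance models:

```python
# Hypothetical per-microservice state: current requests and predicted peak usage.
microservice = {
    "cpu_request_millicores": 500, "cpu_predicted_millicores": 780,
    "mem_request_mib": 512,       "mem_predicted_mib": 300,
}

def fine_granular_scaling(ms, headroom=1.2):
    """Return a patch that scales only the resource expected to saturate."""
    patch = {}
    if ms["cpu_predicted_millicores"] > ms["cpu_request_millicores"]:
        patch["cpu_request_millicores"] = int(ms["cpu_predicted_millicores"] * headroom)
    if ms["mem_predicted_mib"] > ms["mem_request_mib"]:
        patch["mem_request_mib"] = int(ms["mem_predicted_mib"] * headroom)
    return patch  # an empty patch means no scaling action is needed

print(fine_granular_scaling(microservice))  # only the CPU request is raised
```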
Li, Yue. "Edge computing-based access network selection for heterogeneous wireless networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S042/document.
The telecommunication network has evolved from 1G to 4G over the past decades. One of the typical characteristics of the 4G network is the coexistence of heterogeneous radio access technologies, which offers end-users the capability to connect to them and to switch between them with their new-generation mobile devices. However, selecting the right network is not an easy task for mobile users, since access network conditions change rapidly. Moreover, video streaming is becoming the major data service over the mobile network, where content providers and network operators should cooperate to guarantee the quality of video delivery. To cope with this context, this thesis concerns the design of a novel approach for making optimal network selection decisions and of an architecture for improving the performance of adaptive streaming in the context of a heterogeneous network. Firstly, we introduce an analytical model (i.e. a linear discrete-time system) to describe the network selection procedure considering one traffic class. Then, we consider the design of a selection strategy based on foundations from linear optimal control theory, with the objective of maximizing network resource utilization while meeting the constraints of the supported services. Computer simulations with MATLAB are carried out to validate the efficiency of the proposed mechanism. Based on the same principle, we extend this model into a general analytical model describing the network selection procedures in heterogeneous network environments with multiple traffic classes. The proposed model was then used to derive a scalable mechanism based on control theory, which not only assists in dynamically steering traffic to the most appropriate access network but also helps in dynamically blocking residual traffic when the network is congested, by dynamically adjusting the access probabilities. We discuss the advantages of a seamless integration with the ANDSF. A prototype is also implemented in ns-3. Simulation results show that the proposed scheme prevents network congestion and demonstrates the effectiveness of the controller design, which can maximize network resource allocation by converging the network workload to the targeted network occupancy. Thereafter, we focus on enhancing the performance of DASH in a mobile network environment for users who have a single access network. We introduce a novel architecture based on MEC. The proposed adaptation mechanism, running as an MEC service, can modify the manifest files in real time, responding to network congestion and dynamic demand, thus driving clients towards selecting more appropriate quality/bitrate video representations. We have developed a virtualized testbed to run experiments with our proposed scheme. The simulation results demonstrate its QoE benefits compared to traditional, purely client-driven bitrate adaptation approaches, since our scheme notably improves both the achieved MOS and fairness in the face of congestion. Finally, we extend the proposed MEC-based architecture to support the DASH service in a multi-access heterogeneous network in order to maximize the QoE and fairness of mobile users. In this scenario, our scheme should help users select both the video quality and the access network, and we formulate it as an optimization problem. This optimization problem can be solved by the IBM CPLEX tool. However, this tool is time-consuming and not scalable.
Therefore, we introduce a heuristic algorithm that computes a sub-optimal solution with less complexity. We then implement a testbed to conduct the experiment, and the results demonstrate that our proposed algorithm can achieve similar performance in terms of overall achieved QoE and fairness while being far less time-consuming than the IBM CPLEX tool.
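The control-theoretic steering of access probabilities described above can be illustrated with a simple discrete-time feedback loop; the sketch below uses a plain proportional correction under assumed load responses, not the exact control law derived in the thesis:

```python
def update_access_probabilities(probs, loads, targets, gain=0.5):
    """One discrete-time step: raise the admission probability of under-loaded
    networks and lower it for congested ones, clipped to [0, 1]."""
    new_probs = []
    for p, load, target in zip(probs, loads, targets):
        p = p + gain * (target - load)        # proportional correction
        new_probs.append(min(1.0, max(0.0, p)))
    return new_probs

# Two access networks (e.g. LTE and WLAN) with target occupancy 0.7.
probs, targets = [1.0, 1.0], [0.7, 0.7]
for step in range(5):
    loads = [0.9 * probs[0], 0.5 * probs[1]]  # assumed load response to admission
    probs = update_access_probabilities(probs, loads, targets)
    print(step, [round(p, 2) for p in probs])
```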
Morvan, Alexis. "Honeycomb lattices of superconducting microwave resonators : Observation of topological Semenoff edge states." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS037/document.
This thesis describes the realization and study of honeycomb lattices of superconducting resonators. This work is a first step towards the simulation of condensed-matter systems with superconducting circuits. Our lattices are micro-fabricated and typically contain a few hundred sites. In order to observe the eigenmodes that appear between 4 and 8 GHz, we have developed a mode-imaging technique based on the local dissipation introduced by a laser spot that we can move across the lattice. We have been able to measure the band structure and to characterize the edge states of our lattices. In particular, we observe localized states that appear at the interface between two Semenoff insulators with opposite masses. These states, called Semenoff states, have a topological origin. Our observations are in good agreement with ab initio electromagnetic simulations.
Toumlilt, Ilyas. "Colony : a Hybrid Consistency System for Highly-Available Collaborative Edge Computing." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS447.
Immediate response, autonomy and availability are brought to edge applications, such as gaming, cooperative engineering, or in-the-field information sharing, by distributing and replicating data at the edge. However, application developers and users demand the highest possible consistency guarantees, and specific support for group collaboration. To address this challenge, COLONY guarantees Transactional Causal Plus Consistency (TCC+) globally, dovetailing with Snapshot Isolation within edge groups. To help with scalability, fault tolerance and security, its logical communication topology is tree-like, with replicated roots in the core cloud, but with the flexibility to migrate a node or a group. Despite this hybrid approach, applications enjoy the same semantics everywhere in the topology. Our experiments show that local caching and peer groups improve throughput and response time significantly, that performance is not affected in offline mode, and that migration is seamless.
Harmassi, Mariem. "Thing-to-thing context-awareness at the edge." Thesis, La Rochelle, 2019. http://www.theses.fr/2019LAROS037.
The Internet of Things (IoT) today comprises a plethora of different sensors and diverse connected objects, constantly collecting and sharing heterogeneous sensory data from their environment. This enables the emergence of new applications exploiting the collected data to facilitate citizens' lifestyles. These IoT applications are made context-aware thanks to data collected about the user's context, so that they can adapt their behavior autonomously without human intervention. In this thesis, we propose a novel paradigm that concerns Machine-to-Machine (M2M)/Thing-to-Thing (T2T) interactions being aware of each other's context, named "T2T context-awareness at the edge"; it brings conventional context-awareness from the application front end to the application back end. More precisely, we propose to empower IoT devices with intelligence, allowing them to understand their environment and adapt their behaviors based on, and even act upon, the information captured by the neighboring devices around them, thus creating a collective intelligence. The first challenge we face in order to make IoT devices context-aware is (i) how can we extract such information without deploying any dedicated resources for this task? To do so, we propose in our first work a context reasoner [1] based on cooperation among IoT devices located in the same surroundings. Such cooperation aims at mutually exchanging data about each other's context. To enable IoT devices to see, hear, and smell the physical world for themselves, we first need to connect them so that they can share their observations. For a mobile and energy-constrained device, the second challenge we face is (ii) how to discover as many neighbors as possible in its vicinity while preserving its energy resources? We propose Welcome [2], a low-latency and energy-efficient neighbor discovery scheme based on a single-delegate election method. Finally, a publish-subscribe system that takes into account the context at the edge of IoT devices can greatly reduce the overhead and save energy by avoiding unnecessary transmission of data that does not match application requirements. However, if not thought out properly, building such T2T context-awareness could imply an overload of subscriptions to meet context-estimation needs. So our third contribution addresses (iii) how to make IoT devices context-aware while saving energy. To answer this, we propose an energy-efficient and context-aware publish-subscribe scheme [3] that strikes a balance between the energy consumed by context estimation and the energy saved by context-based filtering near data sources.
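The energy saving of the context-aware publish-subscribe comes from filtering right next to the data source: a device transmits an observation only if its current context matches an active subscription. A minimal sketch of that filtering step, with hypothetical context fields rather than the actual protocol of [3]:

```python
subscriptions = [
    # A subscriber only wants outdoor temperature readings from moving devices.
    {"metric": "temperature", "context": {"place": "outdoor", "mobility": "moving"}},
]

def should_publish(observation, device_context):
    """Transmit only if some subscription matches both the data and the context."""
    for sub in subscriptions:
        if observation["metric"] != sub["metric"]:
            continue
        if all(device_context.get(k) == v for k, v in sub["context"].items()):
            return True
    return False

obs = {"metric": "temperature", "value": 21.5}
print(should_publish(obs, {"place": "outdoor", "mobility": "moving"}))  # True -> send
print(should_publish(obs, {"place": "indoor", "mobility": "static"}))   # False -> save energy
```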
Minerva, Roberto. "Will the Telco survive to an ever changing world ? Technical considerations leading to disruptive scenarios." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2013. http://www.theses.fr/2013TELE0011.
Foroughi, Parisa. "Towards network automation : planning and monitoring." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT038.
Network management is undergoing drastic changes due to the high expectations placed on the infrastructure to support new services. The diverse requirements of these services call for the integration of new enabler technologies that complicate the network monitoring and planning process. Therefore, to alleviate this burden and increase monitoring and planning accuracy, more automated solutions at the element/device level are required. In this thesis, we propose a semi-automated framework called AI-driven telemetry (ADT) for collecting, processing, and assessing the state of routers using streaming telemetry data. ADT consists of four building blocks: collector, detector, explainer, and exporter. We concentrate on the detection block of ADT and propose a multi-variate online change detection technique called DESTIN. Our study of the explainer block of ADT is limited to exploring the potential of the input data and showcasing the possibility of automated event description. Then, we tackle the problem of planning and dimensioning in radio access networks equipped with distributed edge servers. We propose a model that satisfies the service requirements and makes use of novel enabler technologies, i.e. network slicing and virtualization techniques. We showcase the advantages of using our holistic model to automate RAN planning by utilizing simulated annealing and greedy methods.
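As a rough illustration of multivariate online change detection on streaming router telemetry (not the DESTIN technique itself), one can maintain an exponentially weighted mean and variance per KPI and flag samples whose joint standardized deviation exceeds a threshold:

```python
class OnlineChangeDetector:
    """Flag a change when the average squared z-score across KPIs is too large."""
    def __init__(self, n_kpis, alpha=0.05, threshold=9.0):
        self.mean = [0.0] * n_kpis
        self.var = [1.0] * n_kpis
        self.alpha, self.threshold, self.warm = alpha, threshold, 0

    def update(self, sample):
        score = 0.0
        for i, x in enumerate(sample):
            d = x - self.mean[i]
            score += d * d / max(self.var[i], 1e-9)
            self.mean[i] += self.alpha * d                       # EWMA of the mean
            self.var[i] = (1 - self.alpha) * (self.var[i] + self.alpha * d * d)
        self.warm += 1
        return self.warm > 20 and score / len(sample) > self.threshold

det = OnlineChangeDetector(n_kpis=3)
stream = [(50.0, 0.2, 120.0)] * 40 + [(90.0, 0.9, 400.0)] * 5   # abrupt shift
print([t for t, s in enumerate(stream) if det.update(s)])        # detects the shift at index 40
```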
Cuadrado-Cordero, Ismael. "Microclouds : an approach for a network-aware energy-efficient decentralised cloud." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S003/document.
The current datacenter-centralized architecture limits the cloud to the location of the datacenters, generally far from the user. This architecture collides with the latest trend towards ubiquity of cloud computing. Also, the current estimated energy usage of data centers and core networks adds up to 3% of global energy production, while according to the latest estimations only 42.3% of the population is connected. In this work, we focus on two drawbacks of datacenter-centralized clouds: energy consumption and poor quality of service. On the one hand, due to its centralized nature, energy consumption in networks is affected by the centralized vision of the cloud. That is, backbone networks increase their energy consumption in order to connect the clients to the datacenters. On the other hand, distance leads to increased utilization of the broadband Wide Area Network and poor user experience, especially for interactive applications. A distributed approach can provide a better Quality of Experience (QoE) for large urban populations in mobile cloud networks. To do so, the cloud should confine local traffic close to the user, running on user and network devices. In this work, we propose a novel distributed cloud architecture based on microclouds. Microclouds are dynamically created and allow users to contribute resources from their computers, mobile and network devices to the cloud. This way, they provide a dynamic and scalable system without the need for extra investment in infrastructure. We also provide a description of a realistic mobile cloud use case, and the adaptation of microclouds to it. Through simulations, we show overall savings of up to 75% of the energy consumed in standard centralized clouds with our approach. Also, our results indicate that this architecture scales with the number of mobile devices and provides significantly lower latency than regular datacenter-centralized approaches. Finally, we analyze the use of incentives for mobile clouds, and propose a new auction system adapted to the high dynamism and heterogeneity of these systems. We compare our solution to other existing auction systems in a mobile cloud use case, and show the suitability of our solution.
Khizar, Sadia. "Metrology for 5G edge networks (MEC). Leveraging mobile devices beyond the edge toward task offloading." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS069.
The pervasiveness of mobile devices equipped with internet connectivity and positioning systems leads us to regard them as a valuable resource to leverage. In this thesis, we tackle the use of mobile devices from a new perspective. We consider the extension of the capacity of the MEC by using the available resources of mobile devices beyond the edge of the infrastructure network. The goal is to leverage their untapped resources to process computation on behalf of the MEC in a distributed way. For the MEC to rely on mobile nodes, it is fundamental that it be aware of its operating environment. In the first part of the thesis, we focus on the temporal availability of beyond-the-edge resources. We choose to investigate the co-location of terminals and analyze their persistence in a cell. Then, we turn our attention to task allocation. We shift the focus to the spatio-temporal aspect by quantifying the resources that a cell can provide to perform a MEC task. We estimate the potential amount of computational tasks performed by nodes based on the cumulative presence time in a given cell and a given completion delay. The results provide insight into the possibilities of offloading computing tasks onto mobile nodes. Furthermore, they indicate the locations where it is advisable to offload tasks and the duration of the tasks that can be offloaded.
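The spatio-temporal quantification described above can be reduced to a simple rule of thumb: a node can complete as many tasks as full completion delays fit into its cumulative presence time in the cell. A minimal sketch under assumed presence times:

```python
# Hypothetical cumulative presence times (seconds) of nodes observed in one cell.
presence_s = [40, 350, 620, 90, 1800, 15]
task_completion_delay_s = 120

# Each node can complete as many tasks as full completion delays fit
# into its cumulative presence time in the cell.
per_node = [t // task_completion_delay_s for t in presence_s]
print(per_node)                  # [0, 2, 5, 0, 15, 0]
print("cell capacity:", sum(per_node), "tasks")
```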
Jouni, Zalfa. "Analog spike-based neuromorphic computing for low-power smart IoT applications." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPAST114.
As the Internet of Things (IoT) expands with more connected devices and complex communications, the demand for precise, energy-efficient localization technologies has intensified. Traditional machine learning and artificial intelligence (AI) techniques provide high accuracy in radio-frequency (RF) localization, but often at the cost of greater complexity and power usage. To address these challenges, this thesis explores the potential of neuromorphic computing, inspired by brain functionality, to enable energy-efficient AI-based RF localization. It introduces an end-to-end analog spike-based neuromorphic system (RF NeuroAS), with a simplified version fully implemented in BiCMOS 55 nm technology. RF NeuroAS is designed to identify source positions within a 360-degree range on a two-dimensional plane, maintaining high resolution (10 or 1 degree) even in noisy conditions. The core of this system, an analog-based spiking neural network (A-SNN), was trained and tested on a simulated dataset (SimLocRF) from MATLAB and an experimental dataset (MeasLocRF) from anechoic chamber measurements, both developed in this thesis. The learning algorithms for the A-SNN were developed through two approaches: software-based deep learning (DL) and bio-plausible spike-timing-dependent plasticity (STDP). RF NeuroAS achieves a localization accuracy of 97.1% with SimLocRF and 90.7% with MeasLocRF at a 10-degree resolution, maintaining high performance with low power consumption in the nanowatt range. The simplified RF NeuroAS consumes just over 1.1 nW and operates within a 30 dB dynamic range. A-SNN learning, via DL and STDP, was demonstrated on the XOR and MNIST problems. DL depends on the non-linearity of the post-layout transfer functions of the A-SNN's neurons and synapses, while STDP depends on the random noise in analog neuron circuits. These findings highlight advancements in energy-efficient IoT through neuromorphic computing, promising low-power smart edge IoT breakthroughs inspired by brain mechanisms.
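As an illustration of the bio-plausible rule mentioned above, a pair-based STDP update strengthens a synapse when the presynaptic spike precedes the postsynaptic one and weakens it otherwise. A generic software sketch of that rule, not the analog circuit behaviour of the A-SNN:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate if pre fires before post, depress otherwise."""
    dt = t_post - t_pre                      # spike timing difference in ms
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)    # causal pair -> potentiation
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau)    # anti-causal pair -> depression
    return min(1.0, max(0.0, w))             # keep the weight bounded

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)  # pre before post: weight increases
w = stdp_update(w, t_pre=30.0, t_post=22.0)  # post before pre: weight decreases
print(round(w, 4))
```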
Li, Yifan. "Edge partitioning of large graphs." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066346/document.
In this thesis, we mainly focus on a fundamental problem, graph partitioning, in the context of the unexpectedly fast growth of data sources, ranging from social networks to the Internet of Things. In particular, to cope with intractable properties existing in many graphs, e.g. power-law degree distributions, we apply the vertex-cut fashion, instead of the traditional edge-cut method, to achieve a balanced workload in distributed graph processing. Besides, to reduce the inter-partition communication cost, we present a block-based edge partition method that can efficiently exploit the locality underlying graph structures to enhance the execution of graph algorithms. With this method, both the communication overhead and the runtime can be decreased greatly compared to existing approaches. The challenges arising in big graphs also include their high variety. As we know, most real-life graph applications produce heterogeneous datasets, in which the vertices and/or edges are allowed to have different types or labels. A large number of graph mining algorithms have also been proposed with particular concern for label attributes. For this reason, our work is extended to multi-layer graphs by taking into account edge closeness and label distribution during the partitioning process. Its outstanding performance on real-world datasets is finally demonstrated.
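Vertex-cut (edge) partitioning is commonly realized with a greedy assignment that prefers partitions already holding an endpoint of the edge, falling back to the least-loaded partition to keep the workload balanced. The sketch below shows that baseline idea, not the block-based method of the thesis:

```python
from collections import defaultdict

def greedy_vertex_cut(edges, k, slack=1):
    """Assign each edge to one of k partitions; vertices may be replicated."""
    load = [0] * k
    replicas = defaultdict(set)          # vertex -> partitions holding a copy
    assignment = {}
    for n, (u, v) in enumerate(edges):
        shared = replicas[u] & replicas[v]
        candidates = shared or (replicas[u] | replicas[v]) or set(range(k))
        p = min(candidates, key=lambda i: load[i])
        if load[p] > n / k + slack:      # override locality to keep the load balanced
            p = min(range(k), key=lambda i: load[i])
        assignment[(u, v)] = p
        load[p] += 1
        replicas[u].add(p)
        replicas[v].add(p)
    rep_factor = sum(len(s) for s in replicas.values()) / len(replicas)
    return assignment, load, rep_factor

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4), (4, 0)]
_, load, rf = greedy_vertex_cut(edges, k=2)
print(load, round(rf, 2))   # edges per partition and average vertex replication factor
```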
Huygens, David. "Design of survivable networks with bounded-length paths." Doctoral thesis, Universite Libre de Bruxelles, 2005. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211008.
In this thesis, we consider the k-edge-connected L-hop-constrained network design problem. Given a weighted graph G=(N,E), a set D of pairs of terminal nodes, and two integers k,L > 1, this problem consists in finding, in G, a minimum-cost subgraph such that, between each pair in D, there exist at least k edge-disjoint paths of length at most L. This problem is of great interest in the telecommunications industry, where highly reliable networks have to be built. We first study the particular case where the set of demands D is reduced to a single pair {s,t}. We propose an integer programming formulation for the problem, which consists of the st-cut and trivial inequalities, along with the so-called L-st-path-cut inequalities. We show that these three classes of inequalities completely describe the associated polytope when k=2 and L=2 or 3, and give necessary and sufficient conditions for them to be facet-defining. We also consider the dominant of the associated polytope, and discuss how the previous inequalities can be separated in polynomial time.
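For reference, the cut-based formulation sketched in the previous paragraph is usually written in the following generic form (a hedged reconstruction; the exact statement and index conventions of the L-st-path-cut inequalities in the thesis may differ):

```latex
\begin{aligned}
\min \ & \sum_{e \in E} c_e x_e \\
\text{s.t.}\ & x(\delta(W)) \ge k && \text{for every } st\text{-cut } \delta(W) \quad (st\text{-cut inequalities}),\\
             & x(T) \ge k        && \text{for every } L\text{-}st\text{-path-cut } T,\\
             & 0 \le x_e \le 1,\ x_e \in \{0,1\} && \text{for all } e \in E \quad (\text{trivial inequalities}),
\end{aligned}
```

where an L-st-path-cut T is obtained from a partition V_0, ..., V_{L+1} of the node set with s in V_0 and t in V_{L+1}, by taking all edges joining two non-consecutive classes; since any st-path of length at most L must use at least one such edge, x(T) >= k is valid.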
We then extend the complete and minimal description obtained above to any number k of required edge-disjoint L-st-paths, but only when L=2. We devise a cutting-plane algorithm to solve the problem, using the previous polynomial separations, and present some computational results.
After that, we consider the case where there is more than one demand in D. We first show that the problem is strongly NP-hard, for any fixed L, even when all the demands in D have one root node in common. For k=2 and L=2,3, we give an integer programming formulation, based on the previous constraints written for all pairs {s,t} in D. We then proceed by giving several new classes of facet-defining inequalities, valid for the problem in general, but more adapted to the rooted case. We propose separation procedures for these inequalities, which are embedded within a Branch-and-Cut algorithm to solve the problem when L=2,3. Extensive computational results are given and analyzed for both random and real instances.
Since those results appear less satisfactory in the case of arbitrary demands (not necessarily rooted), we present additional families of valid inequalities for that situation. Again, separation procedures are devised for them and added to our previous Branch-and-Cut algorithm, in order to assess the practical improvement they provide.
Finally, we study the problem for greater values of L. In particular, when L=4, we propose new families of constraints for the problem of finding a subgraph that contains at least two L-st-paths that are either node-disjoint or edge-disjoint. Using these, we obtain an integer programming formulation in the space of the design variables for each case.
Doctorate in Sciences, specialisation in Computer Science
Li, Yifan. "Edge partitioning of large graphs." Electronic Thesis or Diss., Paris 6, 2017. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2017PA066346.pdf.
Du, Yifan. "Collaborative crowdsensing at the edge." Electronic Thesis or Diss., Sorbonne université, 2020. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2020SORUS032.pdf.
Mobile crowdsensing is a powerful mechanism that contributes to the ubiquitous sensing of data at a relatively low cost. With mobile crowdsensing, people provide valuable observations across time and space using sensors embedded in or connected to their smart devices, e.g., smartphones. In particular, opportunistic crowdsensing empowers citizens to sense objective phenomena at an urban and fine-grained scale, leveraging an application running in the background. Still, crowdsensing faces challenges: the relevance of the provided measurements depends on the adequacy of the sensing context with respect to the phenomenon that is analyzed; the uncontrolled collection of massive data leads to low sensing quality and high resource consumption on devices; and crowdsensing at scale also involves significant communication, computation, and financial costs due to the dependence on the cloud for the post-processing of raw sensing data. This thesis aims to establish opportunistic crowdsensing as a reliable means of environmental monitoring. We advocate enforcing the cost-effective collection of high-quality data and the inference of the physical phenomena at the end device. To this end, our research focuses on defining a set of protocols that together implement collaborative crowdsensing at the edge, combining: (1) inference of the crowdsensor's physical context characterizing the gathered data: we assess the context beyond the geographical position. We introduce an online learning approach running on the device to overcome the diversity of classification performance due to the heterogeneity of the crowdsensors. We specifically introduce a hierarchical algorithm for context inference that requires little feedback from users, while increasing the inference accuracy per user. (2) Context-aware grouping of crowdsensors to share the workload and support selective sensing: we introduce an ad hoc collaboration strategy, which groups co-located crowdsensors together and assigns them various roles according to their respective contexts. Evaluation results show that the overall resource consumption due to crowdsensing is reduced and the data quality is enhanced, compared to the cloud-centric architecture. (3) Data aggregation on the move to enhance the knowledge transferred to the cloud: we introduce a distributed interpolation-mediated aggregation approach running on the end device. We model interpolation as a tensor completion problem and propose tensor-wise aggregation, which is performed when crowdsensors encounter one another. Evaluation results show significant savings in terms of cellular communication, cloud computing, and, therefore, financial costs, while the overall data accuracy remains comparable to the cloud-centric approach. In summary, the proposed collaborative crowdsensing approach reduces the costs at both the end device and the cloud, while increasing the overall data quality.
Le, Xuan-Chien. "Improving performance of non-intrusive load monitoring with low-cost sensor networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S019/document.
In smart homes, human intervention in the energy system needs to be eliminated as much as possible, and an energy management system is required to automatically adjust the power consumption of the electrical devices. To design such a system, a load monitoring system has to be deployed in one of two ways: intrusive or non-intrusive. The intrusive approach requires a high deployment cost and too much technical intervention in the power supply. Therefore, the Non-Intrusive Load Monitoring (NILM) approach, in which the operation of a device can be detected based on features extracted from the aggregate power consumption, is more promising. The difficulty of any NILM algorithm is the ambiguity among devices with the same power characteristics. To overcome this challenge, in this thesis we propose to use external information to improve the performance of existing NILM algorithms. The first proposed additional features relate to the previous state of each device, such as the state transition probability or the Hamming distance between the current and previous states. They are used to select the most suitable set of operating devices among all possible combinations when solving the l1-norm minimization problem of NILM with a brute-force algorithm. Besides, we also propose to use another external feature: the operating probability of each device, provided by an additional Wireless Sensor Network (WSN). Differently from intrusive load monitoring, in this so-called SmartSense system only a subset of all devices is monitored by the sensors, which makes the system much less intrusive. Two approaches are applied in the SmartSense system. The first approach applies an edge detector to detect step changes in the power signal and then compares them with an existing library to identify the corresponding devices. Meanwhile, the second approach tries to solve the l1-norm minimization problem of NILM with a compositional Pareto-algebraic heuristic and dynamic programming algorithms. The simulation results show that the performance of the proposed algorithms is significantly improved by the operating probabilities of the monitored devices provided by the WSN. Because only part of the devices are monitored, the selected ones must satisfy some criteria, including a high usage rate and a high degree of confusion between their patterns and those of the other devices.
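The brute-force selection step described above can be illustrated as follows: enumerate device state combinations, keep those whose summed power signatures explain the aggregate measurement, and use the externally provided operating probabilities to break ambiguities between devices with similar power levels. This is a simplified sketch with hypothetical appliance data, not the exact scoring used in the thesis:

```python
from itertools import product

# Hypothetical rated powers (W) and WSN-provided operating probabilities.
devices = {"kettle": 2000, "heater": 2000, "fridge": 150, "tv": 120}
prior_on = {"kettle": 0.1, "heater": 0.7, "fridge": 0.9, "tv": 0.4}

def disaggregate(aggregate_w, tol_w=60):
    names = list(devices)
    best, best_score = None, float("inf")
    for states in product((0, 1), repeat=len(names)):
        total = sum(s * devices[n] for s, n in zip(states, names))
        residual = abs(aggregate_w - total)
        if residual > tol_w:
            continue
        # Penalize combinations that contradict the observed operating probabilities.
        penalty = sum(abs(s - prior_on[n]) for s, n in zip(states, names))
        score = residual + 100 * penalty
        if score < best_score:
            best, best_score = dict(zip(names, states)), score
    return best

print(disaggregate(2150))   # picks heater + fridge rather than kettle + fridge
```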
Abernot, Madeleine. "Digital oscillatory neural network implementation on FPGA for edge artificial intelligence applications and learning." Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS074.
In the last decades, the multiplication of edge devices in many industrial domains has drastically increased the amount of data to process and the complexity of the tasks to solve, motivating the emergence of probabilistic machine learning algorithms with artificial intelligence (AI) and artificial neural networks (ANNs). However, classical edge hardware systems based on the von Neumann architecture cannot efficiently handle this large amount of data. Thus, novel neuromorphic computing paradigms with distributed memory are explored, mimicking the structure and data representation of biological neural networks. Lately, most neuromorphic-paradigm research has focused on spiking neural networks (SNNs), taking inspiration from signal transmission through spikes in biological networks. In SNNs, information is transmitted through spikes using the time domain to provide natural and low-energy continuous data computation. Recently, oscillatory neural networks (ONNs) have appeared as an alternative neuromorphic paradigm for low-power, fast, and efficient time-domain computation. ONNs are networks of coupled oscillators emulating the collective computational properties of brain areas through oscillations. Recent ONN implementations, combined with the emergence of low-power compact devices for ONNs, encourage renewed attention to ONNs for edge computing. The state-of-the-art ONN is configured as an oscillatory Hopfield network (OHN) with fully coupled recurrent connections to perform pattern recognition with limited accuracy. However, the large number of OHN synapses limits the scalability of ONN implementations and the ONN application scope. The focus of this thesis is to study whether and how ONNs can solve meaningful edge AI applications, using a proof-of-concept of the ONN paradigm with a digital implementation on FPGA. First, it explores novel learning algorithms for the OHN, unsupervised and supervised, to improve accuracy and to provide continual on-chip learning. Then, it studies novel ONN architectures, taking inspiration from state-of-the-art layered ANN models, to create cascaded OHNs and multi-layer ONNs. The novel learning algorithms and architectures are demonstrated with the digital design performing edge AI applications, from image processing with pattern recognition, image edge detection, feature extraction, or image classification, to robotics applications with obstacle avoidance.
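The oscillatory Hopfield configuration mentioned above can be emulated in software with phase oscillators coupled through Hebbian weights: phases settle near 0 or π and are read out as a binary pattern. A brief behavioural sketch of that principle, not the FPGA design itself:

```python
import math, random

patterns = [[1, -1, 1, -1, 1], [1, 1, -1, -1, 1]]      # stored binary patterns
n = len(patterns[0])
# Hebbian phase couplings between the n oscillators.
w = [[sum(p[i] * p[j] for p in patterns) / n if i != j else 0.0
      for j in range(n)] for i in range(n)]

def retrieve(cue, steps=300, dt=0.1):
    # Initialise phases near 0 (bit +1) or pi (bit -1) according to the noisy cue.
    theta = [(0.0 if b > 0 else math.pi) + random.uniform(-0.3, 0.3) for b in cue]
    for _ in range(steps):
        dtheta = [sum(w[i][j] * math.sin(theta[j] - theta[i]) for j in range(n))
                  for i in range(n)]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    # Read out each phase relative to the first oscillator.
    return [1 if math.cos(t - theta[0]) > 0 else -1 for t in theta]

random.seed(1)
print(retrieve([1, -1, 1, -1, -1]))   # noisy cue relaxes back to the first pattern
```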
Da, Silva Veith Alexandre. "Quality of Service Aware Mechanisms for (Re)Configuring Data Stream Processing Applications on Highly Distributed Infrastructure." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN050/document.
A large part of big data is most valuable when analysed quickly, as it is generated. Under several emerging application scenarios, such as smart cities, operational monitoring of large infrastructure, and the Internet of Things (IoT), continuous data streams must be processed under very short delays. In multiple domains, there is a need for processing data streams to detect patterns, identify failures, and gain insights. Data is often gathered and analysed by Data Stream Processing Engines (DSPEs). A DSPE commonly structures an application as a directed graph or dataflow. A dataflow has one or multiple sources (i.e., gateways or actuators); operators that perform transformations on the data (e.g., filtering); and sinks (i.e., queries that consume or store the data). Most complex operator transformations store information about previously received data as new data is streamed in. Also, a dataflow has stateless operators that consider only the current data. Traditionally, Data Stream Processing (DSP) applications were conceived to run in clusters of homogeneous resources or on the cloud. In a cloud deployment, the whole application is placed on a single cloud provider to benefit from virtually unlimited resources. This approach allows for elastic DSP applications with the ability to allocate additional resources or release idle capacity on demand during runtime to match the application requirements. We introduce a set of strategies to place operators onto cloud and edge while considering the characteristics of resources and meeting the requirements of applications. In particular, we first decompose the application graph by identifying behaviours such as forks and joins, and then dynamically split the dataflow graph across edge and cloud. Comprehensive simulations and a real testbed considering multiple application settings demonstrate that our approach can improve the end-to-end latency by over 50%, as well as other QoS metrics. The solution search space for operator reassignment can be enormous depending on the number of operators, streams, resources and network links. Moreover, it is important to minimise the cost of migration while improving latency. Reinforcement Learning (RL) and Monte-Carlo Tree Search (MCTS) have been used to tackle problems with large search spaces and states, performing at human level or better in games such as Go. We model the application reconfiguration problem as a Markov Decision Process (MDP) and investigate the use of RL and MCTS algorithms to devise reconfiguration plans that improve QoS metrics.
Da, Silva Silvestre Guthemberg. "Designing Adaptive Replication Schemes for Efficient Content Delivery in Edge Networks." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00931562.
Liu, Chenguang. "Low level feature detection in SAR images." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT015.
In this thesis we develop low-level feature detectors for Synthetic Aperture Radar (SAR) images to facilitate the joint use of SAR and optical data. Line segments and edges are very important low-level features in images, which can be used for many applications like image analysis, image registration and object detection. In contrast to the availability of many efficient low-level feature detectors dedicated to optical images, there are very few efficient line segment and edge detectors for SAR images, mostly because of the strong multiplicative noise. In this thesis we develop a generic line segment detector and an efficient edge detector for SAR images. The proposed line segment detector, named LSDSAR, is based on a Markovian a contrario model and the Helmholtz principle, where line segments are validated according to their meaningfulness. More specifically, a line segment is validated if its expected number of occurrences in a random image under the hypothesis of the Markovian a contrario model is small. Contrary to the usual a contrario approaches, the Markovian a contrario model allows strong filtering in the gradient computation step, since dependencies between local orientations of neighbouring pixels are permitted thanks to the use of a first-order Markov chain. The proposed Markovian a contrario model-based line segment detector LSDSAR benefits from the accuracy and efficiency of the new definition of the background model: many true line segments in SAR images are detected while the number of false detections is controlled. Moreover, very little parameter tuning is required in practical applications of LSDSAR. The second contribution of this thesis is a deep-learning-based edge detector for SAR images. The contributions of the proposed edge detector are twofold: 1) under the hypothesis that both optical images and real SAR images can be divided into piecewise constant areas, we propose to simulate a SAR dataset using an optical dataset; 2) we propose to train a classical CNN (convolutional neural network) edge detector, HED, directly on the gradient fields of images. This, by using an adequate method to compute the gradient, enables SAR images at test time to have statistics similar to the training set as inputs to the network. More precisely, the gradient distribution for all homogeneous areas is the same, and the gradient distribution for two homogeneous areas across boundaries depends only on the ratio of their mean intensity values. The proposed method, GRHED, significantly improves the state of the art, especially in very noisy cases such as 1-look images.
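Because SAR noise is multiplicative, gradients are usually computed from the ratio of local means rather than their difference, so that homogeneous areas give the same response regardless of their reflectivity. A small 1-D illustration of that ratio-of-means idea (the actual detector uses 2-D exponentially weighted windows; this is only a simplified sketch):

```python
import math, random

def ratio_gradient(signal, half_window=3):
    """Log-ratio of the means of the two half-windows around each position."""
    grads = []
    for i in range(half_window, len(signal) - half_window):
        left = sum(signal[i - half_window:i]) / half_window
        right = sum(signal[i:i + half_window]) / half_window
        grads.append(abs(math.log(right / left)))
    return grads

# Two homogeneous areas whose mean reflectivities differ by a factor of 4,
# corrupted by multiplicative speckle-like fluctuations.
random.seed(0)
signal = [10 * random.uniform(0.5, 1.5) for _ in range(30)] + \
         [40 * random.uniform(0.5, 1.5) for _ in range(30)]
g = ratio_gradient(signal)
print(max(range(len(g)), key=g.__getitem__) + 3)   # strongest response near the boundary (index 30)
```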
Ben, ameur Ayoub. "Artificial intelligence for resource allocation in multi-tenant edge computing." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS019.
We consider in this thesis Edge Computing (EC) as a multi-tenant environment where Network Operators (NOs) own edge resources deployed in base stations, central offices and/or smart boxes, virtualize them and let third-party Service Providers (SPs) - or tenants - distribute part of their applications in the edge in order to serve the requests sent by the users. SPs with heterogeneous requirements coexist in the edge, ranging from Ultra-Reliable Low-Latency Communications (URLLC) for controlling cars or robots, to massive Machine-Type Communication (mMTC) for the Internet of Things (IoT), requiring a massive number of connected devices, to media services, such as video streaming and Augmented/Virtual Reality (AR/VR), whose quality of experience is strongly dependent on the available resources. SPs independently orchestrate their sets of microservices, running on containers, which can be easily replicated, migrated or stopped. Each SP can adapt to the resources allocated by the NO, deciding whether to run microservices in the devices, in the edge nodes or in the cloud. We aim in this thesis to advance the emergence of real deployments of the "true" EC in real networks, by showing the utility that NOs can collect thanks to EC. We believe that this can contribute to encouraging concrete engagement and investment of NOs in EC. For this, we aim to design novel data-driven strategies that efficiently allocate resources between heterogeneous SPs, at the edge owned by the NO, in order to optimize its relevant objectives, e.g., cost reduction, revenue maximization and better Quality of Service (QoS) perceived by end users, in terms of latency, reliability and throughput, while satisfying the SPs' requirements. This thesis presents a perspective on how NOs, the sole owners of resources at the far edge (e.g., at base stations), can extract value through the implementation of EC within a multi-tenant environment. By promoting this vision of EC and by supporting it via quantitative results and analysis, this thesis provides, mainly to NOs, findings that can influence decision strategies about the future deployment of EC. This might foster the emergence of novel low-latency and data-intensive applications, such as high-resolution augmented reality, which are not feasible in the current Cloud Computing (CC) setting. Another contribution of the thesis is that it provides solutions based on novel methods that harness the power of data-driven optimization. We indeed adapt cutting-edge techniques from Reinforcement Learning (RL) and sequential decision making to the practical problem of resource allocation in EC. In doing so, we succeed in reducing the learning time of the adopted strategies to scales that are compatible with the EC dynamics, via careful design of estimation models embedded in the learning process. Our strategies are conceived so as not to violate the confidentiality guarantees that are essential for SPs to accept running their computation at the EC in a multi-tenant setting.
Mestoukirdi, Mohamad. "Reliable and Communication-Efficient Federated Learning for Future Intelligent Edge Networks." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS432.
In the realm of future 6G wireless networks, integrating the intelligent edge through the advent of AI signifies a momentous leap forward, promising revolutionary advancements in wireless communication. This integration fosters a harmonious synergy, capitalizing on the collective potential of these transformative technologies. Central to this integration is the role of federated learning, a decentralized learning paradigm that upholds data privacy while harnessing the collective intelligence of interconnected devices. By embracing federated learning, 6G networks can unlock a myriad of benefits for both wireless networks and edge devices. On the one hand, wireless networks gain the ability to exploit data-driven solutions, surpassing the limitations of traditional model-driven approaches. In particular, leveraging real-time data insights will empower 6G networks to adapt, optimize performance, and enhance network efficiency dynamically. On the other hand, edge devices benefit from personalized experiences and tailored solutions, catered to their specific requirements. Specifically, edge devices will experience improved performance and reduced latency through localized decision-making, real-time processing, and reduced reliance on centralized infrastructure. In the first part of the thesis, we tackle the predicament of statistical heterogeneity in federated learning stemming from divergent data distributions among devices' datasets. Rather than training a conventional one-model-fits-all, which often performs poorly with non-IID data, we propose a user-centric set of rules that produce personalized models tailored to each user's objectives. To mitigate the prohibitive communication overhead associated with training a distinct personalized model for each user, users are partitioned into clusters based on the similarity of their objectives. This enables the collective training of cohort-specific personalized models. As a result, the total number of personalized models trained is reduced. This reduction lessens the consumption of the wireless resources required to transmit model updates across bandwidth-limited wireless channels. In the second part, our focus shifts towards integrating remote IoT devices into the intelligent edge by leveraging unmanned aerial vehicles (UAVs) as federated learning orchestrators. While previous studies have extensively explored the potential of UAVs as flying base stations or relays in wireless networks, their utilization in facilitating model training is still a relatively new area of research. In this context, we leverage UAV mobility to bypass the unfavorable channel conditions in rural areas and establish learning grounds for remote IoT devices. However, UAV deployment poses challenges in terms of scheduling and trajectory design. To this end, a joint optimization of the UAV trajectory, the device scheduling, and the learning performance is formulated and solved using convex optimization techniques and graph theory. In the third and final part of this thesis, we take a critical look at the communication overhead imposed by federated learning on wireless networks. While compression techniques such as quantization and sparsification of model updates are widely used, they often achieve communication efficiency at the cost of reduced model performance. To overcome this limitation, we employ over-parameterized random networks that approximate target networks through parameter pruning rather than direct optimization.
This approach has been demonstrated to require transmitting no more than a single bit of information per model parameter. We show that state-of-the-art (SoTA) methods fail to exploit the full communication-efficiency gains attainable with this approach. Accordingly, we propose a regularized loss function which considers the entropy of transmitted updates, resulting in notable improvements to communication and memory efficiency during federated training on edge devices without sacrificing accuracy.
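For readers who want a concrete picture of the cohort-based personalization described in this abstract, the sketch below groups users by the similarity of their locally computed update directions and then trains one federated-averaged model per cohort. It is an illustrative toy in Python, not the thesis's algorithm; the similarity threshold and all names (cosine_sim, cluster_users, fedavg) are assumptions introduced for the example.

# Minimal sketch of cohort-based personalized federated learning (illustrative only).
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def cluster_users(updates, threshold=0.5):
    """Greedily group users whose local update directions are similar."""
    clusters = []
    for uid, upd in enumerate(updates):
        for members in clusters:
            if cosine_sim(upd, updates[members[0]]) >= threshold:
                members.append(uid)
                break
        else:
            clusters.append([uid])
    return clusters

def fedavg(local_models, sizes):
    """Weighted average of local model parameters (FedAvg)."""
    weights = np.asarray(sizes, dtype=float) / sum(sizes)
    return sum(w * m for w, m in zip(weights, local_models))

# Toy example: 6 users, 2 underlying objectives, models are plain parameter vectors.
rng = np.random.default_rng(0)
true_dirs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
updates = [true_dirs[i % 2] + 0.1 * rng.normal(size=2) for i in range(6)]
clusters = cluster_users(updates)
for c in clusters:
    # One personalized (cohort-specific) model per cluster instead of one per user.
    model = fedavg([updates[u] for u in c], sizes=[1] * len(c))
    print("cluster", c, "-> cohort model", np.round(model, 2))

Partitioning before averaging is what reduces the number of personalized models that must be exchanged over the bandwidth-limited wireless channel.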
Yu, Shuai. "Multi-user computation offloading in mobile edge computing." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS462.
Повний текст джерелаMobile Edge Computing (MEC) is an emerging computing model that extends the cloud and its services to the edge of the network. Considering the execution of emerging resource-intensive applications in a MEC network, computation offloading is a proven paradigm for enabling resource-intensive applications on mobile devices. Moreover, in view of emerging mobile collaborative applications (MCA), the offloaded tasks can be duplicated when multiple users are in the same proximity. This motivates us to design a collaborative computation offloading scheme for a multi-user MEC network. In this context, we separately study the collaborative computation offloading schemes for the scenarios of MEC offloading, device-to-device (D2D) offloading and hybrid offloading. In the MEC offloading scenario, we assume that multiple mobile users offload duplicated computation tasks to the network edge servers, and share the computation results among them. Our goal is to develop the optimal fine-grained collaborative offloading strategies with caching enhancements to minimize the overall execution delay at the mobile terminal side. To this end, we propose an optimal offloading with caching-enhancement scheme (OOCS) for the femto-cloud and the mobile edge computing scenarios, respectively. Simulation results show that, compared to six alternative solutions in the literature, our single-user OOCS can reduce execution delay by up to 42.83% and 33.28% for single-user femto-cloud and single-user mobile edge computing, respectively. On the other hand, our multi-user OOCS can further reduce the delay by 11.71% compared to single-user OOCS through users' cooperation. In the D2D offloading scenario, we assume that duplicated computation tasks are processed on specific mobile users and computation results are shared through a Device-to-Device (D2D) multicast channel. Our goal here is to find an optimal network partition for D2D multicast offloading, in order to minimize the overall energy consumption at the mobile terminal side. To this end, we first propose a D2D multicast-based computation offloading framework where the problem is modelled as a combinatorial optimization problem, and then solved using concepts from maximum weighted bipartite matching and coalitional games. Note that our proposal considers the delay constraint for each mobile user as well as the battery level to guarantee fairness. To gauge the effectiveness of our proposal, we simulate three typical interactive components. Simulation results show that our algorithm can significantly reduce the energy consumption, and guarantee battery fairness among multiple users at the same time. We then extend the D2D offloading to hybrid offloading with social relationship consideration. In this context, we propose a hybrid multicast-based task execution framework for mobile edge computing, where a crowd of mobile devices at the network edge leverage network-assisted D2D collaboration for wireless distributed computing and outcome sharing. The framework is social-aware in order to build effective D2D links [...]
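The following toy comparison illustrates the kind of offloading decision with caching enhancement discussed above; it is not the OOCS scheme itself, and every numeric parameter (CPU cycles, data sizes, rates) is a hypothetical value chosen only for the example.

# Illustrative delay comparison for a single task (not the OOCS scheme from the thesis).

def local_delay(cycles, f_local):
    return cycles / f_local                      # seconds on the mobile CPU

def offload_delay(data_in, data_out, rate_up, rate_down, cycles, f_edge):
    return data_in / rate_up + cycles / f_edge + data_out / rate_down

def cached_delay(data_out, rate_down):
    return data_out / rate_down                  # result already computed and cached at the edge

# Hypothetical task: 1e9 CPU cycles, 2 MB input, 0.2 MB output.
task = dict(cycles=1e9, data_in=2e6 * 8, data_out=0.2e6 * 8)   # bits
f_local, f_edge = 1e9, 10e9                                     # Hz
rate_up, rate_down = 20e6, 50e6                                 # bit/s

options = {
    "local": local_delay(task["cycles"], f_local),
    "offload": offload_delay(task["data_in"], task["data_out"],
                             rate_up, rate_down, task["cycles"], f_edge),
    "cached": cached_delay(task["data_out"], rate_down),
}
best = min(options, key=options.get)
print({k: round(v, 3) for k, v in options.items()}, "-> choose", best)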
Garcia, Lorca Federico. "Filtres récursifs temps réel pour la détection de contours : optimisations algorithmiques et architecturales." Paris 11, 1996. http://www.theses.fr/1996PA112439.
Повний текст джерелаBouzidi, Halima. "Efficient Deployment of Deep Neural Networks on Hardware Devices for Edge AI." Electronic Thesis or Diss., Valenciennes, Université Polytechnique Hauts-de-France, 2024. http://www.theses.fr/2024UPHF0006.
Повний текст джерелаNeural Networks (NN) have become a leading force in today's digital landscape. Inspired by the human brain, their intricate design allows them to recognize patterns, make informed decisions, and even predict forthcoming scenarios with impressive accuracy. NN are widely deployed in Internet of Things (IoT) systems, further elevating interconnected devices' capabilities by empowering them to learn and auto-adapt in real-time contexts. However, the proliferation of data produced by IoT sensors makes it difficult to send them to a centralized cloud for processing. This is where the allure of edge computing becomes captivating. Processing data closer to where it originates, at the edge, reduces latency, enables real-time decisions with less effort, and efficiently manages network congestion. Integrating NN on edge devices for IoT systems enables more efficient and responsive solutions, ushering in a new age of self-sustaining Edge AI. However, deploying NN on resource-constrained edge devices presents a myriad of challenges: (i) the inherent complexity of neural network architectures, which requires significant computational and memory capabilities; (ii) the limited power budget of IoT devices, which makes NN inference prone to rapid energy depletion, drastically reducing system utility; (iii) the hurdle of ensuring harmony between NN and HW designs as they evolve at different rates; (iv) the lack of adaptability to the dynamic runtime environment and the intricacies of input data. Addressing these challenges, this thesis aims to establish innovative methods that extend conventional NN design frameworks, notably Neural Architecture Search (NAS). By integrating HW and runtime contextual features, our methods aspire to enhance NN performance while abstracting away the need for the human-in-the-loop. Firstly, we incorporate HW properties into the NAS by tailoring the design of NN to clock frequency variations (DVFS) to minimize the energy footprint. Secondly, we leverage dynamicity within NN from a design perspective, culminating in a comprehensive Hardware-aware Dynamic NAS with DVFS features. Thirdly, we explore the potential of Graph Neural Networks (GNN) at the edge by developing a novel HW-aware NAS with distributed computing features on heterogeneous MPSoC. Fourthly, we address SW/HW co-optimization on heterogeneous MPSoCs by proposing an innovative scheduling strategy that leverages NN adaptability and parallelism across computing units. Fifthly, we explore the prospect of ML4ML (Machine Learning for Machine Learning) by introducing techniques to estimate NN performance on edge devices using neural architectural features and ML-based predictors. Finally, we develop an end-to-end self-adaptive evolutionary HW-aware NAS framework that progressively learns the importance of NN parameters to guide the search process toward Pareto optimality effectively. Our methods can contribute to elaborating an end-to-end design framework for neural networks on edge hardware devices. They enable leveraging multiple optimization opportunities at both the software and hardware levels, thus improving the performance and efficiency of Edge AI.
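The sketch below illustrates, in heavily simplified form, what a hardware-aware architecture search with DVFS awareness can look like: candidate networks are scored by an accuracy proxy penalized by estimated latency and energy at each clock frequency. All models, formulas and constants here are assumptions made for illustration and do not reproduce the NAS framework of the thesis.

# Toy hardware-aware architecture search (illustrative; not the thesis's NAS framework).
import random

random.seed(0)
FREQS_GHZ = [0.6, 1.0, 1.4]          # hypothetical DVFS operating points

def sample_arch():
    return {"depth": random.choice([4, 8, 12]), "width": random.choice([16, 32, 64])}

def accuracy_proxy(arch):
    # Stand-in for a trained predictor: bigger networks score higher, with diminishing returns.
    return 0.7 + 0.02 * arch["depth"] ** 0.5 + 0.001 * arch["width"]

def latency_ms(arch, freq_ghz):
    ops = arch["depth"] * arch["width"] ** 2     # crude operation count
    return ops / (freq_ghz * 1e5)

def energy_mj(arch, freq_ghz):
    return latency_ms(arch, freq_ghz) * freq_ghz ** 2   # rough V^2 * f scaling assumption

def score(arch, freq, lat_budget_ms=2.0, energy_weight=0.05):
    penalty = 10.0 if latency_ms(arch, freq) > lat_budget_ms else 0.0   # hard latency constraint
    return accuracy_proxy(arch) - energy_weight * energy_mj(arch, freq) - penalty

best = max(((sample_arch(), f) for _ in range(200) for f in FREQS_GHZ),
           key=lambda af: score(*af))
print("best candidate:", best[0], "at", best[1], "GHz, score", round(score(*best), 3))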
Jardel, Fanny. "Codage et traitements distribués pour les réseaux de communication." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0001/document.
Повний текст джерелаThis work is dedicated to the design, analysis, and the performance evaluation of new coding schemes suitable for distributed storage systems. The first part is devoted to spatially coupled codes for erasure channels. A new method of spatial coupling for low-density parity-check ensembles is proposed. The method is inspired by overlapped layered coding. Edges of local ensembles and those defining the spatial coupling are separately built. We also propose to saturate the whole Root-LDPC boundary via spatial coupling of its parity bits to cope with quasi-static fading. Then, spatial coupling is applied on a Root-LDPC ensemble with double diversity designed for a channel with 4 block-erasure states. In the second part of this work, we consider non-binary product codes with MDS components and their iterative row-column algebraic decoding on the erasure channel. Both independent and block erasures are considered. A compact graph representation is introduced on which we define double-diversity edge colorings via the rootcheck concept. Stopping sets are defined and a full characterization is given in the context of MDS components. A differential evolution edge coloring algorithm that produces colorings with a large population of minimal rootcheck order symbols is presented. The performance of MDS-based product codes with and without double-diversity coloring is analyzed in the presence of both block and independent erasures. Furthermore, numerical results show excellent performance in the presence of unequal erasure probability due to double-diversity colorings.
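As background for the iterative row-column decoding mentioned above, the sketch below tracks only the erasure pattern of a product code whose component codes are (n, k) MDS: any row or column containing at most n - k erasures is recoverable, and the decoder alternates row and column passes until no further progress is made. The code abstracts away the algebraic recovery of symbol values, and the parameters are illustrative.

# Erasure-pattern view of iterative row-column decoding of a product code with (n, k) MDS components.
import numpy as np

def iterative_erasure_decode(erased, t):
    """erased: boolean matrix (True = erased symbol); t = n - k correctable erasures per row/column."""
    erased = erased.copy()
    progress = True
    while progress:
        progress = False
        for i in range(erased.shape[0]):          # row decoding pass
            if 0 < erased[i, :].sum() <= t:
                erased[i, :] = False
                progress = True
        for j in range(erased.shape[1]):          # column decoding pass
            if 0 < erased[:, j].sum() <= t:
                erased[:, j] = False
                progress = True
    return erased

rng = np.random.default_rng(1)
pattern = rng.random((8, 8)) < 0.3                  # ~30% independent symbol erasures
remaining = iterative_erasure_decode(pattern, t=2)  # e.g. (8, 6) MDS components
print("initial erasures:", int(pattern.sum()), "| unrecovered after decoding:", int(remaining.sum()))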
Bouchireb, Khaled. "Amélioration des services vidéo fournis à travers les réseaux radio mobiles." Phd thesis, Paris, Télécom ParisTech, 2010. https://pastel.hal.science/pastel-00006335.
Повний текст джерелаIn this thesis, video communication systems are studied for application to video services provided over wireless mobile networks. This work emphasizes point-to-multipoint communications and proposes several enhancements to the current systems. First, a scheme combining robust decoding with retransmissions is defined so that the number of retransmissions is reduced and the quality of the received video can be controlled. As opposed to current retransmissionless and retransmission-based schemes, this scheme also offers the possibility to trade throughput for quality and vice versa. Then, the transmission of a two-level scalable video sequence towards several clients is considered. Schemes using the basic Go-back-N (GBN) and Selective Repeat (SR) Automatic Repeat reQuest (ARQ) techniques are studied. A new scheme is also proposed and studied. The new scheme reduces the buffering requirement at the receiver end while keeping the performance optimal (in terms of the amount of data successfully transmitted within a given period of time). The different schemes were shown to be applicable to 2G, 3G and WiMAX systems. Finally, we prove that retransmissions can be used in point-to-multipoint communications up to a given limit on the number of receivers (contrary to the current wireless systems where ARQ is only used in point-to-point communications). If retransmissions are introduced in the current Multicast/Broadcast services (supported by 3GPP and mobile WiMAX), the system will guarantee that a certain number of receivers obtain the nominal quality, whereas the current Multicast/Broadcast services do not guarantee the nominal quality to any receiver.
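The following Monte-Carlo sketch illustrates why retransmission-based multicast only scales up to a certain number of receivers: the average number of transmissions needed for all receivers to get a packet grows with the group size. The loss probability and the retransmission budget are assumptions, and the exact schemes analysed in the thesis are not reproduced.

# Monte-Carlo sketch: transmissions needed so that ALL receivers get a multicast packet
# under per-receiver i.i.d. loss (not the exact schemes analysed in the thesis).
import random

def transmissions_until_all_receive(n_receivers, loss_prob, rng):
    pending = set(range(n_receivers))
    count = 0
    while pending:
        count += 1
        pending = {r for r in pending if rng.random() < loss_prob}   # still missing after this attempt
    return count

rng = random.Random(42)
loss_prob, budget = 0.1, 3.0          # hypothetical loss rate and per-packet retransmission budget
for n in (1, 2, 5, 10, 20, 50, 100):
    avg = sum(transmissions_until_all_receive(n, loss_prob, rng) for _ in range(2000)) / 2000
    print(f"{n:4d} receivers -> avg {avg:.2f} transmissions", "(exceeds budget)" if avg > budget else "")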
Bouchireb, Khaled. "Amélioration des services vidéo fournis à travers les réseaux radio mobiles." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00006335.
Повний текст джерелаOueis, Jessica. "Gestion conjointe de ressources de communication et de calcul pour les réseaux sans fils à base de cloud." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM007/document.
Повний текст джерелаMobile Edge Cloud brings the cloud closer to mobile users by moving the cloud computational efforts from the internet to the mobile edge. We adopt a local mobile edge cloud computing architecture, where small cells are empowered with computational and storage capacities. Mobile users' offloaded computational tasks are executed at the cloud-enabled small cells. We propose the concept of small cells clustering for mobile edge computing, where small cells cooperate in order to execute offloaded computational tasks. A first contribution of this thesis is the design of a multi-parameter computation offloading decision algorithm, SM-POD. The proposed algorithm consists of a series of low-complexity successive and nested classifications of computational tasks at the mobile side, leading either to local computation or to offloading to the cloud. To reach the offloading decision, SM-POD jointly considers computational tasks, handsets, and communication channel parameters. In the second part of this thesis, we tackle the problem of setting up small cell clusters for mobile edge cloud computing in both the single-user and multi-user cases. The clustering problem is formulated as an optimization problem that jointly optimizes the computational and communication resource allocation, and the computational load distribution on the small cells participating in the computation cluster. We propose a cluster sparsification strategy, where we trade cluster latency for higher system energy efficiency. In the multi-user case, the optimization problem is not convex. In order to compute a clustering solution, we propose a convex reformulation of the problem, and we prove that both problems are equivalent. With the goal of finding a lower complexity clustering solution, we propose two heuristic small cells clustering algorithms. The first algorithm is based, as a first step, on resource allocation on the serving small cells where tasks are received. Then, in a second step, unserved tasks are sent to a small cell managing unit (SCM) that sets up computational clusters for the execution of these tasks. The main idea of this algorithm is task scheduling at both the serving small cells and the SCM side for higher resource allocation efficiency. The second proposed heuristic is an iterative approach in which serving small cells compute their desired clusters, without considering the presence of other users, and send their cluster parameters to the SCM. The SCM then checks for any excess resource allocation at the network small cells. The SCM reports any load excess to the serving small cells, which redistribute this load to less loaded small cells. In the final part of this thesis, we propose the concept of computation caching for edge cloud computing. With the aim of reducing the edge cloud computing latency and energy consumption, we propose caching popular computational tasks to prevent their re-execution. Our contribution here is two-fold: first, we propose a caching algorithm that is based on request popularity, computation size, required computational capacity, and small cell connectivity. This algorithm identifies requests that, if cached and downloaded instead of being re-computed, will increase the computation caching energy and latency savings. Second, we propose a method for setting up a search cluster of small cells for finding a cached copy of a request's computation.
The clustering policy exploits the relationship between task popularity and the probability of a task being cached, in order to identify possible locations of the cached copy. The proposed method reduces the search cluster size while guaranteeing a minimum cache hit probability.
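A minimal illustration of popularity- and size-aware computation caching follows; the scoring function and the task parameters are assumptions and do not correspond to the caching algorithm of the thesis.

# Illustrative computation-caching decision (not the exact policy of the thesis):
# greedily cache the computation results with the best expected savings per stored byte.

tasks = [
    # name, request popularity (req/h), compute time saved (s/req), result size (MB)
    ("face_detect",  120, 0.50, 8),
    ("ocr_page",      40, 1.20, 25),
    ("map_tile",     300, 0.05, 2),
    ("speech_to_txt", 15, 2.00, 40),
]
cache_capacity_mb = 50

def score(popularity, saved_s, size_mb):
    return popularity * saved_s / size_mb          # expected seconds saved per cached MB per hour

cached, used = [], 0
for name, pop, saved, size in sorted(tasks, key=lambda t: score(t[1], t[2], t[3]), reverse=True):
    if used + size <= cache_capacity_mb:
        cached.append(name)
        used += size
print("cached computations:", cached, f"({used}/{cache_capacity_mb} MB)")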
Le, Falher Géraud. "Characterizing edges in signed and vector-valued graphs." Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I013/document.
Повний текст джерелаWe develop methods to efficiently and accurately characterize edges in complex networks. In simple graphs, nodes are connected by a single semantic. For instance, two users are friends in a social network. Moreover, those connections are typically driven by node similarity, according to homophily. In the previous example, users become friends because of common features. By contrast, complex networks are graphs where every connection has one semantic among k possible ones. Those connections are moreover based on both partial homophily and heterophily of their endpoints. This additional information enables finer analysis of real-world graphs. However, it can be expensive to acquire, or is sometimes not known beforehand. We address the problems of inferring edge semantics in various settings. First, we consider graphs where edges have two opposite semantics, and where we observe the labels of some edges. These so-called signed graphs are a common way to represent polarized interactions. We propose two learning biases suited for directed and undirected signed graphs respectively. This leads us to design several algorithms leveraging the graph topology to solve a binary classification problem that we call edge sign prediction. Second, we consider graphs with k > 2 available semantics for edges. In that case of multilayer graphs, we are not provided with any edge label, but instead are given one feature vector for each node. Faced with such an unsupervised problem, we devise a quality criterion expressing how well an edge k-partition and k semantical vectors explain the observed connections. We optimize this goodness-of-explanation criterion in vectorial and matricial form.
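As a simple baseline for the edge sign prediction task defined above, the sketch below predicts the sign of an edge from the signs of the triangles it would close (a weak-balance heuristic). It is not one of the algorithms developed in the thesis, and the toy graph is an assumption.

# Baseline edge-sign prediction from common neighbours (weak-balance heuristic; illustrative only).
from collections import defaultdict

def predict_sign(u, v, signed_edges, default=+1):
    """Predict the sign of (u, v) by majority vote over triangles u - w - v."""
    sign = defaultdict(dict)
    for (a, b), s in signed_edges.items():
        sign[a][b] = s
        sign[b][a] = s
    votes = [sign[u][w] * sign[v][w] for w in set(sign[u]) & set(sign[v]) if w not in (u, v)]
    return default if not votes else (1 if sum(votes) >= 0 else -1)

# Toy signed graph: users 0-3, +1 = trust, -1 = distrust.
edges = {(0, 1): +1, (1, 2): -1, (0, 2): -1, (2, 3): +1, (0, 3): -1}
print("predicted sign of (1, 3):", predict_sign(1, 3, edges))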
Jardel, Fanny. "Codage et traitements distribués pour les réseaux de communication." Electronic Thesis or Diss., Paris, ENST, 2016. http://www.theses.fr/2016ENST0001.
Повний текст джерелаThis work is dedicated to the design, analysis, and the performance evaluation of new coding schemes suitable for distributed storage systems. The first part is devoted to spatially coupled codes for erasure channels. A new method of spatial coupling for low-density parity-check ensembles is proposed. The method is inspired by overlapped layered coding. Edges of local ensembles and those defining the spatial coupling are separately built. We also propose to saturate the whole Root-LDPC boundary via spatial coupling of its parity bits to cope with quasi-static fading. Then, spatial coupling is applied on a Root-LDPC ensemble with double diversity designed for a channel with 4 block-erasure states. In the second part of this work, we consider non-binary product codes with MDS components and their iterative row-column algebraic decoding on the erasure channel. Both independent and block erasures are considered. A compact graph representation is introduced on which we define double-diversity edge colorings via the rootcheck concept. Stopping sets are defined and a full characterization is given in the context of MDS components. A differential evolution edge coloring algorithm that produces colorings with a large population of minimal rootcheck order symbols is presented. The performance of MDS-based product codes with and without double-diversity coloring is analyzed in the presence of both block and independent erasures. Furthermore, numerical results show excellent performance in the presence of unequal erasure probability due to double-diversity colorings.
Le, Xuan Sang. "Co-conception Logiciel/FPGA pour Edge-computing : promotion de la conception orientée objet." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0041/document.
Повний текст джерелаCloud computing is often the most referenced computational model for the Internet of Things. This model adopts a centralized architecture where all sensor data is stored and processed in a single location. Despite its many advantages, this architecture suffers from low scalability as the amount of data available on the network continuously increases. It is worth noting that, currently, more than 50% of internet connections are between things. This can lead to reliability problems in real-time and latency-sensitive applications. Edge computing, which is based on a decentralized architecture, is known as a solution to this emerging problem by: (1) reinforcing the equipment at the edge (things) of the network and (2) pushing the data processing to the edge. Edge-centric computing requires sensor nodes with more software capability and processing power while, like any embedded system, being constrained by energy consumption. Hybrid hardware systems consisting of an FPGA and a processor offer a good trade-off for this requirement. FPGAs are known to enable parallel and fast computation within a low energy budget. The coupled processor provides a flexible software environment for edge-centric nodes. Application design for such a hybrid network/software/hardware (SW/HW) system remains a challenging task. It covers a large domain of system-level design, from high-level software to low-level hardware (FPGA). This results in a complex system design flow and involves the use of tools from different engineering domains. A common solution is to propose a heterogeneous design environment combining and integrating these tools. However, the heterogeneous nature of this approach can pose reliability problems when it comes to data exchanges between tools. Our motivation is to propose a homogeneous design methodology and environment for such systems. We study the application of a modern design methodology, in particular object-oriented design (OOD), to the field of embedded systems. Our choice of OOD is motivated by the proven productivity of this methodology for the development of software systems. In the context of this thesis, we aim at using OOD to develop a homogeneous design environment for edge-centric systems. Our approach addresses three design concerns: (1) hardware design, where object-oriented principles and design patterns are used to improve the reusability, adaptability, and extensibility of the hardware system; (2) hardware/software co-design, for which we propose to use OOD to abstract the SW/HW integration and communication, which encourages system modularity and flexibility; (3) middleware design for Edge Computing. We rely on a centralized development environment for distributed applications, while the middleware facilitates the integration of the peripheral nodes in the network and allows automatic remote reconfiguration. Ultimately, our solution offers software flexibility for the implementation of complex distributed algorithms, complemented by the full exploitation of FPGA performance. The FPGAs are placed in the nodes, as close as possible to the acquisition of the data by the sensors, in order to deploy an effective first stage of intensive processing.
Lassouaoui, Lilia. "Ordonnancement et routage pour l'augmentation de la durée de vie dans les réseaux de capteurs sans fil." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1187/document.
Повний текст джерелаWireless sensor networks (WSN) are a technology with a wide range of civil and military applications, including battlefield monitoring, environmental monitoring and smart cities. However, WSN are characterized by severe limitations in terms of energy (battery-operated nodes) and wireless links (low-power and lossy links). The work done in this PhD thesis aims to provide solutions that guarantee a certain quality of service in the context of wireless sensor networks. The first part of this work concerns the medium access control layer, with the aim of increasing the lifetime of the network. Access to the wireless medium is analyzed and modeled as a link scheduling problem, taking collisions into account. First, a study of the complexity of this problem is carried out; then, a distributed and fault-tolerant approach with guaranteed performance (SS-DD2EC) is proposed to solve it. The second part is about message routing with the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL). First of all, a comparison between the various existing routing metrics for the optimization of the consumed energy has been carried out. In addition to lifetime, the reliability and end-to-end latency criteria are considered for evaluating these metrics. Then, two new RPL metrics (R_MinMax and R_Delai) were proposed, achieving significant gains over the state of the art. The first one considers only energy consumption and reliability, while the second one also takes the end-to-end latency into account.
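The sketch below shows a generic way of combining link reliability (ETX) with the bottleneck residual energy of a path when choosing an RPL parent. The weighting, the candidate values and all names are assumptions; the exact R_MinMax and R_Delai metrics of the thesis are not reproduced here.

# Illustrative RPL-style parent selection mixing link reliability (ETX) and bottleneck energy.
# This is a generic sketch, not the R_MinMax / R_Delai metrics defined in the thesis.

candidates = [
    # parent id, ETX of the link to it, ETX of its path to the root, min residual energy on that path (J)
    {"id": "A", "link_etx": 1.2, "path_etx": 3.0, "path_min_energy": 0.8},
    {"id": "B", "link_etx": 1.0, "path_etx": 4.5, "path_min_energy": 2.5},
    {"id": "C", "link_etx": 2.0, "path_etx": 2.5, "path_min_energy": 1.1},
]

def rank(parent, alpha=1.0, beta=2.0):
    """Lower is better: total ETX penalised by the inverse of the path's weakest battery."""
    total_etx = parent["link_etx"] + parent["path_etx"]
    return alpha * total_etx + beta / parent["path_min_energy"]

best = min(candidates, key=rank)
print("preferred parent:", best["id"], "with rank", round(rank(best), 2))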
Dureau, Maxime. "Characterization and simulation of the mechanical forces that control the process of Dorsal Closure during Drosophila melanogaster embryogenesis." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL0999/document.
Повний текст джерелаThe work presented here aims at characterizing and simulating the mechanical forces involved in Dorsal Closure, an embryonic process of the organism Drosophila melanogaster. In particular, Dorsal Closure participates in the acquisition of the final form of the embryo. The work presented here therefore aims at deepening our knowledge of tissue mechanics, as well as of their role in the acquisition of shape. The tissues involved in Dorsal Closure are the epidermis and the amnioserosa. At this stage of development, the epidermis surrounds almost the entire embryo. Nevertheless, the amnioserosa still covers a large area of the dorsal side called the dorsal hole. Hence, Dorsal Closure aims at shutting this hole and joining the lateral sides of the epidermis, in a process similar to wound healing. In order to fuse the two sides of the epidermis on the dorsal line, the epidermis must be drawn dorsalward. This movement is driven by the amnioserosa on the one hand, and by the dorsalmost row of the epidermis (the so-called Leading Edge cells) on the other hand. The latter first form a transcellular Actin Cable around the dorsal hole. The cable, by contracting, reduces the area of the dorsal hole covered by the amnioserosa. Second, the Leading Edge cells emit protrusions that attach to the opposite Leading Edge and drag it toward themselves, until the two sides of the epidermis fuse. These protrusions have a limited range; hence the dragging and fusion only take place at the ends of the dorsal hole (called canthi), where the distance between the two Leading Edges is small enough. The amnioserosa also drags the epidermis toward the dorsal line. Its cells produce a contractile network. Interestingly, amnioserosa cells see the area of their top (apical) side vary in a periodic way. Although these variations have been widely studied, their role in Dorsal Closure remains unknown. This PhD aims at improving our knowledge of the mechanical concepts involved in these oscillations, and at building a physical model representing these movements. The work presented here also studies the movements of the Leading Edge cells, in order to understand the effect of the Actin Cable on the dynamics of Dorsal Closure. In order to study the cell movements and the role of the tissues involved in Dorsal Closure, an algorithm was developed, allowing the detection of cell edges and their positions, as well as those of their vertices (multiple junctions between three or four cells), and their tracking over time. A user interface was also developed, in order to facilitate the adjustment of the parameters used for detection, as well as the correction of possible errors. Various dynamical models were then built following the Lagrangian approach. The systems of equations deriving from the Euler-Lagrange equations were numerically solved, and their predictions compared to the biological data extracted thanks to the algorithm presented earlier, following the least-squares approach. The model validation was performed thanks to the autocorrelation function test. Finally, the Leading Edge dynamics was studied by characterizing the cellular movements at the interface between the epidermis and the amnioserosa. Wild-type embryo dynamics were compared to those of mutated embryos showing specific defects in Actin Cable formation. The results presented in this manuscript allow a better understanding of the processes involved in amnioserosa cell oscillations.
They also give clues about their biological characteristics. Finally, they assess the role of the actin cable in this process, which is similar to wound healing.
Aissioui, Abdelkader. "Le chemin vers les architectures futures des services mobiles : du Follow Me Cloud (FMC) au Follow Me edge Cloud (FMeC)." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV095.
Повний текст джерелаThis Ph.D. thesis deals with the future delivery architectures of mobile cloud-based services, through network infrastructures evolving from Mobile Cloud Computing (MCC) to Mobile Edge Computing (MEC). We mainly focused on the Follow Me Cloud (FMC) concept as a new service delivery strategy for improved user experience and efficient resource utilization. FMC enables cloud-based services to follow their mobile users as they move across access network technologies, delivering the cloud service via the optimal service point inside the cloud infrastructure. Several contributions are proposed in this thesis and evaluated through both theoretical analysis and simulation. First, we proposed an alternative FMC architecture that allows: (i) opening the FMC design to non-3GPP mobile network access technologies; (ii) providing interoperability among different PMIPv6 domains, permitting MN inter-PMIPv6-domain roaming with seamless IP mobility and service session continuity; (iii) offering a tunnel-free architecture in MN roaming situations, avoiding the additional overhead associated with tunneling in mobility management. The proposed scheme leverages SDN/OpenFlow technology and the PMIPv6 mobility management protocol by integrating them within a framework that realizes the FMC vision. Second, to address the scalability and resiliency concerns of a centralized SDN/OpenFlow control plane architecture, we introduced a new design of an elastic distributed SDN controller tailored for Mobile Cloud Computing (MCC) and more notably for Follow Me Cloud (FMC) management systems. We illustrated how the new control plane scheme is distributed over a two-level hierarchical architecture, a first level with a single global SDN controller and a second level with several local SDN controllers. Then, we presented the building blocks of our novel control plane framework and the system Key Performance Indicator (KPI) computation, and set the key objective of our design: keeping the system KPI value within a predefined threshold window. Last, we proved how this goal is achieved by adapting the number of local SDN controllers and their locations in an elastic manner and deploying them as VNF instances on the cloud thanks to NFV technology. Third, we introduced the FMeC concept, leveraging the intertwining of MEC and FMC architectures with the aim of sustaining the requirements of 5G automotive systems. We began by defining the key elements of the FMeC concept that permit providing FMC technology at the edge of mobile networks. Then, we presented an automated driving use case projection of our FMeC solution, integrating automotive and Telco infrastructures towards the future 5G automotive vision. Focusing on V2I/N communication types, we introduced our FMeC design architecture based on SDN/OpenFlow technologies and MEC infrastructure entities whose resources are pooled together to provide federated edge clouds. Finally, we presented our mobility-aware framework for edge-cloud service placement, based on a set of basic algorithms that permit achieving the automated driving QoS requirements in terms of ultra-short latency within the 5G network.
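A minimal sketch of the elastic control-plane idea follows: the number of local SDN controller instances is adapted so that a system KPI, modelled here simply as average controller utilisation, stays inside a threshold window. The thresholds, bounds and KPI model are assumptions made for illustration only.

# Sketch of the elastic control-plane idea: keep a system KPI inside a threshold window
# by scaling the number of local SDN controller instances (names and values are assumptions).

def scale_controllers(n_controllers, load, kpi_low=0.4, kpi_high=0.8, n_min=1, n_max=16):
    """KPI here is modelled as average controller utilisation = load / n_controllers."""
    kpi = load / n_controllers
    if kpi > kpi_high and n_controllers < n_max:
        n_controllers += 1            # instantiate one more local controller (e.g. as a VNF)
    elif kpi < kpi_low and n_controllers > n_min:
        n_controllers -= 1            # release one instance to save resources
    return n_controllers

n = 2
for t, load in enumerate([0.9, 1.6, 2.4, 3.0, 1.2, 0.5]):   # synthetic load trace
    n = scale_controllers(n, load)
    print(f"t={t}: load={load:.1f} -> {n} local controller(s), KPI={load / n:.2f}")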
Lassouaoui, Lilia. "Ordonnancement et routage pour l'augmentation de la durée de vie dans les réseaux de capteurs sans fil." Electronic Thesis or Diss., Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1187.
Повний текст джерелаWireless sensor networks (WSN) are a technology with a wide range of civil and military applications, including battlefield monitoring, environmental monitoring and smart cities. However, WSN are characterized by severe limitations in terms of energy (battery-operated nodes) and wireless links (low-power and lossy links). The work done in this PhD thesis aims to provide solutions that guarantee a certain quality of service in the context of wireless sensor networks. The first part of this work concerns the medium access control layer, with the aim of increasing the lifetime of the network. Access to the wireless medium is analyzed and modeled as a link scheduling problem, taking collisions into account. First, a study of the complexity of this problem is carried out; then, a distributed and fault-tolerant approach with guaranteed performance (SS-DD2EC) is proposed to solve it. The second part is about message routing with the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL). First of all, a comparison between the various existing routing metrics for the optimization of the consumed energy has been carried out. In addition to lifetime, the reliability and end-to-end latency criteria are considered for evaluating these metrics. Then, two new RPL metrics (R_MinMax and R_Delai) were proposed, achieving significant gains over the state of the art. The first one considers only energy consumption and reliability, while the second one also takes the end-to-end latency into account.
Limnios, Stratis. "Graph Degeneracy Studies for Advanced Learning Methods on Graphs and Theoretical Results Edge degeneracy: Algorithmic and structural results Degeneracy Hierarchy Generator and Efficient Connectivity Degeneracy Algorithm A Degeneracy Framework for Graph Similarity Hcore-Init: Neural Network Initialization based on Graph Degeneracy." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX038.
Повний текст джерелаExtracting meaningful substructures from graphs has always been a key part of graph studies. In machine learning frameworks, supervised or unsupervised, as well as in theoretical graph analysis, finding dense subgraphs and specific decompositions is primordial in many social and biological applications, among many others. In this thesis we aim at studying graph degeneracy, starting from a theoretical point of view, and building upon our results to find the decompositions best suited to the tasks at hand. Hence, in the first part of the thesis we work on structural results in graphs with bounded edge admissibility, proving that such graphs can be reconstructed by aggregating graphs with almost-bounded edge degree. We also provide computational complexity guarantees for the different degeneracy decompositions, i.e. whether they are NP-complete or polynomial, depending on the length of the paths on which the given degeneracy is defined. In the second part we unify the degeneracy and admissibility frameworks based on degree and connectivity. Within those frameworks we pick the most expressive on the one hand, and the most computationally efficient on the other hand, namely the 1-edge-connectivity degeneracy, to experiment on standard degeneracy tasks, such as finding influential spreaders. As the previous results proved to perform poorly, we go back to using the k-core, but plug it into a supervised framework, i.e. graph kernels. Providing a general framework named core-kernel, we use the k-core decomposition as a preprocessing step for the kernel and apply the latter on every subgraph obtained by the decomposition for comparison. We are able to achieve state-of-the-art performance on graph classification for a small computational cost trade-off. Finally, we design a novel degree degeneracy framework for hypergraphs and simultaneously for bipartite graphs, since the latter are the incidence graphs of hypergraphs. This decomposition is then applied directly to pretrained neural network architectures, as they induce bipartite graphs, using the coreness of the neurons to re-initialize the neural network weights. This framework not only outperforms state-of-the-art initialization techniques but is also applicable to any pair of convolutional and linear layers, and thus to any type of architecture.
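For reference, the standard peeling algorithm that computes the k-core (degree) coreness of every node is sketched below; this is textbook material included only to make the decompositions discussed above concrete, and the toy graph is an assumption.

# Standard k-core (degree degeneracy) peeling algorithm, computing the coreness of each node.
from collections import defaultdict

def coreness(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {u: len(nbrs) for u, nbrs in adj.items()}
    core, k = {}, 0
    remaining = set(adj)
    while remaining:
        u = min(remaining, key=deg.get)            # peel a node of minimum remaining degree
        k = max(k, deg[u])                         # core number is the peak peeling degree so far
        core[u] = k
        remaining.remove(u)
        for w in adj[u]:
            if w in remaining:
                deg[w] -= 1
    return core

# Toy graph: a triangle (2-core) attached to a path (1-core).
edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)]
print(coreness(edges))   # nodes 1-3 get coreness 2; nodes 4-5 get coreness 1

The same peeling principle underlies the degeneracy variants mentioned in the abstract, which replace the degree by path- or connectivity-based quantities.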
Croft, Marie-Ange. "Edme Boursault : de la farce à la fable (1661-1701)." Thesis, Paris 10, 2014. http://www.theses.fr/2014PA100105/document.
Повний текст джерелаEntre la mort de Molière et l’avènement de Marivaux, le théâtre connaît de profondes modifications. S’inscrivant dans le sillage des travaux de François Moureau, Christian Biet et Guy Spielmann sur la dramaturgie fin de règne, cette thèse s’intéresse à la manière dont s’est effectué le passage de la comédie classique à la comédie fin de règne. En prenant l’exemple d’Edme Boursault (1638-1701), écrivain mineur du XVIIe siècle, elle entend mettre en lumière une double trajectoire, celle d’un genre et celle d’un auteur. L’étude repose sur l’hypothèse selon laquelle le corpus comique de Boursault, produit entre 1661 et 1701, conserve les marques des mutations esthétiques qui a mené au théâtre fin de règne. Il s’agit donc de comprendre les enjeux qui ont conduit à un renouvellement de l’écriture dramaturgique, mais aussi d’observer la manière dont pouvait se construire une carrière littéraire chez un écrivain mineur de la seconde moitié du XVIIe siècle. Depuis ses premières comédies et farces (Le médecin volant, Le Mort vivant, Le jaloux endormy) jusqu’à ses comédies moralisantes (Les Fables d’Esope, Esope à la Cour), Boursault a su s’adapter aux changements que connaissent la société française et le théâtre, et a mis en œuvre diverses stratégies, tant sociales que littéraires. Par le moyen de l’histoire littéraire, entre sociologie de la littérature, poétique des genres et théorie de la réception, la thèse se penche sur les réseaux de sociabilité de Boursault (salons précieux, cercles littéraires, mécénat) et analyse son théâtre comique, tout en tenant compte des conditions de représentation et de la réception du public. L’étude tend à démontrer que cette évolution dramaturgique s’est faite graduellement, souvent au prix d’une coexistence de deux esthétiques au sein d’une même œuvre. Cherchant à mesurer l’apport de Boursault à la comédie et au comique du XVIIe siècle, la thèse révèle que le passage du classicisme au fin de règne implique chez le dramaturge un changement de stratégie. Entre 1660 et 1700, l’auteur passe en effet d’une stratégie du cursus où ses tendances polygraphiques le placent, à une stratégie du succès misant sur l’innovation et l’originalité. Ce faisant, l’écrivain explore les limites d’un genre qu’il participe à redéfinir, tant sur le plan de la structure et des thématiques que sur celui des personnages et du comique. L’examen du passage de la farce classique à la comédie moralisante, celui du comique burlesque au rire jaune du XVIIIe siècle positionne donc indéniablement Boursault comme un écrivain de transition. Transition entre l’esthétique classique et l’esthétique fin de règne, on s’en doute, mais aussi, en parallèle, entre la poétique classique-fin de règne, et celle des Lumières
Drira, Kaouther. "Coloration d’arêtes ℓ-distance et clustering : études et algorithmes auto-stabilisants". Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10335/document.
Повний текст джерелаGraph coloring is a famous combinatorial optimization problem and is very attractive for its numerous applications. Many variants and generalizations of the graph-coloring problem have been introduced and studied. An edge-coloring assigns a color to each edge so that no two adjacent edges share the same color. In the first part of this thesis, we study the problem of ℓ-distance-edge-coloring, which is a generalization of classical edge-coloring. The study focuses on the following classes of graphs: paths, grids, hypercubes, trees and some power graphs. We conduct a combinatorial and algorithmic study of the parameter and give a sequential coloring algorithm for each class of graph. The ℓ-distance-edge-coloring is especially relevant in large-scale networks. However, with the increasing number of nodes, networks are increasingly vulnerable to faults. In the second part, we focus on fault-tolerant algorithms and in particular self-stabilizing algorithms. We propose a self-stabilizing algorithm for proper edge-coloring. Our solution is based on Vizing's result to minimize the number of colors. Subsequently, we propose a self-stabilizing clustering algorithm for applications in the field of security in mobile ad hoc networks. Our solution is a partitioning into clusters based on trust relationships between nodes. We also propose a group key-management algorithm in mobile ad hoc networks based on the topology of the clusters previously built. The security of our protocol is strengthened by its clustering criterion, which constantly monitors trust relationships and expels malicious nodes from the multicast session.
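The sketch below gives a greedy baseline for ℓ-distance edge coloring under one common convention: two edges conflict whenever some pair of their endpoints lies at distance smaller than ℓ, so ℓ = 1 reduces to the classical proper edge coloring. Both the convention and the greedy strategy are assumptions, not the specialised or self-stabilizing algorithms of the thesis.

# Greedy ℓ-distance edge colouring sketch (centralised, illustrative only).
from collections import deque, defaultdict

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def l_distance_edge_coloring(edges, nodes, l=2):
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = {u: bfs_dist(adj, u) for u in nodes}
    def conflict(e, f):
        return min(dist[a].get(b, float("inf")) for a in e for b in f) < l
    colors = {}
    for e in edges:
        used = {colors[f] for f in colors if conflict(e, f)}
        colors[e] = next(c for c in range(len(edges)) if c not in used)
    return colors

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]          # a path on 5 nodes
print(l_distance_edge_coloring(edges, nodes=range(5), l=2))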
Bazin, Cyrille. "L'interface photosphère solaire/chromosphère et couronne : apport des éclipses et des images EUV." Phd thesis, Aix-Marseille Université, 2013. http://tel.archives-ouvertes.fr/tel-00921889.
Повний текст джерелаMehamel, Sarra. "New intelligent caching and mobility strategies for MEC /ICN based architectures." Electronic Thesis or Diss., Paris, CNAM, 2020. http://www.theses.fr/2020CNAM1284.
Повний текст джерелаThe mobile edge computing (MEC) concept proposes to bring computing and storage resources into close proximity to the end user by placing these resources at the network edge. The motivation is to alleviate the mobile core and to reduce latency for mobile users thanks to their close proximity to the edge. MEC servers are candidates to host mobile applications and serve web contents. Edge caching is one of the emerging technologies recognized as a content retrieval solution at the edge of the network. It has also been considered an enabling technology of mobile edge computing that presents an interesting opportunity to perform caching services. In particular, MEC servers are implemented directly at the base stations, which enables edge caching and ensures deployment in close proximity to the mobile users. However, the integration of servers in the mobile edge computing environment (base stations) complicates the energy-saving issue, because the power consumed by mobile edge computing servers is costly, especially when the load changes dynamically over time. Furthermore, mobile users keep increasing their demands, introducing the challenge of handling such mobile content requests given the limited caching size. Thus, it is necessary and crucial for caching mechanisms to consider context-aware factors, whereas most existing studies focus on cache allocation, content popularity and cache design. In this thesis, we present a novel energy-efficient fuzzy caching strategy for edge devices that takes into consideration four influencing features of the mobile environment, while introducing a hardware implementation using a Field-Programmable Gate Array (FPGA) to cut the overall energy requirements. Performing an adequate caching strategy on MEC servers opens the possibility of employing artificial intelligence (AI) techniques and machine learning at mobile network edges. Exploiting users' context information intelligently makes it possible to design an intelligent context-aware mobile edge caching. Context awareness enables the cache to be aware of its environment, while intelligence enables each cache to make the right decisions in selecting appropriate contents to be cached so as to maximize the caching performance. Inspired by the success of reinforcement learning (RL), which uses agents to deal with decision-making problems, we extended our fuzzy-caching system into a modified reinforcement learning model. The proposed framework aims to maximize the cache hit rate and requires awareness of multiple factors. The modified RL differs from other RL algorithms in its learning rate, which uses the method of stochastic gradient descent, besides taking advantage of learning using the optimal caching decision obtained from fuzzy rules.
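As a rough, non-fuzzy stand-in for the learned caching policy described above, the sketch below scores items with a linear combination of context features and updates the weights online by stochastic gradient descent from hit/miss feedback. The features, weights and synthetic feedback model are all assumptions and do not reproduce the fuzzy rules or the modified RL of the thesis.

# Sketch of a learned caching score: a linear score over context features is updated online
# with stochastic gradient descent from hit/miss feedback (illustrative only).
import random

random.seed(1)
weights = [0.0, 0.0, 0.0]                     # popularity, recency, 1/size

def features(item):
    return [item["popularity"], item["recency"], 1.0 / item["size"]]

def score(item):
    return sum(w * x for w, x in zip(weights, features(item)))

def sgd_update(item, reward, lr=0.05):
    """Move the item's score towards the observed reward (1 = later hit, 0 = wasted cache slot)."""
    err = reward - score(item)
    for i, x in enumerate(features(item)):
        weights[i] += lr * err * x

# Synthetic feedback loop: popular, recent, small items tend to be re-requested.
for _ in range(1000):
    item = {"popularity": random.random(), "recency": random.random(), "size": random.uniform(1, 10)}
    p_hit = 0.6 * item["popularity"] + 0.3 * item["recency"] + 0.1 / item["size"]
    sgd_update(item, reward=1.0 if random.random() < p_hit else 0.0)
print("learned weights (popularity, recency, 1/size):", [round(w, 2) for w in weights])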
Cherdo, Yann. "Détection d'anomalie non supervisée sur les séries temporelle à faible coût énergétique utilisant les SNNs." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4018.
Повний текст джерелаIn the context of the predictive maintenance of the car manufacturer Renault, this thesis aims at providing low-power solutions for unsupervised anomaly detection on time series. With the recent evolution of cars, more and more data are produced and need to be processed by machine learning algorithms. This processing can be performed in the cloud or directly at the edge, inside the car. In such a case, network bandwidth and cloud service costs can be saved, data privacy is easier to manage and data loss can be avoided. Embedding a machine learning model inside a car is challenging as it requires frugal models due to memory and processing constraints. To this aim, we study the usage of spiking neural networks (SNNs) for anomaly detection, prediction and classification on time series. SNN models' performance and energy costs are evaluated in an edge scenario using generic hardware models that consider all calculation and memory costs. To leverage the sparsity of SNNs as much as possible, we propose a model with trainable sparse connections that consumes half the energy compared to its non-sparse version. This model is evaluated on public anomaly detection benchmarks, a real use case of anomaly detection from Renault Alpine cars, weather forecasts and the Google Speech Commands dataset. We also compare its performance with other existing SNN and non-spiking models. We conclude that, for some use cases, spiking models can provide state-of-the-art performance while consuming 2 to 8 times less energy. Yet, further studies should be undertaken to evaluate these models once embedded in a car. Inspired by neuroscience, we argue that other bio-inspired properties such as attention, sparsity, hierarchy or neural assembly dynamics could be exploited to obtain even better energy efficiency and performance with spiking models. Finally, we end this thesis with an essay dealing with cognitive neuroscience, philosophy and artificial intelligence. Diving into conceptual difficulties linked to consciousness and considering the deterministic mechanisms of memory, we argue that consciousness and the self could be constitutively independent from memory. The aim of this essay is to question the nature of humans by contrast with that of machines and AI.
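The sketch below shows only the generic prediction-error principle behind unsupervised anomaly detection on time series: predict each sample, then flag samples whose prediction error exceeds an automatically derived threshold. The trivial moving-average predictor and all constants are assumptions; the thesis relies on spiking neural networks, which are not reproduced here.

# Generic prediction-based anomaly detection on a time series (illustrative, non-spiking).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500)
signal = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)
signal[300:305] += 3.0                                   # injected anomalous burst

def one_step_prediction(x, window=10):
    """Trivial predictor: the mean of the previous `window` samples."""
    preds = np.full_like(x, np.nan)
    for i in range(window, x.size):
        preds[i] = x[i - window:i].mean()
    return preds

preds = one_step_prediction(signal)
errors = np.abs(signal - preds)
threshold = np.nanmean(errors) + 3 * np.nanstd(errors)   # unsupervised threshold
anomalies = np.where(errors > threshold)[0]
print("anomalous time steps:", anomalies)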
Santi, Nina. "Prédiction des besoins pour la gestion de serveurs mobiles en périphérie." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILB050.
Повний текст джерелаMulti-access Edge computing is an emerging paradigm within the Internet of Things (IoT) that complements Cloud computing. This paradigm proposes the implementation of computing servers located close to users, reducing the pressure and costs of local network infrastructure. This proximity to users is giving rise to new use cases, such as the deployment of mobile servers mounted on drones or robots, offering a cheaper, more energy-efficient and flexible alternative to fixed infrastructures for one-off or exceptional events. However, this approach also raises new challenges for the deployment and allocation of resources in terms of time and space, which are often battery-dependent.In this thesis, we propose predictive tools and algorithms for making decisions about the allocation of fixed and mobile resources, in terms of both time and space, within dynamic environments. We provide rich and reproducible datasets that reflect the heterogeneity inherent in Internet of Things (IoT) applications, while exhibiting a high rate of contention and interference. To achieve this, we are using the FIT-IoT Lab, an open testbed dedicated to the IoT, and we are making all the code available in an open manner. In addition, we have developed a tool for generating IoT traces in an automated and reproducible way. We use these datasets to train machine learning algorithms based on regression techniques to evaluate their ability to predict the throughput of IoT applications. In a similar approach, we have also trained and analysed a neural network of the temporal transformer type to predict several Quality of Service (QoS) metrics. In order to take into account the mobility of resources, we are generating IoT traces integrating mobile access points embedded in TurtleBot robots. These traces, which incorporate mobility, are used to validate and test a federated learning framework based on parsimonious temporal transformers. Finally, we propose a decentralised algorithm for predicting human population density by region, based on the use of a particle filter. We test and validate this algorithm using the Webots simulator in the context of servers embedded in robots, and the ns-3 simulator for the network part
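As an illustration of density tracking with a particle filter, the minimal bootstrap filter below estimates a slowly varying crowd density from noisy head counts. The motion and observation models and all constants are assumptions; the decentralised, per-region algorithm of the thesis is not reproduced.

# Minimal bootstrap particle filter tracking a slowly varying crowd density from noisy counts.
import numpy as np

rng = np.random.default_rng(0)
T, n_particles = 50, 500
true_density = 20 + np.cumsum(rng.normal(0, 0.5, T))        # latent people-per-region trajectory
observations = rng.poisson(np.clip(true_density, 0, None))   # noisy head counts

particles = rng.uniform(0, 50, n_particles)
estimates = []
for z in observations:
    particles += rng.normal(0, 0.5, n_particles)              # propagate with the motion model
    particles = np.clip(particles, 1e-3, None)
    log_w = z * np.log(particles) - particles                 # Poisson log-likelihood (up to a constant)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)      # multinomial resampling
    particles = particles[idx]
    estimates.append(particles.mean())

print("final true density %.1f, estimate %.1f" % (true_density[-1], estimates[-1]))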
Ntumba, wa Ntumba Patient. "Ordonnancement d'opérateurs continus pour l'analyse de flux de données à la périphérie de l'Internet des Objets." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS183.
Повний текст джерелаData stream processing and analytics (DSPA) applications are widely used to process the ever-increasing amounts of data streams produced by highly geographically distributed data sources, such as fixed and mobile IoT devices, in order to extract valuable information in a timely manner for actuation. DSPA applications are typically deployed in the Cloud to benefit from practically unlimited computational resources on demand. However, such centralized and distant computing solutions may suffer from limited network bandwidth and high network delay. Additionally, data propagation to the Cloud may compromise the privacy of sensitive data. To effectively handle this volume of data streams, the emerging Edge/Fog computing paradigm is used as the middle tier between the Cloud and the IoT devices to process data streams closer to their sources and to reduce the network resource usage and network delay to reach the Cloud. However, Edge/Fog computing comes with limited computational resource capacities and requires deciding which part of the DSPA application should be performed in the Edge/Fog layers while satisfying the application response time constraint for timely actuation. Furthermore, the computational and network resources across the Edge-Fog-Cloud architecture can be shareable among multiple DSPA (and other) applications, which calls for efficient resource usage. In this PhD research, we propose a new model for assessing the usage cost of resources across the Edge-Fog-Cloud architecture. Our model addresses both computational and network resources and enables dealing with the trade-offs that are inherent to their joint usage. It precisely characterizes the usage cost of resources by distinguishing between abundant and constrained resources as well as by considering their dynamic availability, hence covering both resources dedicated to a single DSPA application and shareable resources. We complement our system modeling with a response time model for DSPA applications that takes into account their windowing characteristics. Leveraging these models, we formulate the problem of scheduling streaming operators over a hierarchical Edge-Fog-Cloud resource architecture. Our target problem presents two distinctive features. First, it aims at jointly optimizing the resource usage cost for computational and network resources, while few existing approaches have taken computational resources into account in their optimization goals. More precisely, our aim is to schedule a DSPA application in a way that uses available resources in the most efficient manner. This enables saving valuable resources for other DSPA (and non-DSPA) applications that share the same resource architecture. Second, it is subject to a response time constraint, while few works have dealt with such a constraint; most approaches for scheduling time-critical (DSPA) applications include the response time in their optimization goals. To solve our formulated problem, we introduce several heuristic algorithms that deal with different versions of the problem: static resource-aware scheduling that calculates a new system deployment from scratch each time, time-aware and resource-aware scheduling, and dynamic scheduling that takes the current deployment into account. Finally, we extensively and comparatively evaluate our algorithms with realistic simulations against several baselines that we either introduce ourselves or that originate from or are inspired by the existing literature.
Our results demonstrate that our solutions advance the current state of the art in scheduling DSPA applications
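The toy below illustrates the flavour of the scheduling problem: a small pipeline of streaming operators is placed on Edge, Fog or Cloud tiers by brute force, minimising a joint compute-plus-network usage cost under a response-time constraint. All cost and delay figures are hypothetical, and the heuristics proposed in the thesis are not reproduced.

# Brute-force placement of a DSPA operator pipeline over Edge-Fog-Cloud tiers (illustrative only).
from itertools import product

operators = ["filter", "aggregate", "detect"]          # a small DSPA operator pipeline
order = ["edge", "fog", "cloud"]
tiers = {
    # per-operator compute cost and delay (ms), plus delay (ms) and cost of the uplink to the next tier
    "edge":  {"cpu_cost": 3.0, "cpu_delay": 12.0, "up_delay": 2.0,  "net_cost": 0.5},
    "fog":   {"cpu_cost": 2.0, "cpu_delay": 6.0,  "up_delay": 10.0, "net_cost": 5.0},
    "cloud": {"cpu_cost": 1.0, "cpu_delay": 2.0,  "up_delay": 0.0,  "net_cost": 0.0},
}
deadline_ms = 60.0

def transfer(src, dst):
    """Delay and cost of moving the stream upward from tier src to tier dst."""
    i, j = order.index(src), order.index(dst)
    return (sum(tiers[order[k]]["up_delay"] for k in range(i, j)),
            sum(tiers[order[k]]["net_cost"] for k in range(i, j)))

def evaluate(placement):
    idx = [order.index(t) for t in placement]
    if idx != sorted(idx):                              # data may only move upward along the hierarchy
        return None
    delay = sum(tiers[t]["cpu_delay"] for t in placement)
    cost = sum(tiers[t]["cpu_cost"] for t in placement)
    path = ["edge"] + list(placement)                   # the data sources sit at the edge
    for a, b in zip(path, path[1:]):
        d, c = transfer(a, b)
        delay, cost = delay + d, cost + c
    return (cost, delay) if delay <= deadline_ms else None

feasible = {p: evaluate(p) for p in product(order, repeat=len(operators)) if evaluate(p) is not None}
best = min(feasible, key=lambda p: feasible[p][0])
print("best placement:", dict(zip(operators, best)), "-> (cost, delay):", feasible[best])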
Naud, Marie-Eve. "Variation des biomarqueurs dans le spectre visible non résolu de la Terre." Thèse, 2010. http://hdl.handle.net/1866/4968.
Повний текст джерелаThe rapid evolution of the detection and characterization of exoplanets since the nineties is such that instruments like the Terrestrial Planet Finder (TPF) will surely take the first spectra of exoplanets similar to the Earth in the next decades. The study of the spectrum of the only inhabited planet we know, the Earth, is thus essential to design these instruments and to carry out pertinent analyses of their results. This research presents optical spectra (390-900 nm) of the Earth that were secured on 8 observing nights covering more than a year. These spectra were obtained by observing the Earthshine with the 1.6 m telescope at the Observatoire du Mont-Mégantic (OMM). Because the surface of the Moon diffusely reflects the light coming from a portion of the Earth, the observation of Earthshine allows us to obtain spatially unresolved spectra, like those that will likely be obtained for exoplanets with the first generation of instruments. The variation of the Earth's spectrum with the changing contributing phase of the Earth is also similar to that of an exoplanet spectrum, which changes with its position around the star. Water, oxygen and ozone of the Earth's atmosphere, detected in all of our spectra, are biomarkers. They give clues about the habitability and the possible presence of life on a planet. The Vegetation Red Edge (VRE), another spectral biomarker, caused by photosynthetic organisms, is characterized by an increase in reflectivity around 700 nm. For the spectra of 5 nights, this increase was evaluated to be between -5 and 15% ±~5%, after the contributions of Rayleigh and aerosol scattering, as well as of a wide ozone absorption band, were removed. These values are consistent with the presence of vegetation in the phase of the Earth contributing to the spectra. However, they cover a larger range than that usually found in the literature (0-10%). A possible explanation could be the few arbitrary choices that were made during data processing and VRE computation, or the presence of other surface and atmospheric elements with a spectral signature varying around 700 nm.
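For reference, a definition of the VRE commonly used in Earthshine studies compares the mean reflectance on either side of the 700 nm edge; whether the thesis uses exactly these bands is an assumption:

\mathrm{VRE} = \frac{r_I - r_R}{r_R}

where r_R is the mean reflectance in a red band (roughly 600-670 nm) and r_I the mean reflectance in a near-infrared band (roughly 740-800 nm), both measured after the Rayleigh and aerosol scattering and the ozone absorption contributions have been removed, which matches the percentage increase quoted in the abstract.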