Academic literature on the topic "Informatique en périphérie"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Informatique en périphérie".
Theses on the topic "Informatique en périphérie"
Mazouzi, Houssemeddine. "Algorithmes pour le déchargement de tâches sur serveurs de périphérie". Thesis, Paris 13, 2019. http://www.theses.fr/2019PA131076.
Computation offloading is one of the most promising paradigms for overcoming the lack of computational resources in mobile devices. Basically, it allows part or all of a mobile application to be executed in the cloud. The main objective is to reduce both execution time and energy consumption on the mobile terminal. Unfortunately, even though clouds have rich computing and storage resources, they are usually geographically far from mobile applications and may suffer from large delays, which is particularly problematic for mobile applications with tight response time requirements. To reduce this delay, one emerging approach is to push the cloud to the network edge. This proximity gives mobile users the opportunity to offload their tasks to a "local" cloud for processing. An edge cloud can be seen as a small data center acting as a shadow image of larger data centers. The geographical proximity between mobile applications and the edge cloud greatly reduces access delay, but also enables higher throughput, improved responsiveness, and better scalability. In this thesis, we focus on computation offloading in a mobile edge computing (MEC) environment composed of several edge servers. Our goal is to explore new and effective offloading strategies that improve application performance in both execution time and energy consumption, while ensuring application requirements. Our first contribution is a new offloading strategy for the case of multiple edge servers. We then extend this strategy to include the cloud. Both strategies have been evaluated theoretically and experimentally through the implementation of an offloading middleware. Finally, we propose a new elastic approach for multitasking applications characterized by a graph of dependencies between tasks.
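As a rough illustration of the kind of offloading decision such strategies formalize, the sketch below compares local execution against offloading to one of several edge servers using estimated completion time and device energy. The cost model, weights, and all parameter names are illustrative assumptions, not the thesis's actual formulation.

# Minimal sketch of a single-task offloading decision across several edge servers.
# The cost model and all numbers are illustrative assumptions, not the thesis's model.
from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    cpu_cycles_per_sec: float   # server processing speed
    uplink_bps: float           # bandwidth from the device to this server
    rtt_s: float                # network round-trip time

def local_cost(task_cycles, device_cps, device_power_w):
    t = task_cycles / device_cps
    return t, device_power_w * t          # (time, energy spent on the device)

def offload_cost(task_cycles, input_bits, server, tx_power_w):
    t_tx = input_bits / server.uplink_bps # transmission time
    t_exec = task_cycles / server.cpu_cycles_per_sec
    t = t_tx + server.rtt_s + t_exec
    return t, tx_power_w * t_tx           # device only spends energy transmitting

def decide(task_cycles, input_bits, device_cps=1e9, device_power_w=2.0,
           tx_power_w=1.0, servers=()):
    # Pick the option with the lowest equally-weighted time/energy cost.
    best = ("local", *local_cost(task_cycles, device_cps, device_power_w))
    for s in servers:
        t, e = offload_cost(task_cycles, input_bits, s, tx_power_w)
        if 0.5 * t + 0.5 * e < 0.5 * best[1] + 0.5 * best[2]:
            best = (s.name, t, e)
    return best

if __name__ == "__main__":
    servers = [EdgeServer("edge-1", 8e9, 20e6, 0.01), EdgeServer("edge-2", 4e9, 50e6, 0.02)]
    print(decide(task_cycles=2e9, input_bits=8e6, servers=servers))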
Santi, Nina. "Prédiction des besoins pour la gestion de serveurs mobiles en périphérie". Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILB050.
Multi-access edge computing is an emerging paradigm within the Internet of Things (IoT) that complements cloud computing. This paradigm proposes deploying computing servers close to users, reducing the pressure on, and the cost of, the local network infrastructure. This proximity to users gives rise to new use cases, such as the deployment of mobile servers mounted on drones or robots, offering a cheaper, more energy-efficient, and more flexible alternative to fixed infrastructure for one-off or exceptional events. However, this approach also raises new challenges for the deployment and allocation of resources, in both time and space, which are often battery-dependent. In this thesis, we propose predictive tools and algorithms for making decisions about the allocation of fixed and mobile resources, in both time and space, within dynamic environments. We provide rich and reproducible datasets that reflect the heterogeneity inherent in Internet of Things (IoT) applications while exhibiting a high rate of contention and interference. To achieve this, we use the FIT-IoT Lab, an open testbed dedicated to the IoT, and make all the code openly available. In addition, we have developed a tool for generating IoT traces in an automated and reproducible way. We use these datasets to train machine learning algorithms based on regression techniques and evaluate their ability to predict the throughput of IoT applications. In a similar approach, we have also trained and analysed a temporal-transformer neural network to predict several Quality of Service (QoS) metrics. To take the mobility of resources into account, we generate IoT traces integrating mobile access points embedded in TurtleBot robots. These traces, which incorporate mobility, are used to validate and test a federated learning framework based on parsimonious temporal transformers. Finally, we propose a decentralised algorithm for predicting human population density by region, based on a particle filter. We test and validate this algorithm using the Webots simulator in the context of servers embedded in robots, and the ns-3 simulator for the network part.
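As a simple illustration of regression-based throughput prediction of the kind mentioned above, the sketch below trains a generic scikit-learn regressor on synthetic trace features and reports its prediction error. The feature names, data, and model choice are invented for illustration and do not reflect the thesis's datasets or models.

# Minimal sketch of regression-based throughput prediction from IoT trace features.
# Features and target are synthetic; the thesis works on real FIT-IoT Lab traces.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(1, 50, n),        # number of concurrent IoT senders
    rng.uniform(0.1, 10.0, n),     # packet rate per sender (pkt/s)
    rng.uniform(-90, -40, n),      # mean RSSI (dBm)
])
# Synthetic target: throughput degrades with contention and poor signal.
y = 250.0 / X[:, 0] * X[:, 1] * (1 + (X[:, 2] + 90) / 100) + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (kbit/s):", mean_absolute_error(y_te, model.predict(X_te)))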
Ntumba, wa Ntumba Patient. "Ordonnancement d'opérateurs continus pour l'analyse de flux de données à la périphérie de l'Internet des Objets". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS183.
Data stream processing and analytics (DSPA) applications are widely used to process the ever-increasing amounts of data streams produced by highly geographically distributed data sources, such as fixed and mobile IoT devices, in order to extract valuable information in a timely manner for actuation. DSPA applications are typically deployed in the Cloud to benefit from practically unlimited computational resources on demand. However, such centralized and distant computing solutions may suffer from limited network bandwidth and high network delay. Additionally, propagating data to the Cloud may compromise the privacy of sensitive data. To handle this volume of data streams effectively, the emerging Edge/Fog computing paradigm is used as a middle tier between the Cloud and the IoT devices, processing data streams closer to their sources and reducing both network resource usage and the network delay to reach the Cloud. However, Edge/Fog computing comes with limited computational resource capacities and requires deciding which part of the DSPA application should be performed in the Edge/Fog layers while satisfying the application's response time constraint for timely actuation. Furthermore, the computational and network resources across the Edge-Fog-Cloud architecture can be shared among multiple DSPA (and other) applications, which calls for efficient resource usage. In this PhD research, we propose a new model for assessing the usage cost of resources across the Edge-Fog-Cloud architecture. Our model addresses both computational and network resources and enables dealing with the trade-offs that are inherent to their joint usage. It precisely characterizes the usage cost of resources by distinguishing between abundant and constrained resources as well as by considering their dynamic availability, hence covering both resources dedicated to a single DSPA application and shareable resources. We complement our system model with a response time model for DSPA applications that takes into account their windowing characteristics. Leveraging these models, we formulate the problem of scheduling streaming operators over a hierarchical Edge-Fog-Cloud resource architecture. Our target problem presents two distinctive features. First, it aims at jointly optimizing the resource usage cost of computational and network resources, while few existing approaches take computational resources into account in their optimization goals. More precisely, our aim is to schedule a DSPA application in a way that uses the available resources in the most efficient manner. This saves valuable resources for other DSPA (and non-DSPA) applications that share the same resource architecture. Second, it is subject to a response time constraint, while few works have dealt with such a constraint; most approaches for scheduling time-critical (DSPA) applications include the response time in their optimization goals. To solve the formulated problem, we introduce several heuristic algorithms that deal with different versions of the problem: static resource-aware scheduling that each time calculates a new system deployment from the outset, time-aware and resource-aware scheduling, and dynamic scheduling that takes the current deployment into account. Finally, we extensively and comparatively evaluate our algorithms with realistic simulations against several baselines that we either introduce or that originate from, or are inspired by, the existing literature. Our results demonstrate that our solutions advance the current state of the art in scheduling DSPA applications.
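As a toy illustration of placing streaming operators across an Edge-Fog-Cloud hierarchy under a response time constraint, the greedy sketch below picks the cheapest tier that still fits capacity and a latency budget. The cost figures, latency numbers, and the greedy rule are invented for illustration; they are not the heuristics proposed in the thesis.

# Greedy sketch: place a chain of streaming operators on edge, fog or cloud tiers,
# minimizing an illustrative usage cost while respecting a latency budget.
TIERS = {            # tier -> (cost per unit of CPU demand, added network latency in ms)
    "edge":  (1.0,  2),
    "fog":   (0.6, 10),
    "cloud": (0.3, 60),
}
TIER_CAPACITY = {"edge": 4.0, "fog": 10.0, "cloud": float("inf")}

def place(operators, latency_budget_ms):
    """operators: list of (name, cpu_demand, per_tuple_ms). Returns {name: tier}."""
    placement, remaining = {}, dict(TIER_CAPACITY)
    latency = sum(ms for _, _, ms in operators)   # pure processing latency
    for name, cpu, _ in operators:
        # Try tiers from cheapest to most expensive; keep the first one that
        # still has capacity and keeps total latency under the budget.
        for tier in sorted(TIERS, key=lambda t: TIERS[t][0]):
            cost, extra = TIERS[tier]
            if remaining[tier] >= cpu and latency + extra <= latency_budget_ms:
                placement[name] = tier
                remaining[tier] -= cpu
                latency += extra
                break
        else:
            raise RuntimeError(f"no feasible tier for operator {name}")
    return placement

print(place([("filter", 1.0, 1), ("window-agg", 3.0, 5), ("sink", 0.5, 1)], 80))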
Aguiari, Davide. "Exploring Computing Continuum in IoT Systems : sensing, communicating and processing at the Network Edge". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS131.
As the Internet of Things (IoT), originally comprising only a few simple sensing devices, reaches 34 billion units by the end of 2020, these devices can no longer be regarded as mere monitoring sensors. IoT capabilities have improved in recent years as relatively large internal computation and storage capacity have become a commodity. In the early days of the IoT, processing and storage were typically performed in the cloud. New IoT architectures are able to perform complex tasks directly on-device, enabling the concept of an extended computational continuum. Real-time critical scenarios, e.g. autonomous vehicle sensing, area surveying, or disaster rescue and recovery, require all the actors involved to coordinate and collaborate towards a common goal without human interaction, sharing data and resources even in areas with intermittent network coverage. This poses new problems in distributed systems, resource management, and device orchestration, as well as data processing. This work proposes a new orchestration and communication framework, namely CContinuum, designed to manage resources in heterogeneous IoT architectures across multiple application scenarios. The work focuses on two key sustainability macro-scenarios: (a) environmental sensing and awareness, and (b) electric mobility support. In the first case, a mechanism to measure air quality over a long period of time for different applications at global scale (3 continents, 4 countries) is introduced. The system has been developed in-house, from the sensor design to the mist-computing operations performed by the nodes. In the second scenario, a technique to transmit large amounts of fine time-granularity battery data from a moving vehicle to a control center is proposed, together with the ability to allocate tasks on demand within the computing continuum.
Yu, Shuai. "Multi-user computation offloading in mobile edge computing". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS462.
Mobile Edge Computing (MEC) is an emerging computing model that extends the cloud and its services to the edge of the network. Considering the execution of emerging resource-intensive applications in MEC networks, computation offloading is a proven paradigm for enabling resource-intensive applications on mobile devices. Moreover, in view of emerging mobile collaborative applications (MCA), the offloaded tasks can be duplicated when multiple users are in close proximity. This motivates us to design a collaborative computation offloading scheme for multi-user MEC networks. In this context, we study collaborative computation offloading schemes for the scenarios of MEC offloading, device-to-device (D2D) offloading, and hybrid offloading, respectively. In the MEC offloading scenario, we assume that multiple mobile users offload duplicated computation tasks to the network edge servers and share the computation results among them. Our goal is to develop optimal fine-grained collaborative offloading strategies with caching enhancements that minimize the overall execution delay on the mobile terminal side. To this end, we propose an optimal offloading with caching-enhancement scheme (OOCS) for the femto-cloud scenario and the mobile edge computing scenario, respectively. Simulation results show that, compared to six alternative solutions from the literature, our single-user OOCS can reduce execution delay by up to 42.83% and 33.28% for single-user femto-cloud and single-user mobile edge computing, respectively. Moreover, our multi-user OOCS can further reduce delay by 11.71% compared to single-user OOCS through users' cooperation. In the D2D offloading scenario, we assume that duplicated computation tasks are processed on specific mobile users and that computation results are shared through a device-to-device (D2D) multicast channel. Our goal here is to find an optimal network partition for D2D multicast offloading, in order to minimize the overall energy consumption on the mobile terminal side. To this end, we first propose a D2D multicast-based computation offloading framework where the problem is modelled as a combinatorial optimization problem, which is then solved using concepts from maximum weighted bipartite matching and coalitional games. Note that our proposal considers the delay constraint for each mobile user as well as the battery level to guarantee fairness. To gauge the effectiveness of our proposal, we simulate three typical interactive components. Simulation results show that our algorithm can significantly reduce energy consumption while guaranteeing battery fairness among multiple users. We then extend D2D offloading to hybrid offloading with social relationships taken into consideration. In this context, we propose a hybrid multicast-based task execution framework for mobile edge computing, where a crowd of mobile devices at the network edge leverage network-assisted D2D collaboration for wireless distributed computing and outcome sharing. The framework is social-aware in order to build effective D2D links [...]
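As a small illustration of the maximum weighted bipartite matching idea referenced above, the sketch below assigns duplicated task groups to helper devices with SciPy's assignment solver. The utility matrix is invented, and the thesis's full formulation (which also uses coalitional games, delay constraints, and battery fairness) is considerably richer.

# Sketch: assign task groups to helper devices by maximum weighted bipartite matching.
# Utility values are arbitrary example numbers.
import numpy as np
from scipy.optimize import linear_sum_assignment

# utility[i][j]: energy saved if helper j processes task group i
utility = np.array([
    [4.0, 1.5, 0.5],
    [2.0, 3.5, 1.0],
    [0.5, 2.0, 3.0],
])

# linear_sum_assignment minimizes cost, so negate utilities to maximize total utility.
rows, cols = linear_sum_assignment(-utility)
for task, helper in zip(rows, cols):
    print(f"task group {task} -> helper device {helper} (utility {utility[task, helper]})")
print("total utility:", utility[rows, cols].sum())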
De, Souza Felipe Rodrigo. "Scheduling Solutions for Data Stream Processing Applications on Cloud-Edge Infrastructure". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN082.
Technology has evolved to a point where applications and devices are highly connected and produce ever-increasing amounts of data used by organizations and individuals to make daily decisions. For the collected data to become information that can be used in decision making, it requires processing. The speed at which information is extracted from data generated by a monitored system or environment affects how fast organizations and individuals can react to changes. One way to process the data under short delays is through Data Stream Processing (DSP) applications. DSP applications can be structured as directed graphs, where the vertexes are data sources, operators, and data sinks, and the edges are streams of data that flow throughout the graph. A data source is an application component responsible for data ingestion. Operators receive a data stream, apply some transformation or user-defined function over the data stream, and produce a new output stream, until the latter reaches a data sink, where the data is stored, visualized, or provided to another application.
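To make the source, operator, and sink structure described above concrete, here is a minimal sketch of a DSP graph built from plain Python generators. It only illustrates how data flows along the edges of such a graph; it has nothing to do with the scheduling solutions the thesis develops.

# Minimal sketch of a DSP application as a source -> operator -> sink pipeline.
import random

def source(n=10):
    """Data source: ingests (here, generates) raw temperature readings."""
    for _ in range(n):
        yield {"sensor": "s1", "temp_c": random.uniform(15.0, 35.0)}

def filter_hot(stream, threshold=30.0):
    """Operator: user-defined function keeping only readings above a threshold."""
    for event in stream:
        if event["temp_c"] > threshold:
            yield event

def to_fahrenheit(stream):
    """Operator: transformation producing a new output stream."""
    for event in stream:
        yield {**event, "temp_f": event["temp_c"] * 9 / 5 + 32}

def sink(stream):
    """Data sink: stores or displays the final events."""
    for event in stream:
        print(event)

# The edges of the graph are the streams flowing between the components.
sink(to_fahrenheit(filter_hot(source())))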
Rasse, Alban. "Une Approche Orientée Modèles pour la Spécification, la Vérification et l’Implantation des Systèmes Logiciels Critiques". Mulhouse, 2006. https://www.learning-center.uha.fr/opac/resource/une-approche-orientee-modeles-pour-la-specification-la-verification-et-limplantation-des-systemes-lo/BUS3944436.
Morgan, Benoît. "Protection des systèmes informatiques vis-à-vis des malveillances : un hyperviseur de sécurité assisté par le matériel". Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0026/document.
Computer systems are evolving quickly nowadays. The classical model, which consists in associating a physical machine with every user, is becoming obsolete. Today, the computing resources we use can be distributed anywhere on the Internet, and typical workstations are no longer systematically physical machines. This highlights two important phenomena driving the evolution of how we use computers: cloud computing and hardware virtualization. Cloud computing enables users to exploit computing resources, at fine granularity and for a non-predefined amount of time, from a cloud of resources. Resource usage is then billed to the user. This model is obviously attractive for a company that wants to rely on a potentially unlimited amount of resources without having to administer and manage them. A company can thereby increase its productivity and, furthermore, save money. From the point of view of the physical machine owner, the financial gain from leasing computing power is multiplied by optimizing machine usage across different clients. Cloud computing must be able to adapt quickly to fluctuating demand and to reconfigure itself quickly. One way to reach these goals relies on virtual machines and the associated virtualization techniques. Even though virtualization of computing resources was not introduced by the cloud, the arrival of the cloud substantially increased its usage. Nowadays, every cloud provider uses virtual machines, which are much easier to deploy and migrate than physical machines. Virtualization of computing resources was previously based essentially on software techniques, but the increasing usage of virtual machines, in particular in cloud computing, led microprocessor manufacturers to include hardware virtualization assistance mechanisms. These hardware extensions make the virtualization process easier on the one hand and improve performance on the other. Thus, technologies such as Intel VT-x and VT-d, AMD-V by AMD, and the virtualization extensions by ARM have been created. Besides, the virtualization process requires extra functionality to manage the different virtual machines, schedule them, isolate them, and share hardware resources such as memory and peripherals. These functionalities are generally handled by a virtual machine manager, whose work can be more or less eased by the characteristics of the processor on which it executes. In general, these technologies introduce new execution modes on processors, which are more and more privileged and complex. Thus, even if virtualization is of real interest for modern computer science, it is also clear that its implementation adds complexity to computer systems, both software and hardware complexity. From this observation, it is legitimate to ask about computer security in a context where processor architectures are becoming more and more complex, with more and more privileged execution modes. Given the presence of multiple virtual machines, which do not trust each other, on the same physical machine, is it possible for the exploitation of a vulnerability to be carried out by a compromised virtual machine? Isn't it necessary to consider new security architectures that take these risks into account? This thesis tries to answer these questions.
In particular, we present the state of the art of security issues in virtualized environments on modern architectures. Building on this work, we propose an original architecture ensuring the integrity of a piece of software executing on a computer system, regardless of its privilege level. This architecture relies on both software, a security hypervisor, and hardware, a trusted peripheral, which we have designed and implemented.
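As a loose illustration of the measure-then-compare principle behind such integrity checks, the sketch below hashes a code region and compares it against a known-good reference. The thesis's actual design relies on a security hypervisor and a trusted peripheral, not on application-level code like this; the byte strings here are arbitrary examples.

# Loose illustration of integrity measurement: hash a code region and compare it
# against a reference measurement taken when the software is known to be genuine.
import hashlib

def measure(code_bytes: bytes) -> str:
    return hashlib.sha256(code_bytes).hexdigest()

reference = measure(b"\x55\x48\x89\xe5\x90\xc3")   # example "trusted" code bytes

def check_integrity(current_code: bytes) -> bool:
    return measure(current_code) == reference

print(check_integrity(b"\x55\x48\x89\xe5\x90\xc3"))   # True: unmodified
print(check_integrity(b"\x55\x48\x89\xe5\xcc\xc3"))   # False: tampered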
Pisano, Jean-Baptiste. "Histoire, histoires et outil informatique : l'application Tabellion pour l'étude du Sartenais : une région périphérique de la France bourgeoise, d'après les actes notariés : dynamique interne, permanences, et mutations socio-économiques". Nice, 1995. http://www.theses.fr/1995NICE2043.
Vittoria, Claude. "Études et principes de conception d'une machine langage Java : le processeur bytecode". Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/vittoria.pdf.
Nowadays, the democratization of the Internet facilitates the downloading of applications. However, the risk that these applications are corrupted with malicious intent could affect the integrity of the system that executes them and the security of its data. The Java language provides properties such as code integrity checking and safety enforcement for applications to mitigate these risks. We tried to use the Java language to build a minimal platform dedicated to the bytecode processor. We identified the elements missing from a JVM for writing an operating system, such as the inability to natively handle hardware resources, as well as the features already provided yet required by an operating system, and therefore dependent on a specific implementation, such as managing workflows.
Books on the topic "Informatique en périphérie"
Hazzah, Karen. Writing Windows VxDs and Device Drivers. Lawrence, Kan.: R & D Publications, 1995.
Hazzah, Karen. Writing Windows VxDs and Device Drivers. 2nd ed. Lawrence: R&D Books, 1997.