Dissertations on the topic "Délais de bout en bout"
Consult the top 50 dissertations for research on the topic "Délais de bout en bout".
Nguyen, Huu-Nghi. "Estimation de l’écart type du délai de bout-en-bout par méthodes passives." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1044/document.
Since the early days of the Internet, the amount of data exchanged over networks has grown exponentially. The devices deployed on networks are highly heterogeneous, owing to the growing presence of middleboxes (e.g., firewalls, NAT routers, VPN servers, proxies). The algorithms run on networking devices (e.g., routing, spanning tree) are often complex, closed, and proprietary, while the interfaces to access these devices typically vary from one manufacturer to another. All these factors tend to hinder the understanding and management of networks. A new paradigm has therefore been introduced to ease the design and management of networks: Software-Defined Networking (SDN). In particular, SDN defines a new entity, the controller, which is in charge of controlling the devices belonging to the data plane. Thus, in an SDN network, the data plane, handled by networking devices called virtual switches, is separated from the control plane, which takes the decisions and is executed by the controller. For the controller to take its decisions, it must have a global view of the network, including the topology and link capacities, along with other performance metrics such as delays, loss rates, and available bandwidths. This knowledge can enable multi-class routing or help guarantee levels of Quality of Service. The contributions of this thesis are new algorithms that allow a centralized entity, such as the controller in an SDN network, to accurately estimate the end-to-end delay of a given flow in its network. The proposed methods are passive in the sense that they do not require any additional traffic to be injected. More precisely, we study the expectation and the standard deviation of the delay. We show how the first moment can be computed easily. Estimating the standard deviation, on the other hand, is much more complex because of the correlations between the different waiting times.
We show that the proposed methods are able to capture these correlations between delays and thus provide accurate estimations of the standard deviation of the end-to-end delay. Simulations covering a large range of possible scenarios validate these results.
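The difficulty the abstract points to, namely that the variance of a sum of per-hop delays includes covariance terms, can be illustrated with a small simulation. All distributions and values below are made up for illustration and are not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate correlated per-hop delays on a 3-hop path: a shared congestion
# term makes the hop delays positively correlated (illustrative values).
n_samples = 200_000
shared = rng.exponential(1.0, n_samples)
d1 = 2.0 + shared + rng.exponential(0.5, n_samples)
d2 = 1.5 + shared + rng.exponential(0.5, n_samples)
d3 = 1.0 + shared + rng.exponential(0.5, n_samples)
D = d1 + d2 + d3                       # end-to-end delay

# Naive estimate: assume hops are independent (ignores covariances).
naive_std = np.sqrt(d1.var() + d2.var() + d3.var())

# Full estimate: Var(D) = sum of all variance AND covariance terms.
cov = np.cov(np.vstack([d1, d2, d3]))
full_std = np.sqrt(cov.sum())

print(naive_std, full_std, D.std())    # the naive estimate is too low
```

The naive estimate understates the true standard deviation precisely because the covariance terms are positive here, which is the effect the thesis's estimators are designed to capture.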
Jonglez, Baptiste. "Mécanismes de bout en bout pour améliorer la latence dans les réseaux de communication." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM048.
The network technologies that underpin the Internet have evolved significantly over the last decades, but one aspect of network performance has remained relatively unchanged: latency. In 25 years, the typical capacity or "bandwidth" of transmission technologies has increased by 5 orders of magnitude, while latency has barely improved by an order of magnitude. Indeed, there are hard limits on latency, such as the propagation delay, which remains ultimately bounded by the speed of light. This diverging evolution between capacity and latency is having a profound impact on protocol design and performance, especially in the area of transport protocols. It indirectly caused the Bufferbloat problem, whereby router buffers are persistently full, increasing latency even more. In addition, the requirements of end-users have changed, and they expect applications to be much more reactive. As a result, new techniques are needed to reduce the latency experienced by end-hosts. This thesis aims at reducing the experienced latency by using end-to-end mechanisms, as opposed to "infrastructure" mechanisms. Two end-to-end mechanisms are proposed. The first is to multiplex several messages or data flows into a single persistent connection. This allows better measurements of network conditions (latency, packet loss); this, in turn, enables better adaptation such as faster retransmission. I applied this technique to DNS messages, where I show that it significantly improves end-to-end latency in case of packet loss. However, depending on the transport protocol used, messages can suffer from Head-of-Line blocking: this problem can be solved by using QUIC or SCTP instead of TCP. The second proposed mechanism is to exploit multiple network paths (such as Wi-Fi, wired Ethernet, 4G). The idea is to use low-latency paths for latency-sensitive network traffic, while bulk traffic can still exploit the aggregated capacity of all paths.
This idea was partially realized by Multipath TCP, but it lacks support for multiplexing. Adding multiplexing allows data flows to cooperate and ensures that the scheduler has better visibility on the needs of individual data flows. This effectively amounts to a scheduling problem that was identified only very recently in the literature as "stream-aware multipath scheduling". My first contribution is to model this scheduling problem. As a second contribution, I propose a new stream-aware multipath scheduler, SRPT-ECF, that improves the performance of small flows without impacting larger flows. This scheduler could be implemented as part of an MPQUIC (Multipath QUIC) implementation. More generally, these results open new opportunities for cooperation between flows, with applications such as improving WAN aggregation.
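To make the "stream-aware multipath scheduling" problem concrete, here is a toy scheduler in the SRPT spirit: serve the stream with the least remaining bytes first, on the path expected to finish the next chunk earliest. This illustrates the problem class only; it is not the thesis's SRPT-ECF algorithm, and all stream names and path rates are made up:

```python
# Toy stream-aware multipath scheduler: SRPT stream choice + earliest-
# completion path choice. Illustrative sketch, not the SRPT-ECF algorithm.

def schedule(streams, paths, chunk=10):
    """streams: {name: remaining bytes}; paths: {name: bytes per ms}.
    Returns stream names in completion order."""
    streams = dict(streams)                 # don't mutate the caller's dict
    free_at = {p: 0.0 for p in paths}       # when each path is next free
    done = []
    while streams:
        s = min(streams, key=streams.get)   # SRPT: smallest remaining first
        size = min(chunk, streams[s])
        # pick the path completing this chunk earliest (ECF-like choice)
        p = min(paths, key=lambda q: free_at[q] + size / paths[q])
        free_at[p] += size / paths[p]
        streams[s] -= size
        if streams[s] <= 0:
            del streams[s]
            done.append(s)
    return done

order = schedule({"web": 30, "bulk": 200, "dns": 10},
                 {"wifi": 50.0, "lte": 20.0})
print(order)  # short flows complete before the bulk transfer
```

The point of stream-awareness is visible here: because the scheduler sees per-stream remaining sizes, small latency-sensitive flows finish early while the bulk flow still uses the aggregate capacity.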
Bauer, Henri. "Analyse pire cas de flux hétérogènes dans un réseau embarqué avion." Thesis, Toulouse, INPT, 2011. http://www.theses.fr/2011INPT0008/document.
The certification process for avionics networks requires guarantees on data transmission delays. However, calculating the worst-case delay can be complex in the case of industrial AFDX (Avionics Full Duplex Switched Ethernet) networks. Tools such as Network Calculus provide a pessimistic upper bound on this worst-case delay. The communication needs of modern commercial aircraft are expanding, and a growing number of flows with various constraints and characteristics must share already existing resources. Currently deployed AFDX networks do not differentiate multiple classes of traffic: messages are processed in their arrival order in the output ports of the switches (FIFO servicing policy). The purpose of this thesis is to show that it is possible to provide upper bounds on end-to-end transmission delays in networks that implement more advanced servicing policies, based on static priorities (Priority Queuing) or on fairness (Fair Queuing). We show how the trajectory approach, based on scheduling theory in asynchronous distributed systems, can be applied to current and future AFDX networks (supporting advanced servicing policies with flow differentiation capabilities). We compare the performance of this approach with the reference tools whenever possible, and we study the pessimism of the computed upper bounds.
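The simplest instance of the kind of bound Network Calculus provides: a token-bucket-constrained flow (burst b bits, sustained rate r) crossing a rate-latency server (service rate R, latency T) has worst-case delay at most T + b/R when r ≤ R. This is the textbook network-calculus result, shown here with made-up AFDX-like numbers, not a real aircraft configuration:

```python
# Textbook network-calculus delay bound for one node: token-bucket arrival
# curve b + r*t, rate-latency service curve R*(t - T)+. Hypothetical values.

def delay_bound(b, r, R, T):
    assert r <= R, "stability requires the flow rate not to exceed R"
    return T + b / R

# 4000-bit burst, 1 Mbit/s flow on a 100 Mbit/s output port, 16 us latency:
bound = delay_bound(b=4000, r=1e6, R=100e6, T=16e-6)
print(f"{bound * 1e6:.0f} us")  # 56 us
```

The pessimism the thesis studies comes from chaining such per-node bounds along a path; the trajectory approach tightens the end-to-end result compared with summing them naively.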
Garnier, Ilias. "Formalisme pour la conception haut-niveau et détaillée de systèmes de contrôle-commande critiques." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00676901.
Kemayo, Georges Arnaud. "Evaluation et validation des systèmes distribués avioniques." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2014. http://www.theses.fr/2014ESMA0010/document.
Avionics systems are subject to hard real-time and criticality constraints. To certify these systems, it is necessary to compute an upper bound on the end-to-end delay of each message transmitted in the network. In this thesis, we mainly focus on civil avionics systems that use the AFDX (Avionics Full Duplex Switched Ethernet) network, introduced in the Airbus A380 architecture. In this context, we focus on the computation of the end-to-end delays of messages crossing the network. Among the existing methods, we are interested in the trajectory approach previously proposed by researchers, whose goal is to compute upper bounds on the end-to-end delays of messages in the nodes of an AFDX network. As a first contribution, we prove that the end-to-end delays computed by this method can be optimistic. This means that, without modification, it cannot be used to validate transmission end-to-end delays for the AFDX. Despite the identification of these optimism problems in the trajectory approach, removing them does not appear simple from our point of view. Hence, as a second contribution, we propose a new approach to compute these delays based on the characterization of the worst-case traffic encountered by a packet at each crossed node.
Ghandi, Sanaa. "Analysis of network delay measurements : Data mining methods for completion and segmentation." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2023. http://www.theses.fr/2023IMTA0382.
The exponential growth of the Internet requires regular monitoring of network metrics. This thesis focuses on round-trip delays and on addressing the problems of missing data and multivariate segmentation. The first contribution includes the orchestration of delay measurement campaigns, as well as the development of a simulator that generates end-to-end delay traces. The second contribution is the introduction of two missing-data completion methods. The first is based on non-negative matrix factorization, while the second uses collaborative neural filtering. Tested on synthetic and real data, these methods demonstrate their efficiency and accuracy. The third contribution involves multivariate delay segmentation. This approach is based on hierarchical clustering and is implemented in two stages. First, the delay time series are grouped so as to obtain, within the same group, series with similar and synchronous variations and trends. Next, the multivariate segmentation step collectively and jointly segments the series within each group, using hierarchical clustering followed by post-processing with the Viterbi algorithm to smooth the segmentation result. This method was tested on real delay traces from two major events affecting two Internet Exchange Points (IXPs). The results show that it approaches the state of the art in segmentation while significantly reducing computation time and cost.
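The first completion method mentioned, non-negative matrix factorization with missing entries, can be sketched as masked multiplicative updates: the factorization is fitted only on observed entries, and the low-rank product fills in the hidden ones. This illustrates the general technique, not the thesis's exact algorithm or data:

```python
# Sketch of delay-matrix completion via masked NMF (multiplicative updates
# applied only where data is observed). Synthetic low-rank "delay" matrix.
import numpy as np

def masked_nmf(X, mask, rank=2, iters=500, eps=1e-9):
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    Xm = X * mask                       # zero-out the missing entries
    for _ in range(iters):
        WH = W @ H
        W *= (Xm @ H.T) / (((mask * WH) @ H.T) + eps)
        WH = W @ H
        H *= (W.T @ Xm) / ((W.T @ (mask * WH)) + eps)
    return W @ H                        # low-rank completion of X

rng = np.random.default_rng(1)
true = rng.random((20, 3)) @ rng.random((3, 15)) + 0.1   # rank-3, positive
mask = rng.random(true.shape) > 0.2                      # ~20% hidden
est = masked_nmf(true, mask, rank=3)
err = np.abs(est - true)[~mask].mean()
print(err)   # mean error on the hidden entries only
```

Because delay matrices across source-destination pairs tend to be approximately low-rank, the reconstruction on hidden entries stays close to the truth.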
Despaux, François. "Modeling and evaluation of the end-to-end delay in wireless sensor networks." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0100/document.
In this thesis, we propose an approach that combines measurements and analytical methods to infer a Markov chain model from MAC protocol execution traces, in order to estimate the end-to-end delay in multi-hop transmission scenarios. This approach captures the main features of WSNs; hence, a suitable Markov chain for modelling the WSN is inferred. By means of an approach based on frequency-domain analysis, the end-to-end delay distribution for multi-hop scenarios is found. This is an important contribution with regard to existing analytical approaches, where extending the models to multi-hop scenarios is not possible because the arrival distribution at intermediate nodes is not known. Since the local delay distribution of each node is obtained by analysing the MAC protocol execution traces for a given traffic scenario, the obtained model (and therefore the whole end-to-end delay distribution) is traffic-dependent. To overcome this problem, we propose an approach based on non-linear regression techniques to generalise our model in terms of the traffic rate. Results were validated for different MAC protocols (X-MAC, ContikiMAC, IEEE 802.15.4) as well as a well-known routing protocol (RPL) over real test-beds (IoT-LAB).
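The frequency-domain step, combining per-hop delay distributions into an end-to-end distribution, amounts to a convolution, which becomes a product of Fourier transforms. A minimal sketch, assuming independent hops and made-up per-hop PMFs on a common time grid (not measured trace data):

```python
import numpy as np

def end_to_end_pmf(hop_pmfs):
    """Convolve per-hop delay PMFs (arrays on a common time grid) into the
    end-to-end delay PMF, via FFT: convolution = product of spectra."""
    n = sum(len(p) for p in hop_pmfs) - len(hop_pmfs) + 1   # output length
    size = 1 << (n - 1).bit_length()                        # FFT length
    spectrum = np.ones(size, dtype=complex)
    for p in hop_pmfs:
        spectrum *= np.fft.fft(p, size)
    pmf = np.fft.ifft(spectrum).real[:n]
    return np.clip(pmf, 0, None)        # clean tiny negative round-off

# Illustrative per-hop delay PMFs on a 1 ms grid:
hop1 = np.array([0.7, 0.2, 0.1])        # P(delay = 0/1/2 ms)
hop2 = np.array([0.5, 0.5])             # P(delay = 0/1 ms)
pmf = end_to_end_pmf([hop1, hop2])
print(pmf)                              # P(total delay = 0..3 ms)
```

In the thesis's setting the per-hop distributions come from MAC-layer execution traces rather than being assumed, which is what sidesteps the unknown arrival distributions at intermediate nodes.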
Hotescu, Oana Andreea. "Vers la convergence de réseaux dans l'avionique." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0044.
AFDX is the standard switched Ethernet solution for transmitting avionic flows. Today's AFDX deployments in commercial aircraft are lightly loaded to ensure the determinism of control and command operations. This thesis investigates a practical alternative envisioned by manufacturers that takes advantage of the remaining AFDX bandwidth to transfer additional non-avionic flows (video, audio, service). These flows must not compromise the in-time arrival of avionic ones. Thus, appropriate scheduling policies for multiplexing avionic flows with non-avionic flows are required at the emitting end systems and switch egress ports. We mainly focus on the transmission of additional flows carrying video streams from cameras located on the airplane to the cockpit display. Multiplexing avionic flows with video flows is tackled by introducing table scheduling at the emitting end systems and a two-priority-level SPQ service policy at switch egress ports. This solution preserves the real-time constraints of avionic flows but may introduce variations in the end-to-end delay of video ones. An appropriate allocation of slots to avionic flows in the scheduling table can reduce the emission lag of video flows and thus limit their delay variations. We propose two strategies to allocate avionic flows in the scheduling table: a simple one based on heuristics and an optimal one. Optimal schedules are derived by solving a constraint programming model minimizing the emission lag of video flows. For light-traffic end systems, the heuristic allocation is close to optimal.
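Why slot placement in the scheduling table matters can be shown with a toy model: video frames wait for the next free slot, so the worst video emission lag is the longest run of consecutive avionic slots in the cyclic table. The slot layouts below are hypothetical, not an actual AFDX end-system configuration:

```python
# Toy illustration of table scheduling: 'A' = slot reserved for an avionic
# flow, '-' = free slot usable by video. Spreading the same avionic load
# over the cycle shrinks the worst wait for a video frame.

def worst_video_lag(table):
    """table: cyclic list of 'A' or '-'. Worst wait (in slots) for a video
    frame arriving at any point in the cycle."""
    doubled = table + table              # account for wrap-around runs
    worst = run = 0
    for slot in doubled:
        run = run + 1 if slot == "A" else 0
        worst = max(worst, run)
    return min(worst, len(table))

clustered = list("AAAA----")             # avionic slots bunched together
spread    = list("A-A-A-A-")             # same load, evenly spread
print(worst_video_lag(clustered), worst_video_lag(spread))
```

This is the quantity the thesis's heuristic and constraint-programming allocations minimize: the even layout bounds the video lag at one slot, the clustered one at four.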
Floquet, Julien. "Mécanismes auto-organisants pour connexions de bout en bout." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC102/document.
Fifth-generation networks are being defined and their different components are beginning to emerge: new technologies for radio access, fixed and mobile convergence of networks, and virtualization. End-to-end (E2E) control and management of the network are of particular importance for network performance. With this in mind, we segment the work of the thesis into two parts: the radio access network (RAN), with a focus on Massive MIMO (M-MIMO) technology, and the E2E connection from the point of view of the transport layer. In the first part, we consider hierarchical beamforming in wireless networks. For a given population of flows, we propose computationally efficient algorithms for fair rate allocation. We next propose closed-form formulas for flow-level performance, for both elastic traffic (with either proportional fairness or max-min fairness) and streaming traffic. We further assess the performance of hierarchical beamforming using numerical experiments. In the second part, we identify an application of SON, namely the control of the starvation probability of a video streaming service. The buffer receives data from a server over an E2E connection using the TCP protocol. We propose a model that describes the evolution of the buffer content and we compare the analytical formulas obtained with simulations. Finally, we propose a SON function that, by adjusting the application video rate, achieves a target starvation probability.
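The starvation-probability quantity the SON function controls can be illustrated with a toy fluid simulation: data arrives at a fluctuating rate (standing in for TCP throughput), is drained at the video playback rate, and a session "starves" if the buffer empties mid-playback. All rates, durations and the arrival model are made up for illustration:

```python
import random

def starvation_probability(video_rate, mean_tcp_rate=1.0, prebuffer=5.0,
                           duration=200, runs=2000, seed=42):
    """Monte-Carlo estimate: fraction of sessions whose buffer empties.
    Rates are in seconds of video per second of wall time."""
    rng = random.Random(seed)
    starved = 0
    for _ in range(runs):
        buffer = prebuffer               # seconds of prebuffered video
        for _ in range(duration):        # 1-second steps
            arrival = rng.expovariate(1 / mean_tcp_rate)
            buffer += arrival - video_rate
            if buffer <= 0:
                starved += 1
                break
    return starved / runs

low = starvation_probability(video_rate=0.6)    # rate below mean throughput
high = starvation_probability(video_rate=1.1)   # rate above mean throughput
print(low, high)
```

A SON loop in the spirit of the abstract would then adjust `video_rate` until the estimated probability matches the target; the monotone dependence visible here is what makes such a control loop workable.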
Sallantin, Renaud. "Optimisation de bout-en-bout du démarrage des connexions TCP." Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/12180/1/sallantin.pdf.
Medlej, Sara. "Scalable Trajectory Approach for ensuring deterministic guarantees in large networks." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112168/document.
In critical real-time systems, any faulty behavior may endanger lives. Hence, system verification and validation are essential before deployment. In fact, safety authorities require deterministic guarantees. In this thesis, we are interested in offering temporal guarantees; in particular, we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach stood out as a good candidate due to the tightness of the bound it offers. This method uses results established by scheduling theory to derive an upper bound. The reasons leading to a pessimistic upper bound are investigated. Moreover, since the method must be applied to large networks, it is important to be able to give results in an acceptable time frame; hence, a study of the method's scalability was carried out. The analysis shows that the complexity of the computation is due to recursive and iterative processes: as the number of flows and switches increases, the total runtime required to compute the upper bound of every flow in the network under study grows rapidly. Building on the concept of the Trajectory Approach, we propose to compute an upper bound in a reduced time frame and without significant loss of precision; we call this the Scalable Trajectory Approach. After applying it to a network, simulation results show that the total runtime was reduced from several days to a dozen seconds.
Benammar, Nassima. "Modélisation, évaluation et validation des systèmes temps réel distribués." Thesis, Poitiers, 2018. http://www.theses.fr/2018POIT2282/document.
In this thesis, we analyze networks in the context of distributed real-time systems, especially in the fields of avionics, with "Avionics Full DupleX Switched Ethernet" (AFDX), and automotive, with "Audio Video Bridging Ethernet" (AVB). For such applications, network determinism needs to be guaranteed. This involves, in particular, assessing a guaranteed bound on the end-to-end traversal time across the network for each frame, and dimensioning the buffers in order to avoid any loss of frames due to buffer overflow. There are several methods for worst-case delay analysis, and we have mainly worked on the "Forward end-to-end Delay Analysis" (FA) method. FA had already been developed for the "First-In-First-Out" scheduling policy in the AFDX context, so we generalized it to any switched Ethernet network. We also extended it to handle static priorities and the AVB protocol's shaping policy, the "Credit Based Shaper" (CBS). Each contribution has been formally proved, and experiments have been led on industrial configurations, comparing our results with those of competing approaches. Finally, we developed and formally proved an approach for buffer dimensioning in terms of number of frames. This approach has also been tested on an industrial configuration and has produced tight bounds.
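The flavor of buffer dimensioning in frames can be sketched with the textbook network-calculus backlog bound (not the thesis's FA method): a token-bucket flow (burst b, rate r) crossing a rate-latency server (rate R, latency T) backlogs at most b + r·T bits, and the worst case in frames is obtained by dividing by the smallest frame size. All numbers are hypothetical, not an industrial configuration:

```python
# Minimal buffer-dimensioning sketch: network-calculus backlog bound,
# converted to a frame count using the minimum frame size.
import math

def buffer_size_frames(b, r, R, T, min_frame_bits):
    """Upper bound on buffer occupancy, expressed in frames."""
    assert r <= R, "flow rate must not exceed service rate"
    backlog_bits = b + r * T          # textbook backlog bound (r <= R)
    return math.ceil(backlog_bits / min_frame_bits)

# 4000-bit burst, 1 Mbit/s flow, 100 Mbit/s port with 16 us latency,
# 512-bit (64-byte) minimal frames:
frames = buffer_size_frames(b=4000, r=1e6, R=100e6, T=16e-6,
                            min_frame_bits=512)
print(frames)  # 8
```

Dividing by the minimal frame size is the conservative choice: the backlog holds the most frames when every queued frame is as small as possible.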
Urvoy-Keller, Guillaume. "Qualite de service de bout en bout et algorithmes d'admission d'appel." Paris 6, 1999. http://www.theses.fr/1999PA066509.
Carbajal, Guillaume. "Apprentissage profond bout-en-bout pour le rehaussement de la parole." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0017.
This PhD falls within the development of hands-free telecommunication systems, more specifically smart speakers in domestic environments. The user interacts with another speaker at a far-end point and is typically a few meters away from this kind of system. The microphones are likely to capture sounds of the environment which are added to the user's voice, such as background noise, acoustic echo and reverberation. These types of distortion degrade speech quality, intelligibility and listening comfort for the far-end speaker, and must be reduced. Filtering methods can reduce each of these types of distortion individually; reducing all of them implies combining the corresponding filtering methods. As these methods interact with each other in ways that can deteriorate the user's speech, they must be jointly optimized. First of all, we introduce an acoustic echo reduction approach which combines an echo cancellation filter with a residual echo postfilter designed to adapt to the echo cancellation filter. To do so, we propose to estimate the postfilter coefficients using the short-term spectra of multiple known signals, including the output of the echo cancellation filter, as inputs to a neural network. We show that this approach improves the performance and the robustness of the postfilter in terms of echo reduction, while limiting speech degradation, in several scenarios under real conditions. Secondly, we describe a joint approach for multichannel reduction of echo, reverberation and noise. We propose to simultaneously model the target speech and the undesired residual signals after echo cancellation and dereverberation in a probabilistic framework, and to jointly represent their short-term spectra by means of a recurrent neural network. We develop a block-coordinate ascent algorithm to update the echo cancellation and dereverberation filters, as well as the postfilter that reduces the undesired residual signals. We evaluate our approach on real recordings in different conditions. We show that it improves speech quality and the reduction of echo, reverberation and noise compared to a cascade of individual filtering methods and to another joint reduction approach. Finally, we present an online version of our approach which is suitable for time-varying acoustic conditions. We evaluate the perceptual quality achieved on real examples where the user moves during the conversation.
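The first stage of such a pipeline, the adaptive echo cancellation filter, can be sketched with a classic time-domain NLMS canceller (the thesis then adds a learned residual-echo postfilter on top). Signals and the echo path below are synthetic, for illustration only:

```python
# Sketch of adaptive echo cancellation with NLMS: adapt an FIR filter so
# its output mimics the echo of the far-end signal at the microphone, and
# keep the residual. Synthetic echo path; no near-end speech in this toy.
import numpy as np

def nlms_echo_cancel(far_end, mic, order=32, mu=0.5, eps=1e-8):
    w = np.zeros(order)
    out = np.zeros(len(mic))
    for n in range(order, len(mic)):
        x = far_end[n - order:n][::-1]      # most recent far-end samples
        echo_hat = w @ x                    # estimated echo
        e = mic[n] - echo_hat               # residual after cancellation
        w += mu * e * x / (x @ x + eps)     # normalized LMS update
        out[n] = e
    return out

rng = np.random.default_rng(0)
far = rng.standard_normal(20_000)
echo_path = np.array([0.0, 0.5, 0.3, -0.2])       # toy room impulse response
mic = np.convolve(far, echo_path)[: len(far)]     # microphone = echo only
residual = nlms_echo_cancel(far, mic)
print(mic[-5000:].var(), residual[-5000:].var())  # residual power collapses
```

In the echo-only case the filter converges to the true impulse response; with near-end speech present, what remains is speech plus residual echo, which is exactly what the learned postfilter is asked to clean up.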
Starzetz, Paul. "Amélioration des performances de bout-en-bout dans des réseaux sans fil." Grenoble INPG, 2006. http://www.theses.fr/2006INPG0100.
This thesis makes a threefold contribution. The first part provides a statistical approach to analysing CSMA/CA-type channel access algorithms, with a focus on the IEEE 802.11 WLAN standard, and delivers novel insights into the structure of the IEEE 802.11 CSMA/CA access algorithm. Furthermore, a proposition is made to base shared channel access on a set of hash functions and distributed information exchange. The second part investigates a practical WLAN/LAN integration scenario in the context of the end-to-end performance of today's Internet protocols, with a focus on the classical Transmission Control Protocol (TCP). Performance issues of standard TCP in such a WLAN/LAN integration scenario are identified, and an active wireless queue management solution for TCP based on generalised fair cost scheduling, called VFQ, is presented. In the third and last part, the behaviour of the TCP protocol in simple wireless integration scenarios is analysed; a number of undesirable properties are recognised and used to design an improved version of the classical TCP congestion control with a focus on short-term fairness.
Bennani, Fayçal. "IP et la QoS : vers une maîtrise dynamique de bout en bout." Paris, ENST, 2002. http://www.theses.fr/2002ENSTA002.
Monot, Aurélien. "Vérification des contraintes temporelles de bout-en-bout dans le contexte AutoSar." Electronic Thesis or Diss., Université de Lorraine, 2012. http://www.theses.fr/2012LORR0384.
The complexity of electronic embedded systems in cars is continuously growing. Hence, mastering the temporal behavior of such systems is paramount in order to ensure the safety and comfort of the passengers. As a consequence, the verification of end-to-end real-time constraints is a major challenge during the design phase of a car. The AUTOSAR software architecture leads us to address the verification of end-to-end real-time constraints as two independent scheduling problems, respectively for electronic control units and communication buses. First, we introduce an approach which optimizes the utilization of controllers scheduling numerous software components and is compatible with upcoming multicore architectures. We describe fast and efficient algorithms to balance the periodic load over time on multicore controllers, adapting and improving an existing approach used for CAN networks. We provide theoretical results on the efficiency of the algorithms in some specific cases, and we describe how to use them in conjunction with other tasks scheduled on the controller. The remaining part of this research work addresses the problem of obtaining the response-time distributions of the messages sent on a CAN network. First, we present a simulation approach based on modelling clock drifts on the communicating nodes connected to the CAN network. We show that a single simulation using our approach yields results similar to the legacy approach consisting of numerous short simulation runs without clock drifts. Then, we present an analytical approach to compute the response-time distributions of the CAN frames. We introduce several approximation parameters to cope with the very high computational complexity of this approach while limiting the loss of accuracy. Finally, we experimentally compare the simulation and analytical approaches in order to discuss the relative advantages of each.
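Why clock drift lets a single long simulation explore many phasings can be shown with a toy model: two nodes each send one periodic frame on a non-preemptive bus, and drift slides the relative offset of the two streams over time, so the low-priority frame experiences the whole range of response times in one run. Arbitration is simplified to release order (ties won by the lower ID), and all periods, transmission times and drift values are made up:

```python
# Toy CAN response-time simulation under clock drift. Not the thesis's
# simulator: arbitration is simplified to release order with priority ties.
import heapq

def simulate(drift_ppm, horizon=10_000.0):
    period = {0: 10.0, 1: 10.0}        # nominal periods in ms (ID = priority)
    tx = {0: 0.5, 1: 0.5}              # frame transmission times in ms
    rate = {0: 1.0, 1: 1.0 + drift_ppm * 1e-6}   # relative clock speeds
    releases = [(0.0, 0), (0.1, 1)]    # (release time, frame ID)
    heapq.heapify(releases)
    bus_free = 0.0
    response_times = []
    while releases:
        t, fid = heapq.heappop(releases)
        if t > horizon:
            continue                   # stop re-arming past the horizon
        start = max(t, bus_free)       # wait if the bus is busy
        bus_free = start + tx[fid]
        if fid == 1:                   # record the low-priority frame
            response_times.append(bus_free - t)
        heapq.heappush(releases, (t + period[fid] / rate[fid], fid))
    return response_times

rts = simulate(drift_ppm=500)
print(min(rts), max(rts))   # spans from tx alone up to tx plus blocking
```

Without drift, the two streams would keep a fixed offset and every sample of the response time would be identical; with drift, one run sweeps the offsets that the legacy approach needs many short runs to cover.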
Adjetey-Bahun, Kpotissan. "Résilience de bout en bout pour la (re)conception d'un système de transport." Thesis, Troyes, 2016. http://www.theses.fr/2016TROY0012.
This thesis aims to develop a model that assesses and improves the resilience of a mass railway transportation system. A state of the art of resilience quantification approaches in sociotechnical systems reveals some limitations in their adequacy for mass railway transportation systems, and the model developed in this work helps address them. We identify and develop four interrelated subsystems: transportation, power, telecommunication and organization. We also characterize and model these subsystems' interdependencies, which allows us to view the system holistically. We further propose and quantify performance indicators for this system, which are then used to quantify its resilience: the number of passengers that reach their destination station, passenger delay and passenger load. The model is applied to the Paris mass railway transportation system. After modeling perturbations, we also assess the extent to which some crisis management plans are taken into account in the model. A simulator has then been developed, and an approach that aims to implement an end-to-end resilient system is proposed. Operating conditions of the railway transportation system are incorporated through the model into topological indicators of transportation systems found in the literature. This allows us to show the relevance of these operating-condition-dependent indicators relative to the usual topological indicators of the studied network.
Maxim, Cristian. "Étude probabiliste des contraintes de bout en bout dans les systèmes temps réel." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066479/document.
Nowadays, we are surrounded by technologies meant to improve our lives, to ensure our safety, or programmed to realize different functions and to respect a series of constraints. We consider them embedded systems, or often parts of cyber-physical systems. An embedded system is a microprocessor-based system that is built to control a function or a range of functions and is not designed to be programmed by the end user in the way a PC is. The Worst-Case Execution Time (WCET) of a task represents the maximum time it can take to execute. The WCET is obtained after analysis, and most of the time it cannot be accurately determined by exhausting all possible executions. This is why, in industry, measurements are done only on a subset of possible scenarios (those that would generate the highest execution times) and an execution-time bound is estimated by adding a safety margin to the greatest observed time. Among all branches of real-time systems, an important role is played by the Critical Real-Time Embedded Systems (CRTES) domain. CRTESs are widely used in fields like automotive, avionics, railway, health care, etc. The performance of CRTESs is analyzed not only from the point of view of their correctness, but also from the perspective of time. In the avionics industry, such systems have to undergo a strict process of analysis in order to fulfill a series of certification criteria demanded by the certification authorities, such as the European Aviation Safety Agency (EASA) in Europe or the Federal Aviation Administration (FAA) in the United States. The avionics industry in particular, and the real-time domain in general, are known for being conservative and adopting new technologies only when it becomes inevitable. For the avionics industry, this is motivated by the high cost that any change in the existing functional systems would bring.
Any change in the software or hardware has to undergo another certification process, which costs the manufacturer money, time and resources. Despite their conservative tendency, airplane producers cannot remain inactive in the face of constant change in technology and ignore the performance benefits brought by COTS processors, which nowadays are mainly multi-processors. As a curiosity, most of the microprocessors found in airplanes currently flying have less computation power than a modern home PC. Their chipsets are specifically designed for embedded applications characterized by low power consumption, predictability and many I/O peripherals. In the current context, where critical real-time systems are invaded by multi-core platforms, WCET analysis using deterministic approaches becomes difficult, if not impossible. The time constraints of real-time systems need to be verified in the context of certification. This verification, done during the entire development cycle, must take into account increasingly complex architectures, which increase the cost and complexity of current deterministic tools meant to identify all possible timing constraints and dependencies inside the system, at the risk of overlooking extreme cases. An alternative is the probabilistic approach, which is better adapted to deal with these hazards and uncertainties and which allows a precise modeling of the system. The contribution of the thesis is threefold: the conditions necessary for using the theory of extremes on execution-time measurements, the methods developed using this theory for analyzing real-time systems, and experimental results. Regarding the conditions for the use of EVT in the real-time domain, the use of EVT in any domain comes with a series of restrictions on the data being analyzed; in our case, the data being analyzed consists of execution-time measurements.
Maxim, Cristian. "Étude probabiliste des contraintes de bout en bout dans les systèmes temps réel." Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066479.
In our times, we are surrounded by technologies meant to improve our lives, to ensure our security, or programmed to perform various functions while respecting a series of constraints. We refer to them as embedded systems, or often as parts of cyber-physical systems. An embedded system is a microprocessor-based system that is built to control a function or a range of functions and is not designed to be programmed by the end user in the way a PC is. The Worst Case Execution Time (WCET) of a task represents the maximum time it can take to execute. The WCET is obtained through analysis, and most of the time it cannot be determined exactly by exhausting all possible executions. This is why, in industry, measurements are done only on a subset of possible scenarios (those expected to generate the highest execution times) and an execution-time bound is estimated by adding a safety margin to the greatest observed time. Among all branches of real-time systems, an important role is played by the Critical Real-Time Embedded Systems (CRTES) domain. CRTESs are widely used in fields like automotive, avionics, railway, health care, etc. The performance of CRTESs is analyzed not only from the point of view of their correctness, but also from the perspective of time. In the avionics industry, such systems have to undergo a strict process of analysis in order to fulfill a series of certification criteria demanded by the certification authorities, namely the European Aviation Safety Agency (EASA) in Europe or the Federal Aviation Administration (FAA) in the United States. The avionics industry in particular, and the real-time domain in general, are known for being conservative and adopting new technologies only when it becomes inevitable. For the avionics industry this is motivated by the high cost that any change to existing functional systems would bring. 
Any change in the software or hardware has to undergo another certification process, which costs the manufacturer money, time and resources. Despite their conservative tendency, airplane producers cannot remain indifferent to constant technological change, nor ignore the performance benefits brought by COTS processors, which nowadays are mainly multi-core. Curiously, most of the microprocessors found in airplanes currently flying around the world have less computing power than a modern home PC. Their chipsets are specifically designed for embedded applications characterized by low power consumption, predictability and many I/O peripherals. In the current context, where critical real-time systems are invaded by multi-core platforms, WCET analysis using deterministic approaches becomes difficult, if not impossible. The time constraints of real-time systems need to be verified in the context of certification. This verification, done during the entire development cycle, must take into account increasingly complex architectures. These architectures increase the cost and complexity, for current deterministic tools, of identifying all possible time constraints and dependencies that can occur inside the system, at the risk of overlooking extreme cases. An alternative is the probabilistic approach, which is better suited to dealing with such hazards and uncertainty, and which allows precise modeling of the system. 2. Contributions. The contribution of this thesis is threefold: the conditions necessary for applying the theory of extremes to execution-time measurements, the methods developed using the theory of extremes to analyze real-time systems, and experimental results. 2.1. Conditions for use of EVT in the real-time domain. In this chapter we establish the environment in which our work is done. The use of EVT in any domain comes with a series of restrictions on the data being analyzed. 
In our case, the data being analyzed consists of execution-time measurements.
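The measurement-based use of EVT sketched in this abstract can be illustrated as follows. This is our own minimal sketch, not the thesis' toolchain: the gamma-distributed samples stand in for real execution-time measurements, and the block size and exceedance probability are hypothetical choices. Block maxima are fitted with a Generalized Extreme Value distribution, and a probabilistic WCET (pWCET) is read off its far tail.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Hypothetical execution-time measurements (microseconds); real data would
# come from instrumented runs of the task under analysis.
samples = rng.gamma(shape=9.0, scale=12.0, size=10_000)

# Block-maxima approach: split the measurements into blocks, keep each maximum.
block = 50
maxima = samples[: len(samples) // block * block].reshape(-1, block).max(axis=1)

# Fit a Generalized Extreme Value distribution to the block maxima.
c, loc, scale = genextreme.fit(maxima)

# pWCET estimate: the execution time exceeded with probability 1e-9 per block.
pwcet = genextreme.isf(1e-9, c, loc, scale)
print(f"observed max: {samples.max():.1f} us, pWCET(1e-9): {pwcet:.1f} us")
```

The conditions the thesis discusses (independence, identical distribution of the measurements) govern whether such a fit is trustworthy; the code only shows the mechanics.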
Hehn, Olivier. "Analyse expérimentale et simulation thermomécanique du soudage bout à bout de tubes de polyéthylène." Phd thesis, École Nationale Supérieure des Mines de Paris, 2006. http://pastel.archives-ouvertes.fr/pastel-00002138.
Hehn, Olivier. "Analyse expérimentale et simulation thermomécanique du soudage bout a bout de tubes de polyéthylène." Paris, ENMP, 2006. http://www.theses.fr/2006ENMP1427.
This work concerns the understanding and numerical simulation of the thermomechanical phenomena governing the formation of welds during the butt fusion welding of polyethylene tubes. This process consists in melting the ends of the tubes and pressing them together, the weld forming once the material has cooled. Butt fusion welding, which seems simple a priori, involves several phenomena interacting with one another; there are strong couplings between thermal effects, mechanics and phase changes. This manuscript is composed of three principal parts. First, a complete analysis of the process was carried out: the thermal phenomena occurring during the process, which are the main driver of welding, and the displacements of matter, which are responsible for the formation of the weld bead, were studied. The importance of thermal expansion, in particular during heating of the material, is highlighted, along with the complex kinetics of bead formation and the associated thermal phenomena (formation of a vertical plane in the bead during heating, importance of radiation and convection during tube heating, …). In a second part, we characterized the material with the aim of making the numerical simulation of the process as realistic as possible. The melting and crystallization kinetics of polyethylene were determined using the Avrami and Ozawa laws. The behaviour of polyethylene in the liquid state, in the solid state and during the transition was also characterized. Moreover, measurements of thermal expansion and of phase-change enthalpies were made. Finally, all the stages of the process were simulated using the Forge2 software, which is well suited to solving the thermal and mechanical problems responsible for weld formation. The laws and parameters obtained experimentally were integrated into the software. The results obtained are very satisfactory, in terms of the thermal behaviour as well as the displacement of matter and the shape of the beads. 
We now have a better understanding of butt fusion welding and a first operational tool for simulating the process.
Hamze, Mohamad. "Autonomie, sécurité et QoS de bout en bout dans un environnement de Cloud Computing." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS033/document.
Today, Cloud Networking is one of the recent research areas within the Cloud Computing research communities. The main challenges of Cloud Networking concern the guarantee of Quality of Service (QoS) and security, as well as their management in conformance with a corresponding Service Level Agreement (SLA). In this thesis, we propose a framework for resource allocation according to an end-to-end SLA established between a Cloud Service User (CSU) and several Cloud Service Providers (CSPs) within a Cloud Networking environment (Inter-Cloud Broker and Federation architectures). We focus on NaaS and IaaS Cloud services. We then propose the self-establishment of several kinds of SLAs and the self-management of the corresponding Cloud resources in conformance with these SLAs, using specific autonomic cloud managers. In addition, we extend the proposed architectures and the corresponding SLAs in order to deliver a service level that takes security guarantees into account. Moreover, we allow autonomic cloud managers to expand the self-management objectives to security functions (self-protection), while studying the impact of the proposed security on the QoS guarantee. Finally, our proposed architecture is validated by different simulation scenarios. Within these simulations, we consider videoconferencing and intensive computing applications in order to provide them with QoS and security guarantees in a Cloud self-management environment. The obtained results show that our contributions enable good performance for these applications. In particular, we observe that the Broker architecture is the most economical while still ensuring the QoS and security requirements. In addition, we observe that Cloud self-management reduces violations and penalties, and limits the impact of security on the QoS guarantee.
Roset, Hervé. "Contribution à l'étude de la qualité de service de bout en bout des réseaux." Paris 6, 2004. http://www.theses.fr/2004PA066594.
Zhang, Lei. "Architecture et mecanismes de bout en bout pour les communications mobiles et sans fil dans l'internet." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2009. http://tel.archives-ouvertes.fr/tel-00435868.
Alaoui, Soulimani Houda. "Pilotage dynamique de la qualité de service de bout en bout pour une session "user-centric"." Phd thesis, Télécom ParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00834199.
Alaoui, Soulimani Houda. "Pilotage dynamique de la qualité de service de bout en bout pour une session "user-centric"." Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0021.
Nowadays, the services market has become increasingly competitive. Customer demand for service offerings in line with their uses and preferences has led providers to offer new services in order to meet this need, stand out from competitors and attract new customers. With the success of network and service convergence (NGN/NGS), new services have emerged. A mobile user wants to access his services anywhere, anytime and on any type of terminal. Thus, providing customized services to clients while ensuring service continuity and end-to-end quality of service in a heterogeneous and mobile environment has become a challenge for mobile operators and service providers seeking to improve return on investment (ROI) and time to market (TTM). Our thinking about the provision of customized services according to the functional and non-functional (QoS) needs of users led us to identify the needs of the new NGN/NGS context, defined by the intersection of three elements: "user-centric", mobility and QoS. How can the end-to-end QoS of a single "user-centric" session be controlled dynamically? How can "service delivery" be ensured in a context of mobility and ubiquity? These new needs led us to propose solutions through three main contributions that take into account both the user and the operator vision. Our first contribution concerns the organizational model. We proposed a new organization offering maximum flexibility, adaptability and self-management, which allows QoS to be controlled at each level of the architecture (equipment, network and service). In this organization, we defined the actors and the role each one plays in the decision-making process during the user session, in order to maintain end-to-end QoS in a totally heterogeneous and mobile environment. Our second contribution addresses the autonomic service component. 
Given the complexity of service personalization in a heterogeneous and mobile context and the need to satisfy end-to-end QoS, both service and network resources must be taken into account. A high degree of self-sufficiency, self-management and automation is therefore required in the service resource to improve service delivery. We thus proposed an autonomic service component based on an integrated QoS agent, which controls and manages itself in order to dynamically adapt its resources in response to changing situations during the user's session. Our third proposal covers the protocol model. A personalized service session requires more flexible interactions at the service level in order to obtain a single session with service continuity. We proposed a signalling protocol, SIP+, that allows the QoS of personalized services to be negotiated at session initialization and renegotiated during use, so as to maintain the service with the required QoS throughout a single session. More concretely, we presented our experiments through a scenario and a demonstration platform that allowed us to test the feasibility and performance of our contributions. The contributions and perspectives of this thesis are stated in the conclusion.
Chassot, Christophe. "Contribution aux protocoles et aux architectures de communication de bout en bout pour la QdS dans l'internet." Habilitation à diriger des recherches, Institut National Polytechnique de Toulouse - INPT, 2005. http://tel.archives-ouvertes.fr/tel-00012152.
Meilard, Nicolas. "Etoile Laser Polychromatique pour l’Optique Adaptative : modélisation de bout-en-bout, concepts et étude des systèmes optiques." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10107/document.
The polychromatic laser guide star (PLGS) provides adaptive optics (AO) with a phase reference to correct corrugated wavefronts, including tip-tilt. It relies on the chromatic dispersion of the light returned from the two-photon resonant excitation of sodium in the mesosphere. Diffraction-limited imaging in the visible then becomes possible. This is mandatory for 80% of the prominent astrophysical cases for the E-ELT. A PLGS requires standard deviations of position measurements 26 times smaller than in classical cases. I have therefore studied the interferometric laser projector. I designed a polychromatic base corrector to equalize the fringe periods, a phase corrector to compensate for atmospheric refraction, and the optics for fringe measurements and for keeping the PLGS apart from the science target images. The required accuracy led me to study how closely the maximum-likelihood algorithm approaches the Cramér-Rao bound. I wrote an end-to-end code for numerical simulations of the PLGS, from the lasers to the Strehl measurement. For the VLT, I obtain Strehl ratios larger than 40% at 500 nm if one uses an AO providing a 50% instantaneous Strehl (tip-tilt Strehl: 80%). An analytical model validates these results. Finally, I address the application of the PLGS to deep-space communications and to space debris clearing.
Meilard, Nicolas. "Etoile Laser Polychromatique pour l'Optique Adaptative : modélisation de bout-en-bout, concepts et étude des systèmes optiques." Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00978730.
Diet, Ambre. "Une approche de bout en bout du tolérancement statistique sous contraintes industrielles : contribution au jumeau virtuel industriel." Thesis, Toulouse 3, 2021. http://www.theses.fr/2021TOU30014.
In the manufacturing process of a product, various assembly steps are necessary. Several types of requirements have to be met at each level, involving considerations about dimensional uncertainties on the parts to be assembled. Tolerancing is the activity in charge of managing these uncertainties, and it takes place both in the product development phase and in the series production phase. In the context of the aeronautics industry, in particular with regard to the tolerancing of aerostructures, specific constraints have to be taken into account in the development of adequate methods and tools. Prior to production, one of the main issues of tolerancing amounts to allocating tolerance limits suited to a given acceptable scrap rate. The aim is to allow the actors concerned with tolerance intervals to agree on a consistent and robust tolerance value. A statistical methodology based on a Chernoff bound approach applied to a sum of uniform distributions is proposed. In the production phase, the availability of measurement data makes it possible to refine the statistical tolerancing approach. The linear model often considered can be corrected to serve new approaches. A methodology to manage acceptance criteria on tolerance values is proposed, basing the decision support on risk concepts pertinently defined for industrial actors. Within the framework of revising the tolerance sharing in an assembly, an optimization problem is formulated with appropriate industrial costs in order to propose the optimal tolerance re-sharing in a stack chain. Finally, the proposed methodologies are implemented in tools allowing industrial processing and end-to-end management of tolerances from elementary parts to the final product assembly, thus contributing to the elaboration of the product virtual twin.
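The kind of bound mentioned in this abstract can be illustrated with a short sketch. This is our own example, not the thesis' methodology: the tolerance values, the assembly gap and the optimizer bounds are hypothetical. For a stack of independent deviations, each uniform on [-t_i, t_i], the Chernoff bound on the scrap-side tail probability is P(S > g) <= min over lambda > 0 of exp(-lambda * g) * prod_i sinh(lambda * t_i) / (lambda * t_i), since sinh(lt)/(lt) is the moment generating function of a centered uniform.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_tail(tols, gap):
    """Chernoff upper bound on P(sum of independent uniform(-t_i, t_i) > gap)."""
    tols = np.asarray(tols, dtype=float)

    def log_bound(lam):
        # log of the bound: sum of log-MGFs of the uniforms minus lam * gap.
        return np.sum(np.log(np.sinh(lam * tols) / (lam * tols))) - lam * gap

    # Minimize over lambda on a bounded interval (hypothetical search range).
    res = minimize_scalar(log_bound, bounds=(1e-6, 200.0), method="bounded")
    return float(np.exp(res.fun))

# Hypothetical stack-up: five parts toleranced at +/-0.1 mm, assembly gap 0.4 mm.
bound = chernoff_tail([0.1] * 5, 0.4)
print(f"Chernoff bound on the scrap-side tail: {bound:.2e}")
```

Allocating tolerances then amounts to choosing the t_i so that such a bound stays below the acceptable scrap rate.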
Marie, Pierrick. "Gestion de bout en bout de la qualité de contexte pour l'internet des objets : le cadriciel QoCIM." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30186/document.
The objective of the ANR INCOME project is to provide a framework for the development and deployment of a context manager. A context manager is composed of software entities which acquire, process, disseminate or deliver context data. These software entities have to be built and deployed over interconnected, heterogeneous ICT infrastructures, which include sensor networks, ambient systems, mobile devices, cloud servers and, more generally, the Internet of Things (IoT). Within this project, the research work presented in this thesis concerns more specifically the end-to-end management of Quality of Context (QoC) in the new generation of context managers that have to be deployed at large and multiple scales over the IoT. The quality of context data refers to criteria such as accuracy, freshness, completeness or granularity. QoC management covers all the operations that, throughout the life cycle of context data, manage its qualification, and that also influence, according to this quality, its dissemination and delivery to context-aware applications. Current QoC management solutions are dedicated to particular ambient environments or to specific applications. They are limited in terms of openness, genericity and computability, properties required by highly heterogeneous and dynamic IoT-based environments, in which producers and consumers of context data are no longer static and tightly coupled. Our contribution relies on QoCIM (QoC Information Model), a meta-model dedicated to defining, in a uniform and open way, any atomic or composite QoC criterion. Based on QoCIM, several QoC management operations have been identified and specified. These operations make it possible to associate QoC criteria, in the form of metadata, with context information; to characterize the metrics and units used for their valuation; to infer QoC criteria at a higher level of abstraction; or to express filtering conditions on such criteria or their values. 
A software tool for editing QoCIM models and a Java API are provided to developers to easily implement the management of any QoC criterion for their software entities that acquire, process, deliver or propagate context data, or for their context-sensitive applications. The use of this framework was experimented with, both at design time and at run time, on a scenario related to urban pollution. Benchmarking was also carried out and showed that the overhead of taking QoC into account when routing context information was acceptable. Finally, a solution for self-(re)configuring QoC management operations was also designed and prototyped.
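As an illustration of the idea of attaching QoC criteria as metadata to context data and filtering on them, here is a minimal sketch. The class and criterion names are ours and do not reflect the QoCIM Java API; the pollution reading and thresholds are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class QoCCriterion:
    name: str      # e.g. "freshness", "accuracy"
    value: float
    unit: str

@dataclass
class ContextData:
    observable: str
    value: float
    qoc: dict = field(default_factory=dict)

    def tag(self, criterion: QoCCriterion):
        # Attach a QoC criterion as metadata to this piece of context data.
        self.qoc[criterion.name] = criterion

reading = ContextData("PM10", 38.0)
reading.tag(QoCCriterion("freshness", 2.5, "s"))
reading.tag(QoCCriterion("accuracy", 0.9, "ratio"))

def accept(data, max_age_s=5.0, min_accuracy=0.8):
    # Consumer-side filter: deliver only data that is fresh and accurate enough.
    f = data.qoc.get("freshness")
    a = data.qoc.get("accuracy")
    return (f is not None and a is not None
            and f.value <= max_age_s and a.value >= min_accuracy)

print(accept(reading))  # → True
```

Filtering conditions like `accept` are the consumer-side counterpart of the QoC-based routing that the benchmarking above measured.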
Villin, Olivier. "Gestion de la qualité de service de bout en bout dans les systèmes répartis : approche gestion des ressources." Evry-Val d'Essonne, 2002. http://www.theses.fr/2002EVRY0003.
Desot, Thierry. "Apport des modèles neuronaux de bout-en-bout pour la compréhension automatique de la parole dans l'habitat intelligent." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM069.
Smart speakers offer the possibility of interacting with smart home systems, and make it possible to issue a range of requests about various subjects. They represent the first ambient voice interfaces frequently available in home environments, yet very often they are only capable of inferring voice commands with a simple syntax from short utterances. In smart homes that promote home care for senior adults, such interfaces support the residents in everyday situations by improving their quality of life, and also provide assistance in situations of distress. The design of these smart homes mainly focuses on the safety and comfort of their inhabitants. As a result, research projects frequently concentrate on human activity detection, leading to a lack of attention to the communicative aspects of smart home design. Consequently, speech corpora specific to the home automation field are insufficient, in particular for languages other than English. The availability of such corpora, however, is crucial for developing interactive communication systems between the smart home and its inhabitants, and could also contribute to the development of a generation of smart speakers capable of extracting more complex voice commands. Part of our work therefore consisted in developing a corpus generator producing home-automation-specific voice commands, automatically annotated with intent and concept labels. The extraction of intents and concepts from these commands by a Spoken Language Understanding (SLU) system is necessary to provide the decision-making module with the information needed for their execution. In order to react to speech, the natural language understanding (NLU) module is typically preceded by an automatic speech recognition (ASR) module that automatically converts speech into transcriptions. As several studies have shown, the interaction between ASR and NLU in a sequential SLU approach accumulates errors. 
Therefore, one of the main motivations of our work is the development of an end-to-end SLU module extracting concepts and intents directly from speech. To achieve this goal, we first developed a sequential SLU approach as our baseline, in which a classic ASR method generates transcriptions that are passed to the NLU module, before continuing with the development of an end-to-end SLU module. These two SLU systems were evaluated on a corpus recorded in the home automation domain. We investigate whether the prosodic information that the end-to-end SLU system has access to contributes to SLU performance. We also compare the robustness of the two approaches when facing speech with more semantic and syntactic variation. The context of this thesis is the ANR VocADom project.
Barros, de Sales André. "Gestion de bout en bout de la qualité des services distribués : surveillance et sélection par une approche Modelware." Toulouse 3, 2004. http://www.theses.fr/2004TOU30075.
Our work contributes to offering a selection of published services in distributed systems based on monitored quality-management information. Using an informational approach, we modeled the influence of systems and networks so as to automate their management (identification of dependencies, automatic deduction of state changes, etc.). Our informational model-driven architecture (Modelware) led us to specify a design method for management applications that provides a twofold independence: from the specific managed domains and from the management platforms. The integrated and contextual monitoring of services was linked to an "advanced service selection", thus offering real-time instrumentation of Quality of Service (QoS) parameters. Our solution was tested in a CORBA environment, illustrating the specification steps of our Modelware approach towards a specific environment to be managed. The resulting QoS information was added to the services published in the CORBA Trader service.
Shanahan, Brendan. "Modelling of magnetic null points using BOUT++." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/15359/.
Asfaha, Samuel. "Epithelial dysfunction following a bout of colitis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq64795.pdf.
Mayer, Rebecca F. "Speaking of, Talkin 'bout, Riffing on Tap." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4173.
Kaced, Ahmed Réda. "Problèmes de sécurité posés par les proxies d'adaptation multimédia : proposition de solutions pour une sécurisation de bout-en-bout." Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005883.
Chen, Jinhui. "Sur des systèmes MIMO avec retour limité: distorsion bout-à-bout, retour analogique du canal, et multiplexage par couche." Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005644.
OUISSE, ADELINA. "Simulation de bout-en-bout d'un lidar vent doppler a 10 et 2 microns - etude de systemes multi-recepteurs." Rennes 1, 2000. http://www.theses.fr/2000REN10076.
Mbarek, Nader. "Autonomie dans les réseaux : négociation du niveau de service de bout en bout dans un framework de gestion autonome." Bordeaux 1, 2007. http://www.theses.fr/2007BOR13453.
Kaced, Ahmed Reda. "Problèmes de sécurité posés par les proxies d'adaptation multimédia : proposition de solutions pour une sécurisation de bout-en-bout." Paris, ENST, 2009. https://pastel.hal.science/pastel-00005883.
The growing number of exotic networks, hardware and software, together with the increased demand for multimedia services, creates a tremendous need for adaptation techniques that aim to deliver understandable content to heterogeneous clients. The latter are generally characterized by particular constraints and limitations which make the use of rich content difficult, in addition to functionalities unsupported by the environment. An adaptation method transforms the content from one state to another in order to meet the constraints of the client context. In some situations, several adaptations are applied to the original content to reach that goal: at the system level (e.g., choice of streams), at the level of the organization of the data (e.g., choice of layer in scalable coding), or at the level of modifying the streams themselves (transcoding). Offering adaptable content to users has shown the need to define techniques and practices for securing exchanges over the networks. Insofar as the multimedia adaptation of documents requires authorizing the modification of these documents between the servers and the end users, it is necessary to study the conditions required to perform these modifications in a protected way. We present a multimedia content delivery system that preserves the end-to-end authenticity of the original content while allowing content adaptation by intermediaries. These are the general aims of this thesis.
Truck, Isis. "Calculs à l'aide de mots : vers un emploi de termes linguistiques de bout en bout dans la chaîne du raisonnement." Habilitation à diriger des recherches, Université Paris VIII Vincennes-Saint Denis, 2011. http://tel.archives-ouvertes.fr/tel-00639737.
Braud, Tristan. "TCP sur lien asymétrique : analyse des phénomènes et étude de solutions de faible empreinte mémoire ou de bout-en-bout." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM014/document.
Using TCP over asymmetric links may lead to unexpected and significant performance drops, severely degrading user experience. These performance drops can take various forms, among them high latency at the beginning of a connection, under-utilization of link capacities, or even excessive delays over the whole connection. Various approaches exist to prevent these effects, either end-to-end through changes in the TCP/IP stack, or in the network core with a collection of scheduling algorithms. The first goal of this thesis is to explore whether and how an end-to-end policy (i.e., acting where CPU and memory resources are most abundant) can achieve results similar to buffering policies in the core of the network. Secondly, we provide an in-depth analysis of the root causes of the performance drops and evaluate existing algorithms. Finally, new solutions, both end-to-end and in the core of the network, are proposed and tested in real-life networks.
Decker, Emily Sue. "Affective responses to physical activity in obese women a high-intensity interval bout vs. a longer, isocaloric moderate-intensity bout /." [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1473196.
Villa, Monteiro Daniel. "Stratégies d'alliances dans la satisfaction bout en bout de la QoS au sein d'un réseau inter-domaines hiérarchique et égoïste." Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0007.
This thesis focuses on satisfying end-to-end QoS in a selfish, hierarchical inter-domain network. Routing protocols offer only a single route between two domains (the direct route), whose composition is mainly influenced by economic interests. Our main contribution consists in proposing a new model based on the concept of alliance. In this model, an alliance is a set of economically independent domains that decide to share part of their network information and a particular routing service (stop service). The goal of an alliance is to improve the servicing of client requests among member domains by using alternative routes when they respect the QoS constraints better than the direct routes. We first establish the mechanisms for constructing these alternative routes and for obtaining the necessary estimates. Subsequently, our work focuses on the characteristics that define an effective alliance. We then propose different possible alliance compositions, based on the local characteristics of the domains as well as on their topological position. To validate our model and study the effectiveness of alliances, we conducted numerous simulations on realistic, hierarchical topologies. We find that the effectiveness of an alliance depends not only on its size and composition, but also on the nature and difficulty of the QoS constraints to be satisfied.