Dissertations / Theses on the topic 'Analyse de pire cas'
Consult the top 50 dissertations / theses for your research on the topic 'Analyse de pire cas.'
Adnan, Muhammad. "Analyse pire cas exact du réseau AFDX." Thesis, Toulouse, INPT, 2013. http://www.theses.fr/2013INPT0146/document.
The main objective of this thesis is to provide methodologies for finding the exact worst-case end-to-end communication delays of the AFDX network. Presently, only pessimistic upper bounds of these delays can be calculated using the Network Calculus and Trajectory approaches. To achieve this goal, different existing tools and approaches have been analyzed in the context of this thesis. Based on this analysis, it was deemed necessary to develop new approaches and algorithms. First, model checking with existing, well-established real-time model-checking tools is explored, using timed automata. Then, an exhaustive simulation technique is used with newly developed algorithms and their software implementation in order to find the exact worst-case communication delays of the AFDX network. All this research work has been applied to a real-life implementation of the AFDX network, allowing us to validate it on an industrial-scale AFDX configuration such as the one used on the Airbus A380 aircraft.
Hardy, Damien. "Analyse pire cas pour processeur multi-cœurs disposant de caches partagés." Phd thesis, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00557058.
Hardy, Damien. "Analyse pire cas pour processeur multi-coeurs disposant de caches partagés." Rennes 1, 2010. http://www.theses.fr/2010REN1S143.
Hard real-time systems are subject to timing constraints and failure to respect them can cause economic, ecological or human disasters. The validation process which guarantees the safety of such software, by ensuring the respect of these constraints in all situations including the worst case, is based on the knowledge of the worst case execution time of each task. However, determining the worst case execution time is a difficult problem for modern architectures because of complex hardware mechanisms that could cause significant execution time variability. This document focuses on the analysis of the worst case timing behavior of cache hierarchies, to determine their contribution to the worst case execution time. Several approaches are proposed to predict and improve the worst case execution time of tasks running on multicore processors with a cache hierarchy in which some cache levels are shared between cores.
Bauer, Henri. "Analyse pire cas de flux hétérogènes dans un réseau embarqué avion." Thesis, Toulouse, INPT, 2011. http://www.theses.fr/2011INPT0008/document.
The certification process for avionics networks requires guarantees on data transmission delays. However, calculating the worst-case delay can be complex in the case of industrial AFDX (Avionics Full Duplex Switched Ethernet) networks. Tools such as Network Calculus provide a pessimistic upper bound of this worst-case delay. The communication needs of modern commercial aircraft are expanding and a growing number of flows with various constraints and characteristics must share already existing resources. Currently deployed AFDX networks do not differentiate multiple classes of traffic: messages are processed in their arrival order in the output ports of the switches (FIFO servicing policy). The purpose of this thesis is to show that it is possible to provide upper bounds on end-to-end transmission delays in networks that implement more advanced servicing policies, based on static priorities (Priority Queuing) or on fairness (Fair Queuing). We show how the trajectory approach, based on scheduling theory in asynchronous distributed systems, can be applied to current and future AFDX networks (supporting advanced servicing policies with flow differentiation capabilities). We compare the performance of this approach with the reference tools whenever possible and we study the pessimism of the computed upper bounds.
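For readers unfamiliar with the Network Calculus bound that such tools compute, here is a minimal worked example (a textbook formula with assumed notation, not a result from the cited thesis): for a flow constrained by a leaky-bucket arrival curve crossing a node that offers a rate-latency service curve, the worst-case delay is bounded by the horizontal deviation between the two curves.

```latex
% Classical Network Calculus delay bound (illustrative sketch, assumed notation):
% arrival curve  \alpha(t) = \sigma + \rho t        (leaky bucket)
% service curve  \beta(t)  = R \, (t - T)^{+}       (rate-latency)
D_{\max} \;\le\; h(\alpha, \beta)
        \;=\; \sup_{t \ge 0} \inf \{\, d \ge 0 \;:\; \alpha(t) \le \beta(t + d) \,\}
        \;=\; T + \frac{\sigma}{R}, \qquad \text{provided } \rho \le R .
```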
Lesage, Benjamin. "Architecture multi-coeurs et temps d'exécution au pire cas." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00870971.
Temmerman, Michel. "Analyse et synthèse du tolérancement "au pire des cas" et statistique dans l'environnement CFAO." Châtenay-Malabry, Ecole centrale de Paris, 2001. http://www.theses.fr/2001ECAP0719.
Colin, Antoine. "Estimation de temps d'éxécution au pire cas par analyse statique et application aux systèmes d'exploitation temps réel." Rennes 1, 2001. http://www.theses.fr/2001REN10118.
Bourgade, Roman. "Analyse du temps d'exécution pire-cas de tâches temps-réel exécutées sur une architecture multi-cœurs." Phd thesis, Université Paul Sabatier - Toulouse III, 2012. http://tel.archives-ouvertes.fr/tel-00746073.
Bourgade, Roman. "Analyse du temps d'exécution pire-cas de tâches temps-réel exécutées sur une architecture multi-coeurs." Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1740/.
Software failures in hard real-time systems may have hazardous effects (industrial disasters, endangering of human lives). The verification of timing constraints in a hard real-time system depends on the knowledge of the worst-case execution times (WCET) of the tasks that make up the embedded program. Using multicore processors is a means to improve embedded system performance. However, determining worst-case execution time estimates on these architectures is made difficult by the sharing of some resources among cores, especially the interconnection bus that enables accesses to the shared memory. This document proposes a two-level arbitration scheme that makes it possible to improve the performance of executed tasks while complying with timing constraints. The described methods assign an optimal bus-access priority level to each task. They also make it possible to find an optimal allocation of tasks to cores when there are more tasks to execute than available cores. Experimental results show a meaningful drop in worst-case execution time estimates and processor utilization.
Mangoua, sofack William. "Amélioration des délais de traversée pire cas des réseaux embarqués à l’aide du calcul réseau." Thesis, Toulouse, ISAE, 2014. http://www.theses.fr/2014ESAE0024/document.
The thesis addresses the performance analysis of embedded real-time networks using network calculus. Network calculus is a theory based on min-plus algebra. We use network calculus to assess the quality of service of a residual flow in two contexts: aggregation with a non-preemptive priority policy, and the DRR policy. The main contribution concerns the evaluation of the residual service given to each flow. We also show how to handle DRR and non-preemptive priority policies hierarchically.
Touzeau, Valentin. "Analyse statique de caches LRU : complexité, analyse optimale, et applications au calcul de pire temps d'exécution et à la sécurité." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM041.
The certification of real-time safety-critical programs requires bounding their execution time. Due to the high impact of cache memories on memory access latency, modern Worst-Case Execution Time estimation tools include a cache analysis. The aim of this analysis is to statically predict if memory accesses result in a cache hit or a cache miss. This problem is undecidable in general, thus usual cache analyses perform some abstractions that lead to precision loss. One common assumption made to remove the source of undecidability is that all execution paths in the program are feasible. Making this hypothesis is reasonable because the safety of the analysis is preserved when adding spurious paths to the program model. However, classifying memory accesses as cache hits or misses is still hard in practice under this assumption, and efficient cache analyses usually involve additional approximations, again leading to precision loss. This thesis investigates the possibility of performing an optimally precise cache analysis under the common assumption that all execution paths in the program are feasible. We formally define the problems of classifying accesses as hits and misses, and prove that they are NP-hard or PSPACE-hard for common replacement policies (LRU, FIFO, NRU and PLRU). However, if these theoretical complexity results legitimate the use of additional abstraction, they do not preclude the existence of algorithms efficient in practice on industrial workloads. Because of the abstractions performed for efficiency reasons, cache analyses can usually classify accesses as Unknown in addition to Always-Hit (Must analysis) or Always-Miss (May analysis). Accesses classified as Unknown can lead to both a hit or a miss, depending on the program execution path followed. However, it can also be that they belong to one of the Always-Hit or Always-Miss categories and that the cache analysis failed to classify them correctly because of a coarse approximation. We thus designed a new analysis for LRU instruction caches that is able to soundly classify some accesses into a new category, called Definitely Unknown, that represents accesses that can lead to both a hit or a miss. For those accesses, one knows for sure that their classification does not result from a coarse approximation but is a consequence of the program structure and cache configuration. By doing so, we also reduce the set of accesses that are candidates for a refined classification using more powerful and more costly analyses. Our main contribution is an analysis that can perform an optimally precise analysis of LRU instruction caches. We use a method called block focusing that allows an analysis to scale by only analyzing one cache block at a time. We thus take advantage of the low number of candidates for refinement left by our Definitely Unknown analysis. This analysis produces an optimal classification of memory accesses at a reasonable cost (a few times the cost of the usual May and Must analyses). We evaluate the impact of our precise cache analysis on the pipeline analysis. Indeed, when the cache analysis is not able to classify an access as Always-Hit or Always-Miss, the pipeline analysis must consider both cases. By providing a more precise memory access classification, we thus reduce the state space explored by the pipeline analysis and hence the WCET analysis time. Aside from this application of precise cache analysis to WCET estimation, we investigate the possibility of using the Definitely Unknown analysis in the domain of security. Indeed, caches can be used as a side channel to extract some sensitive data from a program execution, and we propose a variation of our Definitely Unknown analysis to help a developer find the source of some information leakage.
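To make the Must analysis mentioned above more concrete, the following sketch shows the classical age-based abstract domain for a fully associative LRU cache. It is an illustrative, assumption-laden example (the associativity, helper names and tiny usage trace are invented), not code from the thesis.

```python
# Minimal sketch of the classical Must analysis for a fully associative LRU cache
# (illustrative only; not the analysis implemented in the cited thesis).

ASSOC = 4  # cache associativity (assumed)

def update(state, block):
    """Abstract cache update: 'state' maps block -> upper bound on its LRU age."""
    old_age = state.get(block, ASSOC)          # unknown blocks are treated as age >= ASSOC
    new_state = {}
    for b, age in state.items():
        if b != block:
            # Blocks younger than the accessed block age by one; others keep their bound.
            new_age = age + 1 if age < old_age else age
            if new_age < ASSOC:                # bounds reaching the associativity are dropped
                new_state[b] = new_age
    new_state[block] = 0                       # accessed block becomes most recently used
    return new_state

def join(s1, s2):
    """Must join: keep blocks present in both states, with the maximum (worst) age."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

def classify(state, block):
    """A block with a bounded age is guaranteed to hit (Always-Hit)."""
    return "Always-Hit" if block in state else "Unknown"

# Tiny usage example
s = update(update({}, "a"), "b")
print(classify(s, "a"), classify(s, "c"))      # Always-Hit Unknown
```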
Ruiz, Jordy. "Détermination de propriétés de flot de données pour améliorer les estimations de temps d'exécution pire-cas." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30285/document.
The search for an upper bound of the execution time of a program is an essential part of the verification of real-time critical systems. The execution times of the programs of such systems generally vary a lot, and it is difficult, or impossible, to predict the range of the possible times. Instead, it is better to look for an approximation of the Worst-Case Execution Time (WCET). A crucial requirement of this estimate is that it must be safe, that is, it must be guaranteed to be above the real WCET. Because we are looking to prove that the system in question terminates reasonably quickly, an overapproximation is the only acceptable form of approximation. The guarantee of such a safety property could not sensibly be given without static analysis, as a result based on a battery of tests could not be safe without an exhaustive handling of test cases. Furthermore, in the absence of a certified compiler (and technique for the safe transfer of properties to the binaries), the extraction of properties must be done directly on binary code to warrant their soundness. However, this approximation comes with a cost: excessive pessimism, that is, a large gap between the estimated WCET and the real WCET, would lead to superfluous extra hardware costs in order for the system to respect the imposed timing requirements. It is therefore important to improve the precision of the WCET by reducing this gap while maintaining the safety property, so that it is low enough not to lead to immoderate costs. A major cause of overestimation is the inclusion of semantically impossible paths, called infeasible paths, in the WCET computation. This is due to the use of the Implicit Path Enumeration Technique (IPET), which works on a superset of the possible execution paths. When the Worst-Case Execution Path (WCEP), corresponding to the estimated WCET, is infeasible, the precision of that estimation is negatively affected. In order to deal with this loss of precision, this thesis proposes an infeasible-path detection technique, enabling the improvement of the precision of static analyses (namely for WCET estimation) by notifying them of the infeasibility of some paths of the program. This information is then passed as data flow properties, formatted in the FFX portable annotation language, allowing the results of our infeasible-path analysis to be communicated to other analyses.
Li, Hanbing. "Extraction and traceability of annotations for WCET estimation." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S040/document.
Real-time systems have become ubiquitous and play an important role in our everyday life. For hard real-time systems, computing correct results is not the only requirement. In addition, the worst-case execution times (WCET) must be known in order to guarantee that the required timing constraints are met. For tight WCET estimation, annotations are required. Annotations are usually added at source code level, but WCET analysis is performed at binary code level. Compiler optimization sits between these two levels and has an effect on the structure of the code and on the annotations. We propose a transformation framework for each optimization to trace the annotation information from source code level to binary code level. The framework can transform the annotations without loss of flow information. We choose LLVM as the compiler to implement our framework, and we use the Mälardalen, TSVC and gcc-loops benchmarks to demonstrate the impact of our framework on compiler optimizations and annotation transformation. The experimental results show that with our framework many optimizations can be turned on and we can still estimate the WCET safely; the estimated WCET is better than the original one. We also show that compiler optimizations are beneficial for real-time systems.
Hermant, Jean-François. "Quelques problèmes et solutions en ordonnancement temps réel pour systèmes répartis." Paris 6, 1999. http://www.theses.fr/1999PA066665.
Simard, Catherine. "Analyse d'algorithmes de type Nesterov et leurs applications à l'imagerie numérique." Mémoire, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/7714.
Maroneze, André Oliveira. "Certified Compilation and Worst-Case Execution Time Estimation." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S030/document.
Safety-critical systems - such as electronic flight control systems and nuclear reactor controls - must satisfy strict safety requirements. We are interested here in the application of formal methods - built upon solid mathematical bases - to verify the behavior of safety-critical systems. More specifically, we formally specify our algorithms and then prove them correct using the Coq proof assistant - a program capable of mechanically checking the correctness of our proofs, providing a very high degree of confidence. In this thesis, we apply formal methods to obtain safe Worst-Case Execution Time (WCET) estimations for C programs. The WCET is an important property related to the safety of critical systems, but its estimation requires sophisticated techniques. To guarantee the absence of errors during WCET estimation, we have formally verified a WCET estimation technique based on the combination of two main methods: a loop bound estimation and the WCET estimation via the Implicit Path Enumeration Technique (IPET). The loop bound estimation itself is decomposed into three steps: a program slicing, a value analysis based on abstract interpretation, and a loop bound calculation stage. Each stage has a chapter dedicated to its formal verification. The entire development has been integrated into the formally verified C compiler CompCert. We prove that the final estimation is correct and we evaluate its performance on a set of reference benchmarks. The contributions of this thesis include (a) the formalization of the techniques used to estimate the WCET, (b) the estimation tool itself (obtained from the formalization), and (c) the experimental evaluation. We conclude that our formally verified development obtains interesting results in terms of precision, but it requires special precautions to ensure the proof effort remains manageable. The parallel development of specifications and proofs is essential to this end. Future work includes the formalization of hardware cost models, as well as the development of more sophisticated analyses to improve the precision of the estimated WCET.
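As background for the IPET step mentioned above, the generic integer linear program it solves can be sketched as follows (a textbook formulation with assumed notation, not the formalization developed in the thesis): execution counts of basic blocks are maximised against their worst-case costs, subject to structural flow constraints and loop bounds.

```latex
% Generic IPET formulation (illustrative sketch, assumed notation):
% x_B : execution count of basic block B,  c_B : its worst-case cost,
% in(B), out(B) : incoming / outgoing CFG edges of B,  x_e : edge execution counts.
\mathrm{WCET} \;=\; \max \sum_{B} c_B \, x_B
\quad \text{subject to} \quad
x_B \;=\; \sum_{e \,\in\, \mathrm{in}(B)} x_e \;=\; \sum_{e \,\in\, \mathrm{out}(B)} x_e ,
\qquad
x_{e_{\mathrm{back}}} \;\le\; n_{\max} \cdot x_{e_{\mathrm{entry}}} .
```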
Ballabriga, Clément. "Vérification de contraintes temporelles strictes sur des programmes par composition d'analyses partielles." Toulouse 3, 2010. http://thesesups.ups-tlse.fr/1001/.
Hard real-time systems are subject to real-time constraints. It is necessary to prove that no constraint violation can occur, and the WCET (Worst-Case Execution Time) computation plays an important role in this proof. WCET computation by static analysis is traditionally performed on a complete program. This approach has two drawbacks: most computation methods run in non-linear time with respect to the size of the analysed program, and problems arise when this program is made up of multiple components (some components may be unavailable at analysis time). Therefore, it is necessary to introduce a partial analysis method, in order to process each program component separately, producing partial WCET results for each component. The partial results can then be composed to get the WCET of the whole program. WCET computation by IPET (Implicit Path Enumeration Technique) involves several analyses, each of which contributes to an ILP system, and this system is solved to get the WCET. To perform partial analysis, each analysis involved in WCET computation must be adapted. We illustrate this process by taking the example of several existing analyses, then a more general framework for partial analysis is described. Next, we show that it is possible to take advantage of partial analysis to speed up ILP solving. Our experiments show that partial analysis leads to faster WCET computation times and allows handling programs made up of components, without adding too much overestimation.
Baga, Yohan. "Analyse de Flux de Trames AFDX en Réception et Méthode d’Optimisation Mémoire." Thesis, Cergy-Pontoise, 2018. http://www.theses.fr/2018CERG0957/document.
The rise of AFDX networks as a communication infrastructure between the on-board equipment of civil aircraft motivates many research projects to reduce communication delays while guaranteeing a high level of determinism and quality of service. This thesis deals with the effect of back-to-back frame reception on the receiving End System, in particular on its internal buffer, in order to guarantee no frame loss and optimal memory dimensioning. A worst-case modeling of the frame flow is first carried out according to a pessimistic method based on a periodic frame flow. Then a more optimistic method is presented, based on the reception intervals and an iterative frame placement. A probabilistic study uses Gaussian distributions to evaluate the occurrence probabilities of the worst back-to-back frame sequences and provides insight that opens a discussion on the relevance of not using the worst-case model to size the reception buffer. Additional memory gains can be achieved by implementing LZW lossless compression.
Rihani, Hamza. "Analyse temporelle des systèmes temps-réels sur architectures pluri-coeurs." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM074/document.
Predictability is of paramount importance in real-time and safety-critical systems, where non-functional properties -- such as the timing behavior -- have a high impact on the system's correctness. As many safety-critical systems have a growing performance demand, classical architectures, such as single-cores, are not sufficient anymore. One increasingly popular solution is the use of multi-core systems, even in the real-time domain. Recent many-core architectures, such as the Kalray MPPA, were designed to take advantage of the performance benefits of a multi-core architecture while offering a certain degree of predictability. It is still hard, however, to predict the execution time due to interference on shared resources (e.g., bus, memory, etc.). To tackle this challenge, Time Division Multiple Access (TDMA) buses are often advocated. In the first part of this thesis, we are interested in the timing analysis of accesses to shared resources in such environments. Our approach uses Satisfiability Modulo Theories (SMT) to encode the semantics and the execution time of the analyzed program. To estimate the delays of shared resource accesses, we propose an SMT model of a shared TDMA bus. An SMT solver is used to find a solution that corresponds to the execution path with the maximal execution time. Using examples, we show how the worst-case execution time estimation is enhanced by combining the semantics and the shared bus analysis in SMT. In the second part, we introduce a response time analysis technique for Synchronous Data Flow programs. These are mapped to multiple parallel dependent tasks running on a compute cluster of the Kalray MPPA-256 many-core processor. The analysis we devise computes a set of response times and release dates that respect the constraints in the task dependency graph. We derive a mathematical model of the multi-level bus arbitration policy used by the MPPA. Further, we refine the analysis to account for (i) release dates and response times of co-runners, (ii) task execution models, (iii) use of memory banks, and (iv) memory access pipelining. Further improvements to the precision of the analysis were achieved by considering only accesses that block the emitting core in the interference analysis. Our experimental evaluation focuses on randomly generated benchmarks and an avionics case study.
Silantiev, Alexey. "Groupes quantiques associés aux courbes rationnelles et elliptiques et leurs applications." Angers, 2008. http://www.theses.fr/2008ANGE0061.
The general context of the work developed here is the control of complex industrial processes. This work offers new methods for improving statistical process control for non-Gaussian distributions: the control chart with variable parameters and the theoretical control chart for the Rayleigh distribution. A model of integration of the APC (Automatic Process Control) and MSP techniques is introduced, and then analyzed using the models of two real processes.
Henry, Julien. "Static analysis of program by Abstract Interpretation and Decision Procedures." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM037/document.
Static program analysis aims at automatically determining whether a program satisfies some particular properties. For this purpose, abstract interpretation is a framework that enables the computation of invariants, i.e. properties on the variables that always hold for any program execution. The precision of these invariants depends on many parameters, in particular the abstract domain and the iteration strategy for computing these invariants. In this thesis, we propose several improvements on the abstract interpretation framework that enhance the overall precision of the analysis. Usually, abstract interpretation consists in computing an ascending sequence with widening, which converges towards a fixpoint that is a program invariant, and then computing a descending sequence of correct solutions without widening. We describe and experiment with a method to improve a fixpoint after its computation, by starting again a new ascending/descending sequence with a smarter starting value. Abstract interpretation can also be made more precise by distinguishing paths inside loops, at the expense of possibly exponential complexity. Satisfiability modulo theories (SMT), whose efficiency has been considerably improved in the last decade, allows sparse representations of paths and sets of paths. We propose to combine this SMT representation of paths with various state-of-the-art iteration strategies to further improve the overall precision of the analysis. We propose a second coupling between abstract interpretation and SMT in a program verification framework called Modular Path Focusing, which computes function and loop summaries by abstract interpretation in a modular fashion, guided by error paths obtained with SMT. Our framework can be used for various purposes: it can prove the unreachability of certain error program states, but can also synthesize function/loop preconditions for which these error states are unreachable. We then describe an application of static analysis and SMT to the estimation of program worst-case execution time (WCET). We first present how to express WCET as an optimization modulo theory problem, and show that natural encodings into SMT yield formulas intractable for all current production-grade solvers. We propose an efficient way to considerably reduce the computation time of the SMT solvers by conjoining to the formulas well-chosen summaries of program portions obtained by static analysis. We finally describe the design and the implementation of Pagai, a new static analyzer working over the LLVM compiler infrastructure, which computes numerical inductive invariants using the various techniques described in this thesis. Because of the non-monotonicity of the results of abstract interpretation with widening operators, it is difficult to conclude that some abstraction is more precise than another based on theoretical local precision results. We thus conducted extensive comparisons between our new techniques and previous ones, on a variety of open-source packages and benchmarks used in the community.
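To make the ascending/descending iteration described above concrete, here is a minimal interval-domain sketch (a generic textbook illustration, not the Pagai implementation; the analysed loop and all helper names are invented) showing how widening forces convergence and how one descending step then recovers a tighter invariant for the loop `x = 0; while x < 100: x += 1`.

```python
# Minimal abstract-interpretation sketch on the interval domain (illustrative only):
# analysing "x = 0; while x < 100: x += 1" with widening, then one descending step.

TOP = float("inf")

def widen(old, new):
    """Classical interval widening: unstable bounds jump to infinity."""
    lo = old[0] if old[0] <= new[0] else -TOP
    hi = old[1] if old[1] >= new[1] else TOP
    return (lo, hi)

def f(inv):
    """One transfer on the loop-head invariant: join of entry state and body effect."""
    lo, hi = inv
    body = (lo + 1, min(hi, 99) + 1)            # body: assume x < 100, then x += 1
    return (min(0, body[0]), max(0, body[1]))   # join with the entry state x = 0

inv = (0, 0)                      # ascending sequence with widening
while True:
    nxt = widen(inv, f(inv))
    if nxt == inv:
        break
    inv = nxt
print("after widening :", inv)    # (0, inf)

inv = f(inv)                      # one descending (narrowing) iteration
print("after narrowing:", inv)    # (0, 100)
```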
Naji, Amine. "Timing analysis for time-predictable architectures." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS282.
With the rising complexity of the underlying computer hardware, the analysis of the timing behavior of real-time software is becoming more and more complex and imprecise. Time-predictable computer architectures have thus been proposed to provide hardware support for timing analysis. The goal is to deliver tighter worst-case execution time (WCET) estimates while keeping the analysis overhead minimal. These estimates are typically provided by standalone WCET analysis tools. The emergence of time-predictable architectures is, however, quite recent. While several designs have been introduced, efforts are still needed to assess their effectiveness in actually enhancing the worst-case performance. For much time-predictable hardware, timing analysis is either non-existent or lacks proper support. Consequently, time-predictable architectures are barely supported in existing WCET analysis tools. The general contribution of this thesis is to help fill this gap and turn some opportunities into concrete advantages. For this, we focus on the Patmos processor. The already existing support around Patmos allows for an effective exploration of techniques to enhance the worst-case performance. Main contributions include: (1) handling of predicated execution in timing analysis, (2) comparison of the precision of stack cache occupancy analyses, (3) analysis of preemption costs for the stack cache, (4) preemption mechanisms for the stack cache, and (5) a prefetching-like technique for the stack cache. In addition, we present our WCET analysis tool Odyssey, which implements timing analyses for Patmos.
Delbot, François. "Au delà de l'évaluation en pire cas : comparaison et évaluation en moyenne de processus d'optimisation pour le problème du vertex cover et des arbres de connexion de groupes dynamiques." Phd thesis, Université d'Evry-Val d'Essonne, 2009. http://tel.archives-ouvertes.fr/tel-00927315.
Barré, Mathieu. "Worst-case analysis of efficient first-order methods." Electronic Thesis or Diss., Université Paris sciences et lettres, 2021. http://www.theses.fr/2021UPSLE064.
Many modern applications rely on solving optimization problems (e.g., in computational biology, mechanics, finance), establishing optimization methods as crucial tools in many scientific fields. Providing guarantees on the (hopefully good) behaviors of these methods is therefore of significant interest. A standard way of analyzing optimization algorithms consists in worst-case reasoning, that is, providing guarantees on the behavior of an algorithm (e.g. its convergence speed) that are independent of the function on which the algorithm is applied and hold for every function in a particular class. This thesis aims at providing worst-case analyses of a few efficient first-order optimization methods. We start with the study of Anderson acceleration methods, for which we provide new explicit worst-case bounds guaranteeing precisely when acceleration occurs. We obtained these guarantees by providing upper bounds on a variation of the classical Chebyshev optimization problem on polynomials, which we believe to be of independent interest. Then, we extend the Performance Estimation Problem (PEP) framework, which was originally designed for principled analyses of fixed-step algorithms, to study first-order methods with adaptive parameters. This is illustrated in particular through the worst-case analyses of the canonical gradient method with Polyak step sizes, which uses gradient norms and function value information, and of an accelerated version of it. The approach is also presented on other standard adaptive algorithms. Finally, the last contribution of this thesis is to further develop the PEP methodology for analyzing first-order methods relying on inexact proximal computations. Using this framework, we produce algorithms with optimized worst-case guarantees and provide (numerical and analytical) worst-case bounds for some standard algorithms in the literature.
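As a concrete instance of the adaptive methods analysed in this thesis, here is a short sketch of gradient descent with Polyak step sizes (the generic textbook scheme, assuming the optimal value f* is known; the quadratic test function is an invented example, not the author's code).

```python
import numpy as np

def polyak_gradient_descent(f, grad, x0, f_star, iters=100):
    """Gradient descent with Polyak step sizes:
    gamma_k = (f(x_k) - f*) / ||grad f(x_k)||^2  (assumes f* is known)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        gn2 = float(np.dot(g, g))
        if gn2 == 0.0:                     # stationary point reached
            break
        x = x - (f(x) - f_star) / gn2 * g
    return x

# Usage on a simple quadratic f(x) = ||x||^2 (minimum value 0 at the origin)
f = lambda x: float(np.dot(x, x))
grad = lambda x: 2.0 * x
print(polyak_gradient_descent(f, grad, [3.0, -4.0], f_star=0.0, iters=50))
```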
Louise, Stéphane. "Calcul de majorants sûrs de temps d'exécution au pire pour des tâches d'applications temps-réels critiques, pour des systèmes disposants de caches mémoire." Phd thesis, Université Paris Sud - Paris XI, 2002. http://tel.archives-ouvertes.fr/tel-00695930.
Varoumas, Steven. "Modèles de programmation de haut niveau pour microcontrôleurs à faibles ressources." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS394.
Microcontrollers are programmable integrated circuits embedded in many everyday objects. Due to their scarce resources, they are often programmed using low-level languages such as C or assembly. These languages do not provide the same abstractions and guarantees as higher-level programming languages, such as OCaml. This thesis offers a set of solutions aimed at extending microcontroller programming with high-level programming paradigms. These solutions provide multiple abstraction layers which, in particular, enable the development of portable programs, free from the specifics of the hardware. We thus introduce a layer of hardware abstraction through an OCaml virtual machine, which enjoys the many benefits of the language while keeping a low memory footprint. We then extend the OCaml language with a synchronous programming model inspired by the Lustre dataflow language, which offers abstraction over the concurrent aspects of a program. The language is then formally specified and various typing properties are proven. Moreover, the abstractions offered by our work make portable some static analyses that can be performed on program bytecode. We thus propose such an analysis, which estimates the worst-case execution time (WCET) of a synchronous program. All the propositions of this thesis form a complete development toolchain, and several practical examples are provided to illustrate the benefits of the proposed solutions.
Babus, Florina. "Contrôle de processus industriels complexes et instables par le biais des techniques statistiques et automatiques." Phd thesis, Université d'Angers, 2008. http://tel.archives-ouvertes.fr/tel-00535668.
Full textGiroudot, Frédéric. "NoC-based Architectures for Real-Time Applications : Performance Analysis and Design Space Exploration." Thesis, Toulouse, INPT, 2019. https://oatao.univ-toulouse.fr/25921/1/Giroudot_Frederic.pdf.
Monoprocessor architectures have reached their limits with regard to the computing power they offer versus the needs of modern systems. Although multicore architectures partially mitigate this limitation and are commonly used nowadays, they usually rely on intrinsically non-scalable buses to interconnect the cores. The manycore paradigm was proposed to tackle the scalability issue of bus-based multicore processors. It can scale up to hundreds of processing elements (PEs) on a single chip, by organizing them into computing tiles (holding one or several PEs). Inter-core communication is usually done using a Network-on-Chip (NoC) that consists of interconnected on-chip routers allowing communication between tiles. However, manycore architectures raise numerous challenges, particularly for real-time applications. First, NoC-based communication tends to generate complex blocking patterns when congestion occurs, which complicates the analysis, since computing accurate worst-case delays becomes difficult. Second, running many applications on large Systems-on-Chip such as manycore architectures makes system design particularly crucial and complex. On the one hand, it complicates Design Space Exploration, as it multiplies the implementation alternatives that will guarantee the desired functionalities. On the other hand, once a hardware architecture is chosen, mapping the tasks of all applications on the platform is a hard problem, and finding an optimal solution in a reasonable amount of time is not always possible. Therefore, our first contributions address the need for computing tight worst-case delay bounds in wormhole NoCs. We first propose a buffer-aware worst-case timing analysis (BATA) to derive upper bounds on the worst-case end-to-end delays of constant-bit-rate data flows transmitted over a NoC on a manycore architecture. We then extend BATA to cover a wider range of traffic types, including bursty traffic flows, and heterogeneous architectures. The introduced method is called G-BATA, for Graph-based BATA. In addition to covering a wider range of assumptions, G-BATA improves the computation time, thus increasing the scalability of the method. In a second part, we develop a method addressing design and mapping for applications with real-time constraints on manycore platforms. It combines model-based engineering tools (TTool) and simulation with our analytical verification technique (G-BATA) and tools (WoPANets) to provide an efficient design space exploration framework. Finally, we validate our contributions on (a) a series of experiments on a physical platform and (b) two case studies taken from the real world: an autonomous vehicle control application and a 5G signal decoder application.
Mussot, Vincent. "Automates d'annotation de flot pour l'expression et l'intégration de propriétés dans l'analyse de WCET." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30247/document.
In the domain of critical systems, the analysis of the execution times of programs is needed to schedule the various tasks at best and, by extension, to dimension the whole system. The execution time of a program depends on multiple factors such as the inputs of the program or the targeted hardware. Yet this time variation is an issue in real-time systems, where the duration is required to allocate the correct processor time to each task, and for this purpose we need to know their worst-case execution time. In the TRACES team at IRIT, we try to compute a safe upper bound of this worst-case execution time that is as precise as possible. In order to do so, we work on the control flow graph of a program, which represents an over-set of its possible executions, and we combine this structure with annotations on specific behaviours of the program that might reduce the over-approximation of our estimation. Tools designed to compute worst-case execution times of programs usually support the expression and the integration of annotations thanks to specific annotation languages. Our proposal is to replace these languages with a type of automata named flow fact automata, so that not only the expression but also the integration of annotations in the analysis inherit from the formal basis of automata. Based on these automata enriched with constraints, variables and a hierarchy, we show how they support the various annotation types used in the worst-case execution time domain. Additionally, the integration of annotations in an analysis usually leads to associating numerical constraints with the control flow graph. The automata presented here support this method, but their expressiveness offers new integration possibilities based on the partial unfolding of the control flow graph. We present experimental results from the comparison of these two methods that show how the graph unfolding can improve the analysis precision. In the end, this precision gain in the worst-case execution time will ensure a better usage of the hardware as well as the absence of risks for the user or the system itself.
Bettonte, Gabriella. "Quantum approaches for Worst-Case Execution-Times analysis of programs." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG026.
Quantum computing is gaining popularity in the computer science community. Awareness of the potential of quantum computing started in 1981, when Richard Feynman first speculated about building a quantum computer. However, until recently, the field faced much skepticism about its long-term practical capability to solve problems. In particular, researchers are still facing the challenge of building scalable and reliable quantum computers. Lately, many companies have obtained encouraging results and built quantum machines with enough qubits to start conducting interesting experiments. We chose worst-case execution-time (WCET) evaluation as the application of our research on quantum computing, as it is crucial for various real-time applications. WCET analysis guarantees that a program's execution time matches all the scheduling and timing constraints. In the history of quantum algorithms, attention was often given to problems with a particular mathematical structure. WCET evaluation, by contrast, is not a particularly quantum-friendly problem, and it already has efficient classical solutions. Hence, it is worth exploring the impact of quantum computing on those kinds of problems, with the spirit of finding new and concrete fields to which quantum computing could bring its potential. If not, research on such specific fields will help to set the boundaries of which applications could benefit from quantum computing. This thesis presents different quantum approaches to perform WCET evaluations of programs under simplified assumptions.
Stehlé, Damien. "Algorithmique de la réduction de réseaux et application à la recherche de pires cas pour l'arrondi de fonctions mathématiques." Phd thesis, Université Henri Poincaré - Nancy I, 2005. http://tel.archives-ouvertes.fr/tel-00011150.
Full textplusieurs domaines de l'algorithmique, en cryptographie et en théorie
algorithmique des nombres par exemple. L'objet du présent mémoire est dual : nous améliorons les algorithmes de réduction des réseaux,
et nous développons une nouvelle application dans le domaine
de l'arithmétique des ordinateurs. En ce qui concerne l'aspect algorithmique, nous nous intéressons aux cas des petites dimensions (en dimension un, où il s'agit du calcul de pgcd, et aussi en dimensions 2 à 4), ainsi qu'à la description d'une nouvelle variante de l'algorithme LLL, en dimension quelconque. Du point de vue de l'application, nous utilisons la méthode
de Coppersmith permettant de trouver les petites racines de polynômes modulaires multivariés, pour calculer les pires cas pour l'arrondi des fonctions mathématiques, quand la fonction, le mode d'arrondi, et la précision sont donnés. Nous adaptons aussi notre technique aux mauvais cas simultanés pour deux fonctions. Ces deux méthodes sont des pré-calculs coûteux, qui une fois
effectués permettent d'accélérer les implantations des fonctions mathématiques élémentaires en précision fixée, par exemple en double précision.
La plupart des algorithmes décrits dans ce mémoire ont été validés
expérimentalement par des implantations, qui sont
disponibles à l'url http://www.loria.fr/~stehle.
Senoussaoui, Ikram. "Co-ordonnancement processeur et mémoire des applications temps-réel sur les plateformes multicœurs." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILB051.
The demand for computational power in real-time embedded systems has increased significantly in recent years. Multicore platforms, which are generally equipped with a single memory subsystem shared by all cores, have satisfied this increasing need for computation capability to some extent. However, in real-time systems, simultaneous use of the memory subsystem may result in significant memory interference. Such memory interference, owing to resource contention, may lead to very pessimistic worst-case execution time (WCET) bounds and to under-utilization of the system. This thesis focuses on reducing the interference resulting from shared resource contention (e.g., caches, buses and main memory) on multicore systems through processor and memory co-scheduling for real-time applications. To this end, we use existing task models such as the DFPP (Deferred Fixed Preemption Point), PREM (Predictable Execution Model) and AER (Acquisition-Execution-Restitution) models. We also propose a new realistic task model and several algorithms for task set allocation and for processor and memory co-scheduling. We show that our proposed methodologies can improve schedulability by up to 50% compared to equivalent schedules generated with state-of-the-art methods. Furthermore, we experimentally demonstrate the applicability of our methodologies on the Infineon AURIX TC-397 multicore family of processors using different benchmarks.
Preda, Valentin. "Robust microvibration control and worst-case analysis for high pointing stability space missions." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0785/document.
Next-generation satellite missions will have to meet extremely challenging pointing stability requirements. Even low levels of vibration can introduce enough jitter in the optical elements to cause a significant reduction in image quality. The success of these projects is therefore constrained by the ability of on-board vibration isolation and optical control techniques to keep the structural elements of the spacecraft stable in the presence of external and internal disturbances. In this context, the research work presented in this thesis combines the expertise of the European Space Agency (ESA), industry (Airbus Defence and Space) and the IMS laboratory (laboratoire de l’Intégration du Matériau au Système) with the aim of developing a new generation of robust microvibration isolation systems for future space observation missions. More precisely, the thesis presents the development of an Integrated Modeling, Control and Analysis framework in which to conduct advanced studies related to reaction wheel microvibration mitigation. The thesis builds upon the previous research conducted by Airbus Defence and Space and ESA on the use of mixed active/passive microvibration mitigation techniques and provides a complete methodology for the uncertainty modeling, robust control system design and worst-case analysis of such systems for a typical satellite observation mission. It is shown how disturbances produced by mechanical spinning devices such as reaction wheels can be significantly attenuated in order to improve the pointing stability of the spacecraft, even in the presence of model uncertainty and other nonlinear phenomena. Finally, the work introduces a new disturbance model for the multi-harmonic perturbation spectrum produced by spinning reaction wheels that is suitable for both controller synthesis and worst-case analysis using modern robust control tools. This model is exploited to provide new ways of simulating the image distortions induced by such disturbances.
Burguière, Claire. "Modéliser la prédiction de branchement pour le calcul de temps d'exécution pire-cas." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/383/.
The wider and wider use of high-performance processors in real-time systems makes it more and more difficult to guarantee that programs will respect their deadlines. While the computation of Worst-Case Execution Times relies on static analysis of the code, the challenge is to model, with enough safety and accuracy, the behaviour of intrinsically dynamic components. In this work, we focus on the impact of dynamic branch prediction on the worst-case execution time. We present two approaches to model this impact. The local approach examines each branch individually to determine its worst-case number of mispredictions. In the global approach, an ILP system of constraints describes the computation of the branch prediction. Each part of the dynamic branch predictor can be modelled separately: BHT indexing, conflicts on BHT entries and the 2-bit counter computation of the prediction. We introduce two branch predictor models: the bimodal predictor and a 2-bit global predictor. We propose a way to compare the effort needed to build the system of constraints, which we call the modelling complexity. This complexity is quantified as a function of the number of constraints, the number of variables and the system arity. We analyse the modelling complexity of several branch predictors and we deduce the contexts of use for which the global approach is suitable. We examine the differences between the two approaches.
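To illustrate the bimodal predictor discussed above, here is a minimal simulation of a table of 2-bit saturating counters indexed by the low-order bits of the branch address (an illustrative sketch with assumed parameters such as the table size and branch history, not the model developed in the thesis).

```python
# Minimal simulation of a bimodal branch predictor: a table of 2-bit saturating
# counters indexed by the low-order bits of the branch address (illustrative only).

BHT_BITS = 4                      # assumed table size: 2**4 entries
bht = [1] * (1 << BHT_BITS)       # counters start weakly not-taken (0..3, >= 2 predicts taken)

def predict(pc):
    return bht[pc & ((1 << BHT_BITS) - 1)] >= 2   # True = predict taken

def update(pc, taken):
    idx = pc & ((1 << BHT_BITS) - 1)
    if taken:
        bht[idx] = min(3, bht[idx] + 1)
    else:
        bht[idx] = max(0, bht[idx] - 1)

# A loop branch taken 9 times then not taken once: count the mispredictions.
mispredictions = 0
for outcome in [True] * 9 + [False]:
    if predict(0x40) != outcome:
        mispredictions += 1
    update(0x40, outcome)
print(mispredictions)             # 2 with this history (first taken branch, then the loop exit)
```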
Mouafo, Tchinda Yves. "Robustesse des applications temps-réel multicoeurs : techniques de construction d'un ordonnacement équitable tolérant aux pannes matérielles." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2017. http://www.theses.fr/2017ESMA0015/document.
This thesis proposes several techniques to build a valid schedule with a Pfair algorithm for multicore real-time systems despite permanent processor failures. Depending on the nature of the tasks, additional time may or may not be allocated to recover the lost execution. First, we consider a single core failure. We show that if no additional time is allocated, the use of one more core than the required minimum provides a valid schedule: this is the Limited Hardware Redundancy Technique. However, if full recovery is mandatory, we propose three techniques: the Substitute Subtasks Technique, which increases the WCET to provide additional time that can be used to recover the lost time; the Constrain and Release Technique, which creates a time margin between each task's deadline and the following period that can be used to recover the lost execution; and the Aperiodic Flow Technique, which reschedules the lost execution within the idle time units. Then, these techniques are mixed to adapt the scheduling behaviour to the nature of the impacted tasks. Finally, the case of the failure of several cores is studied. To adapt the system load to the number of remaining functional cores, we use a criticality mode change, which modifies the temporal parameters of some tasks, or we discard some tasks according to their importance.
Soni, Aakash. "Real-time performance analysis of a QoS based industrial embedded network." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0047.
AFDX serves as a backbone network for the transmission of critical avionic flows. This network is certified thanks to WCTT analysis using the Network Calculus (NC) approach. However, the pessimism introduced by the NC approach often leads to an over-sized and eventually under-utilized network. Manufacturers envision making better use of the available network resources by increasing the occupancy rate of the AFDX network, allowing additional traffic from other critical and non-critical functions. Such harmonization of the AFDX network with mixed-criticality flows necessitates the use of QoS mechanisms to satisfy the delay constraints of the different classes of flows. In this thesis we study such a QoS-aware network, based in particular on DRR and WRR scheduling. We propose an optimal bandwidth distribution method that ensures the service required by critical flows while providing maximum service to other non-critical flows. We also propose an optimized NC approach to compute tight delay bounds. Our approach has led to the computation of up to 40% tighter bounds, in an industrial AFDX configuration, compared to the classical approach.
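For readers unfamiliar with the DRR scheduling mentioned above, the following sketch implements the textbook Deficit Round Robin algorithm, in which the per-class quantum controls each class's bandwidth share (an illustrative toy example with invented class names and frame sizes, not the optimized bandwidth-distribution method of the thesis).

```python
from collections import deque

# Textbook Deficit Round Robin (illustrative sketch): each traffic class has a
# quantum; its deficit counter accumulates credit each round and pays for frames.

class DrrScheduler:
    def __init__(self, quanta):
        self.quanta = quanta                          # class name -> quantum (bytes)
        self.queues = {c: deque() for c in quanta}
        self.deficit = {c: 0 for c in quanta}

    def enqueue(self, cls, frame_size):
        self.queues[cls].append(frame_size)

    def round(self):
        """Serve one DRR round and return the list of (class, frame_size) sent."""
        sent = []
        for cls, q in self.queues.items():
            if not q:
                continue
            self.deficit[cls] += self.quanta[cls]
            while q and q[0] <= self.deficit[cls]:
                size = q.popleft()
                self.deficit[cls] -= size
                sent.append((cls, size))
            if not q:                                 # an emptied queue loses its credit
                self.deficit[cls] = 0
        return sent

# Two classes with a 2:1 bandwidth split set via the quanta
drr = DrrScheduler({"critical": 1000, "best_effort": 500})
for _ in range(4):
    drr.enqueue("critical", 500)
    drr.enqueue("best_effort", 500)
print(drr.round())   # critical sends two 500-byte frames, best_effort one
```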
Emery, Laetitia. "Approches archéométriques des productions faïencières françaises au XVIIIe siècle : le cas de la manufacture Babut à Bergerac (env. 1740 - 1789)." Phd thesis, Université Michel de Montaigne - Bordeaux III, 2012. http://tel.archives-ouvertes.fr/tel-00751413.
Full textAit, Bensaid Samira. "Formal Semantics of Hardware Compilation Framework." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG085.
Static worst-case timing analyses are used to ensure that the timing deadlines required for safety-critical systems are met. In order to derive accurate bounds, these timing analyses require precise (micro-)architecture considerations. Usually, such micro-architecture models are constructed by hand from processor manuals. However, with open-source hardware initiatives and high-level Hardware Construction Languages (HCLs), the automatic generation of these micro-architecture models and, more specifically, of the pipeline models is promoted. We propose a workflow that aims to automatically construct pipeline datapath models from processor designs described in HCLs. Our workflow is based on the Chisel/FIRRTL Hardware Compiler Framework. We build the pipeline datapath models at the intermediate-representation level. Our work intends to prove timing properties, such as properties related to timing predictability, and we rely on formal verification as our method. The generated models are then translated into formal models and integrated into an existing model-checking-based procedure for detecting timing anomalies. We use the TLA+ modeling and verification language and experiment with our analysis on several open-source RISC-V processors. Finally, we extend the study by evaluating the impact of the automatic generation through a series of synthetic benchmarks.
Bechtel, Andrew Joseph. "External strengthening of reinforced concrete pier caps." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42809.
Full textBel, Hadj Aissa Nadia. "Maîtrise du temps d'exécution de logiciels déployés dans des dispositifs personnels de confiance." Thesis, Lille 1, 2008. http://www.theses.fr/2008LIL10133/document.
The proliferation of small and open objects such as personal trusted devices has encouraged the spread of dynamically adaptable runtime environments. Thus, new software can be deployed on the fly after the devices are delivered to their holders. Through our work, we aim to ensure that any newly and successfully deployed software will be able to deliver responses within a previously established maximum delay. These guarantees are crucial in terms of safety and security. To this end, we propose to distribute the computation of the worst-case execution time. Our solution embraces a proof-carrying-code approach, making a distinction between a powerful but untrusted computer used to produce the code and a safe but resource-constrained code consumer. The producer does not assume any prior knowledge of the runtime environment on which its software will be executed. The code is statically analyzed to extract loop bounds, and a proof containing this information is attached to the software. By a straightforward inspection of the code, the consumer can verify the validity of the proof and compute the global worst-case execution time. We experimentally validate our approach on a hardware and software architecture which meets the requirements of trusted personal devices. Finally, we address the challenges raised when potentially untrusted software from different service providers can coexist and interact in a single device. We focus on the impact of the interaction between different software units on the guarantees previously given by the system on the worst-case execution time, and we outline a solution based on contracts to maintain these guarantees.
Sophiyair, Eleonore. "Diabete et grossesse : analyse retrospective de 57 cas." Montpellier 1, 1989. http://www.theses.fr/1989MON11299.
Full textFABRIZI, CARLO. "Analisi computazionale dell’aeroacustica di un pneumatico in rotolamento." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2010. http://hdl.handle.net/2108/1369.
Road traffic is one of the major sources of noise in modern society. Consequently, the development of new vehicles is subject to increasingly stringent guidelines in terms of noise emissions. The main noise sources of common road vehicles are the engine, the transmission, the aerodynamics and the tire-road interaction. The latter becomes dominant between 50 and 100 mph, speeds typical of urban and extra-urban roads. The noise that arises from the tire-road interaction is a combination of structural vibration and aeroacoustic phenomena that amplify or reduce the sound emitted by the tire. The aim of the numerical analysis presented in this thesis is to investigate the aeroacoustic noise generation mechanisms of the tire and, at the same time, provide a tool to develop low-noise tires. The present work is divided into two parts: the analysis of the steady aerodynamics and the aeroacoustics of the rolling tire. In the first part, the study of the numerical solution of the Navier-Stokes equations made it possible to highlight the aerodynamic phenomena, such as separations or jet streams, which can cause noise. In the second part, these aspects have been analyzed in greater detail by means of aeroacoustic analogies, assessing the capacity of the numerical tool to provide suggestions for the development of quieter tires.
Sirigu, Marco. "Synthétisation d'un calculateur de délai et de puissance : application à la technologie MOS complémentaire." Paris 11, 1985. http://www.theses.fr/1985PA112062.
Full textOkiye, Waais Idriss. "Analyse multidimensionnelle de la pauvreté : le cas de Djibouti." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCB001/document.
The aim of this thesis is to propose and develop various multidimensional measures of poverty. There is a consensus on the multidimensional nature of poverty. Scientists, policy makers and development professionals agree that the monetary dimension (lack of income) is inadequate to represent poverty. On the basis of the work of Sen (Nobel Prize in Economics), particularly on the capability approach, we propose four different measures of poverty. The first one is a monetary measure based on the utilitarian approach; the second is a subjective measure founded on household experience; the third is a multidimensional axiomatic measure; and the final one is a non-axiomatic measure based on the theory of fuzzy sets. They are implemented using data from the EDAM3-IS survey (Djiboutian Survey of Households 2012). The results fall within the framework of economic growth in Djibouti. However, all the measures used have shown great disparities between the capital and the regions in terms of basic infrastructure and household welfare. Each method produced results with different interpretations of the determinants of poverty. This does not mean that one method is better than another, but rather that each approach, in a particular context, may be more relevant. Thus, identifying the poor by applying the different measures of poverty gave us a clear-cut profile, which implies that the decision-maker must first set the aim in view when implementing anti-poverty policies. It can be emphasized that the inclusion of a subjective weighting in the process of measuring poverty is one of our contributions towards the development of multidimensional measures of poverty.
Bouallegue, Olfa. "Analyse économique des révolutions : Cas de la révolution Tunisienne." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTD020/document.
Revolution, which embodies major turns in the course of history, has long been a subject of social study. With the coming of the school of public choice in the 1960s, a new economic current helped to understand revolution. Many economists, such as James M. Buchanan (1962), Gordon Tullock (1971-1974) and John E. Romer (1985), have applied economic theory to social and political science using tools developed by microeconomics. The goal of my research is to highlight the contribution of economic theory to the understanding of revolution. I first draw a line between two approaches that have studied revolution: the sociological approach, which mainly explains why people revolt when they are faced with structural imbalances, and the economic approach, which uses the theory of rational choice to demonstrate how people choose to be passive when they are confronted with a revolution.
Chenguiti, Khalid. "Analyse en composantes principales fonctionnelle cas des prises alimentaires." Mémoire, Université de Sherbrooke, 2006. http://savoirs.usherbrooke.ca/handle/11143/4722.
Full textMéhel, Éric. "Kératotomie radiaire : analyse vidéokératoscopique : à propos de 80 cas." Nantes, 1994. http://www.theses.fr/1994NANT252M.
Full textOchs, Patrick. "L'investissement immatériel et la commercialisation : analyse du cas français." Paris 2, 1995. http://www.theses.fr/1995PA020005.
In France, intangible investment progressed more rapidly than tangible investment between 1973 and 1993. This change draws together intangible investment and intangible expenditure. At present, for lack of relevant financial indicators to evaluate intangible investment of a business nature, companies are not able to assess the implications involved in these intangible commitments in the short, medium and long term. Primarily, it is important to define intangible investment in the company's marketing procedure, given the economic, financial, accounting, fiscal and strategic inter-relationships. There follows an explanation of the contribution of intangible investment in marketing, both to the competitiveness and to the value of the company. At this point, an appropriate presentation is made of the results of an empirical validation of the hypotheses raised; this validation deals with 8213 companies and the results of a comparative study of 1109 companies which belong to four different industrial sectors. The conclusion proposes another framework for the financial management of the intangible and a new patrimonial perspective for intangible investment in marketing.
Hancard, Olivier. "Le défi technologique : analyse théorique et étude de cas." Paris 1, 1997. http://www.theses.fr/1997PA010045.
Technological progress and its corollaries, technical improvement and scientific progress, constitute an analytical trilogy which determines empirical economic stakes as well as theoretical and conceptual challenges. After developing a critical reading of the "usual" theoretical discussion, we open this debate to neo-Darwinist concepts. The procedure, not totally fruitless, comes up against its own limits. We therefore choose to argue for the constitution of a "macroeconomic analysis of knowledge production". We define the macroeconomic efficiency of "cognitive production" (the production of new knowledge whose assimilation is cognition) by an equilibrium between two antagonistic dynamics: "diffusion" and "germination". Subsequently, we build another model about the strategic "over-germination" of national scientific and technological policies under the pressure of international technological rivalries. We also propose some theoretical reflections about astonishing forms of R&D strategies, and about the cognitive economic nature of different kinds of products (using a conceptual distinction between the "artisan craft", the "artist production" and the "industrial product"). This brings us to problems concerning value theory. We then move beyond the theoretical discussion to face some empirical technological challenges. Internationally, we first compare R&D dynamics, before studying three "micro-technological challenges" (innovation on metals, the working of metals, and polymers) and, finally, two microeconomic subjects (a case of joint R&D between independent firms, and a description of the small technological French firm).
Khalil, Emam. "Analyse du dumping et de l'antidumping : cas de l'Egypte." Caen, 2009. http://www.theses.fr/2009CAEN0658.