Dissertations on the topic "Programmation à base de tâches"
Consult the top 50 dissertations for research on the topic "Programmation à base de tâches".
Benbernou, Salima. "Outils de base pour une approche de programmation fonctionnelle des tâches opératoires : application aux tâches d'assemblage." Valenciennes, 1991. https://ged.uphf.fr/nuxeo/site/esupversions/27732085-ba53-4b37-94ed-e9e67ed3fb2c.
Hermenier, Fabien. "Gestion dynamique des tâches dans les grappes : une approche à base de machines virtuelles." PhD thesis, Université de Nantes, 2009. http://tel.archives-ouvertes.fr/tel-00476822.
Sergent, Marc. "Passage à l'echelle d'un support d'exécution à base de tâches pour l'algèbre linéaire dense." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0372/document.
The ever-increasing architectural complexity of supercomputers emphasizes the need for high-level parallel programming paradigms to design efficient, scalable and portable scientific applications. Among such paradigms, the task-based programming model abstracts away much of the architectural complexity by representing an application as a Directed Acyclic Graph (DAG) of tasks. In particular, the Sequential-Task-Flow (STF) model decouples the sequential task submission step from the parallel task execution step. While this model allows for further optimizations on the DAG of tasks at submission time, a key concern is the performance hindrance of sequential task submission at scale. This thesis studies the scalability of the STF-based StarPU runtime system (developed at Inria Bordeaux in the STORM team) for large-scale 3D simulations of the CEA that use dense linear algebra solvers. To that end, we collaborated with the HiePACS team of Inria Bordeaux on the Chameleon software, a collection of linear algebra solvers on top of task-based runtime systems, to produce an efficient and scalable dense linear algebra solver on top of StarPU, scaling up to 3,000 cores and 288 GPUs of CEA-DAM's TERA-100 cluster.
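The STF principle described in this abstract (sequential submission with automatic dependency inference from data-access modes) can be sketched in a few lines. The following Python sketch is a conceptual illustration only, not StarPU's actual C API; the `STFRuntime` class and its method names are invented for the example:

```python
# Conceptual sketch of Sequential-Task-Flow: tasks are submitted one by one
# with declared read/write sets, dependencies (RAW, WAR, WAW) are inferred
# automatically, and the resulting DAG is executed in parallel.
from concurrent.futures import ThreadPoolExecutor

class STFRuntime:
    def __init__(self):
        self.last_writer = {}   # data handle -> last task that wrote it
        self.readers = {}       # data handle -> tasks reading it since last write
        self.deps = {}          # task id -> set of predecessor task ids
        self.tasks = []

    def submit(self, fn, reads=(), writes=()):
        tid = len(self.tasks)
        preds = set()
        for h in reads:                        # read-after-write dependency
            if h in self.last_writer:
                preds.add(self.last_writer[h])
        for h in writes:                       # write-after-read and write-after-write
            preds.update(self.readers.get(h, ()))
            if h in self.last_writer:
                preds.add(self.last_writer[h])
        for h in reads:
            self.readers.setdefault(h, set()).add(tid)
        for h in writes:
            self.last_writer[h] = tid
            self.readers[h] = set()
        self.deps[tid] = preds
        self.tasks.append(fn)
        return tid

    def run(self):
        # execute the DAG wave by wave; tasks within a wave run in parallel
        done, remaining = set(), set(range(len(self.tasks)))
        with ThreadPoolExecutor() as pool:
            while remaining:
                ready = [t for t in remaining if self.deps[t] <= done]
                futures = {t: pool.submit(self.tasks[t]) for t in ready}
                for t, f in futures.items():
                    f.result()
                    done.add(t)
                    remaining.discard(t)
```

Submitting, for instance, a write to tile A followed by two reads of A yields the expected dependency edges without the programmer ever stating them explicitly.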
Polet, Pierre-Etienne. "Portage des chaînes de traitement sonar sur architecture hétérogène : conception et évaluation d'un environnement de programmation basé sur les tâches moldables." Electronic Thesis or Diss., Lyon, École normale supérieure, 2024. http://www.theses.fr/2024ENSL0004.
The increasing computational demands of SONAR processing chains lead to the choice of heterogeneous architectures based on GPGPUs. The complexity of these architectures makes algorithm implementation challenging for developers who are not specialists in both the application domain and parallel programming. This thesis aims to address this issue by leveraging task-based programming concepts. Static code analysis methods allowed us to group function calls on a GPU, limiting certain overheads by increasing task granularity. To extend this approach to multiple GPUs while controlling memory usage, we explored a moldable task model. This model was instantiated as a new OpenMP directive that unifies several older parallelization directives; moldable tasks and their sub-tasks retain the ability to carry dependencies. The design of a prototype runtime support for managing these moldable tasks focused on load balancing on a heterogeneous architecture and on an algorithm to detect dependencies between such tasks. Experiments on a SONAR beamforming algorithm and on the Cholesky factorization highlighted the method's benefits and some weaknesses in the implementation choices.
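The moldable-task idea mentioned above (a task whose degree of parallelism is chosen at scheduling time, after which it is split into that many sub-tasks) can be illustrated with a minimal sketch. The OpenMP directive and runtime described in the thesis are not reproduced here; `run_moldable` is an invented name and this shows only the splitting concept:

```python
# Minimal sketch of a moldable task: the scheduler decides how many workers
# the task gets, and the task is then "molded" into that many sub-tasks.
from concurrent.futures import ThreadPoolExecutor

def run_moldable(data, kernel, workers):
    """Split `data` into `workers` chunks and run `kernel` on each chunk in parallel."""
    n = len(data)
    chunk = (n + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, n, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # flatten the per-chunk results back into a single list
        return [r for part in pool.map(kernel, parts) for r in part]

# Example: an element-wise kernel (stand-in for a beamforming stage),
# molded onto 1 or 4 workers without changing the task's code
square = lambda xs: [x * x for x in xs]
```

The same task can thus run with whatever parallelism the scheduler assigns, which is the property the thesis exploits for load balancing across heterogeneous units.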
Vet, Jean-Yves. "Parallélisme de tâches et localité de données dans un contexte multi-modèle de programmation pour supercalculateurs hiérarchiques et hétérogènes." Paris 6, 2013. http://www.theses.fr/2013PA066483.
This thesis makes several distinct contributions that rely on a dedicated task-based programming model. The novelty of this model resides in a dynamic adjustment of the quantity of embedded operations depending on the targeted processing unit. It is particularly well suited to dynamically balancing workloads between heterogeneous processing units, and it better harnesses those units by strengthening responsiveness in the presence of execution-time fluctuations induced by irregular codes or unpredictable hardware mechanisms. Moreover, the semantics and programming interface of the task-parallel model facilitate automated behaviors such as maintaining data coherency of deported memories, relieving developers of this tedious and error-prone work. We developed H3LMS, an execution platform designed to combine the propositions detailed in the thesis. The platform is integrated into the MPC programming environment in order to cohabit with other programming models and thus better harness clusters. H3LMS improves task scheduling between homogeneous and heterogeneous processing units by reducing the need for distant accesses within a cluster node. The thesis also focuses on the adaptation of legacy codes, originally designed for traditional processors, which may consist of hundreds of thousands of lines of code. The performance of this solution is evaluated on the Linpack benchmark and on a legacy numerical application from the CEA.
Rossignon, Corentin. "Un modèle de programmation à grain fin pour la parallélisation de solveurs linéaires creux." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0094/document.
Solving large sparse linear systems is an essential part of numerical simulations, and these solves can take up to 80% of the total simulation time. An efficient parallelization of sparse linear kernels therefore leads to better performance. In distributed memory, these kernels are often parallelized by changing the numerical scheme. In shared memory, by contrast, a more efficient parallelism can be used; it is then necessary to combine two levels of parallelism, one between the nodes of a cluster and one inside each node. When using iterative methods in shared memory, task-based programming makes it possible to describe the parallelism naturally, with one matrix row per task as the granularity. Unfortunately, this granularity is too fine to obtain good performance. In this thesis, we study the granularity problem of task-based parallelization. We propose to increase the grain size of computational tasks by creating aggregates of tasks, which become tasks themselves; the new, coarser task graph is composed of these aggregates and the dependencies between them. A task scheduler then schedules this new graph to obtain better performance. We use the incomplete LU factorization of a sparse matrix as an example and show the improvements brought by this method. We then focus on NUMA architectures: when running a memory-bandwidth-bound algorithm on such machines, it is worthwhile to reduce NUMA effects, and we show how to take them into account in a task-based runtime in order to improve the performance of a parallel program.
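The aggregation technique this abstract describes (grouping fine-grained tasks into aggregates that become tasks themselves, with dependencies lifted to the aggregate level) can be sketched as follows. The grouping policy here (contiguous blocks of tasks) is an assumption made for the example; the thesis's actual aggregation heuristics are more elaborate:

```python
# Sketch of task aggregation: fine-grained tasks (e.g. one per matrix row)
# are grouped into aggregates of size `grain`; edges internal to a group
# disappear, and the remaining edges form a coarser task graph.
def aggregate_tasks(n_tasks, deps, grain):
    """deps: dict mapping task -> set of predecessor tasks.
    Returns dict mapping aggregate -> set of predecessor aggregates."""
    group_of = {t: t // grain for t in range(n_tasks)}
    n_groups = (n_tasks + grain - 1) // grain
    coarse = {g: set() for g in range(n_groups)}
    for t, preds in deps.items():
        for p in preds:
            if group_of[p] != group_of[t]:     # intra-group edges vanish
                coarse[group_of[t]].add(group_of[p])
    return coarse
```

On a chain of six row-tasks with grain 2, the coarse graph is a chain of three aggregates, which a scheduler can dispatch with far less overhead.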
Liang, Yan. "Mise en œuvre d'un simulateur en OCCAM pour la conception d'architectures parallèles à base d'une structure multiprocesseur hiérarchique." Compiègne, 1989. http://www.theses.fr/1989COMPD176.
Simulation has become an indispensable phase in the design of parallel processing systems, as it avoids the construction of expensive prototypes. In this work, a parallel process-oriented simulator written in the OCCAM language has been developed. Our objective is to design a simulator adapted to a network of transputers for prototyping parallel processing systems by directly connecting the serial transputer channels. As a simulation example, a parallel processor system (coprocessor) based on a hierarchical master-slave structure has been realized at the processor-memory-switch level. The performance analysis is obtained via two queueing models: the former as independent M/M/1 systems and the latter as an M/M/s system. The experimental performance is measured on independent tasks and on sequential tasks, respectively. The comparison of analytical and experimental results shows the advantages and limits of the coprocessor and encourages its implementation.
Tahan, Oussama. "Improving execution reliability of parallel applications on multicore architectures : an OpenMP based approach." Compiègne, 2012. http://www.theses.fr/2012COMP2046.
Today, computers are embedded within systems more and more frequently in order to provide products with better functionality and improved performance. The fast-growing reliance on computing-based systems has forced industry to meet new time-to-market and cost-related challenges. Besides that, system providers face rapidly growing reliability and safety requirements, because both application and processor complexity keep increasing while die sizes keep shrinking, making computers more vulnerable to transient faults. For all these reasons, engineers look for cost-efficient tools and flexible solutions to properly develop these systems. In this PhD thesis, we study a redundancy-based fault tolerance approach using the OpenMP task-centric programming model. In this approach, OpenMP programmers tag reliable tasks within a program so that they are automatically replicated under triple modular redundancy, with a voting mechanism used to detect and recover from erroneous results and behavior. This technique lets developers take advantage of current and future multi- and many-core systems, since the number of available and not fully exploited resources keeps increasing with the growing number of cores. In addition, we implement within the Nanos runtime system a task cancellation approach that frees resources by cancelling erroneous, unnecessary subtrees of tasks; it also allows OpenMP programmers to easily code applications involving task cancellation and speculative execution. Moreover, we study runtime enhancements to improve the execution of OpenMP applications on NUMA architectures.
Boillot, Lionel. "Contributions à la modélisation mathématique et à l'algorithmique parallèle pour l'optimisation d'un propagateur d'ondes élastiques en milieu anisotrope." Thesis, Pau, 2014. http://www.theses.fr/2014PAUU3043/document.
The most common seismic imaging method is RTM (Reverse Time Migration), which depends on wave propagation simulations in the subsurface. We focused on a 3D elastic wave propagator in anisotropic media, more precisely TTI (Tilted Transverse Isotropic). We worked directly in the Total code DIVA (Depth Imaging Velocity Analysis), which is based on a Discontinuous Galerkin discretization and the leap-frog scheme, and is developed for intensive parallel computing, i.e. HPC (High Performance Computing). We targeted two contributions in particular. Although they required very different skills, they share the same goal: reducing the computational cost of the simulation. On one hand, classical boundary conditions such as PML (Perfectly Matched Layers) are unstable in TTI media. We proposed a formulation of a stable ABC (Absorbing Boundary Condition) in anisotropic media; the technique is based on slowness-curve properties, which gives our approach an original character. On the other hand, the initial parallelism, based on domain decomposition and message-passing communications through the MPI library, leads to load imbalance and thus poor parallel efficiency. We fixed this issue by replacing that paradigm with task-based programming through a runtime system. This PhD thesis was done in the framework of the research action DIP (Depth Imaging Partnership) between the Total oil company and Inria.
Oulamara, Ammar. "Flowshops avec détérioration des tâches et groupement des tâches." Université Joseph Fourier (Grenoble), 2001. http://www.theses.fr/2001GRE10074.
Manaa, Adel. "Ordonnancement de tâches multiprocesseur sur deux processeurs dédiés." Troyes, 2009. http://www.theses.fr/2009TROY0016.
In this thesis, we study scheduling problems with multiprocessor tasks, where a task may simultaneously require more than one processor for its execution. We are particularly interested in the case of two dedicated processors, for which we consider three objective functions: minimizing the makespan, minimizing the total (weighted) flowtime and minimizing the total tardiness. For these criteria, we developed lower bounds, heuristics and exact methods. We proposed different lower bounds based on intuitive relaxations or on Lagrangian and surrogate relaxations. Constructive heuristics were developed, and worst-case performance ratios are proved for some of them. We also proposed mathematical models for minimizing the total tardiness and developed several heuristics based on these models, with an experimental study of their performance. We developed exact branch-and-bound methods for which we proved several dominance properties. Finally, we proposed a specific scheme for randomly generating test instances for this type of scheduling problem on dedicated processors, taking into account the different types of tasks and processor loads, on which we tested our methods.
Cavarero, Annie. "Lmad : un logiciel de conception de tâches d'assemblage robotisé." Besançon, 1986. http://www.theses.fr/1986BESA2033.
Tran, Thierry. "Programmation graphique interactive de tâches non répétitives de manipulation au contact." Toulouse 3, 1990. http://www.theses.fr/1990TOU30223.
Cojean, Terry. "Programmation des architectures hétérogènes à l'aide de tâches divisibles ou modulables." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0041/document.
Hybrid computing platforms equipped with accelerators are now commonplace in high performance computing. Due to this evolution, researchers have concentrated their efforts on designing tools to ease the programming of applications able to use all the computing units of such machines. The StarPU runtime system, developed in the STORM team at Inria Bordeaux, was conceived as a target for parallel language compilers and specialized libraries (linear algebra, Fourier transforms, ...). To provide code and performance portability to applications, StarPU efficiently schedules dynamic task graphs on all the heterogeneous computing units of the machine. One of the most difficult aspects when expressing an application as a graph of tasks is choosing the granularity of the tasks, which typically goes hand in hand with the size of the blocks used to partition the problem's data. Small granularities do not allow efficient use of accelerators such as GPUs, which require a small number of tasks with massive inner data parallelism in order to reach peak performance. Conversely, processors typically exhibit optimal performance with a large number of fine-grained tasks. The choice of task granularity not only depends on the type of computing unit the task will execute on; it also influences the quantity of parallelism available in the system: too many small tasks may flood the runtime system with overhead, whereas too few large tasks may create a parallelism deficiency. Currently, most approaches rely on finding a compromise granularity which makes optimal use of neither CPU nor accelerator resources. The objective of this thesis is to solve this granularity problem by aggregating resources, viewing them not as many small resources but as fewer, larger ones collaborating on the execution of the same task.
A theoretical machine and scheduling model allowing this process to be represented has existed for several decades: parallel tasks. The main contributions of this thesis are to make practical use of this model by implementing a parallel task mechanism inside StarPU, and to implement and study parallel task schedulers from the literature. The model is validated by improving the programming and optimizing the execution of numerical applications on modern computing machines.
Poder, Emmanuel. "Programmation par contraintes et ordonnancement de tâches avec consommation variable de ressource." Clermont-Ferrand 2, 2002. http://www.theses.fr/2002CLF21374.
Bouzat, Nicolas. "Algorithmes à grain fin et schémas numériques pour des simulations exascales de plasmas turbulents." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD052/document.
Recent high performance computing architectures come with more and more cores on a greater number of computational nodes. Memory buses and communication networks are facing critical levels of use. Programming parallel codes for these architectures requires putting the emphasis on these matters while writing tailored algorithms. In this thesis, a plasma turbulence simulation code is analyzed and its parallelization is overhauled. The gyroaverage operator benefits from a new algorithm that is better suited to its data distribution and that uses a computation-communication overlapping scheme. These optimizations reduce both execution times and memory footprint. We also study new designs for the code by developing a prototype based on a task programming model and an asynchronous communication scheme. It allows us to reach a better load balance and thus better execution times by minimizing communication overheads. A new reduced mesh is introduced, shrinking the overall mesh size while keeping the same numerical accuracy, at the expense of more complex operators. This prototype also uses a new data distribution and twists the mesh to adapt to the complex geometries of modern tokamak reactors. The performance of the different optimizations is studied and compared with that of the current code, and a scaling study on a large number of cores is given.
Möller, Nathalie. "Adaptation de codes industriels de simulation en Calcul Haute Performance aux architectures modernes de supercalculateurs." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV088.
For many years, the stability of the architecture paradigm has facilitated the performance portability of large HPC codes from one generation of supercomputers to another. The announced breakdown of Moore's Law, which has governed the progress of microprocessor engraving, ends this model and requires new efforts on the software side. Code modernization, based on algorithms well adapted to future systems, is mandatory. This modernization rests on well-known principles such as computation concurrency (degree of parallelism) and data locality. However, implementing these principles in large industrial applications, which often are the result of years of development effort, turns out to be far more difficult than expected. This thesis makes two contributions. On the one hand, we explore a methodology of software modernization based on the concept of proto-applications and compare it with the direct approach, while optimizing two simulation codes developed in a similar context. On the other hand, we focus on identifying the main challenges for architectures, programming models and applications. The two chosen application fields are Computational Fluid Dynamics and Computational Electromagnetics.
Laruelle, Hervé. "Planification temporelle et exécution de tâches en robotique." Toulouse 3, 1994. http://www.theses.fr/1994TOU30048.
Koung, Daravuth. "Cooperative navigation of a fleet of mobile robots." Electronic Thesis or Diss., Ecole centrale de Nantes, 2022. http://www.theses.fr/2022ECDN0044.
Interest in integrating multi-robot systems (MRS) into real-world applications keeps increasing, especially for performing complex tasks. For load-carrying tasks, various load-handling strategies have been proposed, such as pushing-only, caging, and grasping. In this thesis, we aim to use a simple handling strategy: placing the carried object on top of a group of wheeled mobile robots. This requires a rigid formation control. A consensus algorithm is one of the two formation controllers we apply to the system: we adapt a dynamic flocking controller to the single-integrator system, and we propose an obstacle avoidance scheme that prevents the formation from splitting while evading obstacles. The second formation controller is based on hierarchical quadratic programming (HQP). The problem is decomposed into multiple task objectives: formation, navigation, obstacle avoidance, and velocity limits. These tasks are represented by equality and inequality constraints with different levels of priority, which are solved sequentially by the HQP. Lastly, a study of task allocation algorithms (Contract Net Protocol and Tabu Search) is carried out in order to determine an appropriate solution for allocating tasks in an industrial environment.
Mazon, Isabelle. "Raisonnement sur les contraintes géométriques et d'incertitudes pour la planification de tâches de manipulation robotisées." Toulouse 3, 1990. http://www.theses.fr/1990TOU30131.
Gurhem, Jérôme. "Paradigmes de programmation répartie et parallèle utilisant des graphes de tâches pour supercalculateurs post-pétascale." Thesis, Lille, 2021. http://www.theses.fr/2021LILUI005.
Since the middle of the 1990s, message-passing libraries have been the most widely used technology for implementing parallel and distributed applications. However, they may not be an efficient enough solution on exascale machines, since scalability issues will appear as computing resources increase. Task-based programming models can be used, for example, to avoid collective communications across all resources, such as reductions, broadcasts or gathers, by transforming them into multiple operations on tasks. These operations can then be placed by the scheduler so that data and computations are laid out in a way that optimizes and reduces data communications. The main objective of this thesis is to study what task-based programming must be for scientific applications, and to propose a specification of such distributed and parallel programming, by experimenting with several simplified representations of scientific applications important to TOTAL, as well as classical dense and sparse linear methods. In this dissertation, several programming languages and paradigms are studied. Dense linear methods for solving linear systems, sequences of sparse matrix-vector products, and the Kirchhoff seismic pre-stack depth migration are studied and implemented as task-based applications, and a taxonomy based on several of these languages and paradigms is proposed. Software was developed using these programming models for each simplified application. As a result of this research, a methodology for parallel task programming is proposed that optimizes data movement, both in general and for the targeted scientific applications in particular.
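One concrete instance of the transformation this abstract mentions (replacing a collective reduction by multiple operations on tasks) is a binary reduction tree, where each pairwise combination is an independent task that a scheduler could place freely. Below is a minimal, runtime-agnostic sketch of that idea; `tree_reduce` is an invented name for illustration:

```python
# A collective reduction expressed as levels of independent pairwise tasks:
# each level combines adjacent pairs, so a scheduler is free to place each
# pairwise "task" near its operands instead of issuing one big collective.
def tree_reduce(values, op):
    level = list(values)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(op(level[i], level[i + 1]))   # one independent task
        if len(level) % 2:                           # odd element carried over
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

For an associative operator this produces the same result as a flat reduction, in O(log n) dependent levels instead of one synchronizing collective.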
Grabsia, Nassima. "Contribution à l'étude et à la spécification d'un système de programmation des tâches orientée métier." Aix-Marseille 3, 2000. http://www.theses.fr/2000AIX30014.
Lucquiaud, Vincent. "Sémantique et outil pour la modélisation des tâches utilisateur : n-mda." Poitiers, 2005. http://www.theses.fr/2005POIT2325.
The development process of an interactive application follows a cycle made of several stages: analysis, design and evaluation, each provided with a suitable specification technique. In the analysis stage, user needs are specified through several possible user task models; such models are the topic of this research. None of these models is based on a formalized semantics, so they can neither claim to augment their generative capacity nor participate efficiently in user-centred design. A careful examination of existing task models led us to suggest a generic task model structure. The new model is built from Diane+, GTA, CTT and MAD* so as to remove ambiguities. In this structure, called "Projet MDA", diverse specialized modules can be added on demand to a generic kernel (N-MDA) concerned with user activity description. The expressive power of our kernel is based on our classification of previous models, their tools and their use. Among the modules examined, one is related to user objects; others use languages that allow specifying task pre-conditions, iterations and post-conditions over such objects. In order to fill the lack of semantics in previous models, kernel and module data were described in the EXPRESS language, which allows checking that the model's data structure is unambiguous. The behaviour of the model, which is particularly important for scenario composition, is built over several parameters: the scheduling operator ("opérateur d'ordonnancement"), whether the task is mandatory or not, interrupts (with resume or skip), task pre-conditions, iterations, post-conditions, user choices and events. Beyond the theoretical models, an implementation has been developed (K-MADe) that contains the kernel (N-MDA) and the previously described modules (Projet MDA in EXPRESS). This implementation can check user data inputs and simulate user activity.
Thanks to the use of EXPRESS, the N-MDA project can become part of generic tools implementing specification techniques that cover a wider part of the lifecycle. This tool architecture allows user activity to be described so that the data and their semantics can be reused by other tools that complete the user-centred development cycle. While one limitation of this work is the lack of application feedback, its development is a mandatory step toward additional extensive studies of task model use, in particular concerning their generative capacity within interactive systems development.
Sahbani, Anis. "Planification des tâches de manipulation en robotique par des approches probabilistes." Toulouse 3, 2003. http://www.theses.fr/2003TOU30008.
Thomas, Marie-Claude. "Apport du génie logiciel et de la conception par objets à la programmation des tâches robotisées." Nice, 1989. http://www.theses.fr/1989NICE4342.
Piccin, Olivier. "Spécification et résolution de tâches de manipulation complexes. Application à la téléprogrammation de robots distants." PhD thesis, Université Paul Sabatier - Toulouse III, 1995. http://tel.archives-ouvertes.fr/tel-00144080.
Granado, Ernesto. "Commande prédictive à base de programmation semi définie." PhD thesis, INSA de Toulouse, 2004. http://tel.archives-ouvertes.fr/tel-00009241.
Granado, Migliore Ernesto. "Commande prédictive à base de programmation semi définie." Toulouse, INSA, 2004. http://www.theses.fr/2004ISAT0006.
This work develops approaches for the synthesis of partial-information robust output feedback controllers for discrete-time systems. In the framework of predictive control, the synthesis follows the minimization, at each sampling time, of an upper bound on a quadratic cost associated with an infinite time horizon. The optimization problem, which takes state and control constraints into account, is described as a semidefinite programming problem including linear matrix inequalities. Two general approaches are investigated: the first is based on the invariant ellipsoid concept with dynamic output control; the second makes use of an extended formulation where the initial output feedback control is translated in terms of extended state feedback control.
Loi, Michel. "Outils pour la construction de graphes de tâches acycliques à gros grain." Lyon 1, 1996. http://www.theses.fr/1996LYO10260.
Braschi, Bertrand. "Principes de base des algorithmes d'ordonnancement de liste et affectation de priorités aux tâches." Grenoble INPG, 1990. http://www.theses.fr/1990INPG0122.
Payan, Eric. "Etude d'une architecture cellulaire programmable : définition fonctionnelle et méthodologie de programmation." PhD thesis, Grenoble INPG, 1991. http://tel.archives-ouvertes.fr/tel-00339754.
Prost, Jean-Philippe. "Modélisation de la gradience syntaxique par analyse relâchée à base de contraintes." Aix-Marseille 1, 2008. http://www.theses.fr/2008AIX11087.
Garcia, Pinto Vinicius. "Stratégies d'analyse de performance pour les applications basées sur tâches sur plates-formes hybrides." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM058/document.
Programming paradigms in High-Performance Computing have been shifting toward task-based models that are capable of adapting readily to heterogeneous and scalable supercomputers. The performance of task-based applications heavily depends on the runtime scheduling heuristics and on its ability to exploit computing and communication resources. Unfortunately, traditional performance analysis strategies are unfit to fully understand task-based runtime systems and applications: they expect a regular behavior with communication and computation phases, while task-based applications demonstrate no clear phases. Moreover, the finer granularity of task-based applications typically induces a stochastic behavior that leads to irregular structures that are difficult to analyze. In this thesis, we propose performance analysis strategies that exploit the combination of application structure, scheduler, and hardware information. We show how our strategies can help to understand performance issues of task-based applications running on hybrid platforms. Our performance analysis strategies are built on top of modern data analysis tools, enabling the creation of custom visualization panels that allow understanding and pinpointing performance problems incurred by bad scheduling decisions and incorrect runtime system and platform configuration. By combining simulation and debugging, we are also able to build a visual representation of the internal state and the estimations computed by the scheduler when scheduling a new task. We validate our proposal by analyzing traces from a Cholesky decomposition implemented with the StarPU task-based runtime system and running on hybrid (CPU/GPU) platforms. Our case studies show how to enhance the task partitioning among the multi-(GPU, core) to get closer to theoretical lower bounds, how to improve MPI pipelining in multi-(node, core, GPU) to reduce the slow start on distributed nodes, and how to upgrade the runtime system to increase MPI bandwidth.
By employing simulation and debugging strategies, we also provide a workflow to investigate, in depth, assumptions concerning the scheduler decisions. This allows us to suggest changes to improve the runtime system's scheduling and prefetch mechanisms.
Garneau, Tony. "Langage de programmation pour les simulations géoréférencées à base d'agents." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/27803/27803.pdf.
In the last decade, technologies based on software agents have been used in many domains such as video games, movies containing animated characters, virtual reality, visual interface development where "wizards" are supplied, and educational Web applications using virtual characters, to name just a few. In many of these domains, agent-based simulations require the integration of geographic data, which adds a spatial dimension and allows the simulation of many complex phenomena such as those involved in urban dynamics. This has spawned a new research field: Multi-Agent Geo-Simulation (MAGS for short). Some of the frameworks developed for MAGS use many different techniques to specify and implement agent-based simulations. However, the agent behaviors that can be specified are usually very limited and are insufficient for the development of geo-referenced simulations of social phenomena. In this type of simulation, agents must act autonomously and have the ability to perceive the environment in which they evolve, and then take decisions based on these perceptions. To benefit from such characteristics, we consider that these agents must minimally have a perception mechanism that is autonomous and unique to each agent, and must be proactive and behave autonomously in relation to their virtual environment. The specification of this type of agent is a difficult task and, to the best of our knowledge, none of the existing development environments offers a language able to fulfill it. In the context of the PLAMAGS (Programming LAnguage for Multi-Agent Geo-Simulations) project, we developed a new agent-oriented programming language, an applied design methodology and an integrated development environment that allow a quick and simple design and execution cycle for agent-based geo-referenced simulations.
The main contributions of this work are as follows:
- A full-fledged descriptive programming language, procedural and object-oriented, usable at every stage of the development cycle and dedicated to MAGS. This language eliminates the transition from the theoretical model to the programming language, and thus avoids all the difficulties inherent in such a transposition.
- An applied development methodology in which the modeling, design, implementation, execution, and validation steps are merged and integrated throughout the development cycle.
- A behavioral model that is expressive (agent-wise), intuitive, modular, extensible, and flexible, allowing sequential and iterative development using abstractions based on decomposition (sub-behaviors).
- A spatialized interaction model that is clearly defined and directly integrated into the primitives of the programming language.
Monteil, Thierry. "Étude de nouvelles approches pour les communications, l'observation et le placement de tâches dans l'environnement de programmation parallèle LANDA." Toulouse, INPT, 1996. http://www.theses.fr/1996INPT097H.
Duchemin, Gilles. "Commande et programmation d'un robot d'assistance au geste médical pour des tâches de suivi au contact de tissus mous." Montpellier 2, 2002. http://www.theses.fr/2002MON20120.
Garlet, Milani Luís Felipe. "Autotuning assisté par apprentissage automatique de tâches OpenMP." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM022.
Modern computer architectures are highly complex, requiring great programming effort to obtain all the performance the hardware is capable of delivering. Indeed, while developers know potential optimizations, the only feasible way to tell which of them is fastest on a given platform is to test them. Furthermore, the many differences between two computer platforms, in the number of cores, cache sizes, interconnect, processor and memory frequencies, and so on, make it very challenging to have the same code perform well on several systems. To extract the most performance, it is often necessary to fine-tune the code for each system. Consequently, developers adopt autotuning to achieve some degree of performance portability: the potential optimizations are specified once and, after testing each possibility on a platform, a high-performance version of the code is obtained for that particular platform. However, this technique requires tuning each application for each platform it targets. This is not only time-consuming, but the autotuning run and the real execution of the application also differ: differences in the data may trigger different behavior, and the threads may interact differently during autotuning and during the actual execution. This can lead to suboptimal decisions if the autotuner chooses a version that is optimal for the training but not for the real execution of the application. We propose using autotuning to select versions of the code relevant for a range of platforms; during the execution of the application, the runtime system identifies the best version to use with one of three policies we propose: Mean, Upper Confidence Bound (UCB), and Gradient Bandit. This decreases the training effort and enables the same set of versions to be used on different platforms without sacrificing performance. We conclude that the proposed policies can identify the version to use without incurring substantial performance losses.
Furthermore, when the user does not know the application well enough to optimally configure the explore-then-commit policy used by other runtime systems, the more adaptable UCB policy can be used in its place.
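The UCB-style selection described in this abstract can be sketched as follows. This is a minimal illustration of the general bandit technique, not StarPU or OpenMP runtime code; the version names and their timings are hypothetical.

```python
import math
import random

def ucb_select(stats, t, c=2.0):
    """Pick the code version with the highest UCB score.

    stats maps version name -> (count, mean_runtime).
    Lower runtime is better, so the mean is negated as a reward.
    """
    best, best_score = None, float("-inf")
    for version, (count, mean_time) in stats.items():
        if count == 0:
            return version  # try each version at least once
        score = -mean_time + c * math.sqrt(math.log(t) / count)
        if score > best_score:
            best, best_score = version, score
    return best

def update(stats, version, runtime):
    """Incrementally update the running mean runtime of a version."""
    count, mean = stats[version]
    count += 1
    stats[version] = (count, mean + (runtime - mean) / count)

# Hypothetical versions of the same task with different true runtimes
true_cost = {"tiled": 1.0, "vectorized": 0.8, "baseline": 1.5}
stats = {v: (0, 0.0) for v in true_cost}
random.seed(0)
for t in range(1, 201):  # 200 task executions
    v = ucb_select(stats, t)
    update(stats, v, true_cost[v] + random.gauss(0, 0.05))

best = min(stats, key=lambda v: stats[v][1])  # lowest observed mean runtime
```

After a short exploration phase, the exploration bonus shrinks for frequently chosen versions, so the policy concentrates its executions on the genuinely fastest one while still occasionally re-checking the others.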
Simonin, Gilles. "Impact de la contrainte d'incompatibilité sur la complexité et l'approximation des problèmes d'ordonnancement en présence de tâches-couplées." Montpellier 2, 2009. http://www.theses.fr/2009MON20198.
Recent years have seen the advent of submarine machines such as the autonomous torpedo. This torpedo must execute two kinds of tasks: acquisition tasks and processing tasks. Acquisition tasks correspond to coupled tasks, while processing tasks correspond to classical tasks under preemption constraints. The torpedo possesses different sensors to perform the acquisitions; some sensors cannot be used at the same time because of interference. We therefore introduce a compatibility graph to represent the acquisition tasks that can be executed simultaneously. The torpedo has a single processor on which the different tasks are executed. In the first part, we introduce the model of the problem and define the different tasks and their constraints. We discuss the impact of the compatibility constraint on the general problem, and conclude the part with related work on general scheduling theory, on coupled tasks, and on cover problems in graph theory. In the second part, we classify the problems according to the parameters of the coupled tasks. We give several complexity proofs for specific problems that lie at the boundary between polynomiality and NP-completeness. For each problem studied, we propose either an optimal polynomial-time algorithm or an approximation algorithm with a performance guarantee. For some problems, we study the complexity in detail according to specific parameters or different topologies of the compatibility graph. Throughout this part, we aim to show the impact of introducing the compatibility constraint on the scheduling problem with coupled tasks.
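The coupled-task model used here can be made concrete with a small sketch. A coupled task is a triple (a, L, b): a first subtask of length a, a fixed idle gap of length L, then a second subtask of length b; two compatible tasks can share the processor by packing one into the other's gap. The check below covers only the simplest packing case (one task fitting entirely inside the other's gap) and assumes the two tasks are compatible; it is an illustration, not the thesis' algorithms.

```python
def can_nest(t1, t2):
    """Coupled task = (a, L, b): subtask a, fixed idle gap L, subtask b.

    Return True if t2 fits entirely inside t1's idle gap, i.e. the
    processor can run t2 start-to-finish while t1 is idle.
    """
    a1, gap1, b1 = t1
    a2, gap2, b2 = t2
    return a2 + gap2 + b2 <= gap1

# A long-gap acquisition can host a short one inside its gap...
fits = can_nest((2, 10, 2), (3, 2, 3))      # 3 + 2 + 3 = 8 <= 10
# ...but not when the guest task is longer than the gap.
too_big = can_nest((2, 4, 2), (3, 2, 3))    # 8 > 4
```

With an incompatibility constraint, such a nesting is only allowed when the two tasks are adjacent in the compatibility graph, which is exactly what makes the general scheduling problem hard.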
Abbas, Issam. "Optimisation d'un langage fonctionnel de requêtes pour une base de données orienté-objet." Aix-Marseille 1, 1999. http://www.theses.fr/1999AIX11003.
Lefebvre, Jean-Marc. "Contribution à la spécification et à l'implantation de tâches robotiques complexes." Phd thesis, Grenoble INPG, 1989. http://tel.archives-ouvertes.fr/tel-00335739.
Bouvry, Pascal. "Placement de tâches sur ordinateurs parallèles à mémoire distribuée." Grenoble INPG, 1994. http://tel.archives-ouvertes.fr/tel-00005081.
The growing need for computing performance implies more complex computer architectures, and the lack of good programming environments for these machines must be addressed. The goal is to find a compromise between portability and performance. The subject of this thesis is the static allocation of task graphs onto distributed-memory parallel computers. This work is part of the INRIA-IMAG APACHE project and of the European SEPP-COPERNICUS project (Software Engineering for Parallel Processing). The undirected task graph is the chosen programming model. A survey of existing solutions to the scheduling and mapping problems is given, and the possibility of using directed task graphs after a clustering phase is underlined. An original solution is designed and implemented within a working programming environment. Three kinds of mapping algorithms are used: greedy, iterative, and exact. Most of the development effort went into tabu search and simulated annealing. These algorithms improve various objective functions, from the simplest and most portable to the most complex and architecture-dependent. The weights of the task graphs can be tuned using a post-mortem analysis of traces, and tracing tools are used to validate the cost function and the mapping algorithms. A benchmark protocol is defined and used. The tests are run on the Meganode (a 128-transputer machine) using VCR from the University of Southampton as a router, with synthetic task graphs generated with ANDES from the ALPES project (developed by the performance evaluation team of LGI-IMAG) and the Dominant Sequence Clustering of PYRROS (developed by Tao Yang and Apostolos Gerasoulis).
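The simulated-annealing mapping the abstract mentions can be sketched as follows. This is a generic illustration under an assumed objective (maximum processor load plus cut communication volume), not the thesis' actual cost functions; the task graph, weights, and parameters are hypothetical.

```python
import math
import random

def mapping_cost(mapping, comp, comm, n_procs):
    """Load imbalance proxy (max load) plus communication cut by the mapping."""
    loads = [0.0] * n_procs
    for task, proc in enumerate(mapping):
        loads[proc] += comp[task]
    cut = sum(w for (a, b), w in comm.items() if mapping[a] != mapping[b])
    return max(loads) + cut

def anneal(comp, comm, n_procs, temp=10.0, cooling=0.995, steps=5000, seed=1):
    """Simulated annealing over task-to-processor mappings."""
    rng = random.Random(seed)
    mapping = [rng.randrange(n_procs) for _ in comp]
    cost = mapping_cost(mapping, comp, comm, n_procs)
    for _ in range(steps):
        task = rng.randrange(len(comp))
        old = mapping[task]
        mapping[task] = rng.randrange(n_procs)  # random single-task move
        new_cost = mapping_cost(mapping, comp, comm, n_procs)
        if new_cost > cost and rng.random() >= math.exp((cost - new_cost) / temp):
            mapping[task] = old          # reject the worsening move
        else:
            cost = new_cost              # accept (always, if improving)
        temp *= cooling
    return mapping, cost

# Hypothetical undirected graph: two tightly coupled triangles of unit tasks,
# joined by one light edge; the natural mapping puts one triangle per processor.
comp = [1.0] * 6
comm = {(0, 1): 5, (1, 2): 5, (0, 2): 5,
        (3, 4): 5, (4, 5): 5, (3, 5): 5, (2, 3): 1}
mapping, cost = anneal(comp, comm, n_procs=2)
```

The acceptance rule is the classic Metropolis criterion: worsening moves are accepted with probability exp(-Δ/temp), so the search can escape local optima early and behaves like hill-climbing once the temperature has decayed.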
Hannachi, Marwa. "Placement des tâches matérielles de tailles variables sur des architectures reconfigurables dynamiquement et partiellement." Electronic Thesis or Diss., Université de Lorraine, 2017. http://www.theses.fr/2017LORR0297.
Adaptive systems based on Field-Programmable Gate Array (FPGA) architectures can benefit greatly from the high degree of flexibility offered by dynamic partial reconfiguration (DPR). Thanks to DPR, the hardware tasks composing an adaptive system can be allocated and relocated on demand, or in response to a dynamically changing environment. Existing design flows and commercial tools have evolved to meet the requirements of reconfigurable architectures, but they remain limited in functionality: they do not allow efficient placement and relocation of variable-sized hardware tasks. The main objective of this thesis is to propose a new methodology and new approaches that help designers make an adaptive, reconfigurable system operational, valid, optimized, and adapted to dynamic changes in the environment. The first contribution addresses the relocation of variable-sized hardware tasks. A design methodology is proposed to tackle a major problem of relocation mechanisms: storing a single configuration bitstream, which reduces memory requirements and increases the reusability of the generated hardware modules. A reconfigurable-region partitioning technique is applied in the proposed relocation methodology to increase the efficiency of hardware resource use in the case of variable-sized reconfigurable tasks. The methodology also takes into account communication between the different reconfigurable regions and the static region. To validate the design method, several case studies are implemented; this validation shows an efficient use of hardware resources and a significant reduction in reconfiguration time. The second part of the thesis presents and details mathematical formulations that automate the floorplanning of reconfigurable regions in FPGAs. The algorithms presented in this thesis are based on mixed-integer linear programming (MILP).
These algorithms automatically define the location, size, and shape of each dynamically reconfigurable region. This research mainly aims to satisfy the placement constraints of the reconfigurable zones and those related to relocation. In addition, we optimize the hardware resources of the FPGA while taking variable-sized tasks into account. Finally, an evaluation of the proposed approach is presented.
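A floorplanning MILP of the kind described typically enforces pairwise non-overlap of regions with big-M constraints. The sketch below is a generic textbook formulation, not the thesis' exact model: each region $i$ has a lower-left corner $(x_i, y_i)$ and dimensions $(w_i, h_i)$ on a $W \times H$ device, and two binaries $p_{ij}, q_{ij}$ select which of the four relative placements is active for each pair $(i, j)$.

```latex
\begin{aligned}
x_i + w_i &\le x_j + W\,(p_{ij} + q_{ij})       && \text{$i$ left of $j$}\\
x_j + w_j &\le x_i + W\,(1 + p_{ij} - q_{ij})   && \text{$j$ left of $i$}\\
y_i + h_i &\le y_j + H\,(1 - p_{ij} + q_{ij})   && \text{$i$ below $j$}\\
y_j + h_j &\le y_i + H\,(2 - p_{ij} - q_{ij})   && \text{$j$ below $i$}\\
0 \le x_i &\le W - w_i, \quad 0 \le y_i \le H - h_i, \quad p_{ij}, q_{ij} \in \{0, 1\}
\end{aligned}
```

For each assignment of $(p_{ij}, q_{ij})$ exactly one big-M term vanishes, so exactly one separation constraint binds while the other three are relaxed; resource-type constraints (CLB, BRAM, DSP columns) and relocation-compatibility constraints would be added on top of this core.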
Pessan, Cedric. "Optimisation de changements de séries par ordonnancement des tâches de réglage." Thesis, Tours, 2008. http://www.theses.fr/2008TOUR4026/document.
The work presented in this thesis proposes new methods for setup optimization in production lines in order to improve production flexibility. The problem is modeled as an unrelated parallel machines problem: the tasks are the setup tasks of each machine of the production line and the resources are the operators. We take into consideration the structure of the production line, which may contain multiple machines on some stages, as well as the skills of the operators; the skill model has been validated using a simulation approach. We use a branch-and-bound algorithm to solve the special case of a serial production line, and hill-climbing and genetic-algorithm metaheuristics for the general case. In both cases, we propose bounds that are used to evaluate the performance of the different methods. For the serial special case, we also propose a hybrid algorithm in which a genetic algorithm and a branch-and-bound collaborate.
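The "unrelated parallel machines" model means each setup task can take a different duration depending on which operator performs it (their skills differ). A minimal greedy baseline for this model is sketched below; it is an illustration of the problem structure, not one of the thesis' branch-and-bound or genetic algorithms, and the task durations are hypothetical.

```python
def greedy_unrelated(times):
    """Greedy list scheduling on unrelated parallel machines.

    times[j][i] is the duration of setup task j when done by operator i.
    Each task is assigned, in order, to the operator that finishes it earliest.
    Returns the (task, operator) assignments and the resulting makespan.
    """
    n_ops = len(times[0])
    finish = [0.0] * n_ops          # current completion time per operator
    schedule = []
    for j, row in enumerate(times):
        i = min(range(n_ops), key=lambda i: finish[i] + row[i])
        finish[i] += row[i]
        schedule.append((j, i))
    return schedule, max(finish)

# Hypothetical instance: 4 setup tasks, 2 operators with different skills
# (operator 1 is much faster on tasks 2 and 3).
times = [[3, 5],
         [2, 2],
         [4, 1],
         [6, 3]]
schedule, makespan = greedy_unrelated(times)  # makespan: 6
```

Such a greedy assignment gives a quick upper bound on the makespan; the thesis' exact and metaheuristic methods would then improve on it and compare against lower bounds.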
Sassine, Renée. "Système fondé sur la connaissance pour la modélisation du niveau objectif dans la programmation de tâches de préhension de robots." Aix-Marseille 3, 1995. http://www.theses.fr/1995AIX30002.
Gamboa, Dos Santos Carlos. "Apport de l'approche gestion de réseaux pour le placement de tâches dans le modèle de programmation par échange de messages." Nancy 1, 1998. http://www.theses.fr/1998NAN10244.
Taghbalout, Meryem. "Programmation et commande d’un robot à deux bras pour la manipulation et l’assemblage de pièces flexibles par des tâches collaboratives." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0165.
The main objective of this thesis is to develop a control law for a two-arm robot in order to carry out manipulation tasks while ensuring the safety of both the human and the robot. To this end, we first present an overview of the studies conducted in this context, then proceed to the kinematic and dynamic modeling of the robot; the dynamic model was computed with the SYMORO+ software. After a detailed presentation of the method for identifying the parameters of manipulator robots, we apply it concretely to our own robot. This yields a parameter vector that guarantees a positive-definite inertia matrix for any joint configuration of the robot, as well as good torque reconstruction for both constant and variable joint speeds. Thanks to this dynamic identification, we were able to build an accurate simulator of the robot in Matlab/Simulink. The external forces on the robot are estimated from the identified dynamic model, and an experimental validation is described. Applying control laws to an industrial robot required external communication with the robot; the adopted approach and its realization on the robot are presented, which allowed us to program all our methods in Python and deploy them on the robot. After validating all the preceding steps, we moved on to implementing the control on the robot. The main task we adopted is cable stripping, for which we chose a hybrid force/position control in which the external forces are regulated to guarantee that the cables being stripped are not damaged. The robot used for all the experiments is the IRB 14000 YuMi robot from ABB. All the experiments performed in this project are described and presented.
This thesis work was carried out within the framework of the Robotix Academy project, financed by the European Regional Development Fund through INTERREG V-A Grande Région.
Morat, Philippe. "Une étude sur la base de la programmation algorithmique : notation et environnement de travail." S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00308504.
Mahout, Vincent. "Récurrences et fonctionnements complexes en robotique dans le contexte de tâches répétitives. Application à une structure SCARA." Toulouse, INSA, 1994. http://www.theses.fr/1994ISAT0037.
Ploquin, Catherine. "LAB langage d'analyse associé à une base de données." Bordeaux 1, 1985. http://www.theses.fr/1985BOR10534.
Hammoudi, Slimane. "Hyper-agenda : un système d'aide à la réalisation de tâches : Conceptualisation, spécification et représentation." Montpellier 2, 1993. http://www.theses.fr/1993MON20128.