Academic literature on the topic 'Calcul Haut Débit'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Calcul Haut Débit.'
Journal articles on the topic "Calcul Haut Débit"
Belleville, Arnaud, Federico Garavaglia, Damien Sevrez, Véronique Mary, Aloïs Tilloy, Didier Scopel, and Hélène Combes. "Réanalyse des chroniques patrimoniales de débit. Évaluation de l'impact et valorisation." La Houille Blanche, no. 5-6 (October 2018): 29–35. http://dx.doi.org/10.1051/lhb/2018048.
Vasselon, V., F. Rimet, I. Domaizon, O. Monnier, Y. Reyjol, and A. Bouchez. "Évaluer la pollution des milieux aquatiques avec l’ADN des diatomées : où en sommes-nous ?" Techniques Sciences Méthodes, no. 5 (May 2019): 53–70. http://dx.doi.org/10.1051/tsm/201905053.
Barget, Eric, and Jean-Jacques Gouguet. "De l’importance des dépenses des spectateurs étrangers dans l’impact touristique des grands événements sportifs." Téoros 30, no. 2 (September 7, 2012): 105–19. http://dx.doi.org/10.7202/1012247ar.
LE ROY, P., H. CHAPUIS, and D. GUÉMENÉ. "Sélection génomique : quelles perspectives pour les filières avicoles ?" INRAE Productions Animales 27, no. 5 (December 12, 2014): 331–36. http://dx.doi.org/10.20870/productions-animales.2014.27.5.3080.
Poznanski, Thaddée. "Loi modifiant la loi des accidents du travail." Commentaires 22, no. 4 (April 12, 2005): 558–65. http://dx.doi.org/10.7202/027838ar.
Full textDissertations / Theses on the topic "Calcul Haut Débit"
Doan, Trung-Tung. "Epidémiologie moléculaire et métagénomique à haut débit sur la grille." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00778073.
Hernane, Soumeya-Leila. "Modèles et algorithmes de partage de données cohérents pour le calcul parallèle distribué à haut débit." Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0042/document.
Full textData Handover is a library of functions adapted to large-scale distributed systems. It provides routines that allow acquiring resources in reading or writing in the ways that are coherent and transparent for users. We modelled the life cycle of Dho by a finite state automaton and through experiments; we have found that our approach produced an overlap between the calculation of the application and the control of the data. These experiments were conducted both in simulated mode and in real environment (Grid'5000). We exploited the GRAS library of the SimGrid toolkit. Several clients try to access the resource concurrently according the client-server paradigm. By the theory of queues, the stability of the model was demonstrated in a centralized environment. We improved, the distributed algorithm for mutual exclusion (of Naimi and Trehel), by introducing following features: (1) Allowing the mobility of processes (ADEMLE), (2) introducing shared locks (AEMLEP) and finally (3) merging both properties cited above into an algorithm summarising (ADEMLEP). We proved the properties, safety and liveliness, theoretically for all extended algorithms. The proposed peer-to-peer system combines our extended algorithms and original Data Handover model. Lock and resource managers operate and interact each other in an architecture based on three levels. Following the experimental study of the underlying system on Grid'5000, and the results obtained, we have proved the performance and stability of the model Dho over a multitude of parameters
Hernane, Soumeya. "Modèles et algorithmes de partage de données cohérents pour le calcul parallèle et distribué à haut débit." Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00919272.
Heidsieck, Gaetan. "Gestion distribuée de workflows scientifiques pour le phénotypage des plantes à haut débit." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS066.
Full textIn many scientific domains, such as bio-science, complex numerical experiments typically require many processing or analysis steps over huge datasets. They can be represented as scientific workflows. These workflows ease the modeling, management, and execution of computational activities linked by data dependencies. As the size of the data processed and the complexity of the computation keep increasing, these workflows become data-intensive. In order to execute such workflows within a reasonable timeframe, they need to be deployed in a high-performance distributed computing environment, such as the cloud.Plant phenotyping aims at capturing plant characteristics, such as morphological, topological, phenological features. High-throughput phenotyping (HTP) platforms have emerged to speed up the phenotyping data acquisition in controlled conditions (e.g. greenhouse) or in the field. Such platforms generate terabytes of data used in plant breeding and plant biology to test novel mechanisms. These datasets are stored in different geodistributed sites (data centers). Scientists can use a Scientific Workflow Management System (SWMS) to manage the workflow execution over a multisite cloud.In bio-science, it is common for workflow users to reuse other workflows or data generated by other users. Reusing and re-purposing workflows allow the user to develop new analyses faster. Furthermore, a user may need to execute a workflow many times with different sets of parameters and input data to analyze the impact of some experimental step, represented as a workflow fragment, i.e., a subset of the workflow activities and dependencies. In both cases, some fragments of the workflow may be executed many times, which can be highly resource-consuming and unnecessary long. Workflow re-execution can be avoided by storing the intermediate results of these workflow fragments and reusing them in later executions.In this thesis, we propose an adaptive caching solution for efficient execution of data-intensive workflows in monosite and multisite clouds. By adapting to the variations in tasks’ execution times, our solution can maximize the reuse of intermediate data produced by workflows from multiple users. Our solution is based on a new SWMS architecture that automatically manages the storage and reuse of intermediate data. Cache management is involved during two main steps: workflows preprocessing, to remove all fragments of the workflow that do not need to be executed; and cache provisioning, to decide at runtime which intermediate data should be cached. We propose an adaptive cache provisioning algorithm that deals with the variations in task execution times and the size of data. We evaluated our solution by implementing it in OpenAlea and performing extensive experiments on real data with a complex data-intensive application in plant phenotyping.Our main contributions are i) a SWMS architecture to handle caching and cache-aware scheduling algorithms when executing workflows in both monosite and multisite clouds, ii) a cost model that includes both financial and time costs for both the workflow execution, and the cache management, iii) two cache-aware scheduling algorithms one adapted for monosite and one for multisite cloud, and iv) and an experimental validation on a data-intensive plant phenotyping application
Nguyen, Ly Thien Truong. "Mise en oeuvre matérielle de décodeurs LDPC haut débit, en exploitant la robustesse du décodage par passage de messages aux imprécisions de calcul." Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0904/document.
Full textThe increasing demand of massive data rates in wireless communication systems will require significantly higher processing speed of the baseband signal, as compared to conventional solutions. This is especially challenging for Forward Error Correction (FEC) mechanisms, since FEC decoding is one of the most computationally intensive baseband processing tasks, consuming a large amount of hardware resources and energy. The conventional approach to increase throughput is to use massively parallel architectures. In this context, Low-Density Parity-Check (LDPC) codes are recognized as the foremost solution, due to the intrinsic capacity of their decoders to accommodate various degrees of parallelism. They have found extensive applications in modern communication systems, due to their excellent decoding performance, high throughput capabilities, and power efficiency, and have been adopted in several recent communication standards.This thesis focuses on cost-effective, high-throughput hardware implementations of LDPC decoders, through exploiting the robustness of message-passing decoding algorithms to computing inaccuracies. It aims at providing new approaches to cost/throughput optimizations, through the use of imprecise computing and storage mechanisms, without jeopardizing the error correction performance of the LDPC code. To do so, imprecise processing within the iterative message-passing decoder is considered in conjunction with the quantization process that provides the finite-precision information to the decoder. Thus, we first investigate a low complexity code and decoder aware quantizer, which is shown to closely approach the performance of the quantizer with decision levels optimized through exhaustive search, and then propose several imprecise designs of Min-Sum (MS)-based decoders. Proposed imprecise designs are aimed at reducing the size of the memory and interconnect blocks, which are known to dominate the overall area/delay performance of the hardware design. Several approaches are proposed, which allow storing the exchanged messages using a lower precision than that used by the processing units, thus facilitating significant reductions of the memory and interconnect blocks, with even better or only slight degradation of the error correction performance.We propose two new decoding algorithms and hardware implementations, obtained by introducing two levels of impreciseness in the Offset MS (OMS) decoding: the Partially OMS (POMS), which performs only partially the offset correction, and the Imprecise Partially OMS (I-POMS), which introduces a further level of impreciseness in the check-node processing unit. FPGA implementation results show that they can achieve significant throughput increase with respect to the OMS, while providing very close decoding performance, despite the impreciseness introduced in the processing units.We further introduce a new approach for hardware efficient LDPC decoder design, referred to as Non-Surjective Finite-Alphabet Iterative Decoders (FAIDs). NS-FAIDs are optimized by Density Evolution for regular and irregular LDPC codes. Optimization results reveal different possible trade-offs between decoding performance and hardware implementation efficiency. To validate the promises of optimized NS-FAIDs in terms of hardware implementation benefits, we propose three high-throughput hardware architectures, integrating NS-FAIDs decoding kernels. 
Implementation results on both FPGA and ASIC technology show that NS-FAIDs allow significant improvements in terms of both throughput and hardware resource consumption, as compared to the Min-Sum decoder, with even better or only slightly degraded decoding performance.
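For readers unfamiliar with Offset Min-Sum decoding, the check-node update that the POMS/I-POMS designs approximate can be sketched as follows; this is a plain-Python reference computation of the standard OMS rule, not the hardware architecture proposed in the thesis.

```python
def oms_check_node_update(msgs, offset=0.5):
    """Offset Min-Sum check-node update.
    msgs: incoming variable-to-check messages (LLRs) for one check node.
    Returns the outgoing check-to-variable messages."""
    signs = [1.0 if m >= 0 else -1.0 for m in msgs]
    mags = [abs(m) for m in msgs]
    prod_sign = 1.0
    for s in signs:
        prod_sign *= s

    out = []
    for i in range(len(msgs)):
        min_other = min(mags[:i] + mags[i + 1:])            # smallest magnitude among the other inputs
        sign_other = prod_sign * signs[i]                   # product of the other inputs' signs
        out.append(sign_other * max(min_other - offset, 0.0))  # offset correction, clipped at zero
    return out

print(oms_check_node_update([2.3, -1.1, 0.8, 1.7]))  # [-0.3, 0.3, -0.6, -0.3]
```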
Boyer, Alexandre. "Contributions to Computing needs in High Energy Physics Offline Activities : Towards an efficient exploitation of heterogeneous, distributed and shared Computing Resources." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2022. http://www.theses.fr/2022UCFAC108.
Pushing the boundaries of science and providing more advanced services to individuals and communities continuously demand more sophisticated software, specialized hardware, and ever more computing power and storage. At the beginning of the 2020s, we are entering a heterogeneous and distributed computing era where resources will be limited and constrained. Grid communities need to adapt their approach: (i) applications need to support various architectures; (ii) workload management systems have to manage various computing paradigms and guarantee a proper execution of the applications, regardless of the constraints of the underlying systems. This thesis focuses on the latter point through the case of the LHCb experiment.

The LHCb collaboration currently relies on an infrastructure involving 170 computing centers across the world, the Worldwide LHC Computing Grid, to process a growing amount of Monte Carlo simulations reproducing the experimental conditions of the experiment. Despite its huge size, it will be unable to handle the simulations coming from the next LHC runs within a reasonable time. In the meantime, national science programs are consolidating computing resources and encourage the use of supercomputers, which provide a tremendous amount of computing power but pose higher integration challenges.

In this thesis, we propose different approaches to supply distributed and shared computing resources with LHCb tasks. We developed methods to increase the number and duration of computing resource allocations, which improved LHCb job throughput on a grid infrastructure (+40.86%). We also designed a series of software solutions to address the highly constrained environments found on supercomputers, such as the lack of external connectivity and software dependencies. We applied these concepts to leverage computing power from four partitions of supercomputers ranked in the Top500.
Ponsard, Raphael. "Traitement en temps réel, haut débit et faible latence, d'images par coprocesseurs GPU & FPGA utilisant les techniques d'accès direct à la mémoire distante." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT071.
The constant evolution of X-ray photon sources, combined with the increasing performance of high-end X-ray detectors, allows cutting-edge experiments that produce very high-throughput data streams and generate large volumes of data that are challenging to manage and store.

In this context, it becomes fundamental to optimize processing architectures that allow real-time image processing, such as raw-data pre-treatment, data reduction, data compression, and fast feedback. These data management challenges have still not been addressed in a fully satisfactory way as of today, and in any case not in a generic manner.

This thesis is part of the ESRF RASHPA project, which aims at developing an RDMA-based Acquisition System for High Performance Applications. One of the main characteristics of this framework is direct data placement, straight from the detector head (data producer) to the processing computing infrastructure (data receiver), at the highest acceptable throughput, using Remote Direct Memory Access (RDMA) and zero-copy techniques with minimal Central Processing Unit (CPU) intervention.

The work carried out in this thesis is a contribution to the RASHPA framework, enabling data transfer directly to the internal memory of accelerator boards. A low-latency synchronisation mechanism between the RDMA network interface cards (RNIC) and the processing unit is proposed to trigger data processing while keeping pace with the detector. Thus, a comprehensive solution addressing the online data analysis challenges is proposed, on standard computers as well as on massively parallel coprocessors. The scalability and versatility of the proposed approach are exemplified by detector emulators leveraging RoCEv2 (RDMA over Converged Ethernet) or PCI-Express links, and by RASHPA Processing Units (RPUs) such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Real-time data processing on FPGA, seldom adopted in X-ray science, is evaluated and the benefits of high-level synthesis are exhibited. The framework is supplemented with an allocator of large contiguous memory chunks in main memory and an address translation system for accelerators, both geared towards DMA transfer.

The assessment of the proposed pipeline was performed with online data analysis as found in serial diffraction experiments. This includes raw-data pre-treatment as foreseen with adaptive-gain detectors, image rejection based on Bragg peak counting, and data compression to a sparse-matrix format.
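The low-latency synchronisation idea (the processing unit keeps pace with the detector by polling a write counter updated alongside the incoming frames, rather than waiting on per-frame interrupts) can be sketched as follows. This is a single-process Python illustration with illustrative names and sizes, not the RASHPA/RDMA implementation.

```python
import threading
import numpy as np

N_SLOTS, FRAME_SIZE, N_FRAMES = 8, 1024, 64
ring = np.zeros((N_SLOTS, FRAME_SIZE), dtype=np.uint16)
written = 0      # frames deposited so far: the "doorbell" the consumer polls
processed = 0

def detector_emulator():
    """Stands in for the RDMA writes coming from the detector head."""
    global written
    for i in range(N_FRAMES):
        while written - processed >= N_SLOTS:     # back-pressure: never overwrite unprocessed slots
            pass
        ring[written % N_SLOTS, :] = i            # deposit the frame in the next slot
        written += 1                              # bump the doorbell last

def processing_unit():
    """Busy-polls the doorbell and consumes frames as soon as they land."""
    global processed
    while processed < N_FRAMES:
        if processed < written:
            frame = ring[processed % N_SLOTS]
            _ = int(frame.sum())                  # stand-in for the real processing kernel
            processed += 1

t_det = threading.Thread(target=detector_emulator)
t_cpu = threading.Thread(target=processing_unit)
t_cpu.start(); t_det.start()
t_det.join(); t_cpu.join()
print("processed", processed, "frames")
```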
Ben Nsira, Nadia. "Algorithme de recherche incrémentale d'un motif dans un ensemble de séquences d'ADN issues de séquençages à haut débit." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR143/document.
In this thesis, we are interested in the problem of on-line pattern matching in highly similar sequences, such as those produced by Next Generation Sequencing (NGS) technologies. These sequences differ from one another only by a very small amount, so there is a strong need for efficient algorithms that perform fast pattern matching in such specific sets of sequences. We develop new algorithms to address this problem. The thesis is organized in five parts. In the first part, we present a state of the art of the most popular pattern matching algorithms and the related index structures. In the three following parts, we develop three algorithms directly dedicated to the on-line search for patterns in a set of highly similar sequences. Finally, in the fifth part, we conduct an experimental study of these algorithms, which shows that they are efficient in practice in terms of computation time.
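To give an idea of why high similarity helps, here is a minimal Python sketch (not one of the thesis's algorithms): each similar sequence is stored as the reference plus a small set of substitutions, the pattern is located once in the reference, and only the windows overlapping a substitution are re-checked per sequence. All names are illustrative.

```python
def find_all(text, pattern):
    """All starting positions of pattern in text (naive enumeration)."""
    out, pos = [], text.find(pattern)
    while pos != -1:
        out.append(pos)
        pos = text.find(pattern, pos + 1)
    return out

def search_similar(reference, sequences_as_variants, pattern):
    """sequences_as_variants: one dict {position: substituted base} per similar sequence.
    Returns, for each sequence, the sorted occurrence positions of the pattern."""
    m = len(pattern)
    ref_hits = set(find_all(reference, pattern))      # shared work, done only once
    results = []
    for variants in sequences_as_variants:
        hits = set(ref_hits)
        # only windows that overlap a substitution can differ from the reference
        starts = set()
        for p in variants:
            starts.update(range(max(0, p - m + 1), min(p, len(reference) - m) + 1))
        for s in starts:
            window = list(reference[s:s + m])
            for p, base in variants.items():
                if s <= p < s + m:
                    window[p - s] = base
            if "".join(window) == pattern:
                hits.add(s)
            else:
                hits.discard(s)
        results.append(sorted(hits))
    return results

ref = "ACGTACGTAC"
print(search_similar(ref, [{}, {4: "C"}], "ACGT"))    # [[0, 4], [0]]
```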
Didelot, Sylvain. "Improving memory consumption and performance scalability of HPC applications with multi-threaded network communications." Thesis, Versailles-St Quentin en Yvelines, 2014. http://www.theses.fr/2014VERS0029/document.
A recent trend in high-performance computing shows a rising number of cores per compute node, while the total amount of memory per compute node remains constant. To scale parallel applications on such large machines, one of the major challenges is to keep memory consumption low. This thesis develops a multi-threaded communication layer over InfiniBand which provides both good communication performance and low memory consumption. We target scientific applications parallelized using the MPI standard, either in pure MPI mode or combined with a shared-memory programming model. Starting from the observation that network endpoints and communication buffers are critical for the scalability of MPI runtimes, the first contribution proposes three approaches to control their usage. We introduce a scalable and fully-connected virtual topology for connection-oriented high-speed networks. In the context of multirail configurations, we then detail a runtime technique which reduces the number of network connections. We finally present a protocol for dynamically resizing network buffers over RDMA. The second contribution proposes a runtime optimization to exploit the overlap potential of MPI communications, showing a 2x improvement factor on communications. The third contribution evaluates the performance of several MPI runtimes running a seismic modeling application in a hybrid context. On large compute nodes of up to 128 cores, the introduction of OpenMP into the MPI application saves up to 17% of memory. Moreover, we show a performance improvement with our multi-threaded communication layer, where the OpenMP threads concurrently participate in the MPI communications.
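The communication/computation overlap targeted by the second contribution can be illustrated with a short mpi4py sketch (assuming mpi4py and NumPy are available; the file name and buffer sizes are illustrative, and whether the transfer actually progresses in the background depends on the MPI runtime, which is precisely the issue the thesis's multi-threaded layer addresses).

```python
# Run with, e.g.: mpirun -n 2 python overlap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank                        # assumes exactly two ranks

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty_like(send_buf)

# Post non-blocking communications, then compute while the transfer (ideally) progresses.
requests = [comm.Isend(send_buf, dest=peer),
            comm.Irecv(recv_buf, source=peer)]
local = np.sin(send_buf).sum()         # computation overlapped with the communication
MPI.Request.Waitall(requests)

print(f"rank {rank}: local result {local:.3f}, first received value {recv_buf[0]}")
```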
Carpen-Amarie, Alexandra. "Utilisation de BlobSeer pour le stockage de données dans les Clouds: auto-adaptation, intégration, évaluation." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00696012.