Theses on the topic "Traitement des données en temps réel"
Consult the top 50 theses for your research on the topic "Traitement des données en temps réel".
Al Chami, Zahi. "Estimation de la qualité des données multimedia en temps réel". Thesis, Pau, 2021. http://www.theses.fr/2021PAUU3066.
Over the past decade, data providers have been generating and streaming a large amount of data, including images, videos, audio, etc. In this thesis, we focus on processing images, since they are the most commonly shared media between users on the global inter-network. In particular, treating images containing faces has received great attention due to its numerous applications, such as entertainment and social media apps. However, several challenges can arise during the processing and transmission phase: first, the enormous number of images shared and produced at a rapid pace requires a significant amount of time to be processed and delivered; second, images are subject to a wide range of distortions during processing, transmission, or a combination of many factors that can damage the images' content. Two main contributions are developed. First, we introduce a Full-Reference Image Quality Assessment Framework in Real-Time, capable of: 1) preserving the images' content by ensuring that some useful visual information can still be extracted from the output, and 2) providing a way to process the images in real time in order to cope with the huge amount of images received at a rapid pace. The framework described here is limited to processing those images that have access to their reference version (a.k.a. Full-Reference). Second, we present a No-Reference Image Quality Assessment Framework in Real-Time. It has the following abilities: a) assessing the distorted image without having its distortion-free version, b) preserving the most useful visual information in the images before publishing, and c) processing the images in real time, even though No-Reference image quality assessment models are considered very complex. Our framework offers several advantages over the existing approaches, in particular: i. it locates the distortion in an image in order to directly assess the distorted parts instead of processing the whole image, ii. it has an acceptable trade-off between quality prediction accuracy and execution latency, and iii. it can be used in several applications, especially those that work in real time. The architecture of each framework is presented in the chapters, detailing the modules and components of the framework. Then, a number of simulations are made to show the effectiveness of our approaches in solving our challenges relative to the existing approaches.
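As a purely illustrative sketch of what "full-reference quality assessment under a real-time budget" can mean in practice (the PSNR metric, the function names and the 50 ms deadline below are our own inventions for the example, not the framework described in this abstract):

```python
import time
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray) -> float:
    """Peak signal-to-noise ratio between a reference image and a distorted copy."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)

def assess_stream(pairs, deadline_s=0.05):
    """Score (reference, distorted) pairs, flagging any pair whose scoring
    exceeded the per-image latency budget -- the 'real-time' constraint."""
    results = []
    for ref, dist in pairs:
        start = time.perf_counter()
        score = psnr(ref, dist)
        elapsed = time.perf_counter() - start
        results.append({"psnr_db": round(score, 2), "on_time": elapsed <= deadline_s})
    return results

# Example: a clean image against a noisy copy of itself.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255).astype(np.uint8)
print(assess_stream([(ref, noisy)]))
```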
Puthon, Anne-Sophie. "Détermination de la vitesse limite par fusion de données vision et cartographiques temps-réel embarquées". PhD thesis, Ecole Nationale Supérieure des Mines de Paris, 2013. http://pastel.archives-ouvertes.fr/pastel-00957392.
Hautot, Félix. "Cartographie topographique et radiologique 3D en temps réel : acquisition, traitement, fusion des données et gestion des incertitudes". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS141.
In the field of nuclear-related activities such as maintenance, decontamination and dismantling, status reports of potentially contaminated or activated elements are required beforehand. For economic reasons, this status report must be performed quickly, so the operation is carried out by an operator whose exposure time must be reduced as much as possible. In indoor environments, such investigations can be hard to carry out due to out-of-date plans or maps, loss of GPS signal, or the need to pre-position underlying or pre-calibrated systems. Indeed, the premises status report is obtained by coupling nuclear measurements and topographical mapping. In this kind of situation it is necessary to have a portable instrument that delivers an exhaustive radiological and topographical mapping, in order to provide decision support on the best intervention scenario to set up as fast as possible. Furthermore, so as to reduce the operator's exposure time, such a method must be usable in real time. It makes it possible to carry out complex interventions with the best radiological previsions, optimizing the operator's exposure time and waste management. To this end, Areva STMI developed an autonomous positioning and motion estimation system for nuclear measurement probes based on visual SLAM (Simultaneous Localization And Mapping); these developments led to a patent application. This thesis consisted in pursuing this work: decomposing the underlying systems, continuing the data-fusion developments, proposing optimisations, and setting the basis of a real-time analysis of the associated uncertainties. SLAM based on visual odometry can be performed with an RGB-D sensor (Microsoft Kinect®-like sensors). The acquisition process delivers a 3D map containing the radiological sensors' poses (positions and orientations in 3D) and measurements (dose rate and CZT gamma spectrometry) without any external signal or device. Moreover, radioactive source localization algorithms based on geostatistics and back-projection of measurements can also be performed in near real time. It is then possible to evaluate the position of radioactive sources in the scene and to compute fast radiological mappings of the premises shortly after acquisition. The last part of this work consisted in developing an original method for real-time evaluation of the accuracy of the processing chain and its results. The evaluation of uncertainties and their propagation along the acquisition and processing chain in real time provides feedback on the methods employed for investigation or intervention processes, and makes it possible to evaluate the reliability of the acquired data. Finally, a set of benchmarks was performed in order to estimate the quality of the results by comparing them to reference methods.
Marsit, Nadhem. "Traitement des requêtes dépendant de la localisation avec des contraintes de temps réel". Toulouse 3, 2007. http://thesesups.ups-tlse.fr/106/.
In recent years, mobile units have undergone rapid development. One of the direct consequences in the database field is the appearance of new types of queries, such as Location Dependent Queries (LDQ) (e.g., an ambulance driver asking for the closest hospital). These queries raise problems which have been considered by several research efforts. Despite the intensive work in this field, the different types of queries studied so far do not meet all the needs of location-based applications. In fact, these works do not take into account the real-time aspect required by certain location-based applications. These new requirements generate new types of queries, such as mobile queries with real-time constraints. Taking into account mobility and real-time constraints is an important problem to deal with. Hence, our main objective is to propose a solution for considering real-time constraints in location-dependent query processing. First, we propose a language for expressing different types of queries. Then, we design a software architecture for processing location-dependent queries with real-time constraints. The modules of this architecture are designed to be implemented on top of existing DBMSs (e.g., Oracle). We propose methods to take into account the location of the mobile client and his displacement after sending the query. We also propose methods to maximise the percentage of queries respecting their deadlines. Finally, we validate our proposal by implementing the proposed methods and evaluating their performance.
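A minimal sketch of a location-dependent query evaluated against a deadline, in the spirit of the abstract's ambulance example (the toy coordinates, function names and latency simulation are invented; a real LDQ processor sits on top of a DBMS, not a Python dict):

```python
import math
import time

hospitals = {"H1": (48.85, 2.35), "H2": (48.90, 2.30), "H3": (48.80, 2.40)}  # toy data

def euclid(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def closest_hospital(position, deadline_s, simulated_latency_s=0.0):
    """Answer a location-dependent query only if the answer can be delivered
    before its deadline; otherwise report a deadline miss."""
    start = time.monotonic()
    time.sleep(simulated_latency_s)  # stands in for network / processing delay
    answer = min(hospitals, key=lambda h: euclid(position, hospitals[h]))
    if time.monotonic() - start > deadline_s:
        return None  # deadline missed: a real system might degrade or abort instead
    return answer

print(closest_hospital((48.86, 2.33), deadline_s=0.5))  # -> 'H1'
```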
Martin, Patrick. "Conception et réalisation d'un système multiprocesseur, cadencé par les données, pour le traitement d'images linéaires en temps réel". Rouen, 1991. http://www.theses.fr/1991ROUE5046.
Gagnon-Turcotte, Gabriel. "Algorithmes et processeurs temps réel de traitement de signaux neuronaux pour une plateforme optogénétique sans fil". Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/26339.
Texto completoL’acquisition des signaux électriques provenant des neurones du cerveau permet aux neurobiologistes de mieux comprendre son fonctionnement. Dans le cadre de ce travail, de nouveaux algorithmes de compression des signaux neuronaux sont conçus et présentés. Ces nouveaux algorithmes sont incorporés dans trois nouveaux dispositifs optogénétiques sans fil miniature capable de stimuler optiquement l’activité neuronale et de transmettre sans fil les biopotentiels captés par plusieurs microélectrodes. Deux de ces systèmes sont capables de compresser les signaux provenant de deux microélectrodes ainsi que de stimuler optiquement via deux diodes électroluminescentes (DEL) haute puissance. Afin de réduire la bande passante des transmetteurs sans fil utilisés, ces deux systèmes sont dotés d’un nouvel algorithme de détection des potentiels d’actions qui génère de meilleurs taux de détection que les algorithmes existants, tout en nécessitant moins de ressources matérielles et de temps de processeur. Un troisième dispositif incorporant les algorithmes de détection et de compression fût conçu. Ce dispositif est le seul système optogénétique sans fil comportant 32 canaux de stimulation optique et 32 canaux d’enregistrement électrophysiologiques en parallèle. Il utilise une nouvelle technique de compression par ondelettes permettant d’augmenter significativement le nombre de canaux sous observation sans augmenter la consommation de l’émetteurrécepteur. Cette nouvelle méthode de compression se distingue des méthodes existantes en atteignant de meilleurs taux de compression tout en permettant de reconstruire les signaux compressés avec une meilleure qualité. Au moment de la rédaction de ce mémoire, il s’agit des premiers dispositifs optogénétiques sans fil à offrir simultanément de la stimulation optique multicanal, de l’enregistrement électrophysiologique multicanal ainsi que de la détection/compression in situ des potentiels d’actions. Grâce à leur design novateur et aux innovations apportées par les nouveaux algorithmes de traitement des signaux, les systèmes conçus sont plus légers et plus compacts que les systèmes précédents, rendant ces dispositifs indispensables afin de mener des expériences sur le cerveau de petits animaux libres de leurs mouvements. Les trois systèmes ont été validés avec grand succès par des expériences in vivo sur des souris transgéniques au Centre de Recherche de l’Institut Universitaire en Santé Mentale de Québec (CRIUSMQ).
The acquisition of electrical signals from the brain's neurons allows neuroscientists to better understand its functioning. In this work, new neural signal compression algorithms are designed and presented. These new algorithms are incorporated into three new miniature wireless optogenetic devices. These devices are capable of optically stimulating neural activity and of wirelessly transmitting the biopotentials captured by several microelectrodes. Two of these systems are able to compress the signals from two microelectrodes and to stimulate optically via two high-power LEDs. Both systems feature a new spike detection algorithm to reduce the bandwidth used by the wireless transceiver. This new spike detection algorithm differs from existing algorithms by achieving a better detection rate while using fewer hardware resources and less processing time. A third device incorporating the detection and compression algorithms was designed. This device is the only wireless optogenetic system including 32 optical stimulation channels and 32 electrophysiological recording channels in parallel. This new system has the ability to compress the neural signals using a new wavelet compression technique that significantly increases the number of channels under observation without increasing the consumption of the wireless transceiver. In particular, this new compression technique differs from existing wavelet-based compression methods by achieving a better compression ratio while allowing the compressed signals to be reconstructed with better quality. At the time of writing this thesis, these are the first three devices that offer simultaneous multichannel optical stimulation, multichannel electrophysiological recording and on-the-fly spike detection. The resulting systems are more compact and lightweight than previous systems, making these devices essential for conducting long-term experiments on the brains of small freely moving animals. The three systems were validated in in vivo experiments using transgenic mice at the Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec (CRIUSMQ).
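The thesis's own detector is not reproduced here, but a classical baseline such detectors are compared against is amplitude thresholding with a spike-robust noise estimate (sigma = median(|x|)/0.6745, a standard choice in spike sorting); the parameter values below are illustrative only:

```python
import numpy as np

def detect_spikes(signal: np.ndarray, k: float = 4.0, refractory: int = 30):
    """Threshold-crossing detector. The noise level is estimated with the
    median of absolute values, which is barely affected by the spikes
    themselves, unlike a plain standard deviation."""
    sigma = np.median(np.abs(signal)) / 0.6745
    threshold = k * sigma
    spikes, last = [], -refractory
    for i, v in enumerate(signal):
        if abs(v) > threshold and i - last >= refractory:
            spikes.append(i)  # refractory window avoids double-counting one spike
            last = i
    return spikes

# Toy trace: Gaussian noise with three injected spikes.
rng = np.random.default_rng(1)
trace = rng.normal(0, 1.0, 3000)
for pos in (500, 1500, 2500):
    trace[pos] += 12.0
print(detect_spikes(trace))  # typically [500, 1500, 2500]
```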
Cotton, Julien. "Analyse et traitement de données sismiques 4D en continu et en temps réel pour la surveillance du sous-sol". Electronic Thesis or Diss., Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEM023.
3D seismic reflection is widely used in the oil industry. This standard subsoil auscultation method provides information on geological structures and can be used to build reservoir models. However, the properties derived from 3D (and 2D) seismic data are only static: 3D does not allow evaluating changes over calendar time. The addition of a temporal dimension to 3D data is obtained by repeating the measurements at several dates separated by several months or even several years. Thus, 4D (time-lapse) seismic makes it possible to measure and analyze changes in the subsoil over the long term. Since the 90s, this method has been used worldwide at sea and on land. To carry out much more frequent (daily), even continuous (a few hours) monitoring of the subsoil, CGG developed, in collaboration with Gaz de France (now ENGIE) and Institut Français du Pétrole (now IFPEN), a solution based on buried sources and receivers: SeisMovie. SeisMovie was originally designed to monitor and map the gas front in real time during geological storage operations. It is also used to observe the steam injection required for heavy-oil production. In this thesis, we bring contributions to three challenges arising in the processing of seismic data from this system. The first concerns the attenuation of near-surface variations caused by "ghost" waves that interfere with primary waves. The second concerns the quantification of subsurface changes in terms of propagation velocity variation and acoustic impedance. The third concerns real time: the data processing must be at least as fast as the acquisition cycle (a few hours). In fact, the analysis of the data must enable reservoir engineers to make quick decisions (stopping the injection, decreasing the production). In a more general context, there are conceptual similarities between 3D and 4D. In 4D, the repeated acquisitions are compared with each other (or with a reference). In 3D, during acquisition, field geophysicists compare unitary shot points with each other to assess the quality of the data for decision-making (reshooting, skipping or continuing). Therefore, some 4D real-time tools developed during this thesis can be applied. A new approach called TeraMig for automated quality control in the field will also be presented.
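One elementary 4D building block touched on here, measuring a propagation time shift between a baseline trace and a repeated (monitor) trace, can be sketched with a cross-correlation peak search; the synthetic traces below are invented for the example and stand in for real seismic data:

```python
import numpy as np

def time_shift(base: np.ndarray, monitor: np.ndarray) -> int:
    """Estimate the sample lag of the monitor trace relative to the baseline
    by locating the peak of their cross-correlation."""
    corr = np.correlate(monitor - monitor.mean(), base - base.mean(), mode="full")
    return int(np.argmax(corr)) - (len(base) - 1)

# Synthetic wavelet recorded twice, the second time delayed by 7 samples,
# as a velocity slowdown between two acquisitions would produce.
t = np.arange(512)
base = np.exp(-((t - 200) ** 2) / 50.0)
monitor = np.exp(-((t - 207) ** 2) / 50.0)
print(time_shift(base, monitor))  # -> 7
```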
Sasportas, Raphaël. "Etude d'architectures dédiées aux applications temps réel d'analyse d'images par morphologie mathématique". Paris, ENMP, 2002. http://www.theses.fr/2002ENMP1082.
Picard, Quentin. "Proposition de mécanismes d'optimisation des données pour la perception temps-réel dans un système embarqué hétérogène". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG039.
The development of autonomous systems has created an increasing need for environment perception in embedded systems. Autonomous cars, drones and mixed-reality devices have a limited form factor and a restricted power-consumption budget for real-time performance. For instance, those use cases have budgets in the range of 300W-10W, 15W-10W and 10W-10mW respectively. This thesis focuses on autonomous and mobile systems with a budget of 10mW to 15W, using image sensors and an inertial measurement unit (IMU). Simultaneous Localization And Mapping (SLAM) provides accurate and robust perception of the environment in real time, without prior knowledge, for autonomous and mobile systems. The thesis aims at real-time execution of the whole SLAM system, composed of advanced perception functions from localization to 3D reconstruction, with restricted hardware resources. In this context, two main questions are raised to answer the challenges of the literature. How can the resource requirements of advanced perception functions be reduced? What is the right SLAM pipeline partitioning for a heterogeneous system that integrates several computing units, from the chip embedded in the imager, to near-sensor processing (FPGA), to the embedded platform (ARM, embedded GPU)? The first issue addressed in the thesis is the need to reduce the hardware resources used by the SLAM pipeline, from the sensor output to the 3D reconstruction. In this regard, the work described in the manuscript provides two main contributions. The first concerns processing in the embedded chip, with an impact on image characteristics through dynamic-range reduction. The second has an impact on the management of the image flow injected into the SLAM pipeline, through near-sensor processing. The first contribution aims at reducing the memory footprint of SLAM algorithms by evaluating the impact of pixel dynamic reduction on the accuracy and robustness of real-time localization and 3D reconstruction. The experiments show that the input data can be reduced by up to 75%, corresponding to 2 bits per pixel, while maintaining accuracy similar to the 8-bits-per-pixel baseline. Those results were obtained by evaluating the accuracy and robustness of four SLAM algorithms on two databases. The second contribution aims at reducing the amount of data injected into SLAM with a decimation strategy that controls the input frame rate, called adaptive filtering. Data are initially injected at a constant rate (20 frames per second). This implies consumption of energy, memory and bandwidth, and increases the computational complexity. Can this amount of data be reduced? In SLAM, the accuracy and the number of operations depend on the movement of the system. Using the linear and angular accelerations from the IMU, data are injected based on the movement of the system. These key images are injected following the adaptive filtering approach (AF). Although the results depend on the difficulty of the chosen database, the experiments show that AF allows up to 80% of the images to be decimated while maintaining low localization and reconstruction errors, similar to the baseline. This study shows that in the embedded context, peak memory consumption is reduced by up to 92%.
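Both contributions lend themselves to compact sketches: requantizing pixels from 8 bits down to 2 bits, and decimating frames based on accumulated IMU motion. The threshold value and function names below are invented for illustration; the thesis's actual adaptive-filtering criterion is certainly more elaborate:

```python
import numpy as np

def reduce_dynamic(image: np.ndarray, bits: int) -> np.ndarray:
    """Requantize an 8-bit image to `bits` bits per pixel (2 bits -> 4 grey levels)."""
    levels = 2 ** bits
    return (image.astype(np.uint16) * levels // 256).astype(np.uint8)

def adaptive_filter(frames, imu_norms, threshold=0.3):
    """Keep a frame only once enough IMU motion has accumulated since the last
    kept frame; a static camera then injects almost no images into SLAM."""
    kept, accumulated = [], 0.0
    for frame, motion in zip(frames, imu_norms):
        accumulated += motion
        if accumulated >= threshold:
            kept.append(frame)
            accumulated = 0.0
    return kept

rng = np.random.default_rng(2)
frames = [rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(10)]
imu = [0.05, 0.0, 0.4, 0.1, 0.0, 0.35, 0.0, 0.0, 0.5, 0.02]  # motion magnitudes
print(len(adaptive_filter(frames, imu)), "of", len(frames), "frames kept")
print(reduce_dynamic(frames[0], bits=2))
```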
Ouerhani, Yousri. "Contribution à la définition, à l'optimisation et à l'implantation d'IP de traitement du signal et des données en temps réel sur des cibles programmables". PhD thesis, Université de Bretagne occidentale - Brest, 2012. http://tel.archives-ouvertes.fr/tel-00840866.
Salmon, Loïc. "Une approche holistique combinant flux temps-réel et données archivées pour la gestion et le traitement d'objets mobiles : application au trafic maritime". Thesis, Brest, 2019. http://www.theses.fr/2019BRES0006/document.
Over the past few years, the rapid proliferation of sensors and devices recording positioning information has regularly produced very large volumes of heterogeneous data. This leads to many research challenges, as the storage, distribution, management, processing and analysis of the large mobility data generated still need to be solved. Current work related to the manipulation of mobility data has been directed towards either mining archived historical data or continuous processing of incoming data streams. The aim of this research is to design a holistic system whose objective is to provide combined processing of real-time data streams and archived position data. The proposed solution is real-time oriented, with historical data and the information extracted from them used to enhance the quality of the answers to queries. An event paradigm is discussed to facilitate the hybrid approach and to identify typical moving-object behaviours. Finally, a query concerning the signal coverage of moving objects is studied and applied to maritime data, showing the relevance of a hybrid approach to moving-object data processing.
Park, Young-Hwan. "Etude en vue de la réalisation d'un noyau temps réel multiprocesseur et l'environnement de développement intégré". Compiègne, 1997. http://www.theses.fr/1997COMP0992.
The purpose of this thesis is to realize a real-time multiprocessor kernel, called MLINDA, built on the LINDA concept and the GIDOR concept. LINDA is a parallel programming model. It consists of the tuple space, tuples and four simple tuple-space operations: out(), eval(), in() and read(). Through the LINDA concept, we can develop a parallel program without having to consider data addresses or process synchronisation. GIDOR (Gestion Intégrée D'Objets intelligents Répartis: Integrated Management of Distributed Intelligent Objects) is a new concept in the domain of embedded real-time parallel processing. This concept comprises the OR (Objet intelligent Réparti: Distributed Intelligent Object) at the hardware level and the Noyau d'OR (OR Kernel) at the software level. An OR is a chip of nearly the same size as conventional processors, but it contains a processing unit, a local memory and several communication ports. If there is not enough memory for a large real-time application, several ORs are used; as a consequence, the GIDOR concept encompasses real-time parallel processing. For the management of inter-process communications (IPC), real-time processes and system resources, a small real-time multiprocessor kernel, the Noyau d'OR, resides in each OR. We have realized a Noyau d'OR, called MLINDA, on the ADSP2106x from Analog Devices; the ADSP2106x is a kind of OR. We have also implemented a real-time parallel program development environment comprising a real-time parallel programming tool, a deadlock detection tool and a performance evaluation tool. We have evaluated the stability and efficiency of MLINDA with two mobile robot vision systems: a real-time movement detection and tracking system and a real-time image fusion system.
Maguer, Alain. "Transmission d'images sous-marines par ultrasons : étude théorique et implantation temps réel de la portée codage-décodage de la chaîne de transmission". Lyon, INSA, 1985. http://www.theses.fr/1985ISAL0047.
Goavec-Merou, Gwenhael. "Générateur de coprocesseur pour le traitement de données en flux (vidéo ou similaire) sur FPGA". Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2056/document.
Using Field Programmable Gate Arrays (FPGAs) is one of the very few solutions for real-time processing of data flows of several hundred Msamples/second. However, using such components is technically challenging: beyond the need to become familiar with a new kind of dedicated description language and new ways of describing algorithms, understanding the hardware behaviour is mandatory for implementing efficient processing solutions. In order to circumvent these difficulties, past research has focused on providing solutions which, starting from a description of an algorithm in a high-abstraction-level language, generate a description appropriate for FPGA configuration. Our contribution, following the strategy of block assembly based on the skeleton method, aims at providing a software environment called CoGen for assembling various implementations of readily available and validated processing blocks. The resulting processing chain is optimized by taking into account the FPGA hardware characteristics and the input and output bandwidths of each block, in order to provide the solution best fitting the requirements and constraints. Each processing block implementation is either generated automatically or manually, but must comply with some constraints in order to be usable by our tool. In addition, each block developer must provide a standardized description of the block, including required resources and data-processing bandwidth limitations. CoGen then provides the less experienced user with the means to assemble these blocks, ensuring synchronism and consistency of the data flow as well as the ability to synthesize the processing chain within the available hardware resources. This working method has been applied to video data flow processing (thresholding, contour detection and tuning-fork eigenmode analysis) and to radiofrequency data flows (wireless interrogation of sensors through a RADAR system, software processing of a frequency-modulated stream, software-defined radio).
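A toy version of the consistency check such an assembly tool must perform, verifying that each block in a candidate chain can sustain the rate produced upstream, could look as follows (the block names, rates and API are invented for the sketch, not CoGen's):

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    max_input_rate: float      # Msamples/s the block can absorb
    output_rate_factor: float  # output rate = input rate * factor (0.25 = decimation by 4)

def validate_chain(blocks, source_rate):
    """Walk a candidate processing chain, checking every block against the
    upstream data rate and propagating the rate it produces downstream."""
    rate = source_rate
    for b in blocks:
        if rate > b.max_input_rate:
            raise ValueError(f"{b.name} saturates: {rate} > {b.max_input_rate} Msamples/s")
        rate = rate * b.output_rate_factor
    return rate

chain = [Block("threshold", 200.0, 1.0),
         Block("decimate_by_4", 150.0, 0.25),
         Block("contour", 60.0, 1.0)]
print("final output rate:", validate_chain(chain, source_rate=120.0), "Msamples/s")
```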
Ahmed, Sameer. "Application d'un langage de programmation de type flot de données à la synthèse haut-niveau de système de vision en temps-réel sur matériel reconfigurable". PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00844399.
Choquel, Jean-Bernard. "Contribution à l'étude de la méthodologie de conception d'architectures parallèles par flot de données pour le traitement temps réel de la représentation d'objets en trois dimensions". Lille 1, 1994. http://www.theses.fr/1994LIL10034.
Texto completoToss, Julio. "Algorithmes et structures de données parallèles pour applications interactives". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM056/document.
The quest for performance has been a constant through the history of computing systems. It has been more than a decade now since the sequential processing model showed its first signs of exhaustion in sustaining performance improvements. Walls to sequential computation pushed a paradigm shift and established parallel processing as the standard in modern computing systems. With the widespread adoption of parallel computers, many algorithms and applications have been ported to fit these new architectures. However, in unconventional applications with interactivity and real-time requirements, achieving efficient parallelizations is still a major challenge. The real-time performance requirement shows up, for instance, in user-interactive simulations where the system must be able to react to the user's input within a computation time-step of the simulation loop. The same kind of constraint appears in streaming-data monitoring applications, for instance when an external source of data, such as traffic sensors or social media posts, provides a continuous flow of information to be consumed by an on-line analysis system: the consumer system has to keep a controlled memory budget and deliver fast, processed information about the stream. Common optimizations relying on pre-computed models or a static index of the data are not possible in these highly dynamic scenarios. The dynamic nature of the data brings up several performance issues, originating from the problem decomposition for parallel processing and from the data-locality maintenance needed for efficient cache utilization. In this thesis we address data-dependent problems in two different applications: one in physics-based simulation and the other in streaming data analysis. For the simulation problem, we present a parallel GPU algorithm for computing multiple shortest paths and Voronoi diagrams on a grid-like graph. For the streaming data analysis problem, we present a parallelizable data structure, based on packed memory arrays, for indexing dynamic geo-located data while keeping good memory locality.
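A sequential CPU analogue of the simulation-side contribution can be sketched in a few lines: multi-source Dijkstra on a grid graph, whose per-cell labels form exactly a graph Voronoi diagram of the seeds. The thesis's point is doing this in parallel on a GPU, which this sketch deliberately does not attempt:

```python
import heapq
import numpy as np

def grid_voronoi(weights: np.ndarray, seeds: dict):
    """Multi-source Dijkstra on a 4-connected weighted grid. Each cell is
    labelled with the seed reaching it at least cost: a graph Voronoi diagram."""
    h, w = weights.shape
    dist = np.full((h, w), np.inf)
    label = np.full((h, w), -1, dtype=int)
    heap = []
    for lab, (r, c) in seeds.items():
        dist[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c, lab))
    while heap:
        d, r, c, lab = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + weights[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + weights[nr, nc]
                label[nr, nc] = lab
                heapq.heappush(heap, (dist[nr, nc], nr, nc, lab))
    return dist, label

_, regions = grid_voronoi(np.ones((6, 8)), {0: (0, 0), 1: (5, 7)})
print(regions)  # two Voronoi regions split along the diagonal
```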
Belabbess, Badre. "Automatisation de détections d'anomalies en temps réel par combinaison de traitements numériques et sémantiques". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC2180/document.
Computer systems involving anomaly detection are emerging in both research and industry. Fields as varied as medicine (identification of malignant tumors), finance (detection of fraudulent transactions), information technology (network intrusion detection) and the environment (detection of pollution situations) are widely impacted. Machine learning offers a powerful set of approaches that can help solve these use cases effectively. However, it is a cumbersome process with strict rules, involving a long list of tasks such as data analysis and cleaning, dimensionality reduction, sampling, algorithm selection, optimization of hyper-parameters, etc. It also involves several experts who must work together to find the right approaches. In addition, the possibilities opened up today by the world of semantics show that it is possible to take advantage of web technologies to reason intelligently on raw data and extract information with high added value. The lack of systems combining numerical machine-learning approaches with semantic techniques of the web of data is the main motivation behind the various works proposed in this thesis. Finally, the anomalies detected do not necessarily correspond to abnormal situations in reality: the presence of external information can help decision-making by contextualizing the environment as a whole. Exploiting the spatial domain and social networks makes it possible to build contexts enriched with sensor data. These spatio-temporal contexts thus become an integral part of anomaly detection and must be processed using a Big Data approach. In this thesis, we present three systems with different architectures, each focused on an essential element of the big data, real-time, semantic web and machine learning ecosystems. WAVES is a Big Data platform for real-time analysis of RDF data streams captured from dense networks of IoT sensors; its originality lies in its ability to reason intelligently on raw data in order to infer implicit information from explicit information and assist in decision-making. This platform was developed as part of a FUI project whose main use case is the detection of anomalies in a drinking-water network. RAMSSES is a hybrid machine learning system whose originality is to combine advanced numerical approaches with proven semantic techniques; it has been specifically designed to remove the heavy burden of machine learning, which is time-consuming, complex, error-prone, and often requires a multi-disciplinary team. SCOUTER is an intelligent web-scraping system allowing the contextualization of singularities related to the Internet of Things by exploiting both spatial information and the web of data.
Wladdimiro, Cottet Daniel. "Dynamic adaptation in Stream Processing Systems". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS028.
The amount of data produced by today's web-based systems and applications increases rapidly, due to the many interactions with users (e.g., real-time stock market transactions, multiplayer games, streaming data produced by Twitter, etc.). As a result, there is a growing demand, particularly in the fields of commerce, security and research, for systems capable of processing this data in real time and providing useful information in a short space of time. Stream processing systems (SPSs) meet these needs and have been widely used for this purpose. The aim of an SPS is to process large volumes of data in real time by hosting a set of operators in applications based on directed acyclic graphs (DAGs). Most existing SPSs, such as Flink or Storm, are configured prior to deployment, usually defining the DAG and the number of operator replicas in advance. Overestimating the number of replicas can lead to a waste of allocated resources. On the other hand, depending on the interaction with the environment, the input data rate can fluctuate dynamically and, as a result, operators can become overloaded, leading to a degradation in system performance. These SPSs are not capable of adapting dynamically to the operator workload and input rate variations. One solution to this problem is to dynamically increase the number of resources, physical or logical, allocated to the SPS when the processing demand of one or more operators increases. This thesis presents two SPSs, RA-SPS and PA-SPS, following a reactive and a predictive approach respectively, for dynamically modifying the number of operator replicas. The reactive approach relies on the current state of the operators, computed from multiple metrics, while the predictive model is based on the input rate variation, operator execution time and queued events. The two SPSs extend the Storm SPS to reconfigure the number of replicas dynamically without application downtime. They also implement a load balancer that distributes incoming events fairly among operator replicas. Experiments on the Google Cloud Platform (GCP) were carried out with applications that process Twitter data, DNS traffic or log traces. Performance was evaluated with different configurations, and the results were compared with those of running the same applications on the original Storm as well as with state-of-the-art work such as the DABS-Storm SPS, which also adapts the number of replicas. The comparison shows that both RA-SPS and PA-SPS can significantly improve the number of events processed while reducing costs.
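A reactive replica controller of the general kind described here, scaling to keep per-replica utilization under a target, plus fair dispatching among replicas, can be caricatured in a few lines (the sizing rule, rates and names below are invented for the sketch, not RA-SPS's actual metrics):

```python
import math

def required_replicas(input_rate, service_rate_per_replica, utilization_target=0.7):
    """Enough replicas to keep each one below a target utilization
    at the currently observed input rate."""
    return max(1, math.ceil(input_rate / (service_rate_per_replica * utilization_target)))

class Operator:
    def __init__(self, service_rate):
        self.service_rate = service_rate  # events/s one replica can process
        self.replicas = 1
        self.rr = 0  # round-robin cursor for the load balancer

    def adapt(self, observed_rate):
        """Reactive step: resize the replica pool from the observed input rate."""
        self.replicas = required_replicas(observed_rate, self.service_rate)

    def route(self, event):
        """Fair (round-robin) dispatching of incoming events among replicas."""
        target = self.rr % self.replicas
        self.rr += 1
        return target

op = Operator(service_rate=1000.0)
for rate in (800, 2500, 12000, 900):  # a fluctuating input rate
    op.adapt(rate)
    print(f"input {rate}/s -> {op.replicas} replica(s)")
```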
Cron, Geneviève. "Diagnostic par reconnaissance des formes floue d'un système dynamique et réparti : Application à la gestion en temps réel du trafic téléphonique français". Compiègne, 1999. http://www.theses.fr/1999COMP1231.
Carbillet, Thomas. "Monitoring en temps réel de la vitesse de déplacement sur dispositif connecté : modélisation mathématique sur plateforme mobile interfacée avec une base de données d'entraînement et d'audit physiologique". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM013/document.
Improving running performance has become a major topic lately: we are getting closer to running a marathon in under 2 hours. However, few professionals work transversally on pre-race and in-race preparation for the general public. Training plans are based on trainers' experience and are often not custom-made, which exposes runners to injury risk and loss of motivation. The current analysis of training plans seems to have reached a limit, and the aim of BillaTraining® is to go beyond this limit by connecting research with the general public of runners. This PhD has two main goals. The first is to contribute to research about running. After gathering and formatting training and race data from different origins, we tried to isolate and describe how humans run marathons, including 2.5 to 4-hour performances. We studied acceleration, speed and heart-rate time series, among other things, with the idea of understanding the different running strategies. The second is the development of a web application embracing the three steps of the BillaTraining® method. The first step is an energetic audit, a 30-minute running session guided by the runner's sensations. The second step is the energetic radar, which presents the results of the audit. The last step is a tailor-made training plan built according to the runner's objectives. In order to come up with a solution, we had to bring together physiology, mathematics and computer science. The knowledge we had in physiology was based on Professor Véronique Billat's past and current research; this research is now part of BillaTraining® and is central to the growth of the company. We used mathematics to try to describe physiological phenomena through statistics. By applying the Ornstein-Uhlenbeck model, we found that humans are able to run at an even acceleration. By using the PELT (Pruned Exact Linear Time) method, we automated changepoint detection in time series. Finally, computer science enabled communication between physiology and mathematics for research, as well as the marketing of training tools at the forefront of innovation.
Goeller, Adrien. "Contribution à la perception augmentée de scènes dynamiques : schémas temps réels d’assimilation de données pour la mécanique du solide et des structures". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC013/document.
The development of sensors has always followed the ambition of industrial and scientific people to observe the unobservable. High-speed cameras are part of this adventure, revealing invisible dynamics such as crack formation or subtle mosquito flight. Industrial high-speed vision is a very competitive domain in which cameras stand out through their acquisition speed. This thesis aims to broaden their capabilities by augmenting the initial acquisition with dynamic models. This work proposes methods for linking a model with a real system in real time, the intended benefits being interpolation, prediction and identification. Three parts are developed. The first is based on video processing and proposes the use of elementary, generic kinematic models. An algorithm for motion estimation of large movements is proposed, but the generic nature of the models does not provide sufficient knowledge to be conclusive. The second part proposes using sequential data assimilation methods known as Kalman filters. A scheme to assimilate video data with a mechanical model is successfully implemented, and an application of data assimilation to modal analysis is developed. Two multi-sensor real-time assimilation schemes for nonlinear modal identification are proposed. These schemes are integrated into two applications, on 3D reconstruction and motion magnification.
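The sequential assimilation tool named here, the Kalman filter, admits a compact generic sketch. Below is a textbook constant-velocity filter on synthetic 1D measurements, not the thesis's video/mechanical formulation; all matrices and noise levels are illustrative:

```python
import numpy as np

# Constant-velocity model: state x = [position, velocity], unit time step.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # we only measure position
Q = 1e-4 * np.eye(2)                     # model (process) noise
R = np.array([[0.25]])                   # measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle: the model extrapolates, the measurement corrects."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(3)
x, P = np.zeros(2), np.eye(2)
for t in range(1, 11):
    z = np.array([0.5 * t + rng.normal(0, 0.5)])  # noisy position of a moving point
    x, P = kalman_step(x, P, z)
print("estimated position/velocity:", x)  # velocity should approach 0.5
```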
Vissière, David. "Solution de guidage-navigation-pilotage pour véhicules autonomes hétérogènes en vue d'une mission collaborative". PhD thesis, École Nationale Supérieure des Mines de Paris, 2008. http://pastel.archives-ouvertes.fr/pastel-00004492.
Toss, Julio. "Parallel algorithms and data structures for interactive applications". Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/172043.
Texto completoA busca por desempenho tem sido uma constante na história dos sistemas computacionais. Ha mais de uma década, o modelo de processamento sequencial já mostrava seus primeiro sinais de exaustão pare suprir a crescente exigência por performance. Houveram "barreiras"para a computação sequencial que levaram a uma mudança de paradigma e estabeleceram o processamento paralelo como padrão nos sistemas computacionais modernos. Com a adoção generalizada de computadores paralelos, novos algoritmos foram desenvolvidos e aplicações reprojetadas para se adequar às características dessas novas arquiteturas. No entanto, em aplicações menos convencionais, com características de interatividade e tempo real, alcançar paralelizações eficientes ainda representa um grande desafio. O requisito por desempenho de tempo real apresenta-se, por exemplo, em simulações interativas onde o sistema deve ser capaz de reagir às entradas do usuário dentro do tempo de uma iteração da simulação. O mesmo tipo de exigência aparece em aplicações de monitoramento de fluxos contínuos de dados (streams). Por exemplo, quando dados provenientes de sensores de tráfego ou postagens em redes sociais são produzidos em fluxo contínuo, o sistema de análise on-line deve ser capaz de processar essas informações em tempo real e ao mesmo tempo manter um consumo de memória controlada A natureza dinâmica desses dados traz diversos problemas de performance, tais como a decomposição do problema para processamento em paralelo e a manutenção da localidade de dados para uma utilização eficiente da memória cache. As estratégias de otimização tradicionais, que dependem de modelos pré-computados ou de índices estáticos sobre os dados, não atendem às exigências de performance necessárias nesses cenários. Nesta tese, abordamos os problemas dependentes de dados em dois contextos diferentes: um na área de simulações baseada em física e outro em análise de dados em fluxo contínuo. Para o problema de simulação, apresentamos um algoritmo paralelo, em GPU, para computar múltiplos caminhos mínimos e diagramas de Voronoi em um grafo com topologia de grade. Para o problema de análise de fluxos de dados, apresentamos uma estrutura de dados paralelizável, baseada em Packed Memory Arrays, para indexar dados dinâmicos geo-localizados ao passo que mantém uma boa localidade de memória.
The quest for performance has been a constant through the history of computing systems. It has been more than a decade now since the sequential processing model had shown its first signs of exhaustion to keep performance improvements. Walls to the sequential computation pushed a paradigm shift and established the parallel processing as the standard in modern computing systems. With the widespread adoption of parallel computers, many algorithms and applications have been ported to fit these new architectures. However, in unconventional applications, with interactivity and real-time requirements, achieving efficient parallelizations is still a major challenge. Real-time performance requirement shows up, for instance, in user-interactive simulations where the system must be able to react to the user’s input within a computation time-step of the simulation loop. The same kind of constraint appears in streaming data monitoring applications. For instance, when an external source of data, such as traffic sensors or social media posts, provides a continuous flow of information to be consumed by an online analysis system. The consumer system has to keep a controlled memory budget and deliver a fast processed information about the stream Common optimizations relying on pre-computed models or static index of data are not possible in these highly dynamic scenarios. The dynamic nature of the data brings up several performance issues originated from the problem decomposition for parallel processing and from the data locality maintenance for efficient cache utilization. In this thesis we address data-dependent problems on two different applications: one on physically based simulations and another on streaming data analysis. To deal with the simulation problem, we present a parallel GPU algorithm for computing multiple shortest paths and Voronoi diagrams on a grid-like graph. Our contribution to the streaming data analysis problem is a parallelizable data structure, based on packed memory arrays, for indexing dynamic geo-located data while keeping good memory locality.
Antoine, Maeva. "Amélioration de la dissémination de données biaisées dans les réseaux structurés". Thesis, Nice, 2015. http://www.theses.fr/2015NICE4054/document.
Many distributed systems face the problem of load imbalance between machines. With the advent of Big Data, large datasets whose values are often highly skewed are produced by heterogeneous sources and must often be processed in real time. Thus, it is necessary to be able to adapt to variations in the size, content and source of the incoming data. In this thesis, we focus on RDF data, a format of the Semantic Web. We propose a novel approach to improve data distribution, based on the use of several order-preserving hash functions. This allows an overloaded peer to independently modify its hash function in order to reduce the interval of values it is responsible for. More generally, to address the load-imbalance issue, there exist almost as many load-balancing strategies as there are systems. We show that many load-balancing schemes are composed of the same basic elements, and that only the implementation and interconnection of these elements vary. Based on this observation, we describe the concepts behind the building of a common API for implementing any load-balancing strategy independently from the rest of the code. Implemented on our distributed storage system, the API has a minimal impact on the business code and allows the developer to change only part of a strategy without modifying the other components. We also show how modifying some parameters can lead to significant improvements in the results.
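The core idea, order-preserving hashing plus local interval shrinking by an overloaded peer, can be sketched generically as follows (the hash construction and the Peer API are invented for the example, and the RDF-specific details are omitted):

```python
def order_preserving_hash(key: str, depth: int = 8) -> float:
    """Map a string to [0, 1) while preserving lexicographic order, by reading
    its first `depth` characters as digits in base 256 (a sketch: characters
    beyond Latin-1 are clamped, which a real implementation would handle)."""
    value, scale = 0.0, 1.0
    for ch in key[:depth]:
        scale /= 256.0
        value += min(ord(ch), 255) * scale
    return value

class Peer:
    """A peer owns the interval [low, high); shrinking it sheds keys to neighbours."""
    def __init__(self, low, high):
        self.low, self.high = low, high

    def owns(self, key):
        return self.low <= order_preserving_hash(key) < self.high

    def shed_load(self, fraction=0.5):
        """Local decision of an overloaded peer: give up part of its interval."""
        self.high = self.low + (self.high - self.low) * (1.0 - fraction)

p = Peer(0.0, 0.45)
print(p.owns("http://example.org/resource"), p.owns("zebra"))  # True False
p.shed_load(0.25)
print(p.low, p.high)  # interval reduced to [0.0, 0.3375)
```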
Aussel, Nicolas. "Real-time anomaly detection with in-flight data : streaming anomaly detection with heterogeneous communicating agents". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLL007/document.
With the rise in the number of sensors and actuators in an aircraft, and the development of reliable data links from the aircraft to the ground, it becomes possible to improve aircraft security and maintainability by applying real-time analysis techniques. However, given the limited availability of on-board computing and the high cost of the data links, current architectural solutions cannot fully leverage all the available resources, limiting their accuracy. Our goal is to provide a distributed algorithm for failure prediction that could be executed both on board the aircraft and on a ground station, and that would produce on-board failure predictions in near real time under a communication budget. In this approach, the ground station holds fast computation resources and historical data, while the aircraft holds limited computational resources and the current flight's data. In this thesis, we study the specificities of aeronautical data and the methods that already exist to produce failure predictions from them, and we propose a solution to the stated problem. Our contribution is detailed in three main parts. First, we study the problem of rare-event prediction created by the high reliability of aeronautical systems. Many learning methods for classifiers rely on balanced datasets; several approaches exist to correct a dataset imbalance, and we study their efficiency on extremely imbalanced datasets. Second, we study the problem of log parsing, as many aeronautical systems do not produce easy-to-classify labels or numerical values but full-text log messages. We study existing methods, based on a statistical approach and on Deep Learning, to convert full-text log messages into a form usable as input by learning algorithms for classifiers. We then propose our own method based on Natural Language Processing and show how it outperforms the other approaches on a public benchmark. Last, we offer a solution to the stated problem by proposing a new distributed learning algorithm that relies on two existing learning paradigms, Active Learning and Federated Learning. We detail our algorithm and its implementation, and provide a comparison of its performance with existing methods.
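For the log-parsing part, a crude statistical baseline is template extraction by masking variable tokens; the regexes and toy log lines below are invented and stand in both for far more capable parsers and for the NLP method the thesis actually proposes:

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Reduce a raw log line to a template by masking IPs, hex ids and numbers,
    so that lines produced by the same code path collapse together."""
    line = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<IP>", line)
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

logs = [
    "fuel pump 3 pressure 512 at 10.0.0.42",
    "fuel pump 7 pressure 498 at 10.0.0.43",
    "cabin door 2 sensor fault 0x1f",
]
print(Counter(template(l) for l in logs))
# -> {'fuel pump <NUM> pressure <NUM> at <IP>': 2,
#     'cabin door <NUM> sensor fault <HEX>': 1}
```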
Belfkih, Abderrahmen. "Contraintes temporelles dans les bases de données de capteurs sans fil". Thesis, Le Havre, 2016. http://www.theses.fr/2016LEHA0014/document.
In this thesis, we are interested in adding real-time constraints to Wireless Sensor Network Databases (WSNDB). Temporal consistency in a WSNDB must be ensured by respecting transaction deadlines and data temporal validity, so that sensor data reflect the current state of the environment. However, transmission and/or reception delays in the data collection process can lead to violations of data temporal validity. A database solution is most appropriate, provided it reconciles traditional database aspects with sensors and their environment. For this purpose, each sensor in the WSN is considered as a table in a distributed database, to which transactions (queries, updates, etc.) are applied. Transactions in a WSNDB require modifications to take the continuous data stream and real-time aspects into account. Our contribution in this thesis focuses on three parts: (i) a comparative study of temporal properties between periodic data collection based on a remote database and a query-processing approach with a WSNDB, (ii) the proposal of a real-time query processing model, and (iii) the implementation of a real-time WSNDB based on the techniques described in the second contribution.
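The two real-time notions combined here, data temporal validity and transaction deadlines, can be illustrated with a toy query evaluator (class and field names are invented; a real WSNDB would push such checks into the query processor itself):

```python
import time

class SensorReading:
    def __init__(self, value, timestamp, validity_s):
        self.value, self.timestamp, self.validity_s = value, timestamp, validity_s

    def is_fresh(self, now=None):
        """A reading only reflects the environment while its validity interval lasts."""
        now = time.time() if now is None else now
        return now - self.timestamp <= self.validity_s

def query(readings, deadline_s, started_at):
    """Answer with temporally valid readings only; abort if the transaction
    deadline has already passed."""
    if time.time() - started_at > deadline_s:
        return None  # transaction aborted: deadline missed
    return [r.value for r in readings if r.is_fresh()]

now = time.time()
readings = [SensorReading(21.5, now - 1.0, validity_s=5.0),
            SensorReading(19.0, now - 30.0, validity_s=5.0)]  # stale reading
print(query(readings, deadline_s=0.5, started_at=now))  # -> [21.5]
```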
Paumard, José. "Reconnaissance multiéchelle d'objets dans des scènes". Cachan, Ecole normale supérieure, 1996. http://www.theses.fr/1996DENS0025.
Alshaer, Mohammad. "An Efficient Framework for Processing and Analyzing Unstructured Text to Discover Delivery Delay and Optimization of Route Planning in Realtime". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1105/document.
The Internet of Things (IoT) is leading to a paradigm shift within the logistics industry. The advent of IoT has been changing the logistics service management ecosystem. Logistics service providers today use sensor technologies such as GPS or telemetry to collect data in real time while the delivery is in progress. The real-time collection of data enables service providers to track and manage their shipment process efficiently. The key advantage of real-time data collection is that it enables logistics service providers to act proactively to prevent outcomes such as delivery delay caused by unexpected or unknown events. Furthermore, providers today tend to use data stemming from external sources such as Twitter, Facebook, and Waze, because these sources provide critical information about events such as traffic, accidents and natural disasters. Data from such external sources enrich the dataset and add value to the analysis. Besides, collecting them in real time provides an opportunity to use the data for on-the-fly analysis and to prevent unexpected outcomes (e.g., delivery delay) at run time. However, data are collected raw and need to be processed for effective analysis. Collecting and processing data in real time is an enormous challenge. The main reason is that data stem from heterogeneous sources at very high speed, and this high speed and variety make it difficult to perform complex processing operations such as cleansing, filtering, handling incorrect data, etc. The variety of data (structured, semi-structured, and unstructured) raises further challenges in processing data in both batch style and real time, as different types of data may require different processing techniques. A technical framework that enables the processing of heterogeneous data is highly challenging and not currently available. In addition, performing data processing operations in real time is highly challenging; efficient techniques are required to carry out the operations on high-speed data, which cannot be done using conventional logistics information systems. Therefore, in order to exploit Big Data in logistics service processes, an efficient solution for collecting and processing data in both real-time and batch style is critically important. In this thesis, we developed and experimented with two data processing solutions: SANA and IBRIDIA. SANA is built on a Multinomial Naïve Bayes classifier, whereas IBRIDIA relies on Johnson's hierarchical clustering (HCL) algorithm, a hybrid technology that enables data collection and processing in batch style and real time. SANA is a service-based solution which deals with unstructured data; it serves as a multi-purpose system to extract the relevant events, including the context of the event (such as place, location, time, etc.), and it can also be used to perform text analysis of the targeted events. IBRIDIA was designed to process unknown data stemming from external sources and cluster them on the fly in order to gain knowledge and understanding of the data, which assists in extracting events that may lead to delivery delay. According to our experiments, both of these approaches show a unique ability to process logistics data. However, SANA was found more promising, since its underlying technology (the Naïve Bayes classifier) outperformed IBRIDIA from a performance-measurement perspective.
SANA was designed to generate knowledge graphs from the collected events immediately, in real time, without any need to wait, thus drawing maximum benefit from these events, whereas IBRIDIA is of interest within the logistics domain for identifying the most influential category of events affecting the delivery. Unfortunately, with IBRIDIA we must wait for a minimum number of events to arrive, and there is always a cold start. Since we are interested in re-optimizing the route on the fly, we adopted SANA as our data processing framework.
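A minimal stand-in for SANA's classification stage, assuming scikit-learn is available: a Multinomial Naïve Bayes text classifier separating delivery-relevant events from noise. The six-line corpus below is invented purely for the demonstration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny labelled corpus: messages either reporting a delivery-relevant event or not.
texts = ["heavy traffic jam on highway A7", "accident blocks main street downtown",
         "lovely weather for a walk today", "new cafe opened near the station",
         "flood closes the river bridge", "concert tickets on sale now"]
labels = ["event", "event", "noise", "noise", "event", "noise"]

# Bag-of-words counts feeding a Multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["bridge closed after accident"]))       # -> ['event']
print(model.predict(["concert tickets discount this week"]))  # -> ['noise']
```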
Hadim, Julien. "Etude en vue de la multirésolution de l’apparence". Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13794/document.
In recent years, the Bidirectional Texture Function (BTF) has emerged as a flexible solution for realistic, real-time rendering of materials with complex appearance at low computational cost. However, one drawback of this approach is the resulting huge amount of data: several methods have been proposed in order to compress and manage this data. In this document, we propose a new BTF representation that improves data coherency and thus allows better data compression. In a first part, we study acquisition and digital generation methods for BTFs and, more particularly, compression methods suitable for GPU rendering. We then carry out a study with our software BTFInspect in order to determine which of the visual phenomena present in BTFs mainly induce the per-texel data coherence. In a second part, we propose a new BTF representation, named Flat Bidirectional Texture Function (Flat-BTF), which improves data coherency and thus its compression. The analysis of the results shows, statistically and visually, the gain in coherency as well as the absence of a noticeable loss of quality compared to the original representation. In a third and last part, we demonstrate how our new representation may be used for real-time rendering applications on GPUs. We then introduce a compression of the appearance data through a quantization method on the GPU, presented in the context of 3D data streaming between a server of 3D data and a client that visualizes them.
Bouakaz, Adnan. "Ordonnancement temps-réel des graphes flots de données". PhD thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00916515.
Barbier, Sébastien. "Visualisation distance temps-réel de grands volumes de données". Grenoble 1, 2009. http://www.theses.fr/2009GRE10155.
Numerical simulations produce ever larger meshes, which can reach tens of millions of tetrahedra. These datasets must be visually analyzed to understand the simulated physical phenomenon and to draw conclusions. The computational power available for the scientific visualization of such datasets is often smaller than that used for the numerical simulation; as a consequence, interactive exploration of massive meshes is rarely achieved. In this document, we propose a new method to interactively explore massive tetrahedral meshes of over forty million tetrahedra. This method is fully integrated into the simulation process and is based on two meshes at different resolutions, one fine and one coarse, of the same simulation. A partition of the fine vertices is computed, guided by the coarse mesh. It allows the on-the-fly extraction of a mesh, called a biresolution mesh, mixing the two initial resolutions as in usual multiresolution approaches. The extraction of such meshes is carried out in main memory (CPU), on the latest generation of graphics cards (GPU), and with an out-of-core algorithm, guaranteeing extraction rates never reached in previous work. To visualize the biresolution meshes, a new direct volume rendering (DVR) algorithm is implemented entirely on the graphics card. Approximations can be performed and are evaluated in order to guarantee interactive rendering of any biresolution mesh.
Cont, Arshia. "Traitement et programmation temps-réel des signaux musicaux". Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00829771.
Texto completo
Itthirad, Frédéric. "Acquisition et traitement d'images 3D couleur temps réel". Thesis, Saint-Etienne, 2011. http://www.theses.fr/2011STET4011.
Texto completo
Existing 3D sensors are not widely used and are only capable of capturing 3D data. When 2D data are also necessary, one has to use another camera and correlate the two images. NT2I decided to develop its own solution in order to control the whole acquisition chain. My work has been to develop a specific camera along with color, calibration, and image processing algorithms. To that end, I worked on the extension of the LIP model (Logarithmic Image Processing) to color images and on the implementation of real-time algorithms.
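For reference, the classical grey-level LIP operations (on which any colour extension builds) have closed forms; the sketch below applies them channel-wise to a colour image. The channel-wise treatment is a naïve assumption for illustration, not the extension developed in the thesis.

```python
import numpy as np

M = 256.0  # upper bound of the grey-tone scale in the classical LIP model

def lip_add(f, g):
    # LIP addition: f (+) g = f + g - f*g/M
    return f + g - f * g / M

def lip_scale(lmbda, f):
    # LIP scalar multiplication: lambda (x) f = M - M*(1 - f/M)**lambda
    return M - M * (1.0 - f / M) ** lmbda

rng = np.random.default_rng(2)
img = rng.random((4, 4, 3)) * 255.0     # hypothetical colour image

# Naive colour extension: apply the grey-level operators per channel.
# In the classical LIP convention grey tones measure absorption, so
# addition makes the image darker.
darker = lip_add(img, 40.0)
contrast = lip_scale(1.5, img)
print(darker.max(), contrast.max())     # results stay inside [0, M)
```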
Holländer, Matthias. "Synthèse géométrique temps réel". Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0009.
Texto completo
Real-time geometry synthesis is an emerging topic in computer graphics. Today's interactive 3D applications have to face a variety of challenges to fulfill the consumer's request for more realism and high-quality images. Often, the visual effects and quality known from offline-rendered feature films or movie special effects are the ultimate goal, but they are hard to achieve in real time. This thesis offers real-time solutions that exploit the Graphics Processing Unit (GPU) and efficient geometry processing. In particular, a variety of topics related to classical fields in computer graphics, such as subdivision surfaces, global illumination and anti-aliasing, are discussed, and new approaches and techniques are presented.
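As a flavour of the kind of geometry processing involved, here is a generic one-step midpoint (1-to-4) triangle subdivision, the textbook refinement that subdivision-surface schemes build upon; it is not a technique taken from the thesis.

```python
def subdivide(vertices, triangles):
    """One 1-to-4 midpoint subdivision step on an indexed triangle mesh."""
    vertices = list(vertices)
    midpoint_cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            ax, ay, az = vertices[i]
            bx, by, bz = vertices[j]
            vertices.append(((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2))
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    new_triangles = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_triangles += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, new_triangles

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tris = [(0, 1, 2)]
verts, tris = subdivide(verts, tris)
print(len(verts), "vertices,", len(tris), "triangles")  # 6 vertices, 4 triangles
```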
Ammann, Lucas. "Visualisation temps réel de données à deux dimensions et demie". Strasbourg, 2010. https://publication-theses.unistra.fr/public/theses_doctorat/2010/AMMANN_Lucas_2010.pdf.
Texto completo
Heightfield data is now a common representation for several kinds of virtual objects. Indeed, it is frequently used to represent topographical or scientific data, and 3-dimensional digitisation devices also use it to store real objects. However, this kind of data raises some issues during manipulation and especially during visualisation. In this thesis, we develop simple yet efficient methods to render heightfield data, especially data from the digitisation of art paintings. In addition to the visualisation method, we propose a complete pipeline to acquire art paintings and to prepare the data for the visualisation process. To generalise the proposed approach, another rendering method is described that displays topographical data by combining rasterization with ray-casting. Both rendering techniques are based on an adaptive mechanism that combines several rendering algorithms to enhance visualisation performance. These mechanisms are designed to avoid pre-processing steps and to rely on straightforward rendering methods.
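The ray-casting half of such a hybrid renderer reduces to marching a ray across the height grid until it falls below the surface. The sketch below shows that basic loop with uniform stepping and made-up data; the adaptive mechanism of the thesis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
height = rng.random((64, 64))            # hypothetical heightfield in [0, 1]

def cast_ray(origin, direction, step=0.05, max_t=200.0):
    """March along the ray; return the first parameter t below the surface."""
    t = 0.0
    while t < max_t:
        x, y, z = origin + t * direction
        i, j = int(x), int(y)
        if 0 <= i < height.shape[0] and 0 <= j < height.shape[1]:
            if z <= height[i, j]:        # ray has entered the terrain
                return t
        t += step
    return None                          # ray left the grid without hitting

origin = np.array([0.0, 0.0, 2.0])
direction = np.array([0.6, 0.6, -0.1])
direction /= np.linalg.norm(direction)
print(cast_ray(origin, direction))
```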
Bahri, Nejmeddine. "Étude et conception d’un encodeur vidéo H264/AVC de résolution HD sur une plateforme multicœur". Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1116/document.
Texto completo
The trend toward HD resolution in most visual multimedia applications has driven the emergence of a large number of video compression standards, such as H.264/AVC (Advanced Video Coding) and HEVC (High Efficiency Video Coding). These standards achieve high coding performance in terms of compression ratio and video quality compared to previous standards. However, this performance comes at the cost of high computational complexity, which makes real-time encoding of HD resolution difficult on the most common single-core programmable processors. Moreover, as embedded systems are increasingly used in various multimedia applications, designing an embedded software solution for the H264/AVC encoder is another difficult challenge, since the embedded requirements in terms of hardware resources, such as memory and power consumption, must be met. The new embedded systems with multicore technology are an attractive solution to these problems. In this context, this thesis is interested in exploiting the performance of the new generation of Texas Instruments multicore DSPs to design an embedded real-time H264/AVC high-definition video encoder. We target a software solution characterized by high flexibility, allowing all parameters (quality, bitrate, etc.) to be set, in contrast to existing IPs. This software flexibility also makes the system scalable, allowing it to follow coding enhancements such as the migration to the newer HEVC standard. We first present the algorithmic, architectural, and structural optimizations applied to improve the encoding speed on a single DSP core before moving to a multicore implementation. We then propose parallel implementations of the H264/AVC encoder that exploit the multicore architecture of our platform and the potential parallelism in the encoding chain in order to meet real-time constraints while ensuring good performance in terms of bitrate and video quality. We also explore the problem of resource allocation (computing, storage, and communication resources) under hard execution-time constraints. Finally, this thesis opens the way towards the implementation of the new HEVC video coding standard on two embedded systems in order to prepare a software solution for future research.
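The slice-level parallelism typically exploited on such multicore platforms can be imitated on a general-purpose machine. In the sketch below, encode_slice is a placeholder for the real per-slice encoding work; nothing here is TI- or H.264-specific.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def encode_slice(args):
    """Placeholder for per-slice encoding (transform, quantize, entropy-code)."""
    slice_id, pixels = args
    return slice_id, pixels.size          # pretend: return a compressed size

def encode_frame(frame, n_slices=4):
    # Slices are independent coding units, so they can run on separate
    # cores, mirroring the slice-level parallelism used on multicore DSPs.
    rows = np.array_split(frame, n_slices, axis=0)
    with ProcessPoolExecutor(max_workers=n_slices) as pool:
        return sorted(pool.map(encode_slice, list(enumerate(rows))))

if __name__ == "__main__":
    frame = np.zeros((1080, 1920), dtype=np.uint8)   # one HD luma plane
    print(encode_frame(frame))
```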
Ali, Karim Mohamed Abedallah. "Architectures parallèles reconfigurables pour le traitement vidéo temps-réel". Thesis, Valenciennes, 2018. http://www.theses.fr/2018VALE0005/document.
Texto completo
Embedded video applications are now involved in sophisticated transportation systems such as autonomous vehicles. Designers of such applications face many challenges, among them: complex algorithms must be developed, verified and tested under tight time-to-market constraints; design automation tools are needed to increase design productivity; high computing rates are required to exploit the inherent parallelism and satisfy the real-time constraints; and power consumption must be reduced to extend the operating duration before recharging the vehicle. In this thesis work, we used FPGA technologies to tackle some of these challenges and design parallel reconfigurable hardware architectures for embedded video streaming applications. First, we implemented a flexible parallel architecture with two main contributions: (1) We proposed a generic model for pixel distribution/collection to tackle the problem of the huge amount of data transferred through the system. The required model parameters were defined, and the architecture generation was then automated to minimize development time. (2) We applied frequency scaling as a technique for reducing power consumption, and derived the equations for calculating the maximum level of parallelism as well as the depth of the FIFOs inserted for clock domain crossing. As the number of logic cells on a single FPGA chip increases, moving to higher levels of design abstraction becomes inevitable to shorten time-to-market and increase design productivity. During the design phase, it is common to have a space of design alternatives that differ in hardware utilization, power consumption and performance. We developed the ViPar tool with two main contributions to tackle this problem: (1) An empirical model was introduced to estimate power consumption based on hardware utilization (Slice and BRAM) and operating frequency; in addition, we derived the equations for estimating the hardware resources and execution time of each point during design space exploration. (2) Given the main characteristics of the parallel architecture, such as the level of parallelism, the number of input/output ports and the pixel distribution pattern, ViPar can automatically generate the parallel architecture for the designs selected for implementation. In the context of an industrial collaboration, we used high-level synthesis tools to implement a parallel hardware architecture for the Multi-window Sum of Absolute Differences stereo-matching algorithm. In this implementation, we presented a set of guiding steps for modifying the high-level description code so that it maps efficiently to hardware, and we explored the design space of alternatives in terms of hardware resources, performance, frequency and power consumption. During the thesis work, our designs were implemented and tested experimentally on a Xilinx Zynq ZC706 (XC7Z045-FFG900) evaluation board.
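On the FIFO-depth question, a common back-of-the-envelope rule (a textbook approximation, not the equations derived in the thesis) sizes a clock-domain-crossing FIFO from the burst length and the two clock frequencies:

```python
import math

def fifo_depth(burst_len, f_write_hz, f_read_hz):
    """Approximate CDC FIFO depth for a write burst crossing to a slower clock.

    Classic rule of thumb: during a burst of `burst_len` words the reader
    drains burst_len * (f_read / f_write) of them, so the FIFO must hold
    the difference. Real designs add margin for synchroniser latency.
    """
    if f_read_hz >= f_write_hz:
        return 2   # reader keeps up; minimal depth for the synchronisers
    return math.ceil(burst_len * (1.0 - f_read_hz / f_write_hz))

# Hypothetical numbers: 640-pixel burst, 150 MHz producer, 100 MHz consumer.
print(fifo_depth(640, 150e6, 100e6))   # -> 214
```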
Samé, Allou Badara. "Modèles de mélange et classification de données acoustiques en temps réel". Compiègne, 2004. http://www.theses.fr/2004COMP1540.
Texto completo
The motivation for this PhD thesis was a real-time flaw diagnosis application for pressurized containers using acoustic emissions, carried out in collaboration with the Centre Technique des Industries Mécaniques (CETIM). The aim was to improve LOTERE, real-time computer-aided-decision software that had been found too slow when the number of acoustic emissions becomes large. Two mixture-model-based clustering approaches, taking time constraints into account, are proposed. The first consists in clustering the 'bins' that result from converting the original observations into a histogram. The second is an on-line approach that recursively updates the classification. An experimental study using both simulated and real data has shown that the proposed methods are very efficient.
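The first approach, clustering histogram bins rather than raw observations, can be sketched as a weighted EM in which bin counts act as weights. The minimal 1-D Gaussian-mixture version below rests on that assumption and is not the thesis's exact algorithm.

```python
import numpy as np

def em_on_histogram(centers, counts, k=2, n_iter=50):
    """Weighted EM for a 1-D Gaussian mixture, using bin counts as weights."""
    rng = np.random.default_rng(4)
    mu = rng.choice(centers, size=k)
    var = np.full(k, centers.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    w = counts / counts.sum()                     # normalised bin weights
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each bin centre.
        d = centers[:, None] - mu[None, :]
        log_p = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        p = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: the bin weights multiply the responsibilities, so the
        # cost per iteration depends on the number of bins, not of events.
        rw = r * w[:, None]
        nk = rw.sum(axis=0)
        mu = (rw * centers[:, None]).sum(axis=0) / nk
        var = (rw * (centers[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        pi = nk
    return pi, mu, var

centers = np.linspace(0, 10, 100)
counts = np.exp(-0.5 * (centers - 3) ** 2) + 2 * np.exp(-0.5 * (centers - 7) ** 2)
print(em_on_histogram(centers, counts))
```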
Mathieu, Jean. "Intégration de données temps-réel issues de capteurs dans un entrepôt de données géo-décisionnel". Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28019/28019.pdf.
Texto completo
In the last decade, the use of sensors for measuring various phenomena has greatly increased. We can now use sensors to measure GPS position, temperature and even a person's heartbeat, and their wide diversity makes sensors the best tools for gathering data. Along with this effervescence, analysis tools have also advanced since the creation of transactional databases, leading to a new category of tools, Business Intelligence (BI) systems, which respond to the need for global analysis of the data. Data warehouses and OLAP (On-Line Analytical Processing) tools, which belong to this category, enable users to analyze large volumes of data, execute time-based queries and build statistical graphs in a few simple mouse clicks. Although the various types of sensors can surely enrich any analysis, such data requires heavy integration processes to be brought into the data warehouse, the centerpiece of any decision-making process. The different data types produced by sensors, the sensor models and the ways of transferring such data are even today significant obstacles to the integration of sensor data streams into a geo-decisional data warehouse. Moreover, current geo-decisional data warehouses are not initially built to receive new data at a high frequency. Since the performance of a data warehouse is degraded during an update, new data is usually added weekly, monthly, etc. Some data warehouses, called Real-Time Data Warehouses (RTDW), can be updated several times a day without their performance diminishing during the process, but this technology is still uncommon, very costly and in most cases considered to be in a "beta" stage. This research therefore aims to develop an approach for publishing and normalizing real-time sensor data streams and integrating them into a classic data warehouse. An optimized update strategy has also been developed so that the frequently arriving new data can be added to the analysis without affecting the data warehouse's performance.
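The heart of such an update strategy, decoupling high-frequency arrivals from less frequent bulk loads, can be pictured as a micro-batch buffer in front of the warehouse. In the toy sketch below, SQLite stands in for the geo-decisional warehouse, and the table name and batch size are hypothetical.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")   # stand-in for the data warehouse
conn.execute("CREATE TABLE fact_sensor (ts REAL, sensor_id TEXT, value REAL)")

buffer, BATCH_SIZE = [], 100

def on_reading(sensor_id, value):
    """Called at high frequency; defers the costly warehouse insert."""
    buffer.append((time.time(), sensor_id, value))
    if len(buffer) >= BATCH_SIZE:
        flush()

def flush():
    # One bulk insert per batch keeps the warehouse responsive for
    # analytical queries between loads.
    conn.executemany("INSERT INTO fact_sensor VALUES (?, ?, ?)", buffer)
    conn.commit()
    buffer.clear()

for i in range(250):
    on_reading("gps-01", float(i))
flush()   # drain the tail of the stream
print(conn.execute("SELECT COUNT(*) FROM fact_sensor").fetchone())
```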
Porquet, Damien. "Rendu en temps réel de scènes complexes". Limoges, 2004. http://aurore.unilim.fr/theses/nxfile/default/fdef9900-123a-4d8a-ada2-c65cf3c94a1f/blobholder:0/2004LIMO0036.pdf.
Texto completo
In computer graphics, we distinguish modeling from rendering, that is, image synthesis. The framework of this thesis is the real-time rendering of scenes composed of complex, non-deformable 3D objects. Such objects give computed images high realism but are difficult to use in terms of storage space and computing time, and much work has been carried out in this context. We first describe the commonly used approaches: point-sample rendering, geometric mesh simplification and image-based rendering. We then propose different algorithms to extend image-based methods to real-time rendering. Next, we describe a method to generate 4D textures that produce images of a given 3D object from an arbitrary viewpoint, independently of its geometric complexity. Finally, we present an image-interpolation method to map relief onto a given depth image or onto a low-polygon-count version of a complex object. This work makes intensive use of the latest GPU capabilities, allowing high-quality rendering in real time.
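A 4D texture of this kind can be read as a light field: a grid of reference images indexed by viewpoint, with new viewpoints synthesised by blending the nearest stored views. The sketch below simplifies the idea to a 1-D viewpoint axis with made-up data; note that the rendering cost is independent of the object's polygon count.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical 4D texture: 16 reference views of 32x32 RGB images,
# captured at viewpoint parameters u = 0, 1, ..., 15.
texture_4d = rng.random((16, 32, 32, 3)).astype(np.float32)

def render(u):
    """Synthesise the image at fractional viewpoint u by view interpolation."""
    u0 = int(np.clip(np.floor(u), 0, texture_4d.shape[0] - 2))
    t = u - u0
    # Only two image lookups and a blend, whatever the original
    # object's geometric complexity.
    return (1.0 - t) * texture_4d[u0] + t * texture_4d[u0 + 1]

img = render(7.3)
print(img.shape, float(img.min()), float(img.max()))
```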
Benkherrat, Moncef. "Analyse et traitement des potentiels évoqués cognitifs en temps réel". Paris 12, 1998. http://www.theses.fr/1998PA120033.
Texto completo
Pérez, Patricio Madain. "Stéréovision dense par traitement adaptatif temps réel : algorithmes et implantation". Lille 1, 2005. https://ori-nuxeo.univ-lille1.fr/nuxeo/site/esupversions/0c4f5769-6f43-455c-849d-c34cc32f7181.
Texto completo
Toutain, Laurent. "Samson : un simulateur pour systèmes répartis et temps-réel". Le Havre, 1991. http://www.theses.fr/1991LEHA0010.
Texto completo
Semghouni, Samy Rostom. "Modélisation stochastique des transactions temps réel". Le Havre, 2007. http://www.theses.fr/2007LEHA0012.
Texto completo
Real-time database systems (RTDBSs) are designed to address applications that need real-time processing of large quantities of data. An RTDBS must guarantee the ACID (Atomicity, Consistency, Isolation, Durability) properties of transactions on the one hand, and schedule the transactions so as to meet their individual deadlines on the other. In this thesis, we focus on a stochastic and probabilistic study of the behavior of real-time transactions. The study is conducted under assumptions on the transaction arrival rate, the transaction types, the concurrency control protocol (one optimistic, one pessimistic), and the scheduling policy. We have designed and developed a flexible and extensible RTDBS simulator on which the study is performed. The results obtained show that transaction behavior can be approximated by a probabilistic model, which is used to predict the transaction success ratio according to the system workload. We also propose a new scheduling policy for real-time transactions whose criteria are based on both transaction deadlines and transaction importance. This policy helps enhance system performance (maximizing the number of committed transactions), thereby improving the quality of service of RTDBSs.
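A priority mixing deadline urgency with importance, in the spirit of the proposed policy (whose exact criteria are not reproduced here), might look like the following sketch, where the weight w is a hypothetical tuning knob.

```python
import heapq
import time

def priority(deadline, importance, w=0.5, now=None):
    """Smaller is more urgent: blend remaining slack with (negated) importance."""
    now = time.monotonic() if now is None else now
    slack = max(deadline - now, 0.0)
    return w * slack - (1.0 - w) * importance

now = 0.0
ready = []   # min-heap of (priority, transaction name)
for name, deadline, importance in [("t1", 5.0, 1.0), ("t2", 3.0, 0.2), ("t3", 4.0, 3.0)]:
    heapq.heappush(ready, (priority(deadline, importance, now=now), name))

while ready:
    _, name = heapq.heappop(ready)
    print("schedule", name)   # t3 first: high importance outweighs its later deadline
```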
Engels, Laurent. "Acquisition en temps réel, identification et mise en correspondance de données 3D". Doctoral thesis, Université Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209852.
Texto completo
Glory, Anne-Cécile. "Vérification de propriétés de programmes flots de données synchrones". Grenoble 1, 1989. http://tel.archives-ouvertes.fr/tel-00335630.
Texto completo
Cassinelli, Alvaro. "Processeurs parallèles optoélectroniques stochastiques pour le traitement d'images en temps réel". Phd thesis, Université Paris Sud - Paris XI, 2000. http://pastel.archives-ouvertes.fr/pastel-00715890.
Texto completo
Zaidouni, Jamal. "Traitement en temps réel de signaux radar appliqués aux transports terrestres". Valenciennes, 2008. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/dd122fcd-724b-44ee-bd07-0d44b76e1697.
Texto completo
The objective of this research is to design a correlation-based anti-collision radar to be embedded in vehicles. The sensor relies on pseudo-random codes in transmission and on correlation in reception. We seek to improve its performance by studying the coding and the signal processing unit. To reduce the effect of the received Gaussian noise, we propose algorithms based on Higher-Order Statistics (HOS). After several simulations, the CORR2 algorithm (inspired by a fourth-order algorithm), associated with the family of type-2 Kasami codes, gives the best system, achieving a compromise between performance and a large number of available codes. To minimize the electromagnetic leakage effect, observed as spurious peaks in the output of the detection algorithms, we propose a solution based on partial identification of the radar channel. This solution, called Adaptive Leakage Cancellation (ALC), has shown good performance in numerical simulations of the two main road-radar situations (single-user and multi-user). We develop and validate by simulation the expressions of the probability of detection and of false alarm for the most efficient algorithm (CORR2). To improve the Receiver Operating Characteristics (ROC), we propose to use an averaged CORR2 algorithm (below -30 dB of input SNR). Finally, we build a 76-77 GHz radar prototype with the coding and signal processing implemented in an FPGA (Field-Programmable Gate Array). Tests with this mock-up allow us to evaluate the performance of our system in real conditions.
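The transmit-a-code/correlate-on-receive principle at the core of the system fits in a few lines: the lag of the correlation peak estimates the round-trip delay. This is the plain second-order correlator, a baseline rather than the HOS-based CORR2, with made-up parameters and a random bipolar code standing in for a Kasami sequence.

```python
import numpy as np

rng = np.random.default_rng(6)

# Transmitted pseudo-random bipolar code (stand-in for a Kasami sequence).
code = rng.choice([-1.0, 1.0], size=1023)

true_delay = 137                                     # round-trip delay in samples
rx = np.zeros(2048)
rx[true_delay:true_delay + code.size] += 0.5 * code  # attenuated echo
rx += rng.normal(0.0, 1.0, rx.size)                  # additive Gaussian noise

# Correlate the received signal with the known code; the peak lag
# estimates the target delay, and hence its range.
corr = np.correlate(rx, code, mode="valid")
print("estimated delay:", int(np.argmax(corr)), "(true:", true_delay, ")")
```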