Academic literature on the topic 'Applications sur GPUs'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Applications sur GPUs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Applications sur GPUs"

1

Zhang, Yongqiang, Jianxiong Zhou, Zhiyong Song, and Kaixin Zhou. "High-Precision GPU-Accelerated Simulation Algorithm for Targets under Non-Uniform Cluttered Backgrounds." Remote Sensing 15, no. 19 (September 22, 2023): 4664. http://dx.doi.org/10.3390/rs15194664.

Abstract:
This article presents a high-precision airborne video synthetic aperture radar (SAR) raw echo simulation method aimed at addressing the issue of simulation accuracy in video SAR image generation. The proposed method employs separate techniques for simulating targets and ground clutter, utilizing pre-existing SAR images for clutter simulation and employing the shooting and bouncing rays (SBR) approach to generate target echoes. Additionally, the method accounts for target-generated shadows to enhance the realism of the simulation results. The fast simulation algorithm is implemented using the C++ programming language and the Accelerated Massive Parallelism (AMP) framework, providing a fusion technique for integrating the clutter and target simulations. By combining the two types of simulated data to form the final SAR image, the method achieves efficient and accurate simulation. Experimental results demonstrate that the method not only improves computational speed but also ensures the accuracy and stability of the simulation outcomes. This research holds significant implications for the development of algorithms for video SAR target detection and tracking, providing robust support for practical applications.
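The fusion step described here is embarrassingly parallel: each complex sample of the final raw echo is the coherent sum of the clutter echo (derived from pre-existing SAR images) and the SBR-simulated target echo. The article implements this with C++ AMP; purely as an illustrative sketch of the same data-parallel pattern, a CUDA kernel could look like the following (array names and the scaling factor are assumptions, not taken from the article).

```cuda
#include <cuda_runtime.h>
#include <cuComplex.h>

// Illustrative fusion kernel: coherently add the SBR target echo into the
// clutter echo, one complex range-azimuth sample per thread.
__global__ void fuse_echoes(const cuFloatComplex* clutter,
                            const cuFloatComplex* target,
                            cuFloatComplex* fused,
                            float target_gain, int n_samples)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_samples) return;
    cuFloatComplex t = make_cuFloatComplex(target_gain * cuCrealf(target[i]),
                                           target_gain * cuCimagf(target[i]));
    fused[i] = cuCaddf(clutter[i], t);  // coherent (complex) sum of raw echoes
}
```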
2

Kumar, B. P., and C. S. Paidimarry. "Improved Real Time GPS RF Data Capturing for GNSS SDR Applications." Giroskopiya i Navigatsiya 28, no. 1 (2020): 42–53. http://dx.doi.org/10.17285/0869-7035.0023.

3

Kumar, B. P., and C. S. Paidimarry. "Improved Real Time GPS RF Data Capturing for GNSS SDR Applications." Gyroscopy and Navigation 11, no. 1 (January 2020): 59–67. http://dx.doi.org/10.1134/s2075108720010083.

4

Tabibi, Sajad, Felipe G. Nievinski, Tonie van Dam, and João F. G. Monico. "Assessment of modernized GPS L5 SNR for ground-based multipath reflectometry applications." Advances in Space Research 55, no. 4 (February 2015): 1104–16. http://dx.doi.org/10.1016/j.asr.2014.11.019.

5

Sajous, Patricia. "En route avec l’IA ! Incidence de l’IA et de l’informatique ubiquitaire dans l’établissement de la mobilité quotidienne : le cas des applications GPS." Flux Pub. anticipées, no. 4 (December 15, 2022): I3–XV. http://dx.doi.org/10.3917/flux1.pr1.0003.

Abstract:
AI (artificial intelligence) is colonizing more and more areas of everyday life. How should research in the human sciences on daily mobility take this into account? This article is an opportunity to reflect on the theoretical and methodological frameworks to be put in place. We develop the notion of the "augmented body" and treat the technical objects enabling this augmentation as "non-human actants" (Callon, 2006). We begin with a definition, specify how the augmentation of the body can be analyzed with respect to daily mobility, and put the long history of this line of research into perspective. With the theoretical framework in place, we turn to the methodological framework for identifying the mechanisms of augmentation. We then focus on GPS (Global Positioning System) applications integrating AI that can be used on smartphones. These applications are downloaded in ever-growing numbers; through an exploratory survey of four applications in France, we show how AI is changing the picture. In the discussion, we return to the reflexive framework we set up and to the trends identified in the results.
6

Savelonas, Michalis A., Christos N. Veinidis, and Theodoros K. Bartsokas. "Computer Vision and Pattern Recognition for the Analysis of 2D/3D Remote Sensing Data in Geoscience: A Survey." Remote Sensing 14, no. 23 (November 27, 2022): 6017. http://dx.doi.org/10.3390/rs14236017.

Abstract:
Historically, geoscience has been a prominent domain for applications of computer vision and pattern recognition. The numerous challenges associated with geoscience-related imaging data, which include poor imaging quality, noise, missing values, lack of precise boundaries defining various geoscience objects and processes, as well as non-stationarity in space and/or time, provide an ideal test bed for advanced computer vision techniques. On the other hand, the developments in pattern recognition, especially with the rapid evolution of powerful graphical processing units (GPUs) and the subsequent deep learning breakthrough, enable valuable computational tools, which can aid geoscientists in important problems, such as land cover mapping, target detection, pattern mining in imaging data, boundary extraction and change detection. In this landscape, classical computer vision approaches, such as active contours, superpixels, or descriptor-guided classification, provide alternatives that remain relevant when domain expert labelling of large sample collections is often not feasible. This issue persists, despite efforts for the standardization of geoscience datasets, such as Microsoft’s effort for AI on Earth, or Google Earth. This work covers developments in applications of computer vision and pattern recognition on geoscience-related imaging data, following both pre-deep learning and post-deep learning paradigms. Various imaging modalities are addressed, including: multispectral images, hyperspectral images (HSIs), synthetic aperture radar (SAR) images, point clouds obtained from light detection and ranging (LiDAR) sensors or digital elevation models (DEMs).
7

Duan, Huizhi, Yongsheng Li, Bingquan Li, and Hao Li. "Fast InSAR Time-Series Analysis Method in a Full-Resolution SAR Coordinate System: A Case Study of the Yellow River Delta." Sustainability 14, no. 17 (August 25, 2022): 10597. http://dx.doi.org/10.3390/su141710597.

Abstract:
Ground deformation is a major determinant of delta sustainability. Sentinel-1 Terrain Observation by Progressive Scans (TOPS) data are widely used in interferometric synthetic aperture radar (InSAR) applications to monitor ground subsidence. Due to the unparalleled mapping coverage and considerable data volume requirements, high-performance computing resources including graphics processing units (GPUs) are employed in state-of-the-art methodologies. This paper presents a fast InSAR time-series processing approach targeting Sentinel-1 TOPS images to process massive data with higher efficiency and resolution. We employed a GPU-assisted InSAR processing method to accelerate data processing. Statistically homogeneous pixel selection (SHPS) filtering was used to reduce noise and detect features in scenes with minimal image resolution loss. Compared to the commonly used InSAR processing software, the proposed method significantly improved the Sentinel-1 TOPS data processing efficiency. The feasibility of the method was investigated by mapping the surface deformation over the Yellow River Delta using SAR datasets acquired between January 2021 and February 2022. The findings indicate that several events of significant subsidence have occurred in the study area. Combined with the geological environment, underground brine and hydrocarbon extraction as well as sediment consolidation and compaction contribute to land subsidence in the Yellow River Delta.
8

Shi, Ming Xing, Bi Yu Tang, and Ao Peng. "A Cascade Tracking Loop for Weak GPS Signals." Applied Mechanics and Materials 719-720 (January 2015): 1116–23. http://dx.doi.org/10.4028/www.scientific.net/amm.719-720.1116.

Abstract:
Obtaining accurate carrier phase and frequency information is important when using a standalone GPS receiver. In weak-signal applications, stable tracking is hard to achieve because the measurement error becomes large when the SNR is low. Different methods, such as coherent integration, are used to improve the SNR ahead of the detector in the tracking process. This paper instead focuses on a different viewpoint: how to refine the estimation results. A cascade structure is introduced for weak-signal tracking. The structure is divided into two levels. In the first level, a raw phase estimate and an accurate frequency estimate are produced to achieve stable operation in a low-CNR environment. In the second level, the raw phase estimate is refined to meet the accurate-tracking requirement. This cascade structure can also work jointly with any other SNR-improving technique to obtain better performance.
9

Yao, Yan Xin, Yun Zhao, and Jun Ye Zeng. "GPS Multipath Estimation with Improved SNR Adaptation Performance Utilizing Adaptive Filters." Advanced Materials Research 774-776 (September 2013): 1664–70. http://dx.doi.org/10.4028/www.scientific.net/amr.774-776.1664.

Abstract:
Global positioning system multipath estimation is important for both high-precision positioning and reflection applications. This paper proposes a method for estimating multipath parameters, including carrier phases, with good SNR adaptation performance using adaptive filters. The method takes the signal after demodulation and spectrum despreading as the desired signal. This desired signal has a relatively high SNR, so the filtering performance is better than that of a previously proposed method; what is more, the method can estimate the multipath code delay, which is preferable for the carrier-phase ambiguity problem. Simulations validate that the method can estimate the multipath profile, including the amplitudes, code delays and carrier phases of the direct-path and multipath signals, with small errors: within 2% of the correct amplitude, less than one sampling interval in code delay, and 0.01 cycles in carrier phase at 11 dB SNR. The multipath mitigation performance of the method is better than that of the narrow correlator.
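As a concrete illustration of the adaptive-filter idea, here is a minimal complex LMS sketch, assuming (as the abstract states) that the despread signal serves as the desired signal and that each filter tap corresponds to a candidate multipath delay; the names and step size are hypothetical, not from the paper.

```cpp
#include <vector>
#include <complex>

// Minimal complex LMS sketch for multipath estimation. After convergence,
// |w[k]| approximates the path amplitude, arg(w[k]) its carrier phase, and
// the tap index k its code delay (in samples).
std::vector<std::complex<float>>
lms_multipath(const std::vector<std::complex<float>>& ref,      // code replica samples
              const std::vector<std::complex<float>>& desired,  // despread signal
              int n_taps, float mu)
{
    std::vector<std::complex<float>> w(n_taps, {0.f, 0.f});
    for (size_t n = n_taps; n < desired.size(); ++n) {
        std::complex<float> y{0.f, 0.f};
        for (int k = 0; k < n_taps; ++k) y += w[k] * ref[n - k];
        std::complex<float> e = desired[n] - y;       // estimation error
        for (int k = 0; k < n_taps; ++k)
            w[k] += mu * e * std::conj(ref[n - k]);   // complex LMS update
    }
    return w;
}
```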
10

Alwhibi, Mona S., Dina A. Soliman, Manal A. Awad, Asma B. Alangery, Horiah Al Dehaish, and Yasmeen A. Alwasel. "Green synthesis of silver nanoparticles: Characterization and its potential biomedical applications." Green Processing and Synthesis 10, no. 1 (January 1, 2021): 412–20. http://dx.doi.org/10.1515/gps-2021-0039.

Abstract:
In recent times, research on the synthesis of noble metal nanoparticles (NPs) has developed rapidly and attracted considerable attention. The use of plant extracts is the preferred mode for the biological synthesis of NPs due to the presence of biologically active constituents. Aloe vera is a plant endowed with therapeutic benefits especially in skincare due to its unique curative properties. The present study focused on an environmentally friendly and rapid method of phytosynthesis of silver nanoparticles (Ag-NPs) using A. vera gel extract as a reductant. The synthesized Ag-NPs were characterized by transmission electron microscopy (TEM), UV-Vis spectroscopy, Fourier transform infrared (FTIR), and dynamic light scattering (DLS). TEM micrographs showed spherical-shaped synthesized Ag-NPs with a diameter of 50–100 nm. The UV-Vis spectrum displayed a broad absorption peak of surface plasmon resonance (SPR) at 450 nm. The mean size and size distribution of the formed Ag-NPs were investigated using the DLS technique. Antibacterial studies revealed zones of inhibition by Ag-NPs of A. vera (9 and 7 mm) against Pseudomonas aeruginosa and Escherichia coli, respectively. Furthermore, the antifungal activity was screened, based on the diameter of the growth inhibition zone using the synthesized Ag-NPs for different fungal strains. Anticancer activity of the synthesized Ag-NPs against the mouse melanoma F10B16 cell line revealed 100% inhibition with Ag-NPs at a concentration of 100 µg mL−1. The phytosynthesized Ag-NPs demonstrated a marked antimicrobial activity and also exhibited a potent cytotoxic effect against mouse melanoma F10B16 cells. The key findings of this study indicate that synthesized Ag-NPs exhibit profound therapeutic activity and could be potentially ideal alternatives in medicinal applications.

Dissertations / Theses on the topic "Applications sur GPUs"

1

Nguyen, Dinh Quoc Dang. "Representation of few-group homogenized cross sections by polynomials and tensor decomposition." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP142.

Abstract:
This thesis focuses on studying the mathematical modeling of few-group homogenized cross sections, a critical element in the two-step scheme widely used in nuclear reactor simulations. As industrial demands increasingly require finer spatial and energy meshes to improve the accuracy of core calculations, the size of the cross section library can become excessive, hampering the performance of core calculations. Therefore, it is essential to develop a representation that minimizes memory usage while still enabling efficient data interpolation. Two approaches, polynomial representation and Canonical Polyadic decomposition of tensors, are presented and applied to few-group homogenized cross section data. The data is prepared using APOLLO3 on the geometry of two assemblies in the X2 VVER-1000 benchmark. The compression rate and accuracy are evaluated and discussed for each approach to determine their applicability to the standard two-step scheme. Additionally, GPU implementations of both approaches are tested to assess the scalability of the algorithms based on the number of threads involved. These implementations are encapsulated in a library called Merlin, intended for future research and industrial applications that involve these approaches. Both approaches, particularly the method of tensor decomposition, demonstrate promising results in terms of data compression and reconstruction accuracy. Integrating these methods into the standard two-step scheme would not only substantially reduce memory usage for storing cross sections, but also significantly decrease the computational effort required for interpolating cross sections during core calculations, thereby reducing overall calculation time for industrial reactor simulations.
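To make the tensor idea concrete: in a Canonical Polyadic model, a tabulated cross section is stored as a set of factor matrices and rebuilt on the fly. Below is a minimal CUDA sketch for a three-parameter table; the layout and names are assumptions for illustration, not the Merlin library's actual API (the thesis's tables may have more axes).

```cuda
// CP reconstruction sigma(i,j,k) ~ sum_r A[i][r] * B[j][r] * C[k][r].
// One thread rebuilds one table entry; factor matrices are row-major.
__global__ void cp_reconstruct(const float* A, const float* B, const float* C,
                               float* sigma, int ni, int nj, int nk, int rank)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= ni * nj * nk) return;
    int i = idx / (nj * nk), j = (idx / nk) % nj, k = idx % nk;
    float acc = 0.f;
    for (int r = 0; r < rank; ++r)
        acc += A[i * rank + r] * B[j * rank + r] * C[k * rank + r];
    sigma[idx] = acc;
}
```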
2

Degurse, Jean-François. "Traitement STAP en environnement hétérogène. Application à la détection radar et implémentation sur GPU." Phd thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-00958460.

Abstract:
Space-time adaptive processing (STAP) techniques jointly exploit the spatial and temporal dimensions of the signals received on an antenna array for filtering, unlike conventional array processing, which exploits only the spatial dimension. These techniques are particularly attractive for filtering the ground echoes received by an airborne radar, for which there is a direct link between direction of arrival and Doppler frequency. However, while the principles of STAP processing are now well established, their practical implementation in a real environment runs into difficulties that remain unresolved in the context of operational radar. The first obstacle, addressed in the first phase of this thesis, is theoretical: defining procedures for estimating the clutter covariance matrix based on a selection of representative training data, in a context of both non-homogeneous clutter and a sometimes high density of targets of interest. The second obstacle is technological and lies in the physical implementation of the algorithms, given the large computational load required. This point, crucial for airborne systems, is explored in the second phase of the thesis, with an analysis of the feasibility of a GPU implementation of the most computationally demanding stages of a STAP algorithm.
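For context, the covariance estimation at the heart of STAP is itself naturally data-parallel. The sketch below computes the generic sample (SMI) estimate R = (1/K) Σ x_k x_k^H with one thread per matrix entry; it does not reproduce the thesis's representative-training-data selection, which is the hard part addressed there.

```cuda
#include <cuComplex.h>

// X holds K training snapshots of dimension N, snapshot-major: X[k*N + i].
// Launch with a 2-D grid covering the N x N output matrix R.
__global__ void sample_covariance(const cuFloatComplex* X, cuFloatComplex* R,
                                  int N, int K)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N || j >= N) return;
    cuFloatComplex acc = make_cuFloatComplex(0.f, 0.f);
    for (int k = 0; k < K; ++k)
        acc = cuCaddf(acc, cuCmulf(X[k * N + i], cuConjf(X[k * N + j])));
    R[i * N + j] = make_cuFloatComplex(cuCrealf(acc) / K, cuCimagf(acc) / K);
}
```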
3

Boulay, Thomas. "Développement d'algorithmes pour la fonction NCTR - Application des calculs parallèles sur les processeurs GPU." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00907979.

Abstract:
The main topic of this thesis is the study of non-cooperative target recognition (NCTR) algorithms. The goal is to perform recognition within the "fighter" class using range profiles. We study four algorithms: one based on the k-nearest-neighbors algorithm (KPPV), one on probabilistic methods, and two on fuzzy logic. A major constraint on NCTR algorithms is controlling the error rate while maximizing the success rate. We showed that the first two algorithms could not meet this constraint. We instead proposed two fuzzy-logic-based algorithms that do meet it. For the first of the two, this comes at the expense of the success rate (especially on real data). The second version of the algorithm, however, considerably increased the success rate while keeping the error rate under control. The principle of this algorithm is to characterize class membership range cell by range cell, in particular by introducing data acquired in an anechoic chamber. We also proposed a procedure for adapting anechoic-chamber data acquired for one class to other target classes. The second strong constraint on NCTR algorithms is the real-time constraint. An in-depth study of a parallelization of the KPPV-based algorithm was carried out at the start of the thesis. This study brought out the points to take into account when parallelizing NCTR algorithms on GPU. Its conclusions will make it possible to efficiently parallelize future NCTR algorithms on GPU, in particular those proposed in this thesis.
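The distance computation that dominates a KNN classifier over range profiles maps directly onto a GPU, which is what makes the parallelization study natural. A minimal CUDA sketch follows (names illustrative; the thesis's exact metric and the k-selection step are not reproduced):

```cuda
// One thread computes the squared Euclidean distance between the profile
// under test and one training profile; picking the k smallest distances
// (sort or partial reduction) would follow as a second stage.
__global__ void knn_distances(const float* test, const float* train,
                              float* dist, int n_train, int n_cells)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;  // one training profile per thread
    if (t >= n_train) return;
    float acc = 0.f;
    for (int c = 0; c < n_cells; ++c) {             // loop over range cells
        float d = test[c] - train[t * n_cells + c];
        acc += d * d;
    }
    dist[t] = acc;
}
```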
4

Wang, Ke. "Exploiting Presence." Thesis, KTH, Kommunikationssystem, CoS, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91663.

Abstract:
By exploiting context awareness, traditional applications can be extended to offer better quality or new functions. Utilizing a context-aware infrastructure, a variety of context information is merged and processed to produce information that is useful to applications. Applications exploiting such context can provide more intelligent and creative services to end users. This thesis examines two ways to make use of a user’s location along with room occupancy information in context aware applications: a Context Agent and a Call Secretary. In the former case, the application subscribes to room occupancy information via a context server, and provides a Meeting Room Booking System with “real-time” information about the utilization of the rooms which the room booking system is to manage. The Call Secretary acquires both location and room occupancy information from a server. When the user is in one of the meeting rooms and multiple people are present, then this is interpreted as the user being in a meeting -- therefore it triggers a CPL module in a SIP proxy to redirect the user’s incoming call to their voice mail box. A description of the implementation of these two applications will be presented along with an evaluation of these applications’ performance. The evaluation of the Context Agent showed that it was straightforward to integrate information from a presence source and to extend the meeting room booking system to use this information. The Call Secretary needs a more reliable source for the user's location. However, given this location the Call Secretary provides the service which the user expects.
5

Recur, Benoît. "Précision et qualité en reconstruction tomographique : algorithmes et applications." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14113/document.

Abstract:
Many modalities are now available to acquire an object non-destructively (X-ray scanner, micro-scanner, terahertz waves, transmission electron microscopy, etc.). These tools acquire a set of projections around the object, and a reconstruction step leads to a representation of the acquired domain. The main limitation of these methods is that they rely on a continuous modeling of the domain whereas they operate in a finite one. The resulting discretization step introduces errors in the obtained images. Moreover, the acquisition step is not performed ideally and may be corrupted by artifacts and noise. Many direct or iterative methods have been developed to try to reduce these errors and produce an image as representative of reality as possible. An overview of these reconstructions is proposed here, enriched with a study of quality, precision and robustness to acquisition noise. Since discretization is one of the major limitations, we then try to adapt discrete methods to the reconstruction of real data. These methods are exact in a finite domain but are not suitable for real acquisitions, especially because of their sensitivity to errors. We therefore propose a link between the two worlds and develop new discrete methods that are more robust to noise. Finally, we address the missing-data problem, i.e. when the acquisition is not uniform around the object, which causes deformations in the reconstructed images. Since discrete reconstructions are insensitive to this effect, we propose a first solution using the tools developed in this work.
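As a reminder of why reconstruction parallelizes well: in backprojection, every image pixel can be accumulated independently over the projection angles. A minimal unfiltered parallel-beam CUDA sketch follows; it only illustrates the data-parallel structure, not the discrete or noise-robust methods developed in the thesis.

```cuda
// One thread accumulates, for one pixel of an n x n image, the sinogram
// samples whose ray passes through it (nearest-bin interpolation).
// Real pipelines filter the projections first (filtered backprojection).
__global__ void backproject(const float* sino, float* img,
                            int n_angles, int n_bins, int n)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= n || y >= n) return;
    const float PI = 3.14159265f;
    float cx = x - n / 2.0f, cy = y - n / 2.0f, acc = 0.f;
    for (int a = 0; a < n_angles; ++a) {
        float th = a * PI / n_angles;
        int bin = (int)lrintf(cx * cosf(th) + cy * sinf(th)) + n_bins / 2;
        if (bin >= 0 && bin < n_bins) acc += sino[a * n_bins + bin];
    }
    img[y * n + x] = acc * PI / n_angles;  // angular quadrature weight
}
```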
6

Claustre, Jonathan. "Modèle particulaire 2D et 3D sur GPU pour plasma froid magnétisé : Application à un filtre magnétique." Phd thesis, Université Paul Sabatier - Toulouse III, 2012. http://tel.archives-ouvertes.fr/tel-00796690.

Abstract:
The Particle-In-Cell Monte-Carlo Collision (PIC MCC) method is a powerful and efficient tool for studying plasmas (in our case, cold plasmas), because it describes the evolution in time and space of charged particles under the effect of self-consistent fields and collisions. In a purely electrostatic case, the method consists in following the trajectories of a representative number of charged particles, electrons and ions, in phase space, and in describing the collective interaction of these particles by solving the Poisson equation. For cold plasmas, the phase-space trajectories are determined by the self-consistent electric field and by collisions with neutral atoms or molecules and, at relatively high densities, by collisions between charged particles. Simulations with this type of method are very expensive in terms of resources (CPU and memory). This is due to the strong constraints (in explicit PIC simulations) on the time step (smaller than a fraction of the plasma period and of the inverse of the electron gyrofrequency), on the grid spacing (of the order of the Debye length), and on the number of particles per Debye length in the simulation (typically several tens). The PIC MCC algorithm can be parallelized on CPU clusters (processing the particle trajectories is easy to parallelize, but parallelizing the Poisson solver is much harder). The emergence of GPGPU (General Purpose computing on Graphics Processing Units) in computer science research has opened the way to low-cost massively parallel simulations, by using the very large number of processors available on graphics cards to perform elementary operations (e.g. computing particle trajectories) in parallel. A number of numerical tools for GPU computing have been developed over the last ten years. In addition, the graphics card manufacturer NVIDIA has developed a programming environment called CUDA (Compute Unified Device Architecture) that enables efficient parallelization of codes on GPU. PIC simulation on graphics cards, or on combinations of GPUs and CPUs, has been reported by several authors, but PIC models with Monte-Carlo collisions on GPU are still under active study. To the best of our knowledge, this work is the first to show results from 2D and 3D PIC MCC codes fully parallelized on GPU for the study of magnetized cold plasmas. In PIC simulations it is relatively easy to track the particles when there are neither losses nor creation of particles over time (e.g. periodic boundaries or no ionization). Otherwise (ionization, recombination, absorption, etc.) it becomes necessary to reorder the particles at every time step. This thesis highlights strategies that can be used in PIC MCC models on GPU to overcome the difficulties encountered when rearranging particles after each time step in the presence of creation and/or losses.
The main interest of this work is to propose a GPU implementation of the PIC MCC model, to measure its efficiency (parallelization), to compare it with calculations performed on CPU, and finally to illustrate the model with simulations of magnetized cold plasma. The objective is to present the code in detail and to show the constraints and advantages of programming a PIC MCC code on GPU. The discussion focuses largely on the 2D case, but a 3D algorithm has also been developed and tested, as shown at the end of this thesis.
7

Bachmann, Etienne. "Imagerie ultrasonore 2D et 3D sur GPU : application au temps réel et à l'inversion de forme d'onde complète." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30133/document.

Abstract:
If the most important advances in ultrasound imaging have long been tied to the quality of the instrumentation, the advent of computing undeniably changed the game by introducing growing possibilities for data processing to obtain a better image. In addition, GPUs, the main components of graphics cards, deliver, thanks to their architecture, significantly higher processing speeds than CPUs, including for scientific computing. The goal of this work is to take full advantage of this new computing tool, targeting two complementary applications. The first is to enable real-time imaging of better quality than other sonographic imaging techniques, by parallelizing the FTIM (Fast Topological IMaging) process. The second is to introduce quantitative imaging, and more particularly the reconstruction of the wavespeed map of an unknown medium, using full waveform inversion.
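For orientation, the archetypal GPU-friendly operation in ultrasound imaging is delay-and-sum beamforming: one thread per pixel sums each element's RF sample at the pixel's round-trip delay. The sketch below is that generic operation (assuming, for simplicity, that the same element transmits and receives), not the FTIM algorithm itself.

```cuda
#include <math.h>

// rf: per-element RF traces (n_elem x n_samp), elem_x: element x-positions,
// c: sound speed, fs: sampling rate. One thread forms one image pixel.
__global__ void das(const float* rf, float* img,
                    const float* elem_x, int n_elem, int n_samp,
                    int nx, int nz, float dx, float dz, float c, float fs)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iz = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix >= nx || iz >= nz) return;
    float px = ix * dx, pz = iz * dz, acc = 0.f;
    for (int e = 0; e < n_elem; ++e) {
        float dxe = px - elem_x[e];
        float d = 2.f * sqrtf(dxe * dxe + pz * pz);  // round-trip path length
        int s = (int)(d / c * fs);                   // delay in samples
        if (s < n_samp) acc += rf[e * n_samp + s];
    }
    img[iz * nx + ix] = acc;
}
```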
8

Roussel, Nicolas. "Application de la réflectométrie GNSS à l'étude des redistributions des masses d'eau à la surface de la terre." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30327/document.

Abstract:
GNSS reflectometry (or GNSS-R) is an original and opportunistic remote sensing technique based on the analysis of the electromagnetic waves continuously emitted by GNSS positioning system satellites (GPS, GLONASS, etc.) that are captured by an antenna after reflection on the Earth's surface. These signals interact with the reflective surface and hence contain information about its properties. When they reach the antenna, the reflected waves interfere with those coming directly from the satellites. This interference is particularly visible in the signal-to-noise ratio (SNR) parameter recorded by conventional GNSS stations. It is thus possible to invert the SNR time series to estimate the reflective surface characteristics. While the feasibility and usefulness of this method are well established, the implementation of this technique poses a number of issues, namely which spatio-temporal accuracies and resolutions can be achieved and, consequently, which geophysical observables are accessible. The aim of my PhD research work is to provide some answers on this point, focusing on the methodological development and geophysical exploitation of the SNR measurements performed by conventional GNSS stations. I focused on the estimation of variations in the antenna height relative to the reflecting surface (altimetry) and on soil moisture in continental areas. The SNR data inversion method that I propose has been successfully applied to determine local variations of: (1) the sea level near the Cordouan lighthouse (not far from Bordeaux, France) from March 3 to May 31, 2013, where the main tidal periods and waves were clearly identified; and (2) the soil moisture in an agricultural plot near Toulouse, France, from February 5 to March 15, 2014. My method eliminates some restrictions imposed in earlier work, where the velocity of the vertical variation of the reflective surface was assumed to be negligible. Furthermore, I developed a simulator that allowed me to assess the influence of several parameters (troposphere, satellite elevation angle, antenna height, local relief, etc.) on the path of the reflected waves and hence on the position of the reflection points. My work shows that GNSS-R is a powerful alternative and a significant complement to current measurement techniques, establishing a link between the different temporal and spatial resolutions currently achieved by conventional tools (sensors, radar, scatterometers, etc.). This technique offers the major advantage of being based on already-deployed and sustainable satellite networks, and can be applied to any GNSS geodetic station, including those of permanent networks (e.g., the French RGP). Therefore, by installing a processing chain for these SNR acquisitions, data from hundreds of pre-existing stations could be used to make local altimetry measurements in coastal areas or to estimate soil moisture for inland antennas.
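For readers unfamiliar with the SNR inversion used here, the standard GNSS-R interference model (a textbook relation, not a formula quoted from the thesis) links the detrended SNR oscillation to the antenna height h above the reflector:

```latex
\mathrm{SNR}_d(\varepsilon) \;\approx\; A \cos\!\left(\frac{4\pi h}{\lambda}\,\sin\varepsilon + \phi\right)
```

With x = sin(ε) as the abscissa, the oscillation frequency is f = 2h/λ for a static surface, so a spectral analysis of SNR_d(x) yields h = λf/2; relaxing the static-surface assumption is precisely the restriction this thesis removes.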
9

Doris-Blais, Delphine. "Modélisation de récepteurs GPS : application à l'étude de l'influence des multitrajets sur les performances du récepteur L1." Toulouse, INPT, 1997. http://www.theses.fr/1997INPT068H.

Abstract:
The GPS system, certified as a supplemental means of navigation by the FAA since December 1993, is generally considered one of the key elements of the future global satellite navigation system intended to replace, sooner or later, the current precision approach and landing aids such as ILS and MLS. With differential GPS systems, complemented by carrier-phase tracking techniques, positioning accuracy below one meter is routinely achieved, thus meeting, in terms of accuracy only, the requirements of Category III landing. However, even the use of differential corrections cannot permanently guarantee this accuracy, because of a major unresolved problem: multipath propagation. The objectives and originality of this thesis are twofold: first, to model the complete GPS L1 signal tracking chain in the COSSAP digital transmission simulation tool, and second, to study the errors induced by multipath propagation on receiver performance. We therefore first present and justify the models of the various GPS receiver architectures that we built in the COSSAP simulation tool. These models are validated by characterizing the response of the various loops involved to different disturbances (phase step, frequency step, frequency ramp, thermal noise). The problems underlying digital simulation are then highlighted. Next, we carry out a general analysis of the performance of the L1 GPS receiver in the presence of multipath and study the various signal processing solutions for reducing the effects of multipath on GPS pseudorange observations. Symbolic expressions of the errors induced in both the phase-locked loop and the code loop are derived and validated through software simulations of the various GPS receiver architectures. These theoretical developments form the basis of our discussion of the signal processing solutions for reducing multipath effects on the tracking loops. After detailing the operating principles of the classical solutions presented in the literature and underlining their limitations, we introduce several code discriminator structures which, based on multiple-correlation computations, very significantly reduce both the amplitude of the observed code errors and the range of relative delays over which multipath signals affect the pseudorange determination process. We show that, by using these correlation techniques, we approach P-code receiver performance while using C/A-code reception techniques. They also prove quite competitive with techniques based on space-diversity principles, which can indeed identify each received component, but only at the cost of significant additional hardware and computation.
10

Aranzulla, Massimo. "Atmospheric water vapour tomography for DInSAR application and effect of volcanic plume on the microwaves." Doctoral thesis, Università di Catania, 2014. http://hdl.handle.net/10761/1543.

Abstract:
A particular synergy between GPS and SAR techniques, intended to improve the precision of current ground deformation monitoring techniques, is investigated. The study of atmospheric anomalies in the propagation of GPS EM waves is useful for extracting information about the wet refractivity field. Because of its height and its quite variable weather conditions, estimating Mount Etna's atmospheric anomalies from GPS measurements is of notable importance for calibrating SAR interferograms and establishing the effective ground deformation of the volcanic edifice. In this study we presented a method to obtain a 3D electromagnetic wave velocity tomography, starting from the analysis of GPS output data. Thanks to the agreement between the University of Catania and the INGV-OE, the GPS data used in this work come from the Etnanet framework. The GPS processing was carried out using the GAMIT software, adopting appropriate processing parameters. A new software package was developed for deriving the tropospheric tomography from the GPS data. The code was validated using synthetic tests that assume different structures of atmospheric anomalies, with random noise about twice as severe as the typical errors of GPS. The results of the tests proved that the tomography software is able to reconstruct the simulated anomalies faithfully. The code was applied to study the structure of the atmosphere in an actual case: August 12, 2011 at 10.00 am. The results of the tomography clearly indicate important features of the refractivity field of the studied day. In conclusion, the synthetic tests and the application of the new software to actual data sets demonstrate that it is able to reveal tropospheric anomalies and is thus a useful tool to improve the results of SAR interferometry. An indirect outcome of using GPS for atmospheric sounding over an active volcanic area concerns the detection of volcanic products in the atmosphere. Given Mt. Etna's persistent activity during the last two years, the capability of GPS to detect the volcanic plume was investigated. The Etna volcano is particularly suited for an in-depth investigation into the aptitude of GPS observations to detect volcanic plumes, owing to both the high frequency of explosive episodes and the well-developed GPS network. Two different approaches were tested in order to examine the capability of the GPS network to detect volcanic plumes at Etna. The first approach is applied to the signal strength of the GPS L2 carrier phase data; the second approach is statistical, and analyzes the single-difference post-fit residuals of the processed signals to test the hypothesis that the plume affects the GPS data. The proposed method was tested on the September 4–5, 2007 activity of Mt. Etna. Results from nineteen GPS permanent stations show that during this explosive activity the GPS residuals definitely include the contribution of the volcanic plume. In the future, data derived from the GPS stations located on Etna's flanks could be used to improve the volcanic ash alerting system already operating at the Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Etneo.
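The quantity inverted by such a tomography is, in essence, the slant wet delay accumulated along each receiver-satellite ray; the standard discretization (not quoted from the thesis) is:

```latex
\mathrm{SWD}_i \;=\; 10^{-6}\int_{\mathrm{ray}_i} N_w(s)\,ds \;\approx\; 10^{-6}\sum_{j} d_{ij}\,N_{w,j}
```

where N_w is the wet refractivity, d_ij is the length of ray i inside voxel j, and the voxel values N_w,j are the unknowns of the linear system solved by the tomography.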

Books on the topic "Applications sur GPUs"

1

Wittman, David M. The Elements of Relativity. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199658633.001.0001.

Abstract:
Relativity is a set of remarkable insights into the way space and time work. The basic notion of relativity, first articulated by Galileo, explains why we do not feel Earth moving as it orbits the Sun and was successful for hundreds of years. We present thinking tools that elucidate Galilean relativity and prepare us for the more modern understanding. We then show how Galilean relativity breaks down at speeds near the speed of light, and follow Einstein’s steps in working out the unexpected relationships between space and time that we now call special relativity. These relationships give rise to time dilation, length contraction, and the twin “paradox” which we explain in detail. Throughout, we emphasize how these effects are tightly interwoven logically and graphically. Our graphical understanding leads to viewing space and time as a unified entity called spacetime whose geometry differs from that of space alone, giving rise to these remarkable effects. The same geometry gives rise to the energy-momentum relation that yields the famous equation E = mc², which we explore in detail. We then show that this geometric model can explain gravity better than traditional models of the “force” of gravity. This gives rise to general relativity, which unites relativity and gravity in a coherent whole that spawns new insights into the dynamic nature of spacetime. We examine experimental tests and startling predictions of general relativity, from everyday applications (GPS) to exotic phenomena such as gravitomagnetism, gravitational waves, Big Bang cosmology, and especially black holes.
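The energy-momentum relation mentioned in this abstract is the standard special-relativistic one:

```latex
E^2 = (pc)^2 + (mc^2)^2
```

For a particle at rest (p = 0) this reduces to the famous E = mc².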

Book chapters on the topic "Applications sur GPUs"

1

Guinn, Joseph, Ronald Muellerschoen, Laureano Cangahuala, Dah-Ning Yuan, Bruce Haines, Michael Watkins, and Edward Christensen. "TOPEX/Poseidon Precision Orbit Determination Using Combined GPS, SLR and DORIS." In GPS Trends in Precise Terrestrial, Airborne, and Spaceborne Applications, 128–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-80133-4_20.

2

Cangahuala, Laureano, Ronald Muellerschoen, Dah-Ning Yuan, Edward Christensen, Eric Graat, and Joseph Guinn. "TOPEX/Poseidon Precision Orbit Determination With SLR and GPS Anti-Spoofing Data." In GPS Trends in Precise Terrestrial, Airborne, and Spaceborne Applications, 123–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-642-80133-4_19.

3

Zhang, Ning, Shuangcheng Zhang, Yuefan He, Qin Zhang, Xiaojuan Zhang, and Tianhe Wan. "Characteristic of GPS SNR and It’s Application for Snow Depth Monitoring Analysis." In China Satellite Navigation Conference (CSNC) 2017 Proceedings: Volume I, 175–85. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4588-2_15.

4

Bhura, Mayank, Pranav H. Deshpande, and K. Chandrasekaran. "CUDA or OpenCL." In Research Advances in the Integration of Big Data and Smart Computing, 267–79. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8737-0.ch015.

Abstract:
Usage of General Purpose Graphics Processing Units (GPGPUs) in high-performance computing is increasing as heterogeneous systems continue to become dominant. CUDA has been the programming environment for nearly all such NVIDIA GPU based GPGPU applications. Still, the framework runs only on NVIDIA GPUs; utilizing other available computing devices requires reimplementation in another framework. OpenCL provides a vendor-neutral and open programming environment, with many implementations available on CPUs, GPUs, and other types of accelerators; OpenCL can thus be regarded as a write-once, run-anywhere framework. Despite this, both frameworks have their own pros and cons. This chapter presents a comparison of the performance of the CUDA and OpenCL frameworks, using an algorithm to find the sum of all possible triple products on a list of integers, implemented on GPUs.
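The chapter does not spell out its benchmark kernel, so here is one hedged interpretation, assuming "all possible triple products" means distinct index triples i < j < k and non-negative inputs (hence the unsigned accumulator); a corresponding OpenCL kernel would be structurally identical.

```cuda
// One thread owns all triples that start at index i and atomically adds
// its partial sum into the global accumulator.
__global__ void triple_product_sum(const int* a, int n, unsigned long long* out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    unsigned long long local = 0;
    for (int j = i + 1; j < n; ++j)
        for (int k = j + 1; k < n; ++k)
            local += (unsigned long long)((long long)a[i] * a[j] * a[k]);
    atomicAdd(out, local);  // out must be zero-initialized before launch
}
```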
5

"Space Applications Used in Everyday Life: Communication and GPS." In Sun Above the Horizon, 275–91. Pan Stanford Publishing, 2014. http://dx.doi.org/10.1201/b17086-30.

6

Khemiri, Randa, Soulef Bouaafia, Asma Bahba, Maha Nasr, and Fatma Ezahra Sayadi. "Performance Analysis of OpenCL and CUDA Programming Models for the High Efficiency Video Coding." In Digital Image Processing - Advances and Applications [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.99823.

Abstract:
In motion estimation (ME), block matching algorithms have great potential for parallelism. The search for the best match is performed by computing the similarity of each block position inside the search area, using a similarity metric such as the Sum of Absolute Differences (SAD), which is used in the various steps of motion estimation algorithms. Moreover, it can be parallelized using a Graphics Processing Unit (GPU), since the computation is identical for each block of pixels, thus offering better results. In this work, a fixed OpenCL code was first run on several architectures (CPU and GPU); second, a parallel GPU implementation of the SAD process was proposed with CUDA and OpenCL, for block sizes from 4x4 to 64x64. A comparative study was established between execution times on the GPU for the same video sequence. The experimental results indicated that the GPU OpenCL execution time was better than the CUDA time, with a performance ratio that reached two.
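As a sketch of the SAD stage described above (assuming, for illustration, 8-bit luma frames and a search window that stays inside the frame; variable names are hypothetical), one CUDA thread can evaluate one candidate position:

```cuda
// cur/ref: current and reference frames; (bx,by): top-left of the current
// block; bsize: block side (e.g. 4..64); search: search radius in pixels.
// Launch one thread per candidate offset (mx,my); the minimum over the cost
// array gives the best motion vector.
__global__ void sad_search(const unsigned char* cur, const unsigned char* ref,
                           int width, int bx, int by, int bsize,
                           int search, float* cost)
{
    int mx = blockIdx.x * blockDim.x + threadIdx.x;
    int my = blockIdx.y * blockDim.y + threadIdx.y;
    int range = 2 * search + 1;
    if (mx >= range || my >= range) return;
    int ox = bx + mx - search, oy = by + my - search;  // candidate top-left
    unsigned int sad = 0;
    for (int y = 0; y < bsize; ++y)
        for (int x = 0; x < bsize; ++x)
            sad += abs((int)cur[(by + y) * width + bx + x]
                     - (int)ref[(oy + y) * width + ox + x]);
    cost[my * range + mx] = (float)sad;
}
```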
7

Sheybani, Ehsan. "Real-Time Digital Signal Processing-Based Algorithm for Universal Software Radio Peripheral to Detect GPS Signal." In Strategic Innovations and Interdisciplinary Perspectives in Telecommunications and Networking, 241–54. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8188-8.ch013.

Abstract:
Software-defined radios (SDR) are gradually becoming a practical option for implementing RF communication systems due to their low cost, off-the-shelf availability, and flexibility. Although the analog limitations of the hardware devices in these systems create barriers to some applications, creative algorithms in digital signal processing (DSP) can improve the results. In some cases, this improvement is essential to establishing robust and reliable communication. The universal software radio peripheral (USRP) is a popular hardware platform that can be used alongside SDR. Among the many capabilities of the USRP and its interchangeable daughterboards is receiving GPS signals. The GPS satellites transmit data on two main frequencies, L1 (1575.42 MHz) and L2 (1227.60 MHz). In this chapter, the focus is on describing a detailed implementation of the real-time DSP-based algorithm for the USRP to detect the GPS signal, namely the L1 band transmitted at 1575.42 MHz.
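A minimal sketch of the correlation at the core of such a detector, assuming the carrier/Doppler wipe-off has already been applied for the bin under test and that the C/A code replica has been resampled to the input rate (a generic serial-search correlator, not the chapter's exact algorithm):

```cuda
// One thread per candidate code phase: correlate ~1 ms of complex baseband
// samples against the local C/A replica and report the detection power.
__global__ void serial_search(const float2* iq, const float* ca,
                              int n, float* power)
{
    int tau = blockIdx.x * blockDim.x + threadIdx.x;  // candidate code phase
    if (tau >= n) return;
    float i_acc = 0.f, q_acc = 0.f;
    for (int k = 0; k < n; ++k) {
        float c = ca[(k + tau) % n];  // +/-1 chips resampled to the sample rate
        i_acc += iq[k].x * c;
        q_acc += iq[k].y * c;
    }
    power[tau] = i_acc * i_acc + q_acc * q_acc;  // non-coherent metric; a peak
                                                 // above threshold = signal found
}
```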
8

Goodman, David. "Precision Agriculture: Big Data Analytics, Farm Support Platforms, and Concentration in the AgTech Space." In Transforming Agriculture and Foodways, 12–20. Policy Press, 2023. http://dx.doi.org/10.1332/policypress/9781529231465.003.0002.

Abstract:
This chapter examines the digitalisation of major field crop production in the US and Western Europe with the emergence of PA, aka ‘smart farming’, with the diffusion of GPS-assisted mapping, GPS auto-steer farm equipment, and site-specific variable rate technologies for input applications. These innovations are instrumental in the transition from field-level management to a high-resolution sub-field scale. The chapter discusses the role of Big Data analytics in this transition and corporate farm management platforms, whose emergence set the stage for a series of recent mega-mergers in the ‘seed-chemical complex’ of agricultural life science companies. This round of consolidation was triggered by Monsanto’s acquisition of Climate Corporation in 2013 and this complex is now dominated by the ‘Big Four’. A key argument is that PA technologies and corporate support platforms offering algorithmic ‘prescriptions’ for variable rate applications of farm inputs have reinforced farmer lock-in to the hegemonic paradigm of industrial agriculture.
APA, Harvard, Vancouver, ISO, and other styles
9

Kim, Donghyun, Jae-Ryul Shin, and Hwang-Hui Jeong. "A Study of FDS Computational Performance in Heterogeneous Hardware Architectures - Applied for grassland fires." In Advances in Forest Fire Research 2022, 494–97. Imprensa da Universidade de Coimbra, 2022. http://dx.doi.org/10.14195/978-989-26-2298-9_76.

Full text
Abstract:
Fire Dynamics Simulator (FDS), a fire simulation program, applies the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) libraries for large-scale simulation. Using MPI, FDS can divide a simulation problem across a computing cluster: the entire domain to be analyzed is split into several sub-domains, and each sub-domain is computed by an individual computer with an individual processor. FDS thus supports two-level parallelization: it first decomposes the domain into sub-domains, then applies multi-threading within each sub-domain, using the OpenMP library to implement the multi-threading. In this study, OpenACC, a parallelization technique capable of using heterogeneous hardware architectures, was partially applied to FDS. Computational performance is evaluated on the CSIRO Grassland Fires verification case of FDS. The evaluation hardware was a personal computer with dual Xeon 2678-V3 processors and a GeForce GTX 1070. OpenACC was applied to the FDS source code using the PGI Fortran compiler in a Linux environment. In terms of computational performance, calculations using the CPU and GPU together are 1.89 times faster than those using a single CPU; with 1 GPU and 16 CPUs (MPP + OpenACC), the analysis is 21 times faster. On this basis, a grassland fire analysis with WFDS was performed.
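The two-level decomposition described above can be pictured with a toy example. The sketch below splits a 1-D diffusion grid into sub-domains with one-cell halos and advances each independently — a plain-NumPy stand-in for FDS's MPI rank-per-mesh scheme (with the inner loop as the part OpenMP or OpenACC would accelerate), not FDS code itself.

```python
import numpy as np

def step_subdomain(u: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """One explicit diffusion step on a sub-domain including its halo cells.
    Boundary/halo cells are left unchanged (they come from the neighbours)."""
    out = u.copy()
    out[1:-1] = u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
    return out

def run(field: np.ndarray, n_sub: int = 4, steps: int = 100) -> np.ndarray:
    """Split `field` into n_sub sub-domains, step each one, exchange halos.

    In FDS each sub-domain would live on its own MPI rank; here they are
    just slices of one array, refreshed from the global field every step.
    """
    bounds = np.linspace(0, field.size, n_sub + 1).astype(int)
    for _ in range(steps):
        new = field.copy()
        for i in range(n_sub):
            lo, hi = bounds[i], bounds[i + 1]
            g_lo, g_hi = max(lo - 1, 0), min(hi + 1, field.size)  # halo extent
            sub = step_subdomain(field[g_lo:g_hi])
            new[lo:hi] = sub[lo - g_lo : sub.size - (g_hi - hi)]
        field = new
    return field

print(run(np.linspace(0.0, 1.0, 100))[:5])
```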
APA, Harvard, Vancouver, ISO, and other styles
10

Paradzayi, Charles, and Harold J. Annegarn. "Estimating Potential Woody Biomass in Communal Savanna Woodlands from Synthetic Aperture Radar (SAR)." In Geographic Information Systems, 880–89. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2038-4.ch054.

Full text
Abstract:
Recent developments in Synthetic Aperture Radar (SAR) technologies have shown their potential for assessing and quantifying above-ground biomass (AGB) at landscape levels in different biomes. This paper examines the application of full polarimetric data to retrieve information related to potential woody biomass in sparse communal savanna woodlands in southern Africa using the Advanced Land Observation Satellite’s Phased Array L-band Synthetic Aperture Radar (ALOS PALSAR). Woody vegetation classes were obtained from the unsupervised entropy/alpha Wishart classification of the full polarimetric ALOS/PALSAR data. A combination of Differential GPS and conventional surveying techniques was used for a field inventory survey to estimate plot-level biomass densities in Welverdiend communal woodlands of South Africa. Regression analysis was used to derive the logarithmic relationship between the sampled plot AGB densities and the mean backscatter intensity of the microwave signal, which is transmitted in the horizontal plane and received in the vertical plane (HV). The AGB density for each woody vegetation class is estimated by solving the logarithmic equation after extracting the mean HV backscatter intensity for the particular vegetation class. The potential woody biomass is estimated from the derived AGB densities and the areal extent of the respective woody vegetation classes.
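The per-class estimation step reduces to fitting and inverting one curve. The sketch below assumes the form AGB = a·ln(σ_HV) + b as one plausible shape of the paper's "logarithmic relationship"; the sample values are invented for illustration.

```python
import numpy as np

# Hypothetical plot-level data: mean HV backscatter (linear power units)
# and field-measured above-ground biomass density (t/ha).
sigma_hv = np.array([0.010, 0.016, 0.025, 0.040, 0.063])
agb = np.array([8.0, 14.0, 21.0, 30.0, 41.0])

# Fit AGB = a * ln(sigma_hv) + b by ordinary least squares.
a, b = np.polyfit(np.log(sigma_hv), agb, 1)

def estimate_class_agb(mean_hv: float, area_ha: float) -> float:
    """Total biomass for a vegetation class from its mean HV backscatter
    and mapped areal extent, mirroring the per-class step in the abstract."""
    density = a * np.log(mean_hv) + b        # t/ha
    return float(density * area_ha)          # tonnes

print(estimate_class_agb(0.03, area_ha=120.0))
```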
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Applications sur GPUs"

1

Xiao, Zhicheng, Yiming Meng, Jianping Li, Chengwei Fan, and Hua Ouyang. "Experimental study on the uncertainties of individual blade vibrational parameter estimation based on blade tip timing." In GPPS Xi'an21. GPPS, 2022. http://dx.doi.org/10.33737/gpps21-tc-73.

Full text
Abstract:
Detailed and accurate measurement is fundamentally important in aeroelastic research. Blade tip timing (BTT) is a promising non-intrusive vibration measuring approach. However, the uncertainty of BTT has not been evaluated adequately. In this paper, the precision of individual blade vibrational parameter recognition based on BTT data is studied experimentally. 'Ground truth' tip deflection data is obtained by laser displacement sensors during synchronous, non-synchronous, and compound vibrations. On this basis, algorithms such as sine-fitting, the non-uniform Fourier transform, and compressed sensing are investigated. It is found that for a unimodal signal with a high signal-to-noise ratio (SNR), these algorithms achieve ideal results. For complex vibration responses, their performance degrades dramatically: sine-fitting can yield the wrong relative magnitudes of the components; the non-uniform Fourier transform gives heavily aliased spectra but acceptable amplitude estimates; and compressed sensing can resolve all the main components, but the estimated amplitudes are too small. Our results provide a reference for the practical application of BTT.
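Of the three algorithms compared, sine-fitting is the simplest to state: given an assumed response frequency, the tip deflections sampled once per revolution are fit by least squares. The sketch below is a generic illustration of that idea, not the authors' implementation; the frequencies and noise level are arbitrary.

```python
import numpy as np

def sine_fit(t: np.ndarray, y: np.ndarray, freq_hz: float):
    """Least-squares fit of y ~ A*sin(2*pi*f*t) + B*cos(2*pi*f*t) + C.

    t: blade arrival times (s), non-uniform and typically sub-Nyquist;
    y: measured tip deflections. Returns (amplitude, phase_rad, offset).
    """
    w = 2 * np.pi * freq_hz
    design = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(design, y, rcond=None)
    return float(np.hypot(a, b)), float(np.arctan2(b, a)), float(c)

# Synthetic check: a 120 Hz response sampled non-uniformly at ~50 Hz,
# i.e. well below Nyquist -- the regime BTT operates in.
rng = np.random.default_rng(1)
t = np.cumsum(rng.uniform(0.018, 0.022, 200))       # jittered sample times
y = 0.5 * np.sin(2 * np.pi * 120 * t + 0.3) + 0.05 * rng.standard_normal(t.size)
amp, phase, off = sine_fit(t, y, 120.0)
print(amp, phase, off)   # amplitude should recover ~0.5, phase ~0.3
```

The fragility the paper reports appears when the response is multimodal: a single-frequency design matrix like this one then attributes energy to the wrong component.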
APA, Harvard, Vancouver, ISO, and other styles
2

Chandrashekar, B. N., K. Aditya Shastry, B. A. Manjunath, and V. Geetha. "Performance Model of HPC Application On CPU-GPU Platform*." In 2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon). IEEE, 2022. http://dx.doi.org/10.1109/mysurucon55714.2022.9972737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Heyn, Toby, Andrew Seidl, Hammad Mazhar, David Lamb, Alessandro Tasora, and Dan Negrut. "Enabling Computational Dynamics in Distributed Computing Environments Using a Heterogeneous Computing Template." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-48347.

Full text
Abstract:
This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and GPUs. The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of sub-domains that are each managed by a separate core/accelerator (CPU/GPU) pair. Five components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) a method for the geometric domain decomposition and mapping onto heterogeneous hardware; (b) methods for proximity computation or collision detection; (c) support for moving data among the corresponding hardware as elements move from sub-domain to sub-domain; (d) numerical methods for solving the specific dynamics problem of interest; and (e) tools for performing visualization and post-processing in a distributed manner. In this contribution, components (a) and (c) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics, i.e., task (b) above, is discussed separately and in the context of GPU computing. This task is shown to benefit from a two-order-of-magnitude gain in efficiency compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not imply its endorsement, recommendation, or favoring by the US Army. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Army, and shall not be used for advertising or product endorsement purposes.
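Component (b), proximity computation, is typically a broad-phase pass run before exact contact resolution. The uniform-grid binning below is one standard GPU-friendly approach, sketched in NumPy for equal-radius spheres; it is an illustration of the general technique, not the paper's implementation.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

def broad_phase(centers: np.ndarray, radius: float):
    """Uniform-grid broad phase for equal-radius spheres.

    Bins each sphere by cell, then tests only pairs sharing a cell or
    occupying neighbouring cells. Because each cell's candidate list is
    independent, this binning style maps naturally onto GPU threads.
    """
    cell = 2.0 * radius                      # cell edge >= sphere diameter
    keys = np.floor(centers / cell).astype(int)
    grid = defaultdict(list)
    for i, k in enumerate(map(tuple, keys)):
        grid[k].append(i)

    pairs = set()
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
               for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    for k, members in grid.items():
        cand = list(members)
        for off in offsets:
            nk = (k[0] + off[0], k[1] + off[1], k[2] + off[2])
            if nk != k and nk in grid:
                cand += grid[nk]
        for i, j in combinations(sorted(set(cand)), 2):
            if np.linalg.norm(centers[i] - centers[j]) < 2.0 * radius:
                pairs.add((i, j))
    return pairs

pts = np.random.default_rng(2).uniform(0, 10, (500, 3))
print(len(broad_phase(pts, radius=0.3)))   # number of contacting pairs
```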
APA, Harvard, Vancouver, ISO, and other styles
4

Guzman, Fabio A., Carolina M. Gomez, and Valeria Usuga. "Power efficient GPU-based accelerator for Embedded SDR applications." In 2018 IEEE Colombian Conference on Communications and Computing (COLCOM). IEEE, 2018. http://dx.doi.org/10.1109/colcomcon.2018.8466711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhu, Hong, Huaping Xu, and Liang Feng. "Application of GPU for missile-borne SAR raw signal simulation." In 2011 2nd International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC). IEEE, 2011. http://dx.doi.org/10.1109/aimsec.2011.6010350.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Wei, Houxiang Zhang, and Ottar L. Osen. "A UAV SAR Prototype for Marine and Arctic Application." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-61264.

Full text
Abstract:
SAR (Synthetic Aperture Radar) systems are special types of radar that produce high-resolution images (comparable to optical sensors) in all weather conditions, night and day. SAR sensors have many uses in marine and arctic applications. In this paper, a compact SAR prototype system is developed for a UAV (Unmanned Aerial Vehicle) platform. The radar operates in FMCW (Frequency-Modulated Continuous-Wave) mode. The system integrates a high-performance motion compensation module based on RTK (Real-Time Kinematic) GPS and an IMU (Inertial Measurement Unit), together with an FPGA (Field-Programmable Gate Array) based controller and signal processing module. It achieves a resolution of 0.3 m at a weight below 2 kg. It has been tested and verified on a guide rail and a car, and integrated on a rotary-wing UAV. The system will extend the capability of UAVs in marine and arctic remote sensing.
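The quoted 0.3 m resolution pins down the chirp bandwidth through the standard FMCW relation ΔR = c/(2B). The quick check below is ours, not from the paper:

```python
C = 299_792_458.0          # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """FMCW (and pulse-compression) slant-range resolution: dR = c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def required_bandwidth(resolution_m: float) -> float:
    """Invert the relation: the chirp bandwidth needed for a target dR."""
    return C / (2.0 * resolution_m)

print(required_bandwidth(0.3) / 1e6, "MHz")   # ~500 MHz for 0.3 m resolution
```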
APA, Harvard, Vancouver, ISO, and other styles
7

Lohnert, Erwin, Wolfgang Bar, Eckart Gohler, and Jochen Mollmer. "Galileo / GPS indoor navigation & positioning for SAR and tracking applications." In 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE, 2010. http://dx.doi.org/10.1109/ipin.2010.5647475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ashjaee, J. M. "Sub-Meter Accuracy With GPS C/A Code In Dynamic Applications." In Offshore Technology Conference. Offshore Technology Conference, 1986. http://dx.doi.org/10.4043/5272-ms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Stockamp, Julia, Zhenhong Li, Paul Bishop, Jim Hansom, Alistair Rennie, Elizabeth Petrie, Akiko Tanaka, Richard Bingley, and Dionne Hansen. "Investigating Glacial Isostatic Adjustment in Scotland with InSAR and GPS Observations." In Fringe2015: Advances in the Science and Applications of SAR Interferometry and Sentinel-1 InSAR Workshop. European Space Agency, 2015. http://dx.doi.org/10.5270/fringe2015.pp171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Zhen, Paul Lundgren, and Scott Hensley. "Mapping Fault Slip in Central California using Satellite, Airborne InSAR and GPS." In Fringe2015: Advances in the Science and Applications of SAR Interferometry and Sentinel-1 InSAR Workshop. European Space Agency, 2015. http://dx.doi.org/10.5270/fringe2015.pp187.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Applications sur GPUs"

1

Psuty, Norbert, Mark Duffy, Dennis Skidds, Tanya Silveira, Andrea Habeck, Katherine Ames, and Glenn Liu. Northeast Coastal and Barrier Network Geomorphological Monitoring Protocol: Part I—Ocean Shoreline Position, Version 2. National Park Service, June 2022. http://dx.doi.org/10.36967/2293713.

Full text
Abstract:
Following a review of Vital Signs – indicators of ecosystem health – in the coastal parks of the Northeast Coastal and Barrier Network (NCBN), knowledge of shoreline change was ranked as the top variable for monitoring. Shoreline change is a basic element in the management of any coastal system because it contributes to the understanding of the functioning of the natural resources and to the administration of the cultural resources within the parks. Collection of information on the vectors of change relies on the establishment of a rigorous system of protocols to monitor elements of the coastal geomorphology that are guided by three basic principles: 1) all of the elements in the protocols are to be based on scientific principles; 2) the products of the monitoring must relate to issues of importance to park management; and 3) the application of the protocols must be capable of implementation at the local level within the NCBN. Changes in ocean shoreline position are recognized as interacting with many other elements of the Ocean Beach-Dune Ecosystem and are thus both driving and responding to the variety of natural and cultural factors active at the coast at a variety of temporal and spatial scales. The direction and magnitude of shoreline change can be monitored through the application of a protocol that tracks the spatial position of the neap-tide, high tide swash line under well-defined conditions of temporal sampling. Spring and fall surveys conducted in accordance with standard operating procedures will generate consistent and comparable shoreline position data sets that can be incorporated within a data matrix and subsequently analyzed for temporal and spatial variations. The Ocean Shoreline Position Monitoring Protocol will be applied to six parks in the NCBN: Assateague Island National Seashore, Cape Cod National Seashore, Fire Island National Seashore, Gateway National Recreation Area, George Washington Birthplace National Monument, and Sagamore Hill National Historic Site. Monitoring will be accomplished with a Global Positioning System (GPS)/Global Navigation Satellite System (GNSS) unit capable of sub-meter horizontal accuracy that is usually mounted on an off-road vehicle and driven along the swash line. Under the guidance of a set of Standard Operating Procedures (SOPs) (Psuty et al., 2022), the monitoring will generate comparable data sets. The protocol will produce shoreline change metrics following the methodology of the Digital Shoreline Analysis System developed by the United States Geological Survey. Annual Data Summaries and Trend Reports will present and analyze the collected data sets. All collected data will undergo rigorous quality-assurance and quality-control procedures and will be archived at the offices of the NCBN. All monitoring products will be made available via the National Park Service’s Integrated Resource Management Applications Portal.
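The Digital Shoreline Analysis System metrics the protocol feeds are simple per-transect statistics. Below is a hedged sketch of two of them — the End Point Rate (EPR) and the Linear Regression Rate (LRR) — computed from hypothetical baseline-to-shoreline distances; the survey dates and distances are invented for illustration.

```python
import numpy as np

# Hypothetical per-transect measurements: decimal survey dates and the
# distance (m) from a fixed baseline to the surveyed swash line.
years = np.array([2015.4, 2015.9, 2016.4, 2016.9, 2017.4, 2017.9])
dist_m = np.array([52.1, 50.8, 49.9, 50.3, 48.6, 47.9])

def end_point_rate(t, d) -> float:
    """EPR: net shoreline movement between oldest and newest surveys (m/yr)."""
    return (d[-1] - d[0]) / (t[-1] - t[0])

def linear_regression_rate(t, d) -> float:
    """LRR: slope of a least-squares line through all shoreline positions;
    less sensitive to a single anomalous survey than the EPR."""
    slope, _ = np.polyfit(t, d, 1)
    return float(slope)

print(f"EPR: {end_point_rate(years, dist_m):+.2f} m/yr")
print(f"LRR: {linear_regression_rate(years, dist_m):+.2f} m/yr")
```

Negative rates indicate shoreline retreat toward the baseline; the semi-annual (spring/fall) surveys in the protocol supply exactly the (date, distance) series these rates need.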
APA, Harvard, Vancouver, ISO, and other styles
