Contents
Selection of scholarly literature on the topic "Calcul parallèle sur cartes graphique"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Calcul parallèle sur cartes graphique".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, if the relevant parameters are provided in the metadata.
Journal articles on the topic "Calcul parallèle sur cartes graphique"
Cléry, Isabelle, and Marc Pierrot-Deseilligny. "Une interface ergonomique de calcul de modèles 3D par photogrammétrie". Revue Française de Photogrammétrie et de Télédétection, no. 196 (April 15, 2014): 40–51. http://dx.doi.org/10.52638/rfpt.2011.36.
HOUZET, Dominique. "Calcul généraliste sur carte graphique - Du rendu au calcul massivement parallèle". Le traitement du signal et ses applications, August 2016. http://dx.doi.org/10.51257/a-v2-te5990.
ANDRADE, Guillermo B. "Calcul généraliste sur carte graphique - Du rendu au calcul massivement parallèle". Technologies logicielles Architectures des systèmes, February 2010. http://dx.doi.org/10.51257/a-v1-te5990.
Dissertations on the topic "Calcul parallèle sur cartes graphique"
Hugues, Maxime. "Un paradigme de programmation multi-niveaux pour le calcul numérique sur les machines post-petascales et exascales". Thesis, Lille 1, 2011. http://www.theses.fr/2011LIL10146/document.
The coming of post-petascale and exascale supercomputers offers the prospect of accelerating the solving of engineering problems and of enabling highly complex modeling. However, these future systems challenge the computer scientists who must build such machines. Many issues must be faced, such as fault tolerance, energy consumption, and the programming of these complex systems composed of billions of cores. In this thesis, we focus on the programming aspect and propose a multi-level programming paradigm composed of three levels. For the low level, a data-parallel paradigm is proposed to program many-core processors, because of its focus on data mapping and movement. We have implemented and evaluated SpMV with various sparse matrix formats on GPU to illustrate this point. For the intermediate level, we propose a message-passing paradigm in order to optimize inter-socket and inter-node communications. For the high level, a graph description paradigm is proposed to program and manage the parallelism between nodes. With a dense matrix inversion method developed in YML, we underline the interest of graphs for reducing the time to solution and for supporting asynchronous communications in a transparent way. The interest of graphs is also demonstrated for I/O optimizations and for their direct support in the programming model. We finally conclude by analyzing such a programming paradigm proposal for exascale machines and outline directions for future work.
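To make the low-level data-parallel layer concrete, the kind of kernel this abstract evaluates can be sketched as a sparse matrix-vector product (SpMV) in the common CSR format, written in CUDA. This is a minimal, generic sketch with a one-thread-per-row mapping and hypothetical names, not the thesis's actual implementation, which compares several sparse formats.

```cuda
#include <cuda_runtime.h>

// Minimal CSR sparse matrix-vector product y = A * x.
// One thread per row; row_ptr has n_rows + 1 entries, and
// col_idx / values hold the nonzeros row by row.
__global__ void spmv_csr(int n_rows,
                         const int *row_ptr, const int *col_idx,
                         const float *values, const float *x, float *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n_rows) {
        float sum = 0.0f;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += values[j] * x[col_idx[j]];
        y[row] = sum;
    }
}
```

Alternative formats such as ELL or COO change the memory layout and load balance of exactly this inner loop, which is the data mapping and movement concern that the low-level paradigm targets.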
Rizk, Guillaume. "Parallélisation sur matériel graphique : contributions au repliement d'ARN et à l'alignement de séquences". PhD thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00634901.
Tran, Tuan Tu. "Comparaisons de séquences biologiques sur architecture massivement multi-cœurs". PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00832663.
Der volle Inhalt der QuelleBrias, Antoine. „Conjurer la malédiction de la dimension dans le calcul du noyau de viabilité à l'aide de parallélisation sur carte graphique et de la théorie de la fiabilité : application à des dynamiques environnementales“. Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22778/document.
Viability theory provides tools to maintain a dynamical system within a constraint domain. The main concept of this theory is the viability kernel, the set of initial states from which there is at least one controlled trajectory remaining in the constraint domain. However, the time and space needed to calculate the viability kernel increase exponentially with the number of dimensions of the problem. This issue is known as "the curse of dimensionality". The curse is even more present when applying viability theory to uncertain systems; in this case, the viability kernel is the set of states for which there is at least one control strategy that stays in the constraint domain with some probability until the time horizon. The objective of this thesis is to study and develop approaches to push back the curse of dimensionality. We propose two lines of research: parallel computing and the use of reliability theory tools. The results are illustrated by several applications. The first line explores the use of parallel computing on graphics cards. The version of the program using the graphics card is up to 20 times faster than the sequential version and handles problems up to dimension 7. Beyond the gains in computation time, our work shows that most of the resources are spent on the calculation of transition probabilities. This observation links to the second line of research, which proposes an algorithm that computes a stochastic approximation of viability kernels, using reliability methods to evaluate the transition probabilities. The memory space required by this algorithm grows linearly with the number of states of the grid, unlike the memory space required by the conventional dynamic programming algorithm, which depends quadratically on the number of states. These approaches may enable the use of viability theory for high-dimensional systems. We applied them to a phosphorus dynamics model for the management of Lake Bourget eutrophication, previously calibrated from experimental data. In addition, the relationship between reliability and viability is highlighted with an application of stochastic viability kernel computation, otherwise known as the reliability kernel, to reliable design in the case of a corroded beam.
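The grid-based viability computation described above can be sketched as one GPU refinement pass: each thread handles one grid state and keeps it viable only if at least one control maps it back into the current viable set, and the host repeats the pass until a fixed point is reached. The 1-D grid, the linear dynamics, and all names below are a hypothetical toy setup for illustration, not the thesis's code.

```cuda
#include <cuda_runtime.h>

// One refinement pass over a 1-D state grid with x[i] = x_min + i * dx.
// A state stays viable if some control keeps its successor inside the
// currently viable set; the host iterates this kernel to a fixed point.
__global__ void refine_viable(int n_states, int n_controls,
                              const float *u,               // control values
                              const unsigned char *viable_in,
                              unsigned char *viable_out,
                              float x_min, float dx, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_states) return;

    unsigned char keep = 0;
    if (viable_in[i]) {
        float xi = x_min + i * dx;
        for (int k = 0; k < n_controls && !keep; ++k) {
            // Hypothetical dynamics: x' = x + dt * (-0.5 * x + u_k).
            float x_next = xi + dt * (-0.5f * xi + u[k]);
            int j = (int)((x_next - x_min) / dx + 0.5f);  // nearest grid point
            if (j >= 0 && j < n_states && viable_in[j]) keep = 1;
        }
    }
    viable_out[i] = keep;
}
```

Because every state is tested independently, the pass parallelizes trivially; the exponential cost lies in the size of the grid itself, which is what the reliability-based approximation attacks.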
Quinto, Michele Arcangelo. "Méthode de reconstruction adaptive en tomographie par rayons X : optimisation sur architectures parallèles de type GPU". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT109/document.
Tomographic reconstruction from projection data is an inverse problem widely used in the medical imaging field. With a sufficiently large number of projections over the required angle, FBP (filtered backprojection) algorithms allow fast and accurate reconstructions. However, in the case of limited views (low-dose imaging) and/or limited angle (specific constraints of the setup), the data available for inversion are not complete, the problem becomes more ill-conditioned, and the results show significant artifacts. In these situations, an alternative approach to reconstruction, based on a discrete model of the problem, consists in using an iterative algorithm or a statistical modeling of the problem to compute an estimate of the unknown object. These methods are classically based on a discretization of the volume into a set of voxels and provide 3D maps of densities. Computation time and memory storage are their main disadvantages. Moreover, whatever the application, the volumes are segmented for quantitative analysis. Numerous segmentation methods, with different interpretations of the contours and various minimized energy functionals, are available, and the results can depend on the chosen method. This thesis presents a novel approach that performs tomographic reconstruction simultaneously with segmentation of the different materials of the object. The reconstruction process is no longer based on a regular grid of pixels (resp. voxels) but on a mesh of non-regular triangles (resp. tetrahedra) adapted to the shape of the studied object. After an initialization step, the method alternates iteratively between three main steps (reconstruction, segmentation, and mesh adaptation) until convergence. Iterative reconstruction algorithms used in a conventional way have been adapted and optimized to run on irregular grids of triangular or tetrahedral elements. For segmentation, two methods, one based on a parametric approach (snake) and the other on a geometric approach (level set), have been implemented to handle mono- and multi-material objects. The adaptation of the mesh to the content of the estimated image is based on the previously segmented contours, making the mesh progressively coarser from the edges towards the limits of the reconstruction domain. At the end of the process, the result is a classical tomographic image in gray levels, but its representation by a mesh adapted to its content provides a corresponding segmentation. The results show that the method provides reliable reconstructions and drastically decreases the memory storage. In this context, the projection operators have been implemented on a parallel GPU architecture. A first 2D version shows the feasibility of the full process, and an optimized version of the 3D operators provides more efficient computations.
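As a rough illustration of the projection operators mentioned at the end of this abstract, here is a naive pixel-driven 2-D backprojection kernel in CUDA for parallel-beam geometry over a regular grid (the baseline that the thesis's adaptive-mesh operators improve upon). The names, the nearest-neighbor detector lookup, and the geometry conventions are assumptions made for this sketch.

```cuda
#include <cuda_runtime.h>

// Naive pixel-driven 2-D backprojection: each thread accumulates the
// (pre-filtered) sinogram contributions of one pixel over all angles.
// Parallel-beam geometry, nearest-neighbor detector lookup for brevity.
__global__ void backproject_2d(int nx, int ny, int n_angles, int n_dets,
                               const float *sinogram,  // n_angles * n_dets
                               const float *angles,    // in radians
                               float det_spacing, float *image)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= nx || py >= ny) return;

    // Pixel coordinates centered on the image.
    float x = px - 0.5f * nx;
    float y = py - 0.5f * ny;

    float acc = 0.0f;
    for (int a = 0; a < n_angles; ++a) {
        // Signed distance from the pixel to the detector's central ray.
        float t = x * cosf(angles[a]) + y * sinf(angles[a]);
        int det = (int)(t / det_spacing + 0.5f * n_dets);
        if (det >= 0 && det < n_dets)
            acc += sinogram[a * n_dets + det];
    }
    image[py * nx + px] = acc * 3.14159265f / n_angles;  // FBP weighting
}
```

On an adaptive triangular mesh the same accumulation runs over mesh elements instead of pixels, which is where the irregular-grid adaptations described in the abstract come in.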