Scientific literature on the topic "Algoritmi paralleli"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Algoritmi paralleli".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Algoritmi paralleli"

1

Di Viggiano, Pasquale Luigi. "DEMOCRAZIA DIGITALE COME DIFFERENZA:". Revista da Faculdade Mineira de Direito 24, no. 48 (18 March 2022): 64–78. http://dx.doi.org/10.5752/p.2318-7999.2021v24n48p64-78.

Full text
Abstract:
Contemporary digital social and political participation (e-Democracy) is a product of the digitalization of the State and its apparatus, characterized by the production of new rights made possible by communication technologies. The digitalization of State bodies through new technologies based on intelligent algorithms, together with the rules on the information and communication society, has triggered the production of so-called "new rights" whose enforceability broadens the concept of democracy, establishing a difference between the traditional government of public affairs and the growing claims of communities increasingly tied to the digital communication system. The rights to access the Internet and the network, to e-voting, to communicate with the public administration through new technologies, and to receive digital public services are paralleled by duties of the State to satisfy these new rights. At the same time, the risk grows that forms of digital participation may produce intolerable levels of exclusion that undermine democracy. Observing and describing, with the conceptual tools of the Centro di Studi sul Rischio, how the systems of law, politics and society evolve through their relationship with the digital ecosystem driven by the archipelago of artificial intelligences is the objective and the challenge: always uncertain in its outcomes, always new in its findings, yet always stimulating and fruitful from the standpoint of social, political and legal research.
2

TRINDER, P. W., K. HAMMOND, H. W. LOIDL, and S. L. PEYTON JONES. "Algorithm + strategy = parallelism". Journal of Functional Programming 8, no. 1 (January 1998): 23–60. http://dx.doi.org/10.1017/s0956796897002967.

Full text
Abstract:
The process of writing large parallel programs is complicated by the need to specify both the parallel behaviour of the program and the algorithm that is to be used to compute its result. This paper introduces evaluation strategies: lazy higher-order functions that control the parallel evaluation of non-strict functional languages. Using evaluation strategies, it is possible to achieve a clean separation between algorithmic and behavioural code. The result is enhanced clarity and shorter parallel programs. Evaluation strategies are a very general concept: this paper shows how they can be used to model a wide range of commonly used programming paradigms, including divide-and-conquer parallelism, pipeline parallelism, producer/consumer parallelism, and data-oriented parallelism. Because they are based on unrestricted higher-order functions, they can also capture irregular parallel structures. Evaluation strategies are not just of theoretical interest: they have evolved out of our experience in parallelising several large-scale parallel applications, where they have proved invaluable in helping to manage the complexities of parallel behaviour. Some of these applications are described in detail here. The largest application we have studied to date, Lolita, is a 40,000 line natural language engineering system. Initial results show that for these programs we can achieve acceptable parallel performance, for relatively little programming effort.
3

Bassil, Youssef. "Implementation of Computational Algorithms using Parallel Programming". International Journal of Trend in Scientific Research and Development Volume-3, Issue-3 (30 April 2019): 704–10. http://dx.doi.org/10.31142/ijtsrd22947.

Full text
4

Deng, An-Wen, and Chih-Ying Gwo. "Parallel Computing Zernike Moments via Combined Algorithms". SIJ Transactions on Computer Science Engineering & its Applications (CSEA) 04, no. 03 (28 June 2016): 01–09. http://dx.doi.org/10.9756/sijcsea/v4i3/04020050101.

Full text
5

Shu, Qin, Xiuli He, Chang Wang, and Yunxiu Yang. "Parallel registration algorithm with arbitrary affine transformation". Chinese Optics Letters 18, no. 7 (2020): 071001. http://dx.doi.org/10.3788/col202018.071001.

Full text
6

Liu, Yu, and Yi Xiao. "Parallel Solution of Magnetotelluric Occam Inversion Algorithm Based on Hybrid MPI/OpenMP Model". Applied Mechanics and Materials 602-605 (August 2014): 3751–54. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3751.

Full text
Abstract:
In order to improve the efficiency of magnetotelluric Occam inversion algorithm (MT Occam), a parallel algorithm is implemented on a hybrid MPI/OpenMP parallel programming model to increase its convergence speed and to decrease the operation time. MT Occam is partitioned to map the task on the parallel model. The parallel algorithm implements the coarse-grained parallelism between computation nodes and fine-grained parallelism between cores within each node. By analyzing the data dependency, the computing tasks are accurately partitioned so as to reduce transmission time. The experimental results show that with the increase of model scale, higher speedup can be obtained. The high efficiency of the parallel partitioning strategy of the model can improve the scalability of the parallel algorithm.
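For illustration, the coarse/fine-grained hybrid pattern described in this abstract can be sketched in a few lines of C; the work distribution, the per-cell computation and all constants below are placeholders chosen for this sketch, not taken from the paper.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* Coarse grain: each MPI rank takes a contiguous block of cells.
   Fine grain: OpenMP threads split the loop inside the block. */
int main(int argc, char **argv) {
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long N = 1000000;                          /* total number of cells (placeholder) */
    long chunk = (N + size - 1) / size;
    long lo = rank * chunk;
    long hi = (lo + chunk > N) ? N : lo + chunk;

    double local = 0.0, global = 0.0;
    #pragma omp parallel for reduction(+:local)      /* fine-grained parallelism within the node */
    for (long i = lo; i < hi; i++)
        local += 1.0 / (1.0 + (double)i);            /* stand-in for the per-cell computation */

    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("result = %f\n", global);
    MPI_Finalize();
    return 0;
}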
7

DIKAIAKOS, MARIOS D., ANNE ROGERS, and KENNETH STEIGLITZ. "FUNCTIONAL ALGORITHM SIMULATION OF THE FAST MULTIPOLE METHOD: ARCHITECTURAL IMPLICATIONS". Parallel Processing Letters 06, no. 01 (March 1996): 55–66. http://dx.doi.org/10.1142/s0129626496000078.

Full text
Abstract:
Functional Algorithm Simulation is a methodology for predicting the computation and communication characteristics of parallel algorithms for a class of scientific problems, without actually performing the expensive numerical computations involved. In this paper, we use Functional Algorithm Simulation to study the parallel Fast Multipole Method (FMM), which solves the N-body problem. Functional Algorithm Simulation provides us with useful information regarding communication patterns in the algorithm, the variation of available parallelism during different algorithmic phases, and upper bounds on available speedups for different problem sizes. Furthermore, it allows us to predict the performance of the FMM on message-passing multiprocessors with topologies such as cliques, hypercubes, rings, and multirings, over a wider range of problem sizes and numbers of processors than would be feasible by direct simulation. Our simulations show that an implementation of the FMM on low-cost, scalable ring or multiring architectures can attain satisfactory performance.
8

Hahne, Jens, Stephanie Friedhoff, and Matthias Bolten. "Algorithm 1016". ACM Transactions on Mathematical Software 47, no. 2 (April 2021): 1–22. http://dx.doi.org/10.1145/3446979.

Full text
Abstract:
In this article, we introduce the Python framework PyMGRIT, which implements the multigrid-reduction-in-time (MGRIT) algorithm for solving (non-)linear systems arising from the discretization of time-dependent problems. The MGRIT algorithm is a reduction-based iterative method that allows parallel-in-time simulations, i.e., calculating multiple time steps simultaneously in a simulation, using a time-grid hierarchy. The PyMGRIT framework includes many different variants of the MGRIT algorithm, ranging from different multigrid cycle types and relaxation schemes to various coarsening strategies, including time-only and space-time coarsening, and the ability to use different time integrators on different levels of the multigrid hierarchy. The comprehensive documentation, with tutorials and many examples, and the fully documented code make it easy to start working with the package. The functionality of the code is ensured by automated serial and parallel tests using continuous integration. PyMGRIT supports serial runs suitable for prototyping and testing of new approaches, as well as parallel runs using the Message Passing Interface (MPI). In this manuscript, we describe the implementation of the MGRIT algorithm in PyMGRIT and present its usage from both a user and a developer point of view. Three examples illustrate different aspects of the package itself, especially running tests with pure time parallelism, as well as space-time parallelism through the coupling of PyMGRIT with PETSc or Firedrake.
9

Najoui, Mohamed, Anas Hatim, Said Belkouch, and Noureddine Chabini. "Novel Implementation Approach with Enhanced Memory Access Performance of MGS Algorithm for VLIW Architecture". Journal of Circuits, Systems and Computers 29, no. 12 (19 February 2020): 2050200. http://dx.doi.org/10.1142/s021812662050200x.

Full text
Abstract:
The Modified Gram–Schmidt (MGS) algorithm is one of the best-known forms of QR decomposition (QRD). It has been used in many signal and image processing applications to solve least-squares problems and linear equations or to invert matrices. However, QRD is regarded as a computationally expensive technique, and its sequential implementation fails to meet the requirements of many real-time applications. In this paper, we suggest a new parallel version of the MGS algorithm that uses VLIW (Very Long Instruction Word) resources in an efficient way to achieve higher performance. The presented parallel MGS is based on compact VLIW kernels that have been designed for each algorithm step taking into account architectural and algorithmic constraints. Based on instruction scheduling and software pipelining techniques, the proposed kernels efficiently exploit data-, instruction- and loop-level parallelism. Additionally, cache memory properties were used to enhance parallel memory access and to avoid cache misses. The robustness, accuracy and speed of the introduced parallel MGS implementation on VLIW significantly enhance the performance of systems under severe real-time and low-power constraints. Experimental results show great improvements over the optimized vendor QRD implementation and the state of the art.
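For reference, a minimal C/OpenMP sketch of the standard Modified Gram–Schmidt iteration, with the column-update loop parallelized, is shown below. This is a generic textbook formulation under assumed storage conventions, not the VLIW-specific implementation described in the abstract; the matrix layout and the function name are illustrative.

#include <math.h>
#include <omp.h>

/* Modified Gram-Schmidt QR of an m x n column-major matrix A (assumed full column rank).
   A is overwritten with Q; only the upper triangle of the row-major n x n matrix R is written. */
void mgs_qr(int m, int n, double *A, double *R) {
    for (int k = 0; k < n; k++) {
        double nrm = 0.0;
        for (int i = 0; i < m; i++) nrm += A[k*m + i] * A[k*m + i];
        nrm = sqrt(nrm);
        R[k*n + k] = nrm;
        for (int i = 0; i < m; i++) A[k*m + i] /= nrm;
        /* The updates of the remaining columns are independent: this is the parallel part. */
        #pragma omp parallel for
        for (int j = k + 1; j < n; j++) {
            double r = 0.0;
            for (int i = 0; i < m; i++) r += A[k*m + i] * A[j*m + i];
            R[k*n + j] = r;
            for (int i = 0; i < m; i++) A[j*m + i] -= r * A[k*m + i];
        }
    }
}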
10

Li, Yong. "An Improved Parallel FFT Algorithm Based on the GPU". Advanced Materials Research 647 (January 2013): 880–84. http://dx.doi.org/10.4028/www.scientific.net/amr.647.880.

Full text
Abstract:
FFT is used extensively in digital signal processing and image processing, both of which involve large-scale computation, so improving its parallelism, and in particular obtaining efficient and scalable parallel FFT algorithms, has become more and more important. This paper improves the parallelism of the FFT algorithm based on the six-step FFT algorithm. GPU-based parallel computing is introduced to realize parallel FFT computation on a single machine and to improve the speed of the Fourier transform. With an optimization strategy in which the mapping hides the matrix transposition, the performance of the optimized parallel FFT algorithm is markedly improved by assigning the matrix calculations and butterfly computations to the GPU. Finally, the algorithm is applied to the design of a digital filter for seismic data.

Theses on the topic "Algoritmi paralleli"

1

Retico, Alessandro. "Algoritmi paralleli per l'allineamento di immagini". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19101/.

Full text
Abstract:
This thesis proposes an algorithm that, given a series of astronomical images of the same subject, tries to derive a better representation of it. Taking astronomical photographs is not easy: several precautions must be followed and the camera settings adjusted carefully. Amateur astronomers, who do not always have sophisticated equipment for photographing space, typically take a series of photos or record a video (from which they extract the frames) of the subject they want to capture through the lens of a telescope: in this way they capture details invisible to the human eye, but the resulting images contain a great deal of noise, which is why astrophotographs are almost always post-processed. A simple procedure for reducing noise is to produce a new image whose pixels are the average of the corresponding pixels across all the captured images. Since the capture technique described above is extremely sensitive to movement, the images must be aligned before averaging, removing the offset between them. Various more or less complex techniques exist for this procedure (also known as image registration): this thesis briefly presents some of them, but only a simple version based on brute force is implemented. The topic lends itself very well to parallel programming, so the implementation of the proposed algorithm exploits the OpenMP clauses and the SIMD instructions of a Raspberry Pi 2 B; this architecture was chosen for the considerable parallelization capabilities it offers despite its low cost, small size, and minimal power requirements.
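A minimal C/OpenMP sketch of the pixel-averaging step described in this abstract is given below; the data layout, types and function name are assumptions made for the sketch, and the alignment (registration) step that precedes the averaging is omitted.

#include <stdint.h>
#include <omp.h>

/* Average `count` already-aligned grayscale frames of size width x height into `out`. */
void average_frames(const uint8_t *const *frames, int count,
                    int width, int height, uint8_t *out) {
    long npix = (long)width * height;
    #pragma omp parallel for
    for (long p = 0; p < npix; p++) {
        unsigned long sum = 0;
        for (int k = 0; k < count; k++)
            sum += frames[k][p];                 /* accumulate the same pixel across all frames */
        out[p] = (uint8_t)(sum / count);
    }
}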
2

Tentarelli, Edoardo. "Architetture serverless per algoritmi massicciamente paralleli in ambito Industria 4.0". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20286/.

Full text
Abstract:
Edge computing makes it possible to run services directly on the production machines, avoiding sending requests to data centers outside the organization, with clear advantages in terms of latency and security. This execution model, widespread in the manufacturing industry, is driving a migration of services towards edge environments, but the limited amount of resources makes it difficult to deploy computationally heavy services on this model. Recently, platforms have appeared on the market that fully manage the execution environment, relieving the developer of any operational tasks. Every resource allocation is optimized transparently by the platform, guaranteeing high degrees of availability and fault tolerance for the services. This execution model is called serverless, and many organizations are migrating their services to such solutions. The goal of this study was to evaluate the performance of serverless computing for image-processing functions in edge environments. In particular, the study focused on massively parallel algorithms, for which the workload could be split into independent tasks. The experiments compared a serverless solution, in which parts of images were rotated in parallel, with a sequential solution, in which the rotation was applied to the whole image. The results show clear benefits for the serverless solution, which offers better scalability. Moreover, its resource consumption is considerably lower, making it better suited to edge environments and to the application use case examined. For these reasons, migrating CPU-intensive services to serverless architectures is recommended, in order to benefit from the savings and advantages offered by this kind of solution.
3

Montanari, Sofia. "Algoritmi paralleli per la trasformata di hough nell’individuazione di linee rette nelle immagini digitali". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
This thesis presents a study of the Hough algorithm, used to identify straight lines in digital images. An implementation of the serial version of the Hough algorithm is proposed; several solutions that allow the problem to be parallelized are then analyzed and their performance is studied.
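For illustration, here is a minimal C/OpenMP sketch of the voting step of the standard Hough transform for straight lines. The image layout, accumulator resolution and function name are assumptions for the sketch; this is not the implementation studied in the thesis.

#include <math.h>
#include <string.h>
#include <omp.h>

#define N_THETA 180   /* one accumulator column per degree (assumption) */

/* Vote into an n_rho x N_THETA accumulator for every foreground pixel of a binary edge image. */
void hough_lines(const unsigned char *edges, int width, int height, int *acc, int n_rho) {
    const double PI = 3.14159265358979323846;
    double rho_max = sqrt((double)width * width + (double)height * height);
    memset(acc, 0, (size_t)n_rho * N_THETA * sizeof(int));
    #pragma omp parallel for collapse(2)
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (!edges[y * width + x]) continue;
            for (int t = 0; t < N_THETA; t++) {
                double theta = t * PI / N_THETA;
                double rho = x * cos(theta) + y * sin(theta);   /* in [-rho_max, rho_max] */
                int r = (int)((rho + rho_max) / (2.0 * rho_max) * (n_rho - 1));
                #pragma omp atomic
                acc[r * N_THETA + t]++;          /* concurrent votes are serialized per bin */
            }
        }
    }
}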
4

Barocci, Federico. "Valutazione delle prestazioni di processori a basso consumo energetico in applicazioni parallele". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10506/.

Full text
Abstract:
In this thesis the OpenGL ES 2 graphics libraries were used to perform parallel computations on the GPU of the Raspberry Pi. Concepts concerning parallel computing, stream processing, GPGPU, and evaluation metrics for parallel algorithms are addressed and discussed. The potential and the limitations of using OpenGL to implement parallel algorithms are also described. In particular, the Seam Carving algorithm for image shrinking was considered, and a parallel implementation of it was developed and evaluated on the Raspberry Pi.
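For reference, the evaluation metrics for parallel algorithms mentioned in this abstract are commonly defined as speedup S_p = T_1 / T_p and efficiency E_p = S_p / p, where T_1 is the execution time of the serial version, T_p the execution time on p processing units, and values of E_p close to 1 indicate good use of the available parallelism (these are standard definitions, not quoted from the thesis).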
5

Dogan, Can. "Implementazione e valutazione delle prestazioni di algoritmi per il calcolo di cammini minimi su grafi". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
The aim of this thesis is to implement and analyze parallel graph algorithms. The algorithms in question are Bellman-Ford for single-source shortest paths and Floyd-Warshall for all-pairs shortest paths. Both algorithms are analyzed in depth, their execution times are compared with the serial versions, and their scalability is evaluated. To give a concrete idea of how and where they are used, some of the main applications that rely on these algorithms are presented. The chapters are organized as follows: the first part serves as an introduction and covers graphs and paths, the second part describes the algorithms in detail, the third part discusses the parallelization platform and the parallel implementations of the algorithms, and the last part analyzes the performance of the proposed implementations.
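As an illustration of one of the two algorithms named above, here is a minimal C/OpenMP sketch of a parallel Floyd-Warshall pass; the row-major data layout and the function name are assumptions, and the thesis's own implementation may differ.

#include <omp.h>

/* All-pairs shortest paths on an n x n row-major distance matrix;
   dist[i*n + j] holds the edge weight, or a large value when there is no edge. */
void floyd_warshall(int n, double *dist) {
    for (int k = 0; k < n; k++) {          /* the k stages must remain sequential      */
        #pragma omp parallel for           /* rows are independent within one stage    */
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double via_k = dist[i*n + k] + dist[k*n + j];
                if (via_k < dist[i*n + j]) dist[i*n + j] = via_k;
            }
        }
    }
}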
6

GALVAN, Stefano. "Perception-motivated parallel algorithms for haptics". Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/343948.

Full text
Abstract:
In recent years haptic feedback has been used in several applications, from mobile phones to rehabilitation, from video games to robot-aided surgery. Haptic devices, the interfaces that create the stimulation and reproduce the physical interaction with virtual or remote environments, have been studied, analyzed and developed in many ways. Every innovation in the mechanics, electronics and technical design of a device is valuable; however, it is important to keep the focus of haptic interaction on the human being, who is the only user of force feedback. In this thesis we worked on two main topics relevant to this aim: perception-based manipulation of the force signal and the use of modern multicore architectures for the implementation of the haptic controller. With the help of a specific experimental setup and a 6-dof haptic device, we designed a psychophysical experiment aimed at identifying the force/torque differential thresholds of the hand-arm system. On the basis of the results obtained we determined a set of task-dependent scaling functions, one for each degree of freedom of three-dimensional space, that can be used to enhance the human ability to discriminate different stimuli. Perception-based manipulation of the force feedback requires a fast, stable and configurable controller of the haptic interface. One solution is to use newly available multicore architectures for the implementation of the controller, but many consolidated algorithms have to be ported to these parallel systems. Focusing on a specific problem, matrix pseudoinversion, which is part of robotic dynamics and kinematics computation, we showed that it is possible to migrate code that was already implemented in hardware, in particular old algorithms that were inherently parallel and thus not competitive on sequential processors. The main question that still lies open is how much effort is required to rewrite these algorithms, usually described in VLSI or schematics, in a modern programming language. We show that careful task decomposition and design permit a mapping of the code onto the available cores. In addition, the use of data parallelism on SIMD machines can give good performance when simple vector instructions such as add and shift operations are used. Since these instructions are also present in hardware implementations, the migration can be performed easily. We tested our approach on a Sony PlayStation 3 game console equipped with an IBM Cell Broadband Engine processor.
7

Kang, Seunghwa. "On the design of architecture-aware algorithms for emerging applications". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39503.

Full text
Abstract:
This dissertation maps various kernels and applications to a spectrum of programming models and architectures and also presents architecture-aware algorithms for different systems. The kernels and applications discussed in this dissertation have widely varying computational characteristics. For example, we consider both dense numerical computations and sparse graph algorithms. This dissertation also covers emerging applications from image processing, complex network analysis, and computational biology. We map these problems to diverse multicore processors and manycore accelerators. We also use new programming models (such as Transactional Memory, MapReduce, and Intel TBB) to address the performance and productivity challenges in the problems. Our experiences highlight the importance of mapping applications to appropriate programming models and architectures. We also find several limitations of current system software and architectures and directions to improve those. The discussion focuses on system software and architectural support for nested irregular parallelism, Transactional Memory, and hybrid data transfer mechanisms. We believe that the complexity of parallel programming can be significantly reduced via collaborative efforts among researchers and practitioners from different domains. This dissertation participates in the efforts by providing benchmarks and suggestions to improve system software and architectures.
8

Thomin, Philippe. "Algorithmes parallèles pour la synthèse d'image par radiosité sur calculateur à mémoire distribuée". Valenciennes, 1993. https://ged.uphf.fr/nuxeo/site/esupversions/a9efdd76-820d-4008-ab0e-23f35d428cdf.

Full text
Abstract:
To meet the demands of a growing number of users, image synthesis must reconcile two often contradictory goals: realism and interactivity. In terms of realism, radiosity algorithms currently yield spectacular results. However, despite drastic algorithmic optimizations, producing an image is still at best a matter of minutes, and the memory resources required are considerable. The current limits must therefore be pushed back by increasing the power of the hardware used. The work presented in this thesis concerns parallel radiosity algorithms on distributed-memory computers. The proposed solutions favor optimal use of the distributed memory resources, which allows them to handle very complex scenes. Reducing communication costs and keeping the processors working in synchrony then makes it possible to reconcile the temporal and dimensional behavior of the algorithms defined. A prototype, built on a network of transputers, validated this approach and clarified its limits of use. Two directions can then be explored, one concerning the improvement of temporal behavior, the other aiming to extend the proposed algorithms to the treatment of specular surfaces.
9

Brown, Naïma. "Vérification et mise en œuvre distribuée des programmes unity". Nancy 1, 1994. http://www.theses.fr/1994NAN10384.

Full text
Abstract:
The successive refinement of parallel programs from formal specifications makes it possible to ensure that the resulting program is correct with respect to the initial specification, since the transformation is based on the preservation of properties such as invariance and eventuality (liveness). However, the resulting programs are written in an abstract notation far removed from classical parallel programming languages, and it is therefore necessary to develop techniques for transforming such programs (sets of actions) into programs written in parallel programming languages (Linda, Occam). Naturally, this leads to an analysis of the possible techniques for parallelizing or distributing action systems. This thesis builds on the UNITY formalism of Chandy and Misra, for which we built a proof-assistance tool within the B tool, and we examined several types of transformation of UNITY programs into other languages, as well as distribution strategies. We propose an intermediate language called UCL (UNITY Communication Language), which facilitates the implementation of UNITY on a parallel architecture and ensures the formal correctness of this implementation with respect to the initial specification. The UCL language is then used as a new starting point for the transformation and can produce either a C-Linda program or an Occam program. This approach favors the reuse of this first transformation step before targeting a particular architecture.
10

Halverson, Ranette Hudson. "Efficient Linked List Ranking Algorithms and Parentheses Matching as a New Strategy for Parallel Algorithm Design". Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278153/.

Full text
Abstract:
The goal of a parallel algorithm is to solve a single problem using multiple processors working together and to do so in an efficient manner. In this regard, there is a need to categorize strategies in order to solve broad classes of problems with similar structures and requirements. In this dissertation, two parallel algorithm design strategies are considered: linked list ranking and parentheses matching.

Books on the topic "Algoritmi paralleli"

1

Gibbons, Alan. Efficient parallel algorithms. Cambridge [England]: Cambridge University Press, 1988.

Find full text
2

Jamieson, Leah H., Dennis B. Gannon, and Robert J. Douglass, eds. The characteristics of parallel algorithms. Cambridge, Mass.: MIT Press, 1987.

Find full text
3

Casanova, Henri. Parallel algorithms. Boca Raton, FL: CRC Press, 2008.

Find full text
4

Roosta, Seyed H. Parallel Processing and Parallel Algorithms. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1.

Full text
5

Designing efficient algorithms for parallel computers. Maidenhead: McGraw-Hill, 1988.

Find full text
6

Designing efficient algorithms for parallel computers. New York: McGraw-Hill, 1987.

Find full text
7

Luque, Gabriel, and Enrique Alba. Parallel Genetic Algorithms. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22084-5.

Full text
8

Keyes, David E., Ahmed Sameh, and V. Venkatakrishnan, eds. Parallel Numerical Algorithms. Dordrecht: Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-011-5412-3.

Full text
9

Sameh, Ahmed, David E. Keyes, and V. Venkatakrishnan. Parallel numerical algorithms. Dordrecht: Springer, 1997.

Find full text
10

Parallel sorting algorithms. Orlando: Academic Press, 1985.

Find full text

Book chapters on the topic "Algoritmi paralleli"

1

Choi, Jaeyoung. "Parallel Factorization Algorithms with Algorithmic Blocking". In Parallel Numerical Computation with Applications, 19–32. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4615-5205-5_2.

Full text
2

Brucker, Peter. "Parallel Machines". In Scheduling Algorithms, 107–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24804-0_5.

Full text
3

Brucker, Peter. "Parallel Machines". In Scheduling Algorithms, 107–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-662-04550-3_5.

Full text
4

Brucker, Peter. "Parallel Machines". In Scheduling Algorithms, 101–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-662-03612-9_5.

Full text
5

Brucker, Peter. "Parallel Machines". In Scheduling Algorithms, 100–142. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-662-03088-2_5.

Full text
6

Logan, Doug. "Parallel Algorithms". In Modern Techniques in Computational Chemistry: MOTECC™-89, 547–76. Dordrecht: Springer Netherlands, 1989. http://dx.doi.org/10.1007/978-94-010-9057-5_13.

Full text
7

Winkler, Gerhard. "Parallel Algorithms". In Image Analysis, Random Fields and Dynamic Monte Carlo Methods, 167–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-642-97522-6_11.

Full text
8

Logan, Doug. "Parallel Algorithms". In Modern Techniques in Computational Chemistry: MOTECC™-90, 1117–46. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2219-8_26.

Full text
9

Kurgalin, Sergei, and Sergei Borzunov. "Parallel Algorithms". In Texts in Computer Science, 419–63. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92645-2_13.

Full text
10

Kurgalin, Sergei, and Sergei Borzunov. "Parallel Algorithms". In Texts in Computer Science, 435–77. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42221-9_13.

Full text

Conference proceedings on the topic "Algoritmi paralleli"

1

Saukas, E. L. G., and S. W. Song. "A parallel algorithm for solving tridiagonal linear systems on coarse grained multicomputer". In Simpósio Brasileiro de Arquitetura de Computadores e Processamento de Alto Desempenho. Sociedade Brasileira de Computação, 1997. http://dx.doi.org/10.5753/sbac-pad.1997.22642.

Full text
Abstract:
The CGM (coarse-grained multicomputer) model has been proposed as a model of parallelism sufficiently close to existing parallel machines. Despite its simplicity, it intends to give a reasonable prediction of performance when parallel algorithms are implemented. Under the CGM model we design a communication-efficient parallel algorithm for the solution of tridiagonal linear systems with n equations and n unknowns. This algorithm requires only a constant number of communication rounds. The amount of data transmitted in each communication round is proportional to the number of processors and independent of n. In addition to showing its theoretical complexity, we have implemented this algorithm on a real distributed-memory parallel machine. The results obtained are very promising and show an almost linear speedup for large n, indicating the efficiency and scalability of the proposed algorithm.
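For background only: the serial Thomas algorithm below is the standard O(n) solver for a single tridiagonal system, the kind of local solve that coarse-grained parallel schemes typically build on. It is not the CGM algorithm of the paper, and the interface shown is an assumption.

#include <stdlib.h>

/* Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], i = 0..n-1 (a[0] and c[n-1] unused). */
void thomas_solve(int n, const double *a, const double *b,
                  const double *c, const double *d, double *x) {
    double *cp = malloc(n * sizeof(double));    /* modified upper diagonal  */
    double *dp = malloc(n * sizeof(double));    /* modified right-hand side */
    cp[0] = c[0] / b[0];
    dp[0] = d[0] / b[0];
    for (int i = 1; i < n; i++) {               /* forward elimination */
        double m = b[i] - a[i] * cp[i - 1];
        cp[i] = c[i] / m;
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
    }
    x[n - 1] = dp[n - 1];
    for (int i = n - 2; i >= 0; i--)            /* back substitution */
        x[i] = dp[i] - cp[i] * x[i + 1];
    free(cp);
    free(dp);
}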
2

Critchley, James H., and Kurt S. Anderson. "On Parallel Methods of Multibody Dynamics". In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/vib-48317.

Full text
Abstract:
Optimal, time-efficient parallel computation methods for large multibody system dynamics are defined and investigated in detail. Comparative observations are made which demonstrate significant deficiencies in operating regions of practical importance, and a new parallel algorithm is generated to address them. The new method of Recursive Coordinate Reduction Parallelism (RCRP) outperforms or directly reduces to the fastest general multibody algorithms available for small parallel resources and obtains O(log_k(n)) time complexity in the presence of larger parallel arrays. Performance of this method relative to the Divide and Conquer Algorithm is illustrated with an operations count for the worst case of a multibody chain system.
3

Giles, C. Lee, and B. Keith Jenkins. "Models of parallel computation and optical computing". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.ml1.

Full text
Abstract:
Theoretical models of computation (automata, Turing machines, random access machines, etc.) define not only what is computable but determine how much time and/or space is required for a computation. The parallel extensions of the sequential models usually offer significant speedup in algorithm execution (not all algorithms can be parallelized, however). Real parallel computer architectures based on these parallel computation models should exhibit the same algorithmic speedup. A complexity hierarchy can be defined for parallel computation models based on the space and/or time speedup of certain classes of algorithms. In particular the parallel shared memory models1 are some of the more powerful of the parallel computation models. Their close resemblance to actual computer architectures provide guidance for real architectural design and their massive interconnection requirements are well suited for implementation in optical computing architectures. In addition, the shared memory models have recently been shown to simulate deterministic Boltzmann machines and threshold-logic Turing machines. Thus optical machines based on these parallel models can potentially exhibit the computational power of the shared memory models. These parallel computation models and their implications for optical computing are discussed.
4

Tsai, Chih-Ming, Nenzi Wang, and Yau-Zen Chang. "Performance Evaluation of DIRECT Algorithm in Parallel Optimization for a Thermohydrodynamic Lubrication Analysis". In World Tribology Congress III. ASMEDC, 2005. http://dx.doi.org/10.1115/wtc2005-63319.

Full text
Abstract:
In this study the optimization is illustrated by maximizing the load-carrying capacity of a slider bearing represented by a thermohydrodynamic lubrication model. An efficient optimization scheme, DIviding RECTangles (DIRECT), is coded for parallel computing on a single-system-image cluster. The main focus of this study was to examine the performance of the standard DIRECT algorithm under coarse-grained parallelism. The parallel efficiency is compared with the results obtained in recent studies of air bearing optimization using a genetic algorithm and a divide-and-conquer scheme. It was found that the limited potential for parallelism in the standard DIRECT algorithm leads to a somewhat low parallel efficiency.
5

Francois, Marianne M., Li-Ta Lo, and Christopher Sewell. "Volume-of-Fluid Interface Reconstruction Algorithms on Next-Generation Computer Architectures". In ASME 2014 4th Joint US-European Fluids Engineering Division Summer Meeting collocated with the ASME 2014 12th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/fedsm2014-21894.

Full text
Abstract:
With the increasing heterogeneity and on-node parallelism of high-performance computing hardware, a major challenge to computational physicists is to work in close collaboration with computer scientists to develop portable and efficient algorithms and software. The objective of our work is to implement a portable code to perform interface reconstruction using NVIDIA’s Thrust library. Interface reconstruction is a technique commonly used in volume tracking methods for simulations of interfacial flows. For that, we have designed a two-dimensional mesh data structure that is easily mapped to the 1D vectors used by Thrust and at the same time is simple to work with using familiar data structures terminology (such as cell, vertices and edges). With this new data structure in place, we have implemented a recursive volume-of-fluid initialization algorithm and a standard piecewise interface reconstruction algorithm. Our interface reconstruction algorithm makes use of a table look-up to easily identify all intersection cases, as this design is efficient on parallel architectures such as GPUs. Finally, we report performance results which show that a single implementation of these algorithms can be compiled to multiple backends (specifically, multi-core CPUs, NVIDIA GPUs, and Intel Xeon Phi coprocessors), making efficient use of the available parallelism on each.
6

Rai, Ashwin, Travis Skinner, and Aditi Chattopadhyay. "A Parallelized Generalized Method of Cells Framework for Multiscale Studies of Composite Materials". In ASME 2019 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/imece2019-11529.

Full text
Abstract:
This paper presents a parallelized framework for a multi-scale material analysis method called the generalized method of cells (GMC) model, which can be used to effectively homogenize or localize material properties over two different length scales. Parallelization is utilized at two points: (a) for the solution of the governing linear equations, and (b) for the local analysis of each subcell. The governing linear equation is solved in parallel using a parallel form of the Gaussian substitution method, and the subsequent local subcell analysis is performed in parallel using a domain decomposition method wherein the lower-length-scale subcells are divided equally over the available processors. The parallelization algorithm takes advantage of a single program multiple data (SPMD) distributed-memory architecture using the Message Passing Interface (MPI) standard, which permits scaling the analysis algorithm up to any number of processors on a computing cluster. Results show a significant decrease in solution time for the parallelized algorithm compared to serial algorithms, especially for denser microscale meshes. The consequent speed-up in processing time permits the analysis of complex length-scale-dependent phenomena, nonlinear analysis, and uncertainty studies with multiscale effects which would otherwise be prohibitively expensive.
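A minimal MPI sketch, in C, of the kind of even subcell distribution described above; the subcell count, the per-subcell computation and the gathered quantity are placeholders for this sketch and are not taken from the GMC framework itself.

#include <mpi.h>
#include <stdlib.h>

/* Distribute nsub independent subcell analyses evenly over the MPI ranks and
   gather one result value per subcell on every rank. */
int main(int argc, char **argv) {
    int rank, size;
    const int nsub = 4096;                       /* number of subcells (placeholder) */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int base = nsub / size, rem = nsub % size;
    int mycount = base + (rank < rem ? 1 : 0);
    int mystart = rank * base + (rank < rem ? rank : rem);

    double *mine = malloc(mycount * sizeof(double));
    for (int i = 0; i < mycount; i++)
        mine[i] = 2.0 * (mystart + i);           /* stand-in for the local subcell analysis */

    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    for (int r = 0, off = 0; r < size; r++) {
        counts[r] = base + (r < rem ? 1 : 0);
        displs[r] = off;
        off += counts[r];
    }
    double *all = malloc(nsub * sizeof(double));
    MPI_Allgatherv(mine, mycount, MPI_DOUBLE, all, counts, displs, MPI_DOUBLE, MPI_COMM_WORLD);

    MPI_Finalize();
    free(mine); free(counts); free(displs); free(all);
    return 0;
}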
7

Wang, Aihu, Jianzhong Cha, and Jinmin Wang. "An Efficient Parallel Algorithm for Rectangular Packing Based on Bintree Expression". In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/dac-1043.

Full text
Abstract:
In this paper, a method that uses a bintree structure to express the states of the packing space in rectangular packing is proposed. Through sequential decomposition of the packing space, the optimal packing scheme for variously sized rectangles can be obtained by each time placing the optimal piece that satisfies the specified conditions into the current packing space and locating it at the upper-left corner of that space. Different optimal packing schemes that satisfy different demands can be obtained by adjusting the values of the ordering factors KA and KB. A parallel algorithm based on a SIMD-CREW shared-memory computer is designed through an analysis of the parallelism of the bintree expression. The whole packing process is clearly expressed by the bintree. The computational complexity of the algorithm is shown to be O(n^2 log n). Both the experimental results and the comparison with other sequential packing algorithms show that the parallel packing algorithm is efficient. Moreover, it nearly doubles the problem-solving speed.
8

Huang, K. S., B. K. Jenkins, and A. A. Sawchuk. "Binary Image Algebra and Digital Optical Cellular Image Processors". In Optical Computing. Washington, D.C.: Optica Publishing Group, 1987. http://dx.doi.org/10.1364/optcomp.1987.mb5.

Full text
Abstract:
Image processing and image analysis tasks have large data processing requirements and inherent parallelism and are well suited to implementation on digital optical processors because of the parallelism and free interconnection capabilities of optical systems [1][2]. Recently, several techniques for constructing optical cellular logic processors for image processing have been proposed [2]-[5]. Through parallel studies of architectures, algorithms, mathematical structures, and optics we have found that: 1) cellular automata are appropriate models for parallel image processing machines [6]; 2) an image algebra extending from mathematical morphology [7] [8] can lead to a formal parallel language approach to the design of image processing algorithms; 3) the algebraic structure serves as a framework for both algorithms and architectures of parallel image processing; and 4) optical computing techniques are able to efficiently implement image algebra based on cellular logic architectures (e.g. cellular array, cellular hypercube etc.). Here we will first discuss image algebra and then architectures for its implementation.
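As a small illustration of the kind of cellular, morphology-based operator that such an image algebra builds on, here is a C/OpenMP sketch of 3x3 binary erosion; the image layout and border handling are assumptions, and the paper's optical cellular implementation is of course very different.

#include <omp.h>

/* 3x3 binary erosion: an output pixel is 1 only if all 9 neighbours are nonzero.
   Border pixels are set to 0 for simplicity. */
void erode3x3(const unsigned char *in, unsigned char *out, int width, int height) {
    #pragma omp parallel for
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                out[y * width + x] = 0;
                continue;
            }
            unsigned char v = 1;
            for (int dy = -1; dy <= 1 && v; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    v &= in[(y + dy) * width + (x + dx)] ? 1 : 0;
            out[y * width + x] = v;
        }
    }
}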
9

Rodriguez, J., and J. Sun. "A Domain Decomposition Study for a Parallel Finite Element Code". In ASME 1992 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1992. http://dx.doi.org/10.1115/cie1992-0094.

Full text
Abstract:
Abstract The primary objective of this study was the implementation and comparison of domain decomposition algorithms for a parallel Finite Element Method (FEM) used in the area of Computational Structural Mechanics (CSM). A parallelized FEM code exploits the concurrency inherent in the method to improve its computational efficiency. In order to use a larger size granularity in the parallel computation, the parallel FEM needs to partition its domain into subdomains in a proper manner. It is therefore necessary to search for domain decomposition algorithms to satisfy the special requirements of a parallel FEM. The domain decomposition algorithms investigated in this study physically decompose a meshed domain into a desired number of subdomains. Addressing the requirements of the parallel FEM, these algorithms are able to handle any type of two- and three-dimensional domains, balance the workloads across the multiple processors, minimize the communication overhead among the processors, maintain the integrity of each subdomain, minimize the overall bandwidth of the resulting system matrix, and require only a small amount of CPU time for the decomposition. Modifications to existing decomposition algorithms, such as the single wave propagating method and the bisecting method using vertical/horizontal cuts, are investigated. A new algorithm, based on the proposed multiple wave propagating method and the bisecting method using middle cuts, is formulated. These algorithms are compared with each other using performance criteria based on the overall FEM code and the algorithms themselves. An optimal combination algorithm is proposed. This algorithm combination is flexible and intelligent in some sense since several judgements are suggested to guide and organize different decompositions based on the general geometry of the meshes. The combination algorithm possesses both the desirable features of wave propagating and bisecting methods. As an application, the present algorithm is included in an existing parallel FEM code and some improvements in this code are made. The overall efficiency of the FEM code was increased.
10

Heyn, Toby, Alessandro Tasora, Mihai Anitescu, and Dan Negrut. "A Parallel Algorithm for Solving Complex Multibody Problems With Stream Processors". In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-86478.

Full text
Abstract:
This paper describes a numerical method for the parallel solution of the differential measure inclusion problem posed by mechanical multibody systems containing bilateral and unilateral frictional constraints. The method proposed has been implemented as a set of parallel algorithms leveraging NVIDIA’s Compute Unified Device Architecture (CUDA) library support for multi-core stream computing. This allows the proposed solution to run on a wide variety of GeForce and TESLA NVIDIA graphics cards for high performance computing. Although the methodology relies on the solution of cone complementarity problems known to be fine-grained in terms of data dependency, a suitable approach has been developed to exploit parallelism with low overhead in terms of memory access and thread synchronization. Additionally, a parallel collision detection algorithm has been incorporated to further exploit available parallelism. Initial numerical tests described in this paper demonstrate a speedup of one order of magnitude for the solution time of both the collision detection and the cone complementarity problems when performed in parallel. Since stream multiprocessors are becoming ubiquitous as embedded components of next-generation graphic boards, the solution proposed represents a cost-efficient way to simulate the time evolution of complex mechanical problems with millions of parts and constraints, a task that used to require powerful supercomputers. The proposed methodology facilitates the analysis of extremely complex systems such as granular material flows and off-road vehicle dynamics.

Organization reports on the topic "Algoritmi paralleli"

1

Sahni, Sartaj. Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, June 1999. http://dx.doi.org/10.21236/ada369856.

Full text
2

Reif, John H. Parallel Algorithms Derivation. Fort Belvoir, VA: Defense Technical Information Center, March 1989. http://dx.doi.org/10.21236/ada248605.

Full text
3

Reif, John H. Parallel Algorithm Implementation. Fort Belvoir, VA: Defense Technical Information Center, May 1996. http://dx.doi.org/10.21236/ada309408.

Full text
4

Robey, Robert W. Parallel Algorithms and Patterns. Office of Scientific and Technical Information (OSTI), June 2016. http://dx.doi.org/10.2172/1258365.

Full text
5

Reif, John H. Implementation of Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, September 1991. http://dx.doi.org/10.21236/ada248759.

Full text
6

Reif, John H. Implementation of Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, March 1992. http://dx.doi.org/10.21236/ada248826.

Full text
7

Reif, John H. Implementation of Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, June 1992. http://dx.doi.org/10.21236/ada253638.

Full text
8

Reif, John H. Implementation of Parallel Algorithms. Fort Belvoir, VA: Defense Technical Information Center, December 1993. http://dx.doi.org/10.21236/ada275803.

Full text
9

Poggio, Tomaso, and James Little. Parallel Algorithms for Computer Vision. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada203947.

Full text
10

Rice, John R. Parallel Algorithms for PDE Solvers. Fort Belvoir, VA: Defense Technical Information Center, July 1988. http://dx.doi.org/10.21236/ada199625.

Full text