Academic literature on the topic 'Sparse Accelerator'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sparse Accelerator.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sparse Accelerator"

1

Xie, Xiaoru, Mingyu Zhu, Siyuan Lu, and Zhongfeng Wang. "Efficient Layer-Wise N:M Sparse CNN Accelerator with Flexible SPEC: Sparse Processing Element Clusters." Micromachines 14, no. 3 (February 24, 2023): 528. http://dx.doi.org/10.3390/mi14030528.

Abstract:
Recently, the layer-wise N:M fine-grained sparse neural network algorithm (i.e., every M weights contain N non-zero values) has attracted tremendous attention, as it can effectively reduce the computational complexity with negligible accuracy loss. However, the speed-up potential of this algorithm will not be fully exploited if the right hardware support is lacking. In this work, we design an efficient accelerator for N:M sparse convolutional neural networks (CNNs) with layer-wise sparse patterns. First, we analyze the performance of different processing element (PE) structures and extensions to construct the flexible PE architecture. Second, variable sparse convolutional dimensions and sparse ratios are accommodated in the hardware design. With a sparse PE cluster (SPEC) design, the hardware can efficiently accelerate CNNs with the layer-wise N:M pattern. Finally, we employ the proposed SPEC in a CNN accelerator with a flexible network-on-chip and a specially designed dataflow. We implement hardware accelerators on Xilinx ZCU102 FPGA and Xilinx VCU118 FPGA and evaluate them with classical CNNs such as AlexNet, VGG-16, and ResNet-50. Compared with existing accelerators designed for structured and unstructured pruned networks, our design achieves the best performance in terms of power efficiency.
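For illustration, here is a minimal NumPy sketch of the N:M constraint this abstract describes (a generic rendering of the scheme, not code from the cited paper; grouping weights along the flattened last axis is an assumption of the example):

    import numpy as np

    def prune_n_of_m(weights, n, m):
        # Group the weights M at a time and keep only the N largest-magnitude
        # entries per group, zeroing the rest (n=2, m=4 gives 2:4 sparsity).
        groups = weights.reshape(-1, m)
        order = np.argsort(np.abs(groups), axis=1)
        mask = np.zeros_like(groups, dtype=bool)
        np.put_along_axis(mask, order[:, -n:], True, axis=1)
        return (groups * mask).reshape(weights.shape)

    w = np.random.randn(4, 8)
    w_sparse = prune_n_of_m(w, n=2, m=4)  # every 4 consecutive weights keep 2 non-zeros

A layer-wise scheme, as in the paper, would choose a different (N, M) pair per layer.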
2

Li, Yihang. "Sparse-Aware Deep Learning Accelerator." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 305–10. http://dx.doi.org/10.54097/hset.v39i.6544.

Abstract:
Because convolutional neural network computations are difficult to implement in hardware, most previous convolutional neural network accelerator designs focused on the bottlenecks of computational performance and bandwidth, ignoring the importance of sparsity for accelerator design. In recent years, a few convolutional neural network accelerators have been able to exploit sparsity, but they usually struggle to balance computational flexibility, parallel efficiency, and resource overhead. Meanwhile, the application of convolutional neural networks (CNNs) on the embedded side is limited by real-time constraints, and CNN convolution calculations exhibit a large degree of sparsity. This paper therefore summarizes sparsification methods at the algorithm level and at the FPGA level, introduces the different sparsification methods together with research and analysis of different application layers, and analyzes and summarizes the advantages and development trends of sparsification.
3

Xu, Jia, Han Pu, and Dong Wang. "Sparse Convolution FPGA Accelerator Based on Multi-Bank Hash Selection." Micromachines 16, no. 1 (December 27, 2024): 22. https://doi.org/10.3390/mi16010022.

Abstract:
Reconfigurable processor-based acceleration of deep convolutional neural network (DCNN) algorithms has emerged as a widely adopted technique, with particular attention on sparse neural network acceleration as an active research area. However, many computing devices that claim high computational power still struggle to execute neural network algorithms with optimal efficiency, low latency, and minimal power consumption. Consequently, there remains significant potential for further exploration into improving the efficiency, latency, and power consumption of neural network accelerators across diverse computational scenarios. This paper investigates three key techniques for hardware acceleration of sparse neural networks. The main contributions are as follows: (1) Most neural network inference tasks are typically executed on general-purpose computing devices, which often fail to deliver high energy efficiency and are not well-suited for accelerating sparse convolutional models. In this work, we propose a specialized computational circuit for the convolutional operations of sparse neural networks. This circuit is designed to detect and eliminate the computational effort associated with zero values in the sparse convolutional kernels, thereby enhancing energy efficiency. (2) The data access patterns in convolutional neural networks introduce significant pressure on the high-latency off-chip memory access process. Due to issues such as data discontinuity, the data reading unit often fails to fully exploit the available bandwidth during off-chip read and write operations. In this paper, we analyze bandwidth utilization in the context of convolutional accelerator data handling and propose a strategy to improve off-chip access efficiency. Specifically, we leverage a compiler optimization plugin developed for Vitis HLS, which automatically identifies and optimizes on-chip bandwidth utilization. (3) In coefficient-based accelerators, the synchronous operation of individual computational units can significantly hinder efficiency. Previous approaches have achieved asynchronous convolution by designing separate memory units for each computational unit; however, this method consumes a substantial amount of on-chip memory resources. To address this issue, we propose a shared feature map cache design for asynchronous convolution in the accelerators presented in this paper. This design resolves address access conflicts when multiple computational units concurrently access a set of caches by utilizing a hash-based address indexing algorithm. Moreover, the shared cache architecture reduces data redundancy and conserves on-chip resources. Using the optimized accelerator, we successfully executed ResNet50 inference on an Intel Arria 10 1150GX FPGA, achieving a throughput of 497 GOPS, or an equivalent computational power of 1579 GOPS, with a power consumption of only 22 watts.
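For illustration, the hash-based bank indexing idea in contribution (3) can be sketched as follows (the XOR-folding hash and the bank count below are assumptions for the example, not the paper's actual design):

    def bank_select(addr, num_banks=8):
        # Fold address bits with XOR so that nearby feature-map addresses
        # scatter across banks, reducing conflicts when several compute
        # units hit the shared cache in the same cycle.
        h = addr ^ (addr >> 3) ^ (addr >> 7)
        return h % num_banks

    print([bank_select(a) for a in range(16)])  # consecutive addresses spread over banks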
4

Zheng, Yong, Haigang Yang, Yiping Jia, and Zhihong Huang. "PermLSTM: A High Energy-Efficiency LSTM Accelerator Architecture." Electronics 10, no. 8 (April 8, 2021): 882. http://dx.doi.org/10.3390/electronics10080882.

Abstract:
Pruning and quantization are two commonly used approaches to accelerate the LSTM (Long Short-Term Memory) model. However, traditional linear quantization usually suffers from the problem of gradient vanishing, and existing pruning methods all have the problem of producing undesired irregular sparsity or large indexing overhead. To alleviate the vanishing-gradient problem, this work proposes a normalized linear quantization approach, which first normalizes operands regionally and then quantizes them in a local min-max range. To overcome the problem of irregular sparsity and large indexing overhead, this work adopts permuted block diagonal mask matrices to generate the sparse model. Because the sparse model is highly regular, the positions of non-zero weights can be obtained by a simple calculation, thus avoiding the large indexing overhead. Based on the sparse LSTM model generated from the permuted block diagonal mask matrices, this paper also proposes a high energy-efficiency accelerator, PermLSTM, that comprehensively exploits the sparsity of weights, activations, and products in the matrix-vector multiplications, resulting in a 55.1% reduction in power consumption. The accelerator has been realized on Arria-10 FPGAs running at 150 MHz and achieves 2.19x to 24.4x better energy efficiency than the other FPGA-based LSTM accelerators previously reported.
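For illustration, a sketch of why a permuted block-diagonal mask avoids indexing overhead (the block size and mask layout below are assumptions for the example, not the paper's exact construction): non-zero positions follow from one stored permutation rather than a per-weight coordinate list.

    import numpy as np

    def is_nonzero(i, j, perm, block):
        # (i, j) is non-zero exactly when the permuted indices land in the
        # same diagonal block, so positions are a calculation, not a lookup.
        return perm[i] // block == perm[j] // block

    size, block = 8, 2
    perm = np.random.default_rng(0).permutation(size)  # stored once per layer
    mask = np.array([[is_nonzero(i, j, perm, block)
                      for j in range(size)] for i in range(size)])

Every row and column of the resulting mask holds exactly as many non-zeros as the block size, which is the regularity the accelerator exploits.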
5

Yavits, Leonid, and Ran Ginosar. "Accelerator for Sparse Machine Learning." IEEE Computer Architecture Letters 17, no. 1 (January 1, 2018): 21–24. http://dx.doi.org/10.1109/lca.2017.2714667.

6

Teodorovic, Predrag, and Rastislav Struharik. "Hardware Acceleration of Sparse Oblique Decision Trees for Edge Computing." Elektronika ir Elektrotechnika 25, no. 5 (October 6, 2019): 18–24. http://dx.doi.org/10.5755/j01.eie.25.5.24351.

Abstract:
This paper presents a hardware accelerator for sparse decision trees intended for FPGA applications. To the best of the authors' knowledge, this is the first accelerator of this type. Besides the hardware accelerator itself, a novel algorithm for the induction of sparse decision trees is also presented. Sparse decision trees are attractive because they require less memory and can be processed more efficiently using specialized hardware than traditional oblique decision trees. This can be of significant interest, particularly in edge-based applications, where memory and compute resources as well as power consumption are severely constrained. The performance of the proposed sparse decision tree induction algorithm and of the developed hardware accelerator is studied using standard benchmark datasets from the UCI Machine Learning Repository. The results of the experimental study indicate that the proposed algorithm and hardware accelerator compare very favourably with some of the existing solutions.
7

Vranjkovic, Vuk, Predrag Teodorovic, and Rastislav Struharik. "Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models." Electronics 11, no. 8 (April 8, 2022): 1178. http://dx.doi.org/10.3390/electronics11081178.

Abstract:
This study presents a universal reconfigurable hardware accelerator for the efficient processing of sparse decision trees, artificial neural networks and support vector machines. The main idea is to develop a hardware accelerator that can directly process sparse machine learning models, resulting in shorter inference times and lower power consumption compared to existing solutions. To the authors' best knowledge, this is the first hardware accelerator of this type, and the first capable of processing sparse machine learning models of different types. Besides the hardware accelerator itself, algorithms for the induction of sparse decision trees and for the pruning of support vector machines and artificial neural networks are presented. Such sparse machine learning classifiers are attractive since they require significantly less memory for storing model parameters. This reduces data movement between the accelerator and the DRAM memory, as well as the number of operations required to process input instances, leading to faster and more energy-efficient processing. This can be of significant interest in edge-based applications, with severely constrained memory, computation resources and power consumption. The performance of the algorithms and of the developed hardware accelerator is demonstrated using standard benchmark datasets from the UCI Machine Learning Repository. The results of the experimental study reveal that the proposed algorithms and the presented hardware accelerator are superior to some of the existing solutions. Throughput is increased by up to 2 times for decision trees, 2.3 times for support vector machines and 38 times for artificial neural networks. When processing latency is considered, the maximum performance improvement is even higher: up to a 4.4 times reduction for decision trees, an 84.1 times reduction for support vector machines and a 22.2 times reduction for artificial neural networks. Finally, since it supports sparse classifiers, the proposed hardware accelerator significantly reduces the energy spent on DRAM data transfers: by 50.16% for decision trees, 93.65% for support vector machines and as much as 93.75% for artificial neural networks.
8

Gowda, Kavitha Malali Vishveshwarappa, Sowmya Madhavan, Stefano Rinaldi, Parameshachari Bidare Divakarachari, and Anitha Atmakur. "FPGA-Based Reconfigurable Convolutional Neural Network Accelerator Using Sparse and Convolutional Optimization." Electronics 11, no. 10 (May 22, 2022): 1653. http://dx.doi.org/10.3390/electronics11101653.

Abstract:
Nowadays, the dataflow architecture is considered a general solution for accelerating deep neural networks (DNNs) because of its high parallelism. However, conventional DNN accelerators offer only restricted flexibility for diverse network models. To overcome this, a reconfigurable convolutional neural network (RCNN) accelerator, a type of DNN accelerator, needs to be developed on the field-programmable gate array (FPGA) platform. In this paper, sparse optimization of weights (SOW) and convolutional optimization (CO) are proposed to improve the performance of the RCNN accelerator. The combination of SOW and CO is used to optimize the feature map and weight sizes of the RCNN accelerator; therefore, the hardware resources consumed by the RCNN on the FPGA are minimized. The performance of RCNN-SOW-CO is analyzed in terms of feature map size, weight size, sparseness of the input feature map (IFM), weight parameter proportion, block random access memory (BRAM), digital signal processing (DSP) elements, look-up tables (LUTs), slices, delay, power, and accuracy. The existing architectures OIDSCNN, LP-CNN, and DPR-NN are used to demonstrate the efficiency of RCNN-SOW-CO. The LUT count of RCNN-SOW-CO with AlexNet designed on the Zynq-7020 is 5150, which is less than that of OIDSCNN and DPR-NN.
9

Dey, Sumon, Lee Baker, Joshua Schabel, Weifu Li, and Paul D. Franzon. "A Scalable Cluster-based Hierarchical Hardware Accelerator for a Cortically Inspired Algorithm." ACM Journal on Emerging Technologies in Computing Systems 17, no. 4 (June 30, 2021): 1–29. http://dx.doi.org/10.1145/3447777.

Abstract:
This article describes a scalable, configurable and cluster-based hierarchical hardware accelerator through custom hardware architecture for Sparsey, a cortical learning algorithm. Sparsey is inspired by the operation of the human cortex and uses a Sparse Distributed Representation to enable unsupervised learning and inference in the same algorithm. A distributed on-chip memory organization is designed and implemented in custom hardware to improve memory bandwidth and accelerate the memory read/write operations for synaptic weight matrices. Bit-level data are processed from distributed on-chip memory and custom multiply-accumulate hardware is implemented for binary and fixed-point multiply-accumulation operations. The fixed-point arithmetic and fixed-point storage are also adapted in this implementation. At 16 nm, the custom hardware of Sparsey achieved an overall 24.39× speedup, 353.12× energy efficiency per frame, and 1.43× reduction in silicon area against a state-of-the-art GPU.
10

Liu, Sheng, Yasong Cao, and Shuwei Sun. "Mapping and Optimization Method of SpMV on Multi-DSP Accelerator." Electronics 11, no. 22 (November 11, 2022): 3699. http://dx.doi.org/10.3390/electronics11223699.

Abstract:
Sparse matrix-vector multiplication (SpMV) computes the product of a sparse matrix and a dense vector; the sparsity of a sparse matrix is often more than 90%. Usually, the sparse matrix is compressed to save storage resources, but this causes irregular accesses to the dense vector, which take a lot of time and degrade the SpMV performance of the system. In this study, we design a dedicated channel in the DMA to implement an indirect memory access process to speed up the SpMV operation. On this basis, we propose six SpMV algorithm schemes and map them to optimize SpMV performance. The results show that the M processor's SpMV performance reaches 6.88 GFLOPS. In addition, the average performance of the HPCG benchmark is 2.8 GFLOPS.
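For illustration, here is SpMV over the common CSR (Compressed Sparse Row) format; the gather through col_idx is the irregular access this abstract refers to (CSR is a generic example here; the paper's compression format may differ):

    import numpy as np

    def spmv_csr(values, col_idx, row_ptr, x):
        # y = A @ x for a CSR-compressed matrix A. The indexed load
        # x[col_idx[j]] jumps around memory; this is the access pattern
        # a dedicated indirect-access DMA channel is meant to speed up.
        y = np.zeros(len(row_ptr) - 1)
        for i in range(len(y)):
            for j in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += values[j] * x[col_idx[j]]
        return y

    # The 2x3 matrix [[5, 0, 2], [0, 3, 0]] in CSR form:
    print(spmv_csr([5.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], np.ones(3)))  # [7. 3.]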

Dissertations / Theses on the topic "Sparse Accelerator"

1

Syed, Akber. "A Hardware Interpreter for Sparse Matrix LU Factorization." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1024934521.

2

Jamal, Aygul. "A parallel iterative solver for large sparse linear systems enhanced with randomization and GPU accelerator, and its resilience to soft errors." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS269/document.

Abstract:
In this PhD thesis, we address three challenges faced by linear algebra solvers in the perspective of future exascale systems: accelerating convergence using innovative techniques at the algorithm level, taking advantage of GPU (Graphics Processing Units) accelerators to enhance the performance of computations on hybrid CPU/GPU systems, evaluating the impact of errors in the context of an increasing level of parallelism in supercomputers. We are interested in studying methods that enable us to accelerate convergence and execution time of iterative solvers for large sparse linear systems. The solver specifically considered in this work is the parallel Algebraic Recursive Multilevel Solver (pARMS), which is a distributed-memory parallel solver based on Krylov subspace methods.First we integrate a randomization technique referred to as Random Butterfly Transformations (RBT) that has been successfully applied to remove the cost of pivoting in the solution of dense linear systems. Our objective is to apply this method in the ARMS preconditioner to solve more efficiently the last Schur complement system in the application of the recursive multilevel process in pARMS. The experimental results show an improvement of the convergence and the accuracy. Due to memory concerns for some test problems, we also propose to use a sparse variant of RBT followed by a sparse direct solver (SuperLU), resulting in an improvement of the execution time.Then we explain how a non intrusive approach can be applied to implement GPU computing into the pARMS solver, more especially for the local preconditioning phase that represents a significant part of the time to compute the solution. We compare the CPU-only and hybrid CPU/GPU variant of the solver on several test problems coming from physical applications. The performance results of the hybrid CPU/GPU solver using the ARMS preconditioning combined with RBT, or the ILU(0) preconditioning, show a performance gain of up to 30% on the test problems considered in our experiments.Finally we study the effect of soft fault errors on the convergence of the commonly used flexible GMRES (FGMRES) algorithm which is also used to solve the preconditioned system in pARMS. The test problem in our experiments is an elliptical PDE problem on a regular grid. We consider two types of preconditioners: an incomplete LU factorization with dual threshold (ILUT), and the ARMS preconditioner combined with RBT randomization. We consider two soft fault error modeling approaches where we perturb the matrix-vector multiplication and the application of the preconditioner, and we compare their potential impact on the convergence of the solver
3

Pradels, Léo. "Efficient CNN inference acceleration on FPGAs : a pattern pruning-driven approach." Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS087.

Abstract:
CNN-based deep learning models provide state-of-the-art performance in image and video processing tasks, particularly for image enhancement or classification. However, these models are computationally and memory-intensive, making them unsuitable for real-time constraints on embedded FPGA systems. As a result, compressing these CNNs and designing accelerator architectures for inference that integrate compression in a hardware-software co-design approach is essential. While software optimizations like pruning have been proposed, they often lack the structured approach needed for effective accelerator integration. To address these limitations, this thesis focuses on accelerating CNNs on FPGAs while complying with real-time constraints on embedded systems. This is achieved through several key contributions. First, it introduces pattern pruning, which imposes structure on network sparsity, enabling efficient hardware acceleration with minimal accuracy loss due to compression. Second, a scalable accelerator for CNN inference is presented, which adapts its architecture based on input performance criteria, FPGA specifications, and target CNN model architecture. An efficient method for integrating pattern pruning within the accelerator and a complete flow for CNN acceleration are proposed. Finally, improvements in network compression are explored through Shift&Add quantization, which modifies FPGA computation methods while maintaining baseline network accuracy
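For illustration, the general idea behind the Shift&Add quantization mentioned at the end is to restrict weights to sums of powers of two, so each multiplication reduces to shifts and additions (the two-term encoding below is an assumption for the example, not the thesis's exact scheme):

    def shift_add_multiply(x, shifts):
        # With a weight quantized as w = sum(2**s for s in shifts),
        # x * w becomes shifted copies of x added together, so the FPGA
        # needs only shifters and adders instead of full multipliers.
        return sum(x << s for s in shifts)

    assert shift_add_multiply(3, (4, 1)) == 3 * 18  # shifts (4, 1) encode w = 16 + 2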
4

Ramachandran, Shridhar. "Incremental PageRank acceleration using Sparse Matrix-Sparse Vector Multiplication." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462894358.

5

Fernández, Becerra David. "Multicore acceleration of sparse electromagnetics computations." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104641.

Abstract:
Multicore processors have become the dominant industry trend to increase computer systems performance, driving electromagnetics (EM) practitioners to redesign their applications using parallel programming paradigms. This is especially true for computations involving complex data structures such as sparse matrix computations that often arise in EM simulations with the finite element method (FEM). These computations require pointer manipulation that render useless many compiler optimizations and parallel shared memory frameworks (e.g. OpenMP). This work presents new sparse data structures and techniques to efficiently exploit multicore parallelism and short-vector units (the last of which has not been exploited by state of the art sparse matrix libraries) for recurrent computationally intensive kernels in EM simulations, such as the sparse matrix-vector multiplication (SMVM) and the conjugate gradient (CG) algorithms. Up to 14 times performance speedups are demonstrated for the accelerated SMVM kernel and 5.8x for the CG kernel using the proposed methods over conventional approaches for two different multicore architectures. Finally, a new method to solve the FEM for parallel processing is presented and an optimized implementation is realized on two different generations of NVIDIA GPUs (manycore) accelerators with performance increases of up to 27.53 times compared to compiler optimized CPU results.
6

Grigoras, Paul. "Instance directed tuning for sparse matrix kernels on reconfigurable accelerators." Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/62634.

Abstract:
We present a novel method to optimise sparse matrix kernels for reconfigurable accelerators, through instance directed tuning - the tuning of reconfigurable architectures based on a sparse matrix instance. First, we present two novel reconfigurable architectures for the Conjugate Gradient Method that are optimised based on the problem dimension and sparsity pattern. These architectures provide the context for illustrating the opportunities and challenges for tuning sparse matrix kernels, which guide the design of the proposed method. Second, we introduce CASK, a novel framework for sparse matrix kernels on reconfigurable accelerators. CASK is: (1) instance directed, since it can account for differences in the matrix instances to generate and select adequate architectures; (2) unified, as it can be applied to a broad range of kernels and optimisations; (3) systematic, since it can support optimisations at multiple levels of encompassed reconfigurable architectures; and (4) automated, since it can operate with minimal user input, encapsulating and simplifying the tuning process. Third, we demonstrate the benefits of the proposed approach, by applying it to the Sparse Matrix Vector Multiplication kernel: (1) to tune a novel parametric reconfigurable architecture, resulting in up to 2 times energy efficiency gains compared to optimised GPU and Xeon Phi implementations; (2) to include a novel compression method for nonzero values, resulting in up to 2.5 times compression ratio compared to the Compressed Sparse Row format; and (3) to tune a novel architecture for the block diagonal sparsity pattern arising in the Finite Element Method, enabling larger problems to be supported with up to 3 times speedup compared to an optimised CPU implementation.
7

Segura, Salvador Albert. "High-performance and energy-efficient irregular graph processing on GPU architectures." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671449.

Abstract:
Graph processing is an established and prominent domain that is the foundation of new emerging applications in areas such as Data Analytics and Machine Learning, empowering applications such as road navigation, social networks and automatic speech recognition. The large amount of data employed in these domains requires high throughput architectures such as GPGPU. Although the processing of large graph-based workloads exhibits a high degree of parallelism, memory access patterns tend to be highly irregular, leading to poor efficiency due to memory divergence. In order to ameliorate these issues, GPGPU graph applications perform stream compaction operations which process active nodes/edges so subsequent steps work on a compacted dataset. We propose to offload this task to the Stream Compaction Unit (SCU) hardware extension tailored to the requirements of these operations, which additionally performs pre-processing by filtering and reordering the elements processed. We show that memory divergence inefficiencies prevail in GPGPU irregular graph-based applications, yet we find that it is possible to relax the strict relationship between thread and processed data to enable new optimizations. As such, we propose the Irregular accesses Reorder Unit (IRU), a novel hardware extension integrated in the GPU pipeline that reorders and filters data processed by the threads on irregular accesses, improving memory coalescing. Finally, we leverage the strengths of both previous approaches to achieve synergistic improvements. We do so by proposing the IRU-enhanced SCU (ISCU), which employs the efficient pre-processing mechanisms of the IRU to improve SCU stream compaction efficiency and to overcome NoC throughput limitations due to SCU pre-processing operations. We evaluate the ISCU with state-of-the-art graph-based applications, achieving a 2.2x performance improvement and 10x energy efficiency.
8

Yee, Wai Min. "Cache Design for a Hardware Accelerated Sparse Texture Storage System." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1197.

Abstract:
Hardware texture mapping is essential for real-time rendering. Unfortunately, memory bandwidth and latency often bound performance in current graphics architectures. Bandwidth consumption can be reduced by compressing the texture map or by using a cache. However, the way a texture map occupies memory and how it is accessed affect the pattern of memory accesses, which in turn affects cache performance. Thus texture compression schemes and cache architectures must be designed in conjunction with each other. We define a sparse texture to be a texture where a substantial percentage of the texture is constant. Sparse textures are of interest as they occur often, and they are used as parts of more general texture compression schemes. We present a hardware-compatible implementation of sparse textures based on B-tree indexing and explore cache designs for it. We demonstrate that it is possible to have the bandwidth consumption and miss rate due to the texture data alone scale with the area of the region of interest. We also show that the additional bandwidth consumption and hideable latency due to the B-tree indices are low. Furthermore, the caches necessary for these textures can be quite small.
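For illustration, a toy sparse-texture fetch (a dictionary stands in for the thesis's B-tree index; the block size and tile layout are assumptions for the example):

    def sample_sparse_texture(stored_blocks, constant_texel, u, v, block=8):
        # Only tiles that differ from the texture's constant colour are stored.
        # A lookup that misses the index returns the constant without touching
        # texel memory, so bandwidth scales with the non-constant region.
        tile = stored_blocks.get((u // block, v // block))
        if tile is None:
            return constant_texel
        return tile[v % block][u % block]

    blocks = {(0, 0): [[(255, 0, 0)] * 8 for _ in range(8)]}  # one stored 8x8 tile
    print(sample_sparse_texture(blocks, (0, 0, 0), u=3, v=3))   # hits the stored tile
    print(sample_sparse_texture(blocks, (0, 0, 0), u=40, v=9))  # constant region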
9

Mantell, Rosemary Genevieve. "Accelerated sampling of energy landscapes." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/267990.

Abstract:
In this project, various computational energy landscape methods were accelerated using graphics processing units (GPUs). Basin-hopping global optimisation was treated using a version of the limited-memory BFGS algorithm adapted for CUDA, in combination with GPU-acceleration of the potential calculation. The Lennard-Jones potential was implemented using CUDA, and an interface to the GPU-accelerated AMBER potential was constructed. These results were then extended to form the basis of a GPU-accelerated version of hybrid eigenvector-following. The doubly-nudged elastic band method was also accelerated using an interface to the potential calculation on GPU. Additionally, a local rigid body framework was adapted for GPU hardware. Tests were performed for eight biomolecules represented using the AMBER potential, ranging in size from 81 to 22,811 atoms, and the effects of minimiser history size and local rigidification on the overall efficiency were analysed. Improvements relative to CPU performance of up to two orders of magnitude were obtained for the largest systems. These methods have been successfully applied to both biological systems and atomic clusters. An existing interface between a code for free energy basin-hopping and the SuiteSparse package for sparse Cholesky factorisation was refined, validated and tested. Tests were performed for both Lennard-Jones clusters and selected biomolecules represented using the AMBER potential. Significant acceleration of the vibrational frequency calculations was achieved, with negligible loss of accuracy, relative to the standard diagonalisation procedure. For the larger systems, exploiting sparsity reduces the computational cost by factors of 10 to 30. The acceleration of these computational energy landscape methods opens up the possibility of investigating much larger and more complex systems than previously accessible. A wide array of new applications are now computationally feasible.
10

Chen, Dong. "Acceleration of the spatial selective excitation of MRI via sparse approximation." 2009. https://mediatum2.ub.tum.de/node?id=956913.


Books on the topic "Sparse Accelerator"

1

United States. National Aeronautics and Space Administration, ed. Arc-driven rail accelerator research: Final report. Tuskegee, Ala.: Mechanical Engineering Dept., Tuskegee University, 1989.

2

Zana, Lynnette M. Rail accelerators for space transportation: An experimental investigation. [Washington, D.C.]: National Aeronautics and Space Administration, Scientific and Technical Information Branch, 1986.

3

United States. National Aeronautics and Space Administration, ed. Space Experiments with Particle Accelerators (SEPAC): Final report. [Washington, DC]: National Aeronautics and Space Administration, 1994.

4

Bauer, Dominique, and Camilla Murgia, eds. Ephemeral Spectacles, Exhibition Spaces and Museums. NL Amsterdam: Amsterdam University Press, 2021. http://dx.doi.org/10.5117/9789463720908.

Abstract:
This book examines ephemeral exhibitions from 1750 to 1918. In an era of acceleration and elusiveness, these transient spaces functioned as microcosms in which reality was shown, simulated, staged, imagined, experienced and known. They therefore had a dimension of spectacle to them, as the volume demonstrates. Against this backdrop, the different chapters deal with a plethora of spaces and spatial installations: the Wunderkammer, the spectacle garden, cosmoramas and panoramas, the literary space, the temporary museum, and the alternative exhibition space.
5

Blanchard, Robert C. Preliminary OARE absolute acceleration measurements on STS-50. Hampton, Va: National Aeronautics and Space Administration, Langley Research Center, 1993.

6

Norfleet, William T., and Lyndon B. Johnson Space Center, eds. Issues on human acceleration tolerance after long-duration space flights. Houston, Texas: National Aeronautics and Space Administration, Lyndon B. Johnson Space Center, 1992.

7

DeLombard, Richard. Quick look report on acceleration measurements on Mir space station during Mir-16. Cleveland, Ohio: NASA Lewis Research Center, 1995.

8

DeLombard, Richard, and United States. National Aeronautics and Space Administration, eds. SAMS acceleration measurements on Mir from June to November 1995. [Washington, D.C.]: National Aeronautics and Space Administration, 1996.


Book chapters on the topic "Sparse Accelerator"

1

Rabbi, Fazlay, Christopher S. Daley, Hasan Metin Aktulga, and Nicholas J. Wright. "Evaluation of Directive-Based GPU Programming Models on a Block Eigensolver with Consideration of Large Sparse Matrices." In Accelerator Programming Using Directives, 66–88. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49943-3_4.

2

Wang, Bo, Sheng Ma, Yuan Yuan, Yi Dai, Wei Jiang, Xiang Hou, Xiao Yi, and Rui Xu. "SparG: A Sparse GEMM Accelerator for Deep Learning Applications." In Algorithms and Architectures for Parallel Processing, 529–47. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-22677-9_28.

3

Anzalone, Erik, Maurizio Capra, Riccardo Peloso, Maurizio Martina, and Guido Masera. "Low-Power Hardware Accelerator for Sparse Matrix Convolution in Deep Neural Network." In Progresses in Artificial Intelligence and Neural Systems, 79–89. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5093-5_8.

4

Meng, Zhaoteng, Long Xiao, Xiaoyao Gao, Zhan Li, Lin Shu, and Jie Hao. "BitHist: A Precision-Scalable Sparse-Awareness DNN Accelerator Based on Bit Slices Products Histogram." In Euro-Par 2023: Parallel Processing, 289–303. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39698-4_20.

5

Wang, Bo, Sheng Ma, Zhong Liu, Libo Huang, Yuan Yuan, and Yi Dai. "SADD: A Novel Systolic Array Accelerator with Dynamic Dataflow for Sparse GEMM in Deep Learning." In Lecture Notes in Computer Science, 42–53. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21395-3_4.

6

Alonso, Daniel. "Data Innovation Spaces." In The Elements of Big Data Value, 211–42. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68176-0_9.

Abstract:
Within the European Big Data Ecosystem, cross-organisational and cross-sectorial experimentation and innovation environments play a central role. European Innovation Spaces (or i-Spaces for short) are the main elements to ensure that research on big data value technologies and novel applications can be quickly tested, piloted and exploited for the benefit of all stakeholders. In particular, i-Spaces enable stakeholders to develop new businesses facilitated by advanced Big Data Value (BDV) technologies, applications and business models, bringing together all blocks, actors and functionalities expected to provide IT infrastructure, support and assistance, data protection, privacy and governance, community building and linkages with other innovation spaces, as well as incubation and accelerator services. Thereby, i-Spaces contribute to building a community, providing a catalyst for engagement and acting as incubators and accelerators of data-driven innovation, with cross-border collaborations as a key aspect to fully unleash the potential of data to support the uptake of European AI and related technologies.
7

Bordry, F., L. Bottura, A. Milanese, D. Tommasini, E. Jensen, Ph Lebrun, L. Tavian, et al. "Accelerator Engineering and Technology: Accelerator Technology." In Particle Physics Reference Library, 337–517. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-34245-6_8.

Abstract:
Magnets are at the core of both circular and linear accelerators. The main function of a magnet is to guide the charged particle beam by virtue of the Lorentz force, F = q(v × B), where q is the electrical charge of the particle, v its velocity, and B the magnetic field induction. The trajectory of a particle in the field hence depends on the particle velocity and on the spatial distribution of the field. The simplest case is that of a uniform magnetic field with a single component and velocity v normal to it, in which case the particle trajectory is a circle. A uniform field thus has a pure bending effect on a charged particle, and the magnet that generates it is generally referred to as a dipole.
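In LaTeX form, the restored expression, together with the standard radius of the resulting circular orbit for motion perpendicular to a uniform field (textbook relations, added here for reference):

    \vec{F} = q\,\vec{v} \times \vec{B},
    \qquad
    r = \frac{p_{\perp}}{|q|\,B},

where p_{\perp} is the particle's momentum component perpendicular to B.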
8

Gajurel, Aavaas, Sushil J. Louis, Rui Wu, Lee Barford, and Frederick C. Harris. "GPU Acceleration of Sparse Neural Networks." In Advances in Intelligent Systems and Computing, 323–30. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70416-2_41.

9

Puu, Tönu. "Multiplier-Accelerator Models Revisited." In Economics of Space and Time, 145–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-642-60877-3_8.

10

Maeda, Hiroshi, and Daisuke Takahashi. "Parallel Sparse Matrix-Vector Multiplication Using Accelerators." In Computational Science and Its Applications – ICCSA 2016, 3–18. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42108-7_1.


Conference papers on the topic "Sparse Accelerator"

1

Koul, Kalhan, Maxwell Strange, Jackson Melchert, Alex Carsello, Yuchen Mei, Olivia Hsu, Taeyoung Kong, et al. "Onyx: A Programmable Accelerator for Sparse Tensor Algebra." In 2024 IEEE Hot Chips 36 Symposium (HCS), 1–91. IEEE, 2024. http://dx.doi.org/10.1109/hcs61935.2024.10665150.

2

Lai, Yu-Hsuan, Shanq-Jang Ruan, Ming Fang, Edwin Naroska, and Jeng-Lun Shieh. "A Throughput-Optimized Accelerator for Submanifold Sparse Convolutional Networks." In 2024 IEEE 13th Global Conference on Consumer Electronics (GCCE), 1010–11. IEEE, 2024. http://dx.doi.org/10.1109/gcce62371.2024.10760758.

3

Li, Zuohao, Yiwan Lai, and Hao Zhang. "Energy Efficient FPGA-Based Accelerator for Dynamic Sparse Transformer." In 2024 13th International Conference on Communications, Circuits and Systems (ICCCAS), 7–12. IEEE, 2024. http://dx.doi.org/10.1109/icccas62034.2024.10652850.

4

Luo, Shengbai, Bo Wang, Yihao Shi, Xueyi Zhang, Qingshan Xue, and Sheng Ma. "Sparm: A Sparse Matrix Multiplication Accelerator Supporting Multiple Dataflows." In 2024 IEEE 35th International Conference on Application-specific Systems, Architectures and Processors (ASAP), 122–30. IEEE, 2024. http://dx.doi.org/10.1109/asap61560.2024.00034.

5

Li, Zhengke, Wendong Mao, Siyu Zhang, Qiwei Dong, and Zhongfeng Wang. "An Efficient Sparse Hardware Accelerator for Spike-Driven Transformer." In 2024 IEEE Asia-Pacific Conference on Applied Electromagnetics (APACE), 250–53. IEEE, 2024. https://doi.org/10.1109/apace62360.2024.10877394.

6

Mao, Yingchang, Qiang Liu, and Ray C. C. Cheung. "MSCA: A Multi-Grained Sparse Convolution Accelerator for DNN Training." In 2024 IEEE 35th International Conference on Application-specific Systems, Architectures and Processors (ASAP), 34–35. IEEE, 2024. http://dx.doi.org/10.1109/asap61560.2024.00019.

7

Ma, Shenghong, Jinwei Xu, Jingfei Jiang, Yaohua Wang, and Dongsheng Li. "Funnel: An Efficient Sparse Attention Accelerator with Multi-Dataflow Fusion." In 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA), 1311–18. IEEE, 2024. https://doi.org/10.1109/ispa63168.2024.00176.

8

Xu, Xiangzhi, Qi Liu, Wenjin Huang, WenLu Peng, and Yihua Huang. "SpGCN: An FPGA-Based Graph Convolutional Network Accelerator for Sparse Graphs." In 2024 IEEE 32nd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 216. IEEE, 2024. http://dx.doi.org/10.1109/fccm60383.2024.00037.

9

Ansarmohammadi, Ali, Seyed Ahmad Mirsalari, Reza Hojabr, Mostafa E. Salehi Nasab, and M. Hasan Najafi. "BISQ: A Bit-level Sparse Quantized Accelerator For Embedded Deep Neural Networks." In 2024 1st International Conference on Innovative Engineering Sciences and Technological Research (ICIESTR), 1–6. IEEE, 2024. https://doi.org/10.1109/iciestr60916.2024.10798342.

10

Feldmann, Axel, Courtney Golden, Yifan Yang, Joel S. Emer, and Daniel Sanchez. "Azul: An Accelerator for Sparse Iterative Solvers Leveraging Distributed On-Chip Memory." In 2024 57th IEEE/ACM International Symposium on Microarchitecture (MICRO), 643–56. IEEE, 2024. https://doi.org/10.1109/micro61859.2024.00054.


Reports on the topic "Sparse Accelerator"

1

Lee, L. Solving Large Sparse Linear Systems in End-to-end Accelerator Structure Simulations. Office of Scientific and Technical Information (OSTI), January 2004. http://dx.doi.org/10.2172/826910.

2

Golub, Gene, and Kwok Ko. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling. Office of Scientific and Technical Information (OSTI), March 2009. http://dx.doi.org/10.2172/950471.

3

Andrzejewski, D. Accelerated Gibbs Sampling for Infinite Sparse Factor Analysis. Office of Scientific and Technical Information (OSTI), September 2011. http://dx.doi.org/10.2172/1026471.

4

Garg, Raveesh, Eric Qin, Francisco Martinez, Robert Guirado, Akshay Jain, Sergi Abadal, Jose Abellan, et al. Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1821960.

5

Smith, Karl. Space Accelerator Engineering at Los Alamos. Office of Scientific and Technical Information (OSTI), June 2024. http://dx.doi.org/10.2172/2377320.

6

Ottinger, M. B., T. Tajima, and K. Hiramoto. Space charge tracking code for a synchrotron accelerator. Office of Scientific and Technical Information (OSTI), June 1997. http://dx.doi.org/10.2172/491621.

7

Nguyen, Dinh Cong, and John W. Lewellen. High-Power Electron Accelerators for Space (and other) Applications. Office of Scientific and Technical Information (OSTI), May 2016. http://dx.doi.org/10.2172/1291275.

8

Kishek, Rami, Santiago Bernal, Timothy Koeth, and Irving Haber. Physics of Space Charge for Advanced Accelerators - Closeout Report. Office of Scientific and Technical Information (OSTI), June 2015. http://dx.doi.org/10.2172/1186737.

9

Okonechnikov, Konstantin, James Amundson, and Alexandru Macridin. Transverse space charge effect calculation in the Synergia accelerator modeling toolkit. Office of Scientific and Technical Information (OSTI), September 2009. http://dx.doi.org/10.2172/968693.

10

Barnard, J. J., and S. M. Lund. Course Notes: United States Particle Accelerator School Beam Physics with Intense Space-Charge. Office of Scientific and Technical Information (OSTI), May 2008. http://dx.doi.org/10.2172/941431.
