Academic literature on the topic "Parallel code optimization"
Below are thematic lists of articles, books, theses, conference papers, and other academic sources on the topic "Parallel code optimization". Where the metadata makes them available, the full text (PDF) and abstract of each publication are linked.
Journal articles on the topic "Parallel code optimization"
Özcan, Ender, and Esin Onbaşioğlu. "Memetic Algorithms for Parallel Code Optimization". International Journal of Parallel Programming 35, no. 1 (December 2, 2006): 33–61. http://dx.doi.org/10.1007/s10766-006-0026-x.
Luo, Hao, Guoyang Chen, Pengcheng Li, Chen Ding, and Xipeng Shen. "Data-centric combinatorial optimization of parallel code". ACM SIGPLAN Notices 51, no. 8 (November 9, 2016): 1–2. http://dx.doi.org/10.1145/3016078.2851182.
Bailey, Duane A., Janice E. Cuny, and Bruce B. MacLeod. "Reducing communication overhead: A parallel code optimization". Journal of Parallel and Distributed Computing 4, no. 5 (October 1987): 505–20. http://dx.doi.org/10.1016/0743-7315(87)90021-9.
Shang, Zhi. "Large-Scale CFD Parallel Computing Dealing with Massive Mesh". Journal of Engineering 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/850148.
Özturan, Can, Balaram Sinharoy, and Boleslaw K. Szymanski. "Compiler Technology for Parallel Scientific Computation". Scientific Programming 3, no. 3 (1994): 201–25. http://dx.doi.org/10.1155/1994/243495.
Kiselev, E. A., P. N. Telegin, and A. V. Baranov. "Impact of Parallel Code Optimization on Computer Power Consumption". Lobachevskii Journal of Mathematics 44, no. 12 (December 2023): 5306–19. http://dx.doi.org/10.1134/s1995080223120211.
Safarik, Jakub, and Vaclav Snasel. "Acceleration of Particle Swarm Optimization with AVX Instructions". Applied Sciences 13, no. 2 (January 4, 2023): 734. http://dx.doi.org/10.3390/app13020734.
Chowdhary, K. R., Rajendra Purohit, and Sunil Dutt Purohit. "Source-to-source translation for code-optimization". Journal of Information and Optimization Sciences 44, no. 3 (2023): 407–16. http://dx.doi.org/10.47974/jios-1350.
Wang, Shengyue, Pen-Chung Yew, and Antonia Zhai. "Code Transformations for Enhancing the Performance of Speculatively Parallel Threads". Journal of Circuits, Systems and Computers 21, no. 02 (April 2012): 1240008. http://dx.doi.org/10.1142/s0218126612400087.
Siow, C. L., Jaswar, and Efi Afrizal. "Computational Fluid Dynamic Using Parallel Loop of Multi-Cores Processor". Applied Mechanics and Materials 493 (January 2014): 80–85. http://dx.doi.org/10.4028/www.scientific.net/amm.493.80.
Texto completoTesis sobre el tema "Parallel code optimization"
Cordeiro, Silvio Ricardo. "Code profiling and optimization in transactional memory systems". Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/97866.
Transactional Memory has shown itself to be a promising paradigm for implementing shared-memory concurrent applications that eschew a lock-based model of data synchronization. Rather than conditioning exclusive access on the value of a lock shared across concurrent threads, Transactional Memory attempts to execute critical sections optimistically, rolling back the modifications in the event of a data-access conflict. However, while the lock-based approach has acquired a significant body of debugging, profiling, and automated optimization tools (as one of the oldest and most researched synchronization techniques), the field of Transactional Memory is still comparatively recent, and programmers usually face unguided manual tuning of their transactional applications when efficiency problems arise. We propose a system in which code profiling in a simulated hardware implementation of Transactional Memory is used to characterize a transactional application, which forms the basis for the automated tuning of the underlying speculative system for the efficient execution of that particular application. We also propose a profile-guided approach to the scheduling of threads in a software-based implementation of Transactional Memory, using collected data to predict the likelihood of conflicts and determine which thread to schedule based on this prediction. We present the results achieved under both designs.
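To make the paradigm contrast concrete: under locks, exclusive access hinges on a shared mutex; under Transactional Memory, the critical section runs optimistically and is rolled back on conflict. A minimal sketch, assuming GCC's experimental TM support (compile with -fgnu-tm); it is illustrative only, not the simulated hardware TM studied in the thesis:

```c
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Lock-based version: exclusive access is conditioned on the mutex. */
void increment_locked(void) {
    pthread_mutex_lock(&lock);
    counter++;
    pthread_mutex_unlock(&lock);
}

/* Transactional version: the critical section executes optimistically;
 * the TM runtime rolls back and retries on a data-access conflict. */
void increment_transactional(void) {
    __transaction_atomic {
        counter++;
    }
}
```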
Hong, Changwan. "Code Optimization on GPUs". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1557123832601533.
Faber, Peter. "Code Optimization in the Polyhedron Model - Improving the Efficiency of Parallel Loop Nests". Dissertation, Universität Passau, 2007. http://www.opus-bayern.de/uni-passau/volltexte/2008/1251/.
Texto completoFassi, Imen. "XFOR (Multifor) : A new programming structure to ease the formulation of efficient loop optimizations". Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAD043/document.
We propose a new programming structure named XFOR (Multifor), dedicated to data-reuse-aware programming. It allows several for-loops to be handled simultaneously and their respective iteration domains to be mapped onto each other. Additionally, XFOR eases the application and composition of loop transformations. Experiments show that XFOR codes provide significant speed-ups when compared to the original code versions, but also to the Pluto-optimized versions. We implemented the XFOR structure through the development of three software tools: (1) a source-to-source compiler named IBB (for Iterate-But-Better!), which automatically translates any C/C++ code containing XFOR-loops into an equivalent code where the XFOR-loops have been translated into for-loops; IBB also takes advantage of the optimizations implemented in the polyhedral code generator CLooG, which it invokes to generate for-loops from an OpenScop specification; (2) an XFOR programming environment named XFOR-WIZARD, which assists the programmer in rewriting a program with classical for-loops into an equivalent but more efficient program using XFOR-loops; (3) a tool named XFORGEN, which automatically generates XFOR-loops from any OpenScop representation of transformed loop nests produced by an automatic optimizer.
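XFOR syntax is only accepted by the IBB translator, so the following plain-C sketch merely illustrates the kind of iteration-domain mapping XFOR lets the programmer express: two loop nests over the same array are fused so the consumer statement reads each a[i] while it is still in cache. The arrays and the computation are hypothetical.

```c
#define N 1000

/* Before: two separate loops; a[] is traversed twice. */
void separate(double a[N], double b[N]) {
    for (int i = 1; i < N; i++) a[i] += a[i - 1];  /* producer pass */
    for (int i = 0; i < N; i++) b[i] = 0.5 * a[i]; /* consumer pass */
}

/* After: the two iteration domains are mapped onto one loop, so each
 * a[i] is consumed immediately after it is produced (better reuse). */
void fused(double a[N], double b[N]) {
    b[0] = 0.5 * a[0];
    for (int i = 1; i < N; i++) {
        a[i] += a[i - 1];
        b[i] = 0.5 * a[i]; /* reuses a[i] while it is still hot */
    }
}
```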
Irigoin, François. "Partitionnement des boucles imbriquées : une technique d'optimisation pour les programmes scientifiques". Paris 6, 1987. http://www.theses.fr/1987PA066437.
Texto completoHe, Guanlin. "Parallel algorithms for clustering large datasets on CPU-GPU heterogeneous architectures". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG062.
Clustering, which aims at achieving natural groupings of data, is a fundamental and challenging task in machine learning and data mining. Numerous clustering methods have been proposed in the past, among which k-means is one of the most famous and commonly used due to its simplicity and efficiency. Spectral clustering is a more recent approach that usually achieves higher clustering quality than k-means. However, classical algorithms for spectral clustering suffer from a lack of scalability due to their high complexity in terms of number of operations and memory space requirements. This scalability challenge can be addressed by applying approximation methods or by employing parallel and distributed computing. The objective of this thesis is to accelerate spectral clustering and make it scalable to large datasets by combining representatives-based approximation with parallel computing on CPU-GPU platforms. Considering different scenarios, we propose several parallel processing chains for large-scale spectral clustering. We design optimized parallel algorithms and implementations for each module of the proposed chains: parallel k-means on CPU and GPU, parallel spectral clustering on GPU using a sparse storage format, parallel filtering of data noise on GPU, etc. Our various experiments reach high performance and validate the scalability of each module and of the complete chains.
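As a concrete instance of the data parallelism such processing chains exploit, the assignment step of k-means is independent across points. A minimal CPU sketch with OpenMP (compile with -fopenmp); the function and parameter names are invented for illustration, and this is not the author's GPU implementation:

```c
#include <float.h>

/* Assign each of n dim-dimensional points (row-major in x) to the
 * nearest of k centroids c. Points are independent, so the outer
 * loop parallelizes directly. */
void assign_clusters(const double *x, const double *c,
                     int *label, int n, int k, int dim) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        double best = DBL_MAX;
        int arg = 0;
        for (int j = 0; j < k; j++) {
            double dist = 0.0;
            for (int t = 0; t < dim; t++) {
                double diff = x[i * dim + t] - c[j * dim + t];
                dist += diff * diff;
            }
            if (dist < best) { best = dist; arg = j; }
        }
        label[i] = arg;
    }
}
```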
Fang, Juing. "Décodage pondère des codes en blocs et quelques sujets sur la complexité du décodage". Paris, ENST, 1987. http://www.theses.fr/1987ENST0005.
Tagliavini, Giuseppe. "Optimization Techniques for Parallel Programming of Embedded Many-Core Computing Platforms". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amsdottorato.unibo.it/8068/1/TESI.pdf.
Texto completoDrebes, Andi. "Dynamic optimization of data-flow task-parallel applications for large-scale NUMA systems". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066330/document.
Within the last decade, microprocessor development reached a point at which higher clock rates and more complex micro-architectures became less energy-efficient, such that power consumption and energy density were pushed beyond reasonable limits. As a consequence, the industry has shifted to more energy-efficient multi-core designs, integrating multiple processing units (cores) on a single chip. The number of cores is expected to grow exponentially, and future systems are expected to integrate thousands of processing units. In order to provide sufficient memory bandwidth in these systems, main memory is physically distributed over multiple memory controllers with non-uniform access to memory (NUMA). Past research has identified programming models based on fine-grained, dependent tasks as a key technique to unleash the parallel processing power of massively parallel general-purpose computing architectures. However, the execution of task-parallel programs on architectures with non-uniform memory access, and the dynamic optimizations needed to mitigate NUMA effects, have received little attention. In this thesis, we explore the main factors affecting performance and data locality of task-parallel programs and propose a set of transparent, portable, and fully automatic on-line mechanisms that map tasks to cores and data to memory controllers in order to improve data locality and performance. Placement decisions are based on information about point-to-point data dependences, readily available in the run-time systems of modern task-parallel programming frameworks. The experimental evaluation of these techniques is conducted on our implementation in the run-time of the OpenStream language and a set of high-performance scientific benchmarks. Finally, we designed and implemented Aftermath, a tool for performance analysis and debugging of task-parallel applications and run-times.
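The placement decisions the thesis automates can be made explicit by hand with libnuma. A minimal sketch of generic Linux API usage (compile with -lnuma), assuming at least one NUMA node; this is not OpenStream's run-time mechanism:

```c
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return EXIT_FAILURE;
    }
    size_t bytes = 1 << 20;
    /* Allocate the buffer on node 0 and run the thread on the same
     * node, so all accesses stay local to one memory controller. */
    double *buf = numa_alloc_onnode(bytes, 0);
    if (buf == NULL)
        return EXIT_FAILURE;
    numa_run_on_node(0);
    for (size_t i = 0; i < bytes / sizeof(double); i++)
        buf[i] = (double)i; /* node-local accesses */
    numa_free(buf, bytes);
    return EXIT_SUCCESS;
}
```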
Child, Ryan. "Performance and Power Optimization of Parallel Discrete Event Simulations Using DVFS". University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1342730759.
Texto completoLibros sobre el tema "Parallel code optimization"
Faber, Peter. Code Optimization in the Polyhedron Model - Improving the Efficiency of Parallel Loop Nests. Lulu Press, Inc., 2009.
Faber, Peter. Code Optimization in the Polyhedron Model - Improving the Efficiency of Parallel Loop Nests. Paperback ed. Lulu Press, Inc., 2009.
Performance Optimization of Numerically Intensive Codes (Software, Environments and Tools). Society for Industrial Mathematics, 2001.
Bäck, Thomas. Evolutionary Algorithms in Theory and Practice. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195099713.001.0001.
Texto completoCapítulos de libros sobre el tema "Parallel code optimization"
Dekel, Eliezer, Simeon Ntafos, and Shie-Tung Peng. "Parallel tree techniques and code optimization". In VLSI Algorithms and Architectures, 205–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/3-540-16766-8_18.
Andersson, Niclas, and Peter Fritzson. "Object Oriented Mathematical Modelling and Compilation to Parallel Code". In Applied Optimization, 99–182. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4613-3400-2_5.
Sarkar, Vivek. "Challenges in Code Optimization of Parallel Programs". In Lecture Notes in Computer Science, 1. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00722-4_1.
Taylor, Ryan, and Xiaoming Li. "A Code Merging Optimization Technique for GPU". In Languages and Compilers for Parallel Computing, 218–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36036-7_15.
Martinez Caamaño, Juan Manuel, Willy Wolff, and Philippe Clauss. "Code Bones: Fast and Flexible Code Generation for Dynamic and Speculative Polyhedral Optimization". In Euro-Par 2016: Parallel Processing, 225–37. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43659-3_17.
Avis, David, and Gary Roumanis. "A Portable Parallel Implementation of the lrs Vertex Enumeration Code". In Combinatorial Optimization and Applications, 414–29. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03780-6_36.
Wcisło, R., J. Kitowski, and J. Mościński. "Parallelization of a code for animation of multi-object system". In Applied Parallel Computing: Industrial Computation and Optimization, 697–709. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-62095-8_75.
Damani, Sana, and Vivek Sarkar. "Common Subexpression Convergence: A New Code Optimization for SIMT Processors". In Languages and Compilers for Parallel Computing, 64–73. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72789-5_5.
Epshteyn, Arkady, María Jesús Garzaran, Gerald DeJong, David Padua, Gang Ren, Xiaoming Li, Kamen Yotov, and Keshav Pingali. "Analytic Models and Empirical Search: A Hybrid Approach to Code Optimization". In Languages and Compilers for Parallel Computing, 259–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-69330-7_18.
Taubert, Oskar, Marie Weiel, Daniel Coquelin, Anis Farshian, Charlotte Debus, Alexander Schug, Achim Streit, and Markus Götz. "Massively Parallel Genetic Optimization Through Asynchronous Propagation of Populations". In Lecture Notes in Computer Science, 106–24. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-32041-5_6.
Texto completoActas de conferencias sobre el tema "Parallel code optimization"
Sarkar, Vivek. "Code optimization of parallel programs". In Proceedings of the Sixth Annual IEEE/ACM International Symposium on Code Generation and Optimization (CGO 2008). New York, NY, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1356058.1356087.
Wang, Fang, Shixin Cheng, Wei Xu, and Haifeng Wang. "Design and Code Optimization of Parallel Concatenated Gallager Codes". In 2007 IEEE 18th International Symposium on Personal, Indoor and Mobile Radio Communications. IEEE, 2007. http://dx.doi.org/10.1109/pimrc.2007.4394240.
Buck, Ian. "GPU Computing: Programming a Massively Parallel Processor". In International Symposium on Code Generation and Optimization (CGO'07). IEEE, 2007. http://dx.doi.org/10.1109/cgo.2007.13.
Soliman, Karim, Marwa El Shenawy, and Ahmed Abou El Farag. "Loop unrolling effect on parallel code optimization". In ICFNDS '18: International Conference on Future Networks and Distributed Systems. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3231053.3231060.
Luo, Hao, Guoyang Chen, Pengcheng Li, Chen Ding, and Xipeng Shen. "Data-centric combinatorial optimization of parallel code". In PPoPP '16: 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2851141.2851182.
Dubey, A., and T. Clune. "Optimization of a parallel pseudospectral MHD code". In Proceedings, Frontiers '99: Seventh Symposium on the Frontiers of Massively Parallel Computation. IEEE, 1999. http://dx.doi.org/10.1109/fmpc.1999.750602.
Suriana, Patricia, Andrew Adams, and Shoaib Kamil. "Parallel associative reductions in Halide". In 2017 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2017. http://dx.doi.org/10.1109/cgo.2017.7863747.
Zhang, Yongpeng, and F. Mueller. "HiDP: A hierarchical data parallel language". In 2013 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2013. http://dx.doi.org/10.1109/cgo.2013.6494994.
Dewey, Kyle, Vineeth Kashyap, and Ben Hardekopf. "A parallel abstract interpreter for JavaScript". In 2015 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2015. http://dx.doi.org/10.1109/cgo.2015.7054185.
Lee, Yunsup, R. Krashinsky, V. Grover, S. W. Keckler, and K. Asanovic. "Convergence and scalarization for data-parallel architectures". In 2013 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2013. http://dx.doi.org/10.1109/cgo.2013.6494995.
Texto completoInformes sobre el tema "Parallel code optimization"
Hisley, Dixie M. Enabling Programmer-Controlled Combined Memory Consistency for Parallel Code Optimization. Fort Belvoir, VA: Defense Technical Information Center, July 2003. http://dx.doi.org/10.21236/ada416794.