Academic literature on the topic "Heterogenous programming"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, and conference proceedings, and other academic sources on the topic "Heterogenous programming".
Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Heterogenous programming"
Nowicki, Marek, Magdalena Ryczkowska, Łukasz Gorski, Michał Szynkiewicz, and Piotr Bała. "PCJ - a Java Library for Heterogenous Parallel Computing". WSEAS TRANSACTIONS ON COMPUTERS 21 (March 23, 2022): 81–87. http://dx.doi.org/10.37394/23205.2022.21.12.
Soto, Miguel A., Francesco Lelj, and Mark J. MacLachlan. "Programming permanent and transient molecular protection via mechanical stoppering". Chemical Science 10, no. 44 (2019): 10422–27. http://dx.doi.org/10.1039/c9sc03744f.
Blazewicz, Marek, Steven R. Brandt, Michal Kierzynka, Krzysztof Kurowski, Bogdan Ludwiczak, Jian Tao, and Jan Weglarz. "CaKernel – A Parallel Application Programming Framework for Heterogenous Computing Architectures". Scientific Programming 19, no. 4 (2011): 185–97. http://dx.doi.org/10.1155/2011/457030.
Wiggins, Keenan J., Christopher Scharer, and Jeremy M. Boss. "Memory B cells are a heterogenous population regulated by epigenetic programming". Journal of Immunology 208, no. 1_Supplement (May 1, 2022): 112.13. http://dx.doi.org/10.4049/jimmunol.208.supp.112.13.
Harshavardhan, K. S. "Programming in OpenCL and its advantages in a GPU Framework". International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (July 31, 2022): 3739–43. http://dx.doi.org/10.22214/ijraset.2022.45835.
Jankovics, Vince, Michael Garcia Ortiz, and Eduardo Alonso. "HetSAGE: Heterogenous Graph Neural Network for Relational Learning (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15803–4. http://dx.doi.org/10.1609/aaai.v35i18.17898.
Widiyanti, Syafira Chika, M. Dachyar, and Farizal. "Product Distribution Optimization in Food SMEs with Integer Linear Programming". MATEC Web of Conferences 248 (2018): 03014. http://dx.doi.org/10.1051/matecconf/201824803014.
Beck, Justin, John Harvey, Kristina Kaylen, Corrado Sala, Melinda Urban, Peter Vermeulen, Norman Wilken, Wei Xie, Dan Iliescu, and Pratik Mital. "Carnival Optimizes Revenue and Inventory Across Heterogenous Cruise Line Brands". INFORMS Journal on Applied Analytics 51, no. 1 (February 2021): 26–41. http://dx.doi.org/10.1287/inte.2020.1062.
Lin, Na, Huimin Yang, Ya Li, and Xuping Wang. "Scheduling multi-pattern precooling service resources for post-harvest fruits and vegetables using the adaptive large neighborhood search". Journal of Physics: Conference Series 2425, no. 1 (February 1, 2023): 012006. http://dx.doi.org/10.1088/1742-6596/2425/1/012006.
Mini, Darshana Sreedhar. "Satellites of Belonging". Middle East Journal of Culture and Communication 14, no. 1-2 (September 28, 2021): 81–111. http://dx.doi.org/10.1163/18739865-01401002.
Texto completoTesis sobre el tema "Heterogenous programming"
Sodsong, Wasuwee. "Parallelization Techniques for Heterogeneous Multicores with Applications". Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17987.
Kainth, Haresh S. "A data dependency recovery system for a heterogeneous multicore processor". Thesis, University of Derby, 2014. http://hdl.handle.net/10545/313343.
Diarra, Rokiatou. "Automatic Parallelization for Heterogeneous Embedded Systems". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS485.
Texto completoRecent years have seen an increase of heterogeneous architectures combining multi-core CPUs with accelerators such as GPU, FPGA, and Intel Xeon Phi. GPU can achieve significant performance for certain categories of application. Nevertheless, achieving this performance with low-level APIs (e.g. CUDA, OpenCL) requires to rewrite the sequential code, to have a good knowledge of GPU architecture, and to apply complex optimizations that are sometimes not portable. On the other hand, directive-based programming models (e.g. OpenACC, OpenMP) offer a high-level abstraction of the underlying hardware, thus simplifying the code maintenance and improving productivity. They allow users to accelerate their sequential codes on GPU by simply inserting directives. OpenACC/OpenMP compilers have the daunting task of applying the necessary optimizations from the user-provided directives and generating efficient codes that take advantage of the GPU architecture. Although the OpenACC / OpenMP compilers are mature and able to apply some optimizations automatically, the generated code may not achieve the expected speedup as the compilers do not have a full view of the whole application. Thus, there is generally a significant performance gap between the codes accelerated with OpenACC/OpenMP and those hand-optimized with CUDA/OpenCL. To help programmers for speeding up efficiently their legacy sequential codes on GPU with directive-based models and broaden OpenMP/OpenACC impact in both academia and industry, several research issues are discussed in this dissertation. We investigated OpenACC and OpenMP programming models and proposed an effective application parallelization methodology with directive-based programming approaches. Our application porting experience revealed that it is insufficient to simply insert OpenMP/OpenACC offloading directives to inform the compiler that a particular code region must be compiled for GPU execution. It is highly essential to combine offloading directives with loop parallelization constructs. Although current compilers are mature and perform several optimizations, the user may provide them more information through loop parallelization constructs clauses in order to get an optimized code. We have also revealed the challenge of choosing good loop schedules. The default loop schedule chosen by the compiler may not produce the best performance, so the user has to manually try different loop schedules to improve the performance. We demonstrate that OpenMP and OpenACC programming models can achieve best performance with lesser programming effort, but OpenMP/OpenACC compilers quickly reach their limit when the offloaded region code is computed/memory bound and contain several nested loops. In such cases, low-level languages may be used. We also discuss pointers aliasing problem in GPU codes and propose two static analysis tools that perform automatically at source level type qualifier insertion and scalar promotion to solve aliasing issues
Bhatia, Vishal. "Remote programming for heterogeneous sensor networks". Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/1091.
Di Domenico, Daniel. "HPSM: uma API em linguagem c++ para programas com laços paralelos com suporte a multi-CPUs e Multi-GPUs". Universidade Federal de Santa Maria, 2016. http://repositorio.ufsm.br/handle/1/12171.
Texto completoParallel architectures has been ubiquitous for some time now. However, the word ubiquitous can’t be applied to parallel programs, because there is a greater complexity to code them comparing to ordinary programs. This fact is aggravated when the programming also involves accelerators, like GPUs, which demand the use of tools with scpecific resources. Considering this setting, there are programming models that make easier the codification of parallel applications to explore accelerators, nevertheless, we don’t know APIs that allow implementing programs with parallel loops that can be processed simultaneously by multiple CPUs and multiple GPUs. This works presents a high-level C++ API called HPSM aiming to make easier and more efficient the codification of parallel programs intended to explore multi-CPU and multi-GPU architectures. Following this idea, the desire is to improve performance through the sum of resources. HPSM uses parallel loops and reductions implemented by three parallel back-ends, being Serial, OpenMP and StarPU. Our hypothesis estimates that scientific applications can explore heterogeneous processing in multi-CPU and multi-GPU to achieve a better performance than exploring just accelerators. Comparisons with other parallel programming interfaces demonstrated that HPSM can reduce a multi-CPU and multi-GPU code in more than 50%. The use of the new API can introduce impact to program performance, where experiments showed a variable overhead for each application, that can achieve a maximum value of 16,4%. The experimental results confirmed the hypothesis, because the N-Body, Hotspot e CFD applications achieved gains using just CPUs and just GPUs, as well as overcame the performance achieved by just accelerators (GPUs) through the combination of multi-CPU and multi-GPU.
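HPSM's actual interface is not reproduced in this listing, so the following C++/OpenMP sketch is only a hypothetical illustration of the idea the abstract describes (all names, such as hetero_dot and gpu_fraction, are made up): the iteration space of a single reduction loop is split so that a GPU processes one chunk while the host CPU threads process the rest.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical illustration (not the HPSM API): split one reduction loop
// between a GPU and the host CPU threads using plain OpenMP.
double hetero_dot(const std::vector<double>& a, const std::vector<double>& b,
                  double gpu_fraction = 0.5) {
    const std::size_t n = a.size();
    const std::size_t split = static_cast<std::size_t>(n * gpu_fraction);
    const double* ap = a.data();
    const double* bp = b.data();
    double gpu_sum = 0.0, cpu_sum = 0.0;

    #pragma omp parallel
    {
        // One host thread drives the GPU chunk, iterations [0, split) ...
        #pragma omp single nowait
        {
            #pragma omp target teams distribute parallel for \
                    map(to: ap[0:split], bp[0:split]) \
                    map(tofrom: gpu_sum) reduction(+: gpu_sum)
            for (std::size_t i = 0; i < split; ++i)
                gpu_sum += ap[i] * bp[i];
        }
        // ... while the remaining host threads process [split, n).
        #pragma omp for reduction(+: cpu_sum) schedule(static)
        for (std::size_t i = split; i < n; ++i)
            cpu_sum += ap[i] * bp[i];
    }
    return gpu_sum + cpu_sum;
}

int main() {
    std::vector<double> a(1 << 20, 1.0), b(1 << 20, 2.0);
    std::printf("dot = %.1f\n", hetero_dot(a, b));  // expected 2097152.0
    return 0;
}
```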
Dastgeer, Usman. "Skeleton Programming for Heterogeneous GPU-based Systems". Licentiate thesis, Linköpings universitet, PELAB - Laboratoriet för programmeringsomgivningar, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70234.
Planas Carbonell, Judit. "Programming models and scheduling techniques for heterogeneous architectures". Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/327036.
Texto completoActualment, hi ha una clara tendència per l'ús de sistemes heterogenis d'alt rendiment, ja que ofereixen una major potència de càlcul que els sistemes homogenis amb CPUs tradicionals. L'addició d'unitats especialitzades (acceleradors com ara GPGPUs) als sistemes amb CPUs s'ha convertit en una revolució en el món de la computació d'alt rendiment. Els sistemes heterogenis poden adaptar-se millor a les diferents necessitats de les aplicacions, ja que cada tipus d'arquitectura ofereix diferents característiques. Per tant, per maximitzar el rendiment, les aplicacions s'han de dividir en diverses parts d'acord amb els seus requeriments computacionals. Llavors, aquestes parts s'han d'executar al dispositiu que s'adapti millor a les seves necessitats. Per tant, l'heterogeneïtat introdueix una complexitat addicional en el desenvolupament d'aplicacions: d'una banda, els codis font s'han d'adaptar a les noves arquitectures i, de l'altra, la gestió de recursos es fa més complicada. Per exemple, múltiples espais de memòria que requereixen moviments explícits de dades o sincronitzacions addicionals entre diferents parts de codi que s'executen en diferents unitats. Per això, la programació i el manteniment del codi en sistemes heterogenis són extremadament complexos i cars. Tot i que hi ha diverses propostes per a la programació d'acceleradors, com CUDA o OpenCL, aquests models no resolen els reptes de programació descrits anteriorment, ja que exposen les característiques de baix nivell del hardware al programador. Per tant, els models de programació han de poder ocultar les complexitats dels acceleradors de cara al programador, proporcionant un entorn de desenvolupament homogeni. En aquest context, la tesi contribueix en dos aspectes fonamentals: primer, proposa un disseny per a gestionar de manera eficient l'execució d'aplicacions heterogènies i, segon, presenta diversos mecanismes de planificació per dividir l'execució d'aplicacions entre totes les unitats del sistema, per tal de maximitzar el rendiment i la utilització de recursos. La primera contribució proposa un disseny d'execució asíncron per gestionar els moviments de dades i sincronitzacions en acceleradors. Aquest enfocament s'ha desenvolupat en dos passos: primer, una proposta semi-asíncrona i després, una proposta totalment asíncrona per tal d'adaptar-se a les restriccions del hardware contemporani. Els resultats en sistemes multi-accelerador mostren que aquests enfocaments poden assolir el màxim rendiment esperat. Fins i tot, en determinats casos, poden superar el rendiment de codis nadius altament optimitzats. La segona contribució presenta quatre mecanismes de planificació diferents, enfocats a la programació heterogènia, per minimitzar el temps d'execució de les aplicacions. Per exemple, minimitzar la quantitat de dades compartides entre espais de memòria, o maximitzar la utilització de recursos mitjançant l'execució de cada porció de codi a la unitat que s'adapta millor. Els experiments s'han realitzat en diferents plataformes heterogènies, incloent CPUs, GPGPUs i dispositius Intel Xeon Phi. És particularment interessant analitzar com totes aquestes estratègies de planificació poden afectar el rendiment de l'aplicació. Com a resultat, es poden extreure tres conclusions generals: en primer lloc, el rendiment de l'aplicació no està garantit en les noves generacions de hardware. Per tant, els codis s'han d'actualitzar periòdicament a mesura que el hardware evoluciona. 
En segon lloc, la forma més eficient d'executar una aplicació en una plataforma heterogènia és dividir-la en porcions més petites i escollir la unitat que millor s'adapta per executar cada porció. Finalment, i probablement la conclusió més important, és que les exigències derivades de les dues primeres conclusions poden ser implementades dins de llibreries de sistema, de manera que la complexitat de programació d'arquitectures heterogènies quedi completament oculta per al programador.
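As a hypothetical illustration of the scheduling decisions discussed above (not code from the thesis; Device, Task, and pick_device are invented names), the following C++ sketch assigns each code portion to the device with the lowest estimated finish time, accounting for both compute cost and the data that would have to be moved to that device.

```cpp
#include <cstddef>
#include <cstdio>
#include <limits>
#include <string>
#include <vector>

// Hypothetical greedy scheduler sketch: each task goes to the device whose
// estimated finish time (compute time plus the cost of copying data that is
// not already resident there) is smallest.
struct Device {
    std::string name;
    double gflops;          // sustained compute rate
    double transfer_gbps;   // host<->device bandwidth (0 = host memory)
    double busy_until = 0;  // time at which the device becomes free
};

struct Task {
    double gflop;           // amount of work in the task
    double input_gb;        // data that may need to be transferred
};

int pick_device(const Task& t, const std::vector<Device>& devs) {
    int best = 0;
    double best_finish = std::numeric_limits<double>::max();
    for (std::size_t d = 0; d < devs.size(); ++d) {
        const double transfer = (devs[d].transfer_gbps > 0)
                                    ? t.input_gb / devs[d].transfer_gbps
                                    : 0.0;  // host memory: no copy needed
        const double finish =
            devs[d].busy_until + transfer + t.gflop / devs[d].gflops;
        if (finish < best_finish) { best_finish = finish; best = static_cast<int>(d); }
    }
    return best;
}

int main() {
    std::vector<Device> devs = {{"cpu", 200.0, 0.0}, {"gpu0", 4000.0, 12.0}};
    std::vector<Task> tasks = {{50.0, 0.1}, {0.5, 2.0}, {800.0, 0.5}};
    for (const Task& t : tasks) {
        const int d = pick_device(t, devs);
        const double transfer =
            (devs[d].transfer_gbps > 0) ? t.input_gb / devs[d].transfer_gbps : 0.0;
        devs[d].busy_until += transfer + t.gflop / devs[d].gflops;
        std::printf("task(%.1f GFLOP, %.1f GB) -> %s\n",
                    t.gflop, t.input_gb, devs[d].name.c_str());
    }
    return 0;
}
```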
Villalobos, Cristian Enrique Munoz. "Heterogeneous Parallelization of Quantum-Inspired Linear Genetic Programming". Pontifícia Universidade Católica do Rio de Janeiro, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=27791@1.
One of the main challenges of computer science is to get a computer to carry out a task that needs to be done without telling it how to do it. Genetic Programming (GP) addresses this challenge by starting from a high-level statement of what needs to be done and automatically creating a computer program that solves the problem. In this dissertation we develop an extension of the Quantum-Inspired Linear Genetic Programming model (QILGP), aiming to improve its efficiency and effectiveness in the search for solutions. To this end, the algorithm is first structured as a heterogeneous parallel system, accelerated by Graphics Processing Units (GPUs) and multiple CPU processors, reducing data-transfer times while maximizing the speed of the processes. Second, graphic visualization techniques are used to interpret the structure and the processes that the algorithm evolves and to understand the behavior of QILGP. We use high-performance computing features such as the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP), which are of vital importance when working with multiple processes, since it is necessary to design a topology with multiple levels of parallelism to avoid delaying the transfer of data to the local computer where the visualization is displayed. In addition to graphically representing the parameters of QILGP and their behavior over the generations, a 3D visualization for evolutionary robotics cases is presented, in which dynamic simulation tools such as the Bullet SDK and the OGRE graphics engine are used for rendering. This visualization serves as a tool for a case study in this dissertation.
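Since the abstract emphasizes combining MPI and OpenMP for multi-level parallelism, the sketch below shows that hybrid pattern in a self-contained C++ program (a generic illustration with a placeholder fitness function, not the dissertation's code): MPI ranks partition a population of candidate programs, and each rank evaluates its share with an OpenMP parallel loop before a global reduction.

```cpp
#include <algorithm>
#include <cstdio>
#include <mpi.h>

// Generic MPI + OpenMP hybrid sketch: the "population" is split across MPI
// ranks, each rank evaluates its chunk in parallel with OpenMP threads, and
// the best fitness is reduced across all ranks.
static double evaluate(int individual) {
    // Placeholder fitness function, for illustration only.
    return 1.0 / (1.0 + (individual % 97));
}

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int population = 10000;
    const int chunk = (population + size - 1) / size;
    const int begin = rank * chunk;
    const int end = std::min(population, begin + chunk);

    double local_best = 0.0;
    #pragma omp parallel for reduction(max: local_best)
    for (int i = begin; i < end; ++i)
        local_best = std::max(local_best, evaluate(i));

    double global_best = 0.0;
    MPI_Reduce(&local_best, &global_best, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("best fitness = %f\n", global_best);

    MPI_Finalize();
    return 0;
}
```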
Aji, Ashwin M. "Programming High-Performance Clusters with Heterogeneous Computing Devices". Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52366.
Guerreiro, Pedro Miguel Rito. "Visual programming in a heterogeneous multi-core environment". Master's thesis, Universidade de Évora, 2009. http://hdl.handle.net/10174/18505.
Books on the topic "Heterogenous programming"
Castrillón Mazo, Jerónimo, and Rainer Leupers. Programming Heterogeneous MPSoCs. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-00675-8.
Heterogeneous computing with OpenCL. Waltham, MA: Morgan Kaufmann, 2012.
Schwartz, David G. Cooperating heterogeneous systems. Boston: Kluwer Academic, 1995.
Buscar texto completoKarandikar, Abhay. Mobility Management in LTE Heterogeneous Networks. Singapore: Springer Singapore, 2017.
Schwartz, David G. Cooperating Heterogeneous Systems. Boston, MA: Springer US, 1995.
Parallel computing on heterogeneous networks. Hoboken, N.J: John Wiley, 2003.
Balakrishnan, Anantaram. The nozzle guide vane problem: Partitioning a heterogeneous inventory. West Lafayette, Ind: Institute for Research in the Behavioral, Economic, and Management Sciences, Krannert Graduate School of Management, Purdue University, 1986.
Purtilo, James M., and United States National Aeronautics and Space Administration, eds. Using an architectural approach to integrate heterogeneous, distributed software components. [Morgantown, WV]: West Virginia University, 1995.
Gray, Peter M. D., 1940-, ed. The Functional approach to data management: Modeling, analyzing, and integrating heterogeneous data. Berlin: Springer, 2004.
SpringerLink (Online service), ed. Specification and Analytical Evaluation of Heterogeneous Dynamic Quorum-Based Data Replication Schemes. Wiesbaden: Vieweg+Teubner Verlag, 2012.
Book chapters on the topic "Heterogenous programming"
Pinheiro, Anderson Boettge, Francisco Heron de Carvalho Junior, Neemias Gabriel Pena Batista Arruda, and Tiago Carneiro. "Fusion: Abstractions for Multicore/Manycore Heterogenous Parallel Programming Using GPUs". In Programming Languages, 109–23. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11863-5_8.
Dix, Jürgen. "A Computational Logic Approach to Heterogenous Agent Systems". In Logic Programming and Nonmonotonic Reasoning, 1–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45402-0_1.
Fluet, Matthew, Lars Bergstrom, Nic Ford, Mike Rainey, John Reppy, Adam Shaw, and Yingqi Xiao. "Programming in Manticore, a Heterogenous Parallel Functional Language". In Central European Functional Programming School, 94–145. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17685-2_4.
Wohlstadter, Eric A., and Premkumar T. Devanbu. "DADO: A Novel Programming Model for Distributed, Heterogenous, Late-Bound QoS Implementations". In On The Move to Meaningful Internet Systems 2003: OTM 2003 Workshops, 926–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39962-9_90.
Zhao, Hui, Meikang Qiu, Keke Gai, Jie Li, and Xin He. "Cost Reduction for Data Allocation in Heterogenous Cloud Computing Using Dynamic Programming". In Lecture Notes in Computer Science, 1–11. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52015-5_1.
Regaieg, Rym, Mohamed Koubàa, Evans Osei-Opoku, and Taoufik Aguili. "A Two Objective Linear Programming Model for VM Placement in Heterogenous Data Centers". In Ubiquitous Networking, 167–78. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-02849-7_15.
Fumero, J., C. Kotselidis, F. Zakkak, M. Papadimitriou, O. Akrivopoulos, C. Tselios, N. Kanakis et al. "Programming and Architecture Models". In Heterogeneous Computing Architectures, 53–87. Boca Raton: CRC Press, 2019. http://dx.doi.org/10.1201/9780429399602-3.
Castrillón Mazo, Jerónimo, and Rainer Leupers. "Introduction". In Programming Heterogeneous MPSoCs, 1–13. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-00675-8_1.
Castrillón Mazo, Jerónimo, and Rainer Leupers. "Background and Problem Definition". In Programming Heterogeneous MPSoCs, 15–52. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-00675-8_2.
Castrillón Mazo, Jerónimo, and Rainer Leupers. "Related Work". In Programming Heterogeneous MPSoCs, 53–72. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-00675-8_3.
Conference papers on the topic "Heterogenous programming"
Weeks, Michael. "Calq programming via a web-interface on heterogenous devices". In SOUTHEASTCON 2014. IEEE, 2014. http://dx.doi.org/10.1109/secon.2014.6950723.
Vogel-Heuser, Birgit, Sebastian Rehberger, Timo Frank, and Thomas Aicher. "Quality despite quantity — Teaching large heterogenous classes in C programming and fundamentals in computer science". In 2014 IEEE Global Engineering Education Conference (EDUCON). IEEE, 2014. http://dx.doi.org/10.1109/educon.2014.6826119.
Perez-Serrano, Antonio, Morten Andreas Geday, Xabier Quintana, and Francisco Jose Lopez Hernandez. "An Intensive Project Based Learning Experience in Programming and Electronics Involving Heterogenous Groups of Students with Different Backgrounds". In 15th International Technology, Education and Development Conference. IATED, 2021. http://dx.doi.org/10.21125/inted.2021.1243.
Wu, Jiaxin, and Pingfeng Wang. "Risk-Averse Optimization for Resilience Enhancement Under Uncertainty". In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22226.
Chen, Songhua, Wei Shao, Huiwen Sheng, and Hyung Kwak. "Use of Symbolic Regression for Developing Petrophysical Interpretation Models". In 2022 SPWLA 63rd Annual Symposium. Society of Petrophysicists and Well Log Analysts, 2022. http://dx.doi.org/10.30632/spwla-2022-0113.
Kunzman, David M., and Laxmikant V. Kale. "Programming Heterogeneous Systems". In Distributed Processing, Workshops and Phd Forum (IPDPSW). IEEE, 2011. http://dx.doi.org/10.1109/ipdps.2011.377.
"[Copyright notice]". In 2021 IEEE/ACM Programming Environments for Heterogeneous Computing (PEHC). IEEE, 2021. http://dx.doi.org/10.1109/pehc54839.2021.00002.
"Table of Contents". In 2021 IEEE/ACM Programming Environments for Heterogeneous Computing (PEHC). IEEE, 2021. http://dx.doi.org/10.1109/pehc54839.2021.00003.
"[Title page]". In 2021 IEEE/ACM Programming Environments for Heterogeneous Computing (PEHC). IEEE, 2021. http://dx.doi.org/10.1109/pehc54839.2021.00001.
Huang, Sitao, Kun Wu, Sai Rahul Chalamalasetti, Izzat El Hajj, Cong Xu, Paolo Faraboschi, and Deming Chen. "A Python-based High-Level Programming Flow for CPU-FPGA Heterogeneous Systems: (Invited Paper)". In 2021 IEEE/ACM Programming Environments for Heterogeneous Computing (PEHC). IEEE, 2021. http://dx.doi.org/10.1109/pehc54839.2021.00008.
Reports on the topic "Heterogenous programming"
Flower, J. W., and A. Kolawa. A Heterogeneous Parallel Programming Capability. Fort Belvoir, VA: Defense Technical Information Center, November 1990. http://dx.doi.org/10.21236/ada229710.
Labarta, Jesus J. Programming Models for Heterogeneous Multicore Systems. Fort Belvoir, VA: Defense Technical Information Center, August 2011. http://dx.doi.org/10.21236/ada550469.
Arabe, Jose N., Adam Beguelin, Bruce Lowekamp, and Erik Seligman. Dome: Parallel Programming in a Heterogeneous Multi-User Environment. Fort Belvoir, VA: Defense Technical Information Center, April 1995. http://dx.doi.org/10.21236/ada295491.
Knighton, Shane A. A Network-Based Mathematical Programming Approach to Optimal Rostering of Continuous Heterogeneous Workforces. Fort Belvoir, VA: Defense Technical Information Center, May 2005. http://dx.doi.org/10.21236/ada433267.
Chapman, Barbara. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes. Office of Scientific and Technical Information (OSTI), February 2012. http://dx.doi.org/10.2172/1051399.
Morkun, Vladimir S., Natalia V. Morkun, and Andrey V. Pikilnyak. Augmented reality as a tool for visualization of ultrasound propagation in heterogeneous media based on the k-space method. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3757.