Academic literature on the topic "Calcolo HTC"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Calcolo HTC".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Calcolo HTC"
Valliappan, Nachiappan, Fabian Ruch, and Carlos Tomé Cortiñas. "Normalization for Fitch-style modal calculi". Proceedings of the ACM on Programming Languages 6, ICFP (August 29, 2022): 772–98. http://dx.doi.org/10.1145/3547649.
B'Shary, I., C. Guimon, M. Grimaud, and G. Pfister-Guillouzo. "Étude de la flash thermolyse du méthyl-5 azido-2 thiadiazole-1,3,4 par spectroscopie photoélectronique (HeI) et calculs quantiques". Canadian Journal of Chemistry 66, no. 11 (November 1, 1988): 2830–34. http://dx.doi.org/10.1139/v88-438.
Alca-Clares, Raúl, Harold Tabori-Peinado, Armando Calvo-Quiroz, Alfredo Berrocal-Kasay, and Cesar Loza-Munarriz. "Manifestaciones musculo-esqueléticas en pacientes en hemodiálisis crónica". Revista Medica Herediana 24, no. 4 (December 19, 2013): 298. http://dx.doi.org/10.20453/rmh.v24i4.274.
Nolasco, Pedro, Paulo V. Coelho, Carla Coelho, David F. Angelo, J. R. Dias, Nuno M. Alves, António Maurício et al. "Mineralization of Sialoliths Investigated by Ex Vivo and In Vivo X-ray Computed Tomography". Microscopy and Microanalysis 25, no. 1 (February 2019): 151–63. http://dx.doi.org/10.1017/s1431927618016124.
Reyes Obando, Ana Lourdes, Carmen Valeria Pinto Romero, Andrea Giselle Banegas Pineda, Delcy Olivia Alberto Villanueva, José Daniel Hernández Vásquez, Hassel Denisse Ferrera Dubón, Luis Dariel Reyes Quezada et al. "ESTUDIO COMPARATIVO IN–VITRO DEL SELLADO APICAL DE TRES CEMENTOS ENDODÓNTICOS". Revista Científica de la Escuela Universitaria de las Ciencias de la Salud 4, no. 1 (January 17, 2019): 15–21. http://dx.doi.org/10.5377/rceucs.v4i1.7064.
Mello, Flavia Siqueira Furtado, José Milton de Castro Lima, Elodie Bomfim Hyppolito, Rodrigo Vieira Costa Lima, Flávio Esmeraldo Rolim, Cibele Silveira Pinho, and Jesus Irajacy Fernandes da Costa. "Comparação dos graus de fibrose hepática na hepatite C crônica (HCC) medidos por métodos de elastrografia e de sorologia: ARFI e Fibroscan vs APRI e FIB4". Revista de Medicina da UFC 60, no. 2 (June 23, 2020): 18–25. http://dx.doi.org/10.20513/2447-6595.2020v60n2p18-25.
Sanabria, Hernán, Carolina Tarqui, and Eduardo Zárate. "Expectativas y prioridades laborales de los estudiantes de la Facultad de Medicina en una universidad peruana". Anales de la Facultad de Medicina 73 (May 7, 2013): 58. http://dx.doi.org/10.15381/anales.v73i1.2248.
Imhoff, Matías, and Alfredo Trento. "Determinación de la rugosidad superficial y anchos de inundación en la planicie del río Salado (Santa Fe) para la crecida de 2003". Cuadernos del CURIHAM 18 (December 28, 2012): 52–61. http://dx.doi.org/10.35305/curiham.v18i0.49.
Álvarez Justel, Josefina. "Construcción y validación inicial de la escala de toma de decisiones de la carrera en secundaria (ETDC-S)". Electronic Journal of Research in Education Psychology 19, no. 55 (December 1, 2021): 605–24. http://dx.doi.org/10.25115/ejrep.v19i55.4322.
Tarqui Mamani, Carolina, and Daniel Quintana Atencio. "Desempeño laboral del profesional de enfermería en un hospital de la Seguridad Social del Callao – Perú". Archivos de Medicina (Manizales) 20, no. 1 (December 15, 2019): 123–32. http://dx.doi.org/10.30554/archmed.20.1.3372.2020.
Theses on the topic "Calcolo HTC"
Vinot, Emmanuel. "Modélisation des supraconducteurs HTC : applications au calcul des pertes AC". PhD thesis, Grenoble INPG, 2000. http://tel.archives-ouvertes.fr/tel-00689985.
Texto completoGIROTTO, IVAN. "Studio della Fisica delle Emulsioni tramite l'utilizzo di Calcolo ad Alte Prestazioni". Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2021. http://hdl.handle.net/11380/1251098.
Texto completoIn this project we employed highly optimized codes, based on the multicomponent Lattice Boltzmann model (LBM), to explore the physics of complex fluids in 3-dimensions. We first implemented an LBM based application which delivers good scaling performances on distributed systems while optimising the memory access through a data organisation that enables high computing efficiency. In particular, we first introduced and then deeply analysed, two new clustered data layouts which, enhancing compiler vectorizazion, demonstrated to deliver high-performance on modern x86-64 CPUs, if compared with legacy data layouts typically adopted for LBM based codes such as arrays of structures (AoS) or structures of arrays (SoA). This work aided the award of two PRACE projects for approximately hundreds of millions of core-hours distributed among two major European Tier-0 systems for high-performance computing such as the Marconi at CINECA and the MareNostrum at the Barcelona Supercomputing Centre (BSC). We performed a detailed analysis of the computing performance and energy efficiency on both the CPU systems which equipped those supercomputers: the Intel KNL and the more recent Intel Skylake processor, respectively. In the ultimate stage of the project we also extended the implemented model to run on multi-GPU distributed systems such as the Marconi-100 at CINECA. We implemented and validated the well-established Shan-Chen multicomponent LBM with second neighbour coupling. This allows to model the dynamics of two immiscible fluids characterized by a surface tension as well as by a disjoing pressure between them. The emulsion is stirred via a large scale forcing mimicking a classical stirring often used in spectral simulation of turbulent flows. With the implemented numerical models, we started to explore the physics of complex fluid emulsions: from the phase of turbulent stirring where the emulsion is produced, to the resting phase where the resulting emulsion is in jammed state. In particular, we performed several simulations to achieve a first qualitative measurements on the morphology of the system (i.e., number of droplets, average volume of the droplets, average surface, PDFs of volume and surface) as well as some initial estimation of the energy. We made the analysis at different volume fractions and by pushing the dispersed phase up to about 80%, limit reported by experiments. We observed how the resulting highly-packed emulsions bring up rich phenomenology showing non-spherical droplets, and while presenting feature of a solid in resting phase but still flowing as a fluid if subjected to a forcing. We have analysed the behaviour of the system looking at both, the influence of the flow on the morphology, by stirring at different forcing amplitudes, and the influence of morphology on the flow, by performing Kolmogorov rheology tests on jammed emulsions at different volume fractions. Emulsions are remarkable systems presenting an extremely interesting phenomenology but at the same time being really fragile. Indeed, we have experimented the difficulties of finding the equilibrium between the rate of pushing higher volume fraction and the correct stirring amplitude to achieve turbulence without facing the problem of catastrophic phase inversion. 
In the second part of the project we engineered and added to the implemented LBM-based code a method for tracking all droplets present in a 3-dimensional emulsion at high resolution, obtaining a Lagrangian profile of all droplets in the dispersed phase of the emulsion, both when exposed to large-scale stirring and when the forcing is turned off.
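The data-layout comparison at the heart of the first part of this thesis lends itself to a short illustration. The C sketch below contrasts the legacy AoS and SoA layouts with a clustered layout of the AoSoA family; the type names, the D3Q19 population count, and the cluster width VL are illustrative assumptions, not the thesis's actual code.

```c
#include <stddef.h>

#define NPOP 19      /* D3Q19 lattice: 19 populations per site (assumption) */
#define NSITES 1024  /* number of lattice sites in this toy example */
#define VL 8         /* cluster width, e.g. one AVX-512 vector of doubles */

/* Array of Structures (AoS): the populations of one site are contiguous.
   Simple, but sweeping one population index gives strided access. */
typedef struct { double f[NPOP]; } SiteAoS;
SiteAoS lattice_aos[NSITES];

/* Structure of Arrays (SoA): each population is stored contiguously.
   Streams well, but the NPOP arrays compete for prefetch/TLB streams. */
typedef struct { double f[NPOP][NSITES]; } LatticeSoA;
LatticeSoA lattice_soa;

/* Clustered layout (AoSoA): sites grouped in vector-sized clusters, each
   population contiguous inside a cluster: unit-stride vector loads while
   keeping all populations of a cluster close together in memory. */
typedef struct { double f[NPOP][VL]; } Cluster;
Cluster lattice_csoa[NSITES / VL];

/* Accessors: the same logical element (site i, population p) in each layout. */
static inline double get_aos(size_t i, size_t p)  { return lattice_aos[i].f[p]; }
static inline double get_soa(size_t i, size_t p)  { return lattice_soa.f[p][i]; }
static inline double get_csoa(size_t i, size_t p) { return lattice_csoa[i / VL].f[p][i % VL]; }

int main(void)
{
    /* Touch the same logical element through all three layouts. */
    lattice_aos[5].f[3] = 1.0;
    lattice_soa.f[3][5] = 1.0;
    lattice_csoa[5 / VL].f[3][5 % VL] = 1.0;
    return (get_aos(5, 3) + get_soa(5, 3) + get_csoa(5, 3) == 3.0) ? 0 : 1;
}
```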
Masini, Filippo. "Coca Cola HBC Italia: Modello per il calcolo di inventory stock target e production cycles ottimali". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/8060/.
Texto completoCapra, Antoine. "Virtualisation en contexte HPC". Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0436/document.
To meet the growing needs of numerical simulation and remain at the forefront of technology, supercomputers must be constantly improved. These improvements may concern hardware or software. This forces applications to adapt to a new programming environment throughout their development, raising the question of the sustainability of applications and of their portability from one machine to another. The use of virtual machines can be a first answer to this need, by stabilizing programming environments. With virtualization, applications can be developed in a fixed environment, without being directly impacted by the environment currently deployed on the physical machine. However, the additional abstraction introduced by virtual machines in practice leads to a loss of performance. In this thesis we propose a set of tools and techniques to enable the use of virtual machines in an HPC context. First, we show that it is possible to optimize the operation of a hypervisor to respond precisely to the constraints of HPC, namely the placement of execution threads and the locality of memory data. Then, building on this, we propose a service for partitioning the resources of a compute node through virtual machines. Finally, to extend our work to MPI applications, we studied network solutions and the performance of virtual machines.
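The first HPC constraint named above, thread placement, can be sketched independently of any hypervisor with the standard Linux affinity API. The snippet below only demonstrates the mechanism a hypervisor must preserve when mapping virtual CPUs onto physical cores; the choice of core 0 is an arbitrary assumption, and this is not the KVM-level implementation of the thesis.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to a single core: the kind of deterministic
   placement HPC runtimes rely on, and that virtualization must not break. */
static int pin_self_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    if (pin_self_to_core(0) != 0) {
        perror("pthread_setaffinity_np");
        return 1;
    }
    printf("running pinned on core %d\n", sched_getcpu());
    return 0;
}
```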
Chatelain, Yohan. "Outils de débogage et d'optimisation des calculs flottants dans le contexte HPC". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV096.
High Performance Computing (HPC) is a dynamic ecosystem where scientific computing architectures and codes are in permanent co-evolution (parallelism, specialized accelerators, new memories). This dynamism requires developers to adapt their software regularly to exploit each new technological innovation. For this purpose, co-design approaches, consisting of developing software and hardware simultaneously, are an interesting avenue. Nevertheless, co-design efforts have mainly focused on application performance without necessarily taking numerical quality into account. Yet this is becoming increasingly difficult to maintain from one generation of supercomputer to the next due to the increased complexity of the hardware and of the parallel programming models. In addition, new floating-point computation formats (bfloat16, binary16) should be harnessed during the modernization process. These findings raise two issues: 1) How to check the numerical quality of codes during the modernization process? This requires tools that make it possible to quickly identify sources of numerical error and that are user-friendly for non-expert users. 2) How can we take advantage of the new possibilities offered by the hardware? The application possibilities are manifold and therefore lead to a considerable space of possible solutions. The solutions found are the result of a compromise between the performance of the application and the numerical quality of the computations, but also the reproducibility of the results. In this thesis, we contributed to the Verificarlo software, which helps detect numerical errors by injecting various noise models into floating-point computations. More precisely, we developed an approach to study the evolution of numerical errors over time. This tool is based on the generation of numerical traces that allow the numerical quality of the variables to be tracked over time. These traces are enriched with context information retrieved during compilation and can then be viewed in a convenient way. We also contributed to VPREC, a computation model simulating formats of varying sizes. This tool was used to address the problem of format optimization in iterative schemes. The proposed optimization is temporal, since it optimizes the computation precision for each time step. Finally, a major constraint in the development of tools for HPC is scaling up. Indeed, the size of the codes and the number of computations involved drastically increase the complexity of the analyses and limit conventional approaches. We demonstrated that the techniques developed in this thesis are applicable to industrial codes: first, they made it possible to detect and correct a numerical error in the ABINIT code (an ab initio code for quantum chemistry developed by the CEA et al.); second, these tools reduced the computation precision of YALES2 (a fluid mechanics code developed by CORIA) and improved performance by reducing communication volumes by 28% and accelerating execution by up to 1.30 times.
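The noise-injection idea behind Verificarlo can be conveyed with a minimal sketch: perturb every floating-point operand at a chosen virtual precision, repeat the computation, and read the surviving significant digits off the spread of the samples. The uniform perturbation model and the cancellation example below are simplifying assumptions, not the tool's actual MCA backends.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified Monte Carlo arithmetic: perturb x with uniform relative noise
   of magnitude 2^-t, mimicking a virtual precision of t bits. */
static double mca_noise(double x, int t)
{
    double u = (double)rand() / RAND_MAX - 0.5; /* uniform in [-0.5, 0.5) */
    return x * (1.0 + u * ldexp(1.0, -t));
}

int main(void)
{
    /* A classically ill-conditioned difference: catastrophic cancellation. */
    const double a = 1.0 + 1e-12, b = 1.0;
    enum { N = 1000 };
    double sum = 0.0, sum2 = 0.0;

    for (int i = 0; i < N; i++) {
        double r = mca_noise(a, 53) - mca_noise(b, 53);
        sum += r;
        sum2 += r * r;
    }
    double mean = sum / N;
    double var = sum2 / N - mean * mean;
    double sigma = sqrt(var > 0.0 ? var : 0.0);
    /* Estimated significant decimal digits surviving in the result. */
    printf("mean=%.3e sigma=%.3e surviving digits ~ %.1f\n",
           mean, sigma, sigma > 0.0 ? -log10(sigma / fabs(mean)) : 15.9);
    return 0;
}
```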
Magnani, Simone. "Analisi delle prestazioni del sistema grafico VideoCore IV applicato al calcolo generico". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19100/.
Texto completoPourroy, Jean. "Calcul Haute Performance : Caractérisation d’architectures et optimisation d’applications pour les futures générations de supercalculateurs". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM028.
Information systems and High-Performance Computing (HPC) infrastructures play an active role in the improvement of scientific knowledge and the evolution of our societies. The field of HPC is expanding rapidly, and users need increasingly powerful architectures to analyze the tsunami of data (numerical simulations, IoT), to make more complex decisions (artificial intelligence), and to make them faster (connected cars, weather forecasting). In this thesis work, we discuss several challenges (power consumption, cost, complexity) for the development of the new generation of Exascale supercomputers. Since industrial applications rarely achieve more than 10% of the theoretical peak performance, we show the need to rethink platform architecture, in particular by using energy-optimized architectures. We then present some of the emerging technologies that will allow their development: 3D memories (HBM), Storage Class Memory (SCM), and photonic interconnection technologies. These new technologies, associated with a new communication protocol (Gen-Z), will help execute the different parts of an application optimally. However, in the absence of a method for finely characterizing code performance, these emerging architectures are potentially doomed, since few experts know how to exploit them. Our contribution consists in the development of benchmarks and performance analysis tools. The first aim is to finely characterize specific parts of the microarchitecture. Two microbenchmarks were thus developed to characterize the memory system and the floating-point unit (FPU). The second family of tools is used to study the performance of applications. A first tool makes it possible to monitor the traffic on the memory bus, a critical resource of modern architectures. A second tool can be used to profile applications by extracting and characterizing critical loops (hot spots). To take advantage of the heterogeneity of platforms, we propose a five-step methodology to identify and characterize these new platforms, model the performance of an application, and finally port its code to the selected architecture. Finally, we show how these tools can help developers extract the maximum performance from an architecture. By providing our tools as open source, we want to make users aware of this approach and develop a community around performance characterization and analysis work.
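A minimal example of the kind of memory-system microbenchmark described here is a STREAM-style triad, which probes sustainable bandwidth on the memory bus, the critical resource monitored by the first tool. Array size, repetition count, and the bytes-moved accounting below are illustrative assumptions rather than the thesis's actual benchmark.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* STREAM-style triad a[i] = b[i] + s*c[i]: a standard probe of sustainable
   memory bandwidth. N must exceed the last-level cache so that the traffic
   really reaches DRAM (64 MiB per array here). */
enum { REPS = 20 };
#define N (1L << 23)

int main(void)
{
    double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b), *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++)
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    /* 3 arrays * 8 bytes moved per iteration (2 reads + 1 write). */
    double gib = (double)REPS * N * 3 * sizeof(double) / sec / (1 << 30);
    /* Printing a[N-1] keeps the loop live under aggressive optimization. */
    printf("triad: %.2f GiB/s (check %g)\n", gib, a[N - 1]);
    free(a); free(b); free(c);
    return 0;
}
```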
Bruned, Vianney. "Analyse statistique et interprétation automatique de données diagraphiques pétrolières différées à l’aide du calcul haute performance". Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS064.
In this thesis, we investigate the automation of the identification and characterization of geological strata using well logs. For a single well, geological strata are determined through the segmentation of the logs, which are comparable to multivariate time series. The identification of strata across different wells of the same field requires correlation methods for time series. We propose a new global method of well correlation using multiple sequence alignment algorithms from bioinformatics. The determination of the mineralogical composition and the percentage of fluids inside a geological stratum leads to an ill-posed inverse problem. Current methods are based on experts' choices: the selection of a subset of minerals for a given stratum. Because the model has a non-computable likelihood, an approximate Bayesian computation (ABC) method, assisted by a density-based clustering algorithm, can characterize the mineral composition of the geological layer. The classification step is necessary to deal with the identifiability issue of the minerals. Finally, the workflow is tested on a case study.
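The ABC step admits a compact illustration via textbook rejection sampling: draw a parameter from the prior, run the forward model whose likelihood is not computable, and keep the draw when the simulated output falls within a tolerance of the observation. The one-parameter toy forward model below merely stands in for the mineralogical inversion; the prior, the tolerance, and the model are all assumptions.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy forward model standing in for the response of a stratum: a
   deterministic part plus noise. The real model maps a mineral/fluid
   composition to several log measurements. */
static double simulate(double theta)
{
    double noise = ((double)rand() / RAND_MAX - 0.5) * 0.2;
    return theta * theta + noise;
}

int main(void)
{
    const double observed = 2.0, eps = 0.05;
    enum { DRAWS = 100000 };
    double accepted_sum = 0.0;
    int accepted = 0;

    for (int i = 0; i < DRAWS; i++) {
        double theta = (double)rand() / RAND_MAX * 3.0; /* uniform prior on [0,3] */
        if (fabs(simulate(theta) - observed) < eps) {   /* ABC rejection step */
            accepted_sum += theta;
            accepted++;
        }
    }
    if (accepted)
        printf("posterior mean ~ %.3f from %d accepted draws\n",
               accepted_sum / accepted, accepted);
    return 0;
}
```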
Honore, Valentin. "Convergence HPC - Big Data : Gestion de différentes catégories d'applications sur des infrastructures HPC". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0145.
Numerical simulations are complex programs that allow scientists to solve, simulate, and model complex phenomena. High Performance Computing (HPC) is the domain in which these complex and heavy computations are performed on large-scale computers, also called supercomputers. Nowadays, most scientific fields need supercomputers to undertake their research; it is the case of cosmology, physics, biology, and chemistry. Recently, we have observed a convergence between Big Data/Machine Learning and HPC. Applications coming from these emerging fields (for example, those using Deep Learning frameworks) are becoming highly compute-intensive. Hence, HPC facilities have emerged as an appropriate solution to run such applications. The large variety of existing applications imposes a necessity on all supercomputers: they must be generic and compatible with all kinds of applications. Computing nodes, too, come in a wide variety, ranging from CPUs to GPUs, with specific nodes designed to perform dedicated computations. Each category of node is designed to perform very fast operations of a given type (for example, vector or matrix computation). Supercomputers are used in a competitive environment. Indeed, multiple users simultaneously connect and request a set of computing resources to run their applications. This competition for resources is managed by the machine itself via a specific program called the scheduler. This program reviews, assigns, and maps the different user requests. Each user asks for (that is, pays for the use of) access to the resources of the supercomputer in order to run his application. The user is granted access to some resources for a limited amount of time. This means that users need to estimate how many compute nodes they want to request and for how long, which is often difficult to decide. In this thesis, we provide solutions and strategies to tackle these issues. We propose mathematical models, scheduling algorithms, and resource partitioning strategies in order to optimize high-throughput applications running on supercomputers. In this work, we focus on two types of applications in the context of the HPC/Big Data convergence: data-intensive and irregular (or stochastic) applications. Data-intensive applications represent typical HPC workflows. These applications are made up of two main components. The first one is called the simulation, a very compute-intensive code that generates a tremendous amount of data by simulating a physical or biological phenomenon. The second component is called the analytics, during which sub-routines post-process the simulation output to extract, generate, and save the final result of the application. We propose to optimize these applications by designing automatic resource partitioning and scheduling strategies for both components. To do so, we use the well-known in situ paradigm, which consists in scheduling both components together in order to reduce the huge cost of saving all simulation data on disk. We propose automatic resource partitioning models and scheduling heuristics to improve the overall performance of in situ applications. Stochastic applications are applications for which the execution time depends on the input, whereas in usual data-intensive applications the makespans of the simulation and the analytics are not affected by such parameters. Stochastic jobs originate from Big Data or Machine Learning workloads, whose performance is highly dependent on the characteristics of the input data. These applications have recently appeared on HPC platforms.
However, the uncertainty of their execution time remains a strong limitation when using supercomputers. Indeed, the user needs to estimate how long his job will run on the machine and enters this estimate as his first reservation value. But if the job does not complete successfully within this first reservation, the user has to resubmit it, this time requesting a longer reservation.
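The reservation trade-off described in this abstract can be made concrete with a small cost model: given a discrete distribution of job runtimes and an increasing sequence of reservation lengths, the expected cost charges every attempted reservation until one covers the actual runtime. This pay-for-the-full-reservation cost function is one common assumption, not necessarily the one analyzed in the thesis.

```c
#include <stdio.h>

/* Expected cost of an increasing sequence of reservations t[0] < t[1] < ...
   for a job whose runtime follows a discrete distribution (runtime[j] with
   probability prob[j]). Cost model: each attempted reservation is paid in
   full, and attempts continue until the reservation covers the runtime. */
static double expected_cost(const double *t, int nres,
                            const double *runtime, const double *prob, int nrt)
{
    double total = 0.0;
    for (int j = 0; j < nrt; j++) {
        double paid = 0.0;
        for (int k = 0; k < nres; k++) {
            paid += t[k];              /* reservation k is attempted and paid */
            if (t[k] >= runtime[j])    /* long enough: the job completes */
                break;
        }
        total += prob[j] * paid;
    }
    return total;
}

int main(void)
{
    /* Illustrative bimodal runtime distribution, in hours. */
    const double runtime[] = { 2.0, 10.0 };
    const double prob[]    = { 0.7, 0.3 };
    const double seq_a[] = { 10.0 };       /* one conservative reservation */
    const double seq_b[] = { 2.0, 10.0 };  /* short first, resubmit if needed */

    printf("always reserve 10h: %.2f h paid on average\n",
           expected_cost(seq_a, 1, runtime, prob, 2));
    printf("try 2h then 10h:    %.2f h paid on average\n",
           expected_cost(seq_b, 2, runtime, prob, 2));
    return 0;
}
```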
Colin de Verdière, Guillaume. "A la recherche de la haute performance pour les codes de calcul et la visualisation scientifique". Thesis, Reims, 2019. http://www.theses.fr/2019REIMS012/document.
This thesis aims to demonstrate that algorithms and coding, in a high performance computing (HPC) context, cannot be envisioned without taking into account the hardware at the core of supercomputers, since those machines evolve dramatically over time. After setting out a few definitions relating to scientific codes and parallelism, we show that the analysis of the different generations of supercomputers used at CEA over the past 30 years brings out a number of attention points and best practices for code developers. Based on a series of experiments, we show how to aim at code performance suited to the usage of supercomputers, how to pursue portable performance, and possibly extreme performance, in the world of massive parallelism, potentially using GPUs. We explain that graphical post-processing software and hardware follow the same parallelism principles as large scientific codes, requiring one to master a global view of the simulation chain. Last, we describe the trends and constraints that will be imposed on the new generations of exaflopic-class supercomputers. These evolutions will, yet again, impact the development of the next generations of scientific codes.
Books on the topic "Calcolo HTC"
Reddy, G. Ram Mohana, and Kiran M. Mobile Ad Hoc Networks. Taylor & Francis Group, 2020.
Yenké, Blaise Omer. Sauvegarde en parallèle d'applications HPC: Ordonnancement des sauvegardes/reprises d'applications de calcul haute performance dans les environnements dynamiques. Omniscriptum, 2011.
Buscar texto completoAndruchow, Marcela. El patrimonio plástico de la Facultad de Artes. Teseo, 2022. http://dx.doi.org/10.55778/ts878834498.
Texto completoReddy, G. Ram Mohana y Kiran M. Mobile Ad Hoc Networks: Bio-Inspired Quality of Service Aware Routing Protocols. Taylor & Francis Group, 2016.
Conference papers on the topic "Calcolo HTC"
Lee, Ho, Jeehyun Kim, Bernard Choi, Joel M. H. Teichman, and A. J. Welch. "High-Speed Photographic Evaluation of Retropulsion Momentum Induced by a Laser Calculi Lithotriptor". In ASME 2001 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/imece2001/htd-24429.
Jee, Anand, Shanidul Hoque, and Wasim Arif. "Analysis of non completion probability for cognitive radio ad hoc networks". In 2017 IEEE Calcutta Conference (CALCON). IEEE, 2017. http://dx.doi.org/10.1109/calcon.2017.8280700.
Texto completoSoto, Julio, Saulo Queiroz y Michele Nogueira. "Um Esquema Cooperativo para Análise da Presença de Ataques EUP em Redes Ad Hoc de Rádio Cognitivo". En Simpósio Brasileiro de Segurança da Informação e de Sistemas Computacionais. Sociedade Brasileira de Computação - SBC, 2012. http://dx.doi.org/10.5753/sbseg.2012.20544.