A selection of scholarly literature on the topic "High performances calculus"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of relevant articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "High performances calculus".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, provided the corresponding data are present in the record's metadata.

Journal articles on the topic "High performances calculus":

1

Nicola, Marcel, and Claudiu-Ionel Nicola. "Sensorless Fractional Order Control of PMSM Based on Synergetic and Sliding Mode Controllers." Electronics 9, no. 9 (September 11, 2020): 1494. http://dx.doi.org/10.3390/electronics9091494.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The field-oriented control (FOC) strategy of the permanent magnet synchronous motor (PMSM) offers all the advantages deriving from the simplicity of PI-type controllers, but its control performance is inherently limited by the nonlinear model of the PMSM, by the need for wide-range, high-dynamics speed and load-torque control, and by parametric uncertainties, which arise especially from variations of the combined rotor-load moment of inertia and of the load resistance. Based on fractional calculus for the integration and differentiation operators, this article presents a number of fractional-order (FO) controllers for the PMSM rotor speed control loop and the id and iq current control loops of the FOC strategy. The main contribution is a PMSM control structure in which the controller of the outer rotor-speed loop is of FO sliding mode control (FO-SMC) type, and the controllers of the inner id and iq current loops are of FO-synergetic type. The proposed control system achieves superior performance even in the presence of parametric variations. Its performance is validated both by numerical simulations and experimentally, through real-time implementation in embedded systems.
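For background, the fractional-order operators mentioned here are usually packaged as a PI^λD^μ controller, which generalizes the integer-order PID transfer function (a textbook form, not the exact structure proposed in the paper):

\[
C(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d\, s^{\mu}, \qquad \lambda,\mu \in (0,2),
\]

which reduces to the classical PID for \(\lambda=\mu=1\); the two extra exponents are the additional tuning knobs that fractional-order designs such as FO-SMC and FO-synergetic control can exploit.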
2

Bârsan, Ghiţă, Silviu Mihai Petrişor, and Luminiţa Giurgiu. "Validation of the Mathematical and Numerical Models for Artillery Barrels Autofrettage Based on Hydrostatic Procedure." Applied Mechanics and Materials 186 (June 2012): 58–69. http://dx.doi.org/10.4028/www.scientific.net/amm.186.58.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Research on advanced gun barrels focuses on materials, or on combinations of advanced materials and innovative processes, that increase the life cycle and performance of cannons of all calibers. In addition to the investigation of new materials, considerable effort has gone into developing new techniques. The paper describes a theoretical framework, validated by experimental tests, for increasing the mechanical properties of thick-walled tubes subjected to high interior pressure loads. The theoretical part establishes a mathematical model for the nonlinear regime arising in the self-hooping, or autofrettage, of thick-walled tubes. The mathematical model was validated by experimental tests performed in the Mechanical Engineering Laboratory of the Military Technical Academy in Bucharest on standard tension test specimens collected from an abutment barrel made of alloyed steel. Finally, the paper introduces some theoretical guidelines for the hydrostatic procedure in artillery barrel manufacturing, as well as experimental data obtained after applying the autofrettage procedure.
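For context, models of this kind build on the classical Lamé solution for a thick-walled cylinder of inner radius a and outer radius b under internal pressure p (a standard elasticity result, not the paper's full nonlinear model):

\[
\sigma_r(r) = \frac{p\,a^{2}}{b^{2}-a^{2}}\left(1-\frac{b^{2}}{r^{2}}\right), \qquad
\sigma_\theta(r) = \frac{p\,a^{2}}{b^{2}-a^{2}}\left(1+\frac{b^{2}}{r^{2}}\right).
\]

Autofrettage chooses p high enough that the material near the bore yields, so that unloading leaves compressive residual hoop stresses there, which is what extends barrel life.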
3

Tenreiro Machado, José A., and António M. Lopes. "Fractional-order kinematic analysis of biomechanical inspired manipulators." Journal of Vibration and Control 26, no. 1-2 (October 16, 2019): 102–11. http://dx.doi.org/10.1177/1077546319877703.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Present-day mechanical manipulators reveal limited performances when compared with the human arm. Joint-driven manipulators are sub-optimal due to the high actuator requirements imposed by the transients of the operational space tasks. Muscle-actuated arms are superior because the anatomic structures adapt the task requirements to the driving linear actuators. However, the advantages of muscle actuation are difficult to unravel using the standard integer-order kinematics based on the integer derivatives, namely the positions, velocities and accelerations. This paper investigates the human arm and evaluates the influence of biomechanics upon the driving actuators by means of a new method of kinematic analysis and visualisation. The proposed method uses the tools of fractional calculus for computing the continuous propagation of the signals between positions and accelerations. The behaviour of the variables is compared in the joint and muscle spaces, using both the kinematics in the time domain and the describing function method. In this line of thought, the classical integer-order kinematics, with three discrete levels of visualisation, is generalised to a continuous description represented by the fractional-order kinematics.
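The "continuous propagation between positions and accelerations" rests on derivatives of non-integer order α; one standard definition (Grünwald–Letnikov, quoted here only as background) is

\[
D^{\alpha} x(t) = \lim_{h\to 0}\; \frac{1}{h^{\alpha}} \sum_{k=0}^{\infty} (-1)^{k} \binom{\alpha}{k}\, x(t-kh),
\]

so that α = 0, 1, 2 recover position, velocity and acceleration, while intermediate values of α interpolate continuously between these three classical levels of description.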
4

Wang, Jin, Xiancong Wu, Qiang Li, and Jian Zhao. "First Order Plus Dead Time (FOPDT) Model Approximation and Proportional-Integral-Derivative Controllers Tuning for Multi-Volume Process." Journal of Nanoelectronics and Optoelectronics 17, no. 3 (March 1, 2022): 474–88. http://dx.doi.org/10.1166/jno.2022.3223.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Analysis and controller design based on the FOPDT model are widely used because of their simplicity and convenience. This paper studies the approximation of a multi-volume process (MVP) by a first order plus dead time (FOPDT) model, together with the tuning of its proportional-integral-derivative (PID) controller parameters. These problems have been studied heavily in recent years; the methods developed for optimal design rely on including several robust performance specifications in the objective function, exhibit fast convergence, and specify a desired closed-loop transfer function. A particle swarm optimization (PSO) algorithm minimizing the integral of time-weighted absolute error (ITAE) performance index is presented to determine the approximate FOPDT model coefficients of MVP processes of order two to fifteen. The approximate FOPDT model is then used to design the PID controller that controls the MVP. A large number of tuning methods are used to analyze and compare the closed-loop control performance. Two simulation examples illustrate the superiority and effectiveness of PID controller design based on the proposed model reduction method. The simulation results show that the reduced-order controller can control a high-order system well, but the order-reduction process is complicated and requires a long computation time. The FOPID controller is a generalization of the conventional PID controller based on fractional-order calculus. A new, more effective method for approximating an MVP by an FOPDT model is presented in this paper.
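The PSO-plus-ITAE fitting loop described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the multi-volume process is modelled as identical unit lags in series, the swarm is minimal, and the parameter bounds are assumptions.

```python
# Fit an FOPDT model K*exp(-L*s)/(T*s+1) to the step response of a
# multi-volume process (n first-order lags in series) by minimizing
# the ITAE criterion with a minimal particle swarm optimizer.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 30.0, 1500)
dt = t[1] - t[0]

def mvp_step(n=5):
    """Step response of 1/(s+1)^n via simple Euler integration of the cascade."""
    x = np.zeros(n)
    out = np.empty_like(t)
    for i in range(len(t)):
        u = 1.0                      # unit step input
        for j in range(n):
            x[j] += dt * (u - x[j])  # dx_j/dt = u_j - x_j
            u = x[j]
        out[i] = x[-1]
    return out

def fopdt_step(K, T, L):
    """Analytic step response of K*exp(-L*s)/(T*s+1)."""
    return np.where(t >= L, K * (1.0 - np.exp(-(t - L) / max(T, 1e-6))), 0.0)

y_ref = mvp_step()

def itae(p):
    K, T, L = p
    return np.sum(t * np.abs(y_ref - fopdt_step(K, T, L))) * dt

# --- minimal PSO over (K, T, L), bounds are assumed for the toy example ---
lo, hi = np.array([0.5, 0.5, 0.0]), np.array([2.0, 10.0, 10.0])
pos = rng.uniform(lo, hi, size=(30, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([itae(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(60):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([itae(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("Fitted FOPDT (K, T, L):", np.round(gbest, 3))
```

The fitted (K, T, L) triple can then be fed to any standard FOPDT-based PID tuning rule for comparison, which is the workflow the abstract describes.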
5

Wang, Jin, Xiancong Wu, Qiang Li, and Jian Zhao. "First Order Plus Dead Time Model Approximation and Proportional-Integral-Derivative Controllers Tuning for Multi-Volume Process." Journal of Nanoelectronics and Optoelectronics 17, no. 5 (May 1, 2022): 794–808. http://dx.doi.org/10.1166/jno.2022.3253.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Analysis and controller design based on the FOPDT model are widely used because of their simplicity and convenience. This paper studies the approximation of a multi-volume process (MVP) by a first order plus dead time (FOPDT) model, together with the tuning of its proportional-integral-derivative (PID) controller parameters. These problems have been studied heavily in recent years; the methods developed for optimal design rely on including several robust performance specifications in the objective function, exhibit fast convergence, and specify a desired closed-loop transfer function. A particle swarm optimization (PSO) algorithm minimizing the integral of time-weighted absolute error (ITAE) performance index is presented to determine the approximate FOPDT model coefficients of MVP processes of order two to fifteen. The approximate FOPDT model is then used to design the PID controller that controls the MVP. A large number of tuning methods are used to analyze and compare the closed-loop control performance. Two simulation examples illustrate the superiority and effectiveness of PID controller design based on the proposed model reduction method. The simulation results show that the reduced-order controller can control a high-order system well, but the order-reduction process is complicated and requires a long computation time. The FOPID controller is a generalization of the conventional PID controller based on fractional-order calculus. A new, more effective method for approximating an MVP by an FOPDT model is presented in this paper.
6

Niederle, Muriel, and Lise Vesterlund. "Explaining the Gender Gap in Math Test Scores: The Role of Competition." Journal of Economic Perspectives 24, no. 2 (May 1, 2010): 129–44. http://dx.doi.org/10.1257/jep.24.2.129.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The mean and standard deviation in performance on math test scores are only slightly larger for males than for females. Despite minor differences in mean performance, many more boys than girls perform at the right tail of the distribution. This gender gap has been documented for a series of math tests including the AP calculus test, the mathematics SAT, and the quantitative portion of the Graduate Record Exam (GRE). The objective of this paper is not to discuss whether the mathematical skills of males and females differ, be it a result of nurture or nature. Rather we argue that the reported test scores do not necessarily match the gender differences in math skills. We will present results that suggest that the evidence of a large gender gap in mathematics performance at high percentiles in part may be explained by the differential manner in which men and women respond to competitive test-taking environments. The effects in mixed-sex settings range from women failing to perform well in competitions, to women shying away from environments in which they have to compete. We find that the response to competition differs for men and women, and in the examined environment, gender difference in competitive performance does not reflect the difference in noncompetitive performance. We argue that the competitive pressures associated with test taking may result in performances that do not reflect those of less-competitive settings. Of particular concern is that the distortion is likely to vary by gender and that it may cause gender differences in performance to be particularly large in mathematics and for the right tail of the performance distribution. Thus the gender gap in math test scores may exaggerate the math advantage of males over females.
7

Chiu, Singa Wang, Liang-Wei You, Tsu-Ming Yeh, and Tiffany Chiu. "The Collective Influence of Component Commonality, Adjustable-Rate, Postponement, and Rework on Multi-Item Manufacturing Decision." Mathematics 8, no. 9 (September 11, 2020): 1570. http://dx.doi.org/10.3390/math8091570.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The present study explores the collective influence of component commonality, adjustable-rate, postponement, and rework on the multi-item manufacturing decision. In contemporary markets, customer demand trends point to fast-response, high-quality, and diversified merchandise. Hence, to meet customer expectations, modern manufacturers must plan their multiproduct fabrication schedule in the most efficient and cost-saving way, especially when product commonality exists in a series of end products. To respond to the above viewpoints, we propose a two-stage multiproduct manufacturing scheme, featuring an adjustable fabrication rate in stage one for all needed common parts, and manufacturing diversified finished goods in stage two. The rework processes are used in both stages to repair the inevitable, nonconforming items and ensure the desired product quality. We derive the cost-minimized rotation cycle decision through modeling, formulation, cost analysis, and differential calculus. Using a numerical illustration, we reveal the collective and individual influence of adjustable-rate, rework, and postponement strategies on diverse critical system performances (such as uptime of the common part and/or end products, utilization, individual cost factor, and total system cost). Our decision-support model offers in-depth managerial insights for manufacturing and operations planning in a wide variety of contemporary industries, such as household merchandise, clothing, and automotive.
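The "differential calculus" step referred to above is the classical one for rotation-cycle models; as a simplified single-product analogue (the paper's two-stage model with commonality, rework and an adjustable rate is considerably richer), the cost rate K/T + hλT/2 is minimized by setting its derivative to zero:

\[
\frac{d}{dT}\left(\frac{K}{T} + \frac{h\,\lambda\,T}{2}\right) = -\frac{K}{T^{2}} + \frac{h\,\lambda}{2} = 0
\quad\Longrightarrow\quad
T^{*} = \sqrt{\frac{2K}{h\,\lambda}},
\]

where K is the setup cost per cycle, h the unit holding cost and λ the demand rate; the second derivative 2K/T^{3} > 0 confirms that T* is a minimum.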
8

Padernal, Rogie E., and Crispina V. Diego. "Academic Performance of Senior High School Students in Pre-Calculus." Philippine Social Science Journal 3, no. 2 (November 12, 2020): 69–70. http://dx.doi.org/10.52006/main.v3i2.185.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Knowledge in Pre-Calculus depends on students' understanding of Algebra and Trigonometry. The results of the Programme for International Student Assessment (PISA) in 2018 disclosed that the Philippines ranked second-lowest in the Mathematics assessment and indicated low performance in advanced subjects such as Calculus. Hence, the paper described the level of academic performance of senior high school students in a maritime school in Bacolod City during the school year 2019-2020. Likewise, it aimed to determine the relationship between the students' demographics and their level of academic performance in Pre-Calculus. Furthermore, it tested the correlation and predictive capability of the school of origin and entrance examination scores on the academic performance of students in Pre-Calculus.
9

King, Nancy T. "Calculus & Technology: Calculus Reform for High School Teachers." Journal of Educational Technology Systems 23, no. 2 (December 1994): 183–95. http://dx.doi.org/10.2190/5q4b-p4db-m7n8-rxw3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Technology has changed our lives. It affects how we work, play, handle business transactions, and even how and when we die. The fact that it influences our daily activities is undebatable. That technology would be used to enhance the teaching of mathematics was inevitable. While technology is not the only component of the Calculus Reform movement, it is the central focus of the movement. It is what separates this movement from reform movements of years past. Mathematicians and math educators are changing how we teach calculus. Critics may question the role of calculators and computers in the classroom, but the opportunities provided by technology will make it virtually impossible to ignore this medium of delivering classroom instruction. It is generally accepted that the problems associated with poor mathematics performance in this country must be solved in the institutions of higher education [1]. CAL-TECH is the beginning of Texas Southern University's contribution to the solution.
10

Sadler, Philip, and Gerhard Sonnert. "The Path to College Calculus: The Impact of High School Mathematics Coursework." Journal for Research in Mathematics Education 49, no. 3 (May 2018): 292–329. http://dx.doi.org/10.5951/jresematheduc.49.3.0292.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study addresses a longstanding question among high school mathematics teachers and college mathematics professors: Which is the best preparation for college calculus—(a) a high level of mastery of mathematics considered preparatory for calculus (algebra, geometry, precalculus) or (b) taking calculus itself in high school? We used a data set of 6,207 students of 216 professors at 133 randomly selected U.S. colleges and universities, and hierarchical models controlled for differences in demography and background. Mastery of the mathematics considered preparatory for calculus was found to have more than double the impact of taking a high school calculus course on students' later performance in college calculus, on average. However, students with weaker mathematics preparation gained the most from taking high school calculus.

Dissertations on the topic "High performances calculus":

1

Jerad, Sadok. "Approches du second ordre de d'ordre élevées pour l'optimisation nonconvex avec variantes sans évaluation de la fonction objective." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP024.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Even though nonlinear optimization seems (a priori) to be a mature field, new minimization schemes are proposed or rediscovered for modern large-scale problems. As an example, and in retrospect of the last decade, we have seen a surge of first-order methods with different analyses, despite the fact that the well-known theoretical limitations of these methods had been thoroughly discussed before. This thesis explores two main lines of research in the field of nonconvex optimization, with a particular focus on second- and higher-order methods. In the first series of works, we focus on algorithms that do not compute function values and operate without knowledge of any parameters, since the most popular first-order methods currently used for modern problems fall into the latter category. We start by recasting the well-known Adagrad algorithm in a trust-region framework and use that paradigm to study two classes of first-order deterministic OFFO (Objective-Free Function Optimization) algorithms. To enable faster exact OFFO algorithms, we then propose a pth-order deterministic adaptive regularization method that avoids the computation of function values. This approach recovers the well-known convergence rate of the standard framework when searching for stationary points, while using significantly less information. In the second series of works, we analyze adaptive algorithms in the more classical framework where function values are used to adapt the parameters. We extend adaptive regularization methods to a specific class of Banach spaces by developing a Hölder gradient descent algorithm. In addition, we investigate a second-order algorithm that alternates between negative-curvature and Newton steps with a near-optimal convergence rate. To handle large problems, we propose subspace versions of the algorithm that show promising numerical performance. Overall, this research covers a wide range of optimization techniques and provides valuable insights and contributions to both parameter-free and adaptive optimization algorithms for nonconvex functions. It also opens the door to subsequent theoretical developments and the introduction of faster numerical algorithms.
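To make the "objective-free" idea concrete, recall the textbook Adagrad update that the thesis recasts in a trust-region framework (standard form, written coordinate-wise; not the thesis's own notation):

\[
x_{k+1,j} = x_{k,j} - \frac{\alpha}{\sqrt{\epsilon + \sum_{i=0}^{k} g_{i,j}^{2}}}\; g_{k,j},
\qquad g_i = \nabla f(x_i),
\]

which only ever uses gradients: no function value f(x_k) is evaluated, which is precisely what makes such a method OFFO and largely insensitive to parameter choices.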
2

Peretti, Pezzi Guilherme. "High performance hydraulic simulations on the grid using Java and ProActive." Nice, 2011. http://www.theses.fr/2011NICE4118.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Optimization of water distribution is a crucial issue which has been targeted by many modelling tools. Useful models, implemented several decades ago, need to be updated and reimplemented in more powerful computing environments. This thesis presents the redesign of a legacy hydraulic simulation software (IRMA), written in FORTRAN, that has been used for over 30 years by the Société du Canal de Provence in order to design and maintain water distribution networks. IRMA was developed mainly for the treatment of irrigation networks, using Clément's probabilistic demand model, and today it handles more than 6,000 km of pressurized networks. The growing complexity and size of the networks required updating IRMA and rewriting the code using modern tools and a modern language (Java). This thesis presents IRMA's simulation model, including its head-loss equations, linearization methods, topology analysis algorithms, equipment modelling and the construction of the linear system. Some new simulation features are presented: scenarios with probabilistic demands (débit de Clément), pump profiling, pipe sizing, and pressure-driven analysis. The solution adopted for solving the linear system is described, and the results are validated by comparison with the previous FORTRAN version for all networks maintained by the Société du Canal de Provence, as well as against the values obtained from a standard and well-known simulation tool (EPANET). Regarding performance, a sequential benchmark against the former FORTRAN version is presented. Finally, two use cases demonstrate the capability of executing distributed simulations on a grid infrastructure using the ProActive solution. The new solution has already been deployed in a production environment and clearly demonstrates its efficiency, with a significant reduction of computation time, an improved quality of results and a transparent integration with the company's modern software infrastructure (spatial databases).
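The head-loss relations mentioned above are, generically, power laws in the flow; as a purely illustrative sketch (the exact correlations used in IRMA are not spelled out here), a pipe head loss and its linearization around a working flow q_0 — the step that yields the linear system solved at each iteration — can be written as

\[
\Delta h(q) = r\, q\,\lvert q\rvert^{\,n-1},
\qquad
\Delta h(q) \approx \Delta h(q_0) + n\, r\,\lvert q_0\rvert^{\,n-1}\,(q - q_0),
\]

with n ≈ 2 for Darcy–Weisbach and n ≈ 1.852 for Hazen–Williams friction models.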
3

Bondouy, Manon. "Construction de modèles réduits pour le calcul des performances des avions." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30027/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objective of this thesis is to provide a methodology, and the associated tools, to standardize the process of building aircraft performance and handling-quality models. This typically leads to building surrogate models in order to satisfy contradictory industrial objectives of memory size, accuracy and computation time. After laying out the steps of a surrogate-construction methodology and carrying out a critical state of the art, Neural Networks and High Dimensional Model Representation methods were selected, then adapted and validated on low-dimensional functions. For functions of higher dimension, a reduction method based on the optimal selection of surrogate sub-models was developed, which satisfies the requirements on accuracy, computation time and memory size. The efficiency of this method was finally demonstrated on an aircraft performance model intended to be embedded in avionic systems.
4

Pawlowski, Filip Igor. "High-performance dense tensor and sparse matrix kernels for machine learning." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN081.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we develop high-performance algorithms for certain computations involving dense tensors and sparse matrices. We address kernel operations that are useful for machine learning tasks, such as inference with deep neural networks (DNNs). We develop data structures and techniques to reduce memory use, to improve data locality and hence to improve cache reuse of the kernel operations. We design both sequential and shared-memory parallel algorithms. In the first part of the thesis we focus on dense tensor kernels. Tensor kernels include the tensor-vector multiplication (TVM), tensor-matrix multiplication (TMM), and tensor-tensor multiplication (TTM). Among these, TVM is the most bandwidth-bound and constitutes a building block for many algorithms. We focus on this operation and develop a data structure and sequential and parallel algorithms for it. We propose a novel data structure which stores the tensor as blocks, which are ordered using the space-filling curve known as the Morton curve (or Z-curve). The key idea consists of dividing the tensor into blocks small enough to fit in cache, and storing them according to the Morton order, while keeping a simple, multi-dimensional order on the individual elements within them. Thus, high-performance BLAS routines can be used as micro-kernels for each block. We evaluate our techniques on a set of experiments. The results not only demonstrate superior performance of the proposed approach over the state-of-the-art variants by up to 18%, but also show that the proposed approach induces 71% less sample standard deviation for the TVM across the d possible modes. We also show that our data structure naturally extends to other tensor kernels by demonstrating that it yields up to 38% higher performance for the higher-order power method. Finally, we investigate shared-memory parallel TVM algorithms which use the proposed data structure. Several alternative parallel algorithms were characterized theoretically and implemented using OpenMP to compare them experimentally. Our results on up to 8-socket systems show near-peak performance for the proposed algorithm for 2-, 3-, 4-, and 5-dimensional tensors. In the second part of the thesis, we explore sparse computations in neural networks, focusing on the high-performance sparse deep inference problem. Sparse DNN inference is the task of using sparse DNN networks to classify a batch of data elements forming, in our case, a sparse feature matrix. The performance of sparse inference hinges on efficient parallelization of the sparse matrix-sparse matrix multiplication (SpGEMM) repeated for each layer in the inference function. We first characterize efficient sequential SpGEMM algorithms for our use case. We then introduce model-parallel inference, which uses a two-dimensional partitioning of the weight matrices obtained using hypergraph partitioning software. The model-parallel variant uses barriers to synchronize at layers. Finally, we introduce tiling model-parallel and tiling hybrid algorithms, which increase cache reuse between the layers, and use a weak synchronization module to hide load imbalance and synchronization costs. We evaluate our techniques on the large network data from the IEEE HPEC 2019 Graph Challenge on shared-memory systems and report up to 2x speed-up versus the baseline.
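A minimal sketch of the blocking idea described above follows. It is illustrative only: here the blocks are NumPy views rather than a genuinely relocated Morton-ordered storage, and einsum stands in for the BLAS micro-kernel applied per block.

```python
import numpy as np

def morton3(i, j, k, bits=10):
    """Interleave the bits of three block coordinates (Morton / Z-order key)."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (3 * b + 2)
        key |= ((j >> b) & 1) << (3 * b + 1)
        key |= ((k >> b) & 1) << (3 * b)
    return key

def blocked_tvm_mode0(T, x, B=64):
    """Mode-0 TVM  y[j,k] = sum_i T[i,j,k] * x[i], visiting cache-sized
    blocks of T in Morton order of their block coordinates."""
    n0, n1, n2 = T.shape
    assert n0 % B == 0 and n1 % B == 0 and n2 % B == 0
    y = np.zeros((n1, n2))
    blocks = [(bi, bj, bk) for bi in range(n0 // B)
                           for bj in range(n1 // B)
                           for bk in range(n2 // B)]
    blocks.sort(key=lambda c: morton3(*c))        # Morton traversal of blocks
    for bi, bj, bk in blocks:
        i0, j0, k0 = bi * B, bj * B, bk * B
        block = T[i0:i0+B, j0:j0+B, k0:k0+B]      # stands in for a stored contiguous block
        # dense micro-kernel on the block (a BLAS call in an optimized version)
        y[j0:j0+B, k0:k0+B] += np.einsum('ijk,i->jk', block, x[i0:i0+B])
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = rng.random((128, 128, 128))
    x = rng.random(128)
    assert np.allclose(blocked_tvm_mode0(T, x), np.einsum('ijk,i->jk', T, x))
```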
5

Cohet, Romain. "Transport des rayons cosmiques en turbulence magnétohydrodynamique." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS051/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we study the transport properties of high-energy charged particles in turbulent electromagnetic fields. These fields were generated using the magnetohydrodynamic (MHD) code RAMSES, which solves the compressible ideal MHD equations. We developed a module for generating MHD turbulence using a large-scale forcing technique. The MHD equations make the energy cascade from large scales to small ones, developing an energy spectrum that follows a power law, called the inertial range. We developed a module for computing charged-particle trajectories once the turbulent spectrum is established. By injecting the particles at an energy such that the inverse of the particle Larmor radius corresponds to a mode in the inertial range of the Fourier spectrum, we highlighted systematic effects related to the power-law spectrum. This method showed that the mean free path is independent of the particle energy until the Larmor radius takes values close to the turbulence coherence scale. The dependence of the mean free path on the Alfvénic Mach number of the MHD simulations also follows a power law. We also developed a technique to measure the effect of the anisotropy of MHD turbulence on the transport properties of cosmic rays, through the computation of local magnetic fields. This study showed an effect on the pitch-angle scattering coefficient, supporting the assumption that the particles are more sensitive to small-scale fluctuations.
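Trajectory integration of charged particles in given electromagnetic fields is commonly done with the Boris scheme; the following is a generic, assumed implementation for illustration — the abstract does not specify which integrator the thesis uses.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One step of the standard (non-relativistic) Boris integrator for
    dx/dt = v,  dv/dt = (q/m)(E + v x B).  x, v, E, B are 3-vectors."""
    v_minus = v + 0.5 * dt * q_over_m * E            # first half electric kick
    t = 0.5 * dt * q_over_m * B                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)          # magnetic rotation
    v_new = v_plus + 0.5 * dt * q_over_m * E         # second half electric kick
    return x + dt * v_new, v_new

# example: gyration in a uniform magnetic field along z
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    x, v = boris_push(x, v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                      q_over_m=1.0, dt=0.01)
```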
6

Applencourt, Thomas. "Calcul haute performance & chimie quantique." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30162/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis work has two main objectives: 1. to develop and apply original electronic-structure methods for quantum chemistry; 2. to implement several computational strategies to achieve efficient large-scale computer simulations. In the first part, both the Configuration Interaction (CI) and the Quantum Monte Carlo (QMC) methods used in this work for calculating quantum properties are presented. We then describe more specifically the selected CI approach (the so-called CIPSI approach, Configuration Interaction using a Perturbative Selection done Iteratively) that we used for building trial wavefunctions for QMC simulations. As a first application, we present the QMC calculation of the total non-relativistic energies of transition-metal atoms of the 3d series. This work, which required the implementation of Slater-type basis functions in our codes, has led to the best values ever published for these atoms. We then present our original implementation of pseudo-potentials for QMC and discuss the calculation of atomization energies for a benchmark set of 55 organic molecules. The second part is devoted to the High Performance Computing (HPC) aspects. The objective is to make possible and/or facilitate the deployment of very large-scale simulations. From the developer's point of view this includes the use of original programming paradigms, single-core optimization, massively parallel calculations on computing grids (supercomputers and Cloud), development of collaborative tools, and so on; from the user's point of view it includes improved code installation, management of the input/output parameters, a GUI, and interfacing with other codes. The implementation of these different aspects in our in-house codes quantum package and qmc=chem is also presented.
7

Lagardère, Louis. "Calcul haute-performance et dynamique moléculaire polarisable." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066042.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work lies at the interface between theoretical chemistry, scientific computing and applied mathematics. We study the different algorithms used to solve the specific equations that arise in polarizable molecular dynamics in a massively parallel context. This family of models indeed requires solving more complex equations than in the classical case, making the use of supercomputers mandatory in order to obtain significant results. We study more specifically different types of boundary conditions that represent different ways of modelling solvation effects: first the Particle Mesh Ewald method to treat periodic boundary conditions, and then a continuum solvation model discretized within a domain decomposition strategy, ddCOSMO. The outline of this thesis is as follows: first, the different parallel strategies in the general context of molecular dynamics are reviewed. Then, several methods to adapt these strategies to the specific case of polarizable force fields are presented. After that, strategies that circumvent certain limits due to the use of iterative methods in polarizable molecular dynamics, by using analytical approximations of the polarization energy, are presented and studied. Then, the adaptation of these methods to different practical cases of boundary conditions is presented: first for the Particle Mesh Ewald method treating periodic boundary conditions, and then for a continuum solvation model discretized with a domain decomposition strategy, ddCOSMO. Finally, various numerical results and applications are presented.
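The "more complex equations" referred to are, at their core, the induced-dipole equations of polarizable force fields; written in the standard textbook form (not the thesis's specific formulation):

\[
\mu_i = \alpha_i \Big( E_i + \sum_{j\neq i} T_{ij}\,\mu_j \Big)
\quad\Longleftrightarrow\quad
\big(\mathbf{I} - \boldsymbol{\alpha}\mathbf{T}\big)\,\boldsymbol{\mu} = \boldsymbol{\alpha}\mathbf{E},
\]

a large linear system that must be solved, typically iteratively, at every time step, which is why the choice of solver and its parallelization dominate the cost of polarizable molecular dynamics.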
8

Guilloteau, Quentin. "Une approche autonomique à la régulation en ligne de systèmes HPC, avec un support pour la reproductibilité des expériences." Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALM075.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
High-Performance Computing (HPC) systems have become increasingly complex, and their performance and power consumption make them less and less predictable. This unpredictability requires cautious runtime management to guarantee an acceptable quality of service to the end users. Such a regulation problem arises in the context of the computing grid middleware CiGri, which aims at harvesting the idle computing resources of a set of clusters by injecting low-priority jobs. A too aggressive harvesting strategy can degrade performance for all users of the clusters, while a too timid one leaves resources idle and thus wastes computing power. There is therefore a trade-off between the amount of resources that can be harvested and the resulting degradation of user jobs, and this trade-off can evolve at runtime based on Service Level Agreements and the current load of the system. We claim that such regulation challenges can be addressed with tools from Autonomic Computing, in particular when coupled with Control Theory. This thesis investigates several regulation problems in the context of CiGri with such tools. We focus on regulating the harvesting based on the load of a shared distributed file system, and on improving the overall usage of the computing resources. We also evaluate and compare the reusability of the proposed control-based solutions in the context of HPC systems. The experiments carried out in this thesis also led us to investigate new tools and techniques to improve the cost and reproducibility of experiments. We present a tool named NixOS-Compose able to generate and deploy reproducible distributed software environments, and we investigate techniques to reduce the number of machines needed to deploy experiments on grid or cluster middlewares, such as CiGri, while ensuring an acceptable level of realism for the final deployed system.
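As an illustration of the control-theoretic idea (not CiGri's actual code; the names, gains and toy plant model below are all assumptions), a proportional-integral loop that injects low-priority jobs so that a shared file-server load tracks a setpoint could look like this:

```python
# Minimal sketch of a PI feedback loop regulating low-priority job injection.
# The "plant" is a toy first-order model of the file-server load.
TARGET_LOAD = 0.7          # desired file-server utilization (setpoint)
KP, KI = 0.5, 0.2          # PI gains (would be identified on the real system)

def simulate(steps=50, dt=1.0):
    load, integral = 0.0, 0.0
    for _ in range(steps):
        error = TARGET_LOAD - load
        integral += error * dt
        jobs = max(0.0, KP * error + KI * integral)   # PI control law -> jobs to inject
        # toy plant: each injected job adds load, which decays as jobs complete
        load += dt * (0.05 * jobs - 0.1 * load)
        print(f"jobs={jobs:5.2f}  load={load:5.3f}")

if __name__ == "__main__":
    simulate()
```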
9

Jolivet, Pierre. "Méthodes de décomposition de domaine. Application au calcul haute performance." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM040/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis introduces a unified framework for various domain decomposition methods: those with overlap, so-called Schwarz methods, and those based on Schur complements, so-called substructuring methods. It is then possible to switch between methods with a high level of abstraction and to build different preconditioners to accelerate the iterative solution of large sparse linear systems. Such systems are frequently encountered in industrial or scientific problems after discretization of continuous models. Even though these preconditioners naturally exhibit good parallelism properties on distributed architectures, they can prove numerically inadequate for complex decompositions or multiscale physics. This lack of robustness may be alleviated by concurrently solving sparse or dense local generalized eigenvalue problems, thus identifying a priori the modes that hinder the convergence of the underlying iterative methods. Using these modes, it is then possible to define projection operators based on what is usually referred to as a coarse solver. These auxiliary tools tend to solve the aforementioned issues, but typically decrease the parallel efficiency of the preconditioners. In this dissertation, it is shown in three points that the newly developed construction is efficient: 1) by performing large-scale numerical experiments on Curie, a European supercomputer, and by comparing it with state-of-the-art 2) multigrid and 3) direct solvers.
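To fix ideas, a one-level overlapping additive Schwarz preconditioner, M^{-1} = sum_i R_i^T A_i^{-1} R_i, can be written down in a few lines for a 1D Poisson matrix. This is an illustrative sketch only, without the coarse/spectral correction that the thesis develops.

```python
import numpy as np

n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
b = np.ones(n)

# two overlapping subdomains (overlap of 20 unknowns)
subdomains = [np.arange(0, 110), np.arange(90, n)]
local_inv = [np.linalg.inv(A[np.ix_(s, s)]) for s in subdomains]

def apply_M_inv(r):
    """Additive Schwarz: restrict, solve locally, prolong, and sum."""
    z = np.zeros_like(r)
    for s, Ai_inv in zip(subdomains, local_inv):
        z[s] += Ai_inv @ r[s]
    return z

def pcg(A, b, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradient with the Schwarz preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M_inv(r)
    p, rz = z.copy(), r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = apply_M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

x, iters = pcg(A, b)
print("PCG iterations with additive Schwarz:", iters)
```

Without a coarse space, the iteration count of such a one-level method grows with the number of subdomains, which is exactly the robustness issue the spectral coarse solver described above addresses.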
10

Hascoët, Julien. "Contributions to Software Runtime for Clustered Manycores Applied to Embedded and High-Performance Applications." Thesis, Rennes, INSA, 2018. http://www.theses.fr/2018ISAR0029/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The growing need for computing power is more and more challenging to satisfy, especially in the embedded-systems world with autonomous cars, drones, and smartphones. New highly parallel and heterogeneous processors are emerging to answer this challenge. They operate in constrained environments with real-time requirements, reduced power consumption, and safety constraints. Programming these new chips is a time-consuming and challenging task, leading to huge software development costs. The Kalray MPPA® processor is a competitive example of low-power supercomputing on a single chip. It integrates up to 288 VLIW cores grouped in 18 clusters, each fitted with shared local memory. These clusters are interconnected with a high-bandwidth network-on-chip, and DMA engines are used to communicate. This processor is used in this thesis for the experimental results. We propose the AOS library, enabling high-performance communications and synchronizations of distributed local memories on clustered manycores. AOS achieves 70% of the peak hardware throughput for transfers larger than 8 KB. We propose tools for the implementation of static and dynamic dataflow programs based on AOS, to accelerate the development of parallel applications for clustered manycores. We propose an implementation of OpenVX for clustered manycores on top of AOS; OpenVX is a dataflow-based standard for the development of computer vision and neural-network computing. The proposed OpenVX implementation includes automatic optimizations such as data prefetching, to overlap communications and computations, and kernel fusion, to avoid the main-memory bandwidth bottleneck. Results show super-linear speedups.

Книги з теми "High performances calculus":

1

Stone, Harold S. High-performance computer architecture. 2nd ed. Reading, Mass: Addison-Wesley Pub. Co., 1990.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
2

Stone, Harold S. High-performance computer architecture. Reading, Mass: Addison-Wesley Pub. Co., 1987.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
3

Stone, Harold S. High-performance computer architecture. 3rd ed. Reading, Mass: Addison-Wesley, 1993.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
4

Feucht, Dennis. Designing high-performance amplifiers. Raleigh, NC: SciTech Pub., 2010.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
5

Shriver, Bruce D. The anatomy of a high-performance microprocessor: A systems perspective. Los Alamitos, Calif: IEEE Computer Society, 1998.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
6

International Conference on High Performance Computing (6th 1999 Calcutta, India). High performance computing--HiPC'99: 6th International Conference, Calcutta, India, December 1999 : proceedings. New York: Springer, 1999.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
7

Joe, Kazuki, Mateo Valero, Hidehiko Tanaka, and Masaru Kitsuregawa. High Performance Computing: Third International Symposium, ISHPC 2000 Tokyo, Japan, October 16-18, 2000 Proceedings. Springer London, Limited, 2003.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
8

Valero, Mateo, Kazuki Joe, Masaru Kitsuregawa, and Hidehiko Tanaka, eds. High Performance Computing: Third International Symposium, ISHPC 2000 Tokyo, Japan, October 16-18, 2000 Proceedings (Lecture Notes in Computer Science). Springer, 2000.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
9

Harth, Andreas, Ralf Schenkel, and Katja Hose. Linked Data Management. Taylor & Francis Group, 2016.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
10

Linked Data Management. Taylor & Francis Inc, 2014.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.

Частини книг з теми "High performances calculus":

1

Le Boudec, Jean-Yves, and Patrick Thiran. "Network Calculus Using Min-Plus System Theory." In High-Performance Networks for Multimedia Applications, 153–66. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4615-5541-4_9.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
2

Paulraj, D., and S. Swamynathan. "Composition of Composite Semantic Web Services Using Abductive Event Calculus." In High Performance Architecture and Grid Computing, 201–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22577-2_28.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
3

Geetha, G., and Saruchi. "From Calculus to Number Theory, Paves Way to Break OSS Scheme." In High Performance Architecture and Grid Computing, 609–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22577-2_81.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
4

Chahed, Tijani, Gérard Hébuterne, and Caroline Fayet. "Mapping of Loss and Delay Between IP and ATM Using Network Calculus." In Networking 2000 Broadband Communications, High Performance Networking, and Performance of Communication Networks, 240–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45551-5_21.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
5

Bhayat, Ahmed, and Martin Suda. "A Higher-Order Vampire (Short Paper)." In Automated Reasoning, 75–85. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63498-7_5.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The support for higher-order reasoning in the Vampire theorem prover has recently been completely reworked. This rework consists of new theoretical ideas, a new implementation, and a dedicated strategy schedule. The theoretical ideas are still under development, so we discuss them at a high level in this paper. We also describe the implementation of the calculus in the Vampire theorem prover, the strategy schedule construction and several empirical performance statistics.
6

Hou, Yafei, Shiyong Zhang, and Yiping Zhong. "Scheduling Model in Global Real-Time High Performance Computing with Network Calculus." In Grid and Cooperative Computing, 195–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24680-0_32.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
7

"Issues Related to Acceleration of Algorithms." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 173–94. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8350-0.ch010.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This chapter provides examples of accelerating various algorithms using the dataflow paradigm. Implementation of algorithms demands input data changes, operation substitutions, and data tiling to achieve significant performance. The implementation becomes even more challenging if data are coming via a serial stream. This chapter presents acceleration mechanisms on three different use cases related to the presented algorithms. This chapter consists of three parts: Acceleration of Algorithms Using Innovations in Suboptimal Calculus and Approximate Computing, Acceleration of Algorithms Using Innovations in Computer Architecture and Implementational Technologies, and Speeding up the Execution of Data Mining Algorithms.
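Data tiling, mentioned in the abstract, simply means iterating over data in blocks small enough to live in fast local memory. The architecture-agnostic sketch below illustrates the pattern; the block size and the summation workload are invented, not the chapter's use cases.

```python
# Minimal data-tiling sketch: process a matrix in block-by-block fashion so each
# block can fit in fast local memory. The workload (summing elements) is arbitrary.
import numpy as np

def tiled_sum(matrix, block=64):
    rows, cols = matrix.shape
    total = 0.0
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            tile = matrix[r:r + block, c:c + block]   # one tile at a time
            total += float(tile.sum())                # per-tile work happens here
    return total

m = np.random.rand(300, 500)
assert np.isclose(tiled_sum(m), m.sum())
```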
8

Murturi, Ilir. "Transforming the Method of Least Squares to the Dataflow Paradigm." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 114–21. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-7156-9.ch008.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In mathematical statistics, an interesting and common problem is finding the best linear or non-linear regression equations that express the relationship between variables or data. The method of least squares (MLS) represents one of the oldest procedures among multiple techniques to determine the best fit line to the given data through simple calculus and linear algebra. Notably, numerous approaches have been proposed to compute the least-squares. However, the proposed methods are based on the control flow paradigm. As a result, this chapter presents the MLS transformation from control flow logic to the dataflow paradigm. Moreover, this chapter shows each step of the transformation, and the final kernel code is presented.
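Independently of the dataflow port described in the chapter, the underlying computation of a least-squares line fit reduces to solving the normal equations. The sketch below recalls that computation in NumPy on made-up data; it is not the chapter's kernel code.

```python
# Minimal ordinary least-squares line fit y ≈ a*x + b, shown only to recall the
# computation; the chapter's contribution is a dataflow kernel, not this code.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])      # made-up sample data

X = np.column_stack([x, np.ones_like(x)])     # design matrix [x, 1]
(a, b), residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"slope={a:.3f}, intercept={b:.3f}")

# Equivalent closed form via the normal equations X^T X w = X^T y.
w = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(w, [a, b])
```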
9

Fowley, Frank, Claus Pahl, and Li Zhang. "Cloud Service Brokerage." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 613–39. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-6178-3.ch024.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Cloud service brokerage has been identified as a key concern for future Cloud technology research and development. Integration, customization, and aggregation are core functions of a Cloud service broker. The need to cater to horizontal and vertical integration in service description languages, horizontally between different providers and vertically across the different Cloud layers, has been well recognized. In this chapter, the authors propose a conceptual framework for a Cloud service broker in two parts: first, a reference architecture for Cloud service brokers; and second, a rich ontology-based template manipulation framework and operator calculus that describes the mediated and integrated Cloud services, facilitates manipulating their descriptions, and allows both horizontal and vertical dimensions to be covered. Structural aspects of that template are identified, formalized in an ontology, and aligned with the Cloud development and deployment process.
10

"RESULTS AND DISCUSSION The olfactometer readings of the measurements are statistically treated as described in /3/. The results for the plants and air cleaning systems, described in table 1, are given in table 2. system chemical biological plant ABCDE rel. odour raw air 65200 14200 26800 41400 95100 concentration Z50 cleaned air 48300 7360 29500 7930 5100 /odour units/ olfactometric efficiency n 26 % 48 % 81 % 95 % Table 2. Results of measurements, obtained during normal performance, cooker closed. Taking the index R for raw air at the cleaner inlet and the index C for cleaned air at the cleaner outlet, the olfactometric efficiency of the cleaner is defined according to /6/: 50 R In the regarded air cleaning systems, the odoriferous pollutants are first seperated from the raw air by sorption and then decomposed by chemicals or by micro-organisms. As long as this decomposition is not yet completed, the pollutants may desorb and repollute the air, when sorption conditions, i.g. the raw gas concentration, change. By the relation of the difference in raw and cleaned gas concentration to the actual raw gas concentration, a negative efficiency may be calcula­ ted by equation 1, i.g. when a low raw air concentration is preceded by a high one. Table 3 shows peak concentrations and increasing olfactometric efficiency, wherT’in plant A the cooker is opened. rel. odour concentration raw air 627000 Zgg /odour units/ cleaned air 240800 olfactometric efficiency n 62 % Table 3. Results of measurements, obtained in peak load performance when cooker is opened. Although the number of measurements is too small for general assertions, some deductions can be drawn: The results confirm the superiority of the biofilters. And in fact, the number of biofilters in rendering plants increases. Concerning the rel. odour concentration in the cleaned air, a large difference is evident between the presented results and the assertion that a limit value of 100 odour units can be achieved. Two interpretations can be offered:." In Odour Prevention and Control of Organic Sludge and Livestock Farming, 239. CRC Press, 1986. http://dx.doi.org/10.1201/9781482286311-98.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.

Тези доповідей конференцій з теми "High performances calculus":

1

Li, Ying, Lei Lei, Siyu Lin, and Zhangdui Zhong. "Performance Analysis for High-speed Railway Communication Network using Stochastic Network Calculus." In 5th IET International Conference on Wireless, Mobile and Multimedia Networks (ICWMMN 2013). Institution of Engineering and Technology, 2013. http://dx.doi.org/10.1049/cp.2013.2386.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
2

Isaev, Mikhail, Nic Mcdonald, Larry Dennison, and Richard Vuduc. "Calculon: a methodology and tool for high-level co-design of systems and large language models." In SC '23: International Conference for High Performance Computing, Networking, Storage and Analysis. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3581784.3607102.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
3

Feng, Quanyou, Jiannong Cao, Yue Qian, and Wenhua Dou. "An Analytical Approach to Modeling and Evaluation of Optical Chip-scale Network using Stochastic Network Calculus." In 2012 IEEE 14th Int'l Conf. on High Performance Computing and Communication (HPCC) & 2012 IEEE 9th Int'l Conf. on Embedded Software and Systems (ICESS). IEEE, 2012. http://dx.doi.org/10.1109/hpcc.2012.152.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
4

Catania, Andrea Emilio, Alessandro Ferrari, and Antonio Mittica. "High-Pressure Rotary Pump Performance in Multi-Jet Common Rail Systems." In ASME 8th Biennial Conference on Engineering Systems Design and Analysis. ASMEDC, 2006. http://dx.doi.org/10.1115/esda2006-95590.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The high-pressure hydraulic circuit of the Multi-jet Common Rail (C.R.) system has been thoroughly investigated in the last few years by researchers in the automotive field. However, knowledge of the high-pressure pump performance is still scarce. The hydraulic-mechanical efficiency of the pump is only known as a mean value, and no published data are available on the Radial-jet compression volumetric efficiency. Because part of the pumped fuel is expelled by the pressure-control valve, and because of the oil flowing in the cooling and lubrication circuit, the determination of the compression volumetric efficiency seems to be a hard task. In the present paper a detailed description of the Radial-jet performance has been provided. The dependence of the flow rate sucked by the high-pressure pump on speed and load has been studied, and the characteristic curve of the cooling-lubricant circuit has been determined. A special procedure was designed and applied for the experimental evaluation of the fuel leakages from the pumping chambers, so as to allow the calculus of the volumetric efficiency. The actual head-capacity pump curves at different revolution speeds were plotted and compared with the electroinjector flow requirements, so as to allow the evaluation of the efficiency of the pressure-control strategy. Furthermore, the dependence of the pump mechanical-hydraulic efficiency on head and speed was also experimentally assessed.
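As a back-of-the-envelope illustration of the quantity being measured (the actual test procedure and data belong to the paper), volumetric efficiency relates the delivered flow to the geometric displacement of the pumping chambers. All numbers below are invented placeholders.

```python
# Illustrative volumetric-efficiency calculation for a positive-displacement
# high-pressure pump. All values are invented, not data from the paper.
displacement_cm3_per_rev = 0.7      # pumping-chamber geometric displacement
speed_rpm = 1500.0                  # pump shaft speed
delivered_flow_l_per_min = 0.9      # flow actually delivered to the rail
leakage_l_per_min = 0.05            # measured leakage from the pumping chambers

theoretical_flow_l_per_min = displacement_cm3_per_rev * speed_rpm / 1000.0
eta_vol = delivered_flow_l_per_min / theoretical_flow_l_per_min
print(f"theoretical flow: {theoretical_flow_l_per_min:.3f} L/min")
print(f"volumetric efficiency: {eta_vol:.2%}")
print(f"share lost to leakage: {leakage_l_per_min / theoretical_flow_l_per_min:.2%}")
```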
5

Pennock, G. R., and B. S. Ryuh. "Dynamic Analysis of Two Cooperating Robots." In ASME 1987 Design Technology Conferences. American Society of Mechanical Engineers, 1987. http://dx.doi.org/10.1115/detc1987-0064.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The use of a computer-controlled multirobot system with sensors in batch manufacturing and assembly tasks offers a number of significant advantages. These include cost savings, reliability, tolerance of working environments unacceptable to humans, and adaptability to both structured and unstructured environments through simple reprogramming. The end results are improved productivity, efficiency, and flexibility in manufacturing and automation. However, the use of two or more cooperating robots has not been fully exploited to date. Current industrial practice employs simple time-space coordination, which does not allow more than one robot to work in a common workspace; such coordination and control result in under-utilization of robots. With the increasing demand for high-performance manipulators and efficient multirobot manufacturing cells, there is a vital need to develop theoretical and design methodologies that will solve the generic problems faced by industrial robots working cooperatively. If multirobot systems are to be used in manufacturing and assembly tasks, a thorough knowledge of the dynamics of such systems is essential. This paper formulates the dynamics of two robots cooperating to move a rigid-body object. The analysis is based on Newtonian mechanics with screw calculus and dual transformation matrices.
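The paper's own formulation uses screw calculus and dual transformation matrices, which are not reproduced here. As a much simpler illustration of the closed-chain constraint that two cooperating arms must satisfy, the sketch below composes ordinary 4x4 homogeneous transforms so that both end-effectors agree on the shared object pose; all frame names and poses are invented.

```python
# Toy closed-chain (grasp) constraint for two arms holding one rigid object,
# written with ordinary homogeneous transforms. All poses are invented.
import numpy as np

def transform(rot_z_deg, translation):
    """Homogeneous transform: rotation about z followed by a translation."""
    a = np.radians(rot_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0],
                 [np.sin(a),  np.cos(a), 0],
                 [0,          0,         1]]
    T[:3, 3] = translation
    return T

# World pose of the shared object.
T_world_obj = transform(30, [0.5, 0.2, 0.0])

# Fixed grasp transforms: where each end-effector sits relative to the object.
T_obj_ee1 = transform(0,   [ 0.1, 0.0, 0.0])
T_obj_ee2 = transform(180, [-0.1, 0.0, 0.0])

# Each arm's required end-effector pose follows from the object pose.
T_world_ee1 = T_world_obj @ T_obj_ee1
T_world_ee2 = T_world_obj @ T_obj_ee2

# Closure check: both chains must reconstruct the same object pose.
assert np.allclose(T_world_ee1 @ np.linalg.inv(T_obj_ee1), T_world_obj)
assert np.allclose(T_world_ee2 @ np.linalg.inv(T_obj_ee2), T_world_obj)
print("grasp constraint satisfied")
```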
6

Chen, Wenqing, Jidong Tian, Caoyun Fan, Hao He, and Yaohui Jin. "Dependent Multi-Task Learning with Causal Intervention for Image Captioning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/312.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Recent work for image captioning mainly followed an extract-then-generate paradigm, pre-extracting a sequence of object-based features and then formulating image captioning as a single sequence-to-sequence task. Although promising, we observed two problems in generated captions: 1) content inconsistency where models would generate contradicting facts; 2) not informative enough where models would miss parts of important information. From a causal perspective, the reason is that models have captured spurious statistical correlations between visual features and certain expressions (e.g., visual features of "long hair" and "woman"). In this paper, we propose a dependent multi-task learning framework with the causal intervention (DMTCI). Firstly, we involve an intermediate task, bag-of-categories generation, before the final task, image captioning. The intermediate task would help the model better understand the visual features and thus alleviate the content inconsistency problem. Secondly, we apply Pearl's do-calculus on the model, cutting off the link between the visual features and possible confounders and thus letting models focus on the causal visual features. Specifically, the high-frequency concept set is considered as the proxy confounders where the real confounders are inferred in the continuous space. Finally, we use a multi-agent reinforcement learning (MARL) strategy to enable end-to-end training and reduce the inter-task error accumulations. The extensive experiments show that our model outperforms the baseline models and achieves competitive performance with state-of-the-art models.
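The paper applies Pearl's do-calculus with a proxy confounder set. As a generic reminder of the backdoor adjustment it builds on (not the paper's captioning model), the sketch below computes P(Y | do(X)) = sum_z P(Y | X, z) P(z) on an invented discrete example and contrasts it with the observational conditional.

```python
# Backdoor adjustment on a toy discrete example:
#   P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) * P(Z=z)
# The distributions are invented; they only illustrate the do-calculus identity.
p_z = {0: 0.7, 1: 0.3}                      # confounder prior P(Z)
p_y_given_xz = {                            # P(Y=1 | X, Z)
    (0, 0): 0.20, (0, 1): 0.60,
    (1, 0): 0.30, (1, 1): 0.70,
}
p_x_given_z = {(0, 0): 0.8, (1, 0): 0.2,    # P(X=x | Z=z), keys are (x, z)
               (0, 1): 0.3, (1, 1): 0.7}

def p_y_do_x(x):
    return sum(p_y_given_xz[(x, z)] * p_z[z] for z in p_z)

def p_y_given_x(x):
    # Observational P(Y=1 | X=x), which mixes in the confounder via Bayes' rule.
    p_x = sum(p_x_given_z[(x, z)] * p_z[z] for z in p_z)
    return sum(p_y_given_xz[(x, z)] * p_x_given_z[(x, z)] * p_z[z] for z in p_z) / p_x

for x in (0, 1):
    print(f"x={x}: P(Y|do(X)) = {p_y_do_x(x):.3f}   P(Y|X) = {p_y_given_x(x):.3f}")
```

The two columns differ because Z influences both X and Y; cutting the link from the confounder to X, as the paper does for visual features, is what the intervened quantity represents.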
7

Florea, Adrian, and Arpad Gellert. "DIFFERENT APPROACHES FOR SOLVING OPTIMIZATION PROBLEMS USING INTERACTIVE E-LEARNING TOOLS." In eLSE 2014. Editura Universitatii Nationale de Aparare "Carol I", 2014. http://dx.doi.org/10.12753/2066-026x-14-081.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Solving optimization problems, regardless of the application domain, requires knowledge of a mathematical apparatus based on techniques and methods that are not always simple (differential calculus, operational research, etc.) as well as concepts of artificial intelligence, machine learning, evolutionary computing, and graph theory. These problems are NP-complete, and very often the optimization process targets more than one objective, at least two, which can behave antagonistically. As an example, consider a simple car design with two objectives: cost (production cost or fuel consumption), which should be minimized, and performance (speed limit or reliability), which should be maximized. Likewise, in microprocessor design the multi-criteria analysis must target high performance, small integration area and low energy consumption, while also respecting constraints on thermal dissipation. Given the above, it becomes more difficult to teach optimization methods, to communicate new concepts and skills in an informative and formative manner, and at the same time to be attractive for students. Thus, developing effective e-learning tools targeting evolutionary algorithms for solving optimization problems is a continuous challenge. Moreover, technology applied to education has become a key issue in today's knowledge society, and education represents an essential element for knowledge improvement and economic growth. In this work we tackle the above challenges by developing the ETTOP tool (E-learning Tool for Teaching Optimization Problems), in order to familiarize students with new advanced learning methods and tools in the Evolutionary Computing domain, and especially in the field of optimization methods. The main aim of our work consists in highlighting different approaches for solving mono- and multi-objective optimization problems using interactive e-learning tools (non-Pareto techniques, Pareto techniques and techniques based on swarm behavior). Although our software tool is designed and developed as an Application Programming Interface (API) that allows each user to select an existing problem or define a new one, and to customize the solution algorithm based on problem-specific constraints that users can construct themselves or take over from sets of predetermined functions and rules, at this stage we present only three case studies that we implemented.
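The abstract contrasts Pareto-based and non-Pareto techniques. A minimal sketch of the core Pareto notion (dominance and non-dominated filtering) is given below for two objectives that are both minimized; it is a generic illustration, not the ETTOP tool's code, and the candidate designs are invented.

```python
# Minimal Pareto non-dominated filtering for a two-objective minimization
# problem (e.g. cost vs. fuel consumption). Generic illustration only.
def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Invented candidate designs as (cost, fuel_consumption) pairs, both to minimize.
candidates = [(10, 9.0), (12, 7.5), (15, 6.0), (11, 8.0), (16, 6.5), (20, 5.9)]
print(pareto_front(candidates))
# -> [(10, 9.0), (12, 7.5), (15, 6.0), (11, 8.0), (20, 5.9)]
```

A non-Pareto technique would instead collapse the objectives, for instance into a weighted sum, before optimizing a single scalar.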
8

Zderčík, Antonín, Jiří Nykodým, Jana Talašová, Pavel Holeček, and Michal Bozděch. "The application of fuzzy logic in the diagnostics of performance preconditions in tennis." In 12th International Conference on Kinanthropology. Brno: Masaryk University Press, 2020. http://dx.doi.org/10.5817/cz.muni.p210-9631-2020-5.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Sports performance is influenced by many factors that can be characterised as its relatively independent, although synergetic, components. The most frequently mentioned are the fitness, somatic, tactical, mental and technical factors of sports performance. The subject of interest in sport is the process of monitoring and evaluating the level of these individual factors, i.e. the diagnostics of sports performance. When diagnosing the level of performance prerequisites for tennis, it is recommended to use diagnostic methods that focus on tennis-specific performance prerequisites. Analyses of modern tennis show speed (reaction, action), strength (explosive), strength endurance and specific coordination abilities to be the most important motor prerequisites. Diagnostics of the motor prerequisites of an athlete are often performed in practice employing motor tests and test batteries. Methods of evaluating the results obtained are generally based on the probability approach, though an alternative is provided by a method based on the theory of fuzzy logic. The aim of the research was to use the theory of fuzzy logic in evaluating the level of performance prerequisites and to compare the evaluation results obtained using a classical discrete approach and a fuzzy approach. The two approaches are evaluated and compared using the results of testing of a group of 15–16-year-old tennis players (n = 203, age M ± SD = 15.97 ± 0.57 years, height M ± SD = 181.9 ± 6.8 cm, weight M ± SD = 71.6 ± 8.6 kg) who took part in regular testing conducted by the Czech Tennis Association in the years 2000–2018 using the TENDIAG1 test battery. STATISTICA12 software was used for the analysis of data using a probability approach. FuzzME software was used for the analysis using a fuzzy approach. The testing of research data (the Kolmogorov-Smirnov test) demonstrated the normal distribution of the frequency of the results of individual tests in the test battery. The level of agreement of the results (the Pearson correlation coefficient) obtained by the two approaches (the discrete and the fuzzy approaches) was high both from the effect size (ES, large) and statistical significance points of view (r = 0.89, p = 0.05). The evaluation of the effect size (ES) of the differences between the mean values of the results obtained by the two approaches using Cohen's d did not demonstrate any substantively significant difference (d = 0.16). For a more detailed analysis, two subsets were selected from the original group of tennis players. They consisted of players with an overall evaluation (probability approach) of 4–5 points and 8–9 points, respectively. The level of agreement between the results in the subgroup with the evaluation of 4–5 points was low from both the effect size (ES, small) and statistical significance points of view (r = 0.15, p = 0.05), while the agreement in the subgroup with the evaluation of 8–9 points was at a medium level in terms of the effect size (ES, medium) and statistically insignificant (r = 0.47, p = 0.05). The effect size (ES) assessment of the differences between mean values of the results obtained by the two approaches did not demonstrate any effect (d = 0.12) in the group with the overall score of 4–5 points, and a large effect (d = 0.89, large) in the group with an overall score of 8–9 points.
Despite the similarity of the results obtained by the probability and fuzzy methods, it was shown that the fuzzy approach enables a finer differentiation of the level of fitness prerequisites in players at the evaluation boundaries. Since the results for individual items in the TENDIAG1 test battery indicate the level of individual performance prerequisites, the use of different weighting criteria may be considered for future evaluation using the fuzzy approach. For this purpose, the point method, a paired comparison method or the Saaty method can be considered for the identification and calculation of individual subtest weights.
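The abstract credits the fuzzy approach with finer differentiation near scoring boundaries. The sketch below contrasts a discrete cut-off score with a graded fuzzy membership for a single hypothetical test item; the cut-off and breakpoints are invented, not TENDIAG1 norms or the FuzzME model.

```python
# Discrete vs. fuzzy scoring of a single motor-test result.
# The cut-off and membership breakpoints are invented examples.
def discrete_score(value, cutoff=30.0):
    """Classic approach: 1 point if the norm is met, otherwise 0."""
    return 1 if value >= cutoff else 0

def fuzzy_score(value, low=25.0, high=35.0):
    """Linear membership: graded credit between low and high."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

for result in (24.0, 29.5, 30.5, 36.0):
    print(f"result {result:>5}: discrete={discrete_score(result)}  "
          f"fuzzy={fuzzy_score(result):.2f}")
# Players just below and just above the cut-off (29.5 vs 30.5) receive 0 vs 1
# discretely, but nearly identical fuzzy scores (0.45 vs 0.55).
```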
9

Encinas Pino, Felipe, André De Herde, Carlos Ramiro Marmolejo Duarte, and Carlos Andrés Aguirre Núñez. "Comportamiento termico de edificios de departamentos en Santiago de Chile: segmentación de nichos en el mercado inmobiliario privado a partir de las exigencias de la reglamentación térmica nacional." In International Conference Virtual City and Territory. Barcelona: Centre de Política de Sòl i Valoracions, 2009. http://dx.doi.org/10.5821/ctv.7586.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Santiago, capital de la República de Chile, se sitúa en el valle central del país en los 33º 27’ de latitud sur y 70º 42’ de longitud oeste, presentando un clima templado cálido con una estación seca prolongada de 7 a 8 meses de duración. La temperatura media anual es de 12,2°C y la oscilación térmica es considerable: hay casi 13°C de diferencia en la temperatura media entre el mes más cálido (enero) y el más frío (julio) y la diferencia entre las medias de las temperaturas máximas y mínimas para todos los meses del año fluctúan entre 10 y 16°C. De acuerdo a datos del Instituto Nacional de Estadísticas de Chile (INE), el 37,4% de los permisos de edificación de viviendas nuevas del 2006, declara que el ladrillo es su material predominante de muros, mientras que otro 36,0% está asociado con el hormigón armado. Dada la generalmente nula presencia de aislación térmica en estos sistemas constructivos y su alta inercia térmica de absorción, se podría esperar para Santiago un comportamiento térmico - en términos de confort - más bien desfavorable en invierno y favorable en verano. Sin embargo, estudios recientes presentan un escenario opuesto, dado que un gran porcentaje de usuarios encuestados acusa un alto nivel de sobrecalentamiento en sus viviendas. Esta aparente contradicción podría entenderse desde las limitaciones propias de esta base datos del INE del año 2006, puesto que por ejemplo, no refleja el impacto de la implementación de la 2° etapa de la Reglamentación Térmica nacional. Esta regulación, en vigencia desde enero de 2007, establece valores máximos de transmitancia térmica admisible para los diversos elementos de la envolvente de una vivienda. A partir del valor exigido en muros en Santiago (1,9 W/m2K), los nuevos edificios de departamentos han tenido que necesariamente incorporar al menos 10 mm de aislante térmico en su envolvente vertical, modificando su comportamiento térmico tanto en invierno como en verano.Este artículo propone la simulación del desempeño energético y condiciones de confort térmico para invierno y verano, de edificios de departamentos en Santiago para estratos socioeconómicos medios y medios altos, con el objetivo de establecer los impactos de las soluciones constructivas adoptadas en estos. Estas simulaciones numéricas se realizarán sobre tipologías de productos de vivienda ofertadas en el mercado privado durante el periodo 2006-2007, incorporando su materialidad y los datos de mercado, precios y atributos inmobiliarios, según datos de oferta del Portalinmobiliario.com. Estas tipologías de vivienda se traducirán en nichos, los cuales serán determinados a partir de la generación de grupos homogéneos de viviendas mediante a la técnica de generación de conglomerados, sobre las variables de cada producto inmobiliario. Estos grupos de viviendas se encontrarán en los mismos sub mercados inmobiliarios, evaluándose diferentes combinaciones de atributos asociados a las materialidades. Las simulaciones numéricas del comportamiento térmico en invierno y en verano, se realizan mediante el software de evaluación de desempeño energético TAS, mediante un sistema dinámico que calcula las condiciones de las viviendas en régimen horario, evaluando las condiciones de confort térmico. Se espera probar que las soluciones técnico-arquitectónicas actuales, y su interpretación de la Reglamentación Térmica vigente, generan desfavorables condiciones de confort independiente del nicho de mercado donde estén compitiendo. 
Estas conclusiones permitirán establecer desafíos y oportunidades para el mercado inmobiliario privado, tanto en términos de tecnología de la construcción, como en el diseño arquitectónico, permitiendo el desarrollo de nuevas propuestas para integrar las exigencias de la Reglamentación Térmica nacional a la realidad del mercado de vivienda privada.
Santiago de Chile (33°27’S and 70°42’W), capital city of the country, is placed in the central valley. It has a Mediterranean climate with a long dry season (between 7 and 8 months). Its annual average temperature is 12.2°C, whereas the thermal oscillation is considerable: there is almost 13°C between January and July average temperatures (hottest and coldest months, respectively) and the difference between maximum and minimum temperatures ranges between 10°C and 16°C throughout the year. According to the National Statistics Institute, 37.4% and 36.0% of new housing during 2006 were built using mainly brick masonry and concrete in their walls, respectively. In both cases, thermal insulation was not generally considered. A poor thermal performance should therefore be expected during the heating period and a favorable one in summer (low thermal insulation combined with high thermal mass). However, some recent studies show the completely opposite scenario, since an important percentage of users declare overheating in their own dwellings. This apparent contradiction could be understood as a database limitation, since these official data do not reflect the impact of the current thermal regulation, which has been in force since January 2007. Although the required standards are weak compared to the international state of the art (e.g. 1.9 W/m2K as maximum U-value for walls in Santiago), nowadays apartment buildings in Santiago are including at least 20 mm of thermal insulation in their walls to give compliance to the code. This paper proposes a series of dynamic thermal simulations of apartment buildings in Santiago, with the aim of establishing the impact of different constructive solutions on thermal behavior, both in winter and summer. These digital models are statistically based on the typologies offered in the private real estate market during the periods 2001-2002 and 2006-2007, according to a database from Portalinmobiliario.com. These were determined through a multivariate analysis of their attributes using the hierarchical clustering technique, producing homogeneous market niches. These homogeneous niches were identified in the private real estate submarkets, assessing different combinations of attributes. Thermal simulations were made using the TAS software, a dynamic simulation tool. According to the results, the implementation of the thermal regulation, intended mainly to reduce heating consumption, has produced unfavorable comfort conditions in all the studied market niches, in comparison with the business-as-usual scenario. These conclusions allow establishing challenges and opportunities for the private real estate market, in order to integrate new thermal regulations with the private market reality.
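The regulation referenced in the abstract caps the wall U-value for Santiago at 1.9 W/m2K. As a generic illustration of how such a transmittance is computed from layer resistances (the wall build-up below is invented, not one of the studied buildings), see the sketch below.

```python
# U-value (thermal transmittance) of a wall from its layer resistances:
#   U = 1 / (R_si + sum(thickness_i / conductivity_i) + R_se)
# Layer thicknesses and conductivities are invented examples.
R_SI, R_SE = 0.13, 0.04          # interior/exterior surface resistances, m2K/W

layers = [                        # (name, thickness m, conductivity W/mK)
    ("plaster",         0.015, 0.70),
    ("brick masonry",   0.140, 0.80),
    ("EPS insulation",  0.020, 0.040),
    ("exterior render", 0.020, 1.00),
]

r_total = R_SI + R_SE + sum(t / k for _, t, k in layers)
u_value = 1.0 / r_total
print(f"U = {u_value:.2f} W/m2K  ->  {'complies with' if u_value <= 1.9 else 'exceeds'} "
      "the 1.9 W/m2K limit for walls in Santiago")
```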
