Selection of scientific literature on the topic "In situ computing"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "In situ computing".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "In situ computing"

1

Kamath, Goutham, Lei Shi, Edmond Chow, Wenzhan Song, and Junjie Yang. "Decentralized multigrid for in-situ big data computing". Tsinghua Science and Technology 20, no. 6 (December 2015): 545–59. http://dx.doi.org/10.1109/tst.2015.7349927.

2

Mencagli, Gabriele, Felipe MG França, Cristiana Barbosa Bentes, Leandro Augusto Justen Marzulo, and Mauricio Lima Pilla. "Special issue on parallel applications for in-situ computing on the next-generation computing platforms". International Journal of High Performance Computing Applications 33, no. 3 (December 26, 2018): 429–30. http://dx.doi.org/10.1177/1094342018820155.

3

Troxel, Ian, Eric Grobelny, and Alan D. George. "System Management Services for High-Performance In-situ Aerospace Computing". Journal of Aerospace Computing, Information, and Communication 4, no. 2 (February 2007): 636–56. http://dx.doi.org/10.2514/1.26832.

4

Consolvo, Sunny, Beverly Harrison, Ian Smith, Mike Y. Chen, Katherine Everitt, Jon Froehlich, and James A. Landay. "Conducting In Situ Evaluations for and With Ubiquitous Computing Technologies". International Journal of Human-Computer Interaction 22, no. 1-2 (April 2007): 103–18. http://dx.doi.org/10.1080/10447310709336957.

5

Spence, Allan D., D. Alan Sawula, James R. Stone, and Yu Pin Lin. "In-Situ Measurement and Distributed Computing for Adjustable CNC Machining". Computer-Aided Design and Applications 11, no. 6 (June 10, 2014): 659–69. http://dx.doi.org/10.1080/16864360.2014.914384.

6

Dorier, Matthieu, Zhe Wang, Srinivasan Ramesh, Utkarsh Ayachit, Shane Snyder, Rob Ross, and Manish Parashar. "Towards elastic in situ analysis for high-performance computing simulations". Journal of Parallel and Distributed Computing 177 (July 2023): 106–16. http://dx.doi.org/10.1016/j.jpdc.2023.02.014.

7

Zhu, Wenkang, Hui Li, Shengnan Shen, Yingjie Wang, Yuqing Hou, Yikai Zhang, and Liwei Chen. "In-situ monitoring additive manufacturing process with AI edge computing". Optics & Laser Technology 171 (April 2024): 110423. http://dx.doi.org/10.1016/j.optlastec.2023.110423.

8

Zyarah, Abdullah M., and Dhireesha Kudithipudi. "Semi-Trained Memristive Crossbar Computing Engine with In Situ Learning Accelerator". ACM Journal on Emerging Technologies in Computing Systems 14, no. 4 (December 11, 2018): 1–16. http://dx.doi.org/10.1145/3233987.

9

Alimi, Roger, Elad Fisher, and Kanna Nahir. "In Situ Underwater Localization of Magnetic Sensors Using Natural Computing Algorithms". Sensors 23, no. 4 (February 5, 2023): 1797. http://dx.doi.org/10.3390/s23041797.

Abstract:
In the shallow water regime, several positioning methods for locating underwater magnetometers have been investigated. These studies are based on either computer simulations or downscaled laboratory experiments. The magnetic fields created at the sensors’ locations define an inverse problem in which the sensors’ precise coordinates are the unknown variables. This work addresses the issue through (1) a full-scale experimental setup that provides a thorough scientific perspective as well as real-world system validation and (2) a passive ferromagnetic source with (3) an unknown magnetic vector. The latter increases the numeric solution’s complexity. Eight magnetometers are arranged according to a 2.5 × 2.5 m grid. Six meters above, a ferromagnetic object moves according to a well-defined path and velocity. The magnetic field recorded by the network is then analyzed by two natural computing algorithms: the genetic algorithm (GA) and particle swarm optimizer (PSO). Single- and multi-objective versions are run and compared. All the methods performed very well and were able to determine the location of the sensors within a relative error of 1 to 3%. The absolute error lies between 20 and 35 cm for the close and far sensors, respectively. The multi-objective versions performed better.
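The inverse problem this entry describes can be illustrated with a toy version: recover unknown sensor coordinates from field readings produced by a source moving along a known path, using a particle swarm optimizer. This is a minimal sketch in the spirit of the paper, not the authors' code; the 1/r² "field" model, the path, and all parameters are illustrative assumptions.

```python
import random

random.seed(0)

TRUE_POS = (1.5, -0.8)  # unknown sensor coordinates to recover

def field_at(pos, src):
    # toy 1/r^2 "magnetic" reading of a source at src, seen from pos
    dx, dy = pos[0] - src[0], pos[1] - src[1]
    return 1.0 / (dx * dx + dy * dy + 1e-9)

# a moving ferromagnetic source sampled along a known path above the sensor
PATH = [(t * 0.5, 2.0) for t in range(10)]
MEASURED = [field_at(TRUE_POS, s) for s in PATH]

def cost(pos):
    # squared misfit between measured and modeled readings
    return sum((field_at(pos, s) - m) ** 2 for s, m in zip(PATH, MEASURED))

def pso(n=30, iters=200):
    # standard PSO: inertia 0.7, cognitive/social weights 1.5
    parts = [[random.uniform(-3.0, 5.0), random.uniform(-3.0, 5.0)] for _ in range(n)]
    vels = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in parts]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(2):
                vels[i][d] = (0.7 * vels[i][d]
                              + 1.5 * random.random() * (pbest[i][d] - p[d])
                              + 1.5 * random.random() * (gbest[d] - p[d]))
                p[d] += vels[i][d]
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
                if cost(p) < cost(gbest):
                    gbest = p[:]
    return gbest

est = pso()
```

Note that, as in the paper's setup, the geometry admits a mirror solution (reflection across the source plane), so the misfit reaches zero at two symmetric points; the swarm converges to one of them.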
10

Aupy, Guillaume, Brice Goglin, Valentin Honoré, and Bruno Raffin. "Modeling high-throughput applications for in situ analytics". International Journal of High Performance Computing Applications 33, no. 6 (May 22, 2019): 1185–200. http://dx.doi.org/10.1177/1094342019847263.

Abstract:
With the goal of performing exascale computing, input/output (I/O) management becomes more and more critical to maintaining system performance. While the computing capacities of machines keep increasing, the I/O capabilities of systems do not increase as fast. We are able to generate more data but unable to manage them efficiently due to the variability of I/O performance, so limiting requests to the parallel file system (PFS) becomes necessary. To address this issue, new strategies such as online in situ analysis are being developed. The idea is to overcome the limitations of basic postmortem data analysis, where the data have to be stored on the PFS first and processed later. Several software solutions allow users to dedicate nodes specifically to data analysis and to distribute the computation tasks over different sets of nodes. Thus far, they rely on manual resource partitioning and allocation of tasks (simulations, analysis) by the user. In this work, we propose a memory-constrained model for in situ analysis. We use this model to provide different scheduling policies that determine the number of resources that should be dedicated to analysis functions and schedule these functions efficiently. We evaluate them and show the importance of considering memory constraints in the model. Finally, we discuss the different challenges that have to be addressed to build automatic tools for in situ analytics.
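The core question above, which analysis functions to run in situ under a memory budget, can be sketched as a tiny selection problem. This greedy benefit-per-megabyte heuristic is only an illustration of the idea; the function names, memory sizes, and benefit scores are made up, and the paper's actual model and policies are more sophisticated.

```python
def schedule(analyses, mem_budget):
    """Greedily pick analyses by benefit-per-MB until the memory budget is spent."""
    chosen, used = [], 0
    # sort by benefit density (benefit score per MB), best first
    for name, mem, benefit in sorted(analyses, key=lambda a: a[2] / a[1], reverse=True):
        if used + mem <= mem_budget:
            chosen.append(name)
            used += mem
    return chosen, used

# (name, memory footprint in MB, benefit score) -- illustrative values
analyses = [
    ("histogram", 200, 5.0),
    ("isosurface", 800, 9.0),
    ("statistics", 100, 3.0),
    ("compression", 500, 6.0),
]
chosen, used = schedule(analyses, mem_budget=1000)
```

With a 1000 MB budget, the high-density analyses are kept and the memory-hungry isosurface is deferred (e.g. to in transit nodes).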

Dissertations on the topic "In situ computing"

1

Ranisavljević, Elisabeth. "Cloud computing appliqué au traitement multimodal d’images in situ pour l’analyse des dynamiques environnementales". Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20128/document.

Abstract:
Analyzing landscapes, their dynamics and environmental processes requires regular data from the field, in particular for glacier mass balance in Spitsbergen and high mountain areas. Because of the poor weather conditions common at polar latitudes, including heavy cloud cover, and because of their cost, daily satellite images are not always available. Fast events such as snowmelt or fresh snow cover are therefore missed by satellite-based studies, whose sampling rate is too low to observe them. We complement satellite imagery with a set of autonomous, automated ground-based cameras that take three photos per day. These photos form a large image database, and each image requires several processing steps to extract the desired information (geometric corrections, handling of atmospheric disturbances, classification, etc.). Only computing infrastructure can store and manage all this information. Cloud computing offers IT resources as services (computing power, storage, applications, etc.). Storing this mass of geographic data could in itself justify using the cloud, but beyond storage the cloud offers easy access, a scalable architecture and modularity in the available services. For the analysis of in situ photos, cloud computing makes it possible to build an automatic tool that processes the whole dataset despite the variety of disturbances and the volume of data. By decomposing the image processing into several tasks, implemented as web services, the composition of these services lets us adapt the processing to the conditions of each image.
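The service-composition idea above, assembling a per-image pipeline from small processing steps depending on each image's conditions, can be sketched in a few lines. The step names and the condition logic are illustrative assumptions, not the thesis's actual services.

```python
# Each "service" is a small processing step; here they just tag the image record.
def georectify(img):    return img + ["georectified"]
def dehaze(img):        return img + ["dehazed"]
def classify_snow(img): return img + ["classified"]

def build_pipeline(conditions):
    """Compose a pipeline adapted to this image's observed conditions."""
    steps = [georectify]
    if "haze" in conditions:      # only degraded images need atmospheric correction
        steps.append(dehaze)
    steps.append(classify_snow)
    return steps

def run(img, conditions):
    for step in build_pipeline(conditions):
        img = step(img)
    return img

out = run(["raw"], {"haze"})
```

A clear image skips the dehazing step, while a hazy one gets the full chain, which is the adaptivity the abstract attributes to composing web services per datum.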
2

Adhinarayanan, Vignesh. "Models and Techniques for Green High-Performance Computing". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/98660.

Abstract:
High-performance computing (HPC) systems have become power limited. For instance, the U.S. Department of Energy set a power envelope of 20 MW in 2008 for the first exascale supercomputer, now expected to arrive in 2021–22. Toward this end, we seek to improve the greenness of HPC systems by improving their performance per watt at the allocated power budget. In this dissertation, we develop a series of models and techniques to manage power at micro-, meso-, and macro-levels of the system hierarchy, specifically addressing data movement and heterogeneity. We target the chip interconnect at the micro-level, heterogeneous nodes at the meso-level, and a supercomputing cluster at the macro-level. Overall, our goal is to improve the greenness of HPC systems by intelligently managing power. The first part of this dissertation focuses on measurement and modeling problems for power. First, we study how to infer chip-interconnect power by observing the system-wide power consumption. Our proposal is to design a novel micro-benchmarking methodology based on data-movement distance by which we can properly isolate the chip interconnect and measure its power. Next, we study how to develop software power meters to monitor a GPU's power consumption at runtime. Our proposal is to adapt performance counter-based models for their use at runtime via a combination of heuristics, statistical techniques, and application-specific knowledge. In the second part of this dissertation, we focus on managing power. First, we propose to reduce the chip-interconnect power by proactively managing its dynamic voltage and frequency (DVFS) state. Toward this end, we develop a novel phase predictor that uses approximate pattern matching to forecast future requirements and, in turn, proactively manage power. Second, we study the problem of applying a power cap to a heterogeneous node. Our proposal proactively manages the GPU power using phase prediction and a DVFS power model but reactively manages the CPU.
The resulting hybrid approach can take advantage of the differences in the capabilities of the two devices. Third, we study how in-situ techniques can be applied to improve the greenness of HPC clusters. Overall, in our dissertation, we demonstrate that it is possible to infer power consumption of real hardware components without directly measuring them, using the chip interconnect and GPU as examples. We also demonstrate that it is possible to build models of sufficient accuracy and apply them for intelligently managing power at many levels of the system hierarchy.
Past research in green high-performance computing (HPC) mostly focused on managing the power consumed by general-purpose processors, known as central processing units (CPUs) and to a lesser extent, memory. In this dissertation, we study two increasingly important components: interconnects (predominantly focused on those inside a chip, but not limited to them) and graphics processing units (GPUs). Our contributions in this dissertation include a set of innovative measurement techniques to estimate the power consumed by the target components, statistical and analytical approaches to develop power models and their optimizations, and algorithms to manage power statically and at runtime. Experimental results show that it is possible to build models of sufficient accuracy and apply them for intelligently managing power on multiple levels of the system hierarchy: chip interconnect at the micro-level, heterogeneous nodes at the meso-level, and a supercomputing cluster at the macro-level.
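The DVFS-based power management the dissertation describes rests on the textbook dynamic-power relation P ≈ C·V²·f: lowering the voltage/frequency state lowers power. The sketch below picks the fastest state that fits a power cap. The capacitance value and the state table are illustrative, not measured values from the dissertation.

```python
C = 2.0e-9  # effective switched capacitance in farads (illustrative)

# (frequency in Hz, voltage in V) DVFS states, ordered fastest first
STATES = [(2.0e9, 1.10), (1.5e9, 1.00), (1.0e9, 0.90), (0.5e9, 0.80)]

def dynamic_power(f, v):
    # classic CMOS dynamic-power model: P = C * V^2 * f
    return C * v * v * f

def pick_state(power_cap_watts):
    """Return the highest-frequency state whose modeled power fits the cap."""
    for f, v in STATES:
        if dynamic_power(f, v) <= power_cap_watts:
            return (f, v)
    return STATES[-1]  # fall back to the slowest state

state = pick_state(3.5)
```

With a 3.5 W cap the top state (≈4.84 W under this model) is rejected and the 1.5 GHz state (≈3.0 W) is chosen; a proactive manager like the one in the dissertation would additionally predict the upcoming phase before choosing.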
3

Li, Shaomeng. "Wavelet Compression for Visualization and Analysis on High Performance Computers". Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23905.

Abstract:
As HPC systems move towards exascale, the discrepancy between computational power and I/O transfer rate is only growing larger. Lossy in situ compression is a promising solution to address this gap, since it alleviates I/O constraints while still enabling traditional post hoc analysis. This dissertation explores the viability of such a solution with respect to a specific kind of compressor — wavelets. We especially examine three aspects of concern regarding the viability of wavelets: 1) information loss after compression, 2) its capability to fit within in situ constraints, and 3) the compressor’s capability to adapt to HPC architectural changes. Findings from this dissertation inform in situ use of wavelet compressors on HPC systems, demonstrate its viabilities, and argue that its viability will only increase as exascale computing becomes a reality.
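The lossy wavelet compression examined in this thesis follows a transform-threshold-invert pattern. Below is a minimal one-level Haar sketch in pure Python; the data and threshold are illustrative, and production compressors use deeper decompositions and proper coefficient encoding.

```python
def haar_forward(x):
    # one level of the orthonormal Haar transform: averages + details
    s = 2 ** 0.5
    avg = [(a + b) / s for a, b in zip(x[::2], x[1::2])]
    det = [(a - b) / s for a, b in zip(x[::2], x[1::2])]
    return avg, det

def haar_inverse(avg, det):
    s = 2 ** 0.5
    out = []
    for a, d in zip(avg, det):
        out += [(a + d) / s, (a - d) / s]
    return out

def compress(x, threshold):
    # lossy step: zero out small detail coefficients
    avg, det = haar_forward(x)
    det = [d if abs(d) > threshold else 0.0 for d in det]
    return avg, det

data = [4.0, 4.1, 8.0, 8.2, 1.0, 0.9, 5.0, 5.3]
avg, det = compress(data, threshold=0.5)
recon = haar_inverse(avg, det)
```

Here every detail coefficient falls below the threshold, so the reconstruction keeps only pairwise averages, small information loss in exchange for half the coefficients, which is exactly the I/O-vs-accuracy trade-off the thesis evaluates.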
4

Alomar Barceló, Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems". Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.

Abstract:
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. On the other hand, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics and sensor networks. However, the hardware realization of ANNs with large neuron counts, such as in RC, is a challenging task due to the large resource requirements of the involved operations. Despite the potential benefits of digital hardware circuits for RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the diverse alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
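The reservoir model whose hardware mapping the thesis studies boils down to a recurrent state update, x(t+1) = tanh(W·x(t) + W_in·u(t)), with fixed random weights and only a linear readout trained. A minimal pure-Python sketch of that update (sizes, seed, and scaling are illustrative; real reservoirs use hundreds of neurons):

```python
import math
import random

random.seed(1)
N = 20  # reservoir neurons (illustrative size)

# fixed random recurrent weights, scaled down to keep the dynamics stable
W = [[random.uniform(-0.5, 0.5) / N for _ in range(N)] for _ in range(N)]
W_in = [random.uniform(-1.0, 1.0) for _ in range(N)]

def step(x, u):
    """x(t+1) = tanh(W x(t) + W_in u(t)) -- the core recurrent update."""
    return [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + W_in[i] * u)
            for i in range(N)]

x = [0.0] * N
for u in [0.5, -0.2, 0.8, 0.1]:  # a short input time series
    x = step(x, u)
```

The per-neuron multiply-accumulate followed by a nonlinearity is exactly the operation whose resource cost dominates hardware implementations, which is why the thesis explores resource-limited alternatives to the direct design.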
5

Santos Rodríguez, Patrícia. "Computing-Based Testing: conceptual model, implementations and experiments extending IMS QTI". Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/69962.

Abstract:
The use of objective tests in Technology Enhanced Learning (TEL) is based on the application of computers to support automatic assessment. Current research in this domain is mainly focused on the design of new question-items, with IMS Question and Test Interoperability (QTI) as the recognized de facto standard. This thesis claims that the domain can be extended with the design of advanced test-scenarios that integrate new interactive contexts for the visualization of question-items and tests, and that consider different types of devices and technologies enabling diverse activity settings. In this context, the dissertation proposes to term the domain Computing-Based Testing (CBT) instead of Computer-Based Testing, because it better captures the new technological possibilities for supporting testing. Advanced CBT scenarios can increase teachers' choices in the design of more appropriate tests for their subject areas, enabling the assessment of higher-order skills. With the aim of modelling an advanced CBT domain that extends the current possibilities of QTI and related work, this thesis provides a set of contributions around three objectives. The first objective deals with proposing a Conceptual Model for the CBT domain considering three main dimensions: the Question-item, the Test and the Activity. To tackle this objective, the thesis presents, on the one hand, a framework to assist in the categorization and design of advanced CBT scenarios and, on the other hand, two models that suggest elements for technologically representing the Test and Question-item dimensions. The models are platform-independent models (PIMs) that extend QTI in order to support advanced CBT. Besides, the use of patterns is proposed to complement the modelling of the domain. The second objective seeks to show the relevance, value and applicability of the CBT Conceptual Model through exemplary challenging scenarios and case studies in authentic settings.
To this end, the dissertation evaluates the design and implementation of a set of CBT systems and experiments. All the experiments use the proposed CBT Conceptual Model to design an advanced CBT scenario. For each case, the CBT-PIMs serve as the basis for developing a particular CBT-PSM and system. The evaluation results show that the implementations foster educational benefits, enable the assessment of higher-order skills and enhance the students' motivation. Finally, the third objective is devoted to proposing extension paths for QTI. The collection of models proposed in the thesis suggests different extension directions for QTI so as to enable the implementation of advanced questions, tests and activities. The proposed systems and scenarios also represent reference implementations and good practices for the proposed extension paths.
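The automatic assessment that QTI standardizes, and this thesis extends, pairs a question-item with a machine-applicable scoring rule. The sketch below models a single-choice item with dichotomous scoring; the class and field names are hypothetical illustrations, not part of the IMS QTI data model.

```python
from dataclasses import dataclass

@dataclass
class ChoiceItem:
    prompt: str
    choices: list
    correct: int          # index of the right choice
    max_score: float = 1.0

    def score(self, response: int) -> float:
        # simple dichotomous scoring; QTI response processing also
        # supports partial credit and more elaborate rules
        return self.max_score if response == self.correct else 0.0

item = ChoiceItem("Which paradigm analyzes data where it is produced?",
                  ["post hoc", "in situ", "offline"], correct=1)
s = item.score(1)
```

Advanced CBT scenarios in the thesis keep this item/scoring separation but embed the items in richer interactive contexts and devices.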
6

Dirand, Estelle. "Développement d’un système in situ à base de tâches pour un code de dynamique moléculaire classique adapté aux machines exaflopiques". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM065/document.

Abstract:
The exascale era will widen the gap between the rate at which data are generated and the time needed to manage and analyze them in a post-processing fashion, dramatically increasing the end-to-end time to scientific discovery and calling for a shift toward new data processing methods. The in situ paradigm proposes to analyze data while they are still resident in the supercomputer's memory, reducing the need for data storage. Several techniques already exist: executing simulation and analytics on the same nodes (in situ), using dedicated nodes (in transit), or combining the two approaches (hybrid). Most in situ techniques target simulations that cannot fully exploit the ever-growing number of cores per processor, but they are not designed for the emerging manycore processors.

Task-based programming models, on the other hand, are expected to become a standard for these architectures, yet few task-based in situ techniques have been developed so far. This thesis studies the design and integration of a novel task-based in situ framework inside a task-based molecular dynamics code designed for exascale supercomputers. We take advantage of the composability properties of the task-based programming model to implement the TINS hybrid framework. Analytics workflows are expressed as graphs of tasks that can in turn generate child tasks to be executed in transit or interleaved with simulation tasks in situ. The in situ execution relies on an innovative dynamic helper-core strategy that uses work stealing to finely interleave simulation and analytics tasks inside a compute node with low overhead on the simulation execution time.

TINS uses the Intel® TBB work-stealing scheduler and is integrated into ExaStamp, a task-based molecular dynamics code. Various experiments have shown that TINS is up to 40% faster than state-of-the-art in situ libraries. Molecular dynamics simulations of up to 2 billion particles on up to 14,336 cores have shown that TINS can execute complex analytics workflows at high frequency with an overhead smaller than 10%.
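The helper-core idea in this abstract, analytics tasks interleaved with simulation work inside one compute node, can be illustrated with a toy sketch. Plain Python threads stand in for the TBB work-stealing scheduler, and every function name below is hypothetical; this is not the TINS API:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_step(step):
    # Toy "simulation": produce particle positions for this step.
    return [(step + i) % 100 for i in range(1000)]

def analyze(data):
    # Toy "analytics task": a 10-bin histogram of positions.
    hist = [0] * 10
    for x in data:
        hist[x // 10] += 1
    return hist

def run(n_steps, helper_threads=2):
    # Helper threads play the role of dynamic helper cores: analytics for
    # step N runs concurrently while the simulation advances to step N+1.
    with ThreadPoolExecutor(max_workers=helper_threads) as pool:
        futures = []
        for step in range(n_steps):
            data = simulate_step(step)
            futures.append(pool.submit(analyze, data))  # in situ, asynchronous
        return [f.result() for f in futures]

hists = run(4)
```

In a real manycore setting the scheduler steals analytics tasks onto idle cores; here the thread pool merely illustrates the overlap of simulation and analysis.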
APA, Harvard, Vancouver, ISO, and other citation styles
7

Carlson, Darren Vaughn. "Ocean. Towards Web-scale context-aware computing. A community-centric, wide-area approach for in-situ, context-mediated component discovery and composition". Lübeck: Zentrale Hochschulbibliothek Lübeck, 2010. http://d-nb.info/1001862880/34.

The full text of the source
8

Dutta, Soumya. "In Situ Summarization and Visual Exploration of Large-scale Simulation Data Sets". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524070976058567.

9

Lemon, Alexander Michael. "A Shared-Memory Coupled Architecture to Leverage Big Data Frameworks in Prototyping and In-Situ Analytics for Data Intensive Scientific Workflows". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7545.

Annotation:
There is a pressing need for creative new data analysis methods which can sift through scientific simulation data and produce meaningful results. The types of analyses and the amount of data handled by current methods are still quite restricted, and new methods could provide scientists with a large productivity boost. New methods could be simple to develop in big data processing systems such as Apache Spark, which is designed to process many input files in parallel while treating them logically as one large dataset. This distributed model, combined with the large number of analysis libraries created for the platform, makes Spark ideal for processing simulation output.

Unfortunately, the filesystem becomes a major bottleneck in any workflow that uses Spark in such a fashion. Faster transports are not intrinsically supported by Spark, and its interface almost denies the possibility of maintainable third-party extensions. By leveraging the semantics of Scala and Spark's recent scheduler upgrades, we force co-location of Spark executors with simulation processes and enable fast local inter-process communication through shared memory. This provides a path for bulk data transfer into the Java Virtual Machine, removing the current Spark ingestion bottleneck.

Besides showing that our system makes this transfer feasible, we also demonstrate a proof-of-concept system integrating traditional HPC codes with bleeding-edge analytics libraries. This provides scientists with guidance on how to apply our libraries to gain a new and powerful tool for developing new analysis techniques in large scientific simulation pipelines.
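The shared-memory coupling described above is built on Scala and the JVM; its core mechanism can be mimicked in miniature with Python's `multiprocessing.shared_memory`. The "simulation side" writes into a named segment and the "analytics side" attaches to it by name, so the bulk data never passes through the filesystem. This is only an illustrative sketch of the mechanism, not the thesis's actual system:

```python
from multiprocessing import shared_memory
import array

N = 4
payload = [1.0, 2.0, 3.0, 4.0]

# "Simulation side": allocate a named shared segment and write results into it.
sim = shared_memory.SharedMemory(create=True, size=N * 8)
sim.buf[: N * 8] = array.array("d", payload).tobytes()

# "Analytics side": attach to the same segment by name -- no copy through a file.
ana = shared_memory.SharedMemory(name=sim.name)
values = array.array("d", bytes(ana.buf[: N * 8]))
total = sum(values)  # stand-in for a real analytics job reading the buffer

ana.close()
sim.close()
sim.unlink()
```

In the thesis's setting the attaching side is a co-located Spark executor rather than a Python process, but the zero-copy handoff through a named segment is the same idea.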
10

Carlson, Darren Vaughn [author]. "Ocean. Towards Web-scale context-aware computing : A community-centric, wide-area approach for in-situ, context-mediated component discovery and composition / Darren Vaughn Carlson". Lübeck: Zentrale Hochschulbibliothek Lübeck, 2010. http://d-nb.info/1001862880/34.


Books on the topic "In situ computing"

1

Computing for site managers: Database techniques. Oxford [England]: Blackwell Science, 1996.

Find the full text of the source
2

Sheu, Phillip C.-Y., ed. Semantic computing. Hoboken, NJ: John Wiley & Sons, 2010.

3

Wadman, Barry S., ed. Using Microsoft Site Server. Indianapolis, IN: Que, 1997.

4

Apostolopoulos, Nick, ed. Professional Site Server 3.0. Birmingham, UK: Wrox Press, 1999.

5

Huckaby, Tim, ed. Beginning Site Server 3.0. Birmingham: Wrox Press, 2000.

6

Sequeira, Anthony. SQL server on site. Scottsdale, AZ: Coriolis Group Books, 2001.

7

Li, Qing, and Timothy K. Shih, eds. Ubiquitous multimedia computing. Boca Raton, FL: Chapman & Hall/CRC, 2009.

8

AmirFaiz, Farhad. Official Microsoft site server 2.0 enterprise edition toolkit. Redmond, WA: Microsoft Press, 1998.

9

Huckaby, Tim, ed. Beginning Site Server 3.0. Birmingham, UK; Chicago, IL: Wrox Press, 2000.

10

Turlington, Shannon R. Microsoft Exchange Server 5.5 on site: Planning, deployment, configuration, troubleshooting. Albany, NY: Coriolis Group Books, 1998.


Book chapters on the topic "In situ computing"

1

Kavehei, Omid, Efstratios Skafidas, and Kamran Eshraghian. "Memristive In Situ Computing". In Handbook of Memristor Networks, 1005–20. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-76375-0_35.

2

Kavehei, Omid, Efstratios Skafidas, and Kamran Eshraghian. "Memristive in Situ Computing". In Memristor Networks, 413–28. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-02630-5_19.

3

Antoine, Martine, Brigitte Sigal, Fabrice Harms, Anne Latrive, Adriano Burcheri, Osnath Assayag, Bertrand de Poly, Sylvain Gigan, and A. Claude Boccara. "Intra-Operative Ex-Situ and In-Situ Optical Biopsy Using Light-CT". In Advances in Intelligent and Soft Computing, 77–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-25547-2_7.

4

Zhang, Jian, and Rui Jin. "In-Situ Merge Sort Using Hand-Shaking Algorithm". In Advances in Intelligent Systems and Computing, 228–33. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4572-0_33.

5

Martin, Betty, P. E. Shankaranarayanan, Vimala Juliet, and A. Gopal. "Identifying Sound of RPW In Situ from External Sources". In Advances in Intelligent Systems and Computing, 681–91. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-2126-5_73.

6

Zimmerer, Christoph, Thomas Nelius, and Sven Matthiesen. "Using Eye Tracking to Measure Cognitive Load of Designers in Situ". In Design Computing and Cognition'22, 481–95. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-20418-0_29.

7

Chandrakanth, S. Anil, Thanga Raj Chelliah, S. P. Srivastava, and Radha Thangaraj. "In-situ Efficiency Determination of Induction Motor through Parameter Estimation". In Advances in Intelligent and Soft Computing, 689–700. India: Springer India, 2012. http://dx.doi.org/10.1007/978-81-322-0487-9_66.

8

Cai, Haipeng, Jian Chen, Alexander P. Auchus, Stephen Correia, and David H. Laidlaw. "InShape: In-Situ Shape-Based Interactive Multiple-View Exploration of Diffusion MRI Visualizations". In Advances in Visual Computing, 706–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33191-6_70.

9

Parashkevova, Ludmila, and Pedro Egizabal. "Modelling of Light Mg and Al Based Alloys as 'in situ' Composites". In Advanced Computing in Industrial Mathematics, 145–57. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-65530-7_14.

10

Rogers, Yvonne, Kay Connelly, Lenore Tedesco, William Hazlewood, Andrew Kurtz, Robert E. Hall, Josh Hursey, and Tammy Toscos. "Why It's Worth the Hassle: The Value of In-Situ Studies When Designing Ubicomp". In UbiComp 2007: Ubiquitous Computing, 336–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74853-3_20.


Conference papers on the topic "In situ computing"

1

Kim, Jinoh, Hasan Abbasi, Luis Chacon, Ciprian Docan, Scott Klasky, Qing Liu, Norbert Podhorszki, Arie Shoshani, and Kesheng Wu. "Parallel in situ indexing for data-intensive computing". In 2011 IEEE Symposium on Large Data Analysis and Visualization (LDAV). IEEE, 2011. http://dx.doi.org/10.1109/ldav.2011.6092319.

2

Konovalov, Dmitry A., Simindokht Jahangard, and Lin Schwarzkopf. "In Situ Cane Toad Recognition". In 2018 Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2018. http://dx.doi.org/10.1109/dicta.2018.8615780.

3

Kline, Jenna, Christopher Stewart, Tanya Berger-Wolf, Michelle Ramirez, Samuel Stevens, Reshma Ramesh Babu, Namrata Banerji et al. "A Framework for Autonomic Computing for In Situ Imageomics". In 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). IEEE, 2023. http://dx.doi.org/10.1109/acsos58161.2023.00018.

4

Goncalves, Jorge, Hannu Kukka, Iván Sánchez, and Vassilis Kostakos. "Crowdsourcing Queue Estimations in Situ". In CSCW '16: Computer Supported Cooperative Work and Social Computing. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2818048.2819997.

5

Kusy, Brano, Jiajun Liu, Aninda Saha, Yang Li, Ross Marchant, Jeremy Oorloff, Lachlan Tychsen-Smith et al. "In-situ data curation". In ACM MobiCom '22: The 28th Annual International Conference on Mobile Computing and Networking. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3495243.3558758.

6

Kress, James, Scott Klasky, Norbert Podhorszki, Jong Choi, Hank Childs, and David Pugmire. "Loosely Coupled In Situ Visualization". In SC15: The International Conference for High Performance Computing, Networking, Storage and Analysis. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2828612.2828623.

7

Hubenschmid, Sebastian, Jonathan Wieland, Daniel Immanuel Fink, Andrea Batch, Johannes Zagermann, Niklas Elmqvist, and Harald Reiterer. "ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies". In CHI '22: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3491102.3517550.

8

Mauldin, Jeffrey A., Thomas J. Otahal, Anthony M. Agelastos, and Stefan P. Domino. "In-situ visualization for the large scale computing initiative milestone". In ISAV'19: In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3364228.3364229.

9

He, Yintao, Ying Wang, Cheng Liu, Huawei Li, and Xiaowei Li. "TARe: Task-Adaptive in-situ ReRAM Computing for Graph Learning". In 2021 58th ACM/IEEE Design Automation Conference (DAC). IEEE, 2021. http://dx.doi.org/10.1109/dac18074.2021.9586193.

10

Li, Huize, Zhaoying Li, Zhenyu Bai, and Tulika Mitra. "ASADI: Accelerating Sparse Attention Using Diagonal-based In-Situ Computing". In 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2024. http://dx.doi.org/10.1109/hpca57654.2024.00065.


Reports by organizations on the topic "In situ computing"

1

Choudhary, Alok, Ankit Agrawal, and Wei-Keng Liao. Scalable, In-situ Data Clustering Data Analysis for Extreme Scale Scientific Computing. Office of Scientific and Technical Information (OSTI), July 2021. http://dx.doi.org/10.2172/1896359.

2

Yazzie, Natanii. In-situ TEM EELS analysis of memristive thin films for neuromorphic computing. Office of Scientific and Technical Information (OSTI), May 2024. http://dx.doi.org/10.2172/2372652.

3

Bauer, Andrew, James Forsythe, Jayanarayanan Sitaraman, Andrew Wissink, Buvana Jayaraman, and Robert Haehnel. In situ analysis and visualization to enable better workflows with CREATE-AV™ Helios. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/40846.

Annotation:
The CREATE-AV™ Helios CFD simulation code has been used to accurately predict rotorcraft performance under a variety of flight conditions. The Helios package contains a suite of tools covering almost the entire set of functionality needed for a variety of workflows, including tools customized to properly specify many in situ analysis and visualization capabilities appropriate for rotorcraft analysis. In situ processing computes analysis and visualization information during a simulation run, before data are saved to disk; it has been referred to by a variety of terms, including co-processing, co-visualization, and coviz. In this paper we describe the customization of the pre-processing GUI and the corresponding development of the Helios solver code base to effectively implement in situ analysis and visualization, reducing file I/O and speeding up workflows for CFD analysts. We showcase how the workflow enables the wide variety of Helios users to work effectively in post-processing tools they are already familiar with, as opposed to forcing them to learn new tools in order to post-process the in situ data extracts produced by Helios. These data extracts include various sources of information customized to Helios, such as knowledge about the near- and off-body grids, internal surface extracts with patch information, and volumetric extracts meant for fast post-processing of data. Additionally, we demonstrate how in situ processing can be used by workflow-automation tools to help convey information to the user that would be much more difficult to obtain when using full data dumps.
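The core idea in this annotation, writing small in situ extracts each time step instead of dumping the full solution to disk, can be sketched generically. The field and statistics below are hypothetical stand-ins; nothing here is Helios-specific:

```python
def in_situ_extract(field):
    # Reduce a full solver field to a tiny statistical extract: a handful
    # of numbers per step is written instead of the whole array.
    return {"n": len(field), "min": min(field), "max": max(field),
            "mean": sum(field) / len(field)}

def run(steps, cells=10000):
    extracts = []
    for s in range(steps):
        # Toy stand-in for one solver time step producing a field of values.
        field = [(s + 1) * (i % 7) * 0.5 for i in range(cells)]
        extracts.append(in_situ_extract(field))  # computed in situ, no full dump
    return extracts

extracts = run(3)
```

With `cells` doubles per step, a full dump costs roughly `cells * 8` bytes per step, while the extract costs a few dozen bytes; this ratio is what makes in situ extracts attractive at scale.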
4

Boxberger, L. M., L. W. Amiot, M. E. Bretscher, D. E. Engert, F. M. Moszur, C. J. Mueller, D. E. O'Brien, C. G. Schlesselman, and L. J. Troyer. ANL statement of site strategy for computing workstations. Edited by K. R. Fenske. Office of Scientific and Technical Information (OSTI), November 1991. http://dx.doi.org/10.2172/6253682.

5

Opitz, L., L. Boxberger, and R. Izzo. ANL statement of site strategy for computing workstations. Office of Scientific and Technical Information (OSTI), September 1989. http://dx.doi.org/10.2172/7161254.

6

Horak, Karl Emanuel, Sharon Marie DeLand, and Dianna Sue Blair. The feasibility of mobile computing for on-site inspection. Office of Scientific and Technical Information (OSTI), September 2014. http://dx.doi.org/10.2172/1162192.

7

Vesselinov, Velimir V., and Danny Katzman. High-performance computing for model-driven decision support related to the LANL Chromium Site. Office of Scientific and Technical Information (OSTI), May 2013. http://dx.doi.org/10.2172/1078375.

8

Corum, Zachary, Ethan Cheng, Stanford Gibson, and Travis Dahl. Optimization of reach-scale gravel nourishment on the Green River below Howard Hanson Dam, King County, Washington. Engineer Research and Development Center (U.S.), April 2022. http://dx.doi.org/10.21079/11681/43887.

Annotation:
The US Army Corps of Engineers, Seattle District, nourishes gravel downstream of Howard Hanson Dam (HHD) on the Green River in Washington State. The study team developed numerical models to support the ongoing salmonid habitat improvement mission downstream of HHD. Recent advancements in computing and numerical modeling software make long-term simulations in steep, gravel, cobble, and boulder river environments cost effective. The team calibrated mobile-bed, sediment-transport models for the pre-dam and post-dam periods. The modeling explored geomorphic responses to flow and sediment regime changes associated with HHD construction and operation. The team found that pre-dam conditions were significantly more dynamic than post-dam conditions and may have had lower spawning habitat quality in the project vicinity. The team applied the Bank Stability and Toe Erosion Model to the site and then calibrated to the post-dam gravel augmentation period. The team implemented a new hiding routine in HEC-RAS that improved the simulated grain size trends but underestimated coarse sediment transport. Models without the hiding function overestimated grain size but matched bed elevations and mass flux very well. Decade-long simulations of four future gravel nourishment conditions showed continued sediment storage in the reach. The storage rate was sensitive to nourishment mass and grain size.
9

Scribner, David R., and Patrick H. Wiley. The Development of a Virtual McKenna Military Operations in Urban Terrain (MOUT) Site for Command, Control, Communication, Computing, Intelligence, Surveillance, and Reconnaissance (C4ISR) Studies. Fort Belvoir, VA: Defense Technical Information Center, June 2007. http://dx.doi.org/10.21236/ada468507.

10

Kottke, Albert, Norman Abrahamson, David Boore, Yousef Bozorgnia, Christine Goulet, Justin Hollenback, Tadahiro Kishida et al. Selection of Random Vibration Procedures for the NGA-East Project. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, November 2018. http://dx.doi.org/10.55461/ltmu9309.

Annotation:
Pseudo-spectral acceleration (PSA) is the most commonly used intensity measure in earthquake engineering as it serves as a simple approximate predictor of structural response for many types of systems. Therefore, most ground-motion models (GMMs, aka GMPEs) provide median and standard deviation PSA using a suite of input parameters characterizing the source, path, and site effects. Unfortunately, PSA is a complex metric: the PSA for a single oscillator frequency depends on the Fourier amplitudes across a range of frequencies. The Fourier amplitude spectrum (FAS) is an appealing alternative because its simple linear superposition allows effects to be modeled as transfer functions. For this reason, most seismological models, i.e., the source spectrum, are developed for the FAS. Using FAS in conjunction with random-vibration theory (RVT) allows GMM developers to superimpose seismological models directly, computing PSA only at the end of the process. The FAS-RVT-PSA approach was first used by the Hollenback et al. team in their development of GMMs for the Next Generation Attenuation Relationships for Central & Eastern North-America (NGA-East) project (see Chapter 11 of PEER Report No. 2015/04). As part of the NGA-East project to support the Hollenback et al. team and similar efforts, the current report summarizes a systematic processing algorithm for FAS that minimizes computational requirements and bias that results from the RVT approximation for median GMM development.
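The FAS-to-PSA chain via random-vibration theory that this annotation describes can be sketched schematically. The sketch below uses a single-degree-of-freedom transfer function, Parseval-based spectral moments, and the classic Davenport peak-factor approximation; the toy input spectrum and all parameter choices are ours, and actual GMM work such as NGA-East uses considerably more refined RVT procedures:

```python
import math

def trapezoid(y, x):
    # Simple trapezoidal integration over sample points x.
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

def rvt_psa(freqs, fas, f_osc, damping=0.05, duration=10.0):
    """Schematic pseudo-spectral acceleration from a Fourier amplitude spectrum."""
    # Squared SDOF transfer function for pseudo-acceleration response.
    h2 = [f_osc**4 / ((f_osc**2 - f**2) ** 2 + (2 * damping * f_osc * f) ** 2)
          for f in freqs]
    a2 = [h * a * a for h, a in zip(h2, fas)]   # squared response FAS
    m0 = 2.0 * trapezoid(a2, freqs)             # spectral moments of the response
    m2 = 2.0 * trapezoid([(2 * math.pi * f) ** 2 * v
                          for f, v in zip(freqs, a2)], freqs)
    a_rms = math.sqrt(m0 / duration)            # Parseval: RMS of the response
    n_peaks = max(math.sqrt(m2 / m0) / (2 * math.pi) * duration, 2.0)
    pf = (math.sqrt(2 * math.log(n_peaks))      # Davenport peak factor
          + 0.5772 / math.sqrt(2 * math.log(n_peaks)))
    return pf * a_rms                           # peak ~ peak factor * RMS

freqs = [0.1 + 0.05 * i for i in range(1000)]   # 0.1 Hz .. ~50 Hz
fas = [math.exp(-f / 10.0) for f in freqs]      # toy spectrum, not a seismological model
psa = rvt_psa(freqs, fas, f_osc=5.0)
```

The appeal noted in the annotation is visible here: seismological effects combine multiplicatively on `fas`, and the oscillator response and peak statistics are applied only once at the end of the chain.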