Dissertations on the topic "In situ computing"


Consult the top 41 dissertations for your research on the topic "In situ computing".

Next to every work in the list of references you will find an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided that the corresponding data are available in the record's metadata.

Browse dissertations from a wide variety of disciplines and compile an accurately formatted bibliography.

1

Ranisavljević, Elisabeth. "Cloud computing appliqué au traitement multimodal d’images in situ pour l’analyse des dynamiques environnementales." Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20128/document.

Abstract:
Analyzing landscapes, their dynamics and environmental evolution requires regular data from the field sites, in particular for glacier mass balance in Spitsbergen and high-mountain areas. Because of poor weather conditions, including the heavy cloud cover common at polar latitudes, and because of its cost, daily satellite imagery is not always accessible. Moreover, fast events such as flooding or snow cover are missed by satellite-based studies, whose sampling rate is too low to observe them. We complement satellite imagery with a set of ground-based, autonomous, automated digital cameras that take three pictures a day. These pictures form a huge database, and each picture requires several processing steps to extract the desired information (geometric corrections, handling of atmospheric disturbances, classification, etc.). Only computer science can store and manage all this information. Cloud computing, which has become more accessible in recent years, offers IT resources (computing power, storage, applications, etc.) as services. The storage of this mass of geographical data could, in itself, be a reason to use cloud computing, but in addition to storage space the cloud offers easy access, a scalable architecture and modularity in the available services. For the analysis of in situ photographs, cloud computing makes it possible to set up an automated tool that processes the whole dataset despite the variety of disturbances and the volume of data. By decomposing the image processing into several tasks, implemented as web services, the composition of these services allows us to adapt the processing to the conditions of each image.
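For readers who want a concrete picture of the service-composition idea, the toy Python sketch below chains a few hypothetical processing steps (geometric correction, haze filtering and snow classification stand in for the thesis's actual web services) and assembles a different pipeline depending on the conditions of each image. It only illustrates composing processing steps; it is not the system described in the thesis.

```python
from typing import Callable, Dict, List

import numpy as np

# Hypothetical processing steps; in the thesis each step is a web service,
# here they are plain functions operating on an image array.
def correct_geometry(img: np.ndarray) -> np.ndarray:
    return img  # placeholder: would re-project the camera view

def filter_haze(img: np.ndarray) -> np.ndarray:
    return np.clip(img * 1.1, 0, 255)  # placeholder contrast stretch

def classify_snow(img: np.ndarray) -> np.ndarray:
    return (img.mean(axis=-1) > 200).astype(np.uint8)  # crude brightness threshold

STEPS: Dict[str, Callable[[np.ndarray], np.ndarray]] = {
    "geometry": correct_geometry,
    "haze": filter_haze,
    "snow": classify_snow,
}

def compose(pipeline: List[str]) -> Callable[[np.ndarray], np.ndarray]:
    """Build a processing chain from step names, mimicking service composition."""
    def run(img: np.ndarray) -> np.ndarray:
        for name in pipeline:
            img = STEPS[name](img)
        return img
    return run

if __name__ == "__main__":
    photo = np.random.randint(0, 256, (480, 640, 3)).astype(float)
    # A cloudy image might need the haze filter, a clear one might not:
    cloudy_chain = compose(["geometry", "haze", "snow"])
    clear_chain = compose(["geometry", "snow"])
    print(cloudy_chain(photo).shape, clear_chain(photo).shape)
```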
2

Adhinarayanan, Vignesh. "Models and Techniques for Green High-Performance Computing." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/98660.

Abstract:
High-performance computing (HPC) systems have become power limited. For instance, the U.S. Department of Energy set a power envelope of 20 MW in 2008 for the first exascale supercomputer, now expected to arrive in 2021-22. Toward this end, we seek to improve the greenness of HPC systems by improving their performance per watt at the allocated power budget. In this dissertation, we develop a series of models and techniques to manage power at micro-, meso-, and macro-levels of the system hierarchy, specifically addressing data movement and heterogeneity. We target the chip interconnect at the micro-level, heterogeneous nodes at the meso-level, and a supercomputing cluster at the macro-level. Overall, our goal is to improve the greenness of HPC systems by intelligently managing power. The first part of this dissertation focuses on measurement and modeling problems for power. First, we study how to infer chip-interconnect power by observing the system-wide power consumption. Our proposal is to design a novel micro-benchmarking methodology based on data-movement distance by which we can properly isolate the chip interconnect and measure its power. Next, we study how to develop software power meters to monitor a GPU's power consumption at runtime. Our proposal is to adapt performance-counter-based models for use at runtime via a combination of heuristics, statistical techniques, and application-specific knowledge. In the second part of this dissertation, we focus on managing power. First, we propose to reduce the chip-interconnect power by proactively managing its dynamic voltage and frequency scaling (DVFS) state. Toward this end, we develop a novel phase predictor that uses approximate pattern matching to forecast future requirements and, in turn, proactively manage power. Second, we study the problem of applying a power cap to a heterogeneous node. Our proposal proactively manages the GPU power using phase prediction and a DVFS power model but reactively manages the CPU. The resulting hybrid approach can take advantage of the differences in the capabilities of the two devices. Third, we study how in-situ techniques can be applied to improve the greenness of HPC clusters. Overall, in our dissertation, we demonstrate that it is possible to infer the power consumption of real hardware components without directly measuring them, using the chip interconnect and GPU as examples. We also demonstrate that it is possible to build models of sufficient accuracy and apply them for intelligently managing power at many levels of the system hierarchy.
Doctor of Philosophy
Past research in green high-performance computing (HPC) mostly focused on managing the power consumed by general-purpose processors, known as central processing units (CPUs) and to a lesser extent, memory. In this dissertation, we study two increasingly important components: interconnects (predominantly focused on those inside a chip, but not limited to them) and graphics processing units (GPUs). Our contributions in this dissertation include a set of innovative measurement techniques to estimate the power consumed by the target components, statistical and analytical approaches to develop power models and their optimizations, and algorithms to manage power statically and at runtime. Experimental results show that it is possible to build models of sufficient accuracy and apply them for intelligently managing power on multiple levels of the system hierarchy: chip interconnect at the micro-level, heterogeneous nodes at the meso-level, and a supercomputing cluster at the macro-level.
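As a rough illustration of the counter-based power modeling mentioned above (not the dissertation's actual models), the following sketch fits a linear power model to synthetic performance-counter rates with ordinary least squares and then uses it as a software power meter; the counters, weights and idle power are invented for the example.

```python
import numpy as np

# Synthetic example: rows are samples, columns are hypothetical counter rates
# (e.g. instructions/s, DRAM accesses/s, interconnect flits/s). The real models
# in the dissertation choose counters and model terms far more carefully.
rng = np.random.default_rng(0)
counters = rng.random((200, 3))
true_weights = np.array([40.0, 25.0, 15.0])      # watts per unit of each rate
idle_power = 30.0
power = counters @ true_weights + idle_power + rng.normal(0, 1.0, 200)

# Fit P ~ w . c + b with ordinary least squares.
X = np.hstack([counters, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
print("estimated weights:", coef[:3], "estimated idle power:", coef[3])

# At runtime the fitted model turns a counter snapshot into a power estimate:
snapshot = rng.random(3)
print("predicted power (W):", snapshot @ coef[:3] + coef[3])
```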
3

Li, Shaomeng. "Wavelet Compression for Visualization and Analysis on High Performance Computers." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23905.

Abstract:
As HPC systems move towards exascale, the discrepancy between computational power and I/O transfer rate is only growing larger. Lossy in situ compression is a promising solution to address this gap, since it alleviates I/O constraints while still enabling traditional post hoc analysis. This dissertation explores the viability of such a solution with respect to a specific kind of compressor — wavelets. We especially examine three aspects of concern regarding the viability of wavelets: 1) information loss after compression, 2) its capability to fit within in situ constraints, and 3) the compressor’s capability to adapt to HPC architectural changes. Findings from this dissertation inform in situ use of wavelet compressors on HPC systems, demonstrate its viabilities, and argue that its viability will only increase as exascale computing becomes a reality.
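The core trade-off the dissertation studies, i.e. discarding small wavelet coefficients in exchange for I/O savings and then quantifying the information loss, can be sketched with a toy single-level Haar transform in plain NumPy (the actual work uses proper multi-level wavelet transforms and HPC-oriented implementations, not this toy):

```python
import numpy as np

def haar_1d(x: np.ndarray) -> np.ndarray:
    """One level of a 1-D Haar transform (length of x must be even)."""
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    dif = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([avg, dif])

def inv_haar_1d(c: np.ndarray) -> np.ndarray:
    half = c.size // 2
    avg, dif = c[:half], c[half:]
    x = np.empty(c.size)
    x[0::2] = (avg + dif) / np.sqrt(2.0)
    x[1::2] = (avg - dif) / np.sqrt(2.0)
    return x

# A smooth "simulation field" plus a little noise.
field = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.01 * np.random.randn(1024)
coeffs = haar_1d(field)

# Lossy step: keep only the largest 25% of coefficients, zero out the rest.
keep = coeffs.size // 4
threshold = np.sort(np.abs(coeffs))[-keep]
compressed = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

# Reconstruct and measure the information loss caused by compression.
restored = inv_haar_1d(compressed)
rmse = np.sqrt(np.mean((field - restored) ** 2))
print(f"kept {keep}/{coeffs.size} coefficients, RMSE = {rmse:.4f}")
```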
4

Alomar, Barceló Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems." Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.

Abstract:
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. In addition, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications such as control, medical monitoring, robotics and sensor networks. However, the hardware realization of ANNs with large neuron counts, as in RC, is a challenging task due to the large resource requirements of the operations involved. Despite the potential benefits of digital hardware circuits for RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the various alternative approaches. The advantages and shortcomings of the different techniques regarding accuracy, computation speed and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
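To make the reservoir computing model concrete, here is a minimal software echo state network in NumPy: a fixed random recurrent reservoir whose only trained part is a ridge-regression readout. It is a reference point for what the thesis then maps onto limited digital hardware, not one of the hardware designs themselves; sizes and scaling factors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Reservoir: a fixed random recurrent network; only the readout is trained.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(u: np.ndarray) -> np.ndarray:
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 40 * np.pi, 4000)
u = np.sin(t)[:, None]
target = np.roll(u, -1, axis=0)

X = run_reservoir(u)[:-1]
y = target[:-1]

# Ridge-regression readout (the only trained part of the network).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", float(np.mean((pred - y) ** 2)))
```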
5

Santos, Rodríguez Patrícia. "Computing-Based Testing: conceptual model, implementations and experiments extending IMS QTI." Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/69962.

Abstract:
The use of objective tests in Technology Enhanced Learning (TEL) is based on the application of computers to support automatic assessment. Current research in this domain is mainly focused on the design of new question items, with IMS Question and Test Interoperability (QTI) as the recognized de facto standard. This thesis claims that the domain can be extended with the design of advanced test scenarios that integrate new interactive contexts for the visualization of question items and tests, and that consider different types of devices and technologies enabling diverse activity settings. In this context, the dissertation proposes to term the domain Computing-Based Testing (CBT) instead of Computer-Based Testing, because it better captures the new technological possibilities for supporting testing. Advanced CBT scenarios can increase teachers' choices in the design of more appropriate tests for their subject areas, enabling the assessment of higher-order skills. With the aim of modelling an advanced CBT domain that extends the current possibilities of QTI and related work, this thesis provides a set of contributions around three objectives. The first objective is to propose a Conceptual Model for the CBT domain considering three main dimensions: the Question-item, the Test and the Activity. To tackle this objective, the thesis presents, on the one hand, a framework to assist in the categorization and design of advanced CBT scenarios and, on the other hand, two models that suggest elements for technologically representing the Test and Question-item dimensions. The models are platform-independent models (PIMs) that extend QTI in order to support advanced CBT. In addition, the use of patterns is proposed to complement the modelling of the domain. The second objective seeks to show the relevance, value and applicability of the CBT Conceptual Model through exemplary challenging scenarios and case studies in authentic settings. To this end, the dissertation evaluates the design and implementation of a set of CBT systems and experiments. All the experiments use the proposed CBT Conceptual Model for designing an advanced CBT scenario, and for each case the CBT-PIMs serve as the basis for developing a particular CBT-PSM and system. The evaluation results show that the implementations foster educational benefits, enable the assessment of higher-order skills and enhance the students' motivation. Finally, the third objective is devoted to proposing extension paths for QTI. The collection of models proposed in the thesis suggests different extension directions for QTI so as to enable the implementation of advanced questions, tests and activities. The proposed systems and scenarios also represent reference implementations and good practices for the proposed extension paths.
6

Dirand, Estelle. "Développement d'un système in situ à base de tâches pour un code de dynamique moléculaire classique adapté aux machines exaflopiques." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM065/document.

Abstract:
The exascale era will widen the gap between the data generation rate and the time needed to write and later analyze these data in a post-processing fashion, dramatically increasing the end-to-end time to scientific discovery and calling for a shift toward new data processing methods. The in situ paradigm proposes to analyze data while they are still resident in the supercomputer memory, reducing the need for data storage. Several techniques already exist: executing simulation and analytics on the same nodes (in situ), using dedicated nodes (in transit) or combining the two approaches (hybrid). Most traditional in situ techniques target simulations that are not able to fully benefit from the ever-growing number of cores per processor, and they were not designed for the emerging manycore processors. Task-based programming models, on the other hand, are expected to become a standard for these architectures, but few task-based in situ techniques have been developed so far. This thesis studies the design and integration of a novel task-based in situ framework inside a task-based molecular dynamics code designed for exascale supercomputers. We benefit from the composability properties of the task-based programming model to implement the TINS hybrid framework. Analytics workflows are expressed as graphs of tasks that can in turn generate child tasks to be executed in transit or interleaved with simulation tasks in situ. The in situ execution is performed thanks to an innovative dynamic helper core strategy that uses the work-stealing concept to finely interleave simulation and analytics tasks inside a compute node with a low overhead on the simulation execution time. TINS uses the Intel TBB work-stealing scheduler and is integrated into ExaStamp, a task-based molecular dynamics code. Various experiments have shown that TINS is up to 40% faster than state-of-the-art in situ libraries. Molecular dynamics simulations of up to 2 billion particles on up to 14,336 cores have shown that TINS is able to execute complex analytics workflows at a high frequency with an overhead smaller than 10%.
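TINS itself is a C++ framework built on Intel TBB's work-stealing scheduler, but the basic idea of interleaving analytics with simulation steps on the same node, instead of writing every step to disk, can be sketched in a few lines of Python with a thread pool standing in for the helper cores. This is a toy illustration only, with made-up simulation and analysis kernels, not the TINS API.

```python
import concurrent.futures as cf

import numpy as np

def simulate_step(step: int, n: int = 200_000) -> np.ndarray:
    """Stand-in for one molecular-dynamics step producing particle data."""
    rng = np.random.default_rng(step)
    return rng.random((n, 3))

def analyze(step: int, positions: np.ndarray):
    """Stand-in for an in situ analysis (here: coordinate spread per step)."""
    return step, float(positions.std())

results = []
# The pool plays the role of the "helper cores": analysis of step k overlaps
# with the computation of step k+1, and nothing is written to disk.
with cf.ThreadPoolExecutor(max_workers=2) as pool:
    pending = []
    for step in range(10):
        data = simulate_step(step)          # simulation work on the main thread
        pending.append(pool.submit(analyze, step, data))
    for fut in cf.as_completed(pending):
        results.append(fut.result())

print(sorted(results)[:3])
```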
7

Carlson, Darren Vaughn. "Ocean. Towards Web-scale context-aware computing. A community-centric, wide-area approach for in-situ, context-mediated component discovery and composition." Lübeck Zentrale Hochschulbibliothek Lübeck, 2010. http://d-nb.info/1001862880/34.

8

Dutta, Soumya. "In Situ Summarization and Visual Exploration of Large-scale Simulation Data Sets." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524070976058567.

9

Lemon, Alexander Michael. "A Shared-Memory Coupled Architecture to Leverage Big Data Frameworks in Prototyping and In-Situ Analytics for Data Intensive Scientific Workflows." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7545.

Abstract:
There is a pressing need for creative new data analysis methods which can sift through scientific simulation data and produce meaningful results. The types of analyses and the amount of data handled by current methods are still quite restricted, and new methods could provide scientists with a large productivity boost. New methods could be simple to develop in big data processing systems such as Apache Spark, which is designed to process many input files in parallel while treating them logically as one large dataset. This distributed model, combined with the large number of analysis libraries created for the platform, makes Spark ideal for processing simulation output. Unfortunately, the filesystem becomes a major bottleneck in any workflow that uses Spark in such a fashion. Faster transports are not intrinsically supported by Spark, and its interface almost denies the possibility of maintainable third-party extensions. By leveraging the semantics of Scala and Spark's recent scheduler upgrades, we force co-location of Spark executors with simulation processes and enable fast local inter-process communication through shared memory. This provides a path for bulk data transfer into the Java Virtual Machine, removing the current Spark ingestion bottleneck. Besides showing that our system makes this transfer feasible, we also demonstrate a proof-of-concept system integrating traditional HPC codes with bleeding-edge analytics libraries. This provides scientists with guidance on how to apply our libraries to gain a new and powerful tool for developing new analysis techniques in large scientific simulation pipelines.
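The Spark/JVM side of this coupling is beyond a short example, but the underlying mechanism, handing bulk simulation output to a co-located analytics process through shared memory rather than the filesystem, can be sketched with Python's multiprocessing.shared_memory. The process roles and array sizes below are invented for illustration; the thesis's actual transport targets Spark executors.

```python
from multiprocessing import Process, shared_memory

import numpy as np

def analytics(shm_name: str, shape, dtype) -> None:
    """Consumer side: attach to the block that the 'simulation' published."""
    shm = shared_memory.SharedMemory(name=shm_name)
    field = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    print("analytics sees mean =", float(field.mean()))
    shm.close()

if __name__ == "__main__":
    data = np.random.random((1000, 1000))           # "simulation output"

    shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
    view = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
    view[:] = data                                  # bulk copy, no file involved

    p = Process(target=analytics, args=(shm.name, data.shape, data.dtype))
    p.start()
    p.join()

    shm.close()
    shm.unlink()
```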
10

Carlson, Darren Vaughn [Verfasser]. "Ocean. Towards Web-scale context-aware computing : A community-centric, wide-area approach for in-situ, context-mediated component discovery and composition / Darren Vaughn Carlson." Lübeck : Zentrale Hochschulbibliothek Lübeck, 2010. http://d-nb.info/1001862880/34.

11

Soumagne, Jérome. "An In-situ Visualization Approach for Parallel Coupling and Steering of Simulations through Distributed Shared Memory Files." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00788826.

Abstract:
As simulation codes become more powerful and more interactive, it becomes important to follow the progress of a simulation in situ, performing not only visualization but also analysis of the data as they are generated. Monitoring a simulation or post-processing its data in situ has an obvious advantage over the conventional approach of saving to, and later reloading from, a file system: the time and space taken to write and then read the data from disk is a significant bottleneck for the simulation and the subsequent post-processing steps. Moreover, the simulation can be stopped, modified, or potentially steered, thus conserving CPU resources. In this thesis we present a loose-coupling approach that allows a simulation to transfer data to a visualization server through the use of in-memory files. We show how the interface, implemented on top of a hierarchical data format (HDF5), lets us efficiently reduce the I/O bottleneck by using effective communication and data-mapping strategies. For steering, we present an interface that allows not only simple parameter changes but also complete remeshing of grids or operations involving the regeneration of numerical quantities over the entire computational domain to be carried out. This approach, tested and validated on two industrial test cases, is generic enough that no particular knowledge of the underlying data model is required.
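A much simpler, single-process cousin of the in-memory file idea is HDF5's "core" driver, which keeps an entire HDF5 file in RAM. The sketch below (assuming the h5py binding) only shows the write-without-disk pattern; it does not reproduce the distributed-shared-memory coupling or the steering interface developed in the thesis.

```python
import h5py
import numpy as np

field = np.random.random((64, 64, 64))   # one time step of a toy simulation

# driver='core' keeps the HDF5 file in memory; backing_store=False means it is
# never flushed to disk, so the write avoids filesystem I/O entirely.
with h5py.File("step_0001.h5", "w", driver="core", backing_store=False) as f:
    f.create_dataset("velocity", data=field, compression="gzip")
    f["velocity"].attrs["time"] = 0.01
    print("in-memory file contains:", list(f.keys()))
```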
12

Meyer, Lucas. "Deep Learning en Ligne pour la Simulation Numérique à Grande Échelle." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALM001.

Abstract:
Many engineering applications and scientific discoveries rely on faithful numerical simulations of complex phenomena. These phenomena are transcribed mathematically into partial differential equations (PDEs), whose solutions are generally approximated by solvers that perform intensive computations and generate tremendous amounts of data. Applications rarely require a single simulation, but rather a large ensemble of runs for different parameters, in order to analyze the sensitivity of the phenomenon or to find an optimal configuration. Such large ensemble runs are limited by computation time and finite memory capacity. The high computational cost has led to the development of high-performance computing (HPC) and surrogate models. Recently, pushed by the success of deep learning in computer vision and natural language processing, the scientific community has considered its use to accelerate numerical simulations. This thesis follows this approach, first presenting two machine learning techniques for surrogate models. First, we propose to use a series of convolutions on hierarchical graphs to reproduce the velocity field of fluids as generated by solvers at any time step of the simulation. Second, we hybridize regression algorithms with classical reduced-order modeling techniques to identify the coefficients of any new simulation in a reduced basis computed by proper orthogonal decomposition. These two approaches, like the majority found in the literature, are supervised: their training requires generating a large number of simulations. They thus suffer from the same problem that motivated their development in the first place: generating many faithful simulations at scale is laborious. We propose a generic training framework for artificial neural networks that generates simulation data on the fly by leveraging HPC resources. Data are produced by running several instances of the solver simultaneously for different parameters, and the solver itself can be parallelized over several processing units. As soon as a time step is computed by any simulation, it is streamed for training. No data are ever written to disk, thus avoiding slow input/output operations and reducing the storage footprint. Training is performed by several GPUs with distributed data parallelism. Because the training is now online, it induces a bias in the data compared to classical training, where samples are drawn uniformly from an ensemble of simulations available a priori. To mitigate this bias, each GPU is associated with a memory buffer in charge of mixing the incoming simulation data. This framework has improved the generalization capabilities of state-of-the-art architectures by exposing them during training to a richer diversity of data than would have been feasible with classical training. Experiments show the importance of the memory buffer implementation in guaranteeing generalization capabilities and high-throughput training. The framework was used to train a deep surrogate for heat diffusion simulation in less than 2 hours on 8 TB of data processed in situ, increasing prediction accuracy by 47% compared with a classical setting.
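The role of the per-GPU memory buffer can be illustrated with a small stand-alone sketch: streamed time steps from several fake solver instances are pushed into a bounded buffer that serves shuffled minibatches, which is the mixing mechanism used to reduce the ordering bias of online training. All class and parameter names below are illustrative, not the framework's API.

```python
import random
from collections import deque

import numpy as np

class MixingBuffer:
    """Holds recently streamed samples and serves shuffled minibatches.

    A crude stand-in for the per-GPU buffer described in the thesis: incoming
    time steps from several concurrent simulations are mixed before training.
    """

    def __init__(self, capacity: int = 4096):
        self.buf = deque(maxlen=capacity)

    def push(self, sample: np.ndarray) -> None:
        self.buf.append(sample)

    def minibatch(self, size: int = 32) -> np.ndarray:
        return np.stack(random.sample(list(self.buf), k=min(size, len(self.buf))))

def stream_simulations(n_sims: int = 4, n_steps: int = 100):
    """Fake 'solver' instances, each producing one field per time step."""
    for step in range(n_steps):
        for sim in range(n_sims):
            yield np.full((8, 8), fill_value=sim + 0.01 * step)

buffer = MixingBuffer()
for i, field in enumerate(stream_simulations()):
    buffer.push(field)
    if i % 50 == 49:                       # pretend this is one training step
        batch = buffer.minibatch(32)
        print("batch shape:", batch.shape, "mean:", round(float(batch.mean()), 3))
```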
13

Su, Yu. "Big Data Management Framework based on Virtualization and Bitmap Data Summarization." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420738636.

14

Honore, Valentin. "Convergence HPC - Big Data : Gestion de différentes catégories d'applications sur des infrastructures HPC." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0145.

Abstract:
Numerical simulations are complex programs that allow scientists to solve, simulate and model complex phenomena. High-performance computing (HPC) is the domain in which these complex and heavy computations are performed on large-scale computers, also called supercomputers. Nowadays, most scientific fields need supercomputers to undertake their research, including cosmology, physics, biology and chemistry. Recently, we have observed a convergence between Big Data/machine learning and HPC. Applications coming from these emerging fields (for example, those using deep learning frameworks) are becoming highly compute-intensive, and HPC facilities have emerged as an appropriate solution to run them. The large variety of existing applications imposes a requirement on all supercomputers: they must be generic and compatible with all kinds of applications. Computing nodes themselves also come in a wide variety, from CPUs to GPUs, with specific nodes designed to perform dedicated computations of a given type (for example vector or matrix operations) very fast. Supercomputers are used in a competitive environment: multiple users simultaneously connect and request computing resources to run their applications. This competition for resources is managed by the machine itself via a specific program called the scheduler, which reviews, assigns and maps the different user requests. Each user asks for (that is, pays for the use of) access to the resources of the supercomputer in order to run an application, and is granted access to some resources for a limited amount of time. This means that users need to estimate how many compute nodes they want to request and for how long, which is often difficult to decide. In this thesis, we provide solutions and strategies to tackle these issues. We propose mathematical models, scheduling algorithms and resource partitioning strategies in order to optimize high-throughput applications running on supercomputers. We focus on two types of applications in the context of the HPC/Big Data convergence: data-intensive and irregular (or stochastic) applications. Data-intensive applications represent typical HPC workloads. They are made up of two main components. The first, called simulation, is a very compute-intensive code that generates a tremendous amount of data by simulating a physical or biological phenomenon. The second, called analytics, consists of sub-routines that post-process the simulation output to extract, generate and save the final result of the application. We propose to optimize these applications by designing automatic resource partitioning and scheduling strategies for both components. To do so, we use the well-known in situ paradigm, which consists in scheduling both components together in order to avoid the huge cost of saving all simulation data on disk. We propose automatic resource partitioning models and scheduling heuristics to improve the overall performance of in situ applications. Stochastic applications are applications whose execution time depends on their input, whereas in usual data-intensive applications the makespan of simulation and analytics is not affected by such parameters. Stochastic jobs originate from Big Data or machine learning workloads, whose performance is highly dependent on the characteristics of the input data, and they have recently appeared on HPC platforms. However, the uncertainty of their execution time remains a strong limitation when using supercomputers: the user needs to estimate how long the job will run and enters this estimate as a first reservation value, but if the job does not complete within this first reservation, it has to be resubmitted with a longer reservation. We propose a novel approach to help users determine an optimal sequence of reservations that minimizes the expected total cost of all reservations. These solutions are then extended to an application model with checkpoints at the end of (some) reservations, so as to avoid losing the work done during reservations that turn out to be too short. Finally, we profile a stochastic application from the neuroscience domain in order to better understand the properties of its stochasticity; through this study, we show that knowing the characteristics of the applications well is essential for designing strategies that are efficient from the user's point of view.
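The reservation problem for stochastic jobs can be made concrete with a small worked example: given an empirical distribution of runtimes, the expected cost of a sequence of reservation lengths is the cost of each reservation weighted by the probability that the job is still unfinished when that reservation starts. The sketch below uses a deliberately simple pay-for-the-whole-reservation cost model, an assumption for illustration rather than the thesis's exact cost function.

```python
import numpy as np

rng = np.random.default_rng(1)
# Empirical job runtimes in hours (a stand-in for profiled application data).
runtimes = rng.lognormal(mean=1.0, sigma=0.6, size=10_000)

def expected_cost(reservations, cost_per_hour: float = 1.0) -> float:
    """E[cost] under a simple model: every attempted reservation is paid in full."""
    total = 0.0
    prev = 0.0
    for t in reservations:
        p_attempted = np.mean(runtimes > prev)   # job not finished before this try
        total += p_attempted * cost_per_hour * t
        prev = t
    return float(total)

# Compare a few candidate sequences ending at a value that covers all runtimes.
t_max = float(runtimes.max())
candidates = [
    [t_max],                          # one long, safe reservation
    [2.0, 4.0, t_max],                # start short, grow on failure
    [1.0, 2.0, 4.0, 8.0, t_max],      # doubling sequence
]
for seq in candidates:
    print([round(t, 1) for t in seq], "->", round(expected_cost(seq), 2))
```

Running it shows that a short-then-longer sequence typically has a lower expected cost than a single conservative reservation, which is the intuition behind optimizing the reservation sequence.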
15

Chen, Yuan. "Using mobile computing for construction site information management." Thesis, University of Newcastle Upon Tyne, 2008. http://hdl.handle.net/10443/164.

Abstract:
In recent years, construction information management has greatly benefited from advances in Information and Communication Technology (ICT), which have increased the speed of information flow, enhanced the efficiency and effectiveness of information communication, and reduced the cost of information transfer. Current ICT support has been extended to construction site offices. However, construction projects typically take place in the field, where construction personnel have difficulty gaining access to conventional information systems for their information requirements. Advances in affordable mobile devices, increases in wireless network transfer speeds and enhancements in mobile application performance give mobile computing a powerful potential to improve on-site construction information management. This research project aims to explore how mobile computing can be implemented to manage information on construction sites through the development of a framework. Various research methods and strategies were adopted to achieve the defined aim of this research, including an extensive literature review in both construction information management and mobile computing; case studies investigating information management on construction sites; a web-based survey of the existing mechanisms for on-site information retrieval and transfer; and a case study validating the framework. Based on the results obtained from the literature review, case studies and the survey, the developed framework identifies the primary factors that influence the implementation of mobile computing in construction site information management, and the interrelationships between those factors. Each of these primary factors is further divided into sub-factors that describe the detailed features of the relevant primary factor. In order to explore links between sub-factors, the top-level framework is broken down into different sub-frameworks, each of which presents the specific links between two primary factors. One application of the developed framework is the selection of a mobile computing strategy for managing on-site construction information. The overall selection procedure has three major steps: the definition of on-site information management objectives; the identification of a mobile computing strategy; and the selection of appropriate mobile computing technologies. The evaluation and validity of the selection procedure is demonstrated through an illustrative construction scenario.
16

Ghorbani, Mohammadmersad. "Computational analysis of CpG site DNA methylation." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/8217.

Abstract:
Epigenetics is the study of factors that can change DNA and be passed to the next generation without a change to the DNA sequence. DNA methylation is one category of epigenetic change: the attachment of a methyl group (CH3) to DNA. Most of the time it occurs in sequences in which a C is followed by a G, known as CpG sites, through the addition of the methyl group to the cytosine residue. As science and technology progress, new data become available about individuals' DNA methylation profiles in different conditions, and new features are discovered that can play a role in DNA methylation. The availability of new data on DNA methylation and other features of DNA poses a challenge to bioinformatics and offers the opportunity to discover new knowledge from existing data. In this research, multiple data series were used to identify classes of CpG-site methylation: a) never methylated CpG sites, b) always methylated CpG sites, c) CpG sites methylated in cancer/disease samples and non-methylated in normal samples, and d) CpG sites methylated in normal samples and non-methylated in cancer/disease samples. After identification of these sites and their classes, an analysis was carried out to find the features that can better classify them, and a matrix of features was generated using four applications in the EMBOSS software suite. The feature matrix was also generated using the gUse/WS-PGRADE portal workflow system. To do this, each of the four applications was grid-enabled and ported to the BOINC platform, and the gUse portal was connected to the BOINC project via 3G-bridge. Each node in the workflow created a portion of the matrix, and these portions were then combined to create the final matrix. This final feature matrix was used in a hill-climbing workflow, whose hill-climbing node was a Java program ported to the BOINC platform. A hill-climbing search workflow was used to search for a subset of features that better classify the CpG sites, using five different measurements and three different classification methods: support vector machine, naïve Bayes and the J48 decision tree. Using this approach, the hill-climbing search found models that contain fewer than half the number of features and give better classification results. It has also been demonstrated that the gUse/WS-PGRADE workflow system provides a modular way of generating features, so new feature-generator applications can be added without changing other parts, and that using grid-enabled applications can speed up both feature generation and feature subset selection. The approach used in this research for distributed, workflow-based feature generation is not restricted to this study and can be applied in other studies that involve feature generation. The grid-enabled hill-climbing search application can also be used in different contexts, as it only requires the same format of feature matrix to be followed.
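As an illustration of the feature-subset search described above (not the grid-enabled BOINC implementation), the following sketch runs a plain hill-climbing search over feature subsets with a naïve Bayes classifier and cross-validated accuracy, on a synthetic stand-in for the EMBOSS-derived CpG feature matrix; dataset shape and parameters are invented for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in for the CpG feature matrix: 300 sites, 40 candidate features.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)

def score(features):
    if not features:
        return 0.0
    return cross_val_score(GaussianNB(), X[:, features], y, cv=5).mean()

# Hill climbing over feature subsets: add or drop one feature at a time,
# keeping a move only if cross-validated accuracy strictly improves.
current, best = [], 0.0
improved = True
while improved:
    improved = False
    for f in range(X.shape[1]):
        candidate = ([*current, f] if f not in current
                     else [g for g in current if g != f])
        s = score(candidate)
        if s > best:
            current, best, improved = candidate, s, True

print(f"selected {len(current)} of {X.shape[1]} features, CV accuracy = {best:.3f}")
```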
17

Löfgren, Alexander. "Making Mobile Meaning : expectations and experiences of mobile computing usefulness in construction site management practice." Doctoral thesis, KTH, Industriell ekonomi och organisation (Inst.), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9216.

Abstract:
During the last decade, anticipated and realized benefits of mobile and wireless information and communication technology (ICT) for different business purposes have been widely explored and evaluated. Also, the significance of 'user acceptance' mechanisms through 'perceived usefulness' of ICT applications has gained broad recognition among business organizations in developing and adopting new ICT capabilities. However, even though technology usefulness is regularly highlighted as an important factor in ICT projects, there is often a lack of understanding of what the concept involves in the practical work context of the actual users, and of how to deal with issues of usefulness in organizational ICT development processes. This doctoral thesis covers a 1.5-year case study of a mobile computing development project at a Swedish international construction enterprise. The company's mobile ICT venture addressed the deficient ICT use situation of management practitioners in construction site operations. The study portrays the overall socially shaped development process of the chosen technology and its evolving issues of usefulness for existing construction site management practice. The perceived usefulness of mobile computing tools among the 'user-practitioners' is described as the emergence of 'meaningful use' based on initial expectations and actual experiences of the technology in their situated fieldwork context. The studied case depicts the ongoing and open-ended conversational nature of understanding adequate ICT requirements in work practice, and the negotiation of mobile computing technology design properties between users and developers over time towards the alignment of diverse personal, professional and organizational needs and purposes of ICT use. The studied introduction of mobile computing technology in construction site management fieldwork practice serves as an illustrative example of how to interpret, understand and approach issues of usefulness and user acceptance of ICT resources in operative work contexts when managing ICT development processes in organizations.
18

Schmelzer, Diana McAllister. "A case study and proposed decision guide for allocating instructional computing resources at the school site level." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/76500.

Abstract:
School-based administrators must often determine the use of potentially powerful computing resources for the school's instructional program. While site-level administrators have allocated many kinds of resources within schools, the allocation of this new technology has little precedent. A decision guide is proposed to assist site-level administrators. This guide explores three major sources of information to help the site-level administrator make computer-related allocations. First, the context of the school, such as the school profile and the district plan for instructional use of microcomputers, forms a basis for investigating the allocation of computing resources. Second, because both access to and applications for instructional computing resources are critical issues, the moral dilemma of equity versus excellence is examined. Finally, empirical information from the existing literature and from a possible school-based research effort is analyzed. A procedure for using this information to make decisions is proposed. By weighing these three sources of information, it is contended that the administrator is better able to allocate potentially powerful computing resources. Woven into the decision guide are specific examples from one administrator's efforts to make decisions about word processing at an intermediate school. The context, equity-excellence issues and empirical information are examined at this particular site to illustrate one application of the guide and to share findings about word processing as an instructional tool.
Ed. D.
19

Creutz, Julia, and Isabelle Borgkvist. "Smart Hem, smart för vem? : En kvalitativ studie om varför det Smarta Hemmet inte har fått sitt förväntade genomslag." Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-29681.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Smart Homes are not smart for everyone, at least not yet. The purpose of this paper is to examine four obstacles that prevent Smart Homes from being adopted as a standard in Sweden. This paper builds on the contributions of the study “Home Automation in the Wild: Challenges and Opportunities” (Brush et al. 2011), and further investigates the obstacles the authors present in that study. Drawing on a broad set of methods, we find that all the obstacles listed in that study (Brush et al. 2011) still remain, although perhaps on different terms. In the discussion part of this paper, we present a few ways to work against these obstacles and, hopefully, eliminate them.
Smarta Hem är inte smarta för alla, åtminstone inte än. Syftet med denna uppsats är undersöka fyra hinder som förhindrar Smarta Hem från att anammas som standard i Sverige. Denna uppsats är baserad på bidragen från studien “Home Automation in the Wild: Challanges and Opportunities” (brush et al. 2011), och undersöker de hinder som presenteras i den studien. Tack vare användandet av ett flertal olika metoder, kan vi konstatera att de hinder som presenteras i den specifika studien (Brush et al. 2011) fortfarande finns kvar idag, men möjligtvis på andra villkor. I uppsatsens diskussionsdel presenterar vi ett antal sätt att arbeta mot dessa hinder och, förhoppningsvis, kunna eliminera dem.
20

Okamoto, Sohei. "WIDE web interface development environment /." abstract and full text PDF (free order & download UNR users only), 2005. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433350.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
21

Ward, Michael James. "The capture and integration of construction site data." Thesis, Loughborough University, 2004. https://dspace.lboro.ac.uk/2134/799.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The use of mobile computing on the construction site has been a well-researched area since the early 1990s; however, there still remains a lack of computing on the construction site. Where computers are utilised on site, this tends to be by knowledge workers using a laptop or PC in the site office, with electronic data collection being the exception rather than the norm. The problems associated with paper-based documentation on the construction site have long been recognised (Baldwin et al., 1994; McCullough, 1993), yet there still seems to be reluctance to replace it with electronic alternatives. Many reasons exist for this, such as low profit margins, perceived high cost, perceived lack of available hardware and perceived inability of the workforce. However, the benefits that can be gained from the successful implementation of IT on the construction site, and the ability to re-use construction site data to improve company performance, whilst difficult to cost, are clearly visible. This thesis presents the development and implementation of a data capture system for the management of the construction of rotary bored piles (SHERPA). Operated by the site workforce, SHERPA comprises a wireless network, a site-based server and web-based data capture using tablet computers. This research intends to show that mobile computing technologies can be implemented on the construction site and that substantial benefits can be gained for the company from the re-use and integration of the captured site data.
22

Skyner, Rachael Elaine. "Hydrate crystal structures, radial distribution functions, and computing solubility." Thesis, University of St Andrews, 2017. http://hdl.handle.net/10023/11746.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Solubility prediction usually refers to prediction of the intrinsic aqueous solubility, which is the concentration of an unionised molecule in a saturated aqueous solution at thermodynamic equilibrium at a given temperature. Solubility is determined by structural and energetic components emanating from solid-phase structure and packing interactions, solute–solvent interactions, and structural reorganisation in solution. An overview of the most commonly used methods for solubility prediction is given in Chapter 1. In this thesis, we investigate various approaches to solubility prediction and solvation model development, based on informatics and incorporation of empirical and experimental data. These are of a knowledge-based nature, and specifically incorporate information from the Cambridge Structural Database (CSD). A common problem for solubility prediction is the computational cost associated with accurate models. This issue is usually addressed by use of machine learning and regression models, such as the General Solubility Equation (GSE). These types of models are investigated and discussed in Chapter 3, where we evaluate the reliability of the GSE for a set of structures covering a large area of chemical space. We find that molecular descriptors relating to specific atom or functional group counts in the solute molecule almost always appear in improved regression models. In accordance with the findings of Chapter 3, in Chapter 4 we investigate whether radial distribution functions (RDFs) calculated for atoms (defined according to their immediate chemical environment) with water from organic hydrate crystal structures may give a good indication of interactions applicable to the solution phase, and justify this by comparison of our own RDFs to neutron diffraction data for water and ice. We then apply our RDFs to the theory of the Reference Interaction Site Model (RISM) in Chapter 5, and produce novel models for the calculation of Hydration Free Energies (HFEs).
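The General Solubility Equation referred to above has a compact closed form. The following is a minimal sketch, assuming the standard Yalkowsky formulation with the melting point in degrees Celsius and the octanol–water partition coefficient logP; the function and argument names are illustrative and not taken from the thesis.

def gse_log_solubility(melting_point_c: float, log_p: float) -> float:
    """General Solubility Equation (Yalkowsky form): log10 of intrinsic aqueous
    solubility (mol/L) from the melting point (deg C) and octanol-water logP."""
    # Compounds melting below 25 C are treated as liquids, so the melting-point
    # term only contributes when (mp - 25) is positive.
    return 0.5 - 0.01 * max(melting_point_c - 25.0, 0.0) - log_p

# Example: a hypothetical compound melting at 150 C with logP = 2.5
print(gse_log_solubility(150.0, 2.5))  # approximately -3.25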
23

Sigurjonsdottir, Edda Kristin. "Sit, Eat, Drink, Talk, Laugh – Dining and Mixed Media." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23378.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Sit, Eat, Drink, Talk, Laugh – Dining and Mixed Media, is an exploratory study of qualities in everyday life and challenges people to enjoy the qualities of mundanity. Seeking inspiration in ethnographic studies, field work was conducted in domestic settings, returning an extensive body of material to work from. The study challenges people to absorb the moment, reflect and enjoy, rather than pacing through a lifetime, with a constant focus on the future instead of the present. This work takes a starting point in food and dining as a social activity, where interactive sound and a reference to online social media is explored through two interventions. The results of these are discussed with central findings around food and dining in the area of sociology, the use of sound in ambient computing and on a higher level around the topic of temporality.
24

Posey, Orlando Guy. "Client/Server Systems Performance Evaluation Measures Use and Importance: a Multi-Site Case Study of Traditional Performance Measures Applied to the Client/Server Environment." Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc277882/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This study examines the role of traditional computing performance measures when used in a client/server system (C/SS) environment. It also evaluates the effectiveness of traditional computing measures of mainframe systems for use in C/SS. The underlying problem was the lack of knowledge about how performance measures are aligned with key business goals and strategies. This research study has identified and evaluated client/server performance measurements' importance in establishing an effective performance evaluation system. More specifically, this research enables an organization to do the following: (1) compare the relative states of development or importance of performance measures, (2) identify performance measures with the highest priority for future development, (3) contrast the views of different organizations regarding the current or desired states of development or relative importance of these performance measures.
25

Lemoine, David. "Modèles génériques et méthodes de résolution pour la planification tactique mono-site et multi-site." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00731297.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Tactical planning consists in building production plans that meet demand as closely as possible at minimum cost. Traditionally, this planning is divided into three main plans: the Sales and Operations Plan (PIC), the Master Production Schedule (PDP) and the Material Requirements Planning computation (CBN). Mathematical "lot-sizing" models have been developed to build these plans. However, mergers and acquisitions between companies have considerably complicated this planning by introducing the multi-site aspects inherent to the supply-chain concept, and, to our knowledge, no domain model or reference mathematical model exists for this problem. In this thesis we propose a generic knowledge model for multi-site planning, from which a generic mathematical model can be derived; by instantiation, the latter recovers the main models of the literature. We also propose efficient optimization methods for building production plans (PIC, PDP and CBN) in single-site and multi-site contexts. First, we address the computation of the PIC and the PDP in a single-site context through the resolution of the Capacitated Lot Sizing Problem (CLSP) using metaheuristics and lower bounds, improving results from the literature. Second, we propose a mathematical model, obtained by instantiating the generic model, for planning a hybrid flow-shop supply chain, together with an efficient optimization method to determine the PDPs and CBNs for that chain. We then address the operational-level feasibility of the resulting production plans using different couplings between mathematical and simulation models, which ensures the vertical synchronization of the plans. Finally, within an industrial contract, we study the deployment of a differentiated-demand inventory management policy: after assessing the feasibility of such a deployment in an industrial context, we designed the algorithms and developed the application that computes the rationing thresholds of each customer in order to run a full-scale test of this policy.
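For readers unfamiliar with the Capacitated Lot Sizing Problem mentioned above, a textbook single-item statement is sketched below with generic notation (not the thesis's own model): production x_t, inventory I_t and setup indicator y_t for periods t = 1..T, with production cost p_t, setup cost s_t, holding cost h_t, demand d_t, capacity C_t and a given initial inventory I_0.

\begin{align}
\min\; & \sum_{t=1}^{T} \big( p_t x_t + s_t y_t + h_t I_t \big) \\
\text{s.t.}\; & I_{t-1} + x_t - I_t = d_t, & t = 1,\dots,T, \\
& x_t \le C_t\, y_t, & t = 1,\dots,T, \\
& x_t \ge 0,\quad I_t \ge 0,\quad y_t \in \{0,1\}, & t = 1,\dots,T.
\end{align}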
26

Стеценко, Анастасія, та Anastasiia Stetsenko. "Особливості створення динамічних презентацій засобами програми Sway". СумДПУ імені А. С. Макаренка, 2017. http://repository.sspu.sumy.ua/handle/123456789/2626.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
В роботі розглянуто особливості створення динамічних презентацій у новій програмі з пакету Microsoft Office – Sway. Проаналізовано способи створення таких презентацій. Особливу увагу приділено створенню презентацій з документів з розширенням DOC, DOCX, PDF, PPT, PPTX.
This paper examines the features of creating dynamic presentations in Sway, the new program in the Microsoft Office suite. Methods of creating such presentations are analysed. Particular attention is paid to creating presentations from documents with the extensions DOC, DOCX, PDF, PPT and PPTX.
27

De, Silva Buddhima. "Realising end-user driven web application development using meta-design paradigm." View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/44493.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Thesis (Ph.D.)--University of Western Sydney, 2008.
A thesis submitted to the University of Western Sydney, College of Health and Science, School of Computing and Mathematics, in fulfilment of the requirements for the degree of Doctor of Philosophy. Includes bibliographical references.
28

De, Silva Buddhima. "Realising end-user driven web application development using meta-design paradigm." Thesis, View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/44493.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Small to Medium Enterprises (SMEs) need to use Information and Communication Technologies (ICT) to enhance their business processes to become competitive in the global economy. When an information system is introduced to an organisation it changes the original business environment thus changing the original requirements. This in turn will change the processes that are supported by the information system. Also when users get familiar with the system they ask for more functionality. This gives rise to a cycle of changes known as co-evolution. In addition, SMEs have budget constraints which make the problem associated with co-evolution worse. One solution to overcome this situation is to empower end-users to develop and maintain the information systems. Within the above context the work presented addresses the following research question: “How to support SME end-users to develop and / or maintain Web applications to support their business processes?” There are two main components to this question: What are the requirements of a suitable end-user development approach for SMEs and how to create the Web applications based on the requirements. The requirements of a suitable end-user development approach can be established by identifying the different types of Web applications required by SMEs, the capabilities of end-users in relation to developing and / or maintaining Web applications and how they conceptualise the Web applications. The literature review is conducted to discover different types of Web applications required by SMEs and to identify a suitable end-user development approach and tools that can support the development of these various types of Web applications. According to the literature survey, the main types of Web applications required by SMEs can be categorised as information centric Web applications (Simple Web sites which focus on effective presentation of unstructured information), data intensive Web applications (the focus is on efficient presentation and management of structured data such as product catalogue) and workflow intensive Web applications (The focus is on efficient automation of business processes such as an order processing system). The literature on end-user development shows that the existing end-user development approaches are focused on specific types of Web applications. The frameworks and tools in the Web development discipline mainly target experienced Web developers. Therefore a gap is identified as “there are limited end-user development approaches for developments of different types of Web applications which are required by SMEs to IT enable their businesses’”. The capabilities of SMEs in relation to Web application development were identified based on a study conducted with a group of SMEs. This study first surveyed the SMEs experience and knowledge in relation to Web application development and their attitude towards end-user development. Then their capabilities relating to Web application development were studied in a hands-on session to develop a Web site. The second study is conducted with administrative staff members involved in development of a Web application. This study helps to establish the requirements of a suitable end-user development approach from the point of view of the end-user developers. This study on end-user development observed the different activities carried out by end-users. Then the end-user was interviewed to identify the issues and benefits of end-user development in the project. 
Following that, a set of requirements for the end-user development approach was derived based on the findings from these two studies and the related literature: 1) A need to support different types of Web applications required by SMEs; 2) A need to support the specification of Web applications at the conceptual level; 3) A need for a common data repository to store the data used in different applications within the organisation; 4) Providing a common login to all applications within the organisation; 5) Striking a balance between Do it Yourself (DIY) and a professional developer that allows end-users to do the activities they are capable of while getting help from a professional developer for the difficult tasks. The conceptual aspects of different types of Web applications (information centric, data intensive and process intensive) were identified based on a literature survey of existing conceptual modelling approaches. This set of aspects was refined by modelling selected Web applications for each type of Web application. The aspects needed to specify different types of Web applications are: presentation, data, task, workflow, access control, navigation, and personalisation. Then the usage of these aspects in a set of end-user specifications was analysed. This study reveals that end-users only focus on some of these aspects, such as data and process, to specify the applications. Therefore, another requirement for the development approach was identified: a need to support development of Web applications with minimum aspects. A meta-design paradigm based on the meta-model of Web applications is proposed to support the identified requirements. A meta-model of Web applications is developed based on the patterns of different types of Web applications. A component-based Web application development framework called CBEADS (Component based eApplication Development and Deployment Shell) was extended to support the meta-model based development approach. Web applications can be created by populating the values for the attributes of the meta-model, which are related to the attributes of different aspects of the Web applications at the conceptual level. The meta-model is organised into three levels: shell level, application level, and function level. Aspects common to many Web applications are modelled at the shell level. The data model and user model are stored and managed at the shell level, which supports the requirements of a common data repository and a common login to all applications. The aspects common to a Web application are modelled at the application level. The function-specific aspects required to implement the functionality of the Web application are modelled at the function level. The meta-model has two properties called overriding and inheritance. The inheritance property allows developing Web applications with minimum aspects. The activities required to develop the Web applications in a framework supporting the meta-model are grouped into three levels based on the complexity of these tasks, named routine level, logical level and programming level. These different levels, together with the overriding property, help to balance between DIY and a professional developer. The meta-design paradigm is practically evaluated with a group of users including SMEs and students. The studies establish strategies for the success of the meta-design paradigm, such as the characteristics of individuals, facilitation and infrastructure.
The original contributions of this thesis enhance the field of end-user development by providing a new end-user development approach that can be used by business end-users to develop web applications. More importantly the major contributions of this research provide a practical approach that can be used particularly by SME end-users with little or no previous experience in web application development. Significant research contributions are made in the following four areas: 1) Establishing requirements for an end-user development approach suitable for business users. 2) Identifying a set of aspects required to model different types of Web applications at the conceptual level. 3) Developing a meta-design paradigm based on the meta-model of different types of Web applications 4) Developing the strategies for successful use of the meta-design paradigm.
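As a rough illustration of the three-level meta-model described above (shell, application and function levels, each holding aspects, with inheritance and overriding between levels), here is a minimal sketch; every class, attribute and value name is hypothetical and only mirrors the concepts in the abstract, not CBEADS's actual schema.

from dataclasses import dataclass, field
from typing import Dict, Optional

# The seven aspects identified in the thesis for specifying Web applications.
ASPECTS = ["presentation", "data", "task", "workflow",
           "access_control", "navigation", "personalisation"]

@dataclass
class Level:
    """One level of the meta-model; aspects not defined locally are inherited."""
    name: str
    aspects: Dict[str, dict] = field(default_factory=dict)
    parent: Optional["Level"] = None

    def resolve(self, aspect: str) -> dict:
        # Overriding: a local definition wins; inheritance: otherwise fall back
        # to the parent level (application -> shell).
        if aspect in self.aspects:
            return self.aspects[aspect]
        return self.parent.resolve(aspect) if self.parent else {}

shell = Level("shell", {"data": {"repository": "shared"},
                        "access_control": {"login": "single sign-on"}})
app = Level("order_tracking", {"navigation": {"menu": ["orders", "reports"]}}, parent=shell)
func = Level("list_orders", {"presentation": {"layout": "table"}}, parent=app)

print(func.resolve("data"))  # inherited from the shell level: {'repository': 'shared'}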
29

Bowden, Sarah L. "Application of mobile IT in construction." Thesis, Loughborough University, 2005. https://dspace.lboro.ac.uk/2134/794.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In recent years, the construction industry has been compelled to explore all possible options for improving the delivery of its products and services. Clients now expect a better service and projects that meet their requirements more closely. This has challenged the industry to become more efficient, more integrated and more attractive, with benefits for its potential workforce and for society as a whole. Information and communication technologies (ICT) are an enabler of the improvements required for modernisation. However, due to the geographically dispersed and nomadic nature of the construction industry's workforce, many people are prevented from efficiently and effectively using the ICT tools adopted to date. Mobile technologies providing the 'last mile' connection to the point of activity could be the missing link in the ongoing drive for process improvement. Although this has been a well-researched area, several barriers to mainstream adoption still exist, including a perceived lack of suitable devices, a perceived lack of computer literacy, and the perceived high cost. Through extensive industry involvement, this research has taken the theoretical idea that mobile IT use in the construction industry would be beneficial a step further, demonstrating by means of a state-of-the-art assessment, usability trials, case studies and demonstration projects that the barriers to mainstream adoption can be overcome. The findings of this work have been presented in four peer-reviewed papers, and an ongoing dissemination programme is expected to encourage further adoption.
30

Fortuna, Frederico José. "Normas no desenvolvimento de ambientes Web inclusivos e flexíveis." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275809.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Advisors: Maria Cecília Calani Baranauskas, Rodrigo Bonacin
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Resumo: De acordo com W3C, o valor social da Web está no fato de que ela possibilita a comunicação, o comércio, e oportunidades de troca de conhecimento. Estes benefícios deveriam estar disponíveis para todas as pessoas, com o hardware e versão do software que utilizam, sua infra-estrutura de rede, linguagem nativa, cultura, localização geográfica, habilidade física e conhecimento. Estes aspectos estão relacionados tanto a questões sociais quanto tecnológicas. Considerando a diversidade de usuários e a complexidade de situações possíveis de uso da Web, buscam-se soluções para interfaces mais flexíveis, que possibilitem sua adaptação a diferentes contextos de uso. Este trabalho apresenta uma abordagem para solucionar o problema de como desenvolver interfaces de usuário flexíveis para sistemas Web, investigando como interfaces poderiam ser adaptadas a diferentes contextos de uso, considerando o conceito de normas da Semiótica Organizacional. Tal abordagem está representada em um framework, proposto neste trabalho, para apoiar designers e desenvolvedores na construção de interfaces flexíveis. Resultados obtidos na aplicação do framework em um sistema Web real, inserido no contexto da inclusão digital e acesso universal, são apresentados e discutidos nesta obra. Tais resultados são sugestivos da viabilidade da proposta e apontam para seu aprofundamento futuro
Abstract: According to W3C, the social value of the Web is in the fact that it enables communications, business and knowledge sharing opportunities. These benefits should be available for every person regardless of the person's hardware, software, network infrastructure, native language, cultural aspects, geographical location, physical and mental abilities. These aspects are related both to social and technological issues. Considering the differences among users and the complexity of possible Web usage, solutions are sought for more flexible user interfaces that allow their adaptation to different use contexts. This work presents an approach to solve the problem of developing flexible user interfaces for Web systems, investigating how interfaces can be adapted to different use contexts considering the concept of norms from Organizational Semiotics. This approach is represented by a framework, proposed on this work, that may help designers and developers to build flexible Web interfaces that may be adapted according to each use context. Results gathered when the framework was applied in a real Web system related to the context of universal access and digital inclusion are presented and discussed. Such results are suggestive of the proposal's viability and point to further improvements in future research
Master's
Human-Computer Interaction
Master of Computer Science
31

Boutkhil, Soumaya. "A study and implementation of an electronic commerce website using active server pages." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1894.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The purpose of this project is to design an electronic commerce site for the MarocMart company. MarocMart.com is a one-stop shopping company for a number of high-quality products: carpets, jewelry, pottery, wood, leather, metals, fashion items and more. Each article is unique, hand-made by Moroccan craftsmen.
32

Oliver, Gelabert Antoni. "Desarrollo y aceleración hardware de metodologías de descripción y comparación de compuestos orgánicos." Doctoral thesis, Universitat de les Illes Balears, 2018. http://hdl.handle.net/10803/462902.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Introducción El acelerado ritmo al que se genera y crece la información en la sociedad actual y la posible llegada de la tecnología de transistor a sus límites de tamaño exige la puesta en marcha de soluciones para el procesado eficiente de datos en campos específicos de aplicación. Contenido Esta tesis doctoral de carácter transdisciplinar a medio camino entre la ingeniería electrónica y la química computacional presenta soluciones optimizadas en hardware y en software para la construcción y el procesado eficiente de bases de datos moleculares. En primer lugar se propone y se estudia el funcionamiento de bloques digitales que implementan funciones en lógica pulsante estocástica orientadas a tareas de reconocimiento de objetos. Especialmente se proponen y analizan diseños digitales para la construcción de generadores de números aleatorios (RNG) como base de estos sistemas que han sido implementados en dispositivos Field Programable Gate Array (FPGA). En segundo lugar se propone y se evalúa un conjunto reducido de descriptores moleculares para la caracterización de compuestos orgánicos y la generación de bases de datos moleculares. Estos descriptores recogen información sobre la distribución de la carga molecular en el espacio y la energía electrostática. Las bases de datos generadas con estos descriptores se han procesado utilizando sistemas de computación convencionales en software y mediante sistemas de computación estocástica implementados en hardware mediante el uso de circuitería digital programable. Finalmente se proponen optimizaciones para la estimación del potencial electrostático molecular (MEP) y para el cálculo de los puntos de interacción molecular derivados (SSIP). Conclusiones Por una parte, los resultados obtenidos ponen de manifiesto la importancia de la uniformidad de los RNG en el período de evaluación para poder implementar sistemas de computación estocástica de alta fiabilidad. Además, los RNG propuestos tienen una naturaleza aperiódica que minimiza las posibles correlaciones entre señales, haciendo que sean adecuados para la implementación de sistemas de computación estocástica. Por otra parte, el conjunto de descriptores moleculares propuestos PED han demostrado obtener muy buenos resultados en comparación con otros métodos presentes en la literatura. Este hecho se ha discutido mediante los parámetros Area Under The Curve (AUC) y Enrichment Factor (EF) obtenidos de las curvas promedio Receiving Operating Characteristic (ROC). Además, se ha mostrado como la eficacia de los descriptores aumenta cuando se implementan en sistemas de clasificación con aprendizaje supervisado, haciéndolos adecuados para la construcción de un sistema de predicción de dianas terapéuticas eficiente. En esta tesis, además, se ha determinado que los MEP calculados utilizando la teoría DFT y el conjunto de bases B3LYP/6-31*G en la superficie con densidad electrónica 0,01 au correlacionan bien con datos experimentales debido presumiblemente a la mayor contribución de las propiedades electrostáticas locales reflejadas en el MEP. Las parametrizaciones propuestas en función del tipo de hibridación atómica pueden haber contribuido también a esta mejora. Los cálculos realizados en dichas superficies suponen mejoras en un factor cinco en la velocidad de procesamiento del MEP. 
Dado el aceptable ajuste a datos experimentales del método propuesto para el cálculo del MEP aproximado y de los SSIP, éste se puede utilizar con el fin de obtener los SSIP para bases de datos moleculares extensas o en macromoléculas como proteínas de manera muy rápida (ya que la velocidad de procesamiento obtenida puede alcanzar del orden de cinco mil átomos procesados por segundo utilizando un solo procesador). Estas técnicas resultan de especial interés dadas las numerosas aplicaciones de los SSIP como por ejemplo el cribado virtual de cocristales o la predicción de energías libres en disolución.
Introducció El creixement accelerat de les dades en la societat actual i l'arribada de la tecnologia del transistor als límits físics exigeix la proposta de metodologies per al processament eficient de dades. Contingut Aquesta tesi doctoral, de caràcter transdisciplinària i a mig camí entre els camps de l'enginyeria electrònica i la química computacional presenta solucions optimitzades en maquinari i en programari per tal d’accelerar el processament de bases de dades moleculars. En primer lloc es proposa i s'estudia el funcionament de blocs digitals que implementen funcions de lògica polsant estocàstica aplicades a tasques de reconeixement d'objectes. En concret es proposen i analitzen dissenys específics per a la construcció de generadors de nombres aleatoris (RNG) com a sistemes bàsics per al funcionament dels sistemes de computació estocàstics implementats en dispositius programables com les Field Programable Gate Array (FPGA). En segon lloc es proposen i avaluen un conjunt reduït de descriptors moleculars especialment orientats a la caracterització de compostos orgànics. Aquests descriptors reuneixen la informació sobre la distribució de càrrega molecular i les energies electroestàtiques. Les bases de dades generades amb aquests descriptors s’han processat emprant sistemes de computació convencionals en programari i mitjançant sistemes basats en computació estocàstica implementats en maquinari programable. Finalment es proposen optimitzacions per al càlcul del potencial electroestàtic molecular (MEP) calculat mitjançant la teoria del funcional de la densitat (DFT) i dels punts d’interacció que se’n deriven (SSIP). Conclusions Per una banda, els resultats obtinguts posen de manifest la importància de la uniformitat del RNG en el període d’avaluació per a poder implementar sistemes de computació estocàstics d’alta fiabilitat. A més, els RNG proposats presenten una font d’aleatorietat aperiòdica que minimitza les correlacions entre senyals, fent-los adequats per a la implementació de sistemes de computació estocàstica. Per una altra banda, el conjunt de descriptors moleculars proposats PED, han demostrat obtenir molts bons resultats en comparació amb els mètodes presents a la literatura. Aquest fet ha estat discutit mitjançant l’anàlisi dels paràmetres Area Under The Curve (AUC) i Enrichment Factor (EF) de les curves Receiving Operating Characteristic (ROC) analitzades. A més, s’ha mostrat com l’eficàcia dels descriptors augmenta de manera significativa quan s’implementen en sistemes de classificació amb aprenentatge supervisat com les finestres de Parzen, fent-los adequats per a la construcció d’un sistema de predicció de dianes terapèutiques eficient. En aquesta tesi doctoral, a més, s’ha trobat que els MEP calculats mitjançant la teoria DFT i el conjunt de bases B3LYP/6-31*G en la superfície amb densitat electrònica 0,01 au correlacionen bé amb dades experimentals possiblement a causa de la contribució més gran de les propietats electroestàtiques locals reflectides en el MEP. Les parametritzacions proposades en funció del tipus d’hibridació atòmica han contribuït també a la millora dels resultats. Els càlculs realitzats en aquestes superfícies suposen un guany en un factor cinc en la velocitat de processament del MEP. 
Donat l’acceptable ajust a les dades experimentals del mètode proposat per al càlcul del MEP aproximat i dels SSIP que se’n deriven, aquest procediment es pot emprar per obtenir els SSIP en bases de dades moleculars extenses i en macromolècules (com ara proteïnes) d’una manera molt ràpida (ja que la velocitat de processament obtinguda arriba fins als cinc mil àtoms per segon amb un sol processador). Les tècniques proposades en aquesta tesi doctoral resulten d’interès donades les nombroses aplicacions que tenen els SSIP com per exemple, en el cribratge virtual de cocristalls o en la predicció d’energies lliures en dissolució.
Introduction: The rapid growth of data in today's digital society, and the fact that transistor technology may be approaching its physical limits, are strong reasons to focus on technical solutions for efficient data processing. Contents: This transdisciplinary thesis, spanning electronic engineering and computational chemistry, presents optimized hardware and software solutions for processing molecular databases. First, a set of stochastic computing systems is proposed and studied for ultrafast pattern recognition. In particular, specific digital designs for Random Number Generators (RNG), the basic building blocks of stochastic functions, are proposed and analyzed; the digital platform used to generate the results is a Field Programmable Gate Array (FPGA). Second, a reduced set of molecular descriptors is proposed and evaluated for building a compact molecular database. The proposed descriptors gather charge and molecular geometry information, and the resulting databases are processed both with conventional software computing and with stochastic computing implemented in hardware. Finally, optimizations are proposed for the Molecular Electrostatic Potential (MEP) and the derived Surface Site Interaction Points (SSIP). Conclusions: First, the results show the importance of RNG uniformity within the evaluation period for implementing high-precision stochastic computing systems. In addition, the proposed RNGs have an aperiodic behaviour that avoids potential correlations between stochastic signals, which makes them suitable for implementing stochastic computing systems. Second, the proposed PED molecular descriptors provide good results compared with other methods in the literature, as discussed using the Area Under the Curve (AUC) and Enrichment Factor (EF) of averaged Receiver Operating Characteristic (ROC) curves. Their performance improves further when they are used in supervised machine-learning classifiers, making them appropriate for therapeutic target prediction. Third, efficient molecular database characterization and stochastic computing circuitry can be combined to implement ultrafast information processing systems. Moreover, the MEP calculated with DFT and the B3LYP/6-31*G basis set on the 0.01 au electron-density surface correlates well with experimental data, presumably because of the larger contribution of local electrostatics and the refinement introduced by parameterizing the MEP as a function of the atomic hybridization type; calculation on the 0.01 au surface is also five times faster than on the 0.002 au surface. Finally, given the acceptable agreement between experimental data and the theoretical results obtained with the proposed MEP and SSIP calculation, the method is suitable for quickly processing large molecular databases and macromolecules (the processing speed can reach about five thousand atoms per second on a single processor). These techniques are of special interest given the many applications of SSIP, for instance in virtual cocrystal screening and the prediction of free energies in solution.
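To make the stochastic-computing idea above concrete, the following is a minimal sketch of unipolar stochastic multiplication, where two probabilities encoded as random bitstreams are multiplied with a bitwise AND. It is a generic illustration, not the thesis's FPGA design, and NumPy's generator stands in for the proposed hardware RNGs.

import numpy as np

def to_bitstream(p: float, n: int, rng: np.random.Generator) -> np.ndarray:
    """Encode probability p as a length-n stream of 0/1 bits (unipolar coding)."""
    return (rng.random(n) < p).astype(np.uint8)

def stochastic_multiply(p_a: float, p_b: float, n: int = 4096) -> float:
    rng = np.random.default_rng(0)
    a = to_bitstream(p_a, n, rng)
    b = to_bitstream(p_b, n, rng)
    # With independent streams, AND-ing the bits multiplies the encoded probabilities.
    return float(np.mean(a & b))

print(stochastic_multiply(0.6, 0.5))  # close to 0.30; accuracy grows with n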
33

Ramanan, Paritosh. "INDIGO: An In-Situ Distributed Gossip System Design and Evaluation." 2015. http://scholarworks.gsu.edu/cs_theses/81.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Distributed gossip in networks is a well-studied problem that can be tackled with different gossiping styles. This work focuses on the development, analysis and evaluation of a novel in-situ distributed gossip protocol framework called INDIGO. A core aspect of INDIGO is its ability to execute on a simulation setup as well as on a system testbed in a seamless manner, allowing easy portability. The evaluations focus on applying INDIGO to problems such as distributed average consensus, distributed seismic event location and, lastly, distributed seismic tomography. The results obtained herein validate the efficacy and reliability of INDIGO.
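For context, a minimal sketch of the randomized pairwise gossip averaging that underlies distributed average consensus is given below; it is a generic scheme, not INDIGO's actual protocol, and the topology and variable names are illustrative.

import random

def gossip_average(values, neighbours, rounds=2000, seed=1):
    """Pairwise randomized gossip: in each round a random node averages its
    value with a random neighbour; all values converge towards the global mean."""
    x = list(values)
    rng = random.Random(seed)
    for _ in range(rounds):
        i = rng.randrange(len(x))
        j = rng.choice(neighbours[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Ring of 5 sensor nodes with local readings
readings = [3.0, 7.0, 1.0, 9.0, 5.0]
ring = {k: [(k - 1) % 5, (k + 1) % 5] for k in range(5)}
print(gossip_average(readings, ring))  # each entry approaches the mean, 5.0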
34

Shi, Lei. "Real-time In-situ Seismic Tomography in Sensor Network." 2016. http://scholarworks.gsu.edu/cs_diss/111.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Seismic tomography is a technique for illuminating the physical dynamics of the Earth using seismic waves generated by earthquakes or explosions. In both industry and academia, seismic exploration does not yet have the capability of imaging seismic tomography in real time and at high resolution, for two reasons. First, raw seismic data are currently recorded locally on sensor nodes and then collected manually at central observatories for post-processing, a process that may take months to complete. Second, high-resolution tomography requires a large and dense sensor network, and real-time data retrieval from a large number of wireless seismic nodes to a central server is virtually impossible due to the sheer data volume and resource limitations. This limits our ability to understand earthquake-zone or volcano dynamics. Obtaining seismic tomography in real time and at high resolution therefore demands a new sensor network system design for raw seismic data processing and distributed tomography computation. Based on these requirements, three research aspects are addressed in this work. First, a distributed multi-resolution evolving tomography algorithm is proposed to compute tomography within the network, avoiding costly data collection and centralized computation. Second, InsightTomo, an end-to-end sensor network emulation platform, is designed to emulate the entire process from data recording to delivery of the tomography image. Third, a sensor network testbed is presented to verify the related methods and design in the real world; the platform consists of hardware, sensing and data processing components.
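Travel-time tomography of the kind described above is usually posed as a sparse linear inverse problem G m = t, where each row of G holds ray-path lengths through the cells of a slowness model m and t holds observed travel times. Below is a minimal sketch of the classic Kaczmarz/ART iteration often used for such systems; it is generic and not the thesis's distributed multi-resolution algorithm, and the matrix sizes are toy values.

import numpy as np

def art_invert(G: np.ndarray, t: np.ndarray, sweeps: int = 50, relax: float = 0.5) -> np.ndarray:
    """Kaczmarz / Algebraic Reconstruction Technique for G @ m = t."""
    m = np.zeros(G.shape[1])
    for _ in range(sweeps):
        for i in range(G.shape[0]):
            gi = G[i]
            norm = gi @ gi
            if norm > 0.0:
                # Project the current model onto the hyperplane of equation i.
                m += relax * (t[i] - gi @ m) / norm * gi
    return m

# Tiny synthetic example: 3 rays crossing 2 cells
G = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
true_m = np.array([0.5, 0.25])   # slowness per cell
t = G @ true_m                   # noise-free travel times
print(art_invert(G, t))          # approaches [0.5, 0.25]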
35

(10223831), Yuankun Fu. "Accelerated In-situ Workflow of Memory-aware Lattice Boltzmann Simulation and Analysis." Thesis, 2021.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
As high performance computing systems advance from petascale to exascale, scientific workflows that integrate simulation with visualization and analysis become a key factor in scientific campaigns. As one class of campaigns studying fluid behaviour, computational fluid dynamics (CFD) simulations have progressed rapidly over the past several decades and have transformed many fields. The lattice Boltzmann method (LBM) is an evolving CFD approach that significantly reduces the complexity of conventional CFD methods and can simulate complex fluid flow phenomena at lower computational cost. This research focuses on accelerating the workflow of LBM simulation and data analysis.

I start my research on how to effectively integrate each component of a workflow at extreme scales. First, we design an in-situ workflow benchmark that integrates seven state-of-the-art in-situ workflow systems with three synthetic applications, two real-world CFD applications, and the corresponding data analysis. Detailed performance analysis using visualized tracing shows that even the fastest existing workflow system still has 42% overhead. I then develop a novel minimized end-to-end workflow system, Zipper, which combines the fine-grain task parallelism of full asynchrony with pipelining. I also design a novel concurrent data transfer optimization method, which employs a multi-threaded work-stealing algorithm to transfer data over both the network and the parallel file system. It reduces data transfer time by up to 32%, especially when the simulation application is stalled, and investigation with OmniPath network tools shows that network congestion is alleviated by up to 80%. Finally, the scalability of the Zipper system is verified by a performance model and various large-scale workflow experiments on two HPC systems using up to 13,056 cores; Zipper is the fastest workflow system and outperforms the second fastest by up to 2.2 times.

After minimizing the end-to-end time of the LBM workflow, I turn to accelerating the memory-bound LBM algorithms. We first design novel parallel 2D memory-aware LBM algorithms, and I then extend them to 3D memory-aware LBM algorithms that combine single-copy distribution, single sweep, the swap algorithm, prism traversal, and the merging of multiple temporal time steps. Strong scalability experiments on three HPC systems show that the 2D and 3D memory-aware LBM algorithms outperform the fastest existing LBM by up to 4 times and 1.9 times, respectively. The reasons for the speedup are illustrated by theoretical algorithm analysis, and experimental roofline charts on modern CPU architectures show that the memory-aware algorithms improve the arithmetic intensity (AI) of the fastest existing LBM by up to 4.6 times.
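For readers unfamiliar with the lattice Boltzmann method discussed above, a minimal (non-memory-aware) D2Q9 BGK collide-and-stream step is sketched below in NumPy. It illustrates only the baseline algorithm on a periodic domain, not the thesis's fused or prism-traversal optimizations; the grid size and relaxation time are arbitrary.

import numpy as np

# D2Q9 lattice: discrete velocities and weights
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = C[:, 0, None, None] * ux + C[:, 1, None, None] * uy   # (9, ny, nx)
    usq = ux**2 + uy**2
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.6):
    """One BGK collide-and-stream update on a fully periodic domain."""
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau                  # collision
    for i, (cx, cy) in enumerate(C):                           # streaming
        f[i] = np.roll(np.roll(f[i], cy, axis=0), cx, axis=1)
    return f

ny, nx = 32, 64
f = equilibrium(np.ones((ny, nx)), np.zeros((ny, nx)), np.zeros((ny, nx)))
f = lbm_step(f)
print(f.shape)  # (9, 32, 64)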
36

CHEN, CHANG-HONG, and 陳昶宏. "A Comparison of Cloud Computing and On-Site Computing for Robotic Image Recognition." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/62375149586847967125.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Master's
National Yunlin University of Science and Technology
Department of Electronic Engineering
104
Autonomous robots can use a camera to detect target objects and act on the detected objects. This computer vision problem requires intensive computation, but current mobile devices such as smartphones are generally unable to deliver sufficient computing power. This study tested a robot with camera vision based on an Arduino Yún. The robot sent its image stream to a PC server over a Wi-Fi connection; the server detected a target object with the OpenCV library and sent commands back to the Arduino Yún to control the robot. This study compared this cloud approach with previous studies that performed local computation on mobile devices. The results showed that the cloud approach had some advantages.
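A minimal sketch of the kind of server-side detection loop described above is given below, using colour thresholding in OpenCV on frames pulled from the robot's video stream. The stream URL, HSV range and command strings are placeholders, and the original thesis may have used a different detection method.

import cv2
import numpy as np

STREAM_URL = "http://192.168.0.50:8080/stream"   # placeholder for the robot's camera feed
LOWER = np.array([20, 100, 100])                 # example HSV range (yellow-ish target)
UPPER = np.array([35, 255, 255])

cap = cv2.VideoCapture(STREAM_URL)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # OpenCV 4.x: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cx = x + w // 2
        # Decide a steering command from the target's horizontal position in the frame.
        third = frame.shape[1] // 3
        command = "LEFT" if cx < third else "RIGHT" if cx > 2 * third else "FORWARD"
    else:
        command = "SEARCH"
    print(command)   # in the real system this would be sent back to the Arduino Yun over Wi-Fi
cap.release()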
37

Diesel, Brian. "Site-specific computing for a data-based place /." 2007. http://proquest.umi.com/pqdweb?did=1417816361&sid=6&Fmt=2&clientId=39334&RQT=309&VName=PQD.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Thesis (M.Arch.)--State University of New York at Buffalo, 2007.
Title from PDF title page (viewed on Feb. 19, 2008). Available through UMI ProQuest Digital Dissertations. Thesis advisers: Bohlen, Marc; Khan, Omar. Includes bibliographical references.
38

Chen, Sung-Yi, and 陳松毅. "A Multi-site Resource Allocation Strategy in Grid Computing Environments." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/36316767310101348663.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Master's
Tunghai University
Department of Computer Science and Information Engineering
95
Grid computing harnesses distributed heterogeneous resources, including different platforms, hardware and software, computer architectures and programming languages, which are geographically distributed and governed by different Administrative Domains, over a network using open standards to solve large-scale computational problems. As more Grids are deployed worldwide, the number of multi-institutional collaborations is growing rapidly. However, to realize Grid computing's full potential, Grid participants must be able to use one another's resources. This work presents a multi-site resource allocation (MSRA) strategy that enables a Resource Broker to dispatch jobs to appropriate resources across two different Administrative Domains; the experimental results show that MSRA performs better than other strategies. The work addresses information gathering and focuses on providing a domain-based model for network information measurement using the Network Weather Service (NWS) in Grid computing environments. We used the Ganglia and NWS tools to monitor resource status and network-related information, respectively. The proposed broker provides secure, up-to-date information about available resources and serves as a link to the diverse systems available in the Grid.
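As a rough sketch of the kind of ranking a resource broker can apply when combining host status (as reported by a monitor such as Ganglia) with network measurements (as reported by NWS), consider the snippet below; the weights and field names are illustrative assumptions and do not reproduce the MSRA strategy itself.

def rank_sites(sites, w_cpu=0.5, w_load=0.3, w_net=0.2):
    """Order candidate sites by a weighted score of free CPUs, load and bandwidth.

    `sites` is a list of dicts with keys: name, free_cpus, load_avg, bandwidth_mbps.
    A higher score means a more attractive site for job dispatch.
    """
    def score(s):
        return (w_cpu * s["free_cpus"]
                - w_load * s["load_avg"]
                + w_net * s["bandwidth_mbps"] / 100.0)
    return sorted(sites, key=score, reverse=True)

sites = [
    {"name": "domainA/cluster1", "free_cpus": 16, "load_avg": 0.4, "bandwidth_mbps": 940},
    {"name": "domainB/cluster2", "free_cpus": 32, "load_avg": 2.1, "bandwidth_mbps": 320},
]
print([s["name"] for s in rank_sites(sites)])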
39

Richardson, Wendy Westenberg. "Voronoi site modeling: a computer model to predict the binding affinity of small flexible molecules." 1993. http://catalog.hathitrust.org/api/volumes/oclc/68796719.html.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Kajita, Marcos Suguru. "Google app engine case study : a micro blogging site." Thesis, 2009. http://hdl.handle.net/2152/ETD-UT-2009-12-565.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Cloud computing refers to the combination of large scale hardware resources at datacenters integrated by system software that provides services, commonly known as Software-as-a-Service (SaaS), over the Internet. As a result of more affordable datacenters, cloud computing is slowly making its way into the mainstream business arena and has the potential to revolutionize the IT industry. As more cloud computing solutions become available, it is expected that there will be a shift to what is sometimes referred to as the Web Operating System. The Web Operating System, along with the sense of infinite computing resources on the “cloud” has the potential to bring new challenges in software engineering. The motivation of this report, which is divided into two parts, is to understand these challenges. The first part gives a brief introduction and analysis of cloud computing. The second part focuses on Google’s cloud computing platform and evaluates the implementation of a micro blogging site using Google’s App Engine.
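A minimal sketch of a micro-blog post handler on the classic App Engine Python standard environment (webapp2 plus the ndb datastore API) is shown below. The model, field and route names are placeholders and this is not the report's actual code.

import webapp2
from google.appengine.ext import ndb

class Post(ndb.Model):
    author = ndb.StringProperty()
    content = ndb.StringProperty()                      # short, tweet-like message
    created = ndb.DateTimeProperty(auto_now_add=True)

class TimelineHandler(webapp2.RequestHandler):
    def get(self):
        # Show the 20 most recent posts as plain text.
        posts = Post.query().order(-Post.created).fetch(20)
        self.response.headers["Content-Type"] = "text/plain"
        for p in posts:
            self.response.write(u"{}: {}\n".format(p.author, p.content))

    def post(self):
        # Store a new post from the submitted form fields, then return to the timeline.
        Post(author=self.request.get("author"),
             content=self.request.get("content")).put()
        self.redirect("/")

app = webapp2.WSGIApplication([("/", TimelineHandler)])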
41

TORRE, MARCO. "INDAGINI INFORMATICHE E PROCESSO PENALE." Doctoral thesis, 2016. http://hdl.handle.net/2158/1028650.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Many questions about digital evidence in criminal proceedings remain open, and all of them are highly complex. Starting from the very definition of the notion, on which, as we shall see, interpreters are far from unanimous, this work reviews the main problems connected with the static and the dynamic profile of acquiring digital evidence for investigative and probative purposes. The first part of the work deals with the acquisition of offline digital evidence, which is the most problematic phase in the handling of digital evidence. Here the clash between scholarship and case law is strictly technical and concerns whether the on-site acquisition of files is a repeatable or an unrepeatable activity in the context of crime-scene examination (art. 354, para. 2, c.p.p.), inspection (art. 244, para. 2, c.p.p.), search (artt. 247, para. 1-bis, and 352, para. 1-bis, c.p.p.) and seizure (art. 254-bis c.p.p.). Within unrepeatable technical operations it is also necessary to distinguish between assessments that modify the source of evidence and assessments that modify the elements of evidence, since different rules govern each. The starting point of our reflection is Law no. 48 of 18 March 2008: after this reform, the code of criminal procedure deals with digital evidence by revisiting old, typified institutions. The legislative technique used to make room for digital evidence in the code was to supplement the old provisions on inspections, searches and seizures with a tautological formula common to all three means of searching for evidence: "adopting technical measures aimed at ensuring the preservation of the original data and preventing their alteration". As we shall see, the problem is that in the digital domain it is inherently difficult to distinguish between assessment, inspection, search and seizure. It would probably have been better to introduce a new, ad hoc search instrument for digital evidence, regulating in greater detail the activities to be carried out in order to secure its probative value. The haste imposed by European deadlines was probably a bad adviser, and the 2008 reform turned into a legislative "copy and paste" (from the European source into Italian law) that did not take into account the specific scientific reality involved. In other, simpler words, new institutions were regulated with old schemes, ignoring the fact that in the digital domain it is impossible to distinguish between inspections, searches and seizures: what matters is the "apprehension" of the digital evidence with methods and techniques capable of reconciling fact-finding with individual guarantees.
By relying on the old rules on urgent assessments, inspections, searches and seizure, the legislator created significant interpretative problems, bound to surface whenever one tries to fit a given operational activity into one or another of the categories provided by the code, with obvious consequences in terms of applicable rules and required guarantees. The second part of this work examines, also from a de iure condendo perspective, online computer investigations. This topic raises the question of the legitimacy of atypical investigations and of the consequent usability of their results, with specific reference to the limits arising from exclusionary rules of constitutional origin. As we shall see, apart from particular cases, problems involving only atypical evidence rarely emerge in judicial practice; the real dilemma is atypical investigations, that is, investigative activities entirely unbound from positive-law constraints. From this perspective the 2008 reform was a missed opportunity, since the legislator did not distinguish between offline and online digital evidence (the former stored on a computer's mass memory or on supplementary media such as CDs, DVDs or USB drives; the latter accessible through a telematic network). This last, dynamic profile of digital evidence has not been regulated at all; it would have been appropriate to provide for this kind of digital capture through a typified, ad hoc instrument, not unlike what is done today for the interception of telephone and environmental communications, with detailed rules on the cases in which, and the ways in which, such investigative intrusion is allowed. For this reason, at the end of a chapter entirely devoted to the so-called "captatore informatico" (remote surveillance software), the author attaches his own proposal to insert into Book III, Title III, of the code of criminal procedure a Chapter V (artt. 271-bis to 271-sexies) on "Computer programs for the remote acquisition of data and information present in a computer or telematic system", clearly modelling its discipline on the rules governing interceptions, though with some elements of novelty. The following chapters review the other types of covert digital investigation currently used in practice, underlining their critical aspects: telematic interceptions; electronic tracking; data retention; undercover investigations and website monitoring; cloud computing; OSINT. This not very reassuring premise should not discourage the reader. The remainder of this work dwells on what the legislator has written and on what it has not written (or has not wanted to write). After examination and criticism, however, concrete solutions are proposed, certainly debatable, but present and practicable. The idea behind this work is simple: the prohibition of non liquet should apply not only to the judge, but also and above all to anyone who, at any level, intends (rightly) to criticise that judge or legislator. Only in this way does criticism become constructive and conducive to a legal science worthy of the name.
