Dissertations on the topic "In situ computing"
Consult the top 41 dissertations for research on the topic "In situ computing".
Ranisavljević, Elisabeth. "Cloud computing appliqué au traitement multimodal d’images in situ pour l’analyse des dynamiques environnementales." Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20128/document.
Analyzing landscapes, their dynamics, and environmental evolution requires regular data from the field, specifically for glacier mass balance in Spitsbergen and high-mountain areas. Because of poor weather conditions, including the heavy cloud cover common at polar latitudes, and because of its cost, daily satellite imaging is not always accessible. Besides, fast events like floods or snow cover are missed by satellite-based studies, whose sampling rate is too slow to observe them. We complement satellite imagery with a set of ground-based, autonomous, automated digital cameras that take three pictures a day. These pictures form a huge database, and each picture needs several processing steps to extract its information (geometric corrections, atmospheric disturbances, classification, etc.); only computational infrastructure can store and manage all this information. Cloud computing, which has become more accessible in recent years, offers IT resources (computing power, storage, applications, etc.) as services. The storage of the huge geographical dataset could, in itself, be a reason to use cloud computing, but in addition to storage space, the cloud offers easy access, a scalable architecture, and modularity in the available services. As part of the analysis of in situ images, cloud computing makes it possible to set up an automated tool that processes all the data despite the variety of disturbances and the data volume. By decomposing the image processing into several tasks implemented as web services, the composition of these services allows us to adapt the processing to the conditions of each image.
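To make the service-composition idea in this abstract concrete, here is a minimal Python sketch of chaining image-processing steps into a pipeline that can be adapted per image; all function names and steps are hypothetical illustrations, not the actual web services of the thesis.

```python
# Minimal sketch of composing image-processing steps as independent
# services. All names are hypothetical, not the thesis's real services.
from typing import Callable, Dict, List

Image = Dict  # stand-in for a real image structure (array + metadata)

def georeference(img: Image) -> Image:
    img["georeferenced"] = True  # placeholder for a geometric correction
    return img

def mask_clouds(img: Image) -> Image:
    img["cloud_masked"] = True   # placeholder for atmospheric filtering
    return img

def classify_snow(img: Image) -> Image:
    img["snow_cover"] = 0.42     # placeholder classification result
    return img

def compose(steps: List[Callable[[Image], Image]]) -> Callable[[Image], Image]:
    """Chain independent processing services into one workflow."""
    def pipeline(img: Image) -> Image:
        for step in steps:
            img = step(img)
        return img
    return pipeline

# The composition is adapted to the conditions of each image:
cloudy_day = compose([georeference, mask_clouds, classify_snow])
clear_day = compose([georeference, classify_snow])
print(cloudy_day({"id": "cam1-2016-06-01"}))
```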
Adhinarayanan, Vignesh. "Models and Techniques for Green High-Performance Computing." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/98660.
Doctor of Philosophy
Past research in green high-performance computing (HPC) mostly focused on managing the power consumed by general-purpose processors, known as central processing units (CPUs) and to a lesser extent, memory. In this dissertation, we study two increasingly important components: interconnects (predominantly focused on those inside a chip, but not limited to them) and graphics processing units (GPUs). Our contributions in this dissertation include a set of innovative measurement techniques to estimate the power consumed by the target components, statistical and analytical approaches to develop power models and their optimizations, and algorithms to manage power statically and at runtime. Experimental results show that it is possible to build models of sufficient accuracy and apply them for intelligently managing power on multiple levels of the system hierarchy: chip interconnect at the micro-level, heterogeneous nodes at the meso-level, and a supercomputing cluster at the macro-level.
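As a toy illustration of the statistical power-modeling approach mentioned in this abstract, the sketch below fits a linear power model over activity counters with ordinary least squares; the counter names, data, and coefficients are invented for illustration and are not taken from the dissertation.

```python
# Hedged sketch of a statistical power model: fit component power as a
# linear function of activity counters. All numbers are illustrative.
import numpy as np

# rows: observations; columns: [flit_traffic, cache_misses, constant]
counters = np.array([
    [1.0e6, 2.0e5, 1.0],
    [3.0e6, 1.0e5, 1.0],
    [5.0e6, 4.0e5, 1.0],
    [7.0e6, 3.0e5, 1.0],
])
measured_watts = np.array([12.1, 18.0, 27.9, 33.2])

coef, *_ = np.linalg.lstsq(counters, measured_watts, rcond=None)
predicted = counters @ coef
error = np.abs(predicted - measured_watts).mean()
print(f"model coefficients: {coef}, mean abs error: {error:.2f} W")
```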
Li, Shaomeng. "Wavelet Compression for Visualization and Analysis on High Performance Computers." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23905.
Alomar, Barceló Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems." Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism, and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. On the other hand, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics, and sensor networks. However, the hardware realization of ANNs with large neuron counts, such as in RC, is a challenging task due to the large resource requirements of the involved operations. Despite the potential benefits of digital hardware circuits for RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the diverse alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed, and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
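For readers unfamiliar with reservoir computing, the following minimal echo state network sketch shows the structure such theses map to hardware: a fixed random recurrent reservoir updated as x(t+1) = tanh(W_in u(t) + W x(t)), with only a linear readout trained by ridge regression. Sizes, constants, and the toy task are illustrative assumptions, not the thesis designs.

```python
# Minimal echo state network sketch: fixed random reservoir,
# trained linear readout. All sizes/constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(inputs):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)  # state update
        states.append(x.copy())
    return np.array(states)

# toy task: one-step-ahead prediction of a sine wave
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```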
Santos, Rodríguez Patrícia. "Computing-Based Testing: conceptual model, implementations and experiments extending IMS QTI." Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/69962.
The use of automatically graded tests in Technology-Enhanced Learning relies on computers. Current proposals focus on the design of new question types, with IMS Question and Test Interoperability (QTI) as the de facto standard. This thesis proposes that the domain can be extended with the design of advanced test scenarios that integrate new interaction contexts for presenting questions and tests, and that consider diverse technological devices to enable different types of activities. In this context, the term Computing-Based Testing (CBT) is proposed for the domain, instead of Computer-Based Testing, to emphasize the role of technology in test-based assessment. Advanced CBT scenarios can increase teachers' ability to design tests better suited to their courses, enabling the assessment of higher-order skills. With the main challenge of modeling the CBT domain by extending the current possibilities of QTI and existing approaches, this thesis provides a set of contributions tied to three objectives. The first objective is to propose a Conceptual Model that defines and relates three dimensions: Question, Test, and Activity. A framework is proposed to guide the categorization and design of CBT scenarios, together with two models that specify the elements for the technological representation of questions and tests. These platform-independent models (PIMs) extend QTI by formulating the elements needed to implement advanced CBT scenarios, and the use of patterns is proposed as a complement for modeling the domain. The second objective is to show the relevance and applicability of these contributions through representative scenarios and case studies in real contexts, evaluating the design and implementation of a set of experiments and systems. In every experiment, the Conceptual Model is used to design advanced CBT scenarios, and for each case the CBT-PIMs serve as the basis for developing platform-specific models (CBT-PSMs) and associated systems. The evaluation shows that the resulting implementations bring positive educational benefits, enabling the assessment of higher-order skills and improving student motivation. Finally, the third objective focuses on proposing extension paths for QTI. The collection of proposed models suggests different directions for extending QTI toward the implementation of advanced questions, tests, and activities. The implemented scenarios and systems represent reference implementations and good practices for the proposed extension paths.
Dirand, Estelle. "Développement d'un système in situ à base de tâches pour un code de dynamique moléculaire classique adapté aux machines exaflopiques." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM065/document.
The exascale era will widen the gap between data generation rate and the time to manage output and analysis in a post-processing way, dramatically increasing the end-to-end time to scientific discovery and calling for a shift toward new data-processing methods. The in situ paradigm proposes to analyze data while still resident in the supercomputer memory, reducing the need for data storage. Several techniques already exist: executing simulation and analytics on the same nodes (in situ), using dedicated nodes (in transit), or combining the two approaches (hybrid). Most in situ techniques target simulations that cannot fully exploit the ever-growing number of cores per processor, and they are not designed for the emerging manycore processors. Task-based programming models, on the other hand, are expected to become a standard for these architectures, but few task-based in situ techniques have been developed so far. This thesis studies the design and integration of a novel task-based in situ framework inside a task-based molecular dynamics code designed for exascale supercomputers. We take advantage of the composability properties of the task-based programming model to implement the TINS hybrid framework. Analytics workflows are expressed as graphs of tasks that can in turn generate children tasks to be executed in transit, or interleaved with simulation tasks in situ. The in situ execution is performed thanks to an innovative dynamic helper core strategy that uses the work-stealing concept to finely interleave simulation and analytics tasks inside a compute node, with a low overhead on the simulation execution time. TINS uses the Intel® TBB work-stealing scheduler and is integrated into ExaStamp, a task-based molecular dynamics code. Various experiments have shown that TINS is up to 40% faster than state-of-the-art in situ libraries. Molecular dynamics simulations of up to 2 billion particles on up to 14,336 cores have shown that TINS is able to execute complex analytics workflows at a high frequency with an overhead smaller than 10%.
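The actual TINS implementation builds on Intel TBB's work-stealing scheduler in C++; the short Python sketch below only illustrates the interleaving idea described in the abstract (simulation steps spawn analytics tasks that run concurrently on shared helper threads), not the TINS API.

```python
# Sketch of interleaving simulation and analytics inside one node:
# the main loop runs simulation steps and offloads the analytics they
# spawn to helper threads. Illustrative only, not the TINS framework.
from concurrent.futures import ThreadPoolExecutor, wait

def analyze(step, data):
    return (step, sum(data) / len(data))  # stand-in analytics kernel

def simulate_step(pool, step):
    data = [step * i for i in range(1, 1000)]  # stand-in simulation work
    # spawn an analytics child task; it runs interleaved with later steps
    return pool.submit(analyze, step, data)

with ThreadPoolExecutor(max_workers=4) as pool:
    analytics_futures = [simulate_step(pool, s) for s in range(8)]
    done, _ = wait(analytics_futures)
    for f in done:
        print("analytics result:", f.result())
```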
Carlson, Darren Vaughn. "Ocean. Towards Web-scale context-aware computing. A community-centric, wide-area approach for in-situ, context-mediated component discovery and composition." Lübeck Zentrale Hochschulbibliothek Lübeck, 2010. http://d-nb.info/1001862880/34.
Dutta, Soumya. "In Situ Summarization and Visual Exploration of Large-scale Simulation Data Sets." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524070976058567.
Lemon, Alexander Michael. "A Shared-Memory Coupled Architecture to Leverage Big Data Frameworks in Prototyping and In-Situ Analytics for Data Intensive Scientific Workflows." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7545.
Soumagne, Jérome. "An In-situ Visualization Approach for Parallel Coupling and Steering of Simulations through Distributed Shared Memory Files." PhD thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00788826.
Meyer, Lucas. "Deep Learning en Ligne pour la Simulation Numérique à Grande Échelle." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALM001.
Many engineering applications and scientific discoveries rely on faithful numerical simulations of complex phenomena. These phenomena are transcribed mathematically into partial differential equations (PDEs), whose solutions are generally approximated by solvers that perform intensive computation and generate tremendous amounts of data. Applications rarely require a single simulation but rather a large ensemble of runs for different parameters, to analyze the sensitivity of the phenomenon or to find an optimal configuration. Such large ensemble runs are limited by computation time and finite memory capacity. The high computational cost has led to the development of high-performance computing (HPC) and surrogate models. Recently, pushed by the success of deep learning in computer vision and natural language processing, the scientific community has considered its use to accelerate numerical simulations. The present thesis follows this approach, first presenting two machine learning techniques for surrogate models. First, we propose to use a series of convolutions on hierarchical graphs to reproduce the velocity of fluids as generated by solvers at any time of the simulation. Second, we hybridize regression algorithms with classical reduced-order modeling techniques to identify the coefficients of any new simulation in a reduced basis computed by proper orthogonal decomposition. These two approaches, like the majority found in the literature, are supervised: their training requires generating a large number of simulations, so they suffer from the same problem that motivated their development in the first place: generating many faithful simulations at scale is laborious. We therefore propose a generic training framework for artificial neural networks that generates simulation data on the fly by leveraging HPC resources. Data are produced by running several instances of the solver simultaneously for different parameters; the solver itself can be parallelized over several processing units. As soon as a time step is computed by any simulation, it is streamed for training. No data is ever written to disk, thus overcoming slow input-output operations and alleviating the memory footprint. Training is performed by several GPUs with distributed data parallelism. Because the training is now online, it induces a bias in the data compared to classical training, in which samples are drawn uniformly from an ensemble of simulations available a priori. To mitigate this bias, each GPU is associated with a memory buffer in charge of mixing the incoming simulation data. This framework has improved the generalization capabilities of state-of-the-art architectures by exposing them during training to a richer diversity of data than would have been feasible with classical training. Experiments show the importance of the memory-buffer implementation in guaranteeing generalization capabilities and high-throughput training. The framework has been used to train a deep surrogate for heat-diffusion simulation in less than 2 hours on 8 TB of data processed in situ, increasing prediction accuracy by 47% compared to a classical setting.
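A minimal sketch of the memory-buffer idea described above: each training process keeps a fixed-size buffer that mixes the incoming stream of simulation time steps (here via reservoir sampling) so that training batches approximate uniform sampling over the stream. This is an illustrative reconstruction under stated assumptions, not the thesis code.

```python
# Fixed-size buffer that mixes a stream of simulation time steps.
# Reservoir-style insertion keeps a uniform sample of everything seen.
import random

class StreamBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def put(self, sample):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = sample  # replace with decreasing probability

    def batch(self, size):
        return random.sample(self.items, min(size, len(self.items)))

buf = StreamBuffer(capacity=1000)
for step in range(10_000):            # stand-in for streamed solver output
    buf.put({"step": step, "field": step * 0.1})
print("training batch of", len(buf.batch(32)), "samples")
```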
Su, Yu. "Big Data Management Framework based on Virtualization and Bitmap Data Summarization." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420738636.
Honore, Valentin. "Convergence HPC - Big Data : Gestion de différentes catégories d'applications sur des infrastructures HPC." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0145.
Numerical simulations are complex programs that allow scientists to solve, simulate, and model complex phenomena. High-performance computing (HPC) is the domain in which these complex and heavy computations are performed on large-scale computers, also called supercomputers. Nowadays, most scientific fields need supercomputers to undertake their research, among them cosmology, physics, biology, and chemistry. Recently, we observe a convergence between Big Data/Machine Learning and HPC. Applications coming from these emerging fields (for example, using deep learning frameworks) are becoming highly compute-intensive, so HPC facilities have emerged as an appropriate solution to run them. The large variety of existing applications imposes a requirement on all supercomputers: they must be generic and compatible with all kinds of applications. Computing nodes are similarly varied, going from CPUs to GPUs, with specific nodes designed to perform dedicated computations; each category of node performs very fast operations of a given type (for example, vector or matrix computation). Supercomputers are used in a competitive environment: multiple users simultaneously connect and request sets of computing resources to run their applications. This competition for resources is managed by the machine itself via a specific program called the scheduler, which reviews, assigns, and maps the different user requests. Each user pays for access to the resources of the supercomputer in order to run an application, and is granted access to some resources for a limited amount of time. This means that users need to estimate how many compute nodes they want to request and for how long, which is often difficult to decide. In this thesis, we provide solutions and strategies to tackle these issues. We propose mathematical models, scheduling algorithms, and resource-partitioning strategies to optimize high-throughput applications running on supercomputers. We focus on two types of applications in the context of the HPC/Big Data convergence: data-intensive and irregular (or stochastic) applications. Data-intensive applications represent typical HPC workflows, made up of two main components. The first, called simulation, is a very compute-intensive code that generates a tremendous amount of data by simulating a physical or biological phenomenon. The second, called analytics, consists of sub-routines that post-process the simulation output to extract, generate, and save the final result of the application. We propose to optimize these applications by designing automatic resource-partitioning and scheduling strategies for both components. To do so, we use the well-known in situ paradigm, which schedules both components together in order to reduce the huge cost of saving all simulation data on disks, and we propose automatic resource-partitioning models and scheduling heuristics to improve the overall performance of in situ applications. Stochastic applications are applications whose execution time depends on their input, whereas in usual data-intensive applications the makespan of simulation and analytics is not affected by such parameters. Stochastic jobs originate from Big Data or Machine Learning workloads, whose performance is highly dependent on the characteristics of the input data. These applications have recently appeared on HPC platforms.
However, the uncertainty of their execution time remains a strong limitation when using supercomputers: the user needs to estimate how long the job will run on the machine and enters this estimate as a first reservation value. If the job does not complete within this first reservation, the user must resubmit it, this time requesting a longer reservation.
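As a worked illustration of the reservation problem just described, the sketch below estimates the expected paid machine time of candidate reservation sequences under a deliberately simplified cost model (the user pays the full length of every attempted reservation); the runtime distribution and the sequences are assumptions, not the thesis's exact model.

```python
# Compare the expected cost (reserved hours paid) of reservation
# sequences for a job whose runtime is random. Simplified cost model.
import random

random.seed(42)
runtimes = [random.lognormvariate(1.0, 0.5) for _ in range(100_000)]

def expected_cost(reservations, samples):
    total = 0.0
    for t in samples:
        for r in reservations:
            total += r           # pay for this attempted reservation
            if t <= r:           # job fits within the reservation: done
                break
    return total / len(samples)

for seq in ([8.0], [3.0, 8.0], [2.0, 4.0, 8.0]):
    print(seq, "->", round(expected_cost(seq, runtimes), 2), "hours on average")
```

Under this toy model, a short first reservation followed by longer fallbacks can beat a single conservative reservation, which is the trade-off the thesis formalizes.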
Chen, Yuan. "Using mobile computing for construction site information management." Thesis, University of Newcastle Upon Tyne, 2008. http://hdl.handle.net/10443/164.
Ghorbani, Mohammadmersad. "Computational analysis of CpG site DNA methylation." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/8217.
Löfgren, Alexander. "Making Mobile Meaning : expectations and experiences of mobile computing usefulness in construction site management practice." Doctoral thesis, KTH, Industriell ekonomi och organisation (Inst.), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9216.
Schmelzer, Diana McAllister. "A case study and proposed decision guide for allocating instructional computing resources at the school site level." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/76500.
Ed. D.
Creutz, Julia, and Isabelle Borgkvist. "Smart Hem, smart för vem? : En kvalitativ studie om varför det Smarta Hemmet inte har fått sitt förväntade genomslag." Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-29681.
Smart homes are not smart for everyone, at least not yet. The purpose of this thesis is to examine four barriers that prevent smart homes from being adopted as standard in Sweden. The thesis builds on the contributions of the study "Home Automation in the Wild: Challenges and Opportunities" (Brush et al. 2011) and examines the barriers presented there. Using several different methods, we can conclude that the barriers presented in that study still remain today, although possibly on different terms. In the discussion section, we present a number of ways to work against these barriers and, hopefully, eliminate them.
Okamoto, Sohei. "WIDE web interface development environment /." abstract and full text PDF (free order & download UNR users only), 2005. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433350.
Ward, Michael James. "The capture and integration of construction site data." Thesis, Loughborough University, 2004. https://dspace.lboro.ac.uk/2134/799.
Skyner, Rachael Elaine. "Hydrate crystal structures, radial distribution functions, and computing solubility." Thesis, University of St Andrews, 2017. http://hdl.handle.net/10023/11746.
Sigurjonsdottir, Edda Kristin. "Sit, Eat, Drink, Talk, Laugh – Dining and Mixed Media." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23378.
Posey, Orlando Guy. "Client/Server Systems Performance Evaluation Measures Use and Importance: a Multi-Site Case Study of Traditional Performance Measures Applied to the Client/Server Environment." Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc277882/.
Lemoine, David. "Modèles génériques et méthodes de résolution pour la planification tactique mono-site et multi-site." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00731297.
Стеценко, Анастасія, and Anastasiia Stetsenko. "Особливості створення динамічних презентацій засобами програми Sway" [Features of creating dynamic presentations with the Sway program]. СумДПУ імені А. С. Макаренка, 2017. http://repository.sspu.sumy.ua/handle/123456789/2626.
The article describes the features of creating dynamic presentations in Sway, a new program in the Microsoft Office suite, and analyzes methods of producing such presentations. Special attention is paid to creating presentation documents from files with the extensions DOC, DOCX, PDF, PPT, and PPTX.
De, Silva Buddhima. "Realising end-user driven web application development using meta-design paradigm." View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/44493.
A thesis submitted to the University of Western Sydney, College of Health and Science, School of Computing and Mathematics, in fulfilment of the requirements for the degree of Doctor of Philosophy. Includes bibliographical references.
De, Silva Buddhima. "Realising end-user driven web application development using meta-design paradigm." Thesis, View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/44493.
Bowden, Sarah L. "Application of mobile IT in construction." Thesis, Loughborough University, 2005. https://dspace.lboro.ac.uk/2134/794.
Fortuna, Frederico José. "Normas no desenvolvimento de ambientes Web inclusivos e flexíveis." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275809.
Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: According to the W3C, the social value of the Web lies in the fact that it enables communication, business, and knowledge-sharing opportunities. These benefits should be available to every person regardless of their hardware, software, network infrastructure, native language, cultural aspects, geographical location, and physical or mental abilities. These aspects are related both to social and technological issues. Considering the differences among users and the complexity of possible Web usage, solutions are sought for more flexible user interfaces that allow adaptation to different use contexts. This work presents an approach to the problem of developing flexible user interfaces for Web systems, investigating how interfaces can be adapted to different use contexts using the concept of norms from Organizational Semiotics. This approach is embodied in a framework, proposed in this work, that may help designers and developers build flexible Web interfaces adapted to each use context. Results gathered when the framework was applied to a real Web system in the context of universal access and digital inclusion are presented and discussed. These results suggest the proposal's viability and point to further improvements in future research.
Master's program
Human-Computer Interaction
Master in Computer Science
Boutkhil, Soumaya. "A study and implementation of an electronic commerce website using active server pages." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1894.
Oliver, Gelabert Antoni. "Desarrollo y aceleración hardware de metodologías de descripción y comparación de compuestos orgánicos." Doctoral thesis, Universitat de les Illes Balears, 2018. http://hdl.handle.net/10803/462902.
Introduction: Because of the general growth of data in the current digital era, and because transistor technology may be reaching its physical limits, there is good reason to focus on technical solutions for efficient data processing. Contents: In this transdisciplinary thesis, between electronic engineering and computational chemistry, optimized hardware and software solutions for molecular database processing are presented. First, a set of stochastic computing systems is proposed and studied in order to implement ultrafast pattern-recognition applications. In particular, specific digital designs are proposed and analyzed to create digital random number generators (RNGs) as the basis of stochastic functions; the digital platform used to generate the results is a Field Programmable Gate Array (FPGA). Second, a set of molecular descriptors is proposed and evaluated in order to create a compact molecular database. The proposed descriptors gather charge and molecular-geometry information, and the resulting databases have been processed both with conventional software computing and with hardware stochastic computing. Finally, a set of optimizations is proposed for the molecular electrostatic potential (MEP) and surface site interaction points (SSIPs). Conclusions: First, the results show the relevance of the uniformity of the RNG within the evaluation period for implementing high-precision stochastic computing systems. In addition, the proposed RNGs have an aperiodic behavior that avoids potential correlations between stochastic signals, making them suitable for implementing stochastic computing systems. Second, the proposed molecular descriptors (PED) have been shown to provide good results in comparison with other methods in the literature, as discussed through the area under the curve (AUC) and enrichment factor (EF) of averaged receiver operating characteristic (ROC) curves. Furthermore, the performance of the proposed descriptors increases when they are used in supervised machine-learning algorithms, making them appropriate for therapeutic target prediction. Third, efficient molecular database characterization and stochastic computing circuitry can be combined to implement ultrafast information-processing systems. It has also been found that the MEP calculated using DFT with the B3LYP/6-31*G basis set on the 0.01 au electron-density surface correlates well with experimental data, possibly due to the larger contribution of local electrostatic properties reflected in the MEP and the refinement obtained by parameterizing the MEP as a function of the atomic hybridization type. Additionally, the proposed calculation over the 0.01 au surface is five times faster than the calculation over the 0.002 au surface. Finally, given the acceptable agreement between experimental data and the theoretical results of the proposed MEP and SSIP calculation, the method is suitable for quickly processing large molecular databases and macromolecules such as proteins (the processing speed reaches up to five thousand atoms per second on a single processor). The proposed techniques are of special interest given the numerous applications of SSIPs, for instance in virtual cocrystal screening and the prediction of free energies in solution.
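As background for the stochastic-computing part of this thesis, the sketch below shows the basic principle that makes RNG quality so critical: values are encoded as the probability of 1s in a bitstream, and a bitwise AND of two independent streams multiplies them, with accuracy bounded by the uniformity and independence of the generated streams. This is purely illustrative, not the thesis's FPGA designs.

```python
# Stochastic computing in one idea: encode numbers as bit probabilities;
# ANDing two independent bitstreams multiplies the encoded values.
import random

def bitstream(p, n, rng):
    """Encode probability p as a random bitstream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

rng = random.Random(1)
n = 10_000
a, b = 0.6, 0.5
sa, sb = bitstream(a, n, rng), bitstream(b, n, rng)
product = sum(x & y for x, y in zip(sa, sb)) / n
print(f"stochastic {a}*{b} ~= {product} (exact {a * b})")
```

Correlated or non-uniform streams would bias the AND-gate product, which is why the dissertation's RNG uniformity and aperiodicity results matter.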
Ramanan, Paritosh. "INDIGO: An In-Situ Distributed Gossip System Design and Evaluation." 2015. http://scholarworks.gsu.edu/cs_theses/81.
Shi, Lei. "Real-time In-situ Seismic Tomography in Sensor Network." 2016. http://scholarworks.gsu.edu/cs_diss/111.
Fu, Yuankun. "Accelerated In-situ Workflow of Memory-aware Lattice Boltzmann Simulation and Analysis." Thesis, 2021.
CHEN, CHANG-HONG, and 陳昶宏. "A Comparison of Cloud Computing and On-Site Computing for Robotic Image Recognition." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/62375149586847967125.
National Yunlin University of Science and Technology
Department of Electronic Engineering
104
Autonomous robots can use a camera to detect target objects and handle the detected objects. This computer-vision problem requires intensive computation, but current mobile devices such as smartphones are generally unable to deliver sufficient computing power. This study tested a robot with camera vision built on an Arduino Yún. The robot sent its image stream to a PC server via a Wi-Fi connection; the server detected a target object with the OpenCV library and sent commands back to the Arduino Yún to control the robot. This study compared this cloud approach with previous studies that performed the computation locally on mobile devices. The results showed that the cloud approach had some advantages.
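A rough sketch of the server side described in this abstract: decode a JPEG frame received from the robot, locate the target with OpenCV template matching, and derive a motion command. The threshold, command names, and template file are hypothetical assumptions, not the thesis's actual implementation.

```python
# Server-side frame processing sketch: decode a JPEG frame, find the
# target by template matching, return a command for the robot.
import cv2
import numpy as np

# assumed target image; the real system's detection method may differ
template = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

def command_for_frame(jpeg_bytes: bytes) -> str:
    frame = cv2.imdecode(np.frombuffer(jpeg_bytes, np.uint8),
                         cv2.IMREAD_GRAYSCALE)
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, (x, _) = cv2.minMaxLoc(scores)
    if best < 0.7:                       # hypothetical detection threshold
        return "SEARCH"
    center = x + template.shape[1] / 2   # horizontal center of the match
    if center < frame.shape[1] * 0.4:
        return "TURN_LEFT"
    if center > frame.shape[1] * 0.6:
        return "TURN_RIGHT"
    return "FORWARD"
```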
Diesel, Brian. "Site-specific computing for a data-based place." 2007. http://proquest.umi.com/pqdweb?did=1417816361&sid=6&Fmt=2&clientId=39334&RQT=309&VName=PQD.
Title from PDF title page (viewed on Feb. 19, 2008). Available through UMI ProQuest Digital Dissertations. Thesis advisers: Marc Bohlen and Omar Khan. Includes bibliographical references.
Chen, Sung-Yi, and 陳松毅. "A Multi-site Resource Allocation Strategy in Grid Computing Environments." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/36316767310101348663.
Tunghai University
Department of Computer Science and Information Engineering
95
Grid computing harnesses distributed heterogeneous resources (different platforms, hardware and software, computer architectures, and languages) that are geographically distributed and governed by different administrative domains, over a network using open standards, to solve large-scale computational problems. As more grids are deployed worldwide, the number of multi-institutional collaborations is growing rapidly. However, to realize the full potential of grid computing, grid participants must be able to use one another's resources. This work presents a multi-site resource allocation (MSRA) strategy for a resource broker to dispatch jobs to appropriate resources across two different administrative domains; the experimental results show that MSRA performs better than other strategies. We address information gathering and focus on providing a domain-based model for network information measurement using the Network Weather Service (NWS) in grid computing environments. We use the Ganglia and NWS tools to monitor resource status and network-related information, respectively. The proposed broker provides secure, up-to-date information about available resources and serves as a link to the diverse systems available in the grid.
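A toy sketch of the kind of scoring a multi-site resource broker could apply, combining node status of the sort Ganglia reports with inter-site bandwidth of the sort NWS measures; the weights and site data are invented, and this is not the MSRA algorithm itself.

```python
# Pick a dispatch site by scoring free CPUs, load, and bandwidth.
# All site data and weights are made-up illustrations.
sites = {
    "siteA": {"free_cpus": 32, "cpu_load": 0.20, "bandwidth_mbps": 940},
    "siteB": {"free_cpus": 64, "cpu_load": 0.75, "bandwidth_mbps": 100},
    "siteC": {"free_cpus": 16, "cpu_load": 0.10, "bandwidth_mbps": 450},
}

def score(s, w_cpu=0.5, w_load=0.2, w_net=0.3):
    return (w_cpu * s["free_cpus"] / 64      # normalized capacity
            - w_load * s["cpu_load"]          # penalize busy nodes
            + w_net * s["bandwidth_mbps"] / 1000)  # reward fast links

best = max(sites, key=lambda name: score(sites[name]))
print("dispatch job to:", best)
```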
Richardson, Wendy Westenberg. "Voronoi site modeling: a computer model to predict the binding affinity of small flexible molecules." 1993. http://catalog.hathitrust.org/api/volumes/oclc/68796719.html.
Kajita, Marcos Suguru. "Google app engine case study : a micro blogging site." Thesis, 2009. http://hdl.handle.net/2152/ETD-UT-2009-12-565.
TORRE, MARCO. "INDAGINI INFORMATICHE E PROCESSO PENALE." Doctoral thesis, 2016. http://hdl.handle.net/2158/1028650.