Follow this link to see other types of publications on the topic: Multi-scale architecture.

Theses on the topic "Multi-scale architecture"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 31 theses for your research on the topic "Multi-scale architecture".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Olivares, Chauvet Pedro. "Multi-scale analysis of chromosome and nuclear architecture". Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/multiscale-analysis-of-chromosome-and-nuclear-architecture(32a7b634-035b-4c6b-83f9-735f83bc73fb).html.

Full text
Abstract
Mammalian nuclear function depends on the complex interaction of genetic and epigenetic elements coordinated in space and time. Structure and function overlap to such a degree that they are usually considered inextricably linked. In this work I combine an experimental approach with a computational one in order to answer two main questions in the field of mammalian chromosome organization. In the first section of this thesis, I attempted to answer the question: to what extent does chromatin from different chromosome territories share the same space inside the nucleus? This is a relatively open question in the field of chromosome territories. It is well known and accepted that interphase chromosomes are spatially constrained inside the nucleus and that they occupy their own territories; however, the degree of spatial interaction between neighbouring chromosomes is still under debate. Using labelling methods that directly incorporate halogenated DNA precursors into newly replicated DNA without the need for immuno-detection or in situ hybridization, we show that neighbouring chromosome territories colocalise at very low levels. We also found that the native structure of DNA foci is partially responsible for constraining the interaction of chromosome territories, as disruption of the innate architecture of DNA foci by treatment with TSA resulted in an increased colocalisation signal between adjacent chromosome territories. The second major question I attempted to answer concerned the correlation between nuclear function and the banding pattern observed in human mitotic chromosomes. Human mitotic chromosomes display characteristic patterns of light and dark bands when visualized under the light microscope using specific chemical dyes such as Giemsa. Despite the long-standing use of the Giemsa banding pattern in human genetics for identifying chromosome abnormalities and mapping genes, little is known about the molecular mechanisms that generate the Giemsa banding pattern or its biological relevance. The recent availability of many genetic and epigenetic features mapped to the human genome permits a high-resolution investigation of the molecular correlates of Giemsa banding. Here I investigate the relationship of more than 50 genomic and epigenomic features with light (R) and dark (G) bands. My results confirm many classical results, such as the low gene density of the most darkly staining G bands and their late replication time, using genome-wide data. Surprisingly, I found that for virtually all features investigated, R bands show intermediate properties between the lightest and darkest G bands, suggesting that many R bands contain G-like sequences within them. To identify R bands that show properties of G bands, I employed an unsupervised learning approach to classify R bands based on their genomic and epigenomic properties and found that the smallest R bands tend to have characteristics typical of G bands. I revisit the evidence supporting the boundaries of G and R bands in the current cytogenomic map and conclude that inaccurate placement of weakly supported band boundaries can explain the intermediate pattern of R bands. Finally, I propose an approach based on aggregating data from multiple genomic and epigenomic features to improve the positioning of band boundaries in the human cytogenomic map. My results suggest that contiguous domains showing a high degree of uniformity in the ratio of heterochromatin and euchromatin sub-domains define the Giemsa banding pattern in human chromosomes.
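The unsupervised band-classification step lends itself to a brief illustration. The following is a minimal, hypothetical sketch (not the thesis code, and the feature matrix is synthetic): it clusters R bands on standardized features with k-means and flags the cluster with the lower mean gene density as the candidate "G-like" group.

```python
# Hypothetical sketch: cluster R bands by genomic/epigenomic features to find
# R bands with G-like properties, in the spirit of the unsupervised approach
# described in the abstract. All feature values here are made up.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows: R bands; columns: e.g. gene density, GC content, replication timing.
features = rng.normal(size=(200, 3))

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# The cluster with the lower mean "gene density" (column 0) would be the
# candidate G-like subset of R bands in this toy example.
g_like = labels == np.argmin([X[labels == k, 0].mean() for k in (0, 1)])
print(f"{g_like.sum()} of {len(X)} R bands look G-like in this toy example")
```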
2

Javalera, Rincón Valeria. "Distributed large scale systems : a multi-agent RL-MPC architecture". Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/393922.

Full text
Abstract
This thesis describes a methodology to deal with the interaction between MPC controllers in a distributed MPC architecture. The approach combines ideas from Distributed Artificial Intelligence (DAI) and Reinforcement Learning (RL) in order to provide controller interaction based on cooperative agents and learning techniques. The aim of this methodology is to provide a general structure to perform optimal control in networked distributed environments, where multiple dependencies between subsystems are found. Those dependencies or connections often correspond to control variables; in that case, the distributed control has to be consistent in both subsystems. One of the main new concepts of this architecture is the negotiator agent. Negotiator agents interact with MPC agents to determine the optimal value of the shared control variables in a cooperative way using learning techniques (RL). The optimal value of those shared control variables has to accomplish a common goal, possibly different from the specific goal of each agent sharing the variable. Two case studies in which the proposed architecture is applied and tested are considered: a small water distribution network and the Barcelona water network. The results suggest that this approach is a promising strategy when centralized control is not a reasonable choice.
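As a rough illustration of the negotiator-agent idea, here is a toy, single-state Q-learning sketch in which a negotiator picks the value of one shared control variable so as to minimize the summed local costs of two MPC agents. The quadratic cost functions and all parameters are invented for this example and are not from the thesis.

```python
# Toy sketch of a "negotiator agent": Q-learning over a discretized shared
# control variable, rewarded by the (negated) sum of two subsystem costs.
import numpy as np

values = np.linspace(0.0, 1.0, 11)        # candidate setpoints for the shared variable
q = np.zeros(len(values))                 # single-state Q-table
alpha, eps = 0.1, 0.2
rng = np.random.default_rng(1)

def cost_a(u): return (u - 0.3) ** 2      # MPC agent A's (invented) local objective
def cost_b(u): return 2 * (u - 0.7) ** 2  # MPC agent B's (invented) local objective

for _ in range(5000):
    i = rng.integers(len(values)) if rng.random() < eps else int(np.argmax(q))
    reward = -(cost_a(values[i]) + cost_b(values[i]))
    q[i] += alpha * (reward - q[i])       # incremental Q-value update

# Best joint choice on this grid is 0.6 (the continuous optimum is ~0.57).
print("negotiated value:", values[int(np.argmax(q))])
```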
3

Zhu, Weirong. "Efficient synchronization for a large-scale multi-core chip architecture". Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 206 p, 2007. http://proquest.umi.com/pqdweb?did=1362532791&sid=27&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
4

Soler, Vila Paula 1989. "Multi-scale study of the genome architecture and its dynamical facets". Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/668229.

Full text
Abstract
High-throughput Chromosome Conformation Capture (3C) techniques have provided a comprehensive overview of the genome architecture. Hi-C, a derivative of 3C, has become a reference technique to study the 3D chromatin structure and its relationship with the functional state of the cell. However, several aspects of the analysis and interpretation of Hi-C data remain challenging and may conceal potential yet to be unveiled. In this thesis, we explore the structural landscape of multiple chromatin features. We developed an integrative approach combining in situ Hi-C data with nine additional omic layers and revealed a new dynamic and transitional genomic compartment enriched in poised and polycomb-repressed chromatin. This novel intermediate compartment plays an important role in the modulation of the genome during B cell differentiation and upon neoplastic transformation, specifically in chronic lymphocytic leukemia (CLL) or mantle cell lymphoma (MCL) patients. We also developed TADpole, a computational tool designed to characterize the hierarchy of topologically-associated domains (TADs) using Hi-C interaction matrices. We demonstrated its technical and biological robustness, and its capacity to reveal topological differences in high-resolution capture Hi-C experiments.
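As a loose illustration of hierarchy detection in contact maps, the sketch below applies PCA and hierarchical clustering to a synthetic Hi-C-like matrix with three planted domains. It is in the spirit of, but not identical to, the TADpole pipeline described above; every number in it is made up.

```python
# Loose, hypothetical sketch: recover a hierarchy of contact domains in a toy
# Hi-C-like matrix via PCA denoising plus hierarchical clustering over bins.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

n = 60
rng = np.random.default_rng(2)
hic = rng.poisson(1.0, (n, n)).astype(float)
hic = hic + hic.T                              # symmetric noisy background
for a, b in [(0, 20), (20, 45), (45, 60)]:     # three planted domains
    hic[a:b, a:b] += 6.0

pcs = PCA(n_components=5).fit_transform(np.log1p(hic))   # denoise bin profiles
tree = linkage(pcs, method="ward")                        # hierarchy over bins
for k in (2, 3):                                          # two levels of the tree
    labels = fcluster(tree, t=k, criterion="maxclust")
    print(k, "domains, sizes:", [int((labels == c).sum()) for c in np.unique(labels)])
```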
5

Sclaroff, Stanley Edward. "Deformable solids and displacement maps--a multi-scale technique for model recovery and recognition". Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/70198.

Full text
6

Duro, Royo Jorge. "Towards Fabrication Information Modeling (FIM) : workflow and methods for multi-scale trans-disciplinary informed design". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101843.

Full text
Abstract
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 67-70).
This thesis sets the stage for Fabrication Information Modeling (FIM): a design approach for enabling seamless design-to-production workflows that can derive complex designs by fusing advanced digital design technologies associated with analysis, engineering and manufacturing. Present-day digital fabrication platforms enable the design and construction of high-resolution and complex material distribution structures. However, virtual-to-physical workflows and their associated software environments are yet to incorporate such capabilities. As preliminary methods towards FIM, I have developed four computational strategies for the design and digital construction of custom systems. These methods are presented in this thesis in the context of specific design challenges and include a biologically driven fiber construction algorithm; an anatomically driven shell-to-wearable translation protocol; an environmentally driven swarm printing system; and a manufacturing-driven hierarchical fabrication platform. I discuss and analyze these four challenges in terms of their capabilities to integrate design across media, disciplines and scales through the concepts of multidimensionality, media-informed computation and trans-disciplinary data in advanced digital design workflows. With FIM I aim to contribute to the field of digital design and fabrication by enabling feedback workflows where materials are designed rather than selected; where the question of how information is passed across spatiotemporal scales is central to design generation itself; where modeling at each level of resolution and representation is based on various methods and carried out by various media or agents within a single environment; and finally, where virtual and physical considerations coexist as equals.
by Jorge Duro Royo.
S.M.
7

Krüger, Martin Wolfgang [Verfasser]. "Personalized Multi-Scale Modeling of the Atria: Heterogeneities, Fiber Architecture, Hemodialysis and Ablation Therapy / Martin Wolfgang Krüger". Karlsruhe : KIT Scientific Publishing, 2013. http://www.ksp.kit.edu.

Full text
8

Deserranno, Dimitri. "A Multi-Scale Finite Element Model of the Cardiac Ventricles". Case Western Reserve University School of Graduate Studies / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=case1148984314.

Full text
9

Hardy, Clément. "Architectures multi-échelles de type encodeur-décodeur pour la stéréophotométrie". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC222.

Full text
Abstract
Photometric stereo is a technique for 3D reconstruction of an object's surface. This field has seen a surge in research interest due to its potential industrial applications: photometric stereo can be employed for tasks such as detecting machining defects in mechanical components or facial recognition. This thesis delves into deep learning methods for photometric stereo, with a particular focus on training data and network architectures. While neural network over-parameterization is often adequate, the training dataset plays a pivotal role in task adaptation. To generate a highly diverse and extensible training set, we propose a new synthetic dataset. This dataset incorporates a broad spectrum of geometric, textural, lighting, and environmental variations, allowing for the creation of nearly infinite training instances. The second decisive point of a good reconstruction concerns the choice of architecture. The architecture of a network must ensure good generalization in order to produce very good results on unseen data, regardless of the application. In particular, for the photometric stereo problem, the challenge is to reconstruct very high-resolution images without losing detail. We therefore propose a multi-scale encoder-decoder architecture to address this problem. We first introduce a convolutional neural network architecture for calibrated photometric stereo, where the lighting direction is known. To handle unconstrained environments, we propose a Transformer-based approach for universal photometric stereo. Lastly, for challenging materials such as translucent or shiny surfaces, we introduce a "weakly calibrated" approach that assumes only approximate knowledge of the lighting direction. The approaches we have investigated have consistently demonstrated strong performance on standard benchmarks, as evidenced by both quantitative metrics and visual assessments. Our results, particularly the improved accuracy of reconstructed normal maps, represent a significant advancement in photometric stereo.
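To make the architectural idea concrete, here is a deliberately tiny PyTorch sketch of a two-scale encoder-decoder that maps a stack of photometric-stereo observations to unit-length per-pixel normals. It is not the thesis network; the channel sizes and the assumption of eight stacked RGB images are arbitrary.

```python
# Minimal two-scale encoder-decoder sketch (not the thesis architecture):
# stacked photometric-stereo images in, per-pixel unit normals out.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiScalePS(nn.Module):
    def __init__(self, n_images=8):
        super().__init__()
        c = n_images * 3                               # 8 RGB observations stacked
        self.enc1 = nn.Sequential(nn.Conv2d(c, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.Conv2d(64 + 32, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, x):
        f1 = self.enc1(x)                              # full-resolution features
        f2 = self.enc2(f1)                             # half-resolution features
        up = F.interpolate(f2, size=f1.shape[-2:], mode="bilinear",
                           align_corners=False)
        n = self.dec(torch.cat([up, f1], dim=1))       # fuse the two scales
        return F.normalize(n, dim=1)                   # unit-length normals

x = torch.randn(1, 24, 64, 64)                         # a batch of stacked images
print(TinyMultiScalePS()(x).shape)                     # torch.Size([1, 3, 64, 64])
```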
10

Stephan, André. "Towards a comprehensive energy assessment of residential buildings: a multi-scale life cycle energy analysis framework". Doctoral thesis, Universite Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209465.

Full text
Abstract
Buildings are directly responsible for 40% of the final energy use in most developed economies and for much more if indirect requirements are considered. This results in huge impacts which affect the environmental balance of our planet.

However, most current building energy assessments focus solely on operational energy overlooking other energy uses such as embodied and transport energy. Embodied energy comprises the energy requirements for building materials production, construction and replacement. Transport energy represents the amount of energy required for the mobility of building users.

Decisions based on partial assessments might result in an increased energy demand during other life cycle stages or at different scales of the built environment. Recent studies have shown that embodied and transport energy demands often account for more than half of the total lifecycle energy demand of residential buildings. Current assessment tools and policies therefore overlook more than 50% of the life cycle energy use.

This thesis presents a comprehensive life cycle energy analysis framework for residential buildings. This framework takes into account energy requirements at the building scale, i.e. the embodied and operational energy demands, and at the city scale, i.e. the embodied energy of nearby infrastructures and the transport energy of its users. This framework is implemented through the development, verification and validation of an advanced software tool which allows the rapid analysis of the life cycle energy demand of residential buildings and districts. Two case studies, located in Brussels, Belgium and Melbourne, Australia, are used to investigate the potential of the developed framework.

Results show that each of the embodied, operational and transport energy requirements represents a significant share of the total energy requirements and associated greenhouse gas emissions of a residential building over its useful life. The use of the developed tool will allow building designers, town planners and policy makers to reduce the energy demand and greenhouse gas emissions of residential buildings by selecting measures that result in overall savings. This will ultimately contribute to reducing the environmental impact of the built environment.
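The accounting identity behind the framework is easy to state in code. The sketch below sums embodied, operational and transport requirements over a useful life; all figures are invented placeholders, not results from the thesis, but they illustrate how an operational-only assessment can miss more than half of the total.

```python
# Back-of-envelope illustration of the accounting identity:
# life cycle energy = embodied + operational + transport.
# All numbers are invented placeholders, not thesis results.
def life_cycle_energy_gj(embodied_gj, operational_gj_per_year,
                         transport_gj_per_year, life_years=50):
    """Total primary energy over the building's useful life, in GJ."""
    return embodied_gj + life_years * (operational_gj_per_year
                                       + transport_gj_per_year)

total = life_cycle_energy_gj(embodied_gj=4000,
                             operational_gj_per_year=120,
                             transport_gj_per_year=150)
# 4000 + 50*(120+150) = 17500 GJ; operational alone (6000 GJ) is only ~34%.
print(total, "GJ over 50 years")
```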
Doctorat en Sciences de l'ingénieur

11

Abbott, Sunshine. "Depositional architecture and facies variability in anhydrite and polyhalite sequences : a multi-scale study of the Jurassic (Weald Basin, Brightling Mine) and Permian (Zechstein Basin, Boulby Mine) of the UK". Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/45720.

Full text
Abstract
Quantifying the geometries of evaporite deposits at a <1 km scale is critical to our understanding of similar ancient depositional systems, but is challenging given evaporite mineral dissolution at surface conditions. Two high-resolution stratigraphic studies in mines provide insight into the geometries, dimensions, and spatial distributions of sedimentary bodies in 3-D at a <1 km scale in evaporites. A field mapping study was conducted in Brightling (Purbeck Group) and Boulby (Zechstein Group) mines, in southeast and northeast England, respectively. This is integrated with XRD, petrography, and δ13C and δ18O isotope analyses. The evolution and conditions of sedimentation during the Tithonian in the Weald Basin are also evaluated. A newly defined megasequence boundary at the base of the Purbeck Group is suggested to mark the onset of rifting of the Bay of Biscay and to the north of the Charlie-Gibbs Fracture Zone, which implies an earlier rifting phase than previously proposed. Basal Purbeck lateral facies changes are influenced by the position in the Weald Basin, normal fault systems, and relative sea level changes. In Brightling Mine, the basal Purbeck exhibits carbonate-evaporite shoaling-upward cycles, likely controlled by localized high-frequency relative sea level changes and/or sabkha hydrology. The dynamic process of evaporite deposition led to subtle stratigraphic heterogeneities and changes in bed thicknesses, but largely continuous lateral bedding. Boulby Mine offers a unique opportunity to study early deformation structures in ancient polyhalite that formed in playa conditions. The controlling mechanism that formed these syndepositional polyhalite tepees is attributed to soft-sediment deformation via polyhalite dewatering coupled with penecontemporaneous precipitation of halite during fluid escape. This study offers new insight into the types of heterogeneity observed in ancient evaporites formed in marginal playa and sabkha environments at a <1 km scale, which can include a variety of compositions and morphologies at a range of scales.
12

Kalua, Amos. "Framework for Integrated Multi-Scale CFD Simulations in Architectural Design". Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/105013.

Full text
Abstract
An important aspect of the process of architectural design is the testing of solution alternatives in order to evaluate their appropriateness within the context of the design problem. Computational Fluid Dynamics (CFD) analysis is one of the approaches that have gained popularity in the testing of architectural design solutions, especially for evaluating the performance of natural ventilation strategies in buildings. Natural ventilation strategies can reduce the energy consumption in buildings while ensuring the good health and wellbeing of the occupants. In order for natural ventilation strategies to perform as intended, a number of factors interact, and these factors must be carefully analysed. CFD simulations provide an affordable platform for such analyses. Traditionally, these simulations have largely followed the direction of Best Practice Guidelines (BPGs) for quality control. These guidelines are built around certain simplifications due to the high computational cost of CFD modelling. However, while the computational cost has fallen steadily and is predicted to continue to drop, the BPGs have largely remained without significant updates. The need to develop a CFD simulation framework that leverages contemporary computational capacity and anticipates its future growth can therefore not be overemphasised. When conducting CFD simulations during the process of architectural design, the variability of the wind flow field, including the wind direction and its velocity, constitutes an important input parameter. Presently, however, in many simulations the wind direction is largely used in a steady-state manner: it is assumed that the direction of flow downwind of a meteorological station remains constant. This assumption may compromise the integrity of CFD modelling as, in reality, the wind flow field is bound to be dynamic from place to place. In order to improve the accuracy of CFD simulations for architectural design, it is therefore necessary to adequately account for this variability. This study was a two-pronged investigation with the ultimate objective of improving the accuracy of the CFD simulations used in the architectural design process, particularly for the design and analysis of natural ventilation strategies. Firstly, a framework for integrated meso-scale and building-scale CFD simulations was developed. Secondly, the newly developed framework was implemented by deploying it to study the variability of the wind flow field between a reference meteorological station, the Virginia Tech Airport, and a selected localized building-scale site on the Virginia Tech campus. The findings confirmed that the wind flow field varies from place to place and showed that the newly developed framework was able to capture this variation, ultimately generating a wind flow field characterization representative of the conditions prevalent at the localized building site. This framework can be particularly useful when undertaking de-coupled CFD simulations to design and analyse natural ventilation strategies in the building design process.
Doctor of Philosophy
The use of natural ventilation strategies in building design has been identified as one viable pathway toward minimizing energy consumption in buildings. Natural ventilation can also reduce the prevalence of Sick Building Syndrome (SBS) and enhance the productivity of building occupants. This research study sought to develop a framework that can improve the usage of Computational Fluid Dynamics (CFD) analyses in the architectural design process for purposes of enhancing the efficiency of natural ventilation strategies in buildings. CFD is a branch of computational physics that studies the behaviour of fluids as they move from one point to another. The usage of CFD analyses in architectural design requires the input of wind environment data such as direction and velocity. Presently, this data is obtained from a weather station, and there is an assumption that this data remains the same even for a building site located at a considerable distance away from the weather station. This potentially compromises the accuracy of the CFD analyses, as studies have shown that due to a number of factors such as the urban built form, vegetation, terrain and others, the wind environment is bound to vary from one point to another. This study sought to develop a framework that quantifies this variation and provides a way for translating the wind data obtained from a weather station into data that more accurately characterizes a local building site. With this accurate site wind data, the CFD analyses can then provide more meaningful insights into the use of natural ventilation in the process of architectural design. This newly developed framework was deployed on a study site at Virginia Tech. The findings showed that the framework was able to demonstrate that the wind flow field varies from one place to another, and it also provided a way to capture this variation, ultimately generating a wind flow field characterization that was more representative of the local conditions.
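A textbook approximation helps to show what "translating" station wind data involves. The sketch below scales a met-station speed up through the standard logarithmic wind profile and back down over rougher terrain; the roughness lengths and speeds are illustrative assumptions, and this is far simpler than the meso-scale CFD framework actually developed in the thesis.

```python
# Hypothetical sketch: translate a met-station wind speed to a building site
# with the standard logarithmic wind profile (a textbook approximation, not
# the thesis framework). Roughness lengths z0 are assumed values.
import math

def log_profile_speed(u_ref, z_ref, z, z0):
    """Wind speed at height z given a reference speed at z_ref over roughness z0."""
    return u_ref * math.log(z / z0) / math.log(z_ref / z0)

u10_airport = 6.0                                    # m/s at 10 m, open terrain (z0 ~ 0.03 m)
u60 = log_profile_speed(u10_airport, 10, 60, 0.03)   # scale up toward gradient height
u10_campus = log_profile_speed(u60, 60, 10, 0.5)     # back down over rougher terrain
print(f"campus 10 m estimate: {u10_campus:.1f} m/s") # ~4.9 m/s: rougher terrain slows the wind
```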
13

Eid, Elie. "Towards a multi-scale analysis of dynamic failure in architectured materials". Thesis, Ecole centrale de Nantes, 2021. https://tel.archives-ouvertes.fr/tel-03670412.

Full text
Abstract
Architectured materials are a rising class of materials that provide tremendous possibilities in terms of functional properties. Interest here is drawn to the failure of architectured materials in which scale separation ceases to exist. This translates directly into strong interactions between a crack tip and the architecture, independently of the considered scale. Moreover, under dynamic loadings, stress waves come into play, and the interactions between the crack tip, the microstructure (architecture) and the stress waves together pilot the structural behaviour. In this thesis, three types of architectured materials are considered: one periodic and two Penrose-type quasi-periodic lattices of holes. The analysis is broken into three parts. To study the influence of the microstructure on crack propagation at different scales, numerical simulations of failure are analysed; they show improved resistance to crack propagation in the quasi-periodic materials. At the core of the work is also the development of a coarse-graining technique that requires no representative volume element. This technique allows for a physically consistent multiscale evaluation of the effective failure properties of the architectures. The inevitability of considering a non-homogeneous effective medium to accurately model microstructural effects at larger scales is highlighted. In dynamics, the influence of the architectures on stress-wave attenuation shows improved attenuation properties of the quasi-periodic lattices. Moreover, to understand the mechanism(s) governing the dynamic branching phenomenon in a homogeneous material, a criterion based on dynamic fracture mechanics is developed and validated on a novel experimental setup where ultra-high-speed, high-resolution imaging is combined with Digital Image Correlation to capture the relevant phenomena. The unquestionable role of the T-stress in dynamic branching is put forth. This thesis brings forth the necessary tools towards a multi-scale analysis of the dynamic failure of architectured materials.
14

Devlin, John M. "Revitalizing Downtown Houston - Bringing Back the Human Scale". Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/71872.

Full text
15

Gu, Tang. "Modélisation multi-échelles du comportement électrique et élasto-plastique de fils composites Cu-Nb nanostructurés et architecturés". Thesis, Paris, ENSAM, 2017. http://www.theses.fr/2017ENAM0017/document.

Full text
Abstract
Nanostructured and architectured copper niobium composite wires are excellent candidates for the generation of intense pulsed magnetic fields (>90 T), as they combine both high strength and high electrical conductivity. Multi-scale Cu-Nb wires are fabricated by accumulative drawing and bundling (a severe plastic deformation technique), leading to a multiscale, architectured and nanostructured microstructure exhibiting a strong fiber crystallographic texture and elongated grain shapes along the wire axis. This thesis presents a comprehensive study of the effective electrical and elasto-plastic behavior of this composite material. It is divided into three parts: electrical, elastic and elasto-plastic multiscale modeling. In order to investigate the link between the effective material behavior and the wire microstructure, several homogenization methods are applied, which can be separated into two main types: mean-field and full-field theories. As the specimens exhibit many characteristic scales, several scale transition steps are carried out iteratively from the grain scale to the macro-scale. The general agreement among the model responses allows us to suggest the best strategy for reliably estimating the effective electrical and elasto-plastic behavior of Cu-Nb wires while saving computational time. The electrical models are shown to predict the anisotropic experimental data accurately. Moreover, the mechanical models are also validated by the available ex-situ and in-situ X-ray/neutron diffraction experimental data, with good agreement.
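A first taste of the mean-field idea is given by the classical Voigt and Reuss bounds on the effective conductivity of a two-phase composite. The sketch below uses approximate handbook conductivities for Cu and Nb and an assumed volume fraction; it is not the thesis model, which goes well beyond such bounds. For aligned continuous fibres, the Voigt (parallel) bound is in fact exact along the wire axis, which is one reason such wires conduct so well axially.

```python
# Simple mean-field illustration: Voigt (parallel) and Reuss (series) bounds
# on the effective conductivity of a two-phase Cu-Nb composite. Conductivities
# are approximate room-temperature orders of magnitude; the volume fraction is
# an arbitrary assumption, not the thesis microstructure.
sigma_cu, sigma_nb = 58.0e6, 6.9e6    # S/m, approximate bulk values
f_cu = 0.7                            # assumed Cu volume fraction

voigt = f_cu * sigma_cu + (1 - f_cu) * sigma_nb            # parallel (axial) bound
reuss = 1.0 / (f_cu / sigma_cu + (1 - f_cu) / sigma_nb)    # series (transverse) bound
print(f"effective conductivity between {reuss:.2e} and {voigt:.2e} S/m")
```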
16

Kalayci, Selim. "Techniques for Efficient Execution of Large-Scale Scientific Workflows in Distributed Environments". FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1664.

Full text
Abstract
Scientific exploration demands heavy usage of computational resources for large-scale and deep analysis in many different fields. The complexity or sheer scale of a computational study can sometimes be encapsulated in the form of a workflow that is made up of numerous dependent components. Due to its decomposable and parallelizable nature, different components of a scientific workflow may be mapped over a distributed resource infrastructure to reduce time to results. However, the resource infrastructure may be heterogeneous, dynamic, and under diverse administrative control. Workflow management tools are utilized to help manage and deal with various aspects of the lifecycle of such complex applications. One particular and fundamental aspect that has to be dealt with as smoothly and efficiently as possible is the run-time coordination of workflow activities (i.e. workflow orchestration). Our efforts in this study are focused on improving the workflow orchestration process in such dynamic and distributed resource environments. We tackle three main aspects of this process and provide contributions in each of them. Our first contribution involves increasing scalability and site autonomy in situations where the mapped components of a workflow span several heterogeneous administrative domains. We devise and implement a generic decentralization framework for the orchestration of workflows under such conditions. Our second contribution addresses the issues that arise due to the dynamic nature of such environments. We provide generic adaptation mechanisms that are highly transparent and also substantially less intrusive with respect to the rest of the workflow in execution. Our third contribution improves the efficiency of orchestration of large-scale parameter-sweep workflows. By exploiting their specific characteristics, we provide generic optimization patterns that are applicable to most instances of such workflows. We also discuss implementation issues and details that arise as we provide our contributions in each situation.
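The orchestration step itself, running dependent tasks in a valid order, can be shown generically. Below is a minimal sketch using Python's standard graphlib; the workflow and task names are invented, and a real orchestrator would add the distribution, adaptation and failure handling discussed above.

```python
# Generic sketch of workflow orchestration: execute a dependency graph of
# tasks in topological order. Task names are invented for illustration.
from graphlib import TopologicalSorter

workflow = {                      # task -> set of tasks it depends on
    "align": {"fetch"},
    "analyze": {"align", "config"},
    "report": {"analyze"},
    "fetch": set(), "config": set(),
}

ts = TopologicalSorter(workflow)
ts.prepare()
while ts.is_active():
    ready = list(ts.get_ready())              # all tasks whose dependencies are done;
    print("dispatch in parallel:", ready)     # these could go to different sites
    ts.done(*ready)
```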
17

Ferreira, Leite Alessandro. "A user-centered and autonomic multi-cloud architecture for high performance computing applications". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112355/document.

Full text
Abstract
Cloud computing has been seen as an option to execute high performance computing (HPC) applications. While traditional HPC platforms such as grids and supercomputers offer a stable environment in terms of failures, performance, and number of resources, cloud computing offers on-demand resources, generally with unpredictable performance at low financial cost. Furthermore, in a cloud environment, failures are part of its normal operation. To overcome the limits of a single cloud, clouds can be combined, forming a cloud federation, often with minimal additional costs for the users. A cloud federation can help both cloud providers and cloud users to achieve their goals, such as reducing the execution time, achieving minimum cost, increasing availability, and reducing power consumption, among others. Hence, cloud federation can be an elegant solution to avoid over-provisioning, thus reducing the operational costs in an average load situation and removing resources that would otherwise remain idle and waste power. However, cloud federation increases the range of resources available to the users. As a result, cloud or system administration skills may be demanded of the users, as well as considerable time to learn about the available options. In this context, some questions arise, such as: (a) which cloud resource is appropriate for a given application? (b) how can users execute their HPC applications with acceptable performance and financial costs, without needing to re-engineer the applications to fit the clouds' constraints? (c) how can non-cloud specialists maximize the features of the clouds, without being tied to a cloud provider? and (d) how can cloud providers use the federation to reduce the power consumption of the clouds, while still being able to give service-level agreement (SLA) guarantees to the users? Motivated by these questions, this thesis presents an SLA-aware application consolidation solution for cloud federation. Using a multi-agent system (MAS) to negotiate virtual machine (VM) migrations between the clouds, simulation results show that our approach could reduce the power consumption by up to 46% while trying to meet performance requirements. Using the federation, we developed and evaluated an approach to execute a huge bioinformatics application at zero cost. Moreover, we could decrease the execution time by 22.55% over the best single-cloud execution. In addition, this thesis presents a cloud architecture called Excalibur to auto-scale cloud-unaware applications. Executing a genomics workflow, Excalibur could seamlessly scale the applications up to 11 virtual machines, reducing the execution time by 63% and the cost by 84% when compared to a user's configuration. Finally, this thesis presents a product line engineering (PLE) process to handle the variabilities of infrastructure-as-a-service (IaaS) clouds, and an autonomic multi-cloud architecture that uses this process to configure and to deal with failures autonomously. The PLE process uses an extended feature model (EFM) with attributes to describe the resources and to select them based on users' objectives. Experiments realized with two different cloud providers show that, using the proposed model, users could execute their applications in a cloud federation environment without needing to know the variabilities and constraints of the clouds.
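The attribute-based selection enabled by the extended feature model can be caricatured in a few lines: filter candidate resources by constraints, then rank by an objective. The offerings and field names below are fictitious, and this is a sketch of the general idea rather than the thesis' PLE process.

```python
# Toy sketch of attribute-based resource selection: keep VM types that satisfy
# the user's constraints, then pick the cheapest. All offerings are fictitious.
offers = [
    {"name": "small",  "vcpus": 2,  "ram_gb": 4,   "usd_h": 0.05},
    {"name": "medium", "vcpus": 8,  "ram_gb": 32,  "usd_h": 0.25},
    {"name": "large",  "vcpus": 32, "ram_gb": 128, "usd_h": 1.10},
]

def select(offers, min_vcpus, min_ram_gb, objective=lambda o: o["usd_h"]):
    feasible = [o for o in offers
                if o["vcpus"] >= min_vcpus and o["ram_gb"] >= min_ram_gb]
    return min(feasible, key=objective) if feasible else None

print(select(offers, min_vcpus=4, min_ram_gb=16)["name"])   # -> "medium"
```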
18

Ryu, Kyeong Keol. "Automated Bus Generation for Multi-processor SoC Design". Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5076.

Full text
Abstract
In the design of a multi-processor System-on-a-Chip (SoC), the bus architecture typically comes to the forefront because system performance depends not only on the speed of the Processing Elements (PEs) but also on the bus architecture in the system. An efficient bus architecture with effective arbitration for reducing contention on the bus plays an important role in maximizing performance. Therefore, among the many issues of multi-processor SoC research, we focus on two issues related to the bus architecture in this dissertation. One issue is how to quickly and easily design an efficient bus architecture for an SoC. The second issue is how to quickly explore the design space across performance-influencing factors to achieve a high-performance bus system. The objective of this research is to provide a Computer-Aided Design (CAD) tool with which the user can quickly explore the SoC bus design space in search of a high-performance SoC bus system. From a straightforward description of the numbers and types of Processing Elements (PEs), non-PEs, memories and buses (including, for example, the address and data bus widths of the buses and memories), our bus synthesis tool, called BusSynth, generates a Register-Transfer Level (RTL) Verilog Hardware Description Language (HDL) description of the specified bus system. The user can utilize this RTL Verilog in bus-accurate simulations to more quickly arrive at an efficient bus architecture for a multi-processor SoC. The methodology we propose gives designers a great benefit in fast design-space exploration of bus systems across a variety of performance-influencing factors such as bus types, PE types and software programming styles (e.g., pipelined parallel fashion or functional parallel fashion). We also show that BusSynth can efficiently generate bus systems in a matter of seconds, as opposed to weeks of design effort to integrate each system component by hand. Moreover, unlike previous related work, BusSynth can support a wide variety of PEs, memory types and bus architectures (including a hybrid bus architecture) in search of a high-performance SoC.
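Generating RTL text from a structural specification is the core mechanic here, and it can be hinted at in miniature. The sketch below emits a trivial Verilog module skeleton from a dictionary spec; the format and field names are invented and bear no relation to BusSynth's actual output.

```python
# Illustrative sketch of spec-to-RTL generation: emit a trivial Verilog module
# skeleton from a structural description. Not BusSynth's format; all fields
# and names are invented.
spec = {"name": "soc_bus", "addr_width": 32, "data_width": 64,
        "masters": ["cpu0", "cpu1"], "slaves": ["sram", "uart"]}

def emit_bus(spec):
    ports = [f"    inout wire [{spec['data_width'] - 1}:0] {m}_data"
             for m in spec["masters"] + spec["slaves"]]
    return (f"module {spec['name']} (\n" + ",\n".join(ports) + "\n);\n"
            f"  // {len(spec['masters'])} masters, {len(spec['slaves'])} slaves,"
            f" {spec['addr_width']}-bit addresses\nendmodule\n")

print(emit_bus(spec))
```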
19

Simon, Loïc. "Procedural reconstruction of buildings : towards large scale automatic 3D modeling of urban environments". PhD thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-00637638.

Full text
Abstract
This thesis is devoted to 2D and 3D modeling of urban environments using structured representations and grammars. Our approach introduces a semantic representation for buildings that encodes expected architectural constraints and is able to derive complex instances using fairly simple grammars. Furthermore, we propose two novel inference algorithms to parse images using such grammars. To this end, a steepest-ascent hill-climbing concept is considered to derive the grammar and the corresponding parameters from a single facade view. It combines the grammar constraints with the expected visual properties of the different architectural elements. Towards addressing more complex scenarios and incorporating 3D information, a second inference strategy based on evolutionary computation algorithms is adopted to optimize a two-component objective function introducing depth cues. The proposed framework was evaluated qualitatively and quantitatively on a benchmark of annotated facades, demonstrating robustness to challenging situations. Substantial improvement due to the strong grammatical context was shown in comparison to the performance of the same appearance models coupled with local priors. Our approach therefore provides powerful techniques in response to the increasing demand for large-scale 3D modeling of real environments through compact, structured and semantic representations, while opening new perspectives for image understanding.
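The flavour of a grammar-based facade representation is easy to convey. The hypothetical sketch below derives a facade as nested rectangles by recursive splitting; actual procedural reconstruction runs this kind of derivation in reverse, searching for the rules and parameters that best explain an image. The rules and dimensions here are invented.

```python
# Minimal split-grammar sketch: derive a facade layout as nested rectangles.
# Rules and dimensions are invented; real parsers invert this process
# against image evidence.
def split(rect, axis, fractions):
    """Split rect = (x, y, w, h) along "x" or "y" into given fractions."""
    x, y, w, h = rect
    out, offset = [], 0.0
    for f in fractions:
        out.append((x + (offset if axis == "x" else 0),
                    y + (offset if axis == "y" else 0),
                    w * f if axis == "x" else w,
                    h * f if axis == "y" else h))
        offset += (w if axis == "x" else h) * f
    return out

facade = (0, 0, 12, 9)                         # metres, whole facade
floors = split(facade, "y", [1/3, 1/3, 1/3])   # facade -> 3 floors
tiles = [t for fl in floors for t in split(fl, "x", [0.25] * 4)]  # floor -> 4 tiles
print(len(tiles), "window tiles, e.g.", tiles[0])
```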
20

Djourachkovitch, Tristan. "Conception de matériaux micro-architecturés innovants : Application à l'optimisation topologique multi-échelle". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI086.

Full text
Abstract
The design of innovative micro-architectured materials is a key issue in modern materials science. There are many examples of such materials, including composite materials, foams, and micro-architectured materials proper (materials that exhibit periodicity properties at a scale small compared to the dimensions of the structure). A common criterion for these materials is their ratio between weight and stiffness. Topology optimization is well suited to the design of this kind of material, since the criterion to be improved is directly integrated into the formulation of the minimization problem. In this context, we propose methods for the design of micro-architectured materials using topology optimization for several criteria. We then illustrate the benefits of these materials through multi-scale simulations based on first-gradient theory and the scale-separability assumption of the homogenization framework. A coupled macro/micro optimization method is presented for the concurrent optimization of these two interdependent scales. The development of a numerical demonstrator has made it possible to illustrate these various methods and to test several optimization criteria, mechanical models, and so on. In order to reduce the computational costs, which can grow quickly, especially for multi-scale problems where the number of design variables increases significantly, a database approach is proposed. A broad range of micro-architectured materials is stored (and enriched) for several criteria (weight, stiffness, original behaviour). This database is then consulted throughout the coupled optimization.
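The inner loop of density-based topology optimization can be sketched compactly. Below is a schematic Optimality-Criteria update under a volume constraint, with faked compliance sensitivities standing in for a finite-element solve so the example stays runnable; it illustrates the general SIMP-style machinery, not the methods developed in the thesis.

```python
# Schematic sketch of one Optimality-Criteria density update, the inner step
# of SIMP-style topology optimization. The compliance sensitivities dc are
# faked (no FEM solve) so the example is self-contained.
import numpy as np

rng = np.random.default_rng(3)
x = np.full(100, 0.5)                 # element densities; 50% volume target
dc = -rng.random(100)                 # dC/dx: compliance sensitivities (<= 0)
volfrac, move = 0.5, 0.2

lo, hi = 1e-9, 1e9                    # bisection on the Lagrange multiplier
while hi - lo > 1e-6 * (lo + hi):
    lmid = 0.5 * (lo + hi)
    x_new = np.clip(x * np.sqrt(-dc / lmid),        # OC fixed-point step
                    np.maximum(x - move, 1e-3),     # move limits and bounds
                    np.minimum(x + move, 1.0))
    # Too much material -> raise the multiplier, which shrinks densities.
    lo, hi = (lmid, hi) if x_new.mean() > volfrac else (lo, lmid)

print(f"updated mean density: {x_new.mean():.3f}")  # ~0.5 by construction
```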
21

Sutor, S. R. (Stephan R.). "Large-scale high-performance video surveillance". Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526205618.

Full text
Abstract
The last decade was marked by a set of harmful events ranging from economic crises to organized crime, acts of terror and natural catastrophes. This has led to a paradigm transformation concerning security. Millions of surveillance cameras have been deployed, which has led to new challenges, as the systems and operations behind those cameras could not cope with the rapid growth in the number of video cameras and systems. In today's control rooms, often hundreds or even thousands of cameras are displayed, overloading security officers with irrelevant information. The purpose of this research was the creation of a novel video surveillance system with automated analysis mechanisms which enable security authorities and their operators to cope with this information flood. By automating the process, video surveillance was transformed into a proactive information system. The progress in technology as well as the ever-increasing demand for security have proven to be an enormous driver for security technology research such as this study. This work shall contribute to the protection of our personal freedom, our lives, our property and our society by aiding the prevention of crime and terrorist attacks that diminish our personal freedom. In this study, design science research methodology was utilized in order to ensure scientific rigor while constructing and evaluating artifacts. The requirements for this research were sought in close cooperation with high-level security authorities, and prior research was studied in detail. The created construct, the "Intelligent Video Surveillance System", is a distributed, highly scalable software framework that can function as a basis for any kind of high-performance video surveillance system, from installations focusing on high availability to flexible cloud-based installations that scale across multiple locations and tens of thousands of cameras. First, in order to provide a strong foundation, a modular, distributed system architecture was created, which was then augmented by a multi-sensor analysis process. Thus, the analysis of data from multiple sources, combining video and other sensors in order to automatically detect critical events, was enabled. Further, an intelligent mobile client, the video surveillance local control, which addressed remote access applications, was created. Finally, a wireless self-contained surveillance system was introduced, a novel smart camera concept that enables ad hoc and mobile surveillance. The value of the created artifacts was proven by evaluation at two real-world sites: an international airport, which has a large-scale installation with high-security requirements, and a security service provider offering a multitude of video-based services by operating a video control center with thousands of cameras connected.
APA, Harvard, Vancouver, ISO, and other styles
22

Rodosik, Sandrine. "Etude de l'impact d'architectures fluidiques innovantes sur la gestion, la performance et la durabilité de systèmes de pile à combustible PEMFC pour les transports". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAI090.

Full text
Abstract
Although hydrogen is booming, fuel cell electric vehicles are still rare on the market. Their high volume and complexity remain major hurdles to the development of PEM (Proton Exchange Membrane) systems for transport applications. This PhD work aimed to study two new fluidic circuits that can both simplify the system and reduce its volume: cathodic (air) recirculation, and Ping-Pong, a new fluidic architecture that alternates the fuel-feed location in the stack during operation. The performance of both architectures was studied experimentally under automotive conditions on a 5 kW system. A multi-scale analysis was conducted to compare the system performance, the stack performance and the homogeneity of the cell voltages inside the stack against other known architectures. The study was completed with a Ping-Pong durability test to evaluate the impact of this new operating mode on the fuel cell stack. Again, the experimental data were analyzed at different scales, down to the post-mortem examination of the membrane-electrode assemblies.
APA, Harvard, Vancouver, ISO, and other styles
23

PalChaudhuri, Santashil. "An adaptive sensor network architecture for multi-scale communication". Thesis, 2006. http://hdl.handle.net/1911/18954.

Full text
Abstract
Sensor networking has emerged as a promising tool for monitoring and actuating the devices of the physical world, employing self-organizing networks of battery-powered wireless sensors that can sense, process, and communicate. Such networks can be rapidly deployed at low cost, enabling large-scale, on-demand monitoring and tracking over a wide area. Energy is the most crucial and scarce resource for such networks. However, since sensor network applications generally exhibit specific, limited behaviors, there is both a need and an opportunity to adapt the network architecture to match the application in order to optimize resource utilization. Many applications, such as large-scale collaborative sensing, distributed signal processing, and distributed data assimilation, require sensor data to be available at multiple resolutions, or allow fidelity to be traded off for energy efficiency. In this thesis, I develop an adaptive, cross-layered sensor network architecture that enables multi-scale collaboration and communication. Analyzing the unique characteristics of sensor networks, I identify cross-layering and adaptability to applications as the primary design principles needed to build three closely coupled protocols: (1) a self-organizing, adaptive hierarchical data service for multi-scale communication, together with communication primitives that simplify application design; (2) a medium-scheduling protocol tailored to this hierarchical data service, which exploits the communication and routing characteristics to achieve close-to-optimal latency and energy usage; and (3) an adaptive clock synchronization service, which provides an analytical framework for mapping clock synchronization requirements to actual protocol parameters, in order to provide the required synchronization. I have both analyzed and simulated the performance of these protocols to demonstrate optimized energy utilization.
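As an illustration of multi-scale data access (a sketch of the general idea, not the thesis's hierarchical data service or its API), the snippet below aggregates readings up a cluster tree so that coarse-resolution queries read only summaries while fine-resolution queries descend to the leaves:

```python
class Node:
    def __init__(self, reading=None, children=()):
        self.children = list(children)
        # leaf: raw sample; cluster head: (mean, count) summary of its subtree
        if self.children:
            total = sum(c.summary[0] * c.summary[1] for c in self.children)
            count = sum(c.summary[1] for c in self.children)
            self.summary = (total / count, count)
        else:
            self.summary = (reading, 1)

def query(node, depth):
    """Answer at a chosen resolution: depth 0 reads one summary,
    larger depths descend and trade energy for fidelity."""
    if depth == 0 or not node.children:
        return [node.summary]
    return [s for c in node.children for s in query(c, depth - 1)]

leaves = [Node(reading=20 + i) for i in range(4)]
root = Node(children=[Node(children=leaves[:2]), Node(children=leaves[2:])])
print(query(root, 0))  # coarsest: a single (mean, count) pair
print(query(root, 2))  # finest: per-sensor readings
```

The energy argument is that a coarse query terminates at cluster heads, so leaf radios never need to transmit; only a fidelity-critical query pays for descending the hierarchy.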
APA, Harvard, Vancouver, ISO, and other styles
24

Dong, Jingqi. "Multi-scale hydrological information system using an OGC standards-based architecture". Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-05-3527.

Full text
Abstract
A Multi-Scale Hydrological Information System (HIS) comprises three levels of HIS: the national CUAHSI HIS, the Texas HIS and the local Capital Area Council of Governments (CAPCOG) HIS. The CUAHSI Hydrologic Information System has succeeded in bringing water data together using a Services-Oriented Architecture (SOA). However, maintaining the current metadata catalog service has been problematic. An Open Geospatial Consortium (OGC) standards transformation is under way to convert the current web services into OGC-adopted services and models. The transformation makes the CUAHSI HIS compliant with international OGC standards and gives it the capability to host vast amounts of water data. At a scaled-down level, the Texas HIS has been built for Texas-specific hydrologic data, with the variables and web services listed in this thesis. The CAPCOG emergency response system was initiated for Texas flash-flood warning and includes several data services, such as the USGS NWIS, the City of Austin (COA) and the Lower Colorado River Authority (LCRA). By applying a consistent mechanism, the OGC standards-based SOA, at these three scales of HIS, three catalogs of services can be created within the architecture, and hydrologic data services in different catalogs can be searched across; each catalog of services has a different scale or purpose. A technique called KiWIS, developed by KISTERS, for publishing OGC-standard web services from the WISKI hydrologic database is then described; it has been applied to the City of Austin's water data hosted at CRWR. The OGC standards transformation reviewed in the thesis and the technique described provide a reference for how to synthesize a Multi-Scale HIS within a standard mechanism.
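For concreteness, here is a hedged sketch of the kind of OGC-standard call such a transformed HIS exposes: a Sensor Observation Service (SOS) GetObservation request assembled through its key-value-pair (KVP) binding. The endpoint URL and the offering/observedProperty identifiers are placeholders, not actual CUAHSI, Texas or CAPCOG service values:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "network_streamflow",                       # hypothetical id
    "observedProperty": "urn:ogc:def:property:discharge",   # placeholder URN
    "temporalFilter": "om:phenomenonTime,2011-01-01/2011-01-31",
}
url = "https://example.org/sos?" + urlencode(params)
print(url)  # inspect the request; a live catalog would return O&M XML
# with urlopen(url) as resp:           # uncomment against a real SOS endpoint
#     print(resp.read()[:500])
```

Because every catalog level speaks the same standardized interface, the same client code can be pointed at the national, state or local service simply by changing the base URL.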
APA, Harvard, Vancouver, ISO, and other styles
25

Rodrigues, J. M. F. "Integrated multi-scale architecture of the cortex with application to computer vision". Doctoral thesis, 2007. http://hdl.handle.net/10400.1/413.

Full text
Abstract
Doctoral thesis, Electronics and Computer Engineering, Faculdade de Ciência e Tecnologia, Universidade do Algarve, 2007
The main goal of this thesis is to try to understand the functioning of the visual cortex through the development of computational models. In the input area V1 of the visual cortex there are simple, complex and end-stopped cells. These provide a multi-scale representation of objects and scenes in terms of lines, edges and keypoints. In this thesis we combine recent progress in the development of computational models of these and other cells with processes in the higher cortical areas V2, V4, etc. Three pertinent challenges are discussed: (i) object recognition embedded in a cortical architecture; (ii) brightness perception; and (iii) painterly rendering based on human vision. Specific aspects are Focus-of-Attention by means of keypoint-based saliency maps, the dynamic routing of features from V1 through higher cortical areas in order to obtain translation, rotation and size invariance, and the construction of normalized object templates with canonical views in visual memory. Our simulations show that the multi-scale representations can be integrated into a cortical architecture in order to model subsequent processing steps: from segregation, via different categorization levels, until final object recognition is obtained. As in real cortical processing, the system starts with coarse-scale information, refines categorization by using medium-scale information, and employs all scales in recognition. We also show that a 2D brightness model can be based on the multi-scale symbolic representation of lines and edges, with an additional low-pass channel and nonlinear amplitude transfer functions, such that object recognition and brightness perception are combined processes based on the same information. The brightness model can predict many different effects, such as Mach bands, grating induction, the Craik-O'Brien-Cornsweet illusion and brightness induction, i.e. the opposite effects of assimilation (White effect) and simultaneous brightness contrast. Finally, a novel application is introduced: painterly rendering has previously been linked to computer vision, but we propose to link it to human vision, because perception and painting are two processes that are strongly interwoven.
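As a toy illustration of the V1 machinery the thesis builds on (not its actual model), the sketch below implements simple cells as quadrature Gabor pairs and a complex cell as their energy, evaluated at several scales on a synthetic edge; the kernel size, bandwidth factor and scales are arbitrary demo choices:

```python
import numpy as np

def gabor(size, wavelength, theta):
    """Quadrature pair of Gabor kernels (even/odd simple cells)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * (0.4 * wavelength) ** 2))
    return (env * np.cos(2 * np.pi * xr / wavelength),
            env * np.sin(2 * np.pi * xr / wavelength))

def complex_cell(img, wavelength, theta, k=15):
    """Orientation energy = magnitude of the quadrature-pair responses."""
    even, odd = gabor(k, wavelength, theta)
    H, W = img.shape[0] - k + 1, img.shape[1] - k + 1
    out = np.empty((H, W))
    for i in range(H):              # brute-force valid correlation keeps
        for j in range(W):          # the demo dependency-free
            patch = img[i:i + k, j:j + k]
            out[i, j] = np.hypot((patch * even).sum(), (patch * odd).sum())
    return out

img = np.zeros((64, 64)); img[:, 32:] = 1.0   # synthetic vertical edge
for lam in (4, 8, 16):                        # coarse-to-fine scales
    e = complex_cell(img, lam, theta=0.0)
    print(lam, float(e.max()))                # peak response sits on the edge
```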
APA, Harvard, Vancouver, ISO, and other styles
26

Hsu, Ya-Chun and 許雅淳. "A Qemu-based Multi-core Simulator with Flexible Performance Model for Large Scale Architecture Exploration". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/19969845988787135254.

Full text
Abstract
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
100 (ROC year, i.e., 2011)
It is now clear from current industry trends that processors with hundreds or thousands of cores will eventually be available. To accelerate hardware development, simulation of future multicore architectures, which have huge computational resources and are more complex than current machines, is unavoidable. This thesis builds a trace-driven simulator based on Qemu with a flexible performance model. The trace mechanism is implemented in Qemu and supports exchanging information with the performance model. Users can control when to start and stop tracing; to achieve this, the thesis adds a device to the target architecture. The trace mechanism can also filter out kernel-mode information and allow only user-mode information to produce performance statistics. Based on swappable modules, the modular performance-model design offers a programming interface for integration with other customized hardware modules. Users can use the provided module interface to write their own modules for detailed timing models or performance-demand models.
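A minimal sketch of what such a swappable performance-model interface could look like, with the Qemu trace front-end reduced to a plain list of events; the class names, the kernel-address threshold and the cycle costs are illustrative assumptions, not the simulator's actual API:

```python
from dataclasses import dataclass

@dataclass
class TraceEvent:
    core: int
    pc: int
    is_mem: bool
    addr: int = 0

class PerfModule:
    """Interface every swappable timing module implements."""
    def on_event(self, ev: TraceEvent) -> int:
        """Return the cycles charged for this event."""
        raise NotImplementedError

class SimpleTimingModel(PerfModule):
    """One cycle per instruction plus a flat memory penalty: a placeholder
    users would replace with a detailed cache/pipeline model."""
    def on_event(self, ev):
        return 1 + (10 if ev.is_mem else 0)

def run(trace, model: PerfModule, user_mode_only=True):
    cycles = 0
    for ev in trace:
        if user_mode_only and ev.pc >= 0xC0000000:
            continue  # mimic filtering out kernel-mode records
        cycles += model.on_event(ev)
    return cycles

trace = [TraceEvent(0, 0x1000, False), TraceEvent(0, 0x1004, True, 0xBEEF),
         TraceEvent(0, 0xC0000010, False)]  # kernel-mode PC gets filtered
print(run(trace, SimpleTimingModel()))      # -> 12
```

The point of the design is that the trace producer and the timing consumer only share the event type, so a user can swap `SimpleTimingModel` for a detailed cache model without touching the front-end.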
APA, Harvard, Vancouver, ISO, and other styles
27

Pal, Biswajit. "A multi-physics-based modelling approach to predict mechanical and thermo-mechanical behaviour of cementitious composite in a multi-scale framework". Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6069.

Full text
Abstract
Concrete is a heterogeneous material whose constituents (e.g., cement paste, aggregate, etc.) range from a characteristic length scale of a nanometre to a metre. Owing to the heterogeneity of concrete and the contrasting responses of its constituents (cement paste, aggregate) at ambient and high temperatures, applying a homogeneous macroscopic model to predict concrete's mechanical and thermo-mechanical performance is questionable. Hence, in this thesis, multiple physical and chemical processes that occur within the concrete constituents at different length scales are considered, and a multi-scale model is developed to study the mechanical and thermo-mechanical behaviour of concrete in a hygral-thermal-chemical-mechanical (HTCM) framework. First, the governing equations of the HTCM processes are described at the meso-scale, a length scale at which coarse aggregate is explicitly modelled in a binding medium called mortar. After that, a hierarchical homogenization approach is employed, and the evolution of mechanical properties, etc., is upscaled (from micro to meso) and used at the meso-scale. The proposed methodology is then used to predict the evolution of mechanical properties (e.g., compressive strength) and time-dependent deformation (e.g., shrinkage and creep) of cement paste, mortar and concrete for a wide variety of factors (e.g., type and content of constituents, different curing conditions, etc.). As for ambient conditions, the developed model is used to simulate thermo-mechanical responses (e.g., in terms of spalling, deformation, residual capacity, etc.) of both plain and reinforced concrete structural elements. Further, the effect of several other meso- and macroscopic parameters (e.g., the interfacial transition zone, aggregate shape, random configurations of aggregates, etc.) on concrete's mechanical and thermo-mechanical behaviour is studied numerically at the meso-scale. Validation of the proposed methodology against the available experimental results at both ambient and high temperatures, for a wide variety of cases, highlights the general applicability of the model. It is shown that on several occasions existing macro, meso or multi-scale models are unable to reproduce the mechanical and thermo-mechanical behaviour of concrete structures; such limitations can be overcome with the presently developed approach. Further, empiricism in several calibrated parameters of the existing thermal-hygral-mechanical macroscopic models (associated with elasticity, strength, shrinkage and creep prediction) can be avoided by using the presently developed multi-scale and multi-physics-based methodology. Similarly, simulated results at high temperatures highlight several crucial aspects related to obtaining a more precise residual capacity of a concrete structure, which is impossible to reproduce with a homogenized macroscopic model. For instance, the spalling of random concrete parts at different times during high-temperature exposure cannot be simulated under a homogenized assumption. Further, unlike macroscopic models, a mesoscopic model does not require transient creep strain to be specified explicitly in the analysis: the primary mechanisms behind this transient creep strain are implicitly taken into account in the developed meso-scale model, which yields such advantages.
Ministry of Human Resource Development, Government of India
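As a toy illustration of the micro-to-meso upscaling step described above (far simpler than the hierarchical homogenization of the thesis), the sketch below bounds the effective stiffness of a paste/aggregate mixture with the classical Voigt and Reuss mixture rules; the moduli and volume fractions are assumed demonstration values, not calibrated concrete data:

```python
def voigt(E, f):
    """Upper bound: phases strained equally (parallel arrangement)."""
    return sum(Ei * fi for Ei, fi in zip(E, f))

def reuss(E, f):
    """Lower bound: phases stressed equally (series arrangement)."""
    return 1.0 / sum(fi / Ei for Ei, fi in zip(E, f))

E = [20.0, 70.0]   # GPa: cement paste, aggregate (assumed values)
f = [0.35, 0.65]   # volume fractions summing to 1
print(f"Voigt upper bound: {voigt(E, f):.1f} GPa")   # 52.5 GPa
print(f"Reuss lower bound: {reuss(E, f):.1f} GPa")   # 37.3 GPa
```

A hierarchical scheme applies estimates of this kind level by level, so that the homogenized output at one scale (e.g., paste) becomes the matrix input at the next (e.g., mortar, then concrete).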
APA, Harvard, Vancouver, ISO, and other styles
28

Patel, Parita. "Compilation of Graph Algorithms for Hybrid, Cross-Platform and Distributed Architectures". Thesis, 2017. http://etd.iisc.ac.in/handle/2005/3803.

Full text
Abstract
1. Main Contributions made by the supplicant: This thesis proposes an Open Computing Language (OpenCL) framework to address the challenges of implementing graph algorithms on parallel architectures and of large-scale graph processing. The proposed framework uses the front-end of the existing Falcon DSL compiler, and so programmers enjoy a conventional, imperative, shared-memory programming style. The back-end of the framework generates implementations of graph algorithms in OpenCL to target single-device architectures. The generated OpenCL code is portable across platforms (e.g., CPU and GPU) and vendors (e.g., NVIDIA, Intel and AMD). The framework automatically generates code for thread management and memory management for the devices, hiding all the lower-level programming details from the programmer. A few optimizations are applied to reduce the execution time. The large-graph-processing challenge is tackled through graph partitioning over multiple devices of a single node and multiple nodes of a distributed cluster. The programmer codes a graph algorithm in Falcon assuming that the graph fits into a single machine's memory, and the framework handles graph partitioning without any intervention by the programmer. The framework analyses the Abstract Syntax Tree (AST) generated by Falcon to find all the necessary information about communication and synchronization, and automatically generates message-passing code to hide the complexity of programming in a distributed environment. The framework also applies a set of optimizations to minimize communication latency. The thesis reports results of several experiments conducted on widely used graph algorithms: single-source shortest path, pagerank and minimum spanning tree, to name a few. Experimental evaluations show that the reported results are comparable to state-of-the-art non-portable graph DSLs and frameworks on a single node. Experiments in a distributed environment that show the scalability and efficiency of the framework are also described.

2. Summary of the Referees' Written Comments: Extracts from the referees' reports are provided below. A copy of the written replies to the clarifications sought by the external examiner is appended to this report.

Referee 1: This thesis extends the Falcon framework with OpenCL for parallel graph processing on multi-device and multi-node architectures. The thesis makes important contributions. Processing large graphs in a short time is very important, and making use of multiple nodes and devices is perhaps the only way to achieve this. Towards this, the thesis makes good contributions to easy programming, compiler transformations and efficient runtime systems. One of the commendable aspects of the thesis is that it demonstrates with graphs that cannot be accommodated in the memory of a single device. The thesis is generally well written. The related-work coverage is very good. The magnitude of the thesis is excellent for a Master's work. The experimental setup is very comprehensive, with a good set of graphs, good experimental comparisons with state-of-the-art works and good platforms. In particular, the demonstration with a GPU cluster with multiple GPU nodes (Chapter 5) is excellent. The attempt to demonstrate scalability with 2, 4 and 8 nodes is also noteworthy. However, the contributions on optimizations are weak: most of the optimizations and compiler transformations are straightforward. There should be summary observations on the results in Chapter 3, especially given that the results are mixed and do not clearly convey the advantages of the work. The same is the case with the multi-device results in Chapter 4, where the results are once again mixed. Similarly, the speedups and scalability achieved with multiple nodes are not great. The problem-size justification in the multi-node results is not clear. (Referee 1 also indicates a couple of minor changes to the thesis.)

Referee 2: The thesis uses the OpenCL framework to address the problem of programming graph algorithms on distributed systems. The use of OpenCL ensures that the generated code is platform-agnostic and vendor-agnostic. Sufficient experimentation with large-scale graphs and reasonably sized clusters has been conducted to demonstrate the scalability and portability of the code generated by the framework. The automatically generated code is almost as efficient as manually written code. The thesis is well written and of high quality. The related-work section is well organized and displays a good knowledge of the subject matter under consideration. The author has made important contributions to a good publication as well.

3. An Account of the Open Oral Examination: The oral examination of Ms. Parita Patel took place between 10 AM and 11 AM on 27 November 2017, in the Seminar Hall of the Department of Computer Science and Automation. The members of the Oral Examination Board present were Prof. Sathish Vadhiyar (external examiner) and Prof. Y. N. Srikant (research supervisor). The candidate presented the work in an open defense seminar, highlighting the problem domain, the methodology used, the investigations carried out by her, and the resulting contributions documented in the thesis, before an audience consisting of the examiners, some faculty members, and students. Some of the questions posed by the examiners and the members of the audience during the oral examination are listed below.

1. How much is the overlap between the Falcon work and this thesis? Response: We have used the Falcon front-end in our work. Further, the existing Falcon compiler was useful to us to test our own implementation of algorithms in Falcon.

2. Why are the speedup and scalability not very high with multiple nodes? Response: For the multi-node architecture, we were not able to achieve linear scalability because, with the increase in the number of nodes, communication cost increases significantly. Unless the computation cost in the nodes is significant and much larger than the communication cost, this is bound to happen.

3. Do you have plans to make the code available for use by the community? Response: The code includes some part of the Falcon implementation (front-end parsing/grammar) as well. After discussion with the author of Falcon, the code can be made available to the community.

4. How can a graph that does not fit into a single device fit into a single node in the case of multiple nodes? Response: The single-node machine used in the experiments on the "multi-device architecture" contains multiple devices, while each node used in the experiments on the "multi-node architecture" contains only a single device. So a graph which does not fit into single-node-single-device memory can fit into single-node-multi-device memory after partitioning.

5. Is there a way to permit morph algorithms to be coded in your framework? Response: Currently, our framework does not translate morph algorithms. Supporting morph algorithms will require some kind of runtime system to manage memory on the GPU, since morph algorithms add and remove vertices and edges of the graph dynamically. This can be explored further in future work.

6. Is it possible to accommodate FPGA devices in your framework? Response: Yes, we can support FPGA devices (or any other device that is compatible with OpenCL) just by specifying the device type in the command-line argument. We did not work with other devices because CPUs and GPUs are generally used to process graph algorithms.

The candidate provided satisfactory answers to all the questions posed and the clarifications sought by the audience and the examiners during the presentation. The candidate's overall performance during the open defense and the oral examination was very satisfactory to the oral examination board.

4. Certificate of Corrections and Changes: All the necessary corrections and changes suggested by the examiners have been made in the thesis, and these have been verified by the members of the oral examination board. The thesis has been recommended for acceptance in its revised form.

5. Final Recommendation: In view of the recommendations of the referees and the satisfactory performance of the candidate in the oral examination, the oral examination board recommends that the thesis of Ms. Parita Patel be accepted for the award of the M.Sc.(Engg.) degree of the Institute.

Response to the comments by the external examiner on the M.Sc.(Engg.) thesis "Compilation of Graph Algorithms for Hybrid, Cross-Platform, and Distributed Architectures" by Parita Patel:

1. Comment: The contributions on optimizations are weak. Response: The novelty of this thesis is to make Falcon platform-agnostic and, additionally, to process large-scale graphs on multiple devices of a single node and on multi-node clusters seamlessly. Our framework performs similarly to the existing frameworks but, at the same time, targets several types of architectures, which is not possible in the existing works. Advanced optimizations are beyond the scope of this thesis.

2. Comment: The translation of Falcon to OpenCL is simple. Response: While the translation of Falcon to OpenCL was not hard, figuring out the details of the translation for multi-device and multi-node architectures was not simple. For example, the design of implementations for collections, sets, global variables, concurrency, etc., was non-trivial. These designs have already been explained in the appropriate places in the thesis. Further, such large software introduced its own intricacies during development.

3. Comment: The lines between the Falcon work and this work are not clear. Response: Appendix A shows the Falcon implementation of all the algorithms which we used to run the experiments. We compiled these Falcon implementations through our framework, subsequently ran the generated code on different types of target architectures, and compared the results with the code generated by other frameworks. These Falcon programs were written by us. We have also used the front-end of the Falcon compiler, and this has already been stated in the thesis (page 16).

4. Comment: There should be a summary of observations in Chapter 3. Response: Summaries of observations have been added to Chapter 3 (pages 35-36), Chapter 4 (page 46), and Chapter 5 (page 51) of the thesis.

5. Comment: The speedup and scalability achieved with multiple nodes are not great. Response: For the multi-node architecture, we were not able to achieve linear scalability because, with the increase in the number of nodes, communication cost increases significantly. Unless the computation cost in the nodes is significant and much larger than the communication cost, this is bound to happen.

6. Comment: It would be good to separate the related-work coverage into a separate chapter. Response: The related work is coherent with the flow of Chapter 1. It consists of just 4.5 pages, and separating it into its own chapter would make both (the rest of) Chapter 1 and the new chapter very small. Therefore, we do not recommend it.

7. Comment: The code should be made available for use by the community. Response: The code includes some part of the Falcon code (front-end parsing/grammar) as well. After discussion with the author of Falcon, the code can be made available to the community.

8. Comment: Page 28: Shouldn't the else part be inside the kernel? Response: There was some missing text and a few minor changes in Figure 3.14 (page 28), which have been incorporated in the corrected thesis.

9. Comment: Figure 4.1 needs to be explained better. Response: An explanation of Figure 4.1 (pages 38-39) has been added to the thesis.

10. Comment: The problem-size justification in the multi-node results is not clear. Response: The single-node machine used in the experiments on the "multi-device architecture" contains multiple devices, while each node used in the experiments on the "multi-node architecture" contains only a single device. So a graph which does not fit into single-node-single-device memory can fit into single-node-multi-device memory after partitioning.

Name of the Candidate: Parita Patel (S.R. No. 04-04-00-10-21-14-1-11610). Degree Registered: M.Sc.(Engg.). Department: Computer Science & Automation. Title of the Thesis: Compilation of Graph Algorithms for Hybrid, Cross-Platform and Distributed Architectures.

Abstract: Graph algorithms are abundantly used in various disciplines. These algorithms perform poorly due to random memory accesses and negligible spatial locality. In order to improve performance, the parallelism exhibited by these algorithms can be exploited by leveraging modern high-performance parallel computing resources. Implementing graph algorithms for these parallel architectures requires manual thread management and memory management, which becomes tedious for a programmer. Large-scale graphs cannot fit into the memory of a single machine. One solution is to partition the graph either on multiple devices of a single node or on multiple nodes of a distributed network. All the available frameworks for such architectures demand unconventional programming, which is difficult and error-prone. To address these challenges, we propose a framework for the compilation of graph algorithms written in an intuitive graph domain-specific language, Falcon. The framework targets shared-memory parallel architectures, computational accelerators and distributed architectures (CPU and GPU clusters). First, it analyses the abstract syntax tree (generated by Falcon) and gathers essential information. Subsequently, it generates optimized code in OpenCL for shared-memory parallel architectures and computational accelerators, and OpenCL coupled with MPI code for distributed architectures. The motivation behind generating OpenCL code is its platform-agnostic and vendor-agnostic behavior, i.e., it is portable to all kinds of devices. Our framework makes memory management, thread management, message passing, etc., transparent to the user.
None of the available domain-specific languages, frameworks or parallel libraries handle portable implementations of graph algorithms. Experimental evaluations demonstrate that the generated code performs comparably to the state-of-the-art non-portable implementations and hand-tuned implementations. The results also show portability and scalability of our framework.
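To illustrate the vertex-parallel pattern that such generated OpenCL code expresses, here is a plain-Python sketch of lock-step (Bellman-Ford-style) single-source shortest path relaxation; in the framework each round would correspond to one kernel launch over the edges, so this is an illustration of the pattern, not the generated code:

```python
INF = float("inf")

def sssp(num_vertices, edges, src):
    dist = [INF] * num_vertices
    dist[src] = 0
    for _ in range(num_vertices - 1):      # Bellman-Ford round bound
        new = dist[:]                      # lock-step: read old, write new
        for u, v, w in edges:              # each iteration ~ one work-item
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
        if new == dist:
            break                          # fixpoint: no further "launches"
        dist = new
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(sssp(4, edges, 0))  # -> [0, 3, 1, 4]
```

The double-buffered `dist`/`new` arrays mirror what a generated kernel must do anyway: work-items relax edges concurrently, so each round reads a consistent snapshot and the host re-launches until no distance changes.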
APA, Harvard, Vancouver, ISO, and other styles
29

Patel, Parita. "Compilation of Graph Algorithms for Hybrid, Cross-Platform and Distributed Architectures". Thesis, 2017. http://etd.iisc.ernet.in/2005/3803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Rosner, Jakub. "Methods of parallelizing selected computer vision algorithms for multi-core graphics processors". Doctoral thesis, 2015. https://repolis.bg.polsl.pl/dlibra/docmetadata?showContent=true&id=28390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Rosner, Jakub. "Methods of parallelizing selected computer vision algorithms for multi-core graphics processors". Doctoral thesis, 2015. https://delibra.bg.polsl.pl/dlibra/docmetadata?showContent=true&id=28390.

Full text
APA, Harvard, Vancouver, ISO, and other styles