Dissertations / Theses on the topic 'Architecture analytics'




Consult the top 50 dissertations / theses for your research on the topic 'Architecture analytics.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Ho, Quan. "Architecture and Applications of a Geovisual Analytics Framework." Doctoral thesis, Linköpings universitet, Medie- och Informationsteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-91679.

Full text
Abstract:
The large and ever-increasing amounts of multi-dimensional, multivariate, multi-source, spatio-temporal data represent a major challenge for the future. The need to analyse and make decisions based on these data streams, often in time-critical situations, demands integrated, automatic and sophisticated interactive tools that aid the user to manage, process, visualize and interact with large data spaces. The rise of 'Web 2.0', which is undisputedly linked with developments such as blogs, wikis and social networking, and the internet usage explosion in the last decade represent another challenge for adapting these tools to the Internet to reach a broader user community. In this context, the research presented in this thesis introduces an effective web-enabled geovisual analytics framework implemented, applied and verified in Adobe Flash ActionScript and HTML5/JavaScript. It has been developed based on the principles behind Visual Analytics and designed to significantly reduce the time and effort needed to develop customized web-enabled applications for geovisual analytics tasks and to bring the benefits of visual analytics to the public. The framework has been developed based on a component architecture and includes a wide range of visualization techniques enhanced with various interaction techniques and interactive features to support better data exploration and analysis. The importance of multiple coordinated and linked views is emphasized and a number of effective techniques for linking views are introduced. Research has so far focused more on tools that explore and present data while tools that support capturing and sharing gained insight have not received the same attention. Therefore, this is one of the focuses of the research presented in this thesis. A snapshot technique is introduced, which supports capturing discoveries made during the exploratory data analysis process and can be used for sharing gained knowledge. The thesis also presents a number of applications developed to verify the usability and the overall performance of the framework for the visualization, exploration and analysis of data in different domains. Four application scenarios are presented introducing (1) the synergies among information visualization methods, geovisualization methods and volume data visualization methods for the exploration and correlation of spatio-temporal ocean data, (2) effective techniques for the visualization, exploration and analysis of self-organizing network data, (3) effective flow visualization techniques applied to the analysis of time-varying spatial interaction data such as migration data, commuting data and trade flow data, and (4) effective techniques for the visualization, exploration and analysis of flood data.
2

Reuterswärd, John. "Implementation & architecture of a cloud-based data analytics pipeline." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-301335.

Full text
Abstract:
Organizations can gain deeper insight into the usage of their products and services by analyzing data from events as they happen in real time. Examples of such data include user login attempts, feature usage, sign-ups, and client logs, as well as other domain- and application-specific events. The high frequency at which these types of events are generated, combined with the large volume of data they occupy once stored, presents many interesting challenges. Most importantly, it is difficult to predict what type of analytics data will provide useful insights in the future. This master's thesis documents the architecture of a data analytics pipeline based on the principle of decoupled components to meet this challenge. It is shown that by extending the concepts of the publish & subscribe pattern and by leveraging cloud-based services, the development of such a system is achievable even for smaller organizations.
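The decoupling principle this abstract leans on is easiest to see in miniature. Below is an illustrative Python sketch (not from the thesis) of a publish & subscribe event bus; in the thesis's setting the in-memory bus would be replaced by a managed cloud messaging service, and all names here are hypothetical:

```python
# Minimal in-memory publish & subscribe bus; illustrative only.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Each consumer is independent: adding a new analytics sink later
        # requires no change to the producers.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
bus.subscribe("login_attempt", lambda e: print("archive:", e))
bus.subscribe("login_attempt", lambda e: print("realtime dashboard:", e))
bus.publish("login_attempt", {"user": "alice", "success": True})
```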
3

Ajiboye, Soladoye Oyebowale. "Video big data : an agile architecture for systematic exploration and analytics." Thesis, University of Sussex, 2017. http://sro.sussex.ac.uk/id/eprint/71047/.

Full text
Abstract:
Video is currently at the forefront of most business and natural environments. In surveillance, it is the most important technology, as surveillance systems reveal information and patterns for solving many security problems, including crime prevention. This research investigates the technologies that currently drive video surveillance systems with a view to optimization and automated decision support. The investigation reveals features and properties that can be optimised to improve performance and derive further benefits from surveillance systems. These aspects include system-wide architecture, meta-data generation, meta-data persistence, object identification, object tagging, object tracking, and the search and querying sub-systems. The current less-than-optimum performance is attributable to many factors, including the massive volume, variety, and velocity (the speed at which streaming video is transmitted to storage) of video data in surveillance systems. The research contributions are two-fold. First, we propose a system-wide architecture for designing and implementing surveillance systems, based on our system architecture for generating meta-data. Second, we design a simulation model of a multi-view surveillance system from which we generate simulated video streams in large volumes. From each video sequence in the model, we extract meta-data and apply a novel algorithm for predicting the location of identifiable objects across a well-connected camera cluster. This research provides evidence that independent surveillance systems (for example, security cameras) can be unified across a geographical location such as a smart city, where each network is administratively owned and managed independently. Our investigation involved two experiments. The first was the implementation of a web-based solution in which we developed a directory service for managing, cataloguing, and persisting the metadata generated by the surveillance networks. The second focused on the set-up, configuration and architecture of the surveillance system. These experiments involved the investigation and demonstration of three loosely coupled service-oriented APIs, which provided the capability to generate the query-able metadata. The results of our investigations answered our research questions, the main question being "to what degree of accuracy can we predict the location of an object in a connected surveillance network?". Our experiments also provided evidence in support of our hypothesis: "it is feasible to 'explore' unified surveillance data generated from independent surveillance networks".
4

Carle, William R. II. "Active Analytics: Adapting Web Pages Automatically Based on Analytics Data." UNF Digital Commons, 2016. http://digitalcommons.unf.edu/etd/629.

Full text
Abstract:
Web designers are expected to perform the difficult task of adapting a site's design to fit changing usage trends. Web analytics tools give designers a window into website usage patterns, but these patterns must be analyzed and applied to a website's user interface design manually. A framework for marrying live analytics data with user interface design could allow for interfaces that adapt dynamically to usage patterns, with little or no action from the designers. The goal of this research is to create a framework that utilizes web analytics data to automatically update and enhance web user interfaces. In this research, we present a solution for extracting analytics data from Google Analytics via web services and transforming them into reporting data that will inform user interface improvements. Once data are extracted and summarized, we expose the summarized reports via our own web services in a form that can be used by our client-side User Interface (UI) framework. This client-side framework dynamically updates the content and navigation on the page to reflect the data mined from the web usage reports. The resulting system reacts to changing usage patterns of a website and updates the user interface accordingly. We evaluated our framework by assigning navigation tasks to users on the UNF website and measuring the time it took them to complete those tasks, one group with our framework enabled and one group using the original website. We found that the group that used the modified version of the site with our framework enabled was able to navigate the site more quickly and effectively.
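As a rough illustration of feeding summarized analytics back into a client-side UI, here is a hedged Python sketch; the endpoint, report format, and function names are assumptions for illustration, not the thesis's actual web services:

```python
# Illustrative sketch: fetch a summarized usage report from a hypothetical
# web service and rank navigation links by visit counts.
import json
from urllib.request import urlopen

REPORT_URL = "https://example.edu/api/usage-summary"  # hypothetical endpoint

def ranked_navigation(default_links: list[str]) -> list[str]:
    with urlopen(REPORT_URL) as resp:
        report = json.load(resp)  # e.g. {"/admissions": 5120, "/library": 870}
    # Most-visited pages float to the front; unseen pages keep default order.
    return sorted(default_links, key=lambda p: -report.get(p, 0))
```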
5

Villalon, Rachelle B. (Rachelle Bentajado). "Data mining, inference, and predictive analytics for the built environment with images, text, and WiFi data." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/115448.

Full text
Abstract:
Thesis: Ph. D. in Architecture Design and Computation, Massachusetts Institute of Technology, Department of Architecture, June 2017.
Cataloged from PDF version of thesis. "February 2017."
Includes bibliographical references (pages 190-194).
What can campus WiFi data tell us about life at MIT? What can thousands of images tell us about the way people see and occupy buildings in real-time? What can we learn about the buildings that millions of people snap pictures of and text about over time? Crowdsourcing has triggered a dramatic shift in the traditional forms of producing content. The increasing number of people contributing to the Internet has created big data that has the potential to 1) enhance the traditional forms of spatial information that the design and engineering fields are typically accustomed to; 2) yield further insights about a place or building from discovering relationships between the datasets. In this research, I explore how the Architecture, Engineering, and Construction (AEC) industry can exploit crowdsourced and non-traditional datasets. I describe its possible roles for the following constituents: historian, designer/city administrator, and facilities manager - roles that engage with a building's information in the past, present, and future with different goals. As part of this research, I have developed a complete software pipeline for data mining, analyzing, and visualizing large volumes of crowdsourced unstructured content about MIT and other locations from images, campus WiFi access points, and text in batch/real-time using computer vision, machine learning, and statistical modeling techniques. The software pipeline is used for exploring meaningful statistical patterns from the processed data.
by Rachelle B. Villalon.
Ph. D. in Architecture Design and Computation
6

O'Toole, Ryan (Ryan Michael). "Red Ink : open source financial analytics for people & communities." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62079.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Red Ink is an open source social-financial web-service that enables people to share, aggregate, analyze, visualize and publish their financial transactions as individuals and ad-hoc groups, through data sharing campaigns. Virtual and geographic communities of financial data sharers can form on Red Ink to create new sources of information for self-knowledge and understanding of complex personal, community, economic, environmental, and civic concerns and how to better coordinate their solutions. Red Ink posits that just like volunteering time or donating money, personal financial data is itself an asset that people can share to gain group leverage. Further, in the hands of everyday people, the data and tools of corporate scale consumer analysis will be reborn to serve larger and more personally meaningful goals.
by Ryan O'Toole.
S.M.
7

Anne, Chaitanya. "Advanced Text Analytics and Machine Learning Approach for Document Classification." ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2292.

Full text
Abstract:
Text classification is used in information extraction and retrieval from a given text, and it has come to be considered an important step in managing the vast and ever-expanding number of records available in digital form. This thesis addresses the problem of classifying patent documents into fifteen different categories or classes, where some classes overlap with others for practical reasons. For the development of the classification model using machine learning techniques, useful features have been extracted from the given documents. The features are used to classify patent documents as well as to generate useful tag-words. The overall objective of this work is to systematize NASA's patent management by developing a set of automated tools that can assist NASA in managing and marketing its portfolio of intellectual properties (IP), and enable easier discovery of relevant IP by users. We have identified an array of methods that can be applied, such as k-Nearest Neighbors (kNN), two variations of the Support Vector Machine (SVM) algorithm, and two tree-based classification algorithms: Random Forest and J48. The major research steps in this work consist of filtering techniques for variable selection, information gain and feature correlation analysis, and training and testing potential models using effective classifiers. Further, the obstacles associated with the imbalanced data were mitigated by adding synthetic data wherever appropriate, which resulted in a superior SVM-classifier-based model.
8

Flöring, Stefan [Verfasser], Hans-Jürgen [Akademischer Betreuer] Appelrath, and Tobias [Akademischer Betreuer] Isenberg. "KnoVA: a Reference Architecture for Knowledge-based Visual Analytics / Stefan Flöring. Betreuer: Hans-Jürgen Appelrath ; Tobias Isenberg." Oldenburg : BIS der Universität Oldenburg, 2012. http://d-nb.info/1050299485/34.

Full text
9

Flöring, Stefan [Verfasser], Hans-Jürgen [Akademischer Betreuer] Appelrath, and Tobias [Akademischer Betreuer] Isenberg. "KnoVA: a Reference Architecture for Knowledge-based Visual Analytics / Stefan Flöring. Betreuer: Hans-Jürgen Appelrath ; Tobias Isenberg." Oldenburg : BIS der Universität Oldenburg, 2012. http://nbn-resolving.de/urn:nbn:de:gbv:715-oops-15310.

Full text
10

Moscoso-Zea, Oswaldo. "A Hybrid Infrastructure of Enterprise Architecture and Business Intelligence & Analytics to Empower Knowledge Management in Education." Doctoral thesis, Universidad de Alicante, 2019. http://hdl.handle.net/10045/97408.

Full text
Abstract:
The large volumes of data (Big Data) generated on a global scale and within organizations, along with the knowledge that resides in people and in business processes, make organizational knowledge management (KM) very complex. The right KM approach can be a source of opportunities and competitive advantage for organizations that use their data intelligently and subsequently generate knowledge from them. Two of the fields that support KM and that have seen accelerated growth in recent years are business intelligence (BI) and enterprise architecture (EA). On the one hand, BI allows taking advantage of the information stored in data warehouses using operations such as slice, dice, roll-up, and drill-down; this information is obtained from the operational databases through an extraction, transformation, and loading (ETL) process. On the other hand, EA allows institutions to establish methods that support the creation, sharing and transfer of the knowledge that resides in people and processes through the use of blueprints and models. One of the objectives of KM is to create a culture in which tacit knowledge (knowledge that resides in a person) stays in an organization when qualified and expert personnel leave the institution, or when changes are required in the organizational structure, in computer applications or in the technological infrastructure. In higher education institutions (HEIs), not having an adequate KM approach to handle data is an even greater problem due to the nature of this industry. Generally, HEIs have very little interdependence between departments and faculties; in other words, there is low standardization, redundancy of information, and constant duplication of applications and functionalities across departments, which makes these organizations inefficient. That is why the research performed within this dissertation has focused on finding an adequate KM method and on identifying the right technological infrastructure to support the management of information across all the knowledge dimensions: people, processes and technology. All of this has the objective of discovering innovative mechanisms to improve education and the service that HEIs offer to their students and teachers by improving their processes. Despite the existence of some initiatives and papers on KM frameworks, we were not able to find a standard framework that supports or guides KM initiatives. In addition, the KM frameworks found in the literature do not present practical mechanisms to gather and analyze all the knowledge dimensions in a way that facilitates the implementation of KM projects. The core contribution of this thesis is a hybrid infrastructure for KM based on EA and BI, developed through research using an empirical approach and taking the framework developed for KM as a reference. The proposed infrastructure will help HEIs to improve education in a general way by analyzing reliable, cleaned data and integrating analytics from the perspective of EA. EA analytics takes into account the interdependence between the objects that make up the organization: people, processes, applications, and technology. The presented infrastructure opens the doors to different research projects that extend the kinds of knowledge that can be generated by integrating the information from applications found in data warehouses with the information about people and organizational processes found in EA repositories.
In order to validate the proposal, a case study was carried out within a university, with promising initial results. As future work, it is planned that different HEI activities will be automated through a software development methodology based on EA models. In addition, we aim to develop a KM system that allows the generation of different and new types of analytics, which would be impossible to obtain with only transactional or multidimensional databases.
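For readers unfamiliar with the OLAP operations named above, here is a toy Python/pandas illustration (not from the thesis) of slice, dice, roll-up, and drill-down on a small fact table:

```python
# Toy fact table; all data invented for illustration.
import pandas as pd

facts = pd.DataFrame({
    "faculty":  ["Eng", "Eng", "Sci", "Sci"],
    "year":     [2018, 2019, 2018, 2019],
    "enrolled": [120, 135, 80, 95],
})

slice_2019 = facts[facts["year"] == 2019]                            # slice: fix one dimension
dice = facts[(facts["year"] == 2019) & (facts["faculty"] == "Eng")]  # dice: fix several
roll_up = facts.groupby("faculty")["enrolled"].sum()                 # roll-up: aggregate years away
drill_down = facts.groupby(["faculty", "year"])["enrolled"].sum()    # drill-down: finer grain
```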
11

Demirel, Kutukcu Seher. "Using Google Analytics And Think-aloud Study For Improving The Information Architecture Of Metu Informatics Institute Website: A Case Study." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612584/index.pdf.

Full text
Abstract:
Today, websites are important communication channels that reach a wide group of people. Measuring the effectiveness of these websites has become a key issue for researchers as well as practitioners. However, there is no consensus on how to define website effectiveness and which dimensions need to be used for its evaluation. This problem is more noteworthy for information-driven websites like academic websites, and there is limited academic literature in this predominant application area. The existing studies measured the effectiveness of academic websites by taking into account their information architecture, mostly using the think-aloud methodology. However, there is limited study of web analytics tools, which are capable of providing valuable information regarding website users, such as their navigation behaviours and browser details. Although web analytics tools provide detailed and valuable information, the existing studies have utilized only their most basic features. In this thesis, we have explored web analytics tools and the think-aloud study method as means to improve the information architecture of websites. Taking the METU Informatics Institute website as a case study, we have used reports from Google Analytics, a commercial web analytics tool owned by Google, together with think-aloud study results to improve the information architecture of the site.
12

Deodhar, Suruchi. "Data Integration Methodologies and Services for Evaluation and Forecasting of Epidemics." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/71303.

Full text
Abstract:
Most epidemiological systems described in the literature are built for evaluation and analysis of specific diseases, such as Influenza-like-illness. The modeling environments that support these systems are implemented for specific diseases and epidemiological models. Hence they are not reusable or extendable. This thesis focuses on the design and development of an integrated analytical environment with flexible data integration methodologies and multi-level web services for evaluation and forecasting of various epidemics in different regions of the world. The environment supports analysis of epidemics based on any combination of disease, surveillance sources, epidemiological models, geographic regions and demographic factors. The environment also supports evaluation and forecasting of epidemics when various policy-level and behavioral interventions are applied, that may inhibit the spread of an epidemic. First, we describe data integration methodologies and schema design, for flexible experiment design, storage and query retrieval mechanisms related to large scale epidemic data. We describe novel techniques for data transformation, optimization, pre-computation and automation that enable flexibility, extendibility and efficiency required in different categories of query processing. Second, we describe the design and engineering of adaptable middleware platforms based on service-oriented paradigms for interactive workflow, communication, and decoupled integration. This supports large-scale multi-user applications with provision for online analysis of interventions as well as analytical processing of forecast computations. Using a service-oriented architecture, we have provided a platform-as-a-service representation for evaluation and forecasting of epidemics. We demonstrate the applicability of our integrated environment through development of the applications, DISIMS and EpiCaster. DISIMS is an interactive web-based system for evaluating the effects of dynamic intervention strategies on epidemic propagation. EpiCaster is a situation assessment and forecasting tool for projecting the state of evolving epidemics such as flu and Ebola in different regions of the world. We discuss how our platform uses existing technologies to solve a novel problem in epidemiology, and provides a unique solution on which different applications can be built for analyzing epidemic containment strategies.
Ph. D.
13

Dalci, Mustafa. "Using Google Analytics, Card Sorting And Search Statistics For Getting Insights About Metu Website." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613077/index.pdf.

Full text
Abstract:
Websites are one of the most popular and quickest ways of communicating with users and providing information. Measuring the effectiveness of website, availability of information on website and information architecture on users
14

Spur, Maxim. "Immersive Visualization of Multilayered Geospatial Urban Data for Visual Analytics and Exploration." Thesis, Ecole centrale de Nantes, 2021. http://www.theses.fr/2021ECDN0032.

Full text
Abstract:
Geospatial urban data encompasses a plethora of thematic layers, and spans geometric scales reaching from individual architectural elements to inter-regional transportation networks. This thesis examines how immersive environments can be utilized to effectively aid in visualizing this multilayered data simultaneously at various scales. For this, two distinct software prototypes were developed to implement the concepts of multiple coordinated views and focus+context, specifically taking full advantage of the affordances granted by modern virtual reality hardware, while also being suitable for augmented reality. Of the two novel methods introduced here, one — an optimized, vertical arrangement of map layers — was formally evaluated in a controlled user study, and the other — a geometric projection approach to create panoramic focus+context views — informally through feedback from domain experts who tested it. Both showed promising results, and the formal study in particular yielded valuable insights into how user characteristics can influence the perceived usability of such visualization systems and their performance.
15

Andréasson, Magnus. "Towards a Digital Analytics Maturity Model : A Design Science Research Approach." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-13884.

Full text
Abstract:
Digital analytics is the name given to the collection of technologies that use various techniques to analyse digital channels (web pages, email, and even offline data) in order to understand customers' behaviours and intentions. Digital analytics has become a very important component of a large share of web-based system environments, where it supports and facilitates business and decision-making for organizations. But how well are these technologies applied, what does the digital transformation unfolding within organizations look like, and how can this digital maturation process be measured? This study applies a Design Science Research approach to the goal of developing a Digital Analytics Maturity Model (DAMM) suitable for small to medium-sized enterprises; an expert panel of six leading researchers in maturity research and digital analytics was convened in the form of a Delphi study. The results show, among other things, that organizational aspects play an important role in digital analytics, and that the development of a workable, ready-to-use DAMM is feasible.
16

Lemon, Alexander Michael. "A Shared-Memory Coupled Architecture to Leverage Big Data Frameworks in Prototyping and In-Situ Analytics for Data Intensive Scientific Workflows." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7545.

Full text
Abstract:
There is a pressing need for creative new data analysis methods which can sift through scientific simulation data and produce meaningful results. The types of analyses and the amount of data handled by current methods are still quite restricted, and new methods could provide scientists with a large productivity boost. New methods could be simple to develop in big data processing systems such as Apache Spark, which is designed to process many input files in parallel while treating them logically as one large dataset. This distributed model, combined with the large number of analysis libraries created for the platform, makes Spark ideal for processing simulation output. Unfortunately, the filesystem becomes a major bottleneck in any workflow that uses Spark in such a fashion. Faster transports are not intrinsically supported by Spark, and its interface almost denies the possibility of maintainable third-party extensions. By leveraging the semantics of Scala and Spark's recent scheduler upgrades, we force co-location of Spark executors with simulation processes and enable fast local inter-process communication through shared memory. This provides a path for bulk data transfer into the Java Virtual Machine, removing the current Spark ingestion bottleneck. Besides showing that our system makes this transfer feasible, we also demonstrate a proof-of-concept system integrating traditional HPC codes with bleeding-edge analytics libraries. This provides scientists with guidance on how to apply our libraries to gain a new and powerful tool for developing new analysis techniques in large scientific simulation pipelines.
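For context, the conventional file-based ingestion path whose filesystem bottleneck this thesis targets looks roughly like the following PySpark sketch; the paths and schema are illustrative assumptions:

```python
# Conventional ingestion: Spark reads simulation output back off disk.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sim-analysis").getOrCreate()

# Many per-rank output files treated logically as one dataset -- exactly the
# pattern that makes the filesystem the choke point at scale.
df = spark.read.parquet("/scratch/simulation/output/*.parquet")
df.groupBy("timestep").avg("temperature").show()
```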
17

Cayllahua, Huaman Erick Eduardo, and Arias Felipe Anthonino Ramos. "Desarrollo de un modelo de BI & Analytics usando infraestructura Cloud para la Gestión de PMO en una consultora de TI." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/652806.

Full text
Abstract:
This thesis project aims to analyze, design and model the software architecture for the PMO management process. This architectural model will be used as a support base for the call and ticket management processes of the IT consultancy “NECSIA”. The purpose of this project is to solve the problematic situation of the mentioned process that will be part of an in-depth analysis which will be detailed later. The critical point of this problematic situation is that many data extraction, transformation and homologation activities are carried out manually, which prevents a correct centralization of the data flow in the company. This project proposes a BI and Analytics solution that highlights the 4C architectural model that will integrate the various sources of information in a unified repository in the Cloud. For this reason, adequate data management and governance can be obtained, especially in its historical calculations of the projects involved in the PMO Management process. In this context, the document to be presented proposes the use of the Zachman framework to carry out an in-depth analysis of the business in order to align the evaluated process with the strategic objectives of the business. Regarding the design of the Business Process Modeling, the BPMN notation was used. This standard will allow us to improve the decomposition and modularization of the activities that are involved in the processes. Finally, the present BI & Analytics solution seeks to be part of the continuous change and be aligned with the strategic objectives of the company.
Thesis
18

Coimbra, Rafael Melo. "Framework based on lambda architecture applied to IoT: case scenario." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/21739.

Full text
Abstract:
Master's in Information Systems
Since the beginning of the first decade of the current millennium, an exponential growth in the amount of data produced every day has been witnessed. At first, the increase was attributed to the data generated by GPS devices; then to the rapid rise of social networks; and now to a newly emerged trend named the Internet of Things. This new concept, which is already a reality, was born from the premise of facilitating people's lives by having small electronic devices communicate with each other with the goal of suggesting small daily decisions based on behaviours observed in the past. To keep this concept alive and extend it further to other applications, however, the data produced by these devices needs to be processed and stored. Data volume, velocity and variety are the main variables which, when not planned for correctly, create a wall that prevents this new group of applications from reaching its true potential. Traditional mechanisms and technologies have not kept pace with current needs, and new technologies are now mandatory to keep the concept alive. At the top of the line, leading the resolution of this new set of challenges, is Hadoop, composed of a distributed file system and a parallel processing framework. Hadoop has proven to fit these new challenges, being capable of processing and storing high volumes of data in a cost-effective, batch-oriented way. Given its characteristics, however, it presents some drawbacks when faced with small amounts of data. To gain leverage in the market, companies need not only to be capable of processing the data, but of processing it in a profitable way. Real-time processing technologies are needed to complement batch-oriented technologies. There is no one-size-fits-all system, and to address the multiple requirements, different technologies must be combined. The environment that results from the demanding requirements imposed by IoT concepts is the context for the business use case under analysis. Based on the needs imposed by an IoT use case, different technologies will be combined through the Lambda architecture so that, in the end, all the imposed requirements can be met and exceeded. Finally, the solution presented will be evaluated in a real environment as a proof of concept.
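The Lambda architecture this abstract applies merges a slow, complete batch view with a fast, incremental speed view at query time. A minimal Python sketch of that merge, with all names illustrative, might look like this:

```python
# Illustrative sketch of the Lambda architecture's query-time merge.
from collections import Counter

def batch_view(master_dataset: list[str]) -> Counter:
    # Slow, complete recomputation over all history (the Hadoop layer's role).
    return Counter(master_dataset)

def speed_view(recent_events: list[str]) -> Counter:
    # Fast, incremental counts over events since the last batch run.
    return Counter(recent_events)

def query(device: str, batch: Counter, speed: Counter) -> int:
    # Serving layer: combine both views to answer with fresh, complete results.
    return batch[device] + speed[device]

batch = batch_view(["sensor-a", "sensor-b", "sensor-a"])
speed = speed_view(["sensor-a"])
print(query("sensor-a", batch, speed))  # 3
```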
19

Wierzbieniec, Gabriel. "Architecture and design requirements forEnterprise Security Monitoring Platform : Addressing security monitoring challenges in the financial services industry." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-69269.

Full text
Abstract:
A Security Monitoring Platform (SMP) represents the multiple detective controls applied in the enterprise to protect against cyberattacks. Building an SMP is a challenging task, as it consists of multiple systems that require integration. This paper introduces a framework that compiles various aspects of security monitoring and presents the respective sets of requirements. The SMP framework provides guidance for establishing a risk-based detection platform, augmented with automation, threat intelligence and analytics capabilities. It provides a broader view of the problem of security monitoring in the enterprise context and can assist in the creation of the platform. The proposed solution has been built using the Design Science Research Methodology and consists of twenty requirements for building an SMP. Expert evaluation and comparison with similar frameworks show the potential value of a holistic approach to the problem, as well as indicating the need for further research.
20

McLeod, Catriona Jane. "Green architectural discourse : rhetoric and power." Thesis, Queensland University of Technology, 2003.

Find full text
21

Kurup, Unmesh. "Design and use of a bimodal cognitive architecture for diagrammatic reasoning and cognitive modeling." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1198526352.

Full text
22

Nannoni, Nicolas. "Message-oriented Middleware for Scalable Data Analytics Architectures." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167139.

Full text
Abstract:
The democratization of the Internet has allowed many more people to use online services and enjoy their benefits. Traffic towards websites has become tremendous in recent years, especially with the appearance of social networks. Mobile applications, televisions and other non-computer devices also connect to the Internet and use it to provide services to end-users: video on-demand, music streaming and so on. These applications rely on powerful backend servers that handle the requests made by devices and provide statistics and metrics about application usage. These metrics can be generated by aggregating access logs (e.g. HTTP request logs), logs that are potentially extremely large. Big data tools and analytics, which provide a way to handle this huge number of records, then come in handy, as typical client-server architectures, with a single database storing all the data, reach their limits in terms of performance and capacity. Data duplication, combined with dedicated and specialized databases storing it, is the key to efficient data handling. How to fill these databases in an elegant, efficient and scalable manner is the remaining question, and message-oriented middleware may be a viable answer. This project aims at exploring the capabilities of such middleware, identifying the benefits and drawbacks of using them, and presenting how they can be integrated into a real-world application that needs to aggregate events and logs on a large scale. Apache Kafka and RabbitMQ, two message-oriented middleware systems, are benchmarked and compared on both performance metrics and qualitative criteria. A fully working proof-of-concept (an already-existing industry product modified to use a message-oriented middleware and a specialized data warehouse system) is developed and presented, to conclude on the usefulness of message-oriented middleware when designing scalable data analytics architectures.
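The producer side of the log-aggregation pattern benchmarked here can be sketched as follows, assuming the third-party kafka-python client (an assumption for illustration; the thesis benchmarks Kafka and RabbitMQ directly):

```python
# Minimal producer-side sketch: each access-log record becomes a message, and
# downstream consumers fill their own specialized stores independently.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

producer.send("access-logs", {"path": "/play/42", "status": 200, "ms": 12})
producer.flush()  # block until buffered messages are delivered
```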
23

Das, Joydip. "Analytical models for accelerating FPGA architecture development." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43471.

Full text
Abstract:
Field-Programmable Gate Arrays (FPGAs) are widely used to implement logic without going through an expensive fabrication process. Current-generation FPGAs still suffer from area and power overheads, making them unsuitable for mainstream adoption in large-volume systems. FPGA companies constantly design new architectures to provide higher density, lower power consumption, and faster implementation. An experimental approach is typically followed for new architecture design, which is a very slow and computationally expensive process. This dissertation presents an alternative, faster way to design FPGA architectures. We use analytical-model-based design techniques, where the models consist of a set of equations that relate the effectiveness of FPGA architectures to the parameters describing these architectures. During early-stage architecture investigation, FPGA architects and vendors can use our equations to quickly short-list a limited number of architectures from a range of architectures under investigation. Only the short-listed architectures then need to be investigated using an expensive experimental approach. This dissertation presents three contributions towards the formulation of analytical models and the investigation of the capabilities and limitations of these models. First, we develop models that relate key FPGA architectural parameters to the depth along the critical path and the post-placement wirelength. We detail how these models can be used to estimate the expected implementation area and critical path delay for user circuits mapped onto FPGAs. Second, we develop a model that relates key parameters of the FPGA routing fabric to the fabric's routability, assuming that a modern one-step global/detailed router is used. We show that our model can capture the effects of the architectural parameters on routability. Third, we investigate the capabilities and limitations of analytical models in answering design questions posed by FPGA architects. Comparing with two experimental approaches, we demonstrate that analytical models can better optimize FPGA architectures while requiring significantly less design effort. However, we also demonstrate that analytical models, due to their continuous nature, should not be used to answer architecture design questions related to applications having 'discrete effects'.
24

Kahoul, Asma. "Reconfigurable architecture floorplan optimisation using analytical techniques." Thesis, Imperial College London, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.550949.

Full text
Abstract:
Since the invention of FPGAs in 1984, their capabilities have increased dramatically, making them more speed-, area-, and power-efficient than older reconfigurable devices. These advances were made possible by better computer-aided design tools and the continuous development of the algorithms used both to design the chips and to map circuits onto them. However, current methodologies for FPGA chip design suffer from their dependence on empirical approaches which sample the design space based on intuition and heuristic techniques. As a result, these empirical tools may produce good architectures, but their optimality cannot be measured. This thesis argues the case for the use of analytical models in heterogeneous FPGA architecture exploration. It shows that the problem, when simplified, is amenable to formal optimisation techniques such as Integer Linear Programming (ILP). However, the simplification process may lead to inaccurate models, causing uncertainty about the quality of the results. Consequently, existing accurate models, such as that used in the versatile place and route (VPR) tool, are used to quantify the performance of the analytical framework in comparison with traditional design methodologies. The results obtained in this thesis show that the architectures found by the ILP model are better than those found using traditional parameter-sweep techniques, with an average improvement of up to 15% in speed. In addition, these architectures are further improved by combining the accuracy of VPR with the efficiency of analytical techniques. This was achieved using a closed-loop framework which iteratively refines the analytical model using place-and-route information from VPR. The results show a further average improvement of 10%, and a total improvement of 25% in comparison with a parameter-sweep methodology. In summary, the work carried out in this thesis shows that the ILP architecture exploration framework may not model heterogeneous architectures as accurately as current place-and-route tools; however, it improves on parameter-sweep techniques by exploring a wider range of designs.
25

Glanville, Ranulph. "Architecture and space for thought." Thesis, Brunel University, 1988. http://bura.brunel.ac.uk/handle/2438/5018.

Full text
Abstract:
This thesis is concerned with the description of individual experiences of (architectural) space in a social milieu. Architecture, while considered to be primarily concerned with space as its medium, has a very impoverished (or occasionally, very contorted) verbal language in which to discuss space. The author, as a beginner teacher, noted this in attempts to explore spatial experience with students of architecture, and resolved with their help to generate an appropriate verbal vehicle. The main body of the thesis relates this attempt and accounts for its failure. The Thesis, thus, follows three intertwined streams. 1) A scientific investigation into means for the description of human experience of (architectural) space, using methods developed from Kelly's Personal Construct Theory Repertory Grids. 2) A partially developed spatial analytic language, my personal response to 1) above, which is to be seen as the start of a new research programme that may last many years (the future of which is outlined). 3) An account of a personal learning experience both from, around and through each of these. These streams are organised into three parts. Part 1: Background Studies - into work in associated areas and fields, with an assessment of their relevance to the undertaking presented here. Part 2: The Experiments - attempting (and failing) to create a language, and the transition from verbal to visual, with critical arguments and observations. Part 3: A New Beginning - learning from the failure of Part 2, and the argument for and commencement of a new research programme.
26

Sakellaridou, Irini. "A top-down analytic approach to architectural composition." Thesis, University College London (University of London), 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.504524.

Full text
Abstract:
This thesis is an exercise in theory with an empirical exercise. It deals with the traditional architectural ideas of 'composition' and 'parti', and applies a formal analytic approach to them. It takes a top-down approach to the notion of 'composition', which tries to reflect the way architects think, and looks at the 'parti' as the deep structure of the building, which is abstract, global, and capable of many realisations. As a case study, 19 houses of Mario Botta are analysed. The purpose of the empirical exercise is to explore how far it is possible to produce an analytic construction of the notion of 'parti'. It asks: are there formal top-down themes which underlie the composition of the houses and have to do with their relational structure? After the description of the houses, a formal analysis of the identified themes takes place. These formal top-down themes are defined as rules. A distinction is made between the nature of the rule, the degree of its realisation and the domains (mass, elevations, plan) of its realisation. Formal analysis, thus, measures properties of the mass, the elevations and the plan. What the analysis shows is that the interrelations of the rules define the 'parti'. Three phases are identified in the development of the 'parti' of the houses, which show an evolution from combinations to structure. A distinction between a short and a long genotype for order is thus made, as well as a distinction between the intension and the extension of the rule seen as a relation. In the last part, the thesis explores what these findings suggest towards theory building, as well as implications for further research, by addressing the notion of relation and by defining two different types of interrelations.
27

Back, Andrew Scott. "Molecular architecture of ordered thin films of crystalline organic dyes." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282491.

Full text
Abstract:
The factors which determine the growth mode and molecular architecture of vacuum-deposited organic thin films on single-crystalline substrates were investigated. Specifically, the relative importance of layer planes in the bulk structure, lattice matching between the overlayer and substrate, topographic direction by the substrate, and specific molecule-substrate interactions in determining the growth mode was examined. The majority of the molecules studied here (ClAlPc, F₁₆ZnPc, PTCDA, C4-PTCDI, and C5-PTCDI) exhibited layer planes in their bulk structures; however, the molecular plane is coincident with the layer plane only for PTCDA and ClAlPc. ClAlPc and F₁₆ZnPc were found to adopt different flat-lying commensurate square lattices on the Cu(100) surface. In both cases, the flat-lying orientation of the molecules was dictated by specific molecule-substrate interactions, while the orientation of the lattice was dictated by lattice matching with the substrate. ClAlPc was also able to adopt an incommensurate centered rectangular lattice whose orientation was directed by alignment along step edges. Fluorescence investigation of submonolayer PTCDA and PTCDI films on alkali halide substrates demonstrated the great potential of fluorescence spectroscopy as a means of monitoring film growth. PTCDA was found to adopt a flat-lying orientation on NaCl, KCl, and KBr, while the flat-lying orientation of the PTCDI molecules was determined by the strength of the molecule-substrate interactions. From these measurements, the relative interaction strengths of the substrates were determined to be KCl > KBr > NaCl. IR dichroism showed that the expected growth along the layer planes occurred only for PTCDA, due to the coincidence of the layer and molecular planes. IR spectroscopy also revealed that a new polymorph of C5-PTCDI had been formed on these surfaces. These studies showed that the relative importance of the factors determining the molecular architecture adopted within the first 1-2 MLE of a film is: (1) molecule-substrate interaction, (2) lattice matching, (3) topographic direction, (4) layer planes in the bulk structure. In addition, the use of fluorescence spectroscopy to probe the evolution of vacuum-deposited films was significantly advanced.
APA, Harvard, Vancouver, ISO, and other styles
28

Machado, Oscar A. "A Philosophy of Architecture." Scholarly Repository, 2009. http://scholarlyrepository.miami.edu/oa_theses/205.

Full text
Abstract:
To evaluate specific architectural theories, an analytic methodology was used. The theories evaluated all have in common that their formative models can explain how their original ideas manifest in the practice of architectural works. Although the theories researched are in some cases hundreds, and in others thousands, of years apart, philosophies of art common to all of them provided a way to compare and contrast them. This contemporary approach to analysis employs 'analytic philosophy' for its effectiveness in clarifying concepts. Central aspects of architectural theories are analyzed in detail through the lenses of four contemporary theories of the philosophy of art: formalism (including neo-formalism and theories that emphasize the connection between form and function), expression theories, representation theories (including neo-representational and mimetic accounts), and theories based on aesthetic experience. Looking at architecture from the viewpoint of analytic philosophy of art provides new insights into the nature of architecture and illuminates the field in significant ways. A recommendation for further study is included.
APA, Harvard, Vancouver, ISO, and other styles
29

Leow, Yoon Kah. "Post-Routing Analytical Models for Homogeneous FPGA Architectures." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/311216.

Full text
Abstract:
The rapid growth in the Field Programmable Gate Array (FPGA) architecture design space has led to an explosion in architectural choices, with well over 1,000,000 configurations. This makes searching for Pareto-optimal solutions using a CAD-based incremental design process nearly impossible for hardware architects and application engineers. Designers need fast and accurate analytical models in order to evaluate the impact of their design choices on performance. Despite the proliferation of FPGA models, today's state-of-the-art modeling tools suffer from two drawbacks. First, they rely on circuit characteristics extracted from various stages of the FPGA CAD flow, making them CAD-dependent. Second, they lack the ability to take routing architecture parameters into account. These two factors pose a barrier to converging rapidly on the desired implementation. In this research, we address these two challenges and propose the first static power and post-routing wirelength models in academia. Our models are unique as they are CAD-independent, and they take both logic and routing architecture parameters into account. Using the static power model we are able to estimate the active and idle leakage power dissipation in homogeneous FPGAs with an average correlation factor of 95% and a mean percentage error of 17% against experimental results based on MCNC benchmarks. Using our wirelength model, we obtain a low mean percentage error of 4.2% and an average correlation factor of 84% using MCNC and VTR benchmarks. We also show that utilizing the wirelength model in the architecture optimization process reduces design space exploration time by 53% compared to the CAD-based process. We finally propose an algorithmic approach to estimate the logic density (i.e., number of LUTs) of multiplexer-based circuits, and address the problem of discrete effects in FPGA analytical models. We show that a model that generates the logic density of a fundamental circuit element, such as a multiplexer, can be used to estimate performance metrics such as critical path delay and power.
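As a rough illustration of the kind of model-based trend analysis described above, the sketch below estimates average net length from a Rent-style relation and converts total wirelength into a channel-width requirement. The functional form, constants, and parameter names are illustrative assumptions, not the dissertation's fitted models.

import math

# Donath-style average net length (in logic-block pitches) for an array
# of n_blocks with Rent exponent rent_p; deliberately simplified form.
def avg_wirelength(n_blocks: int, rent_p: float) -> float:
    return 0.5 * n_blocks ** (rent_p - 0.5)

# Channel width needed to carry the total net length, assuming a square
# FPGA array and a derating factor for achievable track utilization.
def channel_width(n_nets: int, n_blocks: int, rent_p: float,
                  utilization: float = 0.5) -> float:
    side = math.sqrt(n_blocks)
    total_wl = n_nets * avg_wirelength(n_blocks, rent_p)
    return total_wl / (2 * side * side * utilization)

print(channel_width(n_nets=4000, n_blocks=2500, rent_p=0.65))

The point of such a closed form is that sweeping rent_p or n_blocks takes microseconds, whereas re-running the CAD flow for each architecture point takes hours.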
APA, Harvard, Vancouver, ISO, and other styles
30

Fischer, Ulrike. "Forecasting in Database Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-133281.

Full text
Abstract:
Time series forecasting is a fundamental prerequisite for decision-making processes and crucial in a number of domains such as production planning and energy load balancing. In the past, forecasting was often performed by statistical experts in dedicated software environments outside of current database systems. However, forecasts are increasingly required by non-expert users or have to be computed fully automatically without any human intervention. Furthermore, we can observe an ever-increasing data volume and the need for accurate and timely forecasts over large multi-dimensional data sets. As most data subject to analysis is stored in database management systems, a rising trend addresses the integration of forecasting inside a DBMS. Yet, many existing approaches follow a black-box style and try to keep changes to the database system as minimal as possible. While such approaches are more general and easier to realize, they miss significant opportunities for improved performance and usability. In this thesis, we introduce a novel approach that seamlessly integrates time series forecasting into a traditional database management system. In contrast to flash-back queries, which allow a view on the data in the past, we have developed a Flash-Forward Database System (F2DB) that provides a view on the data in the future. It supports a new query type - a forecast query - that enables forecasting of time series data and is automatically and transparently processed by the core engine of an existing DBMS. We discuss the necessary extensions to the parser, optimizer, and executor of a traditional DBMS. We furthermore introduce various optimization techniques for three different types of forecast queries: ad-hoc queries, recurring queries, and continuous queries. First, we ease the expensive model-creation step of ad-hoc forecast queries by reducing the amount of processed data with traditional sampling techniques. Second, we decrease the runtime of recurring forecast queries by materializing models in a specialized index structure. However, a large number of time series as well as high model creation and maintenance costs require a careful selection of such models. Therefore, we propose a model configuration advisor that determines a set of forecast models for a given query workload and multi-dimensional data set. Finally, we extend forecast queries with continuous aspects, allowing an application to register a query once at our system. As new time series values arrive, we send notifications to the application based on predefined time and accuracy constraints. All of our optimization approaches aim to increase the efficiency of forecast queries while ensuring high forecast accuracy.
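To make the recurring-query optimisation concrete, here is a minimal Python sketch of the materialize-once idea: the expensive model-creation step runs on the first forecast query for a series, and the cached model answers later queries. The API, the cache structure, and the choice of simple exponential smoothing are assumptions for illustration, not F2DB's actual implementation.

class ModelIndex:
    """Toy 'materialized model' store keyed by series id."""
    def __init__(self):
        self._models = {}                    # series_id -> (alpha, level)

    def _fit(self, series, alpha=0.3):
        # Simple exponential smoothing; this is the costly step.
        level = series[0]
        for y in series[1:]:
            level = alpha * y + (1 - alpha) * level
        return alpha, level

    def forecast(self, series_id, series, horizon):
        if series_id not in self._models:    # fit once, reuse afterwards
            self._models[series_id] = self._fit(series)
        _, level = self._models[series_id]
        return [level] * horizon             # flat SES forecast

idx = ModelIndex()
print(idx.forecast("energy_load", [10, 12, 11, 13, 12], horizon=3))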
APA, Harvard, Vancouver, ISO, and other styles
31

Korkmaz, Koray Arkon Cemal. "An analytical study of the design potentials in kinetic architecture/." [s.l.]: [s.n.], 2004. http://library.iyte.edu.tr/tezler/doktora/mimarlik/T000485.doc.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Muyan, Cem Eyüce Özen. "An analytical approach to the concept of 'Topography' in architecture/." [s.l.]: [s.n.], 2003. http://library.iyte.edu.tr/tezler/master/mimarlik/T000271.rar.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sarma, Dominik Arun. "Modular Hybrid Architectures for Single Particle-based Analytical Assays." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/22055.

Full text
Abstract:
Global megatrends such as demographic change, personalization, climate change and urbanization demand increasingly flexible and mobile analytical measurement methods and assays. Especially in the environmental, agricultural, food and health sectors, chemical assays are a suitable choice. A large variety of such assays is available in academia and industry. However, their modification to measure new compounds is time-consuming and laborious, because they are typically designed to detect a single specific analyte. A modular platform for chemical assay development is thus highly desirable. Such a system should include the possibility of fast, easy and flexible implementation of various recognition types for emerging analytes and the possibility of multi-parameter (multiplexed) detection in a potentially portable fashion. Single particle-based assays have proven to be an adequate solution here. In this work, I present hybrid polystyrene core-silica shell (SiO2@PS) particles as new spherical substrates for the flexible configuration of single particle-based chemical assays. First, a procedure to control the surface topology of the beads was developed, giving access to smooth, raspberry-like or multilayer-like CS structures. These particles were used for DNA detection down to the fmol level (Chapter 2). A new tool to extract the roughness of the particles from electron microscopy images was developed next and applied to the wide range of CS beads prepared throughout the project (Chapter 3). This general protocol provides the basis for the comparability of future CS particle characterization. Finally, a multiplex assay with dye-encoded beads with non-fouling surfaces was developed to detect small molecules via immunochemical reactions in a wash-free procedure (Chapter 4). The latter ultimately proves that hybrid CS particles can combine high analytical performance with unmatched potential for flexible functionality.
APA, Harvard, Vancouver, ISO, and other styles
34

Lam, Andrew H. "An analytical model of logic resource utilization for FPGA architecture development." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/19753.

Full text
Abstract:
Designers constantly strive to improve Field-Programmable Gate Array (FPGA) performance through innovative architecture design. Evaluating performance requires an understanding of how modifying logic block structures and routing fabrics affects it. Current architectures are evaluated via computer-aided design (CAD) simulations, which are laborious and computationally expensive experiments to perform. A more scientific method, based on understanding the relationships between architectural parameters and performance, will enable the rapid evaluation of new architectures, even before the development of a CAD tool. This thesis presents an analytical model that describes such relationships and is based principally on Rent's Rule. Specifically, it relates logic architectural parameters to the area efficiency of an FPGA. Comparison to experimental results shows that our model is accurate. This accuracy, combined with the simple form of the model's equations, makes it a powerful tool for FPGA architects to better understand and guide the development of future FPGA architectures.
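For flavour, the sketch below shows the style of closed-form relationship such a model captures, using the well-known cluster-input estimate I = (K/2)(N+1) and a simple utilization ratio. The functional form and constants are illustrative assumptions rather than the thesis's actual fitted equations.

# Rough sketch of a Rent's-Rule-flavoured utilization estimate for a
# clustered FPGA logic block; values are assumed, for illustration only.

def cluster_inputs(k_lut: int, n_bles: int) -> float:
    """Empirical inputs needed per cluster of n_bles K-input LUTs."""
    return (k_lut / 2) * (n_bles + 1)

def logic_utilization(i_provided: int, k_lut: int, n_bles: int) -> float:
    """Fraction of BLEs usable when only i_provided inputs are wired in."""
    needed = cluster_inputs(k_lut, n_bles)
    return min(1.0, i_provided / needed)

# e.g. a cluster of ten 4-LUTs with 18 inputs is input-limited:
print(logic_utilization(i_provided=18, k_lut=4, n_bles=10))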
APA, Harvard, Vancouver, ISO, and other styles
35

MUKHERJEE, ANINDO. "AN INTEGRATED ARCHITECTURE FOR MULTI-HOP INFRASTRUCTURE WIRELESS NETWORKS." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1155834305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Zellers, Eric M. "MAINFRAME: Military acquisition inspired framework for architectural modeling and evaluation." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54997.

Full text
Abstract:
Military acquisition programs have long been criticized for the exponential growth in program costs required to generate modest improvements in capability. One of the most promising reform efforts to address this trend is the open system architecture initiative, which uses modular design principles and commercial interface standards as a means to reduce the cost and complexity of upgrading systems over time. While conceptually simple, this effort has proven to be exceptionally difficult to implement in practice. This difficulty stems, in large part, from the fact that open systems trade additional cost and risk in the early phases of development for the option to infuse technology at a later date, but the benefits provided by this option are inherently uncertain. Practical implementation therefore requires a decision support framework to determine when these uncertain, future benefits are worth the cost and risk assumed in the present. The objective of this research is to address this gap by developing a method to measure the expected costs, benefits and risks associated with open systems. This work is predicated on three assumptions: (1) the purpose of future technology infusions is to keep pace with the uncertain evolution of operational requirements, (2) successful designs must justify how future upgrades will be used to satisfy these requirements, and (3) program managers retain the flexibility to adapt prior decisions as new information is made available over time. The analytical method developed in this work is then applied to an example scenario for an aerial Intelligence, Surveillance, and Reconnaissance platform with the potential to upgrade its sensor suite in future increments. Final results demonstrate that the relative advantages and drawbacks between open and integrated system architectures can be presented in the context of a cost-effectiveness framework that is currently used by acquisition professionals to manage complex design decisions.
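A toy Monte Carlo comparison along these lines is sketched below: an open system pays an up-front premium but upgrades cheaply if requirements grow, while an integrated system must redesign at high cost. Every number and probability is invented for illustration; the dissertation's actual framework is far richer.

import random

def expected_cost(open_system: bool, trials: int = 10000) -> float:
    base = 120.0 if open_system else 100.0      # open pays a premium now
    total = 0.0
    for _ in range(trials):
        requirement_grew = random.random() < 0.6   # uncertain evolution
        if requirement_grew:
            # open: cheap module swap; integrated: costly redesign
            total += base + (15.0 if open_system else 70.0)
        else:
            total += base
    return total / trials

random.seed(1)
print(expected_cost(True), expected_cost(False))

Under these invented inputs, the option to upgrade pays off whenever the probability and cost of requirement growth outweigh the up-front premium, which is exactly the trade the decision framework is meant to quantify.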
APA, Harvard, Vancouver, ISO, and other styles
37

Smithwick, Daniel J. II (Daniel John). "Physical design cognition : an analytical study of exploratory model making to inform creative robotic interaction." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106727.

Full text
Abstract:
Thesis: Ph. D. in Design and Computation, Massachusetts Institute of Technology, Department of Architecture, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 185-188).
In current practices of digital design fabrication, model making is bifurcated into screen-based visualization (CAD) and machine-based production (CAM), which limits the body's capability of generating creative thought through material interaction. If theories from the field of cognitive science about embodied cognition are true, then there is an opportunity to rethink how CAD-CAM technologies can better harness the bodily based thinking involved in physical model making. To study how designers explore ideas when making models, an experiment was run in which experienced architects and novice students were asked to construct their dream house out of blocks. The hypothesis was that experienced architects would exhibit physical interactions with the blocks that distinguish them from the novices, thus helping define what may be called physical design cognition. To test this, their behaviors were coded in terms of simple robotic actions: adding, subtracting, modifying, and relocating blocks. Architects differed from students along three dimensions. Architects were more controlled, using fewer blocks overall and fewer variations; they reported more thoughts about spatial relationships and material constraints; and lastly, they more frequently experimented with multiple block positions within the model. Together these findings suggest that architects physically explore the design space more effectively than students by exploiting body-material interactions. This designerly embodied intelligence is something that robotic technology can support and enhance. As roboticist Rodney Brooks famously said, "The world is its own best model." In other words, designers should not be limited to visualizing a model on a screen before making it physical. Implications for material-based robotic interaction are discussed and a pilot program is presented in which designers interact in real time with a robotic manipulator arm to make physical models.
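The coding scheme lends itself to a very small sketch: each observed block manipulation is coded as one of the four robotic actions and tallied per participant. The event log below is invented for illustration.

from collections import Counter

# Hypothetical coded event stream for one participant's session.
events = ["add", "add", "relocate", "modify", "add", "subtract", "relocate"]
print(Counter(events))   # e.g. Counter({'add': 3, 'relocate': 2, ...})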
by Daniel J. Smithwick.
Ph. D. in Design and Computation
APA, Harvard, Vancouver, ISO, and other styles
38

Varanasi, Kishore Venkat 1974. "The role of spatial configurations in urban dynamics : an analytical model for urban design and development." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/70738.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2001.
Includes bibliographical references (p. 93-95).
The thesis explores the role of spatial configurations in the development of city form and the influence of these configurations on specific economic variables which govern the development of cities. It is argued that there are pervasive interconnections that seem to link the nature of society and economics with spatial form, and that economic theories must have some basis in a spatial theory. Based on this premise, this work investigates the logic of spaces and their networks to derive an analytical spatial theory of intelligibility, which is then built into a spatial-economic model of land values and densities. In urban land dynamics, location plays a significant role in determining land values and the density of development. Two important models, namely Space Syntax and the Ricardian rent model, have been developed based on this assumption. This thesis adapts these models to generate a comprehensive dynamic spatial-economic model of cities. The new model has predictive applications in spatial design as well as in urban development. It is proposed that location in urban areas is determined by spatial configurations as well as by commuting distance, which is often used as a variable in economics to determine location. Relative asymmetry is proposed as a measure of the relative location of a street segment in the system with respect to every other segment, and as a function of intelligibility in any given physically contiguous configuration. Segments that are accessible and intelligible have a location advantage. The differences in location value are then used to develop an econometric relationship between land values, density and relative asymmetry in a city. This yields a quantitative measure of an individual's preference for location and density.
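In space syntax, relative asymmetry is conventionally computed as RA = 2(MD - 1)/(k - 2), where MD is the mean shortest-path depth of a segment and k is the number of segments. The sketch below applies this to a toy street network; the graph and its layout are invented for illustration.

from collections import deque

def mean_depth(adj, start):
    """Mean BFS depth from one segment to every other segment."""
    depth = {start: 0}
    q = deque([start])
    while q:
        node = q.popleft()
        for nxt in adj[node]:
            if nxt not in depth:
                depth[nxt] = depth[node] + 1
                q.append(nxt)
    return sum(depth.values()) / (len(adj) - 1)

def relative_asymmetry(adj, seg):
    k = len(adj)
    return 2 * (mean_depth(adj, seg) - 1) / (k - 2)

# Toy network: segment 1 is the most integrated (lowest RA).
streets = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
print({s: round(relative_asymmetry(streets, s), 3) for s in streets})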
by Kishore Venkat Varanasi.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
39

Sarma, Dominik Arun [Verfasser]. "Modular Hybrid Architectures for Single Particle-based Analytical Assays / Dominik Arun Sarma." Berlin : Humboldt-Universität zu Berlin, 2020. http://d-nb.info/1220287709/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Rischmuller-Magadley, Esther. "Development of an analytical computer tool for building integrated renewable energy and CHP." Thesis, University of Nottingham, 2009. http://eprints.nottingham.ac.uk/10773/.

Full text
Abstract:
This thesis describes a computer tool that was developed to compare different combinations of photovoltaic panels, solar thermal collectors and combined heat and power technologies for building applications, to find the option with the lowest cost of emissions reduction. The novelty of this computer tool is that it addresses the uncertainty of building energy load profiles in the sizing of renewable energy and CHP technologies by applying the Monte Carlo Method. A database of historical building energy load profiles was collated for this purpose. However, few domestic hot water load profiles were found in the literature. Therefore, as part of this study, a survey was also carried out to collect domestic hot water load profile data. The survey consisted of a questionnaire and a monitoring study. The questionnaire consisted of two parts: a general questionnaire about the dwelling and a diary study. The general questionnaire collected information about the dwelling, enabling the load profiles collected to be classified into different building type categories. In the diary study the hot water consumption patterns were recorded. Hot water energy consumption data was also obtained from direct monitoring, using temperature sensors attached to the hot water pipes of the different appliances to record when and from which appliance hot water was used throughout the day in the dwellings. Load profiles were formed from this data, the diary-study data from the questionnaire, and typical hot water usages of the different appliances, which were calculated from usage times and flow rates recorded by a clamp-on flow meter. The data collected from the survey and the literature was loaded into the computer tool database.
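A minimal sketch of the Monte Carlo sizing idea, assuming an invented load-profile database and invented cost and emissions figures: each trial samples a perturbed daily load, and candidate PV sizes are ranked by cost per unit of annual emissions avoided.

import random

historical_loads = [22.0, 25.0, 19.0, 28.0, 24.0]   # kWh/day samples (assumed)

def cost_of_emissions_reduction(pv_kw, trials=5000):
    capex_per_kw, grid_co2, pv_yield = 1500.0, 0.4, 3.5   # assumed figures
    saved = 0.0
    for _ in range(trials):
        # Sample an uncertain daily load from the historical database.
        load = random.choice(historical_loads) * random.uniform(0.8, 1.2)
        used_solar = min(load, pv_kw * pv_yield)   # kWh actually offset
        saved += used_solar * grid_co2             # kg CO2 avoided per day
    annual_saved = saved / trials * 365
    return capex_per_kw * pv_kw / annual_saved     # cost per (kg CO2/yr)

random.seed(0)
# Pick the candidate size with the lowest cost of emissions reduction.
print(min((cost_of_emissions_reduction(s), s) for s in (1, 2, 4, 8)))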
APA, Harvard, Vancouver, ISO, and other styles
41

Soni, Arpit, and Arpit Soni. "Analytical Model for Relating FPGA Logic and Routing Architecture Parameters to Post-Routing Wirelength." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/621365.

Full text
Abstract:
Analytical models have been introduced for rapidly evaluating the impact of architectural design choices on FPGA performance through model-based trend analysis. Modeling wirelength is a critical problem, since channel width can be expressed as a function of total net length in a design, which is an indicator of routability for an FPGA. Furthermore, performance indicators such as critical path delay and power consumption are functions of net capacitance, which in turn is a function of net length. The analytical models to date mainly originate from extracting circuit characteristics from the post-placement stage of the CAD flow, which instills a strong binding between the model and the optimization objective of the CAD flow. Furthermore, these models primarily take only logic architecture features into account. In this study, we present a post-routing wirelength model that takes into account both logic and routing architectural parameters, and that does not rely on circuit characteristics extracted from any stage of the FPGA CAD flow. We apply a methodological approach to model parameter tuning, as opposed to relying on a curve-fitting method, and show that our model accurately captures the experimental trends in wirelength with respect to changes in logic and routing architecture parameters individually. We demonstrate that model accuracy is not sacrificed even if the performance objective of the CAD flow changes or the algorithms used by individual stages of the CAD flow (technology mapping, clustering, and routing) change. We swap the training and validation benchmarks, and show that our model development approach is robust and model accuracy is not sacrificed. We evaluate our model on a new set of benchmarks that are not part of the training and validation benchmarks, and demonstrate its superiority over the state of the art. Based on the swapping experiments, we show that the model parameters take values in a fixed range. We verify that this range holds its validity even for benchmarks that are not part of the training and validation benchmarks. We finally show that our model maintains a good estimation of the empirical trends even when very large values are used for the logic block architecture parameter.
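The two accuracy metrics quoted in these FPGA-model abstracts, correlation factor and mean percentage error, can be reproduced in a few lines (statistics.correlation requires Python 3.10+); the model and experimental wirelengths below are invented for illustration.

import statistics

model = [105.0, 210.0, 330.0, 415.0]        # hypothetical model predictions
experiment = [100.0, 220.0, 310.0, 430.0]   # hypothetical CAD-flow results

corr = statistics.correlation(model, experiment)
mpe = sum(abs(m - e) / e for m, e in zip(model, experiment)) / len(model)
print(round(corr, 3), f"{mpe:.1%}")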
APA, Harvard, Vancouver, ISO, and other styles
42

Du, Chen. "Architectural Characterization of Polyhedral Oligomeric Silsesquioxanes by Ion Mobility Mass Spectrometry." University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1525355186538632.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Yol, Aleer M. "Determination of Polymer Structures, Sequences, and Architectures by Multidimensional Mass Spectrometry." University of Akron / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=akron1376344592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Anis, Sadia Shahnoor. "A Design Choice Guideline for Software-Defined Network Control Plane Architecture using Analytical Hierarchical Process." University of Akron / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=akron1608144391722863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Llorca, Bofí Josep. "The generative, analytic and instructional capacities of sound in architecture : fundamentals, tools and evaluation of a design methodology." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/664194.

Full text
Abstract:
The disciplines of space and time form two domains that it is daring to compare, since they are obviously of a different nature: music happens in time, while architecture happens in space. Nevertheless, from the first treatises on both architecture and music one can read repeated calls for comparison, complementarity and mutual influence between the two disciplines, or at least for the recognition of certain orders common to both domains. This doctoral thesis does not question the theoretical corpus that has enriched the relationship between the two disciplines; we receive it and join that stream of knowledge. What we do pause on, however, is the almost impertinent question that follows: can sound help the architect in his daily tasks? And, therefore, what are the contributions of sound to the architect? To answer this we must seek the connection in the principles of both arts, where we can detach ourselves from time and space and approach the most universal aspects of artistic form. The architect, in his daily work, faces three particular tasks: the architectural project, architectural analysis and the teaching of architecture. Each of the three tasks is connected with the other two: the project is steered by analysis and transmitted to the new architect; analysis supports project decisions and gives tools to the disciple; and teaching has the project as its purpose and analysis as its method. The thesis presented here shows what sound offers to the task of the project, to that of analysis and to that of teaching. These three tasks are approached from three premises: theoretical foundations, tools and evaluation. The interaction of the three tasks with the three premises gives rise to nine lines of work that articulate the chapters of the thesis. The first, fourth and seventh chapters approach the three tasks from the premise of theoretical foundations, foundations that, perhaps because they are obvious, have been overlooked but which constitute the nature of both disciplines. The first shows, through two 20th-century authors, the architect Dom Hans van der Laan and the composer Olivier Messiaen, that creation in both disciplines is systematic in nature. The fourth revalues the analytical systems for representing form in both architecture and music which, starting with the basic characteristics of their elements, lead to a symbolic notation and a tool for analysing the work: the plan and the score. The seventh introduces the student of architecture to the growing separation between music and architecture, which has become more pronounced up to the present day. The second, fifth and eighth chapters approach the three tasks from the premise of tools, working instruments that help to understand more directly the influence of architecture on sound. The second places virtual reality and auralization techniques at the service of the architectural and urban planning project, enhancing the sound experience in these projects. The fifth deals with the acoustic analysis of exterior spaces and their relationship with the urban configuration of these spaces. The eighth presents the study of acoustic heritage as an educational tool. The third, sixth and ninth chapters address the three tasks from the premise of evaluation, a verification that establishes, through teaching experiments, the influence of sound on them. The third argues and exemplifies that a soundscape can be the engine and generator of an architectural design. The sixth reviews the methods for evaluating the subjective and objective parameters of architectural acoustics. The ninth shows that in teaching sound to architects, "learning by listening" should be given priority over "passive learning".
APA, Harvard, Vancouver, ISO, and other styles
46

Mao, Jialin. "CHARACTERIZATION OF POLYMER ARCHITECTURES AND SEQUENCES BY MULTI-STAGE MASS SPECTROMETRY." University of Akron / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=akron1554913939030297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Suthakar, Uthayanath. "A scalable data store and analytic platform for real-time monitoring of data-intensive scientific infrastructure." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15788.

Full text
Abstract:
Real-time monitoring of data-intensive scientific infrastructures, covering jobs, data transfers, and hardware failures, is vital for efficient operation. Due to the high volume and velocity of the events that are produced, traditional methods are no longer optimal. Several techniques, as well as enabling architectures, are available to address the Big Data challenge. In this respect, this thesis complements existing survey work by contributing an extensive literature review of both traditional and emerging Big Data architectures. Scalability, low latency, fault tolerance, and intelligence are key challenges for the traditional architecture. Big Data technologies and approaches have become increasingly popular for use cases that demand scalable, data-intensive (parallel) processing, fault tolerance (data replication), and support for low-latency computation. In the context of a scalable data store and analytics platform for monitoring data-intensive scientific infrastructure, the Lambda Architecture was adapted and evaluated on the Worldwide LHC Computing Grid, where it proved effective, especially for computationally and data-intensive use cases. In this thesis, an efficient strategy for the collection and storage of large volumes of data for computation is presented. Moving the transformation logic out of the data pipeline and into the analytics layers simplifies the architecture and the overall process: processing time is reduced, untampered raw data are kept at the storage level for fault tolerance, and the required transformations can be performed on demand. An optimised Lambda Architecture (OLA) is presented, which models an efficient way of joining the batch and streaming layers with minimal code duplication in order to support scalability, low latency, and fault tolerance. Several models were evaluated: a pure streaming layer, a pure batch layer, and the combination of both batch and streaming layers. Experimental results demonstrate that the OLA performed better than the traditional architecture as well as the standard Lambda Architecture. The OLA was also enhanced with an intelligence layer for predicting data access patterns. The intelligence layer actively adapts and updates the model built by the batch layer, which eliminates re-training time while providing a high level of accuracy using deep learning. The fundamental contribution to knowledge is a scalable, low-latency, fault-tolerant, intelligent, and heterogeneous architecture for monitoring a data-intensive scientific infrastructure that can benefit from Big Data technologies and approaches.
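The serving-path idea behind a Lambda-style design can be sketched in a few lines: a query merges a precomputed batch view with a small streaming delta, while raw data stays untouched at the storage level. The data structures and numbers below are illustrative assumptions, not the platform's actual code.

# Toy Lambda-style serving layer: merge batch view and streaming delta
# at query time, keeping raw events untouched for fault tolerance.
batch_view = {"site_a": 10_000, "site_b": 7_500}   # recomputed periodically
stream_delta = {"site_a": 42}                      # events since last batch run

def serve(metric_key: str) -> int:
    return batch_view.get(metric_key, 0) + stream_delta.get(metric_key, 0)

print(serve("site_a"))   # 10042

The optimisation the thesis pursues is precisely in this join: minimising duplicated logic between the two layers while preserving the low latency of the delta and the completeness of the batch view.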
APA, Harvard, Vancouver, ISO, and other styles
48

Niedermann, Florian [Verfasser], and Bernhard [Akademischer Betreuer] Mitschang. "Deep Business Optimization : concepts and architecture for an analytical business process optimization platform / Florian Niedermann ; Betreuer: Bernhard Mitschang." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2016. http://d-nb.info/1118370287/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Longford, Nicola Jane Margaret. "Critique of Analytical Techniques Used in the Processing of Iron Recovered from Historic Sites in North America." W&M ScholarWorks, 1993. https://scholarworks.wm.edu/etd/1539625798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ekneligoda, Thushan Chandrasiri. "Estimation of the Elastic Moduli of Porous Materials using Analytical Methods, Numerical Methods, and Image Analysis." Doctoral thesis, KTH, Mark- och vattenteknik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4538.

Full text
Abstract:
The effective bulk modulus and effective shear modulus of porous materials having various types of pore shapes are investigated, using both analytical and numerical methods. These solutions, and the scaling laws that are derived with the aid of these solutions, are then used to make predictions of the effective elastic moduli of some sandstones and ceramics, based on two-dimensional images of the pore space. The complex variable method is used to find the hydrostatic and shear compliances of a large family of pores that have N-fold rotational symmetry, and which have at most four terms in their conformal mapping function. This solution is validated using boundary element (BEM) calculations, and is also used to test two scaling laws that estimate the compliances based on the area and perimeter of the pore. The boundary perturbation method is used to study the effect of small-scale roughness on the compressibility and shear compliance of a nominally circular pore. The solution is carried out to fourth order in the roughness parameter for the case of hydrostatic loading, and to second order for shear loading. These solutions allow one to judge the scale of roughness that can safely be ignored when obtaining images of the pores. Predictions are then made of the elastic moduli of some porous materials – two sandstones and a ceramic. Starting with scanning electron micrographs, image analysis software is used to isolate and extract each pore from the host material. The bulk and shear compliances are estimated using both BEM and the two scaling laws. Areally-weighted mean values of these compliances are calculated for each material, and the differential effective medium scheme is used to obtain expressions for the moduli as functions of porosity. These predictions agree well with the experimental values found in the literature.
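For a DEM-style scheme with a porosity-independent pore-compliance factor m, the incremental relation dK/dphi = -mK/(1 - phi) integrates to K(phi) = K0 (1 - phi)^m; the sketch below evaluates this closed form, with the exponent and mineral modulus chosen purely for illustration.

# Differential-effective-medium (DEM) style prediction of bulk modulus
# versus porosity; m would come from an areally-weighted mean pore
# compliance, but is simply assumed here for illustration.
def dem_bulk_modulus(k0: float, phi: float, m: float = 2.0) -> float:
    return k0 * (1.0 - phi) ** m

for phi in (0.05, 0.15, 0.25):
    print(phi, round(dem_bulk_modulus(37.0, phi), 2))   # k0 in GPa, quartz-like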
APA, Harvard, Vancouver, ISO, and other styles
