Dissertations / Theses on the topic 'SCALP framework'

Consult the top 50 dissertations / theses for your research on the topic 'SCALP framework.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Mao, Tingting. "Interoperable internet-scale security framework for RFID networks." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/47741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2008.
Includes bibliographical references (leaves 124-129).
It is estimated that over 3 billion Radio Frequency Identification (RFID) tags have been deployed through 2007. Most tags are used in supply chains where the Electronic Product Code (EPC) and associated business event data are transmitted through RFID networks. Security and privacy issues are critically important in RFID networks because EPC data and their associated business events are valuable assets. Companies need to share these data with restricted business partners and, under some conditions, such as product recall, more widely with regulators and non-business partners. At present, no security or privacy framework has been chosen as an EPCglobal standard (industry-driven standards for EPC) due to the difficulty of sharing information between parties who have no direct business relationships and hence no business rules for sharing these data. To date, no security schemes have been deployed that can support data exchange with multiple identity techniques and interchangeable complex business rules, as required by RFID networks. In this thesis, an Interoperable Internet-Scale Security (IISS) framework for RFID networks is proposed. The IISS framework performs authentication and authorization based on an aggregation of business rules, enterprise information, and RFID tag information. IISS provides a protocol for several authentication schemes and identity techniques. It also provides an engine for reasoning over business rules from different domains. Moreover, the IISS framework is able to resolve provenance information of RFID tags, which can identify the history of a particular piece of EPC data through the supply chain.
(cont.) The IISS framework and the IISS ontologies to model the information in RFID networks are also described, and how the IISS framework can be developed for access control in RFID enabled supply chains is discussed. Finally, the IISS framework's efficiency is tested using a supply chain EPC simulator as the testing platform, which allows optimization of the IISS protocol's performance.
by Tingting Mao.
Ph.D.
2

Kolmistr, Tomáš. "Frameworky pro jednotkové testování v jazyce Scala." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-203973.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis deals with frameworks for unit testing in the Scala programming language. In total, five frameworks are presented in the thesis, two of which are designed for unit testing with mock objects and three for testing without mock objects. The first, theoretical part introduces concepts regarding testing and the Scala programming language. The next part specifies the criteria for selecting the frameworks, including the criteria for their subsequent comparison. In the practical part, unit tests are written according to test scenarios and the comparison of the frameworks is evaluated.
3

Donepudi, Harinivesh. "An Apache Hadoop Framework for Large-Scale Peptide Identification." TopSCHOLAR®, 2015. http://digitalcommons.wku.edu/theses/1527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Peptide identification is an essential step in protein identification, and Peptide Spectrum Match (PSM) data sets are huge, making them time-consuming to process on a single machine. In a typical run of a peptide identification method, PSMs are ranked by a cross-correlation, a statistical score, or a likelihood that the match between the experimental and hypothetical spectra is correct and unique. This process takes a long time to execute, and there is a demand for increased performance to handle large peptide data sets. Distributed frameworks are needed to reduce the processing time, but this comes at the price of complexity in developing and executing them. In distributed computing, the program may be divided into multiple parts to be executed. The work in this thesis describes the implementation of an Apache Hadoop framework for large-scale peptide identification using C-Ranker. The Apache Hadoop data processing software is immersed in a complex environment composed of massive machine clusters, large data sets, and several processing jobs. The framework uses the Apache Hadoop Distributed File System (HDFS) and Apache MapReduce to store and process the peptide data, respectively. The proposed framework uses a peptide processing algorithm named C-Ranker which takes peptide data as input and identifies the correct PSMs. The framework has two steps: execute the C-Ranker algorithm on a Hadoop cluster, and compare the correct PSMs generated via the Hadoop approach with those from the normal standalone execution of C-Ranker. The goal of this framework is to process large peptide datasets using the Apache Hadoop distributed approach.
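To make the MapReduce idea concrete, here is a minimal sketch of a Hadoop Streaming-style job for filtering and deduplicating PSMs. The tab-separated record layout, the score cutoff, and the file/command names are illustrative assumptions; they are not the actual C-Ranker scoring rule or the pipeline used in this thesis.

```python
#!/usr/bin/env python
"""Illustrative Hadoop Streaming job for PSM filtering (sketch only).

Would be run as two stages, e.g.:
  hadoop jar hadoop-streaming.jar -input psms.tsv -output filtered \
      -mapper "psm_stream.py map" -reducer "psm_stream.py reduce"

Assumes tab-separated lines: spectrum_id <TAB> peptide <TAB> score.
The 2.5 cutoff stands in for the real C-Ranker decision rule.
"""
import sys


def map_stage(stream):
    for line in stream:
        fields = line.rstrip("\n").split("\t")
        if len(fields) != 3:
            continue                      # skip malformed records
        spectrum_id, peptide, score = fields
        if float(score) >= 2.5:           # placeholder filter
            print(f"{spectrum_id}\t{peptide}\t{score}")


def reduce_stage(stream):
    best = {}                             # spectrum_id -> (peptide, score)
    for line in stream:
        spectrum_id, peptide, score = line.rstrip("\n").split("\t")
        score = float(score)
        if spectrum_id not in best or score > best[spectrum_id][1]:
            best[spectrum_id] = (peptide, score)
    for spectrum_id, (peptide, score) in best.items():
        print(f"{spectrum_id}\t{peptide}\t{score}")


if __name__ == "__main__":
    (map_stage if sys.argv[1] == "map" else reduce_stage)(sys.stdin)
```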
4

Park, Dong-Jun. "Video event detection framework on large-scale video data." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/2754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Detection of events and actions in video entails substantial processing of very large, even open-ended, video streams. Video data presents a unique challenge for the information retrieval community because properly representing video events is challenging. We propose a novel approach to analyze temporal aspects of video data. We consider video data as a sequence of images that form a 3-dimensional spatiotemporal structure, and perform multiview orthographic projection to transform the video data into 2-dimensional representations. The projected views allow a unique way to represent video events and capture the temporal aspect of video data. We extract local salient points from the 2D projection views and apply a detection-via-similarity approach to a wide range of events against real-world surveillance data. We demonstrate that our example-based detection framework is competitive and robust. We also investigate synthetic-example-driven retrieval as a basis for query-by-example.
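A minimal numpy sketch of the multiview orthographic projection idea described above, assuming the video is already loaded as a (frames, height, width) grayscale array; the choice of a mean projection and the synthetic clip are illustrative, not necessarily what the dissertation uses.

```python
import numpy as np

def orthographic_views(video: np.ndarray) -> dict:
    """Project a (T, H, W) spatiotemporal volume onto its three 2-D views.

    The front view collapses time, while the top and side views collapse one
    spatial axis each, so temporal motion appears as streaks in the x-t and
    y-t images.
    """
    assert video.ndim == 3, "expected a (frames, height, width) array"
    return {
        "front_xy": video.mean(axis=0),   # H x W  (time collapsed)
        "top_xt":   video.mean(axis=1),   # T x W  (height collapsed)
        "side_yt":  video.mean(axis=2),   # T x H  (width collapsed)
    }

# Example with a synthetic clip: a bright block drifting to the right.
clip = np.zeros((30, 64, 64), dtype=np.float32)
for t in range(30):
    clip[t, 28:36, t:t + 8] = 1.0
views = orthographic_views(clip)
print({name: v.shape for name, v in views.items()})
```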
5

Kervazo, Christophe. "Optimization framework for large-scale sparse blind source separation." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS354/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
During the last decades, Blind Source Separation (BSS) has become a key analysis tool to study multi-valued data. The objective of this thesis is however to focus on large-scale settings, for which most classical algorithms fail. More specifically, it is subdivided into four sub-problems taking their roots around the large-scale sparse BSS issue: i) introduce a mathematically sound robust sparse BSS algorithm which does not require any relaunch (despite a difficult hyper-parameter choice); ii) introduce a method able to maintain high-quality separations even when a large number of sources needs to be estimated; iii) make a classical sparse BSS algorithm scalable to large-scale datasets; and iv) extend the approach to the non-linear sparse BSS problem. The proposed methods are extensively tested on both simulated and realistic experiments to demonstrate their quality. In-depth interpretations of the results are proposed.
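For readers unfamiliar with sparse BSS, the sketch below shows one generic alternating update with soft thresholding for the linear model X ≈ AS with sparse sources S. It is a textbook-style illustration under simple assumptions, not one of the specific algorithms contributed by the thesis.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 norm (promotes sparsity)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_bss(X, n_sources, lam=0.1, n_iter=200, seed=0):
    """Very small alternating scheme for X ~ A @ S with sparse S."""
    rng = np.random.default_rng(seed)
    m, t = X.shape
    A = rng.standard_normal((m, n_sources))
    for _ in range(n_iter):
        # Least-squares update of the sources, then sparsity projection.
        S = soft_threshold(np.linalg.lstsq(A, X, rcond=None)[0], lam)
        # Least-squares update of the mixing matrix, then column rescaling
        # to remove the scale ambiguity inherent to BSS.
        A = np.linalg.lstsq(S.T, X.T, rcond=None)[0].T
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12
    return A, S

# Toy demo: two sparse sources mixed into three observations.
rng = np.random.default_rng(1)
S_true = rng.standard_normal((2, 500)) * (rng.random((2, 500)) < 0.05)
A_true = rng.standard_normal((3, 2))
A_est, S_est = sparse_bss(A_true @ S_true, n_sources=2)
print(A_est.shape, S_est.shape)
```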
6

Schuchmann, Roberta. "A framework for unlocking large-scale urban regeneration projects." Thesis, Curtin University, 2018. http://hdl.handle.net/20.500.11937/75246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis investigates three large-scale urban projects in Australia: Kogarah in NSW, Dandenong in Victoria and Canning in Western Australia. Each is a regeneration site based upon transit-oriented development (TOD) principles. The findings of this research culminated in the creation of a framework that can be used for other regeneration projects, such as the Canning City Centre, a TOD in the planning phase.
7

Chacin, Martínez Pablo Jesus. "A Middleware framework for self-adaptive large scale distributed services." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/80538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Modern service-oriented applications demand the ability to adapt to changing conditions and unexpected situations while maintaining a required QoS. Existing self-adaptation approaches seem inadequate to address this challenge because many of their assumptions are not met on the large-scale, highly dynamic infrastructures on which these applications are generally deployed. The main motivation of our research is to devise principles that guide the construction of large-scale self-adaptive distributed services. We aim to provide sound modeling abstractions based on a clear conceptual background, and their realization as a middleware framework that supports the development of such services. Taking inspiration from the concepts of decentralized markets in economics, we propose a solution based on three principles: emergent self-organization, utility-driven behavior and model-less adaptation. Based on these principles, we designed Collectives, a middleware framework which provides a comprehensive solution for the diverse adaptation concerns that arise in the development of distributed systems. We tested the soundness and comprehensiveness of the Collectives framework by implementing eUDON, a middleware for self-adaptive web services, which we then evaluated extensively by means of a simulation model to analyze its adaptation capabilities in diverse settings. We found that eUDON exhibits the intended properties: it adapts to diverse conditions like peaks in the workload and massive failures, maintaining its QoS and using the available resources efficiently; it is highly scalable and robust; it can be implemented on existing services in a non-intrusive way; and it does not require any performance model of the services, their workload or the resources they use. We can conclude that our work proposes a solution for the requirements of self-adaptation in demanding usage scenarios without introducing additional complexity. In that sense, we believe we make a significant contribution towards the development of future-generation service-oriented applications.
8

Mohamed, Ibrahim Daoud Ahmed. "Automatic history matching in Bayesian framework for field-scale applications." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Conditioning geologic models to production data and assessment of uncertainty is generally done in a Bayesian framework. The current Bayesian approach suffers from three major limitations that make it impractical for field-scale applications. These are: first, the CPU time of the Bayesian inverse problem using the modified Gauss-Newton algorithm with full covariance as regularization scales quadratically with increasing model size; second, the sensitivity calculation using finite differences with the forward model depends upon the number of model parameters or the number of data points; and third, the covariance matrix calculation requires high CPU time and memory. Previous attempts to alleviate the third limitation used an analytically derived stencil, but these are limited to exponential models only. We propose a fast and robust adaptation of the Bayesian formulation for inverse modeling that overcomes many of the current limitations. First, we use a commercial finite difference simulator, ECLIPSE, as a forward model, which is general and can account for the complex physical behavior that dominates most field applications. Second, the production data misfit is represented by a single generalized travel time misfit per well, thus effectively reducing the number of data points to one per well and ensuring the matching of the entire production history. Third, we use both the adjoint method and a streamline-based sensitivity method for sensitivity calculations. The adjoint method depends on the number of wells integrated, which is generally an order of magnitude smaller than the number of data points or model parameters. The streamline method is more efficient and faster as it requires only one simulation run per iteration regardless of the number of model parameters or data points. Fourth, for solving the inverse problem, we utilize an iterative sparse matrix solver, LSQR, along with an approximation of the square root of the inverse of the covariance calculated using a numerically derived stencil, which is broadly applicable to a wide class of covariance models. Our proposed approach is computationally efficient and, more importantly, the CPU time scales linearly with respect to model size. This makes automatic history matching and uncertainty assessment using a Bayesian framework more feasible for large-scale applications. We demonstrate the power and utility of our approach using synthetic cases and a field example. The field example is from the Goldsmith San Andres Unit in West Texas, where we matched 20 years of production history and generated multiple realizations using the Randomized Maximum Likelihood method for uncertainty assessment. Both the adjoint method and the streamline-based sensitivity method are used to illustrate the broad applicability of our approach.
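As a rough illustration of the kind of regularized linear solve mentioned above (an iterative sparse solver applied to a stacked sensitivity/regularization system), here is a hedged sketch using scipy's LSQR on synthetic data. The matrices are placeholders; they are not the reservoir sensitivities or the numerically derived covariance stencil from the dissertation.

```python
import numpy as np
from scipy.sparse import vstack, identity, random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

n_param, n_data = 500, 40          # model parameters vs. (travel-time) data
G = sprandom(n_data, n_param, density=0.05, random_state=0, format="csr")
d = rng.standard_normal(n_data)    # misfit vector (placeholder)

# Stand-in for the square root of the inverse prior covariance; the thesis
# uses a numerically derived stencil, here we just use a scaled identity.
lam = 0.5
L = lam * identity(n_param, format="csr")

# Solve min ||G m - d||^2 + ||L m||^2 by stacking the two blocks.
A = vstack([G, L]).tocsr()
b = np.concatenate([d, np.zeros(n_param)])
m_est = lsqr(A, b, atol=1e-10, btol=1e-10)[0]
print(m_est.shape)
```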
9

Zafari, Afshin. "Adapting a Radial Basis Functions Framework for Large-Scale Computing." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-182859.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work is aimed at extending a parallel computing framework for radial basis function methods for solving partial differential equations. The existing framework uses a task-based parallelization method on shared-memory architectures to run tasks concurrently on multi-core machines using POSIX threads. In this method, an algorithm is viewed as a set of tasks, each of which performs a specific part of that algorithm while reading some data and producing other data. All the dependencies between tasks are translated into data dependencies, which makes the tasks decoupled. This work uses the same method but for distributed-memory systems, using a message-passing scheme for inter-process communication. These frameworks cooperate with each other to distribute and run the tasks among nodes and/or cores in a hybrid combination of the multi-threading and message-passing parallel programming paradigms. All communication between processes (nodes) is performed asynchronously (non-blocking) so that it can be overlapped with computation, and the execution flow of the framework is implemented using a state machine software construct.
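A minimal mpi4py sketch of the non-blocking, overlapped communication pattern the abstract describes: each rank posts a send/receive, does independent local work, and only waits when the remote data is actually needed. The ring layout and array sizes are invented for illustration and are not taken from the framework itself.

```python
# Run with e.g.:  mpiexec -n 4 python overlap_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

left, right = (rank - 1) % size, (rank + 1) % size   # ring of neighbours
local = np.full(1000, float(rank))                   # this rank's task data
halo = np.empty(1, dtype=np.float64)                 # value expected from the left

# Post non-blocking communication first ...
reqs = [
    comm.Isend(local[-1:], dest=right, tag=0),
    comm.Irecv(halo, source=left, tag=0),
]

# ... then overlap it with independent local computation ...
partial = np.sin(local).sum()

# ... and only synchronise when the remote data is actually needed.
MPI.Request.Waitall(reqs)
print(f"rank {rank}: partial={partial:.3f}, halo from {left} = {halo[0]}")
```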
10

Orgerie, Anne-Cécile. "An Energy-Efficient Reservation Framework for Large-Scale Distributed Systems." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00672130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Over the past few years, the energy consumption of Information and Communication Technologies (ICT) has become a major issue. Nowadays, ICT accounts for 2% of the global CO2 emissions, an amount similar to that produced by the aviation industry. Large-scale distributed systems (e.g. Grids, Clouds and high-performance networks) are often heavy electricity consumers because -- for high-availability requirements -- their resources are always powered on even when they are not in use. Reservation-based systems guarantee quality of service, allow user constraints to be respected and enable fine-grained resource management. For these reasons, we propose an energy-efficient reservation framework to reduce the electric consumption of distributed systems and dedicated networks. The framework, called ERIDIS, is adapted to three different systems: data centers and grids, cloud environments and dedicated wired networks. By validating each derived infrastructure, we show that significant amounts of energy can be saved using ERIDIS in current and future large-scale distributed systems.
11

Duarte, Leonardo Seperuelo. "TOPSIM: A Plugin-Based Framework for Large-Scale Numerical Analysis." Pontifícia Universidade Católica do Rio de Janeiro, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28680@1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Computational methods in engineering are used to solve physical problems that have no analytical solution or whose exact mathematical representation is unfeasible. Numerical techniques, including the widely used finite element method, may require the solution of linear systems with hundreds of thousands of equations, demanding high computational resources (memory and time). In this thesis, we present a plugin-based framework for large-scale numerical analysis. The framework is used as an original tool to solve topology optimization problems using the finite element method with millions of elements. Our strategy uses an element-by-element technique to implement a highly parallel code for an iterative solver with low memory consumption. Moreover, the plugin approach provides a fully flexible and easy-to-extend environment, where different types of applications, requiring different types of finite elements, materials, linear solvers, and formulations, can be developed and improved. The kernel of the framework is minimal, with only a plugin manager module responsible for loading the desired plugins at runtime using an input configuration file. All the features required for a specific application are defined inside plugins, with no need to change the kernel. Plugins may provide or require additional specialized interfaces, where other plugins may be connected to compose a more complex and complete system. We present results for a structural linear elastic static analysis and for a structural topology optimization analysis. The simulations use Q4, hexahedral (Brick8), and hexagonal prism (Honeycomb) elements, with direct and iterative solvers using sequential, parallel, and distributed computing. We investigate the performance regarding memory usage and the scalability of the solution for problems of different sizes, from small to very large examples, on a single machine and on a cluster. We simulated a linear elastic static example with 500 million elements on 300 machines.
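The element-by-element strategy mentioned above can be illustrated with a matrix-free conjugate-gradient solve, in which the global stiffness matrix is never assembled and each element only contributes its local product. The 1-D two-node elements below are a deliberately tiny stand-in for the Q4/Brick8/Honeycomb elements of the thesis, and the grounding spring is an assumption made to keep the sketch well posed.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n_elem = 100                       # 1-D bar discretised into 2-node elements
n_node = n_elem + 1
k_e = np.array([[1.0, -1.0],
                [-1.0, 1.0]])      # local stiffness of one unit element

def apply_K(u):
    """Compute y = K u element by element, without ever assembling K."""
    y = np.zeros_like(u)
    for e in range(n_elem):
        dofs = [e, e + 1]                      # global dofs of element e
        y[dofs] += k_e @ u[dofs]               # scatter local contribution
    y[0] += u[0]   # grounding spring at node 0 keeps the operator SPD
    return y

K = LinearOperator((n_node, n_node), matvec=apply_K)
f = np.zeros(n_node)
f[-1] = 1.0                                    # unit load at the free end
u, info = cg(K, f, atol=1e-10)
print("cg info:", info, "tip displacement:", u[-1])
```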
12

Bosse, Michael Carsten. "ATLAS: a framework for large scale automated mapping and localization." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/30088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 203-207).
This thesis describes a scalable robotic navigation system that builds a map of the robot's environment on the fly. This problem is also known as Simultaneous Localization and Mapping (SLAM). The SLAM problem has as inputs the control of the robot's motion and sensor measurements to features in the environment. The desired output is the path traversed by the robot (localization) and a representation of the sensed environment (mapping). The principal contribution of this thesis is the introduction of a framework, termed Atlas, that alleviates the computational restrictions of previous approaches to SLAM when mapping extended environments. The Atlas framework partitions the SLAM problem into a graph of submaps, each with its own coordinate system. Furthermore, the framework facilitates the modularity of sensors, map representations, and local navigation algorithms by encapsulating the implementation-specific algorithms into an abstracted module. The challenge of loop closing is handled with a module that matches submaps and a verification procedure that trades latency in loop closing for a lower chance of the incorrect loop detections inherent in symmetric environments. The framework is demonstrated with several datasets that map large indoor and urban outdoor environments using a variety of sensors: a laser scanner, sonar rangers, and omni-directional video.
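To make the graph-of-submaps idea concrete, here is a small hedged sketch of the data structure: local maps keep their own coordinate frames, edges store relative transforms, and a global pose is only composed on demand. The 2-D rigid transforms, node names, and landmark values are illustrative assumptions, not the Atlas implementation.

```python
import numpy as np
import networkx as nx

def se2(x, y, theta):
    """Homogeneous 2-D rigid transform (rotation + translation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Each node is a submap with its own local frame; each edge stores the
# relative transform taking coordinates of `dst` into the frame of `src`.
atlas = nx.Graph()
atlas.add_node("A", landmarks=[(1.0, 0.5)])       # expressed in A's frame
atlas.add_node("B", landmarks=[(0.2, -0.3)])      # expressed in B's frame
atlas.add_edge("A", "B", src="A", dst="B", T=se2(5.0, 0.0, np.pi / 2))

def transform_between(graph, frame_from, frame_to):
    """Compose edge transforms along the path so frame_to maps into frame_from."""
    T = np.eye(3)
    path = nx.shortest_path(graph, frame_from, frame_to)
    for u, v in zip(path, path[1:]):
        e = graph.edges[u, v]
        T_uv = e["T"] if e["src"] == u else np.linalg.inv(e["T"])
        T = T @ T_uv
    return T

# Express B's landmark in A's frame only when it is actually needed.
T_ab = transform_between(atlas, "A", "B")
pts_b = np.array([[x, y, 1.0] for x, y in atlas.nodes["B"]["landmarks"]])
print((T_ab @ pts_b.T).T[:, :2])
```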
by Michael Carsten Bosse.
Ph.D.
13

Kalua, Amos. "Framework for Integrated Multi-Scale CFD Simulations in Architectural Design." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/105013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An important aspect in the process of architectural design is the testing of solution alternatives in order to evaluate them on their appropriateness within the context of the design problem. Computational Fluid Dynamics (CFD) analysis is one of the approaches that have gained popularity in the testing of architectural design solutions, especially for purposes of evaluating the performance of natural ventilation strategies in buildings. Natural ventilation strategies can reduce the energy consumption in buildings while ensuring the good health and wellbeing of the occupants. In order for natural ventilation strategies to perform as intended, a number of factors interact and these factors must be carefully analysed. CFD simulations provide an affordable platform for such analyses to be undertaken. Traditionally, these simulations have largely followed the direction of Best Practice Guidelines (BPGs) for quality control. These guidelines are built around certain simplifications due to the high computational cost of CFD modelling. However, while the computational cost has increasingly fallen and is predicted to continue to drop, the BPGs have largely remained without significant updates. The need to develop a CFD simulation framework that leverages the contemporary and anticipates the future computational cost and capacity can, therefore, not be overemphasised. When conducting CFD simulations during the process of architectural design, the variability of the wind flow field, including the wind direction and its velocity, constitutes an important input parameter. Presently, however, in many simulations the wind direction is largely used in a steady-state manner. It is assumed that the direction of flow downwind of a meteorological station remains constant. This assumption may potentially compromise the integrity of CFD modelling as, in reality, the wind flow field is bound to be dynamic from place to place. In order to improve the accuracy of the CFD simulations for architectural design, it is therefore necessary to adequately account for this variability. This study was a two-pronged investigation with the ultimate objective of improving the accuracy of the CFD simulations that are used in the architectural design process, particularly for the design and analysis of natural ventilation strategies. Firstly, a framework for integrated meso-scale and building-scale CFD simulations was developed. Secondly, the newly developed framework was implemented by deploying it to study the variability of the wind flow field between a reference meteorological station, the Virginia Tech Airport, and a selected localized building-scale site on the Virginia Tech campus. The findings confirmed that the wind flow field varies from place to place and showed that the newly developed framework was able to capture this variation, ultimately generating a wind flow field characterization representative of the conditions prevalent at the localized building site. This framework can be particularly useful when undertaking de-coupled CFD simulations to design and analyse natural ventilation strategies in the building design process.
Doctor of Philosophy
The use of natural ventilation strategies in building design has been identified as one viable pathway toward minimizing energy consumption in buildings. Natural ventilation can also reduce the prevalence of the Sick Building Syndrome (SBS) and enhance the productivity of building occupants. This research study sought to develop a framework that can improve the usage of Computational Fluid Dynamics (CFD) analyses in the architectural design process for purposes of enhancing the efficiency of natural ventilation strategies in buildings. CFD is a branch of computational physics that studies the behaviour of fluids as they move from one point to another. The usage of CFD analyses in architectural design requires the input of wind environment data such as direction and velocity. Presently, this data is obtained from a weather station and there is an assumption that this data remains the same even for a building site located at a considerable distance away from the weather station. This potentially compromises the accuracy of the CFD analyses, as studies have shown that due to a number of factors such as the urban built form, vegetation, terrain and others, the wind environment is bound to vary from one point to another. This study sought to develop a framework that quantifies this variation and provides a way for translating the wind data obtained from a weather station to data that more accurately characterizes a local building site. With this accurate site wind data, the CFD analyses can then provide more meaningful insights into the use of natural ventilation in the process of architectural design. This newly developed framework was deployed on a study site at Virginia Tech. The findings showed that the framework was able to demonstrate that the wind flow field varies from one place to another and it also provided a way to capture this variation, ultimately generating a wind flow field characterization that was more representative of the local conditions.
14

Asnicar, Francesco. "A phylogenetic framework for large-scale analysis of microbial communities." Doctoral thesis, Università degli studi di Trento, 2019. https://hdl.handle.net/11572/368807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The human microbiome represents the community of archaea, bacteria, micro-eukaryotes, and viruses present in and on the human body. Metagenomics is the most recent and advanced tool that allows the study of the microbiome at high resolution by sequencing the whole genetic content of a biological sample. The computational side of the metagenomic pipeline is recognized as the most challenging one, as it needs to process large amounts of data coming from next-generation sequencing technologies to obtain accurate profiles of the microbiomes. Among all the analyses that can be performed, phylogenetics allows researchers to study microbial evolution, resolve strain-level relationships between microbes, and also taxonomically place and characterize novel and unknown microbial genomes. This thesis presents a novel computational phylogenetic approach implemented during my doctoral studies. The aims of the work range from the high-quality visualization of large phylogenies to the reconstruction of phylogenetic trees at unprecedented scale and resolution. Large-scale and accurate phylogeny reconstruction is crucial in tracking species at strain-level resolution across samples and phylogenetically characterizing unknown microbes by placing their genomes reconstructed via metagenomic assembly into a large reference phylogeny. The proposed computational phylogenetic framework has been used in several different metagenomic analyses, improving our understanding of the complexity of microbial communities. It proved, for example, to be crucial in the detection of vertical transmission events from mothers to infants and for the placement of thousands of unknown metagenome-reconstructed genomes, leading to the definition of many new candidate species. This lays the basis for large-scale and more accurate analysis of the microbiome.
15

Asnicar, Francesco. "A phylogenetic framework for large-scale analysis of microbial communities." Doctoral thesis, University of Trento, 2019. http://eprints-phd.biblio.unitn.it/3663/1/Francesco_Asnicar_thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The human microbiome represents the community of archaea, bacteria, micro-eukaryotes, and viruses present in and on the human body. Metagenomics is the most recent and advanced tool that allows the study of the microbiome at high resolution by sequencing the whole genetic content of a biological sample. The computational side of the metagenomic pipeline is recognized as the most challenging one, as it needs to process large amounts of data coming from next-generation sequencing technologies to obtain accurate profiles of the microbiomes. Among all the analyses that can be performed, phylogenetics allows researchers to study microbial evolution, resolve strain-level relationships between microbes, and also taxonomically place and characterize novel and unknown microbial genomes. This thesis presents a novel computational phylogenetic approach implemented during my doctoral studies. The aims of the work range from the high-quality visualization of large phylogenies to the reconstruction of phylogenetic trees at unprecedented scale and resolution. Large-scale and accurate phylogeny reconstruction is crucial in tracking species at strain-level resolution across samples and phylogenetically characterizing unknown microbes by placing their genomes reconstructed via metagenomic assembly into a large reference phylogeny. The proposed computational phylogenetic framework has been used in several different metagenomic analyses, improving our understanding of the complexity of microbial communities. It proved, for example, to be crucial in the detection of vertical transmission events from mothers to infants and for the placement of thousands of unknown metagenome-reconstructed genomes, leading to the definition of many new candidate species. This lays the basis for large-scale and more accurate analysis of the microbiome.
16

Ioris, Antônio Augusto Rossotto. "A framework for assessing freshwater sustainability at the river basin scale." Thesis, University of Aberdeen, 2005. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU200347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis focuses upon understanding the process of developing water sustainability indicators and their application for the assessment of catchment management systems. The study deals with the assessment of environmental, economic and social processes related to sustainable water management. In order to develop the framework of indicators, a group of catchments was selected in Scotland (Rivers Clyde and Dee) and in Brazil (Rivers Sinos and Pardo). Drawing on international experience and in consultation with local water stakeholders, a list of critical criteria of water sustainability was initially selected. These criteria were: water quality; water quantity; system resilience; water use efficiency; user sector productivity; institutional preparedness; equitable water services; water-related well-being; and public participation. From these criteria a framework of sustainability indicators was developed through an inductive and participatory approach, which included prospective contacts with water stakeholders, a sequence of trial exercises and a pilot-study. The proposed framework of indicators is the product of the amalgamation of existing literature, interaction with stakeholders and informed choices by the researcher. The calculation of indicators required the gathering and manipulation of secondary data using both quantitative and qualitative research techniques. The main difficulties encountered during the calculation of indicator results were data inaccessibility and incompatibility of spatial scales. The interpretation of the sustainability condition of the catchments was based on the analysis of historic trends and future tendencies of the proposed indicators. The research outcomes were mixed in all four studied catchments, with specific achievements and deficiencies identified in the local water management approaches. The final research stage included interviews with stakeholders to discuss both indicator results and the appropriateness of the proposed methodology.
17

Christensen, Kirsten Elvira. "Open-Framework Germanates : Crystallography, structures and cluster building units." Doctoral thesis, Stockholm : Department of Physical, Inorganic and Structural Chemistry, Stockholm university, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-7501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Botha, Mark Jonathan. "Collective ownership in the South African small-scale fishing sector: a framework for sustained economic growth." Doctoral thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29329.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The thesis tested the notion of collective ownership in the small-scale fisheries sector, as advocated by the Department of Agriculture, Forestry and Fisheries, the South African government department responsible for fisheries administration. More specifically, it examined the conditions under which collective ownership would yield economic benefits to small-scale fishers. This was done according to three constructs, i.e. collective entrepreneurship, agency theory and value chain development. In testing the study’s presuppositions, a sequential qualitative-quantitative mixed methods research methodology was used. Data were gathered through focus group discussions, individual interviews and surveys with fishers from South Africa’s Western Cape and Northern Cape provinces. Qualitative data were analysed through the constant comparative approach, and preliminary outcomes thereof were used to devise the quantitative instruments, which were analysed with the SPSS statistical package. The outcomes of the quantitative data analysis were then discussed with key participants to validate the findings and to ensure overall congruency. In the current value chain dispensation, small-scale fishers realise approximately 38% of overall revenue accrual, whereas the remaining 62% is realised by fish-processing establishments and exporters. The value chain requires reconfiguration to progressively enable small-scale fishers to own and control all upstream and downstream catch, processing and marketing processes. In addition, greater value can be realised when all regulatory, catch, processing and marketing processes are efficiently aligned with local and export market requirements. The findings note that small-scale fishers require developmental support to exploit opportunities. The study suggests that the required support should be facilitated through a dedicated multi- and interdisciplinary fisheries institute located at a higher education institution. This institute needs to focus on training, advisory services and research, as well as on defined support for the fisheries co-operatives. Moreover, the impact of the envisaged institute provides for the establishment of localised fishing community information centres, located near coastal fishing communities, harbours and slipways. Such centres ought to improve communications, trust-building relations and shared expertise among all actors, namely small-scale fishers, their co-operatives, the various government departments, industrial associations, non-governmental organisations, agencies and all others implicated, to maximise benefit and effectively secure government’s infrastructural investment programme within the small-scale fisheries sector.
19

Zein, Aghaji Mohammad. "Large Scale Computational Screening of Metal Organic Framework Materials for Natural Gas Purification." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An immediate reduction in global CO2 emissions could be accomplished by replacing coal- or oil-based energy sources with purified natural gas. The most important process involved in natural gas purification is the separation of CO2 from CH4, where Pressure Swing Adsorption (PSA) technology on porous materials has emerged as a less energy-demanding option. Among porous materials which are used or could potentially be used in PSA, Metal Organic Frameworks (MOFs) have attracted particular interest owing to their record-breaking surface areas, high porosity, and high tunability. However, the discovery of optimal MOFs for use in adsorption-based CO2 separation processes is remarkably challenging, as millions of MOFs can potentially be constructed from virtually limitless combinations of inorganic and organic secondary building units. To overcome this combinatorial problem, this thesis aims to (1) identify important design features of MOFs for CO2/CH4 separation through the investigation of currently existing MOFs as well as the high-throughput computational screening of a large database of MOFs, and (2) develop efficient computational tools for aiding the discovery of new MOF materials. To validate the computational methods and models used in this thesis, the first work presents the computational modeling of CO2 adsorption on an experimental CuBDPMe MOF using grand canonical Monte Carlo simulations and density functional theory. The simulated CO2 adsorption isotherms are in good agreement with experiment, which confirms the accuracy of the models used in our simulations throughout this thesis. The second work investigates the performance of an experimental MIL-47 MOF and its seven functionalized derivatives in the context of natural gas purification, and compares their performance with that of other well-known MOFs and commercially used adsorbents. The computational results show that introducing polar non-bulky functional groups on MIL-47 leads to an enhancement in its performance, and the comparison suggests that MIL-47-NO2 could be a possible candidate as a solid sorbent for natural gas purification. This study is followed by a computational study of water effects on natural gas purification using MOFs, as traces of water are present in natural gas under pipeline specifications. From that study, it is found that water has a marginal effect on natural gas purification in hydrophobic MOFs under pipeline specifications. Following the aforementioned studies, a database of 324,500 hypothetical MOFs is screened for performance in natural gas purification using the general protocol defined in this thesis. From the screening, we identify 'hit' materials for targeted synthesis, and investigate structure-property relationships with the intent of finding important MOF design features relevant to natural gas purification. We show that layered sheets consisting of poly-aromatic molecules separated by a perpendicular distance of roughly 7 Å are an important structural-chemical feature that leads to strong adsorption of CO2. Following the screening study, we develop efficient computational tools for the recognition of high-performing MOFs for methane purification using machine learning techniques. A training set of 32,500 MOF structures was used to calibrate support vector machine (SVM) classifiers that incorporate simple geometrical features including pore size, void fraction, and surface area.
The SVM machine learning classifiers can be used as a filtering tool when screening large databases. The SVM classifiers were tested on ~290,000 MOFs that were not part of the training set and could correctly identify up to 70% of high-performing MOFs while only flagging a fraction of the MOFs for more rigorous screening. As a complement to this study, we present ML classifier models for CO2/CH4 separation parameters that separately incorporate the Voronoi hologram and AP-RDF descriptors, and we compare their performance with the classifiers composed of simple geometrical descriptors. From the comparison, it is found that including AP-RDF and Voronoi hologram descriptors in the classifiers improves their performance by 20% in capturing high-performing MOFs. Finally, from the screening data, we develop a novel cheminformatics tool, MOFFinder, for aiding the discovery of new MOFs for CO2 scrubbing from natural gas. It has a user-friendly graphical interface to promote easy exploration of over 300,000 hypothetical MOFs. It enables synthetic chemists to find MOFs of interest by searching the database for secondary building units (SBUs), geometric features, functional groups, and adsorption properties. MOFFinder provides, for the first time, substructure/similarity querying of porous materials and is publicly available at titan.chem.uottawa.ca/moffinger.
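A hedged sketch of the classifier setup described above, using scikit-learn's SVC on the three simple geometrical descriptors named in the abstract (pore size, void fraction, surface area). The random data and the labelling cutoff are placeholders, not the thesis's training set or performance criterion.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder descriptors for 5000 hypothetical MOFs:
# pore size (angstrom), void fraction (-), surface area (m^2/g).
X = np.column_stack([
    rng.uniform(3, 25, 5000),
    rng.uniform(0.1, 0.95, 5000),
    rng.uniform(100, 7000, 5000),
])
# Placeholder label: "high-performing" if a made-up score passes a cutoff.
score = 2.0 * X[:, 1] + 0.0002 * X[:, 2] - 0.05 * np.abs(X[:, 0] - 7.0)
y = (score > np.percentile(score, 90)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_tr, y_tr)

# Used as a pre-filter: flag candidates for the expensive GCMC screening.
flagged = clf.predict(X_te)
recall = (flagged[y_te == 1] == 1).mean()
print(f"fraction flagged: {flagged.mean():.2f}, recall on top MOFs: {recall:.2f}")
```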
20

Gandolfi, Giovanni. "Modelling the small-scale clustering of VIPERS galaxies in the HOD framework." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18753/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The present work is intended as a contribution to one of the most interesting and contemporary questions of astrophysics: what is the link between baryonic matter and dark matter? An extremely useful tool to investigate this relationship is the study of galaxy clustering within the dark matter haloes in which these objects originated. Studying galaxy clustering means statistically studying their spatial properties, under the assumption that galaxies are a biased tracer of the mass distribution that hosts them. The modelling of this bias is the real focus of the present Thesis work. This modelling is achieved through the Halo Occupation Distribution (HOD) model. Unlike other similar models, the HOD allows clustering on small scales to be modelled efficiently by introducing a 1-halo term in the two-point correlation function of galaxies. The model is in fact able to take into account how the astrophysics of inter-galactic interactions modifies the spatial distribution of galaxies in haloes. The HOD model presented here has been included in a set of free software libraries optimised for cosmological studies, called CosmoBolognaLib. Subsequently, the implemented model has been tested on data from the latest release of the VIPERS survey, for which the HOD has not yet been exploited. In this way it has been shown that the implementation presented here is effective, paving the way for a future application to data from completed or next-generation surveys, such as Euclid. Finally, we present an improvement to the model which, in the future, will make it possible to model the clustering of galaxies on small scales even better.
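For readers unfamiliar with the HOD formalism, the snippet below evaluates the commonly used mean occupation functions for central and satellite galaxies (a standard parametrization); the parameter values are illustrative defaults and are not the ones fitted to VIPERS in this thesis.

```python
import numpy as np
from scipy.special import erf

def n_central(log_m, log_m_min=12.0, sigma_logm=0.3):
    """Mean number of central galaxies in a halo of mass 10**log_m."""
    log_m = np.asarray(log_m, dtype=float)
    return 0.5 * (1.0 + erf((log_m - log_m_min) / sigma_logm))

def n_satellite(log_m, log_m0=12.2, log_m1=13.2, alpha=1.0, **cen_kw):
    """Mean number of satellites, modulated by the central occupation."""
    m = 10.0 ** np.asarray(log_m, dtype=float)
    excess = np.clip(m - 10.0 ** log_m0, 0.0, None)   # no satellites below M0
    return n_central(log_m, **cen_kw) * (excess / 10.0 ** log_m1) ** alpha

log_m = np.linspace(11.0, 15.0, 9)
for lm, nc, ns in zip(log_m, n_central(log_m), n_satellite(log_m)):
    print(f"log M = {lm:4.1f}  <N_cen> = {nc:5.3f}  <N_sat> = {ns:8.3f}")
```

The sum of these two terms over the halo mass function is what feeds the 1-halo and 2-halo contributions to the two-point correlation function mentioned in the abstract.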
21

Zhao, Die. "Workflow Management System Analysis and Construction Design Framework for Large-scale Enterprise." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Nowadays, with the rapid development of information technology, workflow management technology has become a basic component of enterprise information system construction, and the research and application level of workflow management technology directly determines the level of an enterprise information system. Using a workflow management system within an enterprise can improve the operating efficiency of business processes and enhance enterprise competitiveness.

This paper describes the origin and status of workflow management and builds its analysis on the workflow reference model of the Workflow Management Coalition. It discusses the overall requirements of a workflow management system, especially for large-scale enterprises, covering four essential components: workflow engines, process design tools, administration and monitoring tools, and client tools. It also studies workflow modeling techniques and adopts a method based on UML activity diagrams for modeling. Finally, it proposes a design and realization method for constructing a workflow management architecture that utilizes a lightweight J2EE workflow engine to accommodate changing business processes and diverse deployment environments.

22

Bessinger, Zachary. "An Automatic Framework for Embryonic Localization Using Edges in a Scale Space." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Localization of Drosophila embryos in images is a fundamental step in an automatic computational system for the exploration of gene-gene interactions in Drosophila. Contour extraction of embryonic images is challenging due to many variations in embryonic images. In this thesis work, we develop a localization framework based on the analysis of connected components of edge pixels in a scale space. We propose criteria to select optimal scales for embryonic localization. Furthermore, we propose a scale mapping strategy to compress the range of a scale space in order to improve the efficiency of the localization framework. The effectiveness of the proposed framework and of the scale mapping strategy is validated in our experiments.
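A rough sketch of the scale-space edge analysis described above, using scikit-image: detect Canny edges at several Gaussian scales, label connected components of edge pixels, and keep the bounding box of the dominant component as the localization. The synthetic ellipse image and the scale-selection rule (largest component) are illustrative assumptions, not the criteria proposed in the thesis.

```python
import numpy as np
from skimage.draw import ellipse
from skimage.feature import canny
from skimage.measure import label, regionprops

# Synthetic stand-in for an embryo image: a bright ellipse plus noise.
image = np.zeros((200, 300), dtype=float)
rr, cc = ellipse(100, 150, 60, 110)
image[rr, cc] = 1.0
image += 0.1 * np.random.default_rng(42).standard_normal(image.shape)

best_box, best_area = None, 0
for sigma in (1.0, 2.0, 4.0, 8.0):             # a small scale space
    edges = canny(image, sigma=sigma)
    labels = label(edges, connectivity=2)       # connected edge components
    for region in regionprops(labels):
        if region.area > best_area:             # crude "optimal scale" rule
            best_area, best_box = region.area, region.bbox

print("embryo bounding box (min_row, min_col, max_row, max_col):", best_box)
```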
23

Battat, Jonathan. "A fine-grained geospatial representation and framework for large-scale indoor environments." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 110-112).
This thesis describes a system and method for extending the current paradigm of geographic information systems (GIS) to support indoor environments. It introduces features and properties of indoor multi-building environments that do not exist in other geographic environments or are not characterized in existing geospatial models, and proposes a comprehensive representation for describing such spatial environments. Specifically, it presents enhanced notions of spatial containment and graph topology for indoor environments, and extends existing geometric and semantic constructs. Furthermore, it describes a framework to: automatically extract indoor spatial features from a corpus of semi-structured digital floor plans; populate the aforementioned indoor spatial representation with these features; store the spatial data in a descriptive yet extensible data model; and provide mechanisms for dynamically accessing, mutating, augmenting, and distributing the resulting large-scale dataset. Lastly, it showcases an array of applications, and proposes others, which utilize the representation and dataset to provide rich location-based services within indoor environments.
by Jonathan Battat.
M.Eng.
24

Evenson, Grey Rogers. "A Process-Comprehensive Simulation-Optimization Framework for Watershed Scale Wetland Restoration Planning." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406213250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Xie, Junfei. "Data-Driven Decision-Making Framework for Large-Scale Dynamical Systems under Uncertainty." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc862845/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Managing large-scale dynamical systems (e.g., transportation systems, complex information systems, and power networks) in real-time is very challenging considering their complicated system dynamics, intricate network interactions, large scale, and especially the existence of various uncertainties. To address this issue, intelligent techniques which can quickly design decision-making strategies that are robust to uncertainties are needed. This dissertation aims to conquer these challenges by exploring a data-driven decision-making framework, which leverages big-data techniques and scalable uncertainty evaluation approaches to quickly solve optimal control problems. In particular, the following techniques have been developed along this direction: 1) system modeling approaches that simplify the system analysis and design procedures for multiple applications; 2) effective simulation-based and analytical approaches to efficiently evaluate system performance and design control strategies under uncertainty; and 3) big-data techniques that allow some computations of control strategies to be completed offline. These techniques and tools for analysis, design, and control contribute to a wide range of applications including air traffic flow management, complex information systems, and airborne networks.
26

Bansal, Dheeraj. "An advanced real-time predictive maintenance framework for large scale machine systems." Thesis, Aston University, 2005. http://publications.aston.ac.uk/12235/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system to estimate the machine system parameters using the motion current signature. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed, which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model is performed to ascertain confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure due to the lack of knowledge regarding the performance of higher-order and nonlinear factors, such as backlash and compliance. The failure of the simulation model to determine the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated us to perform surrogate data testing for nonlinearity in the motion current signature. Results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. Outcomes of the experiment show that nonlinear noise reduction combined with the linear reverse algorithm offers precise machine system parameter estimation using the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
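As a hedged illustration of the nonlinear-mapping idea (current signature in, machine parameters out), the sketch below trains a small scikit-learn neural network on synthetic signatures. The signal model, the feature extraction, and the two "parameters" are invented placeholders; they are not the thesis's TuneLearn model or BJEST algorithm.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_runs, n_samples = 2000, 256
t = np.linspace(0.0, 1.0, n_samples)

# Hypothetical machine parameters to be estimated: inertia-like and friction-like.
params = np.column_stack([rng.uniform(0.5, 2.0, n_runs),
                          rng.uniform(0.01, 0.2, n_runs)])

# Synthetic "motion current signatures" influenced by those parameters.
signatures = np.array([
    np.sin(2 * np.pi * 5 * t) / p_inertia + p_friction * t
    + 0.02 * rng.standard_normal(n_samples)
    for p_inertia, p_friction in params
])

# Cheap summary features of each signature (placeholder feature extraction).
features = np.column_stack([signatures.mean(axis=1), signatures.std(axis=1),
                            signatures.max(axis=1), signatures[:, -1]])

X_tr, X_te, y_tr, y_te = train_test_split(features, params, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out runs:", round(model.score(X_te, y_te), 3))
```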
27

Sanz, Leon Paula. "Development of a computational and neuroinformatics framework for large-scale brain modelling." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM5036/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The central theme of this thesis is the development of both a generalised computational model for large-scale brain networks and the neuroinformatics platform that enables a systematic exploration and analysis of those models. In this thesis we describe the mathematical framework of the computational model at the core of the tool The Virtual Brain (TVB), designed to recreate collective whole-brain dynamics by virtualising brain structure and function, allowing simultaneous outputs of a number of experimental modalities such as electro- and magnetoencephalography (EEG, MEG) and functional Magnetic Resonance Imaging (fMRI). The implementation allows for a systematic exploration and manipulation of every underlying component of a large-scale brain network model (BNM), such as the neural mass model governing the local dynamics or the structural connectivity constraining the space-time structure of the network couplings. We also review previous studies related to brain network models and multimodal neuroimaging integration and detail how they are related to the general model presented in this work. Practical examples describing how to build a minimal *in silico* primate brain model are given. Finally, we explain how the resulting software tool, TVB, facilitates the collaboration between experimentalists and modellers by exposing both a comprehensive simulator for brain dynamics and an integrative framework for the management, analysis, and simulation of structural and functional data in an accessible, web-based interface.
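As an illustration of the kind of model TVB generalises, the sketch below integrates a toy brain network model: one generic local equation per region, coupled through a structural connectivity matrix. The equation, parameters, and connectome are illustrative placeholders, not TVB's actual neural mass models or API.

```python
import numpy as np

def simulate_bnm(weights, steps=2000, dt=0.01, k=0.1, rng=None):
    """Euler-Maruyama integration of a toy brain network model.

    dx_i/dt = -x_i + tanh(x_i) + k * sum_j W_ij * x_j + noise
    """
    rng = rng or np.random.default_rng(0)
    n = weights.shape[0]
    x = 0.1 * rng.standard_normal(n)
    trace = np.empty((steps, n))
    for t in range(steps):
        coupling = k * weights @ x                  # network input via structural connectivity
        dx = -x + np.tanh(x) + coupling
        x = x + dt * dx + np.sqrt(dt) * 0.01 * rng.standard_normal(n)
        trace[t] = x
    return trace

# Toy 4-region connectome (row i receives from column j).
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
activity = simulate_bnm(W)
print(activity.shape)  # (2000, 4): regional time series, e.g. input to an EEG/fMRI forward model
```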
28

Sadasivam, Rajani Shankar. "An architecture framework for composite services with process-personalization." Birmingham, Ala. : University of Alabama at Birmingham, 2007. https://www.mhsl.uab.edu/dt/2009r/sadasivam.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--University of Alabama at Birmingham, 2007.
Title from PDF title page (viewed Feb. 4, 2010). Additional advisors: Barrett R. Bryant, Chittoor V. Ramamoorthy, Jeffrey H. Kulick, Gary J. Grimes, Gregg L. Vaughn, Murat N. Tanju. Includes bibliographical references (p. 161-183).
29

Jeřábek, Jakub [Verfasser]. "Numerical framework for modeling of cementitious composites at the meso-scale / Jakub Jerabek." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2011. http://d-nb.info/1018218130/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kulikov, Viacheslav [Verfasser]. "A generalized framework for multi-scale simulation of complex crystallization processes / Viacheslav Kulikov." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2011. http://d-nb.info/1018202803/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Horn, Heidi Lynn. "A framework for integrative forest planning at the landscape scale in British Columbia." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq24158.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Eyiah, Alex Kojo. "Financing small and medium scale construction firms in Ghana : a framework for improvement." Thesis, University of Manchester, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556704.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Eydgahi, Hoda. "A quantitative framework For large-scale model estimation and discrimination In systems biology." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 103-111).
Using models to simulate and analyze biological networks requires principled approaches to parameter estimation and model discrimination. We use Bayesian and Monte Carlo methods to recover the full probability distributions of free parameters (initial protein concentrations and rate constants) for mass action models of receptor-mediated cell death. The width of the individual parameter distributions is largely determined by non-identifiability but co-variation among parameters, even those that are poorly determined, encodes essential information. Knowledge of joint parameter distributions makes it possible to compute the uncertainty of model-based predictions whereas ignoring it (e.g. by treating parameters as a simple list of values and variances) yields nonsensical predictions. Computing the Bayes factor from joint distributions yields the odds ratio (~20-fold) for competing "direct" and "indirect" apoptosis models having different numbers of parameters. The methods presented in this thesis were then extended to make predictions in eight apoptosis mini-models. Despite topological uncertainty, the simulated predictions can be used to drive experimental design. Our results illustrate how Bayesian approaches to model calibration and discrimination combined with single-cell data represent a generally useful and rigorous approach to discriminating between competing hypotheses in the face of parametric and topological uncertainty.
by Hoda Eydgahi.
Ph.D.
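A compressed sketch of the Bayesian workflow the abstract above describes: sample the joint posterior of the free parameters with a random-walk Metropolis sampler and propagate whole parameter vectors (not marginal means) through the model when making predictions, so that co-variation among parameters is retained. The toy decay model, priors, and noise level are stand-ins, not the apoptosis models or samplers of the thesis, and the Bayes-factor computation is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(params, t):
    """Toy stand-in for an ODE model output: exponential decay with amplitude a and rate k."""
    a, k = params
    return a * np.exp(-k * t)

t = np.linspace(0, 10, 20)
data = model((2.0, 0.5), t) + 0.1 * rng.standard_normal(len(t))     # synthetic "measurements"

def log_posterior(params):
    a, k = params
    if a <= 0 or k <= 0:
        return -np.inf                                 # flat priors restricted to positive values
    resid = data - model(params, t)
    return -0.5 * np.sum(resid ** 2) / 0.1 ** 2        # Gaussian likelihood with known noise

# Random-walk Metropolis over the joint (a, k) space.
current = np.array([1.0, 1.0])
lp = log_posterior(current)
samples = []
for _ in range(20000):
    proposal = current + 0.05 * rng.standard_normal(2)
    lp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis acceptance rule
        current, lp = proposal, lp_prop
    samples.append(current.copy())
posterior = np.array(samples[5000:])                    # discard burn-in

# Predictions push whole joint samples through the model, so parameter
# co-variation carries into the prediction uncertainty.
pred = np.array([model(p, t) for p in posterior[::100]])
print(posterior.mean(axis=0), pred.std(axis=0).max())
```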
34

Ahmadian, Ahmadabad Hossein. "Integrated Multi-Scale Modeling Framework for Simulating Failure Response of Fiber Reinforced Composites." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu15553373269295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Adams, David Kenton. "Application of the heat engine framework to modeling of large-scale atmospheric convection." Diss., The University of Arizona, 2003. http://hdl.handle.net/10150/280339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The heat engine framework is examined in terms of large-scale atmospheric convection in order to investigate several theoretical and modeling issues related to the steady-state convecting atmosphere. Applications of the heat engine framework to convective circulations are reviewed. It is shown that this framework provides fundamental insights into the nature of various atmospheric phenomena and estimates of their potential intensity. The framework is shown to be valid for both reversible and irreversible systems; the irreversible processes' sole effect is to reduce the thermodynamic efficiency of the convective heat engine. The heat engine framework is then employed to demonstrate that the two asymptotic limits of quasi-equilibrium theory are consistent. That is, the fractional area covered by convection goes to zero, σ → 0, as the ratio of the convective adjustment time scale to the large-scale time scale (e.g., the radiative time scale) goes to zero, t_ADJ/t_LS → 0, despite recent arguments to the contrary. Furthermore, the heat engine framework is utilized to develop a methodology for assessing the strength of irreversibilities in numerical models. Using the explicit energy budget, we derive thermodynamic efficiencies based on work and the heat budget for both open (e.g., the Hadley circulation) and closed (e.g., the general circulation) thermodynamic systems. In addition, the Carnot efficiency for closed systems is calculated to ascertain the maximum efficiency possible. Comparison of the work-based efficiency with the efficiency based on the heat budget provides a gauge for assessing how close to reversible model-generated circulations are. A battery of experiments is carried out with an idealized GCM. The usefulness of this method is demonstrated, and it is shown that an essentially reversible GCM is sensitive (i.e., becomes more irreversible) to changes in numerical parameters and horizontal resolution.
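The efficiencies compared in this abstract can be stated compactly. This is a hedged reconstruction in generic notation (Q_in, W, T_hot, T_cold are not necessarily the dissertation's symbols):

```latex
\[
  \eta_{\mathrm{work}} \;=\; \frac{W}{Q_{\mathrm{in}}},
  \qquad
  \eta_{\mathrm{Carnot}} \;=\; 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}},
  \qquad
  \eta_{\mathrm{work}} \;\le\; \eta_{\mathrm{Carnot}} .
\]
```

The gap between the work-based efficiency and the efficiency derived from the heat budget then serves as the proposed gauge of how close to reversible a model-generated circulation is, with the Carnot value bounding the closed-system case from above.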
36

Garba, Muhammad. "A scalable design framework for variability management in large-scale software product lines." Thesis, University of East London, 2016. http://roar.uel.ac.uk/5032/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Variability management is one of the major challenges in software product line adoption, since it needs to be efficiently managed at various levels of the software product line development process (e.g., requirement analysis, design, implementation, etc.). One of the main challenges within variability management is the handling and effective visualization of large-scale (industry-size) models, which, in many projects, can reach the order of thousands, along with the dependency relationships that exist among them. These have raised many concerns regarding the scalability of current variability management tools and techniques and their lack of industrial adoption. To address the scalability issues, this work employed a combination of quantitative and qualitative research methods to identify the reasons behind the limited scalability of existing variability management tools and techniques. In addition to producing a comprehensive catalogue of existing tools, the outcome from this stage helped understand the major limitations of existing tools. Based on the findings, a novel approach was created for managing variability that employed two main principles for supporting scalability. First, the separation-of-concerns principle was employed by creating multiple views of variability models to alleviate information overload. Second, hyperbolic trees were used to visualise models (compared to the Euclidean-space trees traditionally used). The result was an approach that can represent models encompassing hundreds of variability points and complex relationships. These concepts were demonstrated by implementing them in an existing variability management tool and using it to model a real-life product line with over a thousand variability points. Finally, in order to assess the work, an evaluation framework was designed based on various established usability assessment best practices and standards. The framework was then used with several case studies to benchmark the performance of this work against other existing tools.
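The hyperbolic-tree idea can be pictured with a very small sketch (illustrative only; not the tool described in the thesis): a radial tree layout whose radius is compressed with tanh so that arbitrarily deep subtrees still fit inside the unit disk, which is what gives hyperbolic visualisations their focus-plus-context property for large variability models. The tree and the compression constant are invented placeholders.

```python
import math

def hyperbolic_layout(tree, node="root", depth=0, angle0=0.0, angle1=2 * math.pi, pos=None):
    """Place each node at (x, y) in the unit disk; radius = tanh(0.5 * depth)."""
    pos = {} if pos is None else pos
    theta = 0.5 * (angle0 + angle1)
    r = math.tanh(0.5 * depth)
    pos[node] = (r * math.cos(theta), r * math.sin(theta))
    children = tree.get(node, [])
    if children:
        span = (angle1 - angle0) / len(children)       # share the angular sector among children
        for i, child in enumerate(children):
            hyperbolic_layout(tree, child, depth + 1,
                              angle0 + i * span, angle0 + (i + 1) * span, pos)
    return pos

# Tiny feature-model-like tree: variability points as children of their parent feature.
tree = {"root": ["ui", "db", "net"], "ui": ["theme", "lang"], "db": ["sql", "nosql"]}
print(hyperbolic_layout(tree))
```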
37

Fuentes, Magdalena. "Multi-scale computational rhythm analysis : a framework for sections, downbeats, beats, and microtiming." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Computational rhythm analysis deals with extracting and processing meaningful rhythmical information from musical audio. It proves to be a highly complex task, since dealing with real audio recordings requires the ability to handle its acoustic and semantic complexity at multiple levels of representation. Existing methods for rhythmic analysis typically focus on one of those levels, failing to exploit music’s rich structure and compromising the musical consistency of automatic estimations. In this work, we propose novel approaches for leveraging multi-scale information for computational rhythm analysis. Our models account for interrelated dependencies that musical audio naturally conveys, allowing the interplay between different time scales and accounting for music coherence across them. In particular, we conduct a systematic analysis of downbeat tracking systems, leading to convolutional-recurrent architectures that exploit short and long term acoustic modeling; we introduce a skip-chain conditional random field model for downbeat tracking designed to take advantage of music structure information (i.e. music sections repetitions) in a unified framework; and we propose a language model for joint tracking of beats and micro-timing in Afro-Latin American music. Our methods are systematically evaluated on a diverse group of datasets, ranging from Western music to more culturally specific genres, and compared to state-of-the-art systems and simpler variations. The overall results show that our models for downbeat tracking perform on par with the state of the art, while being more musically consistent. Moreover, our model for the joint estimation of beats and microtiming takes further steps towards more interpretable systems. The methods presented here offer novel and more holistic alternatives for computational rhythm analysis, towards a more comprehensive automatic analysis of music
38

Kazemi, Alamouti Hamideh. "Development of a Hybrid Conceptual-Statistical Framework to Evaluate Catchment-Scale Water Balance." Thesis, Curtin University, 2021. http://hdl.handle.net/20.500.11937/84108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The study introduces a novel framework that combines conceptual, hydrological, and statistical models to study water resources on short-term and long-term scales. It improves the performance of a common conceptual model and its extended version in estimating the water balance components of the case studies. Moreover, a conceptual-statistical model is developed to further increase the performance of the original models. The proposed method is a useful approach for other catchments, particularly where limited data are available.
39

Materano, Antonella. "The building blocks of social entrepreneurship: empirical model and framework." Master's thesis, NSBE - UNL, 2013. http://hdl.handle.net/10362/11631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A Work Project, presented as part of the requirements for the Award of a Masters Degree in Management from the NOVA – School of Business and Economics
The purpose of this exploratory study is to identify a common path followed by social entrepreneurs, so as to build a comprehensive empirical model. The methodology used is qualitative interviews; in particular, semi-structured questions were addressed to a sample of ten social entrepreneurs, whose answers were transcribed and analysed. The main result is a five-stage pattern followed by social entrepreneurs: each stage is first described and then linked to specific challenges that social entrepreneurs face and assets they need during the process. It is fundamental to highlight that some of these stages and challenges are peculiar to social entrepreneurship, differing from regular entrepreneurship. The key conclusion is that it is possible to identify a common pattern that could guide current and future social entrepreneurs. Furthermore, this research paper emphasises best practices and lessons learned from current social entrepreneurs, leaving a powerful legacy to those interested in making a real change in society.
40

Castro, Rui Bayer. "An object-oriented framework for large-scale discrete event simulation modelling : selective external modularity." Thesis, Lancaster University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Loschko, Matthias [Verfasser], and Olaf A. [Akademischer Betreuer] Cirpka. "Stochastic Framework for Catchment-Scale Reactive Transport Simulations / Matthias Loschko ; Betreuer: Olaf A. Cirpka." Tübingen : Universitätsbibliothek Tübingen, 2018. http://d-nb.info/1171795009/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Rodríguez, Montes Juan David. "Devising a framework for the development of the medium scale coal sector in Colombia." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/46673.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2009.
Includes bibliographical references (p. 117-119).
The purpose of my study is to devise a framework that encompasses the strategies put forth by the Colombian government for advancing the productivity and competitiveness of the country, with an emphasis on finding development alternatives for the medium-scale coal sector, tackling its shortcomings, and improving the sector's situation. The framework integrates three dimensions of analysis: the Colombian national policies for productivity and competitiveness, coal sector development, and environmental conservation. I lay out several coal-development alternatives and evaluate their economic and environmental performance using input-output analysis, and multi-criteria decision analysis for alternative selection. I also model different scenarios prioritizing each of the dimensions of analysis. The results from the scenario analysis show that coal gasification best suits the three dimensions of analysis, providing the highest economic benefits with the least environmental impacts of the proposed development alternatives. In addition, I use Geographic Information Systems to conduct location-suitability analysis for the coal fields in the interior of Colombia. Results of the suitability analysis identify the coal fields in the Córdoba and Cundinamarca Provinces as the most suitable regions for coal-gasification development.
by Juan David Rodríguez Montes.
S.M.
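The alternative-selection step described in the abstract above can be illustrated with a minimal weighted-sum multi-criteria sketch (the criteria, scores, and weights are invented placeholders, not the study's data): scores are normalised per criterion and combined with weights encoding which dimension — competitiveness, sector development, or environment — a scenario prioritises.

```python
import numpy as np

# Rows: development alternatives; columns: (economic benefit, jobs, environmental impact).
alternatives = ["export as-is", "coal washing", "coal gasification"]
scores = np.array([[3.0, 2.0, 4.0],
                   [4.0, 3.0, 3.0],
                   [5.0, 4.0, 2.0]])              # invented placeholder scores
benefit = np.array([True, True, False])            # environmental impact is a cost criterion

# Min-max normalisation; cost criteria are inverted so "higher is better" everywhere.
norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

weights = np.array([0.4, 0.3, 0.3])                # a scenario prioritising the economic dimension
ranking = norm @ weights
best = alternatives[int(np.argmax(ranking))]
print(dict(zip(alternatives, ranking.round(2))), "->", best)
```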
43

Srivastava, Gagan. "A Macro-scale, Tribological Modeling Framework for Simulating Multiple Lubrication Regimes and Engineering Applications." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Tribology is the science of interacting surfaces and the associated study of friction, lubrication, and wear. High friction and wear cause energy loss and deterioration of interacting surfaces. Lubrication using hydrodynamic liquids is the primary mechanism to reduce friction and wear. Unfortunately, not all applications can be ideally lubricated to operate in a low-friction zone. In the majority of cases, the relative velocities between the moving components are either too low, or the transferred force is too high, for them to be perfectly lubricated with minimal solid-to-solid contact. In such conditions they operate in the boundary or mixed lubrication regime, where there is significant solid-solid contact. Examples of such conditions are commonplace in our daily lives: from the food in our mouth to floating hard disk drive read/write heads, and from artificial hip joints to a polishing process, all operate in the mixed lubrication regime. In this thesis, a generalized numerical modeling framework has been developed that can be applied to simulate the operation of a large variety of tribological applications operating in any of the three lubrication regimes. The framework, called Particle Augmented Mixed Lubrication - Plus (PAML+), accounts for all the major mechanical interactions encountered in any tribosystem. It involves coupled modules for solid mechanics and fluid mechanics. Depending on the application, additional fidelity has been added in the form of modules relevant to the physical interactions unique to the application. For example, modeling of the chemical mechanical polishing process requires treatment of particle dynamics and wear to be able to generate predictions of meaningful quantities such as the material removal rate. Similarly, modeling of artificial hip joints requires additional treatment of mass transport and wear to simulate contamination with debris particles. The fluid mechanics have been modeled through the thin-film approximation of the Navier-Stokes equations, known as the Reynolds equation. The solid mechanics have been modeled using analytical or semi-analytical techniques. Statistical treatments have been applied to model particle dynamics wherever required, to avoid the huge computational requirements associated with deterministic methods such as the discrete element method. To demonstrate the strengths and general applicability of the modeling approach, four major tribological applications have been modeled in order to broadly impact key industries: (i) pin-on-disk tribosystems, (ii) chemical mechanical polishing (CMP), (iii) artificial hip joints, and (iv) mechanical seals. First, the model was employed to simulate pin-on-disk interfaces to evaluate different surface texture designs; this also served as a platform to test the model's ability to capture, and seamlessly traverse through, different lubrication regimes. The model predicted that an intermediate texture dimension of 200mm resulted in 80% less wear than a larger texture of 200mm, and up to 90% less wear than an untextured sample. Second, the framework was employed to study the CMP process; overall, the model was found to be at least 50% more accurate than the previous-generation model. Third, the model was tailored to study artificial joints. Wear predictions from the model remained within 5% error when compared against experiments across different "head" sizes. It was discovered that textured joints can reduce the concentration of the wear debris by at least 2.5% per cycle; for an expected lifetime of 12 years, that translates to a lifetime enhancement of 3 months. Lastly, the model was employed to study the performance of mechanical seals. Even though the model was much more computationally efficient, it remained within 5% of much more detailed and computationally expensive FEA models. The model also predicted that the seals allow the highest leakage at shaft speeds of about 950 RPM.
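For reference, the thin-film approximation mentioned in this abstract is usually written as the Reynolds equation; a standard incompressible, isoviscous form (notation mine, not necessarily the thesis's) is

```latex
\[
  \frac{\partial}{\partial x}\!\left(\frac{h^{3}}{12\,\mu}\,\frac{\partial p}{\partial x}\right)
  + \frac{\partial}{\partial y}\!\left(\frac{h^{3}}{12\,\mu}\,\frac{\partial p}{\partial y}\right)
  \;=\;
  \frac{U}{2}\,\frac{\partial h}{\partial x} + \frac{\partial h}{\partial t},
\]
```

where h is the film thickness, p the film pressure, μ the lubricant viscosity, and U the relative sliding speed of the surfaces.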
44

Liang, Bowen. "Integrated Multi-Scale Modeling Framework for Simulating Failure Response of Materials with Complex Microstructures." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1542233231302831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Xi, Hui. "A DDDAS-Based Multi-Scale Framework for Pedestrian Behavior Modeling and Interactions with Drivers." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/306361.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A multi-scale agent-based simulation framework is first proposed to analyze pedestrian delays at signalized crosswalks in large urban areas under different conditions. The aggregated-level model runs under normal conditions, where each crosswalk is represented as an agent. Pedestrian counts collected near crosswalks are utilized to derive the binary choice probability from a utility maximization model. The derived probability function is utilized, based on the extended Adam's model, to estimate an average pedestrian delay with the corresponding traffic flow rate and traffic light control at each crosswalk. When abnormality is detected, the detailed-level model, with each pedestrian as an agent, runs in the affected subareas. Pedestrian decision-making under abnormal conditions, physical movement, and crowd congestion are considered in the detailed-level model. The detailed-level model contains two sub-level models: the tactical sub-level model for pedestrian route choice and the operational sub-level model for pedestrian physical interactions. The tactical sub-level model is based on Extended Decision Field Theory (EDFT) to represent the psychological preferences of pedestrians with respect to different route choice options during their deliberation process after evaluating current surroundings. At the operational sub-level, physical interactions among pedestrians and the consequent congestion are represented using a Cellular Automata model, in which pedestrians perform a biased random walk, without backward steps, towards the destination given by the tactical sub-level model. In addition, a Dynamic-Data-Driven Application Systems (DDDAS) architecture has been integrated with the proposed multi-scale simulation framework for abnormality detection and appropriate fidelity selection (between the aggregate-level and the detailed-level models) during the simulation execution process. Various experiments have been conducted under varying conditions with a scenario of the Chicago Loop area to demonstrate the advantage of the proposed framework in balancing computational efficiency and model accuracy. In addition to signalized intersections, pedestrian crossing behavior under unsignalized conditions, which has been recognized as a main cause of pedestrian-vehicle crashes, has also been analyzed in this dissertation. To this end, an agent-based model is proposed to mimic pedestrian crossing behavior together with drivers' yielding behavior in the midblock crossing scenario. In particular, pedestrian-vehicle interaction is first modeled as a two-player Pareto game in which strategies are evaluated from two aspects, delay and risk, for each agent (i.e., pedestrian and driver). The evaluations are then used by Extended Decision Field Theory to mimic the decision making of each agent based on his/her aggressiveness and physical capabilities. A base car-following algorithm from NGSIM is employed to represent vehicles' physical movement and the execution of drivers' decisions. A midblock segment of a typical arterial in the Tucson area is adopted to illustrate the proposed model, and the model for the considered scenario has been implemented in AnyLogic® simulation software. Using the constructed simulation, experiments have been conducted to analyze different behaviors of pedestrians and drivers and their mutual impact on each other, i.e., the average pedestrian delay resulting from different crossing behaviors (aggressive vs. conservative), and the average braking distance, which is affected by driving aggressiveness and drivers' awareness of pedestrians. The results are believed to be useful for improving pedestrian safety during midblock crossing. To the best of our knowledge, the proposed multi-scale modeling framework for pedestrians and drivers is one of the first efforts to estimate pedestrian delays in an urban area with adaptive resolution based on demand and accuracy requirements, as well as to address pedestrian-vehicle interactions under unsignalized conditions.
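Decision Field Theory, on which the route-choice and crossing decisions above are based, can be summarised in a few lines. This is a generic, simplified textbook-style sketch, not the dissertation's calibrated EDFT model; the options, attributes, and values are invented placeholders.

```python
import numpy as np

def dft_choice(M, steps=200, s=0.95, rng=None):
    """Simplified Decision Field Theory accumulation.

    M[i, j]: subjective value of option i on attribute j (e.g. delay, risk).
    At each step attention falls on one attribute at random; the preference
    state accumulates as P <- s * P + C @ M @ w until the deliberation horizon.
    """
    rng = rng or np.random.default_rng(0)
    n_opt, n_attr = M.shape
    C = np.eye(n_opt) - np.ones((n_opt, n_opt)) / n_opt   # contrast each option against the average
    P = np.zeros(n_opt)
    for _ in range(steps):
        w = np.zeros(n_attr)
        w[rng.integers(n_attr)] = 1.0                      # stochastic attention switching
        P = s * P + C @ M @ w
    return int(np.argmax(P))

# Options: cross now vs. wait; attributes: (saved delay, safety) — placeholder values.
M = np.array([[0.8, 0.2],    # cross now: saves time, riskier
              [0.3, 0.9]])   # wait: slower, safer
print(["cross now", "wait"][dft_choice(M)])
```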
46

Fankhänel, Johannes Andreas [Verfasser]. "A multi-scale framework for nanocomposites including interphase and agglomeration effects / Johannes Andreas Fankhänel." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://d-nb.info/121699515X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Aguzzi, Gianluca. "Sviluppo di un front-end di simulazione per applicazioni aggregate nel framework Scafi." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16824/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The thesis is organised into six chapters. The first introduces the preliminary studies carried out during an internship at the University of Bologna, which are useful for understanding the choices made in the later phases of the work. The second chapter describes the analysis of the model to be built, including the list of requirements the front-end must satisfy; in this phase in particular there was a constant exchange of information with Prof. Mirko Viroli and Dr. Roberto Casadei, acting as the project 'clients'. The third chapter describes the architectural design phase, explaining how, in architectural terms, the software fulfils what was specified during analysis; it presents the overall structure of the front-end and motivates the choices made at this stage. The fourth chapter covers the detailed design choices, in particular how the front-end can be interfaced with the ScaFi framework and how an aggregate simulation can be started and described. The fifth chapter presents the implementation choices made to realise the software, together with some screenshots of the results obtained. Finally, the sixth chapter gives an overview of the final result, showing the structure of the graphical user interface together with an analysis of the performance measured on some of the simulations that can be run on the front-end.
48

Boåsen, Magnus. "Modeling framework for ageing of low alloy steel." Licentiate thesis, KTH, Hållfasthetslära (Inst.), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ageing of low alloy steel in nuclear applications commonly takes the form of hardening and embrittlement of the material. This is due to the evolution of the microstructure during irradiation and under purely thermal conditions, either in combination or separately. Irradiation introduces evenly distributed solute clusters, while thermal ageing has been shown to yield a more inhomogeneous distribution. These clusters affect the dislocation motion within the material and result in hardening and, in more severe cases of ageing, also a decreased work-hardening slope due to plastic strain localization into bands/channels. Embrittlement corresponds to decreased fracture toughness due to microstructural changes resulting from ageing. The thesis presents a possible framework for modeling of ageing effects in low alloy steels. In Paper I, a strain gradient plasticity framework is applied in order to capture length scale effects. The constitutive length scale is assumed to be related to the dislocation mean free path and the changes this undergoes during plastic deformation. Several evolution laws for the length scale were developed and implemented in a FEM code considering 2D plane strain. This was used to solve a test problem of pure bending in order to investigate the effects of the length scale evolution. All length scale evolution laws considered in this study result in a decreasing length scale; this leads to a loss of non-locality, which causes an overall softening in cases where the strain gradient dominates the solution. The results are in tentative agreement with the phenomena of strain localization occurring in highly irradiated materials. In Paper II, the scalar stress measure for cleavage fracture is developed and generalized, here called the effective normal stress measure. This is used in a non-local weakest link model which is applied to two datasets from the literature in order to study the effects of the effective normal stress measure, as well as to new experiments considering four-point bending of specimens containing a semi-elliptical surface crack. The model is shown to reproduce the failure probability of all considered datasets, i.e., it is well capable of transferring toughness information between different geometries.
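The non-local weakest-link model referred to here typically takes a Beremin-type form; the statement below is a generic reconstruction with the thesis's effective normal stress measure substituted for the usual maximum principal stress, and the notation is mine rather than the licentiate's:

```latex
\[
  P_f \;=\; 1 - \exp\!\left[\,-\frac{1}{V_0}\int_{\Omega}
      \left(\frac{\sigma_{\mathrm{eff}}}{\sigma_u}\right)^{m}\,\mathrm{d}V\right],
\]
```

where σ_eff is the effective normal stress measure, Ω the fracture process (plastically deformed) volume, and V_0, σ_u, and m are calibration parameters; transferability between geometries amounts to the same parameter set reproducing P_f for different specimens.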


49

Thoumazeau, Alexis. "A new integrative and operational framework to assess the impact of land management on soil quality : From a field scale to a global scale indicator to be integrated within the Life Cycle Assessment framework." Thesis, Montpellier, SupAgro, 2018. http://www.theses.fr/2018NSAM0030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Soils are currently threatened by many human activities that jeopardize soil functioning and its ability to provide ecosystem services vital for human well-being. In order to assess human impacts and to propose new management practices to protect soils, it is necessary to implement assessments of soil quality. Soil quality was defined by Karlen et al. (1997) as "the capacity of soil to function […]". However, most studies in the literature focus on the assessment of soil properties and intrinsic states rather than on soil functioning and the multiple interactions within this complex system. This study proposes a new integrative approach to soil quality based on direct assessment of the functions carried out by the soil biological assemblages, namely Biofunctool®. Biofunctool® allows for assessing three soil functions (carbon transformation, nutrient cycling, structure maintenance) based on twelve functional, in-field, and low-tech indicators. Biofunctool® was applied over several case studies in Thailand to assess the impact of various land management practices on soil quality. The results pinpointed the impact on soil of the conversion from an annual cropping system to a perennial one; they also highlighted the evolution of soil quality over perennial tree stands and the impact of a cover crop in rubber tree systems. The local assessment of integrative soil quality was then scaled up, to be integrated within the Life Cycle Assessment framework through a predictive model approach. The model developed helps meet the current demand for integrative indicators of soil quality adapted to global-scale environmental frameworks.
50

Banda, Juan. "Framework for creating large-scale content-based image retrieval system (CBIR) for solar data analysis." Diss., Montana State University, 2011. http://etd.lib.montana.edu/etd/2011/banda/BandaJ0511.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the launch of NASA's Solar Dynamics Observatory mission, a whole new age of high-quality solar image analysis was started. With the generation of over 1.5 Terabytes of solar images per day, at ten times the resolution of high-definition television, the task of analyzing them by hand is simply impossible for scientists. The storage of all these images becomes a second problem of importance: since there is only one full copy of this repository in the world, an alternative, compressed representation of these images is of vital importance. Current automated image processing approaches in solar physics are entirely dedicated to analyzing individual types of solar phenomena and do not allow researchers to conveniently query the whole Solar Dynamics Observatory repository for similar images of interest. We developed a Content-Based Image Retrieval system that can automatically analyze and retrieve multiple different types of solar phenomena; this will fundamentally change the way researchers look for solar images, much as Google changed the way people search the internet. During the development of our system, we created a framework that allows researchers to tweak and develop their own content-based image retrieval systems for different domain-specific applications with great ease and a deeper understanding of the representation of domain-specific image data. This framework incorporates many different aspects of image processing and information retrieval, such as: image parameter extraction for a reduced representation of solar images, image parameter evaluation for validation of the image parameters used, evaluation of multiple dissimilarity measures for more accurate data analysis, analysis of dimensionality reduction methods to help reduce storage and processing costs, and indexing and retrieval algorithms for faster and more efficient search. The capabilities of this framework have never before been available together as an open-source and comprehensive software package. With these unique capabilities, we achieved a deeper knowledge of our solar data and validated each step in the creation of our solar content-based image retrieval system with an exhaustive evaluation. The contributions of our framework will allow researchers to tweak and develop new content-based image retrieval systems for other domains (e.g., astronomy, the medical field) and will allow the migration of astrophysics research from the individual analysis of solar phenomena to larger-scale data analyses.
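The retrieval pipeline sketched in this abstract — extract compact image parameters, compare them with a dissimilarity measure, rank — reduces to a few lines in its simplest form. The grid-based statistics and the Euclidean distance below are generic placeholder choices, not the parameters or dissimilarity measures evaluated in the dissertation.

```python
import numpy as np

def image_parameters(img, grid=4):
    """Reduced representation: per-cell mean and standard deviation on a grid x grid partition."""
    h, w = img.shape
    cells = [img[i * h // grid:(i + 1) * h // grid, j * w // grid:(j + 1) * w // grid]
             for i in range(grid) for j in range(grid)]
    return np.array([stat for c in cells for stat in (c.mean(), c.std())])

def retrieve(query, archive, k=3):
    """Rank archive images by Euclidean dissimilarity of their parameter vectors."""
    q = image_parameters(query)
    dists = [(name, float(np.linalg.norm(q - image_parameters(img))))
             for name, img in archive.items()]
    return sorted(dists, key=lambda d: d[1])[:k]

rng = np.random.default_rng(0)
archive = {f"img_{i}": rng.random((64, 64)) for i in range(10)}   # stand-ins for solar images
query = archive["img_3"] + 0.01 * rng.random((64, 64))            # slightly perturbed copy as query
print(retrieve(query, archive))                                    # img_3 should rank first
```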
