Follow this link to see other types of publications on the topic: Complex and heterogeneous dynamic system.

Theses / dissertations on the topic "Complex and heterogeneous dynamic system"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the top 50 theses / dissertations for research on the topic "Complex and heterogeneous dynamic system".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

SANSONE, ALESSANDRO. "Applications of Nonlinear Dynamics and Complex Systems Theory to Finance". Doctoral thesis, La Sapienza, 2007. http://hdl.handle.net/11573/917404.

Full text of the source
ABNT, Harvard, Vancouver, APA and other styles
2

Mark, Christoph [Verfasser], Ben [Akademischer Betreuer] Fabry, Ben [Gutachter] Fabry, Rainer [Gutachter] Böckmann and Josef [Gutachter] Käs. "Heterogeneous stochastic processes in complex dynamic systems / Christoph Mark ; Gutachter: Ben Fabry, Rainer Böckmann, Josef Käs ; Betreuer: Ben Fabry". Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2018. http://d-nb.info/1175206369/34.

Full text of the source
ABNT, Harvard, Vancouver, APA and other styles
3

CROCIANI, LUCA. "Complex Heterogeneous Crowding Phenomena: Multi-Agent Modeling, Simulation, Empirical Evidences and the Case of Elderly Pedestrians". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2016. http://hdl.handle.net/10281/102390.

Full text of the source
Abstract:
The simulation of complex systems is nowadays one of the major applications of the multi-agent paradigm and is widely used in many fields. In the current scenario of global urbanization, research on the development of intelligent transportation systems has gained much interest in recent decades and has produced significant efforts in the simulation field as well. Collective forms of transport are among the most sensible solutions for mitigating traffic congestion and pollution, and they imply a growing importance of pedestrians in planning and design activities. The thesis focuses on these activities, proposing discrete approaches for the microscopic simulation of pedestrian traffic with several innovative components. The literature on pedestrian modeling, presented in Chapter 2, can be classified with a three-layer scheme that defines the behavioral levels of pedestrian choices. Most of the literature is aimed at simulating the so-called "operational" level, which describes, in the case of pedestrians, the pure walking behavior from sources to destinations. At this bottom level, in fact, the consistent empirical knowledge of the physical properties of pedestrian traffic allows sufficient validation of the simulation models, which are then usable for a dynamical analysis of pedestrian flows in arbitrary environments. Nonetheless, considering only the operational level omits the route-choice activity of pedestrians, which resides at the tactical level and becomes fundamental if the simulation scenario represents a complex environment with different possible intermediate points (e.g., ticket windows). Available knowledge on pedestrian wayfinding is scarce, but designing simulation models for this activity is a way of defining requirements for further studies on the topic. Chapter 3 discusses the first microscopic model proposed for pedestrian traffic.
The model proposes a hybrid agent architecture with two synchronized and communicating components to deal with both the tactical and the operational level of pedestrian behavior. At the lowest level the model extends the well-known floor field model, proposing innovative extensions such as the consideration of pedestrian groups and an approach to manage different walking speeds. The component dedicated to the route choice of the agents describes an adaptive behavior aimed at minimizing the individual traveling time towards the final destination, considering the static configuration of the environment and the perceived state of the system. Chapter 4 presents the results of the simulation model, firstly with quantitative results on the operational level, which are compared with state-of-the-art empirical data. Experiments on the tactical level explore the overall behavior of the simulated pedestrians in the presence of different paths towards a destination. Chapter 5 analyzes a slightly different microscopic model defined for the walking behavior of pedestrians, whose aim is integration with the MATSim simulation system, which is based on a queue model and mainly used for vehicular traffic simulation in metropolitan areas. The definition of this model finds its purpose in the route-choice approach inherited from MATSim, which follows an iterative learning algorithm implying numerous simulations of the same scenario. Starting from a systematic choice of the shortest path at the first iteration, the agents adjust their routes for the following iteration based on the traveling time experienced at the current iteration. The simulation campaign can converge to an equilibrium of different types (Nash equilibrium or system optimum) according to the perceived travel time.
This alternative microscopic model for pedestrian traffic is very simple and optimized, yet it can be calibrated to fit the empirical data used for validation, as shown in the results section.
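The day-to-day route-learning loop described in this abstract (agents re-planning across repeated simulations of the same scenario, based on the travel times they experienced) can be sketched as a toy example. The two-route network, the linear congestion costs, and the re-planning rule below are illustrative assumptions, not the thesis model:

```python
import random

# Toy iterative route learning: agents repeatedly simulate the same scenario
# and a fraction of those on the slower route switch to the faster one, so
# experienced travel times on the two routes gradually equalize.

def travel_time(route, load):
    # Hypothetical linear congestion costs: route 0 is short but narrow,
    # route 1 is longer but less congestible.
    return 10 + 0.1 * load if route == 0 else 15 + 0.05 * load

def simulate(n_agents=1000, iterations=50, gain=0.2, seed=1):
    rng = random.Random(seed)
    choice = [0] * n_agents      # everyone starts on the shortest free-flow path
    for _ in range(iterations):
        loads = [choice.count(0), choice.count(1)]
        times = [travel_time(0, loads[0]), travel_time(1, loads[1])]
        worse = 0 if times[0] > times[1] else 1
        better = 1 - worse
        # re-planning probability shrinks as the time gap closes
        p = gain * (times[worse] - times[better]) / times[worse]
        for i in range(n_agents):
            if choice[i] == worse and rng.random() < p:
                choice[i] = better
    loads = [choice.count(0), choice.count(1)]
    return travel_time(0, loads[0]), travel_time(1, loads[1])

t0, t1 = simulate()   # travel times on the two routes roughly equalize
```

After a few dozen iterations the experienced travel times on the two routes are nearly equal, the Nash-like equilibrium the abstract refers to.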
ABNT, Harvard, Vancouver, APA and other styles
4

Faller, Daniel. "Analysis and dynamic modelling of complex systems". [S.l. : s.n.], 2003. http://www.freidok.uni-freiburg.de/volltexte/777.

Full text of the source
ABNT, Harvard, Vancouver, APA and other styles
5

Vaneman, Warren Kenneth. "Evaluating System Performance in a Complex and Dynamic Environment". Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/30043.

Full text of the source
Abstract:
Realistically, organizational and/or system efficiency performance is dynamic, non-linear, and a function of multiple interactions in production. However, in the efficiency literature, system performance is frequently evaluated considering linear combinations of the input/output variables, without explicitly taking into account the interactions and feedback mechanisms that explain the causes of efficiency behavior, the dynamic nature of production, and non-linear combinations of the input/output variables. Consequently, policy decisions based on these results may be sub-optimized because the non-linear relationships among variables, causal relationships, and feedback mechanisms are ignored. This research takes the initial steps of evaluating system efficiency performance in a dynamic environment, by relating the factors that affect system efficiency performance to the policies that govern it. First, this research extends the concepts of the static production axioms into a dynamic realm, where inputs are not instantaneously converted into outputs. The relationships of these new dynamic production axioms to the basic behaviors associated with system dynamics structures are explored. Second, this research introduces a methodological approach that combines system dynamics modeling with the measurement of productive efficiency. System dynamics is a modeling paradigm that evaluates system policies by exploring the causal relationships of the important elements within the system. This paradigm is coupled with the fundamental assumptions of production theory in order to evaluate the productive efficiency of a production system operating within a dynamic and non-linear environment. As a result, a subsystem within the system dynamics model is introduced that computes efficiency scores based on the fundamental notions of productive efficiency.
The framework's ability to combine prescriptive and descriptive modeling characteristics, as well as dynamic and combinatorial complexity, can potentially have a greater impact on policy decisions and how they affect system efficiency performance. Finally, the utility of these concepts is demonstrated in an implementation case study. This methodology generates a prescriptive dynamical production frontier which defines the optimal production resources required to satisfy system requirements. Additionally, the dynamical production frontier allows for comparisons between options during a transient period, insight into possible unintended consequences, and the ability to forecast optimal times for introducing system or process improvements.
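The idea of embedding an efficiency-scoring subsystem inside a system dynamics model, where inputs are not instantaneously converted into outputs, can be illustrated with a minimal stock-and-flow sketch. The single-input linear frontier and all parameter values are illustrative assumptions, not the dissertation's model:

```python
# Minimal stock-and-flow production model with an efficiency subsystem.
# Work started accumulates in a backlog and is completed with a first-order
# delay, so realized output lags the static production frontier.

def run(steps=24, labor=10.0, frontier_productivity=5.0, delay=4.0):
    backlog = 0.0                                    # stock: work in process
    frontier_output = labor * frontier_productivity  # static frontier for this input
    scores = []
    for _ in range(steps):
        outflow = backlog / delay                    # completions this period
        inflow = frontier_output                     # work started this period
        backlog += inflow - outflow
        # dynamic efficiency score: realized output vs. the static frontier
        scores.append(outflow / frontier_output)
    return scores

scores = run()   # efficiency climbs toward 1 as the system reaches steady state
```

The score starts at zero and rises toward one as the backlog fills, which is the dynamic-axiom point: a conventional static evaluation would mislabel the transient as pure inefficiency.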
Ph. D.
ABNT, Harvard, Vancouver, APA and other styles
6

Gupta, Amit. "Model reduction and simulation of complex dynamic systems /". Online version of thesis, 1990. http://hdl.handle.net/1850/11265.

Full text of the source
ABNT, Harvard, Vancouver, APA and other styles
7

Peterson, Thomas. "Dynamic Allocation for Embedded Heterogeneous Memory : An Empirical Study". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223904.

Full text of the source
Abstract:
Embedded systems are omnipresent and contribute to our lives in many ways by instantiating functionality in larger systems. To operate, embedded systems require well-functioning software and hardware, as well as an interface between the two. The hardware and software of these systems are under constant change as new technologies arise. One change these systems are currently undergoing is experimentation with different memory-management techniques for RAM, as novel non-volatile RAM (NVRAM) technologies have been invented. These NVRAM technologies often come with asymmetric read and write latencies and thus motivate designing memory consisting of multiple NVRAMs. As a consequence of these properties and memory designs, there is a need for memory management that minimizes latencies. This thesis addresses the problem of memory allocation on heterogeneous memory by conducting an empirical study. The first part of the study examines free-list, bitmap and buddy-system based allocation techniques. The free-list allocation technique is concluded to be superior. Thereafter, multi-bank memory architectures are designed and memory-bank selection strategies are established. These strategies are based on size thresholds as well as memory-bank occupancies. The evaluation of these strategies did not yield any major conclusions but showed that some strategies were more appropriate for some application behaviors.
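A minimal sketch of the free-list technique combined with size-threshold bank selection on a two-bank heterogeneous memory might look as follows. The bank names, sizes, and threshold value are illustrative assumptions, not the architectures evaluated in the thesis:

```python
# First-fit free-list allocator over two hypothetical NVRAM banks with
# asymmetric latencies, plus a size-threshold bank-selection strategy.

class Bank:
    def __init__(self, name, size):
        self.name = name
        self.free = [(0, size)]            # sorted (offset, length) holes

    def alloc(self, n):
        # first-fit scan of the free list
        for i, (off, length) in enumerate(self.free):
            if length >= n:
                if length == n:
                    del self.free[i]
                else:
                    self.free[i] = (off + n, length - n)
                return off
        return None                        # no hole large enough

    def release(self, off, n):
        # reinsert the hole and coalesce adjacent neighbours
        self.free.append((off, n))
        self.free.sort()
        merged = [self.free[0]]
        for off, length in self.free[1:]:
            last_off, last_len = merged[-1]
            if last_off + last_len == off:
                merged[-1] = (last_off, last_len + length)
            else:
                merged.append((off, length))
        self.free = merged

fast = Bank("fast-nvram", 1024)            # low latency, small capacity
slow = Bank("slow-nvram", 8192)            # higher latency, large capacity

def allocate(n, threshold=64):
    # small blocks prefer the low-latency bank, large blocks the big one;
    # fall back to the other bank when the preferred one is full
    order = (fast, slow) if n <= threshold else (slow, fast)
    for bank in order:
        off = bank.alloc(n)
        if off is not None:
            return bank.name, off
    raise MemoryError(f"no bank can hold {n} bytes")
```

Returning the bank name alongside the offset keeps the sketch observable; a real embedded allocator would hand back an address and track block sizes for freeing.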
ABNT, Harvard, Vancouver, APA and other styles
8

Balchanos, Michael Gregory. "A probabilistic technique for the assessment of complex dynamic system resilience". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43730.

Full text of the source
Abstract:
In the presence of operational uncertainty, one of the greatest challenges in systems engineering is to ensure system effectiveness, mission capability and survivability. Safety management is shifting from passive, reactive and diagnosis-based approaches to autonomous architectures that will manage safety and survivability through active, proactive and prognosis-based solutions. Resilience engineering is an emerging discipline, with alternative recommendations on safer and more survivable system architectures. A resilient system can "absorb" the impact of change due to unexpected disturbances, while it "adapts" to change, in order to maintain its physical integrity and mission capability. A framework of proposed resilience estimations is the basis for a scenario-based assessment technique, driven by modeling and simulation-based (M&S) analysis, for obtaining system performance, health monitoring, damage propagation and overall mission capability responses. For the technique development and testing, a small-scale canonical problem has been formulated, involving a reconfigurable spring-mass-damper system, in a multi-spring configuration. Operational uncertainty is introduced through disturbance factors, such as external forces with varying magnitude, input frequency, event duration and occurrence time. Case studies with varying levels of damping and alternative reconfiguration strategies return the effects of operational uncertainty on system performance, mission capability, and survivability, as well as on the "restore", "absorb", and "adapt" resilience capacities. The Topological Investigation for Resilient and Effective Systems, through Increased Architecture Survivability (TIRESIAS) technique is demonstrated for a reduced scale, reconfigurable naval cooling network application. 
With uncertainty effects modeled through combinations of network leaks, TIRESIAS provides insight into the effects of leaks on survival times, mission-capability degradation, and the resilience function capacities for the baseline configuration. Comparative case studies were conducted for different architecture configurations, generated for different total numbers of control valves and valve locations on the topology.
ABNT, Harvard, Vancouver, APA and other styles
9

Meslmawy, Mahdi Abed Salman. "Efficient ressources management in a ditributed computer system, modeled as a dynamic complex system". Thesis, Le Havre, 2015. http://www.theses.fr/2015LEHA0007/document.

Full text of the source
Abstract:
Grids and clouds are two currently widespread types of distributed computing systems (DCSs). DCSs are complex systems in the sense that their emergent global behavior results from the decentralized interaction of their parts and is not guided directly from a central point. In our study, we present a complex-system model that manages the resources of a DCS as efficiently as possible. The entities of the DCS react to system instability and adjust to environmental conditions in order to optimize system performance. The structure of the interaction networks that allow fast and reliable access to available resources is studied, and improvements are proposed.
ABNT, Harvard, Vancouver, APA and other styles
10

Moukir, Sara. "High performance analysis for road traffic control". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG039.

Full text of the source
Abstract:
The need to reduce travel times and energy consumption in urban road networks is critical for improving collective well-being and environmental sustainability. Since the 1950s, traffic modeling has been a central research focus. With the rapid evolution of computing capabilities in the 21st century, sophisticated digital simulations have emerged, accurately depicting road traffic complexities. Mobility simulations are essential for assessing emerging technologies like cooperative systems and dynamic GPS navigation without disrupting real traffic. As transport systems become more complex with real-time information, simulation models must adapt. Multi-agent simulations, which analyze individual behaviors within a dynamic environment, are particularly suited for this task. These simulations help understand and manage urban traffic by representing interactions between travelers and their environment. Simulating large populations of travelers in cities, potentially millions of individuals, has historically been computationally demanding. Advanced computer technologies allowing distributed calculations across multiple computers have opened new possibilities. However, many urban mobility simulators do not fully exploit these distributed architectures, limiting their ability to model complex scenarios involving many travelers and extensive networks. The main objective of this research is to improve the algorithmic and computational performance of mobility simulators. We aim to develop and validate generic and reproducible distribution models that can be adopted by various multi-agent mobility simulators. This approach seeks to overcome technical barriers and provide a solid foundation for analyzing complex transport systems in dynamic urban environments. Our research leverages the MATSim traffic simulator due to its flexibility and open structure.
MATSim is widely recognized in the literature for multi-agent traffic simulation, making it an ideal candidate to test our generic methods. Our first contribution applies the "Unite and Conquer" (UC) approach to MATSim. This method accelerates simulation speed by leveraging modern computing architectures. The multiMATSim approach involves replicating several MATSim instances across multiple computing nodes with periodic communications. Each instance runs on a separate node, utilizing MATSim's native multithreading capabilities to enhance parallelism. Periodic synchronization ensures data consistency, while fault-tolerance mechanisms allow the simulation to continue smoothly even if some instances fail. This approach efficiently uses diverse computational resources based on each node's specific capabilities. The second contribution explores artificial intelligence techniques to expedite the simulation process. Specifically, we use deep neural networks to predict MATSim simulation outcomes. Initially implemented on a single node, this proof-of-concept approach efficiently uses available CPU resources. Neural networks are trained on data from previous simulations to predict key metrics like travel times and congestion levels. The outputs are compared to MATSim results to assess accuracy. This approach is designed to scale, with future plans for distributed neural network training across multiple nodes. In summary, our contributions provide new algorithmic variants and explore integrating high-performance computing and AI into multi-agent traffic simulators. We aim to demonstrate the impact of these models and technologies on traffic simulation, addressing the challenges and limitations of their implementation. Our work highlights the benefits of emerging architectures and new algorithmic concepts for enhancing the robustness and performance of traffic simulators, presenting promising results.
ABNT, Harvard, Vancouver, APA and other styles
11

MacLellan, Michael. "Central Nervous System Control of Dynamic Stability during Locomotion in Complex Environments". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2804.

Full text of the source
Abstract:
A major function of the central nervous system (CNS) during locomotion is to maintain dynamic stability in the face of threats to balance. The CNS uses reactive, predictive, and anticipatory mechanisms to accomplish this. Previously, stability has been estimated using single measures. Since the entire body works as a system, dynamic stability should be examined by integrating kinematic, kinetic, and electromyographic measures of the whole body. This thesis examines three threats to stability (recovery from a frontal-plane surface translation, stepping onto and walking on a compliant surface, and obstacle clearance on a compliant surface). These threats to stability enable a full-body stability analysis of reactive, predictive, and anticipatory CNS control mechanisms. From the results of this study, observing various biomechanical variables provides a more precise evaluation of dynamic stability and how it is achieved. Observations showed that different methods of increasing stability (e.g., lowering the full-body COM, increasing step width) were controlled by differing CNS mechanisms during a task. This provides evidence that a single measure cannot determine dynamic stability during a locomotion task and that the body must be observed as a whole to determine the methods used in the maintenance of dynamic stability.
ABNT, Harvard, Vancouver, APA and other styles
12

Mykityshyn, Mark. "Assessing the maturity of information architectures for complex dynamic enterprise systems". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/26686.

Full text of the source
Abstract:
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Dr. William B. Rouse; Committee Member: Dr. Amy Pritchett; Committee Member: Dr. Leon McGinnis; Committee Member: Dr. Mike Cummins; Committee Member: Dr. Steve Cross. Part of the SMARTech Electronic Thesis and Dissertation Collection.
ABNT, Harvard, Vancouver, APA and other styles
13

Mao, Shenghao. "Dynamic resource management technologies for CBR heterogeneous services in the reverse link CDMA2000 1X system". Thesis, University of Newcastle Upon Tyne, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427382.

Full text of the source
ABNT, Harvard, Vancouver, APA and other styles
14

Binotto, Alécio Pedro Delazari. "A dynamic scheduling runtime and tuning system for heterogeneous multi and many-core desktop platforms". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/34768.

Full text of the source
Abstract:
A modern personal computer can be now considered as a one-node heterogeneous cluster that simultaneously processes several applications’ tasks. It can be composed by asymmetric Processing Units (PUs), like the multi-core Central Processing Unit (CPU), the many-core Graphics Processing Units (GPUs) - which have become one of the main co-processors that contributed towards high performance computing - and other PUs. This way, a powerful heterogeneous execution platform is built on a desktop for data intensive calculations. In the perspective of this thesis, to improve the performance of applications and explore such heterogeneity, a workload distribution over the PUs plays a key role in such systems. This issue presents challenges since the execution cost of a task at a PU is non-deterministic and can be affected by a number of parameters not known a priori, like the problem size domain and the precision of the solution, among others. Within this scope, this doctoral research introduces a context-aware runtime and performance tuning system based on a compromise between reducing the execution time of the applications - due to appropriate dynamic scheduling of high-level tasks - and the cost of computing such scheduling applied on a platform composed of CPU and GPUs. This approach combines a model for a first scheduling based on an off-line task performance profile benchmark with a runtime model that keeps track of the tasks’ real execution time and efficiently schedules new instances of the high-level tasks dynamically over the CPU/GPU execution platform. For that, it is proposed a set of heuristics to schedule tasks over one CPU and one GPU and a generic and efficient scheduling strategy that considers several processing units. The proposed approach is applied in a case study using a CPU-GPU execution platform for computing iterative solvers for Systems of Linear Equations using a stencil code specially designed to explore the characteristics of modern GPUs. 
The solution uses the number of unknowns as the main parameter for the assignment decision. By scheduling tasks to both the CPU and the GPU, a performance gain of 21.77% is achieved in comparison to the static assignment of all tasks to the GPU (as done by current programming models such as OpenCL and Nvidia's CUDA), with a scheduling error of only 0.25% compared to exhaustive search.
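The profile-based scheduling idea the abstract describes can be sketched minimally: keep per-device timing profiles keyed on problem size (number of unknowns), assign each task to the device with the lowest predicted runtime, and blend measured runtimes back into the profile. This is an illustrative sketch, not the dissertation's actual heuristics; all class names, weights and timing numbers below are invented for the example.

```python
# Sketch of a profile-guided CPU/GPU scheduler keyed on problem size
# (number of unknowns). Profile numbers are illustrative, not measured.

class ProfileScheduler:
    def __init__(self, offline_profile):
        # offline_profile: {device: {size: seconds}} from an off-line benchmark
        self.profile = {dev: dict(samples) for dev, samples in offline_profile.items()}

    def predict(self, device, size):
        # Use the measurement for the closest profiled size.
        samples = self.profile[device]
        nearest = min(samples, key=lambda s: abs(s - size))
        return samples[nearest]

    def schedule(self, size):
        # Pick the device with the lowest predicted runtime.
        return min(self.profile, key=lambda dev: self.predict(dev, size))

    def update(self, device, size, measured):
        # Runtime feedback: blend the new measurement into the profile.
        old = self.profile[device].get(size, measured)
        self.profile[device][size] = 0.5 * old + 0.5 * measured

sched = ProfileScheduler({
    "cpu": {1_000: 0.002, 1_000_000: 2.1},
    "gpu": {1_000: 0.004, 1_000_000: 0.3},
})
print(sched.schedule(2_000))    # small systems stay on the CPU
print(sched.schedule(800_000))  # large systems go to the GPU
```

The split point emerges from the profiles rather than being hard-coded, which is what lets such a scheduler adapt when runtime measurements drift from the off-line benchmark.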
Estilos ABNT, Harvard, Vancouver, APA, etc.
15

Ahlman, Scott M. (Scott Martin) 1969. "Complex dynamic system architecture evaluation through a hierarchical synthesis of tools and methods". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29737.

Texto completo da fonte
Resumo:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2003.
Includes bibliographical references (p. 202-203).
The automobile embodies a complex dynamic system architecture with thousands of components and as many interconnections. The modern-day vehicle architecture attempts to balance significant tradeoffs and constraints to achieve the system goals. There are innumerable combinations, which may or may not achieve success. This work proposes a new method for the evaluation of complex dynamic system architecture through a hierarchical synthesis of specific qualitative and quantitative tools and methods within a system architecture framework. The proposed methodology is applied to key subsystems of a specific high-performance car, primarily to assess the merits of the process. The methods for system architecture definition currently used at the automobile manufacturer analyzed here rely primarily on experience-based intuition within an architecting framework. Current system architecture frameworks and the manufacturer's process appear insufficient, as significant issues (often dynamics-related) arise in the verification and validation phase of the product development process, requiring changes to the vehicle architecture. Changes in architecture at this phase of the manufacturer's product development process have significant cost, timing and perhaps functional performance implications. Many system architecture and engineering tools exist to aid architecture definition, but a hierarchy of usage and the interrelationships of the tools are not clearly defined. The proposed solution for rigorous complex dynamic system architecture evaluation includes a four-phase hierarchical synthesis of known qualitative and quantitative tools and methods within a holistic system architecture framework. For the purposes of this thesis, the proposed evaluation methodology is labeled "CD-SAAM", for Complex Dynamic System Architecture Assessment Methodology.
The proposed methodology is a rigorous complement, superimposed on the concept development phase, to the standard product development design process. CD-SAAM mainly combines known system architecting and system engineering frameworks, principles and tools. Application of CD-SAAM to a high-performance car's powertrain and chassis system architecture's second-level form and function decomposition serves to demonstrate several high-level conclusions. The hierarchy and synthesis of framework, principles and tools in CD-SAAM provided a valuable and rigorous method to evaluate complex dynamic system architecture. While certain aspects of the proposed methodology appear time-consuming, each step and the overall process serve to greatly improve consistent success with respect to the achievement of a system's goals within its constraints. Application of CD-SAAM also underscores the importance of and need for explicit design parameter identification and analysis in complex dynamic system architecture assessment. The performance car application also provides insight into the value of DOE RSE methods in architecture assessment, as opposed to their typical region of use in detailed design analysis. Finally, a positive by-product of the analysis is CD-SAAM's ability to evaluate the consistency and attainability of goals within the given constraints.
by Scott M. Ahlman.
S.M.
Estilos ABNT, Harvard, Vancouver, APA, etc.
16

Impiombato, Andrea Natale <1990>. "Geometric optimization of complex thermal-fluid dynamic system by means of constructal design". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10056/1/Andrea_Impiombato.pdf.

Texto completo da fonte
Resumo:
In this work the Constructal Theory is presented in its generality, approached through examples of a mostly physical-engineering nature. Constructal Theory proposes to see living bodies as elements subject to constraints, which are built with a goal, an objective, which is to obtain maximum efficiency. Constructal Theory is characterized by the Constructal Law, which states that if a system has the freedom to morph, it develops over time a flow architecture that provides easier access to the currents that pass through it. The Constructal Law is as general as the First and Second Laws of Thermodynamics, but it has a very different purpose, which makes it unique and complementary to those laws. While the First Law points to the conservation of energy, both the Constructal Law and the Second Law point to change, that is, to a direction in time. Contrary to the Second Law, the Constructal Law applies to systems that are out of balance, that is, to systems that evolve over time. While the Second Law deals with state variables, the Constructal Law combines flows and design. The thesis continues with the application of the Constructal Theory to a cardiac bypass shape optimization. Through the Constructal Theory the constraints under which the system is free to morph are defined and, through classical engineering optimization processes (numerical simulations and optimization algorithms), the optimum conditions are defined, i.e., those conditions that guarantee the minimum resistance to the passage of the fluid. The characterization of the blood flow was an important step in the study of this system, as the heartbeat induces a pulsed regime inside the veins. Therefore, the simulations conducted in the transient regime consider the deformed velocity profile according to the conditions dictated by the pressure gradient established by the heartbeat.
Estilos ABNT, Harvard, Vancouver, APA, etc.
17

Pesonen, L. T. (Lasse T. T. ). "Implementation of design to profit in a complex and dynamic business context". Doctoral thesis, University of Oulu, 2001. http://urn.fi/urn:isbn:9514264509.

Texto completo da fonte
Resumo:
Abstract The objective of this thesis is to demonstrate a design to profit procedure and its implementation in an industrial case environment. The procedure is demonstrated as a way to improve profits in a global company. The essential elements of the procedure are product business case calculations and the profit consciousness of employees. This study utilizes a combination of product life cycle analysis, advanced costing methods and multidimensional data processing for the product business case calculations. The combination is necessary for solving the research task. The need for proactive design is emphasized in the telecommunications industry due to shorter and shorter product life cycles. However, traditional accounting methods do not support proactive design work sufficiently during the life cycle of the products. The design to profit procedure has been created to help business managers solve the following problems: 1. How to proactively ensure the growth of business profits in the future? 2. How to prevent suboptimal decisions from being made in functional units and to promote overall profitability? 3. How to judge the profitability of new product programs within a company? 4. How can we ensure an adequate level of cost consciousness and profitability-driven targets for the company's key employees? This study presents and discusses the construction of the procedure and describes its elements, implementation and use in practice. The argumentation is illustrated by case studies. This method has benefits, especially when product life cycles are short and market competition strong. The design to profit procedure is a proactive mindset or thinking pattern. This system makes employees aware of the importance of target profitability and especially target costing. There is no decision support system that could guarantee the profitability of a business.
Cautious utilization of the system results and common sense are required to achieve continuous growth of business profits.
Estilos ABNT, Harvard, Vancouver, APA, etc.
18

Kciuk, Thaddeus A. "The static and dynamic analysis of a complex optical-mechanical system utilizing the finite element method /". Online version of thesis, 1985. http://hdl.handle.net/1850/10326.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
19

Kamapantula, Bhanu K. "In-silico Models for Capturing the Static and Dynamic Characteristics of Robustness within Complex Networks". VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/4049.

Texto completo da fonte
Resumo:
Understanding the role of structural patterns within complex networks is essential to establish the governing principles of such networks. Social networks, biological networks, technological networks, etc. can be considered as complex networks where information processing and transport plays a central role. Complexity in these networks can be due to abstraction, scale, functionality and structure. Depending on the abstraction, each of these can be categorized further. Gene regulatory networks are one such category of biological networks. Gene regulatory networks (GRNs) are assumed to be robust under internal and external perturbations. Network motifs such as the feed-forward loop motif and the bifan motif are believed to play a central functional role in retaining GRN behavior under lossy conditions. While the role of static characteristics like average shortest path, density, and degree centrality, among other topological features, is well documented by the research community, the structural role of motifs and their dynamic characteristics are not well understood. Wireless sensor networks in the last decade were intensively studied using network simulators. Can we use in-silico experiments to understand biological network topologies better? Does the structure of these motifs have any role to play in ensuring robust information transport in such networks? How do their static and dynamic roles differ? To understand these questions, we use in-silico network models to capture the dynamic characteristics of complex network topologies. Developing these models involves network mapping, sink selection strategies and identifying metrics to capture robust system behavior. Further, I studied the dynamic aspect of network characteristics using variation in network information flow under perturbations defined by lossy conditions and channel capacity. We use machine learning techniques to identify significant features that contribute to robust network performance.
Our work demonstrates that although the structural role of the feed-forward loop motif in signal transduction within GRNs is minimal, these motifs stand out under heavy perturbations.
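The feed-forward loop (FFL) motif discussed above has a simple structural definition: three nodes X, Y, Z with directed edges X→Y, X→Z and Y→Z. A brute-force counter makes the definition concrete; this is a toy illustration (cubic in the number of nodes), not the motif-detection machinery a real GRN study would use.

```python
# Count feed-forward loop (FFL) motifs in a small directed network:
# roles (x, y, z) with edges x->y, x->z, y->z. Brute force, O(n^3).

from itertools import permutations

def count_ffl(edges):
    adj = set(edges)
    nodes = {n for e in edges for n in e}
    count = 0
    for x, y, z in permutations(nodes, 3):
        if (x, y) in adj and (x, z) in adj and (y, z) in adj:
            count += 1
    return count

edges = [("A", "B"), ("A", "C"), ("B", "C")]
print(count_ffl(edges))  # 1
```

Because the three roles in an FFL are distinct, each motif instance is counted exactly once by the role permutation.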
Estilos ABNT, Harvard, Vancouver, APA, etc.
20

Zhang, Daili. "Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33963.

Texto completo da fonte
Resumo:
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. 
First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. 
Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
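The fully factorized Boyen-Koller (BK) approximation used above for local DBN belief updating can be sketched in a few lines: propagate the coupled joint belief one exact step, then project it back onto a product of per-variable marginals. The two-variable toy below, including its transition tables, is an invented illustration of the projection idea, not the dissertation's model.

```python
# Toy fully-factorized Boyen-Koller (BK) step for two binary variables.
# A[k][i] = P(x1'=k | x1=i); B[l][i] = P(x2'=l | x1=i), so x2' is
# coupled to x1. All numbers are illustrative.

def bk_step(m1, m2, A, B):
    # Exact one-step joint from the factored prior m1 (x) m2.
    joint = [[sum(A[k][i] * B[l][i] * m1[i] for i in range(2))
              for l in range(2)] for k in range(2)]
    # BK projection: keep only the per-variable marginals.
    m1_new = [sum(joint[k][l] for l in range(2)) for k in range(2)]
    m2_new = [sum(joint[k][l] for k in range(2)) for l in range(2)]
    return m1_new, m2_new

A = [[0.9, 0.2], [0.1, 0.8]]
B = [[0.7, 0.3], [0.3, 0.7]]
m1, m2 = bk_step([1.0, 0.0], [0.5, 0.5], A, B)
print(m1, m2)  # approximately [0.9, 0.1] and [0.7, 0.3]
```

The projection is what keeps each agent's local state compact: the correlation created by the coupled transition is discarded after every step, trading exactness for a bounded per-agent representation.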
Estilos ABNT, Harvard, Vancouver, APA, etc.
21

AlZahrani, Saleh Saeed. "Regionally distributed architecture for dynamic e-learning environment (RDADeLE)". Thesis, De Montfort University, 2010. http://hdl.handle.net/2086/3814.

Texto completo da fonte
Resumo:
e-Learning is assuming an influential role as an economical method and a flexible mode of study in institutions of higher education today, with a presence in an increasing number of college and university courses. e-Learning as a system of systems is a dynamic and scalable environment. Within this environment, e-learning is still searching for a permanent, comfortable and serviceable position that is controlled, managed, flexible, accessible and continually up-to-date within the wider university structure. As most academic and business institutions and training centres around the world have adopted the e-learning concept and technology in order to create, deliver and manage their learning materials through the web, it has become the focus of investigation. However, management, monitoring and collaboration between these institutions and centres are limited. Existing technologies such as grids, web services and agents promise better results. In this research a new architecture has been developed and adopted to make the e-learning environment more dynamic and scalable by dividing it into regional data grids which are managed and monitored by agents. Multi-agent technology has been applied to integrate each regional data grid with the others in order to produce an architecture which is more scalable, reliable, and efficient. The result we refer to as the Regionally Distributed Architecture for Dynamic e-Learning Environment (RDADeLE). Our RDADeLE architecture is an agent-based grid environment which is composed of components such as learners, staff, nodes, regional grids, grid services and Learning Objects (LOs). These components are built and organised as a multi-agent system (MAS) using the Java Agent Development (JADE) platform. The main role of the agents in our architecture is to control and monitor grid components in order to build an adaptable, extensible, and flexible grid-based e-learning system.
Two techniques have been developed and adopted in the architecture to build LOs' information and grid services. The first technique is the XML-based Registries Technique (XRT). In this technique LOs' information is built using XML registries to be discovered by the learners. The registries are written in Dublin Core Metadata Initiative (DCMI) format. The second technique is the Registered-based Services Technique (RST). In this technique the services are grid services which are built using agents. The services are registered with the Directory Facilitator (DF) of a JADE platform in order to be discovered by all other components. All components of the RDADeLE system, including grid service, are built as a multi-agent system (MAS). Each regional grid in the first technique has only its own registry, whereas in the second technique the grid services of all regional grids have to be registered with the DF. We have evaluated the RDADeLE system guided by both techniques by building a simulation of the prototype. The prototype has a main interface which consists of the name of the system (RDADeLE) and a specification table which includes Number of Regional Grids, Number of Nodes, Maximum Number of Learners connected to each node, and Number of Grid Services to be filled by the administrator of the RDADeLE system in order to create the prototype. Using the RST technique shows that the RDADeLE system can be built with more regional grids with less memory consumption. Moreover, using the RST technique shows that more grid services can be registered in the RDADeLE system with a lower average search time and the search performance is increased compared with the XRT technique. Finally, using one or both techniques, the XRT or the RST, in the prototype does not affect the reliability of the RDADeLE system.
Estilos ABNT, Harvard, Vancouver, APA, etc.
22

Krosner, Stephen Paul. "Using an extension of rasmussen's abstraction hierarchy as a framework for design of a supervisory control system of a complex dynamic system". Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/25294.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Erdogan, Ezgi. "A Complex Dynamical Systems Model Of Education, Research, Employment, And Sustainable Human Development". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12612138/index.pdf.

Texto completo da fonte
Resumo:
Economic events of this era reflect the fact that the value of information and technology has surpassed the value of physical production. This motivates countries to focus on increasing the education levels of their citizens. However, policy making about the education system and its returns requires dynamical analyses in order to be sustainable. The study aims to investigate the dynamic characteristics of a country-wide education system, in particular that of Turkey. System Dynamics modeling, one of the most commonly used tools for understanding complex social structures, is employed. Our model introduces dynamic relationships among different classes of labor forces with varying education levels, university admissions, research quality, and the investments made in education, research and other sectors. Model experimentation provides new insights into the investment- and capacity-related aspects of the education system environment.
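The stock-and-flow structure that a System Dynamics model of an education pipeline builds on can be sketched minimally: admissions feed a student stock, graduations drain it into an educated-labor stock, and retirement drains that in turn. The stocks, flows and rate constants below are illustrative placeholders, not the thesis's actual model.

```python
# Minimal stock-and-flow sketch of an education pipeline, in the spirit
# of a System Dynamics model. All rates are illustrative.

def simulate(years, admissions=100.0, graduation_frac=0.2, retirement_frac=0.03):
    students, educated = 0.0, 0.0
    for _ in range(years):
        graduating = graduation_frac * students   # outflow of the student stock
        students += admissions - graduating       # stock update: net inflow
        educated += graduating - retirement_frac * educated
    return students, educated

s, e = simulate(50)
print(round(s), round(e))
```

Even this toy shows the delay structure such models exist to capture: the student stock settles near its equilibrium (admissions/graduation_frac) within a decade, while the educated-labor stock is still far from its own equilibrium after fifty years.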
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Gustad, Håvard. "Implications on System Integration and Standardisation within Complex and Heterogeneous Organisational Domains : Difficulties and Critical Success Factors in Open Industry Standards Development". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9300.

Texto completo da fonte
Resumo:

Numerous standardisation and integration initiatives in the use of information and communication technologies (ICT) seem to fail due to a lack of acknowledgement of the socio-technical negotiation that goes into standardisation work. This thesis addresses the implications of open standards development within organisational use of ICT. A standardisation initiative for data transmission, the PRODML project, within the domain of the Oil & Gas industry is investigated. This initiative strives to increase interoperability between organisations as it focuses on removing the use of proprietary standards. By using Actor-Network Theory, this thesis tries to articulate how such standards emerge, and the critical factors that can lead to their success. It emphasises the need to consider the importance of aligning interests in standards development, and the importance of creating the right initial alliance, building an installed base, for increased credibility and public acceptance.

Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Mirza, Ahmed Kamal. "Managing high data availability in dynamic distributed derived data management system (D4M) under Churn". Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-95220.

Texto completo da fonte
Resumo:
The popularity of decentralized systems is increasing day by day. These decentralized systems are preferable to centralized systems for many reasons; specifically, they are more reliable and more resource efficient. Decentralized systems are more effective in the area of information management when data is distributed across multiple peers and maintained in a synchronized manner. This data synchronization is the main requirement for information management systems deployed in a decentralized environment, especially when data/information is needed for monitoring purposes or some dependent data artifacts rely upon this data. In order to ensure a consistent and cohesive synchronization of dependent/derived data in a decentralized environment, a dependency management system is needed. In a dependency management system, when one chunk of data relies on another piece of data, the resulting derived data artifacts can use a decentralized systems approach but must consider several critical issues, such as how the system behaves if any peer goes down, how the dependent data can be recalculated, and how the data which was stored on a failed peer can be recovered. In the case of churn (resulting from failing peers), how does the system adapt the transmission of data artifacts with respect to their access patterns, and how does the system provide consistency management? The major focus of this thesis was to address these churn behavior issues and to suggest and evaluate potential solutions while ensuring a load-balanced network, within the scope of a dependency information management system running in a decentralized network. Additionally, in peer-to-peer (P2P) algorithms, it is a very common assumption that all peers in the network have similar resources and capacities, which is not true in real-world networks.
A peer's characteristics can be quite different in actual P2P systems, as peers may differ in available bandwidth, CPU load, available storage space, stability, etc. As a consequence, peers with low capacities are forced to handle the same computational load that high-capacity peers handle, resulting in poor overall system performance. In order to handle this situation, the concept of utility-based replication is introduced in this thesis to avoid the assumption of peer equality, enabling efficient operation even in heterogeneous environments where the peers have different configurations. In addition, the proposed protocol assures a load-balanced network while meeting the requirement for high data availability, thus keeping the distributed dependent data consistent and cohesive across the network. Furthermore, an implementation and evaluation in the PeerfactSim.KOM P2P simulator of an integrated dependency management framework, D4M, was done. In order to benchmark the implementation of the proposed protocol, performance and fairness tests were examined. A conclusion is that the proposed solution adds little overhead to the management of data availability in distributed data management systems despite using a heterogeneous P2P environment. Additionally, the results show that various P2P clusters can be introduced in the network based on peers' capabilities.
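The utility-based replication idea described above can be sketched as a scoring-and-ranking step: give each peer a utility score from its capacities and place replicas on the highest-utility peers instead of treating all peers as equal. The attribute names, weights and peer values below are invented for illustration and are not the D4M protocol itself.

```python
# Sketch of utility-based replica placement for heterogeneous peers.
# Attributes are assumed normalized to [0, 1]; weights are illustrative.

def utility(peer, w_bw=0.5, w_store=0.2, w_stab=0.3):
    return (w_bw * peer["bandwidth"]
            + w_store * peer["storage"]
            + w_stab * peer["stability"])

def place_replicas(peers, k):
    # Rank peers by utility and keep the top k as replica holders.
    ranked = sorted(peers, key=utility, reverse=True)
    return [p["id"] for p in ranked[:k]]

peers = [
    {"id": "p1", "bandwidth": 0.9, "storage": 0.4, "stability": 0.8},
    {"id": "p2", "bandwidth": 0.2, "storage": 0.9, "stability": 0.3},
    {"id": "p3", "bandwidth": 0.6, "storage": 0.6, "stability": 0.9},
]
print(place_replicas(peers, 2))  # ['p1', 'p3']
```

Weighting stability alongside bandwidth is what makes such a scheme churn-aware: replicas gravitate toward peers that are both fast and likely to stay online.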
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Baghaei, Lakeh Arash. "Essays on Utilizing Data Analytics and Dynamic Modeling to Inform Complex Science and Innovation Policies". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/95009.

Texto completo da fonte
Resumo:
In many ways, science represents a complex system which involves technical, social, and economic aspects. An analysis of such a system requires employing and combining different methodological perspectives and incorporation of different sources of data. In this dissertation, we use a variety of methods to analyze large sets of data in order to examine the effects of various domestic and institutional factors on scientific activities. First, we evaluate how the contributions of behavioral and social sciences to studies of health have evolved over time. We use data analytics to conduct a textual analysis of more than 200,000 publications on the topic of HIV/AIDS. We find that the focus of the scientific community within the context of the same problem varies as the societal context of the problem changes. Specifically, we uncover that the focus on the behavioral and social aspects of HIV/AIDS has increased over time and varies in different countries. Further, we show that this variation is related to the mortality level that the disease causes in each country. Second, we investigate how different sources of funding affect the science enterprise differently. We use data analytics to analyze more than 60,000 papers published on the subject of specific diseases globally and highlight the role of philanthropic money in these domains. We find that philanthropies tend to have a more practical approach in health studies as compared with public funders. We further show that they are also concerned with the economic, policy related, social, and behavioral aspects of the diseases. We uncover that philanthropies tend to mix and combine approaches and contents supported both by public and private sources of funding for science. We further show that in doing so, philanthropies tend to be closer to the position held by the public sector in the context of health studies. 
Finally, we find that studies funded by philanthropies tend to receive more citations, and hence have higher impact, than those funded by the public sector. Third, we study the effect of different schemes of funding distribution on the careers of scientists. In this study, we develop a system dynamics model for analyzing a scientist's career under different funding and competition contexts. We investigate the characteristics of optimal strategies and also the equilibrium points for the cases of scientists competing for financial resources. We show that a policy to fund the best can lead scientists to spend more time on writing proposals, in order to secure funding, rather than writing papers. We find that when everyone receives funding (or has the same chance of receiving funding), the overall optimal payoff of the scientists reaches its highest level, and at this optimum scientists spend all their time on writing papers rather than writing proposals. Our analysis suggests that more egalitarian distributions of funding result in higher overall research output by scientists. We also find that luck plays an important role in the success of scientists. We show that following the optimal strategies does not guarantee success. Due to the stochastic nature of funding decisions, some will eventually fail. The failure is not due to scientists' faulty decisions, but rather simply due to their lack of luck.
Ph. D.
Science helps us understand the world and enables us to improve how we interact with our environment. But science itself has also been the subject of inquiry by philosophers, sociologists, economists, historians, and scientists. The goal in the investigations of science has been to better understand how scientific advances occur, how to foster innovation, and how to improve the institutions that push science forward. This dissertation contributes to this area of research by asking and responding to several questions about the science enterprise. First, we study how communities of scientists in different parts of the world look at the seemingly same problem differently. We use a computational method to read through a large set of publications on the topic of HIV/AIDS (which includes more than 200,000 papers) and uncover the topics of these papers. We find that in the context of HIV/AIDS, contributions of behavioral and social scientists have increased over time. Moreover, we show that the share of these contributions in each country's total research output differs significantly. We further find that there is a significant relationship between a country's rate of death due to HIV/AIDS and the share of behavioral and social studies in the overall research profile of that country on the topic of HIV/AIDS. Second, we investigate how different sources of research funding affect scientific activities differently. Specifically, we focus on the role of philanthropic money in science and its effect on the content and impact of research studies. In our analysis, we rely on computational techniques that distinguish between different themes of research in the studies of a few diseases, as well as on different statistical methods. We find that philanthropies tend to have a more practical approach to health studies as compared with public sources of funding.
Meanwhile, we find that they are also concerned with the economic, policy-related, social, and behavioral aspects of the diseases. Moreover, we show that philanthropies tend to mix and combine approaches and contents supported both by public and private sources of funding for science. We find that, in doing so, philanthropies tend to be closer to the position held by the public sector in the context of health studies. Finally, we show that studies funded by philanthropies tend to receive more citations. This finding suggests that these studies have a higher impact in comparison to those funded by the public sector. Third, we study how different mechanisms for distributing research funding among scientists can affect their careers and success. Most scientists must spend time on both writing papers and writing research grant proposals. In this work, we aim at understanding how a scientist should allocate her time between these two activities to maximize her career-long number of papers. We develop a small mathematical model to capture the mechanisms related to the research career of a scientist in an academic setting. Then, for different schemes of funding distribution, we find the scientist's time allocation that maximizes the number of papers she publishes over her career. We find that when funding is allocated to the best scientists and best grant proposals, scientists' best strategy is to spend more time on writing research grant proposals than on papers. This decreases the total number of papers published by the scientists over their careers. We also find that luck is important in determining the career success of scientists. Due to errors in the evaluation of proposal quality, a scientist may fail in her career regardless of whether she has followed the best strategy that she could.
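The funding-distribution result described in the abstract above can be illustrated with a toy expected-value sketch. This is not the dissertation's system dynamics model; the functional form and all parameter values (`base`, `gain`) are invented for illustration only.

```python
def expected_papers(alpha, years=30, base=0.2, gain=0.8):
    """Expected career paper count when a scientist spends fraction `alpha`
    of each year on grant proposals and the rest on papers. The yearly
    funding probability rises with proposal effort (base + gain * alpha,
    capped at 1), and paper output is proportional to the time left over."""
    p_fund = min(1.0, base + gain * alpha)
    return years * (1.0 - alpha) * p_fund

# Best proposal-time fraction (as a percentage) under each invented regime.
competitive = max(range(101), key=lambda k: expected_papers(k / 100))
egalitarian = max(range(101), key=lambda k: expected_papers(k / 100, base=1.0))
```

Under the invented competitive scheme the optimum is an interior split (some time must go to proposals), while under egalitarian funding (`base=1.0`, everyone funded) the payoff is maximized by spending no time on proposals at all, which mirrors the abstract's finding.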
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Baumgartner, Laura. "Digging into biologically-driven injury mechanisms in the intervertebral disc: an evidence-based network modelling approach to estimate cell dynamics within complex multicellular systems". Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/673061.

Texto completo da fonte
Resumo:
It is well known that injuries affect tissue integrity. It is less well known that injury mechanisms can be initiated through a compromised cell response, which subsequently affects tissue integrity. Such injuries are subsequently referred to as biologically-driven injuries. They usually develop over long time periods, and disease progression remains largely silent. Biologically-driven injury mechanisms are still poorly understood, which is partly related to the limited methodologies available to estimate complex, dynamic cell responses over long periods of time. To this end, this work presents a novel network modelling approach to tackle complex multicellular systems. Thereby, the cell is considered a "black box" and cell activities, such as mRNA expressions, are directly linked to the surrounding stimulus environment, based on experimental knowledge. To achieve that, a set of integrative methodologies was developed to translate experimental findings into parameters suitable for systems biology models, and to numerically approximate cell activities within complex multicellular environments. This set of methodologies is presented as the "PNt-Methodology". The acronym "PNt" refers to the conceptualization of cell activities as numerous, potentially time-dependent (subscript t) networks that act simultaneously, i.e. parallel networks (PN). The PNt-Methodology was developed to investigate the intervertebral disc (IVD) Nucleus Pulposus, the degradation of which is assumed to be highly biologically-driven. The objective was to better understand the initiation of IVD degeneration. The effects of key relevant biochemical (glucose, lactate) and mechanical stimuli (load magnitude and frequency) on non-degenerated Nucleus Pulposus cells were investigated. The multicellular system was simulated within a 3D agent-based model and contained both non-inflamed and inflamed cells, whereby the proinflammatory mediators IL1β and TNF-α were considered.
This led to four different cell states: non-inflamed, or inflamed with IL1β, TNF-α, or both IL1β & TNF-α. For each cell state, the mRNA expressions of the main tissue proteins Aggrecan and Collagen Types I & II and of the crucial proteases MMP3 and ADAMTS4 were estimated. The qualitative results of the model were successfully validated against findings from the literature throughout the different steps of development. Eventually, the cell activities of the different cell states were approximated for different body postures and physical activities, including long-term predictions. To the best of our knowledge, this is the first in silico approach that tackles the cellular level in IVD research. Furthermore, thanks to its generic and scalable design, the PNt-Methodology is adaptable to more complex cell environments and is expected to be applicable to multicellular systems of other tissues. Hence, this contribution complements existing in silico methods by providing a new top-down, high-level network modelling approach based on biological measurements to approximate the dynamics of biologically-driven injury mechanisms.
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Boudermine, Antoine. "A dynamic attack graphs based approach for impact assessment of vulnerabilities in complex computer systems". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT046.

Texto completo da fonte
Resumo:
Nowadays, computer networks are used in many fields, and their breakdown can strongly impact our daily life. Assessing their security is a necessity to reduce the risk of compromise by an attacker. Nevertheless, the solutions proposed so far are rarely adapted to the high complexity of modern computer systems. They often rely on too much human work, and the algorithms used don't scale well. Furthermore, the evolution of the system over time is rarely modeled and is therefore not considered in the evaluation of its security. In this thesis, we propose a new attack graph model built from a dynamic description of the system. We have shown through our experiments that our model makes it possible to identify more attack paths than a static attack graph model. We then proposed an attack simulation algorithm to approximate the chances of a malicious actor successfully compromising the system. We also proved that our solution was able to analyze the security of complex systems. The worst-case time complexity was assessed for each algorithm used. Several tests were performed to measure their real performance. Finally, we applied our solution to an IT network composed of several thousand elements. Future work should be done to improve the performance of the attack graph generation algorithm in order to analyze increasingly complex systems. Solutions should also be found to facilitate the system modeling step, which is still a difficult task to perform, especially by humans. Finally, the simulation algorithm could be improved to be more realistic and take into account the real capabilities of the attacker. It would also be interesting to assess the impact of the attacks on the organization and its business processes.
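The abstract describes approximating an attacker's chances of successfully compromising a system by simulation over an attack graph. A generic Monte Carlo sketch of that idea follows; the three-tier toy graph, node names, and edge probabilities are invented for illustration and this is not the thesis's algorithm.

```python
import random

def simulate_compromise(graph, start, target, n_runs=10000, seed=0):
    """Monte Carlo estimate of the probability that an attacker who has
    compromised `start` eventually reaches `target`.
    graph: {node: [(next_node, exploit_success_probability), ...]}.
    Each run draws an independent success/failure for every exploit
    attempt and expands every node the attacker manages to compromise."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        compromised = {start}
        frontier = [start]
        while frontier:
            node = frontier.pop()
            for nxt, p in graph.get(node, []):
                if nxt not in compromised and rng.random() < p:
                    compromised.add(nxt)
                    frontier.append(nxt)
        hits += target in compromised
    return hits / n_runs

# Hypothetical three-tier network: web server -> app server -> database,
# with a 50% exploit success probability on each hop (so P(db) = 0.25).
graph = {"web": [("app", 0.5)], "app": [("db", 0.5)]}
p_db = simulate_compromise(graph, "web", "db")
```

Fixing the seed makes the estimate reproducible; in a dynamic model like the thesis's, the graph itself would also change between runs as the system evolves.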
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Murrani, Sana. "Unstable territories of representation : architectural experience and the behaviour of forms, spaces and the collective dynamic environment". Thesis, University of Plymouth, 2011. http://hdl.handle.net/10026.1/310.

Texto completo da fonte
Resumo:
This thesis applies an interdisciplinary cybernetic and phenomenological analysis to contemporary theories of representation and interpretation of architecture, resulting in a speculative theoretical model of architectural experience as a behavioural system. The methodological model adopted for this research defines the main structure of the thesis where the narrative and the contributing parts of its complexity emerge. The narrative is presented through objectives and hypotheses that shift and slide between architectural representation and its experience based on three key internal components in architecture: the architectural forms and spaces, the active observers that interact with their environment, and finally, the responsive environment. Three interrelated research questions are considered. The first seeks to define the influence of the theoretical instability between complex life processes, emerging technologies and active perception upon architecture. The second questions the way in which the architectural experience is generated. The third asks: Does architecture behave? And if so, is it possible to define its behavioural characteristics related to its representation, experience and the medium of communication in-between? The thesis begins by exploring the effect of developments in digitally interactive, biological, and hybrid technologies on representation in architecture. An account of architectural examples considers the shift in the meaning of representation in architecture from the actual and literal to the more conceptual and experimental, from the individual human body and its relations to the multifaceted ecosystem of collective and connected cultures. The writings of Kester Rattenbury, Neil Leach, and Peter Cook among others contribute to the transformation of the ordinary perceptual experience of architecture, the development of experimental practices in architectural theory, and the dynamism of our perception. 
The thesis goes on to suggest that instability in architectural representation does not only depend on the internal components of the architectural system but also on the principles and processes of complex systems, as well as changes in active perception and our consciousness, which act as the external influences on the system. Established theoretical endeavours in the biology of D’Arcy Thompson, Alan Turing, and John Holland and the philosophies of Merleau-Ponty, Richard Gregory, and Deleuze and Guattari are discussed in this context. Pre-programmed and computational models, illustrative and generative, are presented throughout the thesis. In the final stage of the development of the thesis, architecture is analysed as a system. This is not an unprecedented notion; however, defining the main elements and components of this system and their interactions, and thereafter identifying that the system behaves and defining its behavioural characteristics, adds to the knowledge in the field of theoretical and experimental architecture. This thesis considers the behavioural characteristics of architecture to be derived from the hypothetical links and unstable thresholds of its non-dualistic notions of materiality and immateriality, reality and virtuality, and finally, intentionality and interpretation.
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

RADI, Davide. "Essays on Nonlinear Dynamics, Heterogeneous Agents and Evolutionary Games in Economics and Finance". Doctoral thesis, Università degli studi di Bergamo, 2014. http://hdl.handle.net/10446/30390.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Grosu, Yaroslav G. "Thermodynamics and operational properties of nanoporous heterogeneous lyophobic systems for mechanical and thermal energy storage/dissipation". Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22579/document.

Texto completo da fonte
Resumo:
The thesis is devoted to theoretical and experimental investigations of the thermodynamic and operational properties of nanoporous Heterogeneous Lyophobic Systems (HLS) and their temperature dependences, in order to determine optimal conditions and increase the efficiency of HLS-based energy devices. The thesis reflects results obtained in three main directions of research: 1. Thermodynamic analysis; 2. Characteristics of HLS over a wide temperature range; 3. Stability of HLS under different operational conditions. The maximum temperature range investigated is 2 - 150 °C, and the pressure range is 0.1 - 120 MPa. In particular, the results include a proposed equation of state for real HLS, which takes into account the pore size distribution function; the energetic characteristics of four HLSs (two mesoporous and two microporous) collected over a wide temperature range; new operating regimes of HLSs investigated under controlled isobaric conditions; and a proposed concept for the use of HLS as a system with pronounced negative thermal expansion.
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Oet, Mikhail V. "Financial stress in an adaptive system: From empirical validity to theoretical foundations". Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1459347548.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Pelties, Christian. "The discontinuous galerkin approach for 3D seismic wave propagation and 3D dynamic rupture modeling in the case of a complex fault system". Diss., lmu, 2012. http://nbn-resolving.de/urn:nbn:de:bvb:19-145243.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Tchappi Haman, Igor. "Dynamic Multilevel and Holonic Model for the Simulation of a Large-Scale Complex System with Spatial Environment : Application to Road Traffic Simulation". Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCA004.

Texto completo da fonte
Resumo:
Nowadays, with the emergence of connected objects and cars, road traffic systems become more and more complex and exhibit hierarchical behaviours at several levels of detail. The multilevel modeling approach is an appropriate approach to represent traffic from several perspectives. Multilevel models are also an appropriate approach to model large-scale complex systems such as road traffic. However, most of the multilevel traffic models proposed in the literature are static, because they use a set of predefined levels of detail and these representations cannot change during simulation. Moreover, these multilevel models generally consider only two levels of detail. Few works have addressed dynamic multilevel traffic modeling. This thesis proposes a holonic, multilevel, and dynamic traffic model for large-scale traffic systems. The dynamic switching of the levels of detail during the execution of the simulation makes it possible to adapt the model to constraints related to the quality of the results or to the available computing resources. The proposal extends the DBSCAN algorithm to the context of holonic multi-agent systems. In addition, a methodology allowing a dynamic transition between the different levels of detail is proposed. Multilevel indicators based on standard deviation are also proposed in order to assess the consistency of the simulation results.
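As background to the proposal, the standard DBSCAN density-based clustering algorithm that the thesis extends can be sketched as follows. This is the plain algorithm, not the holonic multi-agent extension, and the point data and parameters are illustrative only.

```python
import math
from collections import deque

def dbscan(points, eps, min_pts, dist):
    """Plain DBSCAN: returns {point: cluster_id}, with -1 marking noise."""
    labels = {p: None for p in points}
    cluster = 0
    neighbors = lambda p: [q for q in points if dist(p, q) <= eps]
    for p in points:
        if labels[p] is not None:
            continue
        if len(neighbors(p)) < min_pts:
            labels[p] = -1  # noise for now; may become a border point later
            continue
        labels[p] = cluster  # p is a core point: start a new cluster
        queue = deque(neighbors(p))
        while queue:
            q = queue.popleft()
            if labels[q] == -1:
                labels[q] = cluster  # noise reclassified as border point
            if labels[q] is not None:
                continue
            labels[q] = cluster
            if len(neighbors(q)) >= min_pts:  # q is also core: keep expanding
                queue.extend(neighbors(q))
        cluster += 1
    return labels

# Illustrative data: two dense groups (e.g. platoons of vehicles) and an outlier.
pts = [(0, 0), (0, 1), (1, 0), (1, 1),
       (10, 10), (10, 11), (11, 10), (11, 11), (50, 50)]
euclid = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
labels = dbscan(pts, eps=2.0, min_pts=3, dist=euclid)
```

In a traffic setting, clusters found this way could be promoted to a coarser level of detail (one group instead of many vehicles), while noise points stay at the fine-grained level.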
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

VAIRO, TOMASO. "DARMS - Dynamic Asset-integrity and Risk Management System - How Machine Learning and Systems Engineering cooperate to enhance the resilience of complex systems". Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1080188.

Texto completo da fonte
Resumo:
“Static, incomplete, superficial, wrong”. The traditional approach to risk analysis, as applied in the process industries, has been widely criticized in response to recent major accidents. Since it was first proposed, modifications and improvements have been made, and a formally accepted approach is included in several regulations and standards (such as the recent development of guidelines for ageing management in SEVESO installations). Quantitative Risk Assessment (QRA) is based on consolidated procedures. Nevertheless, the need for safety improvement calls for more advanced tools for hazard identification and risk evaluation. Besides technical aspects (e.g., malfunctions and process upsets), operational errors and organizational aspects, such as a lack of attention and motivation toward safety culture, may increase risk in terms of the likelihood of undesired failures. Not all of those aspects can be investigated with conventional QRA techniques, which also have the disadvantage of being intrinsically static and failing to capture risk variations during the lifecycle of a plant or production site. Despite their proven effectiveness, many hazard identification and risk assessment techniques lack the dynamic dimension, which is the ability to learn from new risk notions, experience, and early warnings. It is time to go beyond the limits of conventional static methods for hazard identification and risk assessment; risk assessment is indeed a very useful approach in support of this change, but at the same time it is not sufficient to also capture possible “failures” in the interfaces and interactions among the individual components of a complex system, beyond their specific failures. This research work discusses a novel approach for dynamizing the risk assessment process, integrating measured process data, asset integrity, and operating conditions.
In the first part of the thesis, the inferential process and the application of Machine Learning to inference are discussed, and various applications of standard and tailored machine learning algorithms to industrial and environmental risks are detailed as case studies. The second part is focused on resilience engineering. The resilience paradigm is discussed, as well as the concept of emergent properties of complex systems. It is shown how real-time data analytics, through appropriate AI models, combined with the expert knowledge of process engineering, constitute the fundamental technological key to pursuing the resilience of plants and processes. The third section integrates the aforementioned concepts within the wider framework of Systems Engineering. Accordingly, a dynamic and systemic model is presented to address the significant shortcomings of current risk analysis models. The Dynamic Asset-integrity and Risk Management System (DARMS) is designed starting from the Bow-tie technique, integrated with improved Machine Learning algorithms, to overcome the epistemic uncertainty in the prior probabilities and likelihoods of escalation factors and barriers. Subsequently, a Hidden Markov Model (HMM), based on Bayesian inference, is developed to analyze risk in real time and produce reliable predictions on the state of the whole system during operations. The application of the proposed model is demonstrated on an Oil and Gas terminal under Seveso legislation. The results of the case study provide a better understanding of advanced data-driven modeling of accident scenarios. The proposed model will serve as a useful tool for the operational safety management of complex systems.
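The recursive Bayesian inference behind a discrete HMM, as mentioned in the abstract, can be sketched with a generic forward filter. The two-state "safe"/"degraded" example and all probability values below are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

def forward_filter(pi, A, B, observations):
    """Recursive Bayesian (forward) filtering for a discrete HMM.
    pi: (S,) initial state distribution
    A:  (S, S) transition matrix, A[i, j] = P(state j at t | state i at t-1)
    B:  (S, O) emission matrix,   B[j, o] = P(observation o | state j)
    Returns one posterior P(state | observations so far) per observation."""
    belief = np.asarray(pi, dtype=float)
    posteriors = []
    for o in observations:
        belief = (belief @ A) * B[:, o]  # predict one step, then update
        belief /= belief.sum()           # renormalize to a distribution
        posteriors.append(belief.copy())
    return posteriors

# Illustrative two-state plant: state 0 = "safe", state 1 = "degraded";
# observation 0 = normal sensor reading, observation 1 = anomalous reading.
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
posteriors = forward_filter([0.99, 0.01], A, B, observations=[1, 1, 1])
```

Each successive anomalous reading shifts the filtered posterior further toward the "degraded" state, which is the kind of real-time state estimate the abstract describes feeding into operational risk management.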
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Gantel, Laurent. "Hardware and software architecture facilitating the operation by the industry of dynamically adaptable heterogeneous embedded systems". Phd thesis, Université de Cergy Pontoise, 2014. http://tel.archives-ouvertes.fr/tel-01019909.

Texto completo da fonte
Resumo:
This thesis aims to define software and hardware mechanisms that help in the management of Heterogeneous and dynamically Reconfigurable Systems-on-Chip (HRSoC). The heterogeneity is due to the presence of general-purpose processing units and reconfigurable IPs. Our objective is to provide an application developer with an abstracted view of this heterogeneity with regard to task mapping on the available processing elements. First, we homogenize the user interface by defining a hardware thread model. Then, we pursue the homogenization of hardware thread management. We implemented OS services that permit saving and restoring a hardware thread context. Design tools have also been developed in order to overcome the relocation issue. The last step consisted of extending access to the distributed OS services to every thread running on the platform. This access is provided independently of the thread's location and is realized by implementing the MRAPI API. With these three steps, we build a solid basis for providing the developer, in future work, with a design flow dedicated to HRSoC that allows precise architectural space exploration. Finally, to validate these mechanisms, we realized a demonstration platform on a Virtex 5 FPGA running a dynamic tracking application.
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Kharabe, Amol T. "Organizational Agility and Complex Enterprise System Innovations: A Mixed Methods Study of the Effects of Enterprise Systems on Organizational Agility". Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1339176723.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Pelties, Christian [Verfasser], e Heiner [Akademischer Betreuer] Igel. "The discontinuous Galerkin approach for 3D seismic wave propagation and 3D dynamic rupture modeling in the case of a complex fault system / Christian Pelties. Betreuer: Heiner Igel". München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2012. http://d-nb.info/1025046935/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Alharbi, Fahad. "The Dynamics of the L2 Motivational Self System among Saudi Study Abroad Students". Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6672.

Texto completo da fonte
Resumo:
Adult second language acquisition unfolds over an extended period of time during which the L2 motivation of learners goes through periods of ups and downs. Dörnyei, MacIntyre and Henry (2015) recognized the inherently dynamic nature of L2 motivation and called for adopting Complex Dynamic Systems Theory (CDST) when studying this phenomenon. Adopting a CDST perspective, this mixed-methods study drew on Dörnyei's (2009b) model of the Motivational Self System to examine the L2 motivation of 86 Saudi study-abroad students. The construct of the Anti-ought-to Self (Thompson, 2015) and aspects of Appraisal Theory (Schumann, 2001) were also adopted to guide this examination. The results of the study showed that the L2 motivation of the participants fell into four main motivational patterns. Some of the participants also shifted into new attractor states over the course of their academic semester. Another important finding was that the Anti-ought-to Self emerged as an important construct: the results of the standard multiple regressions showed that the variance in Intended Learning Effort accounted for by the Anti-ought-to Self alone exceeded the variance accounted for by all the other explanatory variables combined. The analysis of the quantitative and qualitative data also showed that the use of Appraisal Theory improved the construct validity of the Learning Experiences. The implications of these findings and future directions for L2 motivation research are also discussed in the study.
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Preston, Jon Anderson. "Rethinking Consistency Management in Real-time Collaborative Editing Systems". Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/18.

Texto completo da fonte
Resumo:
Networked computer systems offer much to support collaborative editing of shared documents among users. Increasing concurrent access to shared documents by allowing multiple users to contribute to and/or track changes to these shared documents is the goal of real-time collaborative editing systems (RTCES); yet in existing systems concurrent access is either limited by exclusive locking, or enabled through concurrency control algorithms such as operational transformation (OT). Unfortunately, such OT-based schemes are costly with respect to communication and computation. Further, existing systems are often specialized in their functionality and require users to adopt new, unfamiliar software to enable collaboration. This research discusses our work in improving consistency management in RTCES. We have developed a set of deadlock-free multi-granular dynamic locking algorithms and data structures that maximize concurrent access to shared documents while minimizing communication cost. These algorithms provide a high level of service for concurrent access to the shared document and integrate merge-based or OT-based consistency maintenance policies locally among a subset of the users within a subsection of the document – thus reducing the communication costs in maintaining consistency. Additionally, we have developed client-server and P2P implementations of our hierarchical document management algorithms. Simulation results indicate that our approach achieves significant communication and computation cost savings. We have also developed a hierarchical reduction algorithm that can minimize the space required by RTCES, and this algorithm may be pipelined through our document tree. Further, we have developed an architecture that allows a heterogeneous set of client editing software to connect with a heterogeneous set of server document repositories via Web services.
This architecture supports our algorithms and does not require client or server technologies to be modified – thus it is able to accommodate existing, favored editing and repository tools. Finally, we have developed a prototype benchmark system of our architecture that is responsive to users’ actions and minimizes communication costs.
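The multi-granular locking idea summarised in this abstract can be sketched with the classic intention-lock scheme on a document tree. The IS/IX/S/X compatibility matrix below is the textbook one, and the node and user names are hypothetical, not the thesis's actual data structures:

```python
# Compatibility of a requested lock mode with a mode already held by
# another user (classic multi-granularity matrix).
COMPAT = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "X"): False,
}

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.locks = name, parent, []  # locks: (user, mode)

    def path_to_root(self):
        node = self
        while node is not None:
            yield node
            node = node.parent

def try_lock(node, user, mode):
    """Lock `node` in `mode`, taking intention locks on every ancestor.
    (A full implementation would also check locks held on descendants.)"""
    intention = "IS" if mode == "S" else "IX"
    plan = [(node, mode)] + [(a, intention) for a in node.path_to_root()][1:]
    for n, m in plan:  # all-or-nothing compatibility check
        if any(u != user and not COMPAT[(m, held)] for u, held in n.locks):
            return False
    for n, m in plan:
        n.locks.append((user, m))
    return True

doc = Node("doc")
sec1, sec2 = Node("sec1", doc), Node("sec2", doc)
print(try_lock(sec1, "alice", "X"))   # alice edits section 1 exclusively
print(try_lock(sec2, "bob", "X"))     # sibling section: no conflict
print(try_lock(doc, "carol", "S"))    # whole-document read blocked by IX
```

Two writers in disjoint subsections proceed concurrently while a whole-document lock is refused, which is the concurrency gain over single exclusive locks that the abstract describes.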
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Maguire, Gregory M. "Concept of a dynamic organizational schema for a network-centric organization". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FMaguire.pdf.

Texto completo da fonte
Resumo:
Thesis (M.S. in Systems Technology)--Naval Postgraduate School, June 2003.
Thesis advisor(s): Carl R. Jones, William G. Kemple. Includes bibliographical references (p. 95-97). Also available online.
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Ribas, Lucas Correia. "Análise de texturas dinâmicas baseada em sistemas complexos". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-28072017-141204/.

Texto completo da fonte
Resumo:
A análise de texturas dinâmicas tem se apresentado como uma área de pesquisa crescente e em potencial nos últimos anos em visão computacional. As texturas dinâmicas são sequências de imagens de textura (i.e. vídeo) que representam objetos dinâmicos. Exemplos de texturas dinâmicas são: evolução de colônia de bactérias, crescimento de tecidos do corpo humano, escada rolante em movimento, cachoeiras, fumaça, processo de corrosão de metal, entre outros. Apesar de existirem pesquisas relacionadas com o tema e de resultados promissores, a maioria dos métodos da literatura possui limitações. Além disso, em muitos casos as texturas dinâmicas são resultado de fenômenos complexos, tornando a tarefa de caracterização um desafio ainda maior. Esse cenário requer o desenvolvimento de um paradigma de métodos baseados em complexidade. A complexidade pode ser compreendida como uma medida de irregularidade das texturas dinâmicas, permitindo medir a estrutura dos pixels e quantificar os aspectos espaciais e temporais. Neste contexto, o objetivo deste mestrado é estudar e desenvolver métodos para caracterização de texturas dinâmicas baseado em metodologias de complexidade advindas da área de sistemas complexos. Em particular, duas metodologias já utilizadas em problemas de visão computacional são consideradas: redes complexas e caminhada determinística parcialmente auto-repulsiva. A partir dessas metodologias, três métodos de caracterização de texturas dinâmicas foram desenvolvidos: (i) baseado em difusão em redes - (ii) baseado em caminhada determinística parcialmente auto-repulsiva - (iii) baseado em redes geradas por caminhada determinística parcialmente auto-repulsiva. Os métodos desenvolvidos foram aplicados em problemas de nanotecnologia e tráfego de veículos, apresentando resultados potenciais e contribuindo para o desenvolvimento de ambas áreas.
Dynamic texture analysis has become a growing and promising research area in computer vision in recent years. Dynamic textures are sequences of texture images (i.e. video) that represent dynamic objects. Examples of dynamic textures are: the evolution of bacterial colonies, growth of body tissues, a moving escalator, waterfalls, smoke, the process of metal corrosion, among others. Although there is research related to the topic and promising results, most methods in the literature have limitations. Moreover, in many cases dynamic textures are the result of complex phenomena, making the characterization task even more challenging. This scenario requires the development of a paradigm of methods based on complexity. Complexity can be understood as a measure of the irregularity of dynamic textures, allowing one to measure the structure of the pixels and to quantify spatial and temporal aspects. In this context, this master's research aims to study and develop methods for the characterization of dynamic textures based on complexity methodologies from the area of complex systems. In particular, two methodologies already used in computer vision problems are considered: complex networks and the partially self-repulsive deterministic walk. Based on these methodologies, three methods for the characterization of dynamic textures were developed: (i) based on diffusion in networks; (ii) based on the partially self-repulsive deterministic walk; (iii) based on networks generated by the partially self-repulsive deterministic walk. The developed methods were applied to problems in nanotechnology and vehicle traffic, presenting promising results and contributing to the development of both areas.
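The partially self-repulsive deterministic walk underlying methods (ii) and (iii) can be sketched on a single tiny grayscale frame. The pixel values, memory size `mu`, and the crude stopping rule below are illustrative assumptions; descriptors in this line of work are built from statistics of many such trajectories:

```python
# Deterministic walk: always move to the neighbour whose intensity is
# closest to the current pixel, excluding positions visited in the last
# `mu` steps (the self-repulsion memory).
def tourist_walk(grid, start, mu=1):
    """Return the trajectory of one walk starting at `start`."""
    rows, cols = len(grid), len(grid[0])
    path, pos = [start], start
    for _ in range(rows * cols * 4):          # hard cap; the walk ends in a cycle
        r, c = pos
        memory = set(path[-mu:])              # self-repulsion window
        neigh = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= r + dr < rows and 0 <= c + dc < cols
                 and (r + dr, c + dc) not in memory]
        if not neigh:
            break
        pos = min(neigh, key=lambda p: abs(grid[p[0]][p[1]] - grid[r][c]))
        if path.count(pos) > 2:               # crude attractor (cycle) detection
            break
        path.append(pos)
    return path

frame = [[12, 10, 90],
         [11, 95, 92],
         [13, 14, 96]]
trajectory = tourist_walk(frame, (0, 0), mu=1)
print(len(trajectory))  # transient + attractor length is the raw texture feature
```

In the actual methodology, histograms of transient and attractor sizes over all start pixels (and several `mu` values) form the texture signature; here a single trajectory shows the mechanism.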
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Benhani, El mehdi. "Sécurité des systèmes sur puce complexes hétérogènes". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSES016.

Texto completo da fonte
Resumo:
La thèse étudie la sécurité de la technologie ARM TrustZone dans le cadre des SoCs complexes hétérogènes. La thèse présente des attaques matérielles qui touchent des éléments de l’architecture des SoCs et elle présente aussi des stratégies de contremesure
The thesis studies the security of the ARM TrustZone technology in the context of complex heterogeneous SoCs. It presents hardware attacks that affect elements of the SoC architecture, as well as countermeasure strategies.
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Ahmadi, Achachlouei Mohammad. "Exploring the Effects of ICT on Environmental Sustainability: From Life Cycle Assessment to Complex Systems Modeling". Doctoral thesis, KTH, Miljöstrategisk analys (fms), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-171443.

Texto completo da fonte
Resumo:
The production and consumption of information and communication technology (ICT) products and services continue to grow worldwide. This trend is accompanied by a corresponding increase in electricity use by ICT, as well as direct environmental impacts of the technology. Yet a more complicated picture of ICT’s effects is emerging. Positive indirect effects on environmental sustainability can be seen in substitution and optimization (enabling effects), and negative indirect effects can be seen in additional demand due to efficiency improvements (rebound effects). A variety of methods can be employed to model and assess these direct and indirect effects of ICT on environmental sustainability. This doctoral thesis explores methods of modeling and assessing environmental effects of ICT, including electronic media. In a series of five studies, three methods were at times applied in case studies and at others analyzed theoretically. These methods include life cycle assessment (LCA) and complex systems modeling approaches, including System Dynamics (SD) and agent-based (AB) modeling. The first two studies employ the LCA approach in a case study of an ICT application, namely, the tablet edition of a Swedish design magazine. The use of tablets has skyrocketed in recent years, and this phenomenon has been little studied to date. Potential environmental impacts of the magazine’s tablet edition were assessed and compared with those of the print edition. The tablet edition’s emerging version (which is marked by a low number of readers and low reading time per copy) resulted in higher potential environmental impacts per reader than did the print edition. However, the mature tablet edition (with a higher number of readers and greater reading time per copy) yielded lower impacts per reader in half the ten impact categories assessed. 
While previous studies of electronic media have reported that the main life-cycle contributor to environmental impacts is the use phase (which includes operational electricity use as well as the manufacture of the electronic device), the present study did not support those findings in all scenarios studied in this thesis. Rather, this study found that the number of readers played an important role in determining which life-cycle phase had the greatest impacts. For the emerging version, with few readers, content production was the leading driver of environmental impacts. For the mature version, with a higher number of readers, electronic storage and distribution were the major contributors to environmental impacts. Only when there were many readers but low overall use of the tablet device was the use phase the main contributor to environmental impacts of the tablet edition of the magazine. The third study goes beyond direct effects at product- and service-level LCAs, revisiting an SD simulation study originally conducted in 2002 to model indirect environmental effects of ICT in 15 European countries for the period 2000-2020. In the current study, three scenarios of the 2002 study were validated in light of new empirical data from the period 2000–2012. A new scenario was developed to revisit the quantitative and qualitative results of the original study. The results showed, inter alia, that ICT has a stimulating influence on total passenger transport, for it makes it more cost- and time-efficient (rebound effects). The modeling mechanism used to represent this rebound effect is further investigated in the fourth study, which discusses the feedback loops used to model two types of rebound effects in passenger transport (direct economic rebound and time rebound). Finally, the role of systems thinking and modeling in conceptualizing and communicating the dynamics of rebound effects is examined. 
The aim of the fifth study was to explore the power of systems modeling and simulation to represent nonlinearities of the complex and dynamic systems examined elsewhere in this thesis. That study reviews previous studies that have compared the SD and AB approaches and models, summarizing their purpose, methodology, and results, based on certain criteria for choosing between SD and AB approaches. The transformation procedure used to develop an AB model for purposes of comparison with an SD model is also explored. In conclusion, first-order or direct environmental effects of ICT production, use, and disposal can be assessed employing an LCA method. This method can also be used to assess second-order or enabling effects by comparing ICT applications with conventional alternatives. However, the assessment of enabling effects can benefit from systems modeling methods, which are able to formally describe the drivers of change, as well as the dynamics of complex social, technical, and environmental systems associated with ICT applications. Such systems methods can also be used to model third-order or rebound effects of efficiency improvements by ICT.
Den ökande produktionen och konsumtionen av produkter och tjänster inom informations- och kommunikationsteknik (IKT) leder till en ökning av den globala elanvändningen samt direkta miljökonsekvenser kopplade till IKT. Men IKT har även indirekta miljömässiga effekter. Dessa kan vara positiva till exempel genom substitutions- och optimeringseffekter eller negativa genom att till exempel ge upphov till ytterligare efterfrågan på grund av effektivisering (så kallade reboundeffekter). Olika metoder kan användas för att modellera och bedöma både direkta och indirekta effekter av IKT. Syftet med denna avhandling är att undersöka metoder för modellering samt att studera miljöeffekter av IKT och elektronisk media med hjälp av livscykelanalys (LCA) och även modellering av komplexa och dynamiska system, samt simuleringsteknik, så som System Dynamics (SD) och agentbaserad (AB) modellering. Avhandlingen omfattar fem artiklar (artikel I-V). Artikel I & II beskriver resultaten från en fallstudie där miljöeffekter kopplade till en svensk tidskrift studeras med LCA. Tidskriftens version för surfplatta samt motsvarande tryckta version studeras och jämförs. Artikel III går ett steg vidare från produktnivåns LCA. Artikeln återkopplar till en SD simuleringsstudie som ursprungligen genomfördes under 2002. Simuleringsstudien gällde framtida miljöeffekter av IKT i 15 europeiska länder med tidsperspektivet 2000-2020. I artikeln valideras tre scenarier från simuleringsstudien med hjälp av nya empiriska data från 2000-2012 och ett nytt scenario modelleras. Kvantitativa och kvalitativa resultat från den ursprungliga studien diskuteras. Till exempel visar artikel III att IKT har en stimulerande effekt på den totala persontrafiken genom att göra den mer kostnads- och tidseffektiv (reboundeffekt). Modelleringsmekanismen som används för att representera denna reboundeffekt diskuteras vidare i artikel IV. 
Artikeln belyser och diskuterar den återkopplingsslinga (feedback-loop) som används för att modellera två typer av reboundeffekter kopplade till persontrafik (direkt ekonomisk rebound och tidsrelaterad rebound) samt jämför med en tidigare studie. Artikel IV behandlar också den roll systemtänkande och modellering kan spela i konceptualisering och kommunikation av reboundeffekters dynamik. För att ytterligare undersöka systemmodelleringens och simuleringens möjligheter att representera icke-linjära komplexa och dynamiska system (exempel på sådana diskuteras i artikel III och IV), sammanställer artikel V tidigare studier som jämför SD och AB-metoder och -modeller.  Studiernas mål och metod summeras och resultaten med avseende på vilka kriterier som presenteras för att välja mellan SD och AB sammanställs. Även processen för att omvandla en befintlig SD-modell till en AB-modell beskrivs. Avhandlingens slutsats är att LCA och systemmodelleringsmetoder kan vara användbara för att studera IKTs direkta effekter så väl som indirekta effekter på miljön.
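The direct economic rebound mechanism discussed above (papers III-IV) can be illustrated with a toy stock-adjustment loop in the System Dynamics style: an ICT-driven efficiency gain lowers the cost per passenger-km, demand responds through a price elasticity, and part of the expected energy saving is clawed back. All parameter values are illustrative assumptions, not results from the thesis:

```python
# Toy SD-style simulation of the direct economic rebound effect.
def simulate(steps=100, dt=0.1, efficiency_gain=0.20, elasticity=0.5):
    demand = 100.0                            # passenger-km per period (stock)
    energy_per_km = 1.0 * (1 - efficiency_gain)
    cost_per_km = energy_per_km               # cost proportional to energy use
    # constant-elasticity demand response to the lower cost
    target = 100.0 * (cost_per_km / 1.0) ** (-elasticity)
    for _ in range(steps):                    # demand adjusts toward its target
        demand += dt * (target - demand)
    return demand * energy_per_km             # total energy use with rebound

baseline_energy = 100.0 * 1.0
naive_expectation = baseline_energy * (1 - 0.20)   # saving if demand were fixed
with_rebound = simulate()
rebound_share = (with_rebound - naive_expectation) / (baseline_energy - naive_expectation)
print(round(rebound_share, 2))                # fraction of the saving eaten back
```

With these assumed values, roughly half of the naively expected energy saving is offset by induced travel demand, which is the qualitative pattern the feedback-loop discussion above describes.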

QC 20150813

Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Blanc, Jean-luc. "Transmission de l'information et complexité des activités de populations neuronales". Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4720/document.

Texto completo da fonte
Resumo:
Dans cette thèse, nous abordons les problèmes de la transmission et du traitement de l'information par les assemblées de neurones, du point de vue de l'approche inter-disciplinaire des systèmes complexes en nous référant principalement aux formalismes de la théorie de l'information et de la théorie des systèmes dynamiques. Dans ce contexte, nous nous focalisons sur les mécanismes de représentation de l'information sensorielle par les activités neuronales à travers le codage neuronal. Nous explorons la structure de ce code, à plusieurs échelles grâce à l'étude de différents signaux électrophysiologiques issus de populations de neurones (signaux unitaires, LFP et EEG). Sur le plan méthodologique, nous avons implémenté différents indices permettant d'extraire objectivement l'information des activités neuronales, mais également d'en caractériser la dynamique sous-jacente à partir de séries temporelles de taille finie (le taux d'entropie). Nous avons également étudié un indicateur peu utilisé (le taux d'information mutuelle), qui permet de quantifier l'auto-organisation et les relations de couplage entre deux systèmes. Grâce à des approches théoriques et numériques, nous analysons les propriétés caractéristiques de ces indices et proposons leur utilisation dans le cadre de l'étude des systèmes neuronaux. Ce travail permet de caractériser la complexité de différentes activités neuronales associées aux dynamiques de transmission de l'information
In this thesis, we address the problem of transmission and information processing by neuronal assemblies, in terms of the interdisciplinary approach of complex systems, referring mainly to the formalisms of information theory and dynamical systems theory. In this context, we focus on the mechanisms underlying the representation of sensory information by neuronal activity through neural coding. We explore the structure of this code at several scales through the study of different electrophysiological signals from neuronal populations (single-unit, LFP and EEG). We have implemented various indices to objectively extract information from neural activity, but also to characterize the underlying dynamics from finite-size time series (the entropy rate). We also studied a little-used indicator (the mutual information rate), which quantifies self-organization and the coupling relations between two systems. Using theoretical and numerical approaches, we analyze some characteristic properties of these indices and propose their use in the context of the study of neural systems. This work allows us to characterize the complexity of different neuronal activities associated with information transmission dynamics.
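The entropy rate mentioned above can be estimated from a finite time series with plug-in block entropies, as the conditional entropy H(n) - H(n-1). This is a generic sketch of that estimator, not the thesis's exact implementation; the test signal is synthetic:

```python
from collections import Counter
from math import log2

def block_entropy(seq, n):
    """Shannon entropy (bits) of the empirical distribution of n-blocks."""
    blocks = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    total = len(blocks)
    return -sum((c / total) * log2(c / total) for c in Counter(blocks).values())

def entropy_rate(seq, n=3):
    """Finite-size estimate of the entropy rate: H(n) - H(n-1)."""
    return block_entropy(seq, n) - block_entropy(seq, n - 1)

# A perfectly periodic signal is fully predictable from its past, so its
# entropy rate is ~0 bits/symbol even though H(1) is maximal (1 bit).
periodic = [0, 1] * 200
print(round(entropy_rate(periodic), 3))
```

For short neural recordings, finite-size bias makes such plug-in estimates optimistic, which is why the characteristic properties of the indices need the kind of theoretical and numerical analysis the abstract describes.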
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Démare, Thibaut. "Une approche systémique à base d'agents et de graphes dynamiques pour modéliser l'interface logistique port-métropole". Thesis, Le Havre, 2016. http://www.theses.fr/2016LEHA0021/document.

Texto completo da fonte
Resumo:
Un système logistique est une composante essentielle d'un système spatial dans lequel les acteurs s'organisent autour d'infrastructures pour faire circuler des flux (de marchandises, d'information et financier) sur un territoire. L'organisation logistique globale résulte d'un processus auto-organisé et distribué de la part des acteurs. Ce travail vise à comprendre, à de multiples échelles, comment des acteurs autonomes et très hétérogènes (dans leurs modes de fonctionnements et dans leurs objectifs), s'organisent collectivement autour des infrastructures à leurs dispositions pour gérer des flux soumis à un ensemble de contraintes (temporelles, spatiales,...). On propose ici un modèle orienté agent permettant de simuler les processus de création et d'organisation des flux liés à la logistique sur un territoire. Le modèle prévoit de décrire l'interface entre les flux internationaux et les flux urbains afin de comprendre comment les dynamiques portuaires et urbaines cohabitent au sein du système. Le modèle intègre une dynamique structurelle et organisationnelle grâce aux graphes dynamiques afin de représenter l'évolution du système. Le modèle permet ainsi aux agents de s'adapter, comme dans la réalité, à des perturbations du système
A logistic system is an essential component of a spatial system. Actors organise themselves around infrastructures in order to move different kinds of flow (of goods, of information, or financial) over a territory. The logistic organisation emerges from an auto-organised and distributed process among the actors. This work aims to understand, at different scales, how autonomous and heterogeneous actors (in their goals and decision-making methods) collectively organise themselves around infrastructures to manage different kinds of flow, despite numerous constraints (temporal, spatial,...). We propose an agent-based model which makes it possible to simulate the processes that create and organise logistic flows over a territory. The model describes the interface between international and urban flows in order to understand how port and urban dynamics work together. The model integrates structural and organisational dynamics thanks to dynamic graphs in order to represent the evolution of this kind of system. Thus, the agents can adapt themselves to the system's perturbations, as in reality.
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Kerzerho, Vincent. ""Analogue Network of Converters": a DfT Technique to Test a Complete Set of ADCs and DACs Embedded in a Complex SiP or SoC". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2008. http://tel.archives-ouvertes.fr/tel-00364546.

Texto completo da fonte
Resumo:
A new test method for the ADCs and DACs embedded in a complex system was developed, taking into account the new constraints affecting test. These constraints, which stem from system design trends, are a reduced number of access points to the inputs/outputs of the system's analogue blocks and a rapidly growing number, and performance, of the integrated converters. The proposed method consists in connecting the DACs and ADCs in the analogue domain so that only digital test instruments are needed to generate and capture the test signals. A signal processing algorithm was developed to discriminate the errors of the DACs from those of the ADCs. This algorithm was validated by simulation and by experiments on products marketed by NXP. The last part of the thesis consisted in developing new applications for the algorithm.
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Oruc, Sercan. "Modeling The Dynamics Of Creative Industries: The Case Of Film Industries". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611988/index.pdf.

Texto completo da fonte
Resumo:
Dynamic complexity occurs in every social structure. The film industry, as a type of creative industry, constitutes a dynamic environment where uncertainty is at high levels. This complexity of the environment renders the more traditional operations research models somewhat ineffective and thus requires a dynamic analysis. In this study, a model showing the dynamics of film exhibition is given. The interactions within and between the theatrical and DVD sales channels are implemented by the model. Later on, the possible effects of piracy on the model are discussed, using inferences obtained from the created model. The model is examined with scenario and sensitivity analyses. All the modeling studies are done with commercial dynamic systems modeling software. The model can also be extended to the whole film industry, or to some other creative industries such as the publishing industry.
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Samal, Mahendra Engineering &amp Information Technology Australian Defence Force Academy UNSW. "Neural network based identification and control of an unmanned helicopter". Awarded by:University of New South Wales - Australian Defence Force Academy. Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/43917.

Texto completo da fonte
Resumo:
This research work provides the development of an Adaptive Flight Control System (AFCS) for autonomous hover of a Rotary-wing Unmanned Aerial Vehicle (RUAV). Due to the complex, nonlinear and time-varying dynamics of the RUAV, indirect adaptive control using Model Predictive Control (MPC) is utilised. The performance of the MPC mainly depends on the model of the RUAV used for predicting its future behaviour. Due to the complexities associated with the RUAV dynamics, a neural network based black-box identification technique is used for modelling the behaviour of the RUAV. An auto-regressive neural network architecture is developed for offline and online modelling purposes. A hybrid modelling technique that exploits the advantages of both the offline and the online models is proposed. In the hybrid modelling technique, the predictions from the offline trained model are corrected by using the error predictions from the online model at every sample time. To reduce the computational time for training the neural networks, a principal component analysis based algorithm that reduces the dimension of the input training data is also proposed. This approach is shown to reduce the computational time significantly. These identification techniques are validated in numerical simulations before flight testing on the Eagle and RMAX helicopter platforms. Using the successfully validated models of the RUAVs, a Neural Network based Model Predictive Controller (NN-MPC) is developed, taking the non-linearity of the RUAVs and the constraints into consideration. The parameters of the MPC are chosen to satisfy the performance requirements imposed on the flight controller. The optimisation problem is solved numerically using nonlinear optimisation techniques. The performance of the controller is extensively validated using numerical simulation models before flight testing. 
The effects of actuator and sensor delays and noise, along with wind gusts, are taken into account during these numerical simulations. In addition, the robustness of the controller is validated numerically for possible parameter variations. The numerical simulation results are compared with a baseline PID controller. Finally, the NN-MPCs are flight tested for height control and autonomous hover. For these, SISO as well as multiple-SISO controllers are used. The flight tests are conducted in varying weather conditions to validate the utility of the control technique. The NN-MPC in conjunction with the proposed hybrid modelling technique is shown to handle additional disturbances successfully. Extensive flight test results provide justification for the use of the NN-MPC technique as a reliable technique for the control of nonlinear complex dynamic systems such as RUAVs.
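The PCA step used above to shrink the network's input dimension can be sketched with an SVD on centred training data. The synthetic 10-D signal driven by two latent sources, and the 99% variance threshold, are illustrative assumptions, not the thesis's flight data:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 500)
# 500 samples of a 10-D input actually driven by only two latent sources
sources = np.stack([np.sin(t), np.cos(3 * t)], axis=1)          # (500, 2)
mixing = np.array([[1.0, 0.5, -0.3, 0.8, 0.1, 0.0, 0.7, -0.2, 0.4, 0.9],
                   [0.2, -0.7, 0.6, 0.1, -0.5, 0.9, 0.0, 0.3, -0.8, 0.4]])
X = sources @ mixing                                            # (500, 10)

Xc = X - X.mean(axis=0)                  # centre the training data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()    # variance fraction per component
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1   # keep 99% variance
X_reduced = Xc @ Vt[:k].T                # the network now trains on k inputs

print(X.shape[1], "->", k)
```

The training set shrinks from 10 input columns to the number of components needed for 99% of the variance (here 2), which is the source of the training-time saving the abstract reports.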
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Lucas, Iris. "Dynamique et contrôle d'un marché financier avec une approche système multi-agents". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMLH39/document.

Texto completo da fonte
Resumo:
Cette thèse propose une réflexion autour de l'étude des marchés financiers sous le prisme des systèmes complexes. Tout d'abord une description mathématique est proposée pour représenter le processus de prises de décision des agents dès lors où celui-ci bien que représentant les intérêts individuels d'un agent, est également influencé par l'émergence d'un comportement collectif. La méthode est particulièrement applicable lorsque le système étudié est caractérisé par une dynamique non-linéaire. Une application du modèle est proposée au travers de l'implémentation d'un marché artificiel boursier avec une approche système multi-agents. Dans cette application la dynamique du marché est décrite à la fois aux niveaux microscopiques (comportement des agents) et macroscopique (formation du prix). Le processus de décision des agents est défini à partir d'un ensemble de règles comportementales reposant sur des principes de logique floue. La dynamique de la formation du prix repose sur une description déterministe à partir des règles d'appariement d'un carnet d'ordres central tel que sur NYSE-Euronext-Paris. Il est montré que le marché artificiel boursier tel qu'implémenté est capable de répliquer plusieurs faits stylisés des marchés financiers : queue de distribution des rendements plus épaisse que celle d'une loi normale et existence de grappes de volatilité (ou volatility clustering). Par la suite, à partir de simulations numériques il est proposé d'étudier trois grandes propriétés du système : sa capacité d'auto-organisation, de résilience et sa robustesse. Dans un premier temps une méthode est introduite pour qualifier le niveau d'auto-organisation du marché. Nous verrons que la capacité d'auto-organisation du système est maximisée quand les comportements des agents sont diversifiés. Ensuite, il est proposé d'étudier la réponse du système quand celui-ci est stressé via la simulation de chocs de marché. 
Dans les deux analyses, afin de mettre en évidence comment la dynamique globale du système émerge à partir des interactions et des comportements des agents, des résultats numériques sont systématiquement apportés puis discutés. Nos résultats montrent notamment qu'un comportement collectif grégaire apparait à la suite d'un choc, et entraîne une incapacité temporaire du système à s'auto-organiser. Finalement, au travers des simulations numériques il peut être également remarqué que le marché artificiel boursier implémenté est plus sensible à de faibles perturbations répétées qu'à un choc plus important mais unique
This thesis suggests reflection in studying financial markets through complex systems prism.First, an original mathematic description for describing agents' decision-making process in case of problems affecting by both individual and collective behavior is introduced. The proposed method is particularly applicable when studied system is characterized by non-linear, path dependent and self-organizing interactions. An application to financial markets is proposed by designing a multi¬agent system based on the proposed formalization.In this application, we propose to implement a computational agent-based financial market in which the system is described in both a microscopie and macroscopic levels are proposed. The agents' decision-making process is based on fuzzy logic rules and the price dynamic is purely deten-ninistic according to the basis matching rules of a central order book as in NYSE-Euronext-Paris. We show that, while putting most parameters under evolutionary control, the computational agent- based system is able to replicate several stylized facts of financial time series (distributions of stocks returns showing a heavy tau l with positive excess kurtosis and volatility clustering phenomenon).Thereafter, with numerical simulations we propose to study three system's properties: self-organization, resilience and robustness. First a method is introduced to quantify the degree of selforganization which ernerges in the system and shows that the capacity of self-organization is maximized when the agents' behaviors are heterogeneous. Secondly, we propose to study the system's response when market shock is simulated. in both cases, numerical results are presentedI and analyzed, showing how the global market behavior emerges from specific individual behavior interactions.Our results notably show that the emergence of collective herding behavior when market shock occurs leads to a temporary disruption on the system self-organization. 
Finaily, numerical simulations highlight that our artificial financial market can be able to absorb strong mono-shock but be lead to the rupture by low but repeated perturbations
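The two stylized facts this abstract mentions (heavy-tailed returns and volatility clustering) can be checked numerically with standard diagnostics. The sketch below is illustrative only and is not the thesis's order-book model: it uses a generic volatility-feedback process (a GARCH-like stand-in) together with sample excess kurtosis and the lag-1 autocorrelation of absolute returns.

```python
import random
import statistics

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (zero for a normal law)."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3

def lag1_autocorr(xs):
    """Lag-1 autocorrelation; applied to |returns| it signals volatility clustering."""
    m = statistics.fmean(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

def simulate_returns(n=20_000, seed=1):
    """Toy volatility-feedback process: each return is drawn with a variance
    that depends on past returns, so large moves cluster and tails fatten."""
    rng = random.Random(seed)
    var, w, a, b = 1.0, 0.05, 0.1, 0.85  # hypothetical parameters
    out = []
    for _ in range(n):
        r = rng.gauss(0.0, var ** 0.5)
        out.append(r)
        var = w + a * r * r + b * var  # volatility feeds back on itself
    return out

rets = simulate_returns()
print("excess kurtosis:", round(excess_kurtosis(rets), 2))           # typically well above 0
print("autocorr of |r|:", round(lag1_autocorr([abs(r) for r in rets]), 2))  # typically positive
```

Both diagnostics come out near zero for i.i.d. Gaussian returns, so positive values are the signature of the fat tails and volatility clustering described above.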
ABNT, Harvard, Vancouver, APA, etc. styles
