Theses / dissertations on the topic "Distributed environment simulator"

Follow this link to see other types of publications on the topic: Distributed environment simulator.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

See the top 50 theses / dissertations for your research on the topic "Distributed environment simulator".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is present in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile an accurate bibliography.

1

Alvarez, Valera Hernan Humberto. "An energy saving perspective for distributed environments : Deployment, scheduling and simulation with multidimensional entities for Software and Hardware". Electronic Thesis or Diss., Pau, 2022. https://theses.hal.science/tel-04116013.

Abstract:
Nowadays, strong economic growth and extreme weather conditions increased global electricity demand by more than 6% in 2021, after the COVID pandemic. The fast recovery of this demand rapidly increased electricity consumption. Even though renewable sources show significant growth, electricity production from coal and gas sources has reached a historical level. On the other hand, the energy consumption of the digital technology sector depends on its growth and its degree of energy efficiency. Although devices at all deployment levels are energy efficient today, their massive use means that global energy consumption continues to grow. All these data show the need to use the energy of these devices wisely. For that reason, this thesis addresses the dynamic (re)deployment of software components (containers or virtual machines) and their data to save energy. To this end, we designed and developed intelligent distributed scheduling algorithms to decrease global power consumption while preserving the applications' quality of service. Such algorithms execute migration and duplication procedures considering the natural relation between hardware components' load/features and power consumption. To do so, they implement a novel form of decentralized negotiation based on a distributed middleware we created (Kaligreen) and on multidimensional data structures. To operate and assess the algorithms above, appropriate hardware and software tools are essential. Here, our choice was to develop our own simulation tool, called PISCO. PISCO is a versatile and straightforward simulator that allows users to concentrate only on their scheduling strategies. It enables network topologies to be abstracted as data structures whose elements are devices indexed by one or more criteria. Additionally, it mimics the execution of microservices by allocating resources according to various scheduling heuristics. We used PISCO to implement, run, and test our scheduling algorithms.
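The abstract above describes allocating microservice resources to devices according to energy-aware scheduling heuristics. As a hedged illustration only (the class and field names below are invented for this sketch, not taken from PISCO or the thesis), such a placement heuristic might look like:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity: float          # available CPU units
    idle_power: float        # watts consumed just by being awake
    watts_per_unit: float    # marginal watts per CPU unit of load
    load: float = 0.0

    def power_delta(self, demand: float) -> float:
        # Marginal power cost of hosting this demand; the first
        # workload on an idle device also pays its idle power.
        wake = self.idle_power if self.load == 0 else 0.0
        return wake + demand * self.watts_per_unit

def schedule(devices, demand: float) -> Device:
    """Greedy energy-aware placement: choose the feasible device
    with the smallest marginal power increase."""
    feasible = [d for d in devices if d.capacity - d.load >= demand]
    if not feasible:
        raise RuntimeError("no device can host the microservice")
    best = min(feasible, key=lambda d: d.power_delta(demand))
    best.load += demand
    return best
```

A greedy rule like this naturally consolidates load onto already-awake devices, since waking an idle device costs its idle power on top of the marginal load cost.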
2

Agyeman, Addai Daniel. "A Cloud Based Framework For Managing Requirements Change In Global Software Development". University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1593266480093711.

3

Ma, Qingwei. "Distributed Manufacturing Simulation Environment". Ohio University / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1038409280.

4

Yu, Xiaoning. "Distributed interactive simulation". Thesis, Brunel University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310078.

5

Chiou, Jen-Diann. "A distributed simulation environment for multibody physics". Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50509.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 1998.
Includes bibliographical references (leaves 128-134).
A distributed simulation environment that can be used to model multibody physics is developed. The software design is based on the object-oriented paradigm and is implemented in C++ to run on a single workstation or on multiple processors in parallel. It provides facilities to set up a multibody physics simulation, including arbitrary 3D geometric representation, particle interactions such as contacts and constraints, and visualization for postprocessing. Contact detection, the process of automatically identifying the geometric overlap between objects, is generally the most time-consuming procedure in the overall discrete element analysis pipeline. The computational cost of contact detection grows as a function of both the number of particles and the complexity of the geometric representation of each body. This thesis presents algorithms that significantly reduce the computational cost of the contact detection problem. The hashtable-based spatial reasoning algorithm demonstrates O(M) performance, where M is the number of particles in the simulation system, for a restricted set of particles. The discrete function representation (DFR) scheme is employed to model the surface geometry of complex 3D objects. DFR-based contact detection between a pair of objects exhibits O(N) running time, where N is the number of surface points used to represent each object. In practice this results in a significant speedup over traditional techniques. A distributed DEM simulation environment is built on top of a set of software tools which exploit the parallelism embedded in the DEM analysis and which take advantage of a high-speed communications network to achieve good parallel performance. The goal of reducing the total computing time of large-scale simulation problems to order O(N) is shown to be achievable using the algorithms described.
by Jen-Diann Chiou.
Ph.D.
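Hashtable-based spatial reasoning of the kind named in this abstract is commonly implemented as a uniform spatial hash. The sketch below illustrates that general technique, not the thesis's code: each particle is hashed into a grid cell sized near the largest particle diameter, and only particles in the same or neighboring cells become candidate contact pairs, which gives roughly O(M) behavior when particle sizes are bounded.

```python
from collections import defaultdict
from itertools import product

def cell_of(pos, h):
    """Grid cell index of a point for cell size h."""
    return tuple(int(c // h) for c in pos)

def candidate_pairs(positions, h):
    """Return index pairs of particles close enough to need an
    exact contact test; all other pairs are culled for free."""
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[cell_of(p, h)].append(i)
    offsets = list(product((-1, 0, 1), repeat=len(positions[0])))
    pairs = set()
    for i, p in enumerate(positions):
        cx = cell_of(p, h)
        for off in offsets:  # scan the 3^d neighborhood of the cell
            for j in grid.get(tuple(a + b for a, b in zip(cx, off)), []):
                if i < j:
                    pairs.add((i, j))
    return pairs
```

Only the surviving pairs are handed to the expensive geometric overlap test, which is where the speedup over all-pairs checking comes from.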
6

Mao, Wei Ph D. Massachusetts Institute of Technology. "Scalable, probabilistic simulation in a distributed design environment". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/55254.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 110-114).
Integrated simulations have been used to predict and analyze the integrated behavior of large, complex product and technology systems throughout their design cycles. During the process of integration, uncertainties arise from many sources, such as material properties, manufacturing variations, inaccuracy of models and so on. Concerns about uncertainty and robustness in large-scale integrated design can be significant, especially under the situations where the system performance is sensitive to the variations. Probabilistic simulation can be an important tool to enable uncertainty analysis, sensitivity analysis, risk assessment and reliability-based design in integrated simulation environments. Monte Carlo methods have been widely used to resolve probabilistic simulation problems. To achieve desired estimation accuracy, typically a large number of samples are needed. However, large integrated simulation systems are often computationally heavy and time-consuming due to their complexity and large scale, making the conventional Monte Carlo approach computationally prohibitive. This work focuses on developing an efficient and scalable approach for probabilistic simulations in integrated simulation environments. A predictive machine learning and statistical approach is proposed in this thesis.
Using random sampling of the system input distributions and running the integrated simulation for each input state, a random sample of limited size can be obtained for each system output. Based on this limited output sample, a multilayer feed-forward neural network is constructed as an estimator for the underlying cumulative distribution function. A mathematical model for the cumulative probability distribution function is then derived and used to estimate the underlying probability density function by differentiation. Statistically processing the sample used by the neural network is important in order to provide a good training set to the neural network estimator. Combining the statistical information from the empirical output distribution and from kernel estimation, a training set containing as much information about the underlying distribution as possible is obtained. A back-propagation algorithm using adaptive learning rates is implemented to train the neural network estimator. To incorporate a required monotonicity hint for the cumulative probability distribution function into the learning process, a novel hint-reinforced back-propagation approach is created. The neural network estimator trained by empirical and kernel information (NN-EK estimator) can then be obtained. To further improve the estimation, the statistical method of bootstrap aggregating (bagging) is used: multiple versions of the estimator are generated using bootstrap resampling and aggregated. A prototype implementation of the proposed approach is developed, and test results on different models show its advantage over the conventional Monte Carlo approach, reducing by tens of times the time needed to achieve the same level of estimation accuracy.
by Wei Mao.
Ph.D.
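The training-set construction described above combines the empirical output distribution with kernel estimation. As a minimal sketch of the kernel-smoothing half of that idea only (a plain Gaussian-kernel CDF estimate; function names are illustrative, and the thesis's NN-EK estimator is far more involved):

```python
import math

def smoothed_cdf(sample, bandwidth):
    """Kernel-smoothed empirical CDF built from a limited Monte Carlo
    output sample: average of Gaussian CDFs centered at each sample
    point. The result is smooth and monotone in x, which is exactly
    the property a monotonicity hint would enforce on a learned CDF."""
    n = len(sample)
    root2 = math.sqrt(2.0)

    def F(x):
        # Phi((x - xi) / h) via the error function, averaged over the sample
        return sum(0.5 * (1.0 + math.erf((x - xi) / (bandwidth * root2)))
                   for xi in sample) / n

    return F
```

Differentiating such a smoothed CDF (analytically, it is a kernel density estimate) recovers the probability density, mirroring the derivation step the abstract mentions.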
7

Lopes Diaz, Adriana. "An Object-oriented reflective simulation environment for distributed algorithms". Carleton University dissertation (Computer Science), Ottawa, 1996.

8

Jang, Duh 1957. "Realization of distributed experimental frame in DEVS-SCHEME and simulation environment". Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276665.

Abstract:
The thesis describes a realization of distributed experimental frame concepts in DEVS-SCHEME, an object-oriented simulation environment. Also discussed are the design and implementation issues concerning the attachment of frame components to a model in a given model structure. The algorithm for the attachments is derived to set up the model composition and model couplings when needed. An example of a simplified computer system consisting of a CPU and a memory management unit (MGMT) is presented to demonstrate how such a system is observed and experimented with under centralized and decentralized experimental frames. A graphical interactive interface is provided to facilitate the attachment of frame components to models. The simulation shows that the theory regarding decentralized experimental frames is correct and feasible. Some prospective research topics and future study activities are also brought up.
9

Miller, John. "Distributed virtual environment scalability and security". Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/241109.

Abstract:
Distributed virtual environments (DVEs) have been an active area of research and engineering for more than 20 years. The most widely deployed DVEs are network games such as Quake, Halo, and World of Warcraft (WoW), with millions of users and billions of dollars in annual revenue. Deployed DVEs remain expensive centralized implementations despite significant research outlining ways to distribute DVE workloads. This dissertation shows that previous DVE research evaluations are inconsistent with deployed DVE needs. Assumptions about avatar movement and proximity - fundamental scale factors - do not match WoW's workload, and likely the workload of other deployed DVEs. Alternate workload models are explored and preliminary conclusions presented. Using realistic workloads, it is shown that a fully decentralized DVE cannot be deployed to today's consumers, regardless of its overhead. Residential broadband speeds are improving, and this limitation will eventually disappear. When it does, appropriate security mechanisms will be a fundamental requirement for technology adoption. A trusted auditing system ('Carbon') is presented which has good security, scalability, and resource characteristics for decentralized DVEs. When performing exhaustive auditing, Carbon adds 27% network overhead to a decentralized DVE with a WoW-like workload. This resource consumption can be reduced significantly, depending upon the DVE's risk tolerance. Finally, the Pairwise Random Protocol (PRP) is described. PRP enables adversaries to fairly resolve probabilistic activities, an ability missing from most decentralized DVE security proposals. Thus, this dissertation's contribution is to address two of the obstacles to deploying research on decentralized DVE architectures: first, the lack of evidence that research results apply to existing DVEs; second, the lack of security systems combining appropriate security guarantees with acceptable overhead.
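The abstract does not specify how the Pairwise Random Protocol works, but fair probabilistic resolution between mutually distrusting peers is classically built from hash commitments. The sketch below shows that generic commit-then-reveal pattern as an assumption, not the dissertation's actual protocol:

```python
import hashlib
import secrets

def new_nonce() -> bytes:
    """Fresh random nonce for one resolution round."""
    return secrets.token_bytes(16)

def commit(nonce: bytes) -> bytes:
    """Hash commitment each peer publishes before anyone reveals."""
    return hashlib.sha256(nonce).digest()

def fair_random(nonce_a: bytes, nonce_b: bytes,
                commit_a: bytes, commit_b: bytes) -> int:
    """After both peers have exchanged commitments and then revealed
    their nonces, derive one random bit neither peer could bias:
    each nonce was fixed before its owner saw the other's value."""
    assert commit(nonce_a) == commit_a, "peer A's reveal does not match"
    assert commit(nonce_b) == commit_b, "peer B's reveal does not match"
    return hashlib.sha256(nonce_a + nonce_b).digest()[0] & 1
```

Because each commitment binds its nonce before the other side's value is known, neither adversary can steer the combined outcome, which is the fairness property the dissertation attributes to PRP.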
10

Chen, Min. "A distributed object-oriented discrete event-driven simulation environment-DODESE". FIU Digital Commons, 1991. http://digitalcommons.fiu.edu/etd/2140.

Abstract:
A new distributed object-oriented discrete event-driven simulation environment, DODESE, is developed to provide a common framework for simulation model design and implementation. DODESE can be used to define a simulation, including all the simulation objects participating in it, while the execution of the simulation can be interactively monitored. The system combines the strengths of object-oriented paradigms and database technology to make computer simulation more powerful, and achieves the goals of object-orientation, distribution, reusability, maintainability, and extensibility. The system runs concurrently on two Sun workstations connected by an Ethernet. One workstation performs the simulation tasks while the other displays the status of the simulation interactively. Both workstations communicate through the GemStone database, so a mechanism is designed for synchronization and concurrency control. DODESE is implemented using OPAL (GemStone's data definition and manipulation language), C, and Xlib.
11

Floros, Nikolaos. "An incompressible flow simulation environment for parallel and distributed computers". Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241983.

12

DeSa, Colin Joseph. "Distributed problem solving environments for scientific computing". Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-08042009-040307/.

13

Pon, Carlos (Carlos Roberto). "Time warping - waveform relaxation (TW - WR) in a distributed simulation environment". Carleton University dissertation (Engineering Electronics), Ottawa, 1995.

14

Guan, Shichao. "A Multi-layered Scheme for Distributed Simulations on the Cloud Environment". Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32121.

Abstract:
In order to improve simulation performance and integrate simulation resources among geographically distributed locations, the concept of distributed simulation is proposed. Several types of distributed simulation standards, such as DIS and HLA are established to formalize simulations and achieve reusability and interoperability of simulation components. In order to implement these distributed simulation standards and manage the underlying system of distributed simulation applications, Grid Computing and Cloud Computing technologies are employed to tackle the details of operation, configuration, and maintenance of the simulation platforms in which simulation applications are deployed. However, for modelers who may not be familiar with the management of distributed systems, it is challenging to create a simulation-run-ready environment that incorporates different types of computing resources and network environments. In this thesis, we propose a new multi-layered cloud-based scheme for enabling modeling and simulation based on different distributed simulation standards. The scheme is designed to ease the management of underlying resources and achieve rapid elasticity, providing unlimited computing capability to end users; energy consumption, security, multi-user availability, scalability and deployment issues are all considered. We describe a mechanism for handling diverse network environments. With its adoption, idle public resources can easily be configured as additional computing capabilities for the local resource pool. A fast deployment model is built to relieve the migration and installation process of this platform. An energy conservation strategy is utilized to reduce the energy consumption of computing resources. Security components are also implemented to protect sensitive information and block malicious attacks in the cloud. 
In the experiments, the proposed scheme is compared with its corresponding grid computing platform; the cloud computing platform achieves a similar performance, but incorporates many of the cloud's advantages.
15

Ryan, Matthew D. "Virtual relativity : a relativistic model for distributed interactive simulation". Thesis, University of Reading, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252198.

16

Yasar, Ansar-Ul-Haque, and Adeel Jameel. "A Computational Analysis of Driving Variations in a Distributed Simulated Driving Environment". Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10009.

Abstract:

This Master's thesis reports research conducted in the Cognitive Engineering group at Linköping University (LiU). The report describes and discusses possible driving variations at T-intersections. In this study we tested how a voice-based command (GPS) system and traffic lights influenced driving behavior. The computational study was conducted in a multi-user driving simulation environment at Linköping University. A total of 12 groups, each consisting of 4 persons, participated in the study. The participants also completed a paper survey with their comments. To study driving behavior we analyzed the conflict indicators at the T-intersection, selecting Post Encroachment Time (PET), speed, and acceleration as conflict indicators.
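Post Encroachment Time, one of the conflict indicators named above, is simply the gap between the first road user leaving the conflict area and the second one entering it. A trivial sketch (the timestamps are assumed to come from the driving simulator's logs):

```python
def post_encroachment_time(exit_first: float, entry_second: float) -> float:
    """PET in seconds: time between the first road user leaving the
    conflict area of the intersection and the second one entering it.
    The smaller the value, the nearer the miss."""
    pet = entry_second - exit_first
    if pet < 0:
        raise ValueError("road users occupied the conflict area simultaneously")
    return pet
```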

17

Kunnamareddi, Sadhishkumar. "Programmable logic controller emulator enhancements to facilitate a distributed manufacturing simulation environment". Ohio : Ohio University, 2001. http://www.ohiolink.edu/etd/view.cgi?ohiou1173980723.

18

Hsiao, Chen-Fu. "Development of a web-based distributed interactive simulation (DIS) environment using javascript". Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/43928.

Abstract:
Approved for public release; distribution is unlimited
This thesis investigated the current infrastructure for web-based simulations using the DIS network protocol. The main technologies studied were WebSockets, WebRTC, and WebGL. The thesis sought readily available means to establish networks for interchanging DIS messages (PDUs), so the WebSocket gateway server from the Open-DIS project was used to construct a client-server structure, and the PeerJS API was used to construct a peer-to-peer structure. WebGL was used to create a scene and render 3D models in browsers. A first-person-shooter tank game was used as a test application with both the WebSocket and WebRTC infrastructures. Experiments included measuring the rate of sending and receiving DIS packets and analyzing the tank game with profiling tools. All experiments were run on the Chrome and Firefox browsers in a closed network. The results showed that both the WebSocket and WebRTC infrastructures were capable of supporting web-based DIS simulation, and also revealed significant performance differences between Chrome and Firefox; currently, the best performance is provided by Firefox using the WebRTC framework. The analysis of the tank game showed that most of the browser's computational resources were spent on the WebGL graphics, with only a small percentage expended on exchanging DIS packets.
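Every DIS PDU exchanged in such an infrastructure starts with the fixed 12-byte header defined by IEEE 1278.1. As a sketch of packing that header (Python rather than the thesis's JavaScript; the default field values here are illustrative assumptions):

```python
import struct

# IEEE 1278.1 PDU header: protocol version, exercise ID, PDU type,
# protocol family, timestamp, total length, padding -- 12 bytes,
# big-endian (network byte order).
DIS_HEADER = struct.Struct(">BBBBIHH")

ENTITY_STATE_PDU = 1  # PDU type code for Entity State

def pack_header(pdu_type: int, timestamp: int, body_len: int,
                version: int = 6, exercise: int = 1,
                family: int = 1) -> bytes:
    """Pack the fixed header that prefixes every DIS PDU; the length
    field counts the header plus the PDU body."""
    return DIS_HEADER.pack(version, exercise, pdu_type, family,
                           timestamp, DIS_HEADER.size + body_len, 0)
```

A gateway like the one described forwards these binary PDUs unchanged between the UDP network and WebSocket clients, which is why the header layout matters on both sides.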
19

Ma, Yifei. "A Database Supported Modeling Environment for Pandemic Planning and Course of Action Analysis". Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23264.

Abstract:
Pandemics, such as the 2009 H1N1 and the 2003 SARS outbreaks, can significantly impact public health and society. In addition to analyzing historic epidemic data, computational simulation of epidemic propagation processes and disease control strategies can help us understand the spatio-temporal dynamics of epidemics in the laboratory. Consequently, the public can be better prepared and the government can control future epidemic outbreaks more effectively. Recently, epidemic propagation simulation systems using high performance computing technology have been proposed and developed to understand disease propagation processes. However, run-time infection situation assessment and intervention adjustment, two important steps in modeling disease propagation, are not well supported in these simulation systems. In addition, while these systems are computationally efficient, most of them have limited capabilities in terms of modeling interventions in realistic scenarios. In this dissertation, we focus on building a modeling and simulation environment for epidemic propagation and propagation control strategies. The objective of this work is to design a modeling environment that supports the previously missing functions while performing well on the expected features, such as modeling fidelity, computational efficiency, and modeling capability. Our proposed methodologies for building such a modeling environment are: 1) decoupled and co-evolving models for disease propagation, situation assessment, and propagation control strategy, and 2) assessing situations and simulating control strategies using relational databases. Our motivation for exploring these methodologies is as follows: 1) a decoupled and co-evolving model allows us to design modules for each function separately and makes this complex modeling system design simpler, and 2) simulating propagation control strategies using relational databases improves the modeling capability and the human productivity of using this modeling environment. To evaluate our proposed methodologies, we have designed and built a loosely coupled, database-supported epidemic modeling and simulation environment. With detailed experimental results and realistic case studies, we demonstrate that our modeling environment provides the missing functions and greatly enhances many expected features, such as modeling capability, without significantly sacrificing computational efficiency and scalability.
Ph. D.
20

Liu, Dan. "Design and Analysis of an Interoperable HLA-based Simulation System over a Cloud Environment". Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35675.

Abstract:
Distributed simulation over a Cloud environment is still a new subject. Cloud computing is expected to bring new benefits to conventional distributed simulation, including elasticity of computation resources, cost savings on investment, and convenience of service accessibility. Some research has been done on applying Cloud computing to distributed simulation; however, those works have various drawbacks and limitations. Lack of interoperability across Cloud platforms is one of the critical drawbacks among them, and it can greatly limit the usability and flexibility of distributed simulation over a Cloud environment. Based on an investigation of Cloud computing and of existing distributed simulation systems over Cloud environments, a novel interoperable HLA-based (High Level Architecture) simulation system over a Cloud environment, ISSC (Interoperable Simulation System over a Cloud Environment), is proposed in this thesis. ISSC aims to address the interoperability issue of simulation systems across various Cloud platforms. It employs OCCI and a set of technologies, including Ruby on Rails, OpenVPN, and RESTful web services, to build interoperability across Cloud platforms, and adopts a distributed architecture to provide flexibility and expansibility. The prototype and the experiments performed demonstrate that ISSC is a reliable and effective solution for an interoperable simulation system over a diverse Cloud environment.
21

Abed, Nagy Youssef. "Physical dynamic simulation of shipboard power system components in a distributed computational environment". FIU Digital Commons, 2007. http://digitalcommons.fiu.edu/etd/1100.

Abstract:
Shipboard power systems have different characteristics from utility power systems. In a shipboard power system it is crucial that the systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved by using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency. Low-frequency studies are used to examine the behavior of shipboard power systems under load switching and faults; high-frequency studies were used to predict abnormal conditions due to overvoltage and components' harmonic behavior. Different experiments were conducted to validate the developed models, and the simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis; this technique is crucial for fault detection in shipboard power systems given the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies. This modeling methodology can be used to evaluate and predict the future behavior of the NPS components in the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before integrating it into the system.
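Wavelet-based feature extraction of the kind described typically starts from a discrete wavelet transform of the sampled current signal. As a generic illustration only (a one-level Haar transform, not the thesis's specific wavelet or implementation):

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: the
    approximation coefficients follow the slow fundamental of the
    current waveform, while the detail coefficients expose the sharp
    transients that serve as fault and harmonic features."""
    assert len(signal) % 2 == 0, "need an even number of samples"
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal), 2)]
    return approx, detail
```

Applying the transform recursively to the approximation coefficients yields the multi-resolution decomposition that wavelet diagnosis methods feed into their classifiers.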
22

Cho, Hyup Jae. "Discrete event system homomorphisms: Design and implementation of quantization-based distributed simulation environment". Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284060.

Abstract:
The demand for parallel and distributed discrete event simulation (PDES) is rapidly growing due to the advent of middleware programs which allow multiple processes running on one or more machines to interact across networks. The High Level Architecture (HLA) proposed by the DoD is the standard middleware designed for distributed simulation environments. DEVS/HLA, developed in this dissertation, is a parallel and distributed modeling and simulation environment which employs a sound system theory, a modeling formalism (extended DEVS), and system homomorphisms in its design. The environment includes a highly efficient message filtering scheme called quantization and is based on a risk-free PDES simulation protocol that exploits simultaneous events. In its implementation, DEVS/HLA employs hierarchical and modular object-oriented technology. To the user it presents a high-level modeling paradigm and a highly reliable distributed HLA-compliant environment. This dissertation presents an analysis of quantization-based message filtering and some very promising empirical results that clarify the tradeoff between reduced message bandwidth demand and the error incurred due to message reduction. The results relate bandwidth utilization and error to quantum size for federations executing on DEVS/HLA on Unix and NT networking platforms in both LAN and WAN environments. The theoretical and empirical results indicate that predictive quantization can be very scalable, due to reduced local computation demands as well as extremely favorable communication-reduction/simulation-fidelity tradeoffs. How the solution extends to real-time DEVS simulation, and the implications for the design of real-time infrastructures, are topics for further research.
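Quantization-based message filtering, as described above, suppresses state updates until the state crosses a quantum boundary, bounding the error by the quantum size. A minimal sketch of that filtering rule (names are illustrative, not from DEVS/HLA):

```python
class Quantizer:
    """Publish a state value only when it has moved at least one
    quantum away from the last published value: bandwidth drops,
    and the receiver's error stays bounded by the quantum."""

    def __init__(self, quantum: float, initial: float = 0.0):
        self.quantum = quantum
        self.last_sent = initial

    def update(self, value: float):
        """Return the value to publish, or None to suppress the message."""
        if abs(value - self.last_sent) >= self.quantum:
            self.last_sent = value
            return value
        return None
```

Enlarging the quantum reduces message traffic at the cost of fidelity, which is exactly the tradeoff curve the dissertation measures.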
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Gu, Yunfeng. "Data Distribution Management In Large-scale Distributed Environments". Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/20691.

Texto completo da fonte
Resumo:
Data Distribution Management (DDM) deals with two basic problems: how to distribute data generated at the application layer among underlying nodes in a distributed system and how to retrieve data back whenever it is necessary. This thesis explores DDM in two different network environments: peer-to-peer (P2P) overlay networks and cluster-based network environments. DDM in P2P overlay networks is considered a more complete concept of building and maintaining a P2P overlay architecture than a simple data fetching scheme, and is closely related to the more commonly known associative searching or queries. DDM in the cluster-based network environment is one of the important services provided by the simulation middleware to support real-time distributed interactive simulations. The only common feature shared by DDM in both environments is that both are built to provide a data indexing service. Because of these fundamental differences, we have designed and developed a novel distributed data structure, the Hierarchically Distributed Tree (HD Tree), to support range queries in P2P overlay networks. All the relevant problems of a distributed data structure, including scalability, self-organization, fault-tolerance, and load balancing, have been studied. Both theoretical analysis and experimental results show that the HD Tree is able to give a complete view of system states when processing multi-dimensional range queries at different levels of selectivity and in various error-prone routing environments. On the other hand, a novel DDM scheme, the Adaptive Grid-based DDM scheme, is proposed to improve DDM performance in the cluster-based network environment. This new DDM scheme evaluates the input size of a simulation based on probability models. The optimum DDM performance is best approached by adapting the simulation to run in a mode that is most appropriate to the size of the simulation.
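The grid-based matching that underlies DDM schemes of this kind can be sketched as follows (an illustrative toy, not the thesis's Adaptive Grid-based scheme): publication and subscription regions are mapped onto grid cells, and a publication is routed to every subscription sharing at least one cell.

```python
def cells(region, cell_size):
    """Grid cells overlapped by an axis-aligned region ((x0, x1), (y0, y1))."""
    (x0, x1), (y0, y1) = region
    return {(cx, cy)
            for cx in range(int(x0 // cell_size), int(x1 // cell_size) + 1)
            for cy in range(int(y0 // cell_size), int(y1 // cell_size) + 1)}

def match(publications, subscriptions, cell_size):
    """Pair every publication with the subscriptions sharing a grid cell."""
    matches = set()
    for pname, pregion in publications.items():
        pcells = cells(pregion, cell_size)
        for sname, sregion in subscriptions.items():
            if pcells & cells(sregion, cell_size):
                matches.add((pname, sname))
    return matches

# Hypothetical update/subscription regions on a 2D routing space.
pubs = {"tank1": ((0, 4), (0, 4))}
subs = {"radarA": ((3, 9), (3, 9)), "radarB": ((20, 25), (20, 25))}
print(match(pubs, subs, cell_size=5))   # {('tank1', 'radarA')}
```

The cell size is the tuning knob: coarse cells are cheap to match but produce false positives (regions sharing a cell without actually overlapping), which is why adaptive schemes pick the grid resolution from the simulation's input characteristics.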
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Demaine, Erik. "Efficient Simulation of Message-Passing in Distributed-Memory Architectures". Thesis, University of Waterloo, 1996. http://hdl.handle.net/10012/1069.

Texto completo da fonte
Resumo:
In this thesis we propose a distributed-memory parallel-computer simulation system called PUPPET (Performance Under a Pseudo-Parallel EnvironmenT). It allows the evaluation of parallel programs run in a pseudo-parallel system, where a single processor is used to multitask the program's processes, as if they were run on the simulated system. This allows development of applications and teaching of parallel programming without the use of valuable supercomputing resources. We use a standard message-passing language, MPI, so that when desired (e.g., development is complete) the program can be run on a truly parallel system without any changes. There are several features in PUPPET that do not exist in any other simulation system. Support for all deterministic MPI features is available, including collective and non-blocking communication. Multitasking (more processes than processors) can be simulated, allowing the evaluation of load-balancing schemes. PUPPET is very loosely coupled with the program, so that a program can be run once and then evaluated on many simulated systems with multiple process-to-processor mappings. Finally, we propose a new model of direct networks that ignores network traffic, greatly improving simulation speed and often without significantly affecting accuracy.
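The pseudo-parallel execution model this abstract describes can be sketched in miniature: a single-processor scheduler multiplexes message-passing "processes" (here Python generators) with per-rank mailboxes. This is only the general idea under assumed names (`Sim`, `spawn`, `send`), not PUPPET's MPI-based interface.

```python
from collections import deque

class Sim:
    """Single-processor scheduler that multitasks message-passing
    'processes' (generators), mimicking a pseudo-parallel run."""
    def __init__(self):
        self.mailboxes = {}
        self.ready = deque()

    def spawn(self, rank, gen_fn):
        self.mailboxes[rank] = deque()
        self.ready.append((rank, gen_fn(self, rank)))

    def send(self, dest, msg):
        self.mailboxes[dest].append(msg)

    def run(self):
        while self.ready:
            rank, proc = self.ready.popleft()
            try:
                next(proc)                  # run the process until it yields
                self.ready.append((rank, proc))
            except StopIteration:
                pass                        # process finished

log = []

def worker(sim, rank):
    sim.send(0, rank * rank)                # send a result to rank 0
    yield

def master(sim, rank):
    results = []
    while len(results) < 2:                 # wait for both workers
        while sim.mailboxes[rank]:
            results.append(sim.mailboxes[rank].popleft())
        yield
    log.append(sorted(results))

sim = Sim()
sim.spawn(0, master)
sim.spawn(1, worker)
sim.spawn(2, worker)
sim.run()
print(log)    # → [[1, 4]]
```

Because all "processes" share one real processor, the run is deterministic and repeatable, which is what makes this style of simulation useful for development and teaching before moving to real parallel hardware.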
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Abbas, Muhammad Hassan, e Mati-ur-Rehman Khan. "Correlational Analysis of Drivers Personality Traits and Styles in a Distributed Simulated Driving Environment". Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10027.

Texto completo da fonte
Resumo:

In this thesis report we conducted a research study on driver behavior at T-intersections using a simulated environment. The report describes and discusses a correlation analysis of drivers' personality traits and styles while driving at T-intersections.

The experiments were performed on a multi-user driving simulator under controlled settings at Linköping University. A total of forty-eight people participated in the study and were divided into groups of four, all driving in the same simulated world.

During the experiments participants were asked to fill in a series of well-known self-report questionnaires. We evaluated the questionnaires to gain insight into drivers' personality traits and driving styles. The self-report questionnaires consisted of Schwartz's configural model of 10 value types and the NEO Five-Factor Inventory. Drivers' behavior was also studied with the help of questionnaires covering driving behavior, style, conflict avoidance, time horizon, and tolerance of uncertainty. These 10 Schwartz values were then correlated with the other questionnaires to give a detailed insight into the driving habits and personality traits of the drivers.
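The core statistical step, correlating one questionnaire scale with another, is a Pearson correlation. A minimal stdlib sketch with made-up scores (the numbers below are purely illustrative, not data from the study):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient r between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for 6 drivers: a Schwartz 'stimulation' value score
# vs. a self-reported risky-driving-style score (illustrative data only).
stimulation = [2.1, 3.4, 4.0, 1.5, 3.8, 2.9]
risky_style = [1.9, 3.1, 4.2, 1.2, 3.5, 2.4]
r = pearson(stimulation, risky_style)
print(round(r, 2))
```

With 48 participants the same computation would be run for each pairing of a Schwartz value type with a driving-style scale, and the resulting r values screened for significance.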

Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Oppong, Eric Asamoah. "A QoS framework for modeling and simulation of distributed services within a cloud environment". Thesis, Kingston University, 2014. http://eprints.kingston.ac.uk/30599/.

Texto completo da fonte
Resumo:
Distributed computing paradigms such as Cloud and SOA provide the architecture and medium for service computing, giving organisations flexibility in implementing IT solutions that meet specific business objectives. The advancement of internet technology has opened up the use of service computing and broadened its scope into areas classified as utility computing, where computing solutions are modelled as services, allowing consumers to use and pay for solutions that include applications and physical devices. The service computing model offers great opportunities for cutting cost and deployment effort, but also presents a case where user demands change in ways different from the usual service level agreement in computing deployment. Service providers must consider different aspects of consumer demands when provisioning services, including non-functional requirements such as Quality of Service (QoS); this relates not only to users' expectations but also to managing the effective distribution of resources and applications. The normal model for meeting user requirements is over-stretched and therefore requires more information gathering and analysis of requirements, which can be used to determine effective management in service computing by leveraging SOA and Cloud computing based on QoS factors. A model is needed that considers multiple criteria in decision making to enable proper mapping of resources from the service composition level to resource provisioning for processing user requests: a framework capable of analysing service composition and resource requirements. Thus, the aim of the thesis is to develop a framework for enabling service allocation in Cloud Computing, based on SOA and QoS, that analyses user requirements to ensure effective allocation and performance in a distributed system.
The framework is designed to handle the top layer of user requirements, in terms of application development, and the lower layer of resource management, analysing the requirements in terms of QoS in order to identify the common factors that match the user requirements with the available resources. The framework is evaluated using the CloudSim simulator to test its effectiveness in improving service and resource allocation in a Distributed Computing environment. This approach offers greater flexibility to overcome issues of over-provisioning and under-provisioning of resources by maintaining effective provisioning, using the Service Oriented QoS Enabled Framework (SOQ-Framework) for requirement analysis of service composition and resource capabilities.
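The multi-criteria matching such a framework performs can be illustrated, in much simplified form, as a weighted-sum score over QoS attributes. The attribute names and weights below are assumptions for illustration, not the SOQ-Framework's actual model:

```python
def allocate(request, resources):
    """Pick the resource with the best weighted QoS score.
    Lower-is-better metrics (latency, cost) enter with a negative sign."""
    def score(res):
        return (request["w_reliability"] * res["reliability"]
                - request["w_latency"] * res["latency"]
                - request["w_cost"] * res["cost"])
    return max(resources, key=score)["name"]

# Hypothetical candidate resources with per-attribute QoS figures.
resources = [
    {"name": "vm_small", "latency": 40, "cost": 1.0, "reliability": 0.95},
    {"name": "vm_large", "latency": 10, "cost": 4.0, "reliability": 0.99},
]
# A latency-sensitive request weights latency heavily relative to cost.
req = {"w_latency": 0.5, "w_cost": 1.0, "w_reliability": 10.0}
print(allocate(req, resources))   # → vm_large
```

A real allocator would normalise the attribute scales and feed the chosen mapping into a simulator such as CloudSim to check its effect on over- and under-provisioning; the weighted sum only stands in for that multi-criteria decision step.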
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Deshpande, Isha Sanjay. "HETEROGENEOUS COMPUTING AND LOAD BALANCING TECHNIQUES FOR MONTE CARLO SIMULATION IN A DISTRIBUTED ENVIRONMENT". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308244580.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Zhang, Xiaohui. "Integration of a stochastic space-time rainfall model and distributed hydrologic simulation with GIS". Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282409.

Texto completo da fonte
Resumo:
This research presents an integration of a stochastic space-time rainfall model and distributed hydrologic simulation with GIS. The integrated simulation system consists of three subsystems: a stochastic space-time rainfall model, a geographical information system (GIS), and a distributed physically-based hydrologic model. The developed stochastic space-time rainfall model is capable of estimating the storm movement and simulating a random rainfall field over a study area, based on the measurements from three raingauges. An optimization-based lag-k correlation method was developed to estimate the storm movement, and a stochastic model was developed to simulate the rainfall field. A GIS tool, ARC/INFO, was integrated into this simulation system. GIS has been applied to automatically extract the spatially distributed parameters for hydrologic modeling. Digital elevation modeling techniques were used to process a high resolution digital map. A distributed physically-based hydrologic model, operated in HEC-1, simulated the stochastic, distributed, interrelated hydrological processes. The Green-Ampt equation is used for modeling the infiltration process, the kinematic wave approximation for infiltration-excess overland flow, and the diffusion wave model for the unsteady channel flow. Two small nested experimental watersheds in southern Arizona, where three raingauges are located, were chosen as the study area. Using five recorded storm events, a series of simulations was performed under a variety of conditions. The simulation results, assessed by comparing simulated runoff peak flow and runoff depth with measured values and evaluated by the model efficiency, show that the model performs very well. Both model structure and model parameter uncertainties were investigated in the sensitivity analysis.
The statistical tests for the simulation results show that it is important to model stochastic rainfall with storm movement, which caused a significant change in runoff peak flow and runoff depth compared with simulations whose input is data from a single gauge. The sensitivity of runoff to the roughness factor N and the hydraulic conductivity Ks was intensively investigated. The research demonstrated that this integrated system provides an improved simulation environment for distributed hydrology.
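The Green-Ampt infiltration model named in this abstract is compact enough to sketch. Cumulative infiltration F(t) satisfies the implicit relation F = Ks·t + ψΔθ·ln(1 + F/(ψΔθ)), and the infiltration capacity is f = Ks(1 + ψΔθ/F). The parameter values below are illustrative textbook-style figures, not the watershed's calibrated ones:

```python
from math import log

def green_ampt_F(t, Ks, psi, dtheta, iters=50):
    """Cumulative infiltration F(t) [cm] from the implicit Green-Ampt
    equation  F = Ks*t + psi*dtheta*ln(1 + F/(psi*dtheta)),
    solved by fixed-point iteration (a contraction for F > 0)."""
    S = psi * dtheta                 # suction-moisture-deficit term [cm]
    F = max(Ks * t, 1e-6)            # starting guess
    for _ in range(iters):
        F = Ks * t + S * log(1 + F / S)
    return F

def green_ampt_rate(F, Ks, psi, dtheta):
    """Infiltration capacity f = Ks * (1 + psi*dtheta / F) [cm/h]."""
    return Ks * (1 + psi * dtheta / F)

# Illustrative silt-loam-like parameters: Ks = 0.65 cm/h, wetting-front
# suction psi = 16.7 cm, moisture deficit dtheta = 0.3.
F1 = green_ampt_F(t=1.0, Ks=0.65, psi=16.7, dtheta=0.3)
f1 = green_ampt_rate(F1, Ks=0.65, psi=16.7, dtheta=0.3)
print(round(F1, 2), round(f1, 2))
```

As F grows, f decays toward Ks, which is why the sensitivity of simulated runoff to Ks reported in the abstract is so pronounced: Ks sets the long-time infiltration floor that separates rainfall from infiltration-excess overland flow.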
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Zhou, Wei. "An investigation into a distributed virtual reality environment for real-time collaborative 4D construction planning and simulation". Thesis, University of Wolverhampton, 2009. http://hdl.handle.net/2436/98506.

Texto completo da fonte
Resumo:
The use and application of 4 Dimensional Computer Aided Design (4D CAD) is growing within the construction industry. 4D approaches have been the focus of many research efforts within the last decade and several commercial tools now exist for the creation of construction simulations using 4D approaches. However, there are several key limitations to the current approaches. For example, 4D models are normally developed after the initial planning of a project has taken place using more traditional techniques such as Critical Path Method (CPM). Furthermore, mainstream methodologies for planning are based on individual facets of the construction process developed by discrete contractors or sub-contractors. Any 4D models generated from these data are often used to verify work flows and identify problems that may arise, either in terms of work methods or sequencing issues. Subsequently, it is perceived that current 4D CAD approaches provide a planning review mechanism rather than a platform for a novel integrated approach to construction planning. The work undertaken in this study seeks to address these issues through the application of a distributed virtual reality (VR) environment for collaborative 4D based construction planning. The key advances lie in catering for geographically dispersed planning by discrete construction teams. By leveraging networked 4D-VR based technologies, multidisciplinary planners, in different places, can be connected to collaboratively perform planning and create an integrated and robust construction schedule leading to a complete 4D CAD simulation. Establishing such a complex environment faces both technological and social challenges. Technological challenges arise from the integration of traditional and recent 4D approaches for construction planning with an ad hoc application platform of VR linked through networked computing. 
Social challenges arise from social dynamics and human behaviours when utilizing VR-based applications for collaborative work. An appropriate 4D-based planning method in a networked VR-based environment is the key to gaining a technical advancement, and this approach to distributed collaborative planning tends to promote computer-supported collaborative work (CSCW). Subsequently, probing suitable CSCW design and user interface/interaction (UI) design is imperative for solutions to achieve successful applicability. Based on the foregoing, this study developed a novel robust 4D planning approach for networked construction planning. The new method of interactive definition was devised through theoretical analysis of human-computer interaction (HCI) studies, a comparison of existing 4D CAD creation, and 3D model based construction planning. It was created to support not only individual planners' work but multidisciplinary planners' collaboration, and to lead to the interactive and dynamic development of a 4D simulation. From a social perspective, the method clarified and highlighted relevant CSCW design to enhance collaboration. Applying this rationale, the study specified and implemented a distributed groupware solution for collaborative 4D construction planning. Based on a developed system architecture, application mode and dataflow, as well as a real-time data exchange protocol, a prototype system entitled '4DX' was implemented which provides a platform for distributed multidisciplinary planners to perform real-time collaborative 4D construction planning. The implemented toolkit targeted a semi-immersive VR platform for enhanced usability while retaining compatibility with desktop VR. For the purpose of obtaining an optimal UI design for this kind of VR solution, the research implemented a new user-centred design (UCD) framework, Taguchi-Compliant User-Centred Design (TC-UCD), by adapting and adopting the Taguchi philosophy and the current UCD framework.
As a result, a series of UIs of the VR-based solution for multifactor usability evaluation and optimization were developed leading to a VR-based solution with optimal UIs. The final distributed VR solution was validated in a truly geographically dispersed condition. Findings from the verification testing, the validation, and the feedback from construction professionals proved positive in addition to providing constructive suggestions to further reinforce the applicability of the approach in the future.
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Cho, Yŏng-gwan. "RTDEVS/CORBA: A distributed object computing environment for simulation-based design of real-time discrete event systems". Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/279904.

Texto completo da fonte
Resumo:
Ever since distributed systems technology became increasingly popular in the real-time computing area about two decades ago, real-time distributed object computing technologies have been attracting more attention from researchers and engineers. While highly effective object-oriented methodologies are now widely adopted to reduce the development complexity and maintenance costs of large scale non-real-time software applications, real-time systems engineering practice has not kept pace with these system development methodologies. Indeed, real-time design techniques have not fully adopted the concepts of modular design and analysis which are the main virtues of object-oriented design technologies. As a consequence, the demand for object-oriented analysis, design, and implementation of large-scale real-time applications has been growing. To address the need for object-oriented real-time systems engineering environments we propose the Real-Time DEVS/CORBA (RTDEVS/CORBA) distributed object computing environment. In this dissertation, we show how this environment is an extension of previously developed DEVS-based modeling and simulation frameworks that have been shown to support an effective modeling and simulation methodology in various application areas. The major objective in developing Distributed Real-Time DEVS/CORBA is to establish a framework in which distributed real-time systems can be designed through DEVS-based modeling and simulation studies, and then migrated with minimal additional effort to be executed in the real-time distributed environment. This environment provides generic support for developing models of distributed embedded software systems, evaluating their performance and timing behavior through simulation and easing the transition from the simulation to actual executions. In this dissertation we describe, in some detail, the design and implementation of the RTDEVS/CORBA environment. 
It was implemented over Visibroker CORBA middleware along with the use of ACE/TAO real-time CORBA services, such as the real-time event service and the runtime scheduling service. Implementation aspects considered include time synchronization issues, priority-based message dispatching for timely message delivery, implementation of activity with threads, and other features required for simulating and executing real-time DEVS models. Finally, application examples are presented in the last part of the dissertation to show applicability of the environment to real systems-engineering problems.
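The priority-based message dispatching mentioned here can be sketched as an earliest-deadline-first queue (a generic illustration of the idea, not the ACE/TAO real-time event service API):

```python
import heapq
import itertools

class Dispatcher:
    """Sketch of priority-based message dispatch: messages are delivered
    earliest-deadline-first, with a sequence number breaking ties so that
    equal-deadline messages keep their arrival order."""
    def __init__(self):
        self._q = []
        self._seq = itertools.count()

    def post(self, deadline, msg):
        # The heap orders by (deadline, arrival sequence).
        heapq.heappush(self._q, (deadline, next(self._seq), msg))

    def dispatch_all(self):
        order = []
        while self._q:
            _deadline, _, msg = heapq.heappop(self._q)
            order.append(msg)
        return order

d = Dispatcher()
d.post(30, "telemetry")
d.post(5, "actuator-cmd")     # tightest deadline, delivered first
d.post(5, "sensor-read")      # same deadline, keeps arrival order
d.post(12, "log-entry")
print(d.dispatch_all())       # → ['actuator-cmd', 'sensor-read', 'log-entry', 'telemetry']
```

In a real-time infrastructure the same ordering discipline would be enforced by the middleware's scheduling service rather than an in-process heap, but the delivery policy, deadline first with FIFO tie-breaking, is the same.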
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Thomas, Nicholas Wayne. "Simulating the hydrologic impact of distributed flood mitigation practices, tile drainage, and terraces in an agricultural catchment". Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/2017.

Texto completo da fonte
Resumo:
In 2008 flooding occurred over a majority of Iowa, damaging homes, displacing residents, and taking lives. In the wake of this event, the Iowa Flood Center (IFC) was charged with the investigation of distributed flood mitigation strategies to reduce the frequency and magnitude of peak flows in Iowa. This dissertation is part of several studies developed by the IFC and is focused on the application of a coupled physics-based modeling platform to quantify the benefits of distributed flood mitigation strategies for the reduction of peak flows in an agricultural watershed. Additional investigation into tile drainage and terraces illustrated the hydrologic impact of each commonly applied agricultural practice. The effect of each practice was represented in numerical simulations through a parameter adjustment. Systems were analyzed at the field scale, to estimate representative parameters, and applied at the watershed scale. Distributed flood mitigation wetlands reduced peak flows by 4% to 17% at the outlet of a 45 km2 watershed. Variability in the reduction was a product of antecedent soil moisture, 24-hour design storm total depth, and initial structural storage capacity. The highest peak flow reductions occurred in scenarios with dry soil, empty project storage, and low rainfall depths. Peak flow reductions were estimated to dissipate beyond a total drainage area of 200 km2, approximately 2 km downstream of the small watershed outlet. A numerical tracer analysis identified the contribution of tile drainage to stream flow (QT/Q), which varied between 6% and 71% through an annual cycle. QT/Q responded directly to meteorological forcing. Precipitation-driven events produced a strong positive logarithmic correlation between QT/Q and drainage area. The addition of precipitation into the system saturated near-surface soils, increased lateral soil water movement, and reduced the contribution of in-stream tile flow.
A negative logarithmic trend in QT/Q with drainage area persisted during non-event periods. Simulated gradient terraces reduced and delayed peak flows in subcatchments of less than 3 km2 of drainage area. The hydrographs were shifted, responding to rainfall later than in non-terraced scenarios, while retaining the total volumetric outflow over longer time periods. The effects of dense terrace systems quickly dissipated and were found to be inconsequential at a drainage area of 45 km2. Beyond the analysis of individual agricultural features, this work assembled a framework to analyze a feature at the field scale for implementation at the watershed scale, and it showed that large-scale simulations reproduce field-scale results well. The product of this work was a systematic hydrologic characterization of distributed flood mitigation structures, pattern tile drainage, and terrace systems, facilitating the simulation of each practice in a physically-based coupled surface-subsurface model.
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Bruschi, Sarita Mazzini. "ASDA: um ambiente de simulação distribuída automático". Universidade de São Paulo, 2002. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-26062007-134827/.

Texto completo da fonte
Resumo:
Esta tese propõe um ambiente automático para desenvolvimento de simulação distribuída ASDA (Ambiente de Simulação Distribuída Automático), que tem como objetivo principal facilitar a utilização e desenvolvimento de simulação distribuída. As funcionalidades definidas no ASDA tornam-o diferente de todos os outros ambientes encontrados na literatura. A especificação do ASDA foi realizada através de um diagrama modular composto por sete módulos e também com o auxílio da ferramenta UML (Unified Modelling Language), através da utilização de três de seus diagramas: de casos de uso, de classes e de atividades. O ASDA permite aos usuários a utilização de simulação distribuída através da definição de uma nova simulação ou da replicação de um programa de simulação já desenvolvido. Se a opção for pelo desenvolvimento de um novo programa de simulação, o usuário deve fornecer o modelo e os parâmetros e o ambiente se encarrega de gerar o código do programa de simulação utilizando a abordagem que proporciona o melhor desempenho, levando em consideração as características do modelo e da plataforma. Além da especificação do ASDA, esta tese definiu um protótipo do ambiente com o objetivo de mostrar sua viabilidade de utilização. Neste protótipo, três módulos foram implementados, destacando-se o módulo Replicador, que utiliza a abordagem MRIP (Multiple Replication in Parallel). Esta tese contribui também com a definição de algumas diretrizes para a utilização da abordagem MRIP. A base para essa definição foram os resultados obtidos com a utilização do módulo Replicador
This thesis proposes an automatic environment for the development of distributed simulation, ASDA (Ambiente de Simulação Distribuída Automático, in Portuguese), whose main goal is to make the use and development of distributed simulation easier. The ASDA functionality makes it different from all other environments found in the literature. The ASDA has been specified through a modular diagram composed of seven modules, built with the help of the UML (Unified Modelling Language) tool, using three of its diagrams: use case, class and activity. ASDA users can define the distributed simulation by means of the specification of a new simulation program or the replication of a simulation program already developed. If the user chooses to develop a new simulation program, he must only provide the model and the parameters. The environment will then generate the simulation program code using the approach that provides the best performance, considering the model and platform characteristics. Besides the specification, this thesis presents a prototype of the ASDA environment with the goal of showing its viability. Three modules have been implemented for the prototype, highlighting the Replication module, which uses the MRIP (Multiple Replications in Parallel) approach. Another contribution of this thesis is the definition of a set of guidelines for the utilization of the MRIP approach. The basis for defining these guidelines was the results obtained with the utilization of the Replication module.
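The MRIP approach used by the Replication module is straightforward to sketch: run independent replications of the same simulation concurrently with different seeds, then pool one estimate per replication into a confidence interval. The toy M/M/1-style queue below is illustrative only, not ASDA's generated code:

```python
import random
from concurrent.futures import ThreadPoolExecutor
from statistics import mean, stdev

def replication(seed, n_customers=2000):
    """One independent replication of a toy single-server queue
    (arrival rate 1.0, service rate 1.25), returning its mean wait."""
    rng = random.Random(seed)            # each replication has its own stream
    clock = depart = 0.0
    waits = []
    for _ in range(n_customers):
        clock += rng.expovariate(1.0)    # next arrival
        start = max(clock, depart)       # wait if the server is busy
        waits.append(start - clock)
        depart = start + rng.expovariate(1.25)
    return mean(waits)

# MRIP: the same model runs in parallel under different seeds; the
# manager pools one estimate per replication.
with ThreadPoolExecutor(max_workers=4) as pool:
    estimates = list(pool.map(replication, range(8)))

pooled = mean(estimates)
half_width = 1.96 * stdev(estimates) / len(estimates) ** 0.5
print(f"mean wait ~ {pooled:.2f} +/- {half_width:.2f}")
```

Because the replications never exchange messages, MRIP scales trivially with the number of processors and needs no synchronization protocol, which is exactly why it is attractive for an automatic environment: the same sequential simulation program can be replicated unchanged.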
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Sabatier, Camille. "Toward the temperature and strain discrimination by Brillouin based distributed fiber sensor". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES027.

Texto completo da fonte
Resumo:
L’objectif est de développer un capteur capable de discriminer à la fois la température et la déformation sur de longues distances, s’appuyant sur une fibre unique. Ceci sera réalisé via une approche couplée simulation/expérience. Un modèle de simulation de la réponse Brillouin dans une fibre optique a été développé. Le modèle de simulation prend en compte la composition de la fibre et la répartition des dopants. Deux structures de fibre optique ont été optimisées par simulation, ce qui a permis de mettre en avant la robustesse de nos modèles. Par la suite, ces deux fibres optiques ont été fabriquées. Des tests sur les conditions de fibrages ont été réalisés afin d’obtenir une fibre avec les meilleurs capacités possibles de discrimination entre la température et la déformation et de vérifier la robustesse de la fabrication. Toutes les fibres fabriquées présentent une signature Brillouin avec plusieurs pics. Les résultats expérimentaux ont été comparés avec les calculs et confirment notre capacité de prédiction. Les capacités de discrimination des fibres optiques ont été vérifiées et comparées avec les fibres déjà présentes sur le marché. Certaines fibres présentées dans cette thèse montrent des capacités de discrimination supérieures aux meilleures fibres de la littérature
The objective is to develop a sensor capable of discriminating between temperature and strain over long distances, relying on a single fiber. This was done using a coupled simulation/experiment approach. A simulation model of the Brillouin response in an optical fiber was developed; it takes into account the composition of the fiber and the distribution of dopants. Two optical fiber structures were optimized through simulation, which made it possible to highlight the robustness of the simulation model. Subsequently, these two structures were manufactured. Tests on the fiber-drawing conditions were carried out in order to obtain a fiber with the best temperature/strain discrimination capabilities and to verify the robustness of the manufacturing. All the fabricated fibers present a Brillouin signature with several peaks. The experimental results were compared with the simulation data and show similar results. The discrimination capabilities of the optical fibers were verified and compared with fibers already on the market. Some fibers presented in this PhD thesis show discrimination capabilities superior to the best fibers reported in the literature.
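The discrimination principle behind a multi-peak Brillouin signature reduces to inverting a 2x2 linear system: each peak's frequency shift is a different linear combination of ΔT and Δε, so two peaks with non-proportional coefficient rows determine both unknowns. The coefficients below are hypothetical, not measurements of the thesis's fibers:

```python
def discriminate(dnu1, dnu2, C):
    """Invert the 2x2 system  [dnu1, dnu2]^T = C @ [dT, deps]^T
    relating two Brillouin peak shifts to temperature and strain."""
    (ct1, ce1), (ct2, ce2) = C
    det = ct1 * ce2 - ce1 * ct2          # must be nonzero for discrimination
    dT = (dnu1 * ce2 - ce1 * dnu2) / det
    deps = (ct1 * dnu2 - dnu1 * ct2) / det
    return dT, deps

# Hypothetical coefficients (MHz/degC and MHz/microstrain) for two peaks;
# discrimination works only because the rows are not proportional.
C = [(1.10, 0.050),
     (1.30, 0.020)]
dT, deps = discriminate(dnu1=13.0, dnu2=13.8, C=C)
print(round(dT, 1), round(deps, 1))   # recovers dT = 10.0, deps = 40.0
```

The conditioning of the coefficient matrix is what the fiber design optimizes: the closer the two peaks' temperature/strain sensitivities are to proportional, the more the measurement noise is amplified in the recovered ΔT and Δε.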
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Brink, Michael Joseph. "Hardware-in-the-loop simulation of pressurized water reactor steam-generator water-level control, designed for use within physically distributed testing environments". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357273230.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Bigelow, Matthew Steven. "Examining the relative costs and benefits of shifting the locus of control in a novel air traffic management environment via multi-agent dynamic analysis and simulation". Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41142.

Texto completo da fonte
Resumo:
The current air traffic management system has primarily evolved via incremental changes around historic control, navigation, and surveillance technologies. As a result, the system as a whole is not capable of handling air traffic capacities well beyond current levels, despite recent developments, such as ADS-B, that could potentially enable new concepts of operation. Methods of analyzing air traffic for safety and performance have also evolved around current-day operating constructs. Thus, attempts to examine future systems tend to use different analysis methods developed for each. Most notably, questions of 'locus of control' - whether the control should be centralized or de-centralized and distributed - have no common framework by which to judge relative costs and benefits. For instance, a completely centralized control paradigm is commonly asserted to provide an airspace-wide optimal traffic management solution due to a more complete picture of the state of the airspace, whereas a completely decentralized control paradigm is commonly asserted to provide a more user-specific optimal traffic management solution, to distribute the traffic management workload, and potentially be more robust. Given the disparate nature of these assertions and the different types of evaluations commonly used with each, some shared framework must be established to allow comparisons between very different control paradigms. The objective of this thesis was to construct a formal framework to examine the relative costs and benefits of shifting the locus of control in a novel air traffic management environment. This framework provides useful definitions and quantitative measures of flexibility and robustness with respect to various control paradigms ranging between, and including, completely centralized and completely decentralized concepts of operation. Multi-agent dynamic analysis and simulation was used to analyze the range of dynamics found in the different control paradigms. 
In addition, futuristic air traffic management concepts were developed in sufficient detail to demonstrate the framework. The objectives were thus met: the framework was shown to be able to identify (or dispel) hypotheses about the relative costs and benefits of the locus of control.
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Lee, Chin Siong. "NPS AUV workbench: collaborative environment for autonomous underwater vehicles (AUV) mission planning and 3D visualization". Thesis, Monterey, California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1658.

Texto completo da fonte
Resumo:
Approved for public release; distribution is unlimited
The Extensible Markup Language (XML) is used for data storage and message exchange, Extensible 3D (X3D) Graphics for visualization and XML Schema-based Binary Compression (XSBC) for data compression. The AUV Workbench provides an intuitive cross-platform-capable tool with extensibility to provide for future enhancements such as agent-based control, asynchronous reporting and communication, loss-free message compression and built-in support for mission data archiving. This thesis also investigates the Jabber instant messaging protocol, showing its suitability for text and file messaging in a tactical environment. Exemplars show that the XML backbone of this open-source technology can be leveraged to enable both human and agent messaging with improvements over current systems. Integrated Jabber instant messaging support makes the NPS AUV Workbench the first custom application supporting XML Tactical Chat (XTC). Results demonstrate that the AUV Workbench provides a capable testbed for diverse AUV technologies, assisting in the development of traditional single-vehicle operations and agent-based multiple-vehicle methodologies. The flexible design of the Workbench further encourages integration of new extensions to serve operational needs. Exemplars demonstrate how in-mission and post-mission event monitoring by human operators can be achieved via simple web page, standard clients or custom instant messaging client. Finally, the AUV Workbench's potential as a tool in the development of multiple-AUV tactics and doctrine is discussed.
Civilian, Singapore Defence Science and Technology Agency
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Bouvry, Pascal. "Placement de tâches sur ordinateurs parallèles à mémoire distribuée". Grenoble INPG, 1994. http://tel.archives-ouvertes.fr/tel-00005081.

Texto completo da fonte
Resumo:
Mapping task graphs on distributed memory parallel computers
The growing need for computing performance leads to increasingly complex computer architectures, and the current lack of good programming environments for these machines must be filled. The goal is a compromise between portability and performance. This thesis studies the static allocation of task graphs onto distributed-memory parallel computers. The work is part of the INRIA-IMAG APACHE project and of the European SEPP-COPERNICUS project (Software Engineering for Parallel Processing). The undirected task graph is the chosen programming model. A survey of existing solutions to the scheduling and mapping problems is given, and the possibility of applying mapping algorithms to precedence (directed) task graphs after a clustering phase is underlined. An original solution is designed and implemented, and integrated into a complete programming environment. Three kinds of mapping algorithms were designed and implemented: greedy, iterative, and exact; most development effort went into tabu search and simulated annealing. These algorithms optimize various objective functions, from the simplest and most portable to the most complex and architecture-dependent. The weights of the task graph can be tuned using post-mortem trace analysis, and tracing tools are used to validate the cost functions and the mapping algorithms. A benchmark protocol is defined and applied. The tests are run on the Meganode (a 128-transputer machine), using the VCR router from the University of Southampton, synthetic task graphs generated with ANDES from the ALPES project (developed by the performance-evaluation team of LGI-IMAG), and the Dominant Sequence Clustering (DSC) algorithm of PYRROS (developed by Tao Yang and Apostolos Gerasoulis).
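The iterative mapping algorithms the abstract mentions can be conveyed with a minimal simulated-annealing sketch. The cost function below (maximum processor load plus the weight of cut communication edges) and all parameters are illustrative assumptions, not the thesis's actual objective functions:

```python
import math
import random

def mapping_cost(mapping, work, comm, n_procs):
    """Cost = load imbalance (max processor load) + remote communication."""
    load = [0.0] * n_procs
    for task, proc in enumerate(mapping):
        load[proc] += work[task]
    # Weight of edges whose endpoints sit on different processors.
    remote = sum(w for (a, b), w in comm.items() if mapping[a] != mapping[b])
    return max(load) + remote

def anneal_mapping(work, comm, n_procs, t0=10.0, cooling=0.995,
                   steps=20000, seed=0):
    """Simulated annealing over task-to-processor mappings."""
    rng = random.Random(seed)
    mapping = [rng.randrange(n_procs) for _ in work]
    cost = mapping_cost(mapping, work, comm, n_procs)
    best, best_cost, t = mapping[:], cost, t0
    for _ in range(steps):
        task = rng.randrange(len(work))     # neighborhood: move one task
        old = mapping[task]
        mapping[task] = rng.randrange(n_procs)
        new_cost = mapping_cost(mapping, work, comm, n_procs)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                  # accept (possibly uphill) move
            if cost < best_cost:
                best, best_cost = mapping[:], cost
        else:
            mapping[task] = old              # reject the move
        t *= cooling                         # geometric cooling schedule
    return best, best_cost
```

Tabu search differs mainly in replacing the probabilistic acceptance rule with a short-term memory of forbidden (recently visited) moves.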
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Mosley, Liam M. "Modeling and Phylodynamic Simulations of Avian Influenza". Miami University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=miami1556812302845438.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Lerat, Julien. "Quels apports hydrologiques pour les modèles hydrauliques ? : vers un modèle intégré de simulation des crues". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2009. http://tel.archives-ouvertes.fr/tel-00392240.

Texto completo da fonte
Resumo:
Hydraulic models are commonly used for river engineering and flood-damage prevention. These models compute water levels and discharges along a river reach from its geometry and the system's boundary conditions: the discharge at the upstream end of the reach, the lateral inflows from the intermediate catchment, and the water levels downstream. When the reach is long, lateral inflows become substantial yet are rarely measured, since they come from secondary tributaries. Estimating these inflows is therefore an essential step in flood simulation; otherwise the hydraulic variables can be strongly under- or overestimated. The main objective of this thesis is to identify a method of minimal complexity for reconstructing these inflows. The work relies on a sample of 50 river reaches in France and the United States, on which lateral inflows were estimated with a semi-distributed hydrological model coupled to a simplified hydraulic model.

An automated method for dividing the intermediate catchment into sub-catchments was first developed to ease the construction of the hydrological model on the 50 reaches. Sensitivity tests were run on the number of sub-catchments and on the uniform or distributed nature of the rainfall inputs and of the hydrological model parameters. A configuration with 4 sub-catchments, uniform rainfall, and uniform parameters proved the most effective over the whole sample.

Finally, an alternative method for computing the lateral inflows was proposed, based on transposing the discharge measured upstream and combining it with the hydrological model.
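The transposition of the measured upstream discharge mentioned above is commonly implemented as a drainage-area-ratio scaling. The following sketch is a generic illustration of that idea, with an illustrative exponent, not the thesis's calibrated method:

```python
def transpose_discharge(q_upstream, area_upstream, area_lateral, exponent=1.0):
    """Estimate ungauged lateral inflow by scaling the measured upstream
    hydrograph with the ratio of drainage areas. The exponent is an
    illustrative calibration parameter, often close to 1."""
    ratio = (area_lateral / area_upstream) ** exponent
    return [q * ratio for q in q_upstream]

# Example: a 150 km2 intermediate basin downstream of a 600 km2 gauged basin.
lateral = transpose_discharge([20.0, 48.0, 32.0], 600.0, 150.0)
```

In a combined scheme such as the one the abstract describes, this transposed series would then be blended with (or corrected by) the semi-distributed hydrological model's output.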
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Duarte, Max. "Méthodes numériques adaptatives pour la simulation de la dynamique de fronts de réaction multi-échelles en temps et en espace". Phd thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-00667857.

Texto completo da fonte
Resumo:
We address the development of a new generation of numerical methods for solving evolutionary PDEs that model multi-scale phenomena in time and space arising in various application domains. The stiffness of such problems, whether through a chemical source term spanning a broad spectrum of characteristic time scales or through strong, highly localized gradients at reaction fronts, generally causes severe numerical difficulties. The goal is therefore to develop methods that guarantee accurate results in the presence of strong stiffness, relying on solid theoretical tools while allowing efficient implementation. Although these ideas are later extended to more general systems, this work focuses on stiff reaction-diffusion systems. The numerical strategy is based on a specific operator-splitting scheme, whose time step is chosen to meet an accuracy level set by the physics of the problem and in which each sub-step uses a dedicated high-order time integrator. This scheme is then coupled with an adaptive spatial multiresolution approach that represents the solution on a dynamically adapted mesh. The whole strategy led to the development of the generic academic 1D/2D/3D simulation code MBARETE, used to assess the theoretical and numerical developments on stiff practical configurations from several application domains. The algorithmic efficiency of the method is demonstrated by simulating stiff reaction waves in nonlinear chemical dynamics and in biomedical engineering, for stroke simulations characterized by a "complex chemical" source term.
To extend the approach to more complex and strongly unsteady applications, we introduce, for the first time, an operator-splitting technique with adaptive time stepping that achieves a prescribed, guaranteed accuracy despite the stiffness of the PDEs. The resulting time- and space-adaptive solution method, extended to the convective case, provides a consistent description of problems involving a very broad range of time and space scales and very different physical scenarios, from the propagation of nanosecond repetitively pulsed discharges in plasma physics to flame ignition and propagation in combustion. The aim of the thesis is a numerical solver that resolves stiff PDEs with controlled computational accuracy, based on rigorous numerical-analysis tools and standard computing resources. Complementary studies are also presented, including time parallelization, shared-memory parallelization techniques, and mathematical characterization tools for operator-splitting schemes.
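The operator-splitting idea at the core of this abstract can be illustrated with a minimal Strang-splitting step for a linear 1D reaction-diffusion problem. The thesis's actual scheme uses dedicated high-order integrators and adaptive step control, so the equation, coefficients, and boundary treatment below are all illustrative simplifications:

```python
import numpy as np

def strang_step(u, dt, dx, d, k):
    """One Strang-splitting step for u_t = d*u_xx - k*u on a periodic grid:
    half-step of reaction, full step of diffusion, half-step of reaction."""
    u = u * np.exp(-k * dt / 2)                   # exact linear reaction, dt/2
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)  # periodic discrete Laplacian
    u = u + dt * d / dx**2 * lap                  # explicit diffusion, full dt
    u = u * np.exp(-k * dt / 2)                   # reaction again, dt/2
    return u
```

Because each sub-operator is solved separately, each can use the integrator best suited to its own stiffness, which is exactly what makes the splitting strategy attractive for stiff systems.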
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Shi, Hongsen. "Building Energy Efficiency Improvement and Thermal Comfort Diagnosis". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555110595177379.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Ruiz, Anthony. "Simulations Numériques Instationnaires de la Combustion Turbulente et Transcritique dans les Moteurs Cryotechniques". Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2012. http://tel.archives-ouvertes.fr/tel-00691975.

Texto completo da fonte
Resumo:
Over the past 50 years, most design parameters of cryogenic rocket engines have been tuned without a detailed understanding of flame dynamics, owing to the limits of experimental diagnostics and computing power. The objective of this thesis is to perform high-fidelity unsteady numerical simulations of reacting transcritical flows, to enable a better understanding of flame dynamics in cryogenic engines and ultimately to guide their improvement. First, real-gas thermodynamics and its impact on numerical schemes are presented. Since Large-Eddy Simulation (LES) involves filtered equations, the filtering effects induced by real-gas thermodynamics are highlighted in a canonical transcritical configuration, and an artificial diffusion operator specific to real gases is proposed to smooth transcritical gradients in LES. Second, a fundamental study of turbulent mixing and combustion in the near-injector region of cryogenic engines is carried out with Direct Numerical Simulation (DNS). In the non-reacting case, vortex shedding in the wake of the injector lip plays a major role in turbulent mixing and generates the comb-like structures already observed experimentally under similar conditions. In the reacting case, the flame remains attached to the injector lip, without local extinction, and the comb-like structures disappear. The flame structure is analyzed and several combustion modes are identified. Finally, a transcritical H2/O2 jet flame anchored on a coaxial injector, with and without inner recess, is studied. The numerical results are first validated against experimental data for the injector without recess.
The configuration with recess is then compared to the reference solution without recess and to experimental data, to assess the effects of this design parameter on combustion efficiency.
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Hsieh, Wen-Hsing, e 謝文興. "Circuit Simulation Speedup Techniques on Distributed Computing Environment". Thesis, 1996. http://ndltd.ncl.edu.tw/handle/48905135461923741907.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Esteves, João Francisco Veríssimo Dias. "Distributed Simulation and Exploration of a Game Environment". Master's thesis, 2020. https://hdl.handle.net/10216/128976.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Esteves, João Francisco Veríssimo Dias. "Distributed Simulation and Exploration of a Game Environment". Dissertação, 2020. https://hdl.handle.net/10216/128976.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Lin-Chao, Chang, e 張霖釗. "The design of auto-boot capability for fully distributed simulation environment". Thesis, 1999. http://ndltd.ncl.edu.tw/handle/91045481136853382249.

Texto completo da fonte
Resumo:
Master's thesis
Tamkang University
Department of Computer Science and Information Engineering
Academic year 87 (ROC calendar)
A simulator is a computing system that integrates computer graphics, multimedia, and mechanical-control technologies to create a synthetic virtual environment. To allow the user to become fully immersed in this synthetic environment, the simulator must compute synthetic images, sound effects, and mechanical control interactively. High-performance computers, such as mainframes or special-purpose machines, were traditionally used to build such simulators. With the evolution of processors and network technology, the recent trend is to coordinate multiple PCs or workstations over a network into a distributed computing environment (DCE) that meets these performance requirements. A DCE is constructed by interconnecting multiple computers. To run a program in a DCE, it must be decomposed into a set of logical processes (LPs) distributed among the computers of the DCE. However, some LPs have special hardware or software requirements for correct execution, and not every computer is equipped to support them: the LPs of a flight simulator, for example, may require a 3D accelerator, a sound card, an A/D converter, and a D/A converter. Hence, a mechanism is needed to cluster LPs according to their special demands and to dispatch each cluster to a computer that can satisfy them. We refer to this problem as the Constrained Clustering problem. In this thesis, an algorithm is proposed that solves the Constrained Clustering problem while also achieving static load balance among the distributed computers; that is, a static load-balancing algorithm is designed at the same time, so that each cluster carries approximately the same computational load within an acceptable tolerance. In addition, this thesis presents a mechanism that provides auto-boot capability in a DCE.
This capability is achieved by employing the Object Model Template (OMT) of the High Level Architecture (HLA) designed by the US Department of Defense (DoD). With the help of the OMT and the Constrained Clustering solution, once the DCE is initiated, each LP is automatically loaded and executed on an appropriate computer that satisfies its special demands. This auto-boot capability, together with the Constrained Clustering mechanism, is implemented in the Multiple User Distributed Simulation (MUDS) system, a DCE for interactive simulation designed and implemented in the Multimedia and Virtual Reality Lab, Department of Computer Science, Tamkang University.
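A greedy heuristic conveys the flavor of the constrained-clustering-with-load-balance problem described above. The data layout and the least-loaded-feasible-machine rule are illustrative assumptions, not the thesis's algorithm:

```python
def assign_lps(lps, machines):
    """Greedy constrained placement: each logical process (LP) declares a set
    of required devices; assign it to the least-loaded machine whose device
    set covers the requirement. Names and structure are illustrative.

    lps: list of (name, cost, required_devices)
    machines: dict machine_name -> set of available devices"""
    load = {m: 0.0 for m in machines}
    placement = {}
    # Place the heaviest LPs first to keep the load spread tight.
    for name, cost, needs in sorted(lps, key=lambda lp: -lp[1]):
        candidates = [m for m, devs in machines.items() if needs <= devs]
        if not candidates:
            raise ValueError(f"no machine satisfies {name}'s requirements")
        target = min(candidates, key=load.__getitem__)
        placement[name] = target
        load[target] += cost
    return placement, load
```

A real solution would also bound the tolerated load imbalance and co-locate tightly communicating LPs, but the feasibility filter plus balance objective is the essence of the constrained-clustering formulation.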
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Falcone, Alberto, Alfredo Garro e Felice Crupi. "Distribution, Reuse and Interoperability of simulation models in heterogeneous distributed computing environments". Thesis, 2017. http://hdl.handle.net/10955/1864.

Texto completo da fonte
Resumo:
PhD program in Information and Communication Engineering for Pervasive Intelligent Environments, Cycle XXIX
Modeling and Simulation (M&S) is gaining a central role in several industrial domains, such as automotive, e-science, and aerospace, due to the increasing complexity of system requirements and thus of the related engineering problems. Specifically, M&S methods, tools, and techniques can effectively support the analysis and design of modern systems by enabling the evaluation and comparison of different design choices against requirements through virtual testing; this opportunity becomes crucial when complete, real tests are too expensive to perform in terms of cost, time, and other resources. Moreover, as systems result from the integration of components that are often designed and manufactured by different organizations belonging to different engineering domains (including mechanical, electrical, control, and software), great benefits can derive from the ability to perform simulations involving independently developed components running on different, possibly geographically distributed, machines. Indeed, distributed simulation promotes an effective cooperative, integrated, and concurrent approach to the analysis and design of complex systems. Although M&S offers many advantages related to the possibility of running controlled experiments on an artificial representation of a system, its practical use raises important issues such as (i) the difficulty of reusing existing simulation models; (ii) the lack of rules and procedures for making models created with different simulation environments interoperable; and (iii) the lack of mechanisms for executing simulation models in distributed and heterogeneous environments. There are many highly specialized simulation environments, both commercial and non-commercial, that allow the design and implementation of simulation models in specific domains.
However, a single simulation environment cannot manage all the aspects needed to model a system composed of several components. Typically, modeling and simulating such systems, whose behavior cannot be straightforwardly defined, derived, or easily analyzed from the behavior of their components, requires identifying and facing important research issues.
Università della Calabria
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Wen, Hsin-Hua, e 溫欣華. "Distributed Virtual Environment For Volume Based Surgical Simulation Via The World Wide Web". Thesis, 1999. http://ndltd.ncl.edu.tw/handle/17374001502051193470.

Texto completo da fonte
Resumo:
Master's thesis
Chung Yuan Christian University
Department of Information and Computer Engineering
Academic year 87 (ROC calendar)
3D rendering and surgical simulation by manipulating volumes help clinicians make more accurate diagnoses and let surgeons verify and modify surgical plans. Applying virtual-reality techniques to 3D rendering and surgical simulation improves accuracy thanks to the realism of the 3D images and of the simulation. Applying the hypermedia and distributed-computing techniques of the Internet lets users extend the utility of computer software and hardware. In this research, we develop techniques for propagating volume data changed by surgical simulation over the World Wide Web (WWW); distributed computing then lets us implement virtual surgery on general-purpose platforms. We implement the following routines to achieve this purpose. 1. Rewriting the volume-visualization and surgical-simulation functions as independent modules that can drive 3D input and output devices, so that users can execute these functions in a virtual-reality environment. 2. Developing a homepage whose applets plug in the functions described above, so that users can execute the volume-visualization and surgical-simulation functions, or view their results, through the hypermedia environment of the WWW. 3. Developing functions for propagating volume data between platforms, so that computing or rendering processes can be distributed to appropriate platforms while keeping data consistent.
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Wang, Shinn-Chih, e 王信智. "Design of A Real-time Interactive Visual Simulation on A Distributed Computing Environment". Thesis, 1996. http://ndltd.ncl.edu.tw/handle/89567546762591766255.

Texto completo da fonte
Resumo:
Master's thesis
Tamkang University
Department of Computer Science and Information Engineering
Academic year 84 (ROC calendar)
To make users feel immersed in an interactive visual simulation system, both the visual rendering and the handling of user input must be completed in real time. As the complexity of the virtual scene increases and users demand more realism from the virtual environment, high-performance computing becomes very important for virtual environments; supercomputers or special-purpose computers are often used, but these machines are expensive. As computers have evolved, both PCs and workstations have gained more power at lower cost. Hence, we can connect several PCs and workstations via a LAN to form a distributed high-speed computing environment that meets the requirements of real-time visual simulation. Based on this concept, several distributed real-time computing environments have been proposed over the years, such as Division's dVS, MIT's VETT, and SICS's DIVE. All of them allow multiple users but are confined to UNIX-based operating systems or special machines. In this work, we propose a distributed run-time environment that uses PCs or workstations running WIN32-based operating systems; with the help of DIS (Distributed Interactive Simulation), a multi-user virtual environment can be achieved. Petri-net (PN) theory allows a system to be modeled by a mathematical representation; the timed Petri net (TPN) is one kind of PN, widely used in recent years to analyze the performance of distributed computing systems and very useful for modeling parallel and distributed systems. We use the properties of TPNs to model the real-time visual simulation and find the parallel and conflicting relations among its tasks. These tasks are then partitioned into a set of logical processes (LPs), which are distributed among networked machines. A Job Dispatcher and a Communication Interface are also designed to communicate among and synchronize the distributed LPs.
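The Petri-net modeling mentioned above rests on the basic token-firing rule, which a few lines of Python can illustrate. This is untimed and purely illustrative; the thesis works with timed Petri nets carrying performance annotations:

```python
def fire_enabled(marking, transitions):
    """One step of the Petri-net firing rule: fire the first enabled
    transition. Each transition is (inputs, outputs), both dicts mapping
    place -> token count. Returns the fired transition's name, or None
    if the net is deadlocked."""
    for name, (inputs, outputs) in transitions.items():
        if all(marking.get(p, 0) >= n for p, n in inputs.items()):
            for p, n in inputs.items():       # consume input tokens
                marking[p] -= n
            for p, n in outputs.items():      # produce output tokens
                marking[p] = marking.get(p, 0) + n
            return name
    return None
```

Analyses like the one the abstract describes examine which transitions can be enabled concurrently (parallel tasks) and which compete for the same tokens (conflicting tasks).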
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

"Development of a distributed water quality model using advanced hydrologic simulation". Thesis, 2012. http://hdl.handle.net/1911/70473.

Texto completo da fonte
Resumo:
Cypress Creek is an urbanizing watershed in the Gulf Coast region of Texas that contributes the largest inflow of urban runoff containing suspended solids to Lake Houston, the primary source of drinking water for the City of Houston. Historical water quality data was statistically analyzed to characterize the watershed and its pollutant sources. It was determined that the current sampling program provides limited information on the complex behaviors of pollutant sources in both dry weather and rainfall events. In order to further investigate the dynamics of pollutant export from Cypress Creek to Lake Houston, fully distributed hydrologic and water quality models were developed and employed to simulate high frequency small storms. A fully distributed hydrologic model, Vflo(TM) , was used to model streamflow during small storm events in Cypress Creek. Accurately modeling small rainfall events, which have traditionally been difficult to model, is necessary for investigation and design of watershed management since small storms occur more frequently. An assessment of the model for multiple storms shows that using radar rainfall input produces results well matched to the observed streamflow for both volume and peak streamflow. Building on the accuracy and utility of distributed hydrologic modeling, a water quality model was developed to simulate buildup, washoff, and advective transport of a conservative pollutant. Coupled with the physically based Vflo(TM) hydrologic model, the pollutant transport model was used to simulate the washoff and transport of total suspended solids for multiple small storm events in Cypress Creek Watershed. The output of this distributed buildup and washoff model was compared to storm water quality sampling in order to assess the performance of the model and to further temporally and spatially characterize the storm events. 
This effort was the first step towards developing a fully distributed water quality model that can be applied to a wide variety of watersheds. It provides the framework for future incorporation of more sophisticated pollutant dynamics and spatially explicit evaluation of best management practices and land use dynamics. This provides an important tool and decision aid for watershed and resource management and thus efficient protection of source waters.
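The buildup/washoff dynamics described above are classically modeled with exponential laws. The following sketch is a generic illustration with made-up coefficients, not the calibrated Cypress Creek model:

```python
import math

def simulate_washoff(rain, dt_h=1.0, b_max=50.0, k_build=0.05, k_wash=0.2):
    """Classic exponential buildup/washoff for a conservative pollutant.
    All coefficients are illustrative, not calibrated values.
    rain: runoff intensity per step (mm/h); returns washoff load per step."""
    b = 0.0          # pollutant mass currently built up on the surface
    loads = []
    for r in rain:
        # Exponential buildup toward the asymptote b_max during every step.
        b += (b_max - b) * (1 - math.exp(-k_build * dt_h))
        # Washoff proportional to current buildup and runoff intensity.
        w = b * (1 - math.exp(-k_wash * r * dt_h))
        b -= w
        loads.append(w)
    return loads
```

In a distributed model these equations run per grid cell, and the resulting loads are routed downstream with the advective transport scheme coupled to the hydrologic model.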
Estilos ABNT, Harvard, Vancouver, APA, etc.