Dissertations / Theses on the topic 'System-level models'

Consult the top 50 dissertations / theses for your research on the topic 'System-level models.'


1

Chadha, Vikrampal. "Simulation of large-scale system-level models." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-12162009-020334/.

2

Nel, Christoffel Antonie. "The creation of nonlinear behavioral-level models for system level receiver simulation." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50128.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: The aim of this thesis was to investigate the use of behavioral level models in receiver simulations using the capabilities of Agilent's Advanced Design System. Behavioral level modeling has become increasingly attractive because it offers faster and easier results for system level simulations. The work in this thesis focused strongly on nonlinear measurements to characterize the various nonlinear phenomena that are present in amplifiers and mixers. Measurement automation software was developed to automate the process. An error correction technique was also developed to increase the accuracy of spectrum analyzer measurements. The measured data was used to implement the behavioral level amplifier and mixer models in ADS. The accuracy of the models was compared to measured data and the different available models were compared. Finally, the models were combined to realize different receivers and were used to perform typical receiver tests. These tests include gain and gain compression, two-tone intermodulation and spurious responses. The results are compared to measured data to test the accuracy and usefulness of the models and simulation techniques.
AFRIKAANSE OPSOMMING (English translation): The aim of this thesis was to investigate system-level behavioural models as they are offered in Agilent's Advanced Design System (ADS). Modelling the system-level behaviour of components and systems is attractive because it enables a high-level description of complex communication systems. Accurate system-level simulations will lead to rapid development and evaluation of new systems. The results obtained, however, depend on the availability of accurate system-level behavioural models. The thesis relied heavily on measurements to characterise the nonlinear behaviour of amplifiers and mixers. Measurement software was developed to automate the various measurements. Error correction for spectrum analyser measurements was also developed. The measured data was used to implement the nonlinear behavioural models in ADS. The models were used in simulations and the accuracy of the simulations was tested against measured data. The final part of the thesis uses the models to predict typical receiver characteristics. The following tests were performed: gain and compression, two-tone intermodulation and higher-order mixing products. The results of these tests were compared with measured data to compare the accuracy and usefulness of the different models.
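The nonlinear phenomena this abstract characterizes, gain compression and two-tone intermodulation, can be illustrated with a minimal memoryless behavioral model. The sketch below is a generic cubic nonlinearity, not the thesis's ADS models; the coefficients and tone frequencies are illustrative values.

```python
import math

def amplifier(v):
    """Memoryless cubic behavioral model: g1*v + g3*v**3 (illustrative
    coefficients); g3 < 0 produces gain compression and odd-order IM."""
    g1, g3 = 10.0, -0.8
    return g1 * v + g3 * v**3

fs, dur = 100_000.0, 0.1
n = int(fs * dur)
f1, f2 = 1000.0, 1100.0                      # two-tone excitation
t = [i / fs for i in range(n)]
v_out = [amplifier(0.5 * (math.cos(2 * math.pi * f1 * ti)
                          + math.cos(2 * math.pi * f2 * ti))) for ti in t]

def tone(f):
    """Output amplitude at frequency f via an on-bin DFT projection."""
    c = sum(v * math.cos(2 * math.pi * f * ti) for v, ti in zip(v_out, t))
    s = sum(v * math.sin(2 * math.pi * f * ti) for v, ti in zip(v_out, t))
    return 2.0 * math.hypot(c, s) / len(t)

# Third-order intermodulation products land at 2*f2 - f1 and 2*f1 - f2
print(round(tone(f1), 3), round(tone(2 * f2 - f1), 3))
```

The compressed fundamental (g1*A minus the cubic correction) and the in-band IM3 product at 2*f2 - f1 fall out directly; sweeping the input amplitude would trace the gain-compression curve the abstract mentions.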
3

Arnedo, Luis. "System Level Black-Box Models for DC-DC Converters." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/29193.

Abstract:
The aim of this work is to develop a two-port black-box dc-dc converter modeling methodology for system level simulation and analysis. The models do not require any information about the components, structure, or control parameters of the converter. Instead, all the information needed to build the models is collected from unterminated experimental frequency response function (FRF) measurements performed at the converter power terminals. These transfer functions are known as audiosusceptibility, back current gain, output impedance, and input admittance. The measurements are called unterminated because they do not contain any information about the source and/or the load dynamics. This work provides insights into how the source and the load affect FRF measurements and how to decouple those effects from the measurements. The actual linear time invariant model is obtained from the experimental FRFs via system identification. Because the two-port model obtained from a set of FRFs is linear, it is valid only in a specific operating region defined by the converter operating conditions. Therefore, to satisfy the need for models valid in a wide operating region, a model structure that combines a family of linear two-port models is proposed. One structure, known as the Wiener structure, is especially useful when the converter nonlinearities are reflected mainly in the steady state current and voltage values. The other structure is known as a polytopic structure, and it is able to capture nonlinearities that affect the transient and steady state converter behavior. The models are used for prediction of steady state and transient behavior of voltages and currents at the converter terminals. In addition, the models are useful for subsystem interaction and small signal stability assessment of interconnected dc distribution systems comprising commercially available converters.
This work presents for the first time simulation and stability analysis results of a system that combines dc-dc converters from two different manufacturers. All simulation results are compared against experimental results to verify the usefulness of the approach.
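The polytopic structure described above can be pictured as an operating-point-weighted blend of local linear models. The sketch below is a minimal illustration with invented first-order local models and inverse-distance weights; the dissertation's models are identified from measured FRFs, which this toy does not reproduce.

```python
# Hypothetical local linear models identified at two operating points;
# the coefficients are illustrative, not identified from a real converter.
MODELS = [
    {"op": 0.2, "a": 0.90, "b": 0.10},   # light-load model: y+ = a*y + b*u
    {"op": 0.8, "a": 0.80, "b": 0.20},   # heavy-load model
]

def weights(u):
    """Inverse-distance interpolation weights over the operating points."""
    d = [1.0 / (abs(m["op"] - u) + 1e-9) for m in MODELS]
    s = sum(d)
    return [di / s for di in d]

def simulate(u_seq, y0=0.0):
    """Polytopic simulation: blend the local models' one-step predictions."""
    y = y0
    for u in u_seq:
        w = weights(u)
        y = sum(wi * (m["a"] * y + m["b"] * u) for wi, m in zip(w, MODELS))
    return y

# Near u = 0.2 the light-load model dominates, so the steady state
# approaches b*u/(1-a) = 0.2 for that local model
print(round(simulate([0.2] * 200), 4))
```

Between the anchor operating points the blend interpolates smoothly, which is the behavior a single linear two-port model cannot capture.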
Ph. D.
4

Lai, Jimmy Chi-Ming. "Abstraction models at system level for networked interactive multimedia scripting." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38066.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 85-86).
by Jimmy Chi-Ming Lai.
M.Eng.
5

Montagna, Sara <1982>. "Multi-level models and infrastructures for simulating biological system development." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3415/.

Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from the morphogenetic processes, where autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent peculiar structures of the individual phenotype. Being able to reproduce the system's dynamics at different levels of such a hierarchy might be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises the thesis proposes a review of the different approaches already developed in modelling developmental biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reaction addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation.
The task is defined as an optimisation problem over the parameter space in which the objective function to be minimised is the distance between the output of the simulator and a target one. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The model goal is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene-expression data with spatial and temporal resolution, acquired from free online sources.
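The simulation engine mentioned above builds on Gillespie's direct method. A minimal sketch for a single reaction channel, A -> B at rate constant k, shows the core of the algorithm (sample an exponential waiting time from the total propensity, then fire a reaction); with one channel the channel-selection step is trivial, and the many-species/many-channels optimisations are omitted.

```python
import random

def gillespie_decay(n_a, k, t_end, seed=7):
    """Gillespie's direct method for the single channel A -> B at rate k.
    Returns the number of A molecules remaining at time t_end."""
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        a0 = k * n_a                 # total propensity
        t += rng.expovariate(a0)     # exponential time to the next event
        if t > t_end:
            break
        n_a -= 1                     # fire A -> B
    return n_a

# The mean of many runs approaches n0 * exp(-k * t_end), here ~368
print(gillespie_decay(1000, k=1.0, t_end=1.0))
```

Each trajectory is an exact sample of the underlying chemical master equation, which is what distinguishes this method from deterministic rate-equation integration.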
6

Patel, Hiren Dhanji. "Ingredients for Successful System Level Automation & Design Methodology." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/26825.

Abstract:
This dissertation addresses the problem of making system level design (SLD) methodology based on SystemC more useful to the complex embedded system design community by presenting a number of ingredients currently absent in the existing SLD methodologies and frameworks. The complexity of embedded systems has been increasing at a rapid rate due to the proliferation of desired functionality of such systems (e.g., cell phones, game consoles, hand-held devices, etc., are providing more features every few months), with device technology still riding the curve predicted by Moore's law. Design methodology is shifting slowly towards system level design (also called electronic system level (ESL)). A number of SLD languages and supporting frameworks are being proposed. SystemC is positioned as one of the dominant SLD languages. The various design automation tool vendors are proposing frameworks for supporting SystemC-based design methodologies. We believe that, compared to the necessity and potential of ESL, the success of the frameworks has been limited due to lack of support for a number of facilities and features in the languages and tool environments. This dissertation proposes, formulates, and provides proof of concept demonstrations of a number of ingredients that we have identified as essential for efficient and productive use of SystemC-based tools and techniques. These are heterogeneity in the form of multiple models of computation, behavioral hierarchy in addition to structural hierarchy, model-driven validation for SystemC designs, and a service-oriented tool integration environment. In particular, we define syntactic extensions to the SystemC language, semantic modifications, simulation algorithms, and precise semantics for model-based validation. For each of these we provide a reference implementation for further experimentation on the utility of these extensions.
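SystemC-style frameworks rest on a discrete-event simulation kernel. The toy sketch below illustrates only that core idea, a timestamp-ordered event queue whose actions may schedule further events; it is not SystemC's actual scheduler (no delta cycles, channels, or process sensitivity), and the process names are invented.

```python
import heapq

def run(events):
    """Toy discrete-event kernel: repeatedly pop the earliest
    (time, name, action) event; each action may schedule new events."""
    heap = list(events)
    heapq.heapify(heap)
    log = []
    while heap:
        now, name, action = heapq.heappop(heap)
        log.append((now, name))
        for ev in action(now):
            heapq.heappush(heap, ev)
    return log

def driver(t):
    # each driver firing schedules a monitor reaction one time unit later
    return [(t + 1.0, "monitor", lambda _t: [])] if t < 3 else []

log = run([(0.0, "driver", driver), (2.0, "driver", driver)])
print([time for time, _ in log])
```

Real SLD kernels add evaluate/update phases and multiple models of computation on top of this queue, which is precisely where the dissertation's proposed extensions plug in.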
Ph. D.
7

Park, Yong Hwa. "An evaluation methodology for the level of service at the airport landside system." Thesis, Online version, 1994. http://ethos.bl.uk/OrderDetails.do?did=1&uin=uk.bl.ethos.260890.

8

Zaum, Daniel [Verfasser]. "System Level Analysis of Mixed-Signal Systems using State Space Models / Daniel Zaum." München : Verlag Dr. Hut, 2011. http://d-nb.info/1017353360/34.

9

La, Fratta Patrick Anthony. "Evaluating the Design and Performance of a Single-Chip Parallel Computer Using System-Level Models and Methodology." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32424.

Abstract:
As single-chip systems are predicted to soon contain over a billion transistors, design methodologies are evolving dramatically to account for the fast evolution of technologies and product properties. Novel methodologies feature the exploration of design alternatives early in development, the support for IPs, and early error detection, all with a decreasing time-to-market. In order to accommodate these product complexities and development needs, the modeling levels at which designers are working have quickly changed, as development at higher levels of abstraction allows for faster simulations of system models and earlier estimates of system performance while considering design trade-offs. Recent design advancements to exploit instruction-level parallelism on single-processor computer systems have become exceedingly complex, and modern applications are presenting an increasing potential to be partitioned and parallelized at the thread level. The new Single-Chip, Message-Passing (SCMP) parallel computer is a tightly coupled mesh of processing nodes that is designed to exploit thread-level parallelism as efficiently as possible. By minimizing the latency of communication among processors, memory access time, and the time for context switching, the system designer will undoubtedly observe an overall performance increase. This study presents in-depth evaluations and quantitative analyses of various design and performance aspects of SCMP through the development of abstract hardware models by following a formalized, well-defined methodology. The performance evaluations are taken through benchmark simulation while taking into account system-level communication and synchronization among nodes as well as node-level timing and interaction amongst node components. Through the exploration of alternatives and optimization of the components within the SCMP models, maximum system performance in the hardware implementation can be achieved.
Master of Science
10

Shahid, Hamid. "Integration of System-Level Design and Mechanical Design Models in the Development of Mechanical Systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-53061.

Abstract:
Modern-day systems are becoming complex due to the growing needs of the market. These systems contain various subsystems developed by different groups of engineers. In particular, all mechatronic systems involve different mechanical, electrical and software parts developed by multidisciplinary teams of engineers from different backgrounds. Designing these complex systems requires effective management of the engineering and system-integration information across all the involved disciplines. Model Based System Engineering (MBSE) is one of the effective ways of managing the engineering design process. In MBSE, design information is formally stored in the form of models, which allows better control of requirements throughout the development life cycle and provides the ability to perform better analysis. Engineers usually are experts in their own discipline, where they utilize modeling languages and tools with a domain-specific focus. This creation of models with a domain-specific focus does not provide a view of the system as a whole. Hence, in order to have a complete system view, it is required to provide means of information transfer across different domains, through models developed in different modeling languages and the tools supporting them. Model integration is one of the ways to integrate and transfer model information across different domains. An approach for model integration is proposed, with the focus on the integration between system level models created in SysML and mechanical CAD (MCAD) models. The approach utilizes the feature of SysML to create domain specific profiles and presents a SysML profile for the MCAD domain. This profile aids in establishing a mapping between SysML and MCAD concepts, as it allows the extension of SysML constructs to represent MCAD concepts in SysML. Model transformations are used to transform a model created through the SysML profile for MCAD into the corresponding model in an MCAD tool, and vice versa.
A robot model is presented to exemplify the working of the approach and to explain the integration of a mechanical design model with a system-level design model and vice versa. The approach presented in this thesis depicts a scalable concept, which can be extended towards the integration of other domains with MCAD, by building new relations and profiles in SysML. This approach aids in co-evolution of a system model during domain-specific development activities, hence providing better means to understand the system as a whole.
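The mapping between SysML and MCAD concepts can be pictured as a recursive tree transformation. The sketch below uses an invented stereotype-to-concept table and element shape, not the thesis's actual profile or any real SysML/MCAD tool API.

```python
# Hypothetical stereotype-to-MCAD mapping for illustration only; the
# thesis defines its own SysML profile, which this table does not reproduce.
SYSML_TO_MCAD = {"Block": "Part", "PartProperty": "Component", "Port": "Interface"}

def to_mcad(element):
    """Recursively transform a SysML-like element tree into MCAD terms."""
    return {
        "type": SYSML_TO_MCAD[element["stereotype"]],
        "name": element["name"],
        "children": [to_mcad(c) for c in element.get("children", [])],
    }

robot = {
    "stereotype": "Block",
    "name": "Robot",
    "children": [{"stereotype": "PartProperty", "name": "Arm"}],
}
part = to_mcad(robot)
print(part["type"], part["children"][0]["type"])
```

Running the table in reverse would give the MCAD-to-SysML direction, which is the "vice versa" transformation the abstract describes.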
11

Nimmatoori, Praneeth. "Comparison of Several Project Level Pavement Condition Prediction Models." University of Toledo / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1578491583921183.

12

Alhroob, Aysh M. "Software test case generation from system models and specification. Use of the UML diagrams and High Level Petri Nets models for developing software test cases." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5453.

Abstract:
The main part of software testing is the generation of test cases suitable for software system testing. The quality of the test cases plays a major role in reducing the time of software system testing and subsequently reduces the cost. The test cases, in model design stages, are used to detect faults before implementation. This early detection offers more flexibility to correct faults in early stages rather than later ones. Finding tests that cover both static and dynamic software system model specifications is one of the challenges in software testing. The static and dynamic specifications can be represented efficiently by Unified Modelling Language (UML) class diagrams and sequence diagrams. The work in this thesis shows that High Level Petri Nets (HLPN) can represent both of them in one model. Using a proper model to represent the software specifications is essential to generate proper test cases. The research presented in this thesis introduces novel and automated test case generation techniques that can be used within software system design testing. Furthermore, this research introduces an efficient automated technique to generate a formal software system model (HLPN) from semi-formal models (UML diagrams). The work in this thesis consists of four stages: (1) generating test cases from the class diagram and Object Constraint Language (OCL) that can be used for testing the software system's static specifications (the structure); (2) combining the class diagram, sequence diagram and OCL to generate test cases able to cover both static and dynamic specifications; (3) generating HLPN automatically from single or multiple sequence diagrams; (4) generating test cases from HLPN. The test cases generated in this work cover both the structure and the behaviour of the software system model.
In the first two phases of this work, the class diagram and sequence diagram are decomposed into nodes (edges), which are linked by a Classes Hierarchy Table (CHu) and an Edges Relationships Table (ERT). The linking process is based on the class and edge relationships. The relationships of the software system components are controlled by a consistency-checking technique, and the detection of these relationships has been automated. The test cases were generated based on these interrelationships. The test cases have been reduced to a minimum number, and the best test case is selected at every stage. The degree of similarity between test cases is used to discard similar test cases in order to avoid redundancy. The transformation from UML sequence diagram(s) to HLPN simplifies the software system model and introduces a formal model rather than a semi-formal one. After decomposing the sequence diagram into Combined Fragments, the proposed technique converts each Combined Fragment into the corresponding block in HLPN. These blocks are connected together in a Combined Fragments Net (CFN) to construct the HLPN model. Experimentation with the proposed techniques shows their effectiveness in covering most of the software system specifications.
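The similarity-based redundancy removal described above can be sketched with a Jaccard measure over the model edges a test case covers. The threshold and the toy suite below are illustrative assumptions, not values taken from the thesis.

```python
def jaccard(a, b):
    """Similarity of two test cases viewed as sets of covered model edges."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def reduce_suite(cases, threshold=0.7):
    """Keep a test case only if it is not too similar to one already kept."""
    kept = []
    for case in cases:
        if all(jaccard(case, k) < threshold for k in kept):
            kept.append(case)
    return kept

# Toy suite: each case is the list of class-diagram edges it exercises
suite = [
    ["e1", "e2", "e3"],
    ["e1", "e2", "e3", "e4"],   # 0.75 similar to the first, so dropped
    ["e5", "e6"],
]
print(len(reduce_suite(suite)))
```

Lowering the threshold prunes more aggressively; a coverage check after reduction would confirm that no specification edge was lost.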
13

Schürmans, Stefan [Verfasser], Rainer [Akademischer Betreuer] Leupers, and Tobias [Akademischer Betreuer] Gemmeke. "Power Estimation on Electronic System Level using Linear Power Models / Stefan Schürmans ; Rainer Leupers, Tobias Gemmeke." Aachen : Universitätsbibliothek der RWTH Aachen, 2018. http://d-nb.info/1171323808/34.

15

Fischer, Marco. "A Formal Fault Model for Component-Based Models of Embedded Systems." Doctoral thesis, Dresden : TUDpress, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2960240&prov=M&dok_var=1&dok_ext=htm.

16

Shrestha, Pratigya. "Inverter-based Control to Enhance the Resiliency of a Distribution System." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/93764.

Abstract:
Due to the increase in the integration of renewable energy into the grid, there is a critical need to adapt the existing methods and techniques for grid operation. With increased renewable energy, mainly wind and photovoltaics, there is a reduction in inertia as the percentage of inverter-based resources increases. This can bring about issues with the maintenance and operation of the grid with respect to frequency and voltage. Thus, the ability of inverters to regulate the voltage and frequency becomes significant. Under normal operation of the system, the ability of the inverters to support the grid frequency and voltage while following the grid is sufficient. However, the operation of the inverters during a resiliency mode, under which there is an extended outage of the utility system, will require the inverter functionality to go beyond support and actually maintain the voltage and frequency as done by synchronous machines, acting as the grid-forming inverter. This project focuses on the operation of grid-forming sources based on the virtual synchronous generator to regulate the voltage and frequency in the absence of the grid voltage through decentralized control of the inverters in the distribution feeder. With the most recent interconnection standard for distributed generation, IEEE 1547-2018, inverter-based generation can be used for this purpose. The simulations are performed in the Simulink environment and the case studies are done on the IEEE 13 node test-feeder.
Master of Science
With the increase in renewable energy sources in the present grid, the established methods for grid operation need to be updated due to the changes that large amounts of renewable energy bring to the system. While the conventional resources in the power system were mainly synchronous generators, which had an inherent characteristic for frequency support and regulation due to their inertia, this characteristic can be lacking in many renewable energy sources, which are usually inverter-based. At present, the commonly adopted function for the inverters is to follow the grid, which is suitable during normal operation of the power system. However, during emergency scenarios, when the utility is disconnected and a part of the system has to operate independently, the inverters need to be able to regulate both the voltage and frequency on their own. In this project an inverter-based control, termed the virtual synchronous generator, has been studied; it mimics the well-established controls for conventional synchronous machines so that the inverter-based renewable resource appears similar to a conventional generator from the point of view of the grid in terms of the electrical quantities. The utilization of this type of control for operating a part of the feeder, with each inverter-based resource controlling its output in a decentralized manner, is studied.
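The virtual synchronous generator mentioned above emulates the swing equation of a conventional machine in the inverter's control loop. A minimal per-unit sketch, with assumed inertia and damping constants (H and D are illustrative values, not the thesis's parameters):

```python
def vsg_step(omega, p_ref, p_out, h=5.0, d=20.0, omega0=1.0, dt=0.001):
    """One Euler step of the emulated swing equation (per unit):
    2*H * d(omega)/dt = P_ref - P_out - D * (omega - omega0)."""
    d_omega = (p_ref - p_out - d * (omega - omega0)) / (2.0 * h)
    return omega + dt * d_omega

# Load step: the inverter supplies 0.1 pu more than its power reference,
# so the virtual rotor slows until the damping term balances the deficit,
# settling near omega0 - 0.1/D = 0.995 pu
omega = 1.0
for _ in range(20000):               # 20 s of simulated time
    omega = vsg_step(omega, p_ref=0.5, p_out=0.6)
print(round(omega, 4))
```

The damping term plays the role of a frequency droop, so several such inverters on one feeder share a load change in a decentralized way, which is the operating mode the abstract describes.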
18

Ungureanu, George. "Automatic Software Synthesis from High-Level ForSyDe Models Targeting Massively Parallel Processors." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-127832.

Abstract:
In the past decade we have witnessed an abrupt shift to parallel computing subsequent to the increasing demand for performance and functionality that can no longer be satisfied by conventional paradigms. As a consequence, the abstraction gap between the applications and the underlying hardware increased, spurring both industry and academia into several research directions. This thesis project aims at analyzing some of these directions in order to offer a solution for bridging the abstraction gap between the description of a problem at a functional level and the implementation on a heterogeneous parallel platform using ForSyDe, a formal design methodology. This report treats applications employing data-parallel and time-parallel computation and regards NVIDIA CUDA-enabled GPGPUs as the main backend platform. The report proposes a heuristic transformation-and-refinement process based on analysis methods and design decisions to automate and aid in a correct-by-design backend code synthesis. Its purpose is to identify potential data parallelism and time parallelism in a high-level system. Furthermore, based on a basic platform model, the algorithm load-balances and maps the execution onto the best computation resources in an automated design flow. This design flow will be embedded into an already existing tool, f2cc (ForSyDe-to-CUDA C), and tested for correctness on an industrial-scale image processing application aimed at monitoring inkjet print-head reliability.
19

Kazi, Tarannum Ayesha. "An investigation of drivers' self-reported level of trust in adaptive-cruise-control and their conceptual models of the system." Thesis, Brunel University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436500.

20

Raymond, Alexander William. "Investigation of microparticle to system level phenomena in thermally activated adsorption heat pumps." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34682.

Abstract:
Heat actuated adsorption heat pumps offer the opportunity to improve overall energy efficiency in waste heat applications by eliminating shaft work requirements accompanying vapor compression cycles. The coefficient of performance (COP) in adsorption heat pumps is generally low. The objective of this thesis is to model the adsorption system to gain critical insight into how its performance can be improved. Because adsorption heat pumps are intermittent devices, which induce cooling by adsorbing refrigerant in a sorption bed heat/mass exchanger, transient models must be used to predict performance. In this thesis, such models are developed at the adsorbent particle level, heat/mass exchanger component level and system level. Adsorption heat pump modeling is a coupled heat and mass transfer problem. Intra-particle mass transfer resistance and sorption bed heat transfer resistance are shown to be significant, but for very fine particle sizes, inter-particle resistance may also be important. The diameter of the adsorbent particle in a packed bed is optimized to balance inter- and intra-particle resistances and improve sorption rate. In the literature, the linear driving force (LDF) approximation for intra-particle mass transfer is commonly used in place of the Fickian diffusion equation to reduce computation time; however, it is shown that the error in uptake prediction associated with the LDF depends on the working pair, half-cycle time, adsorbent particle radius, and operating temperatures at hand. Different methods for enhancing sorption bed heat/mass transfer have been proposed in the literature including the use of binders, adsorbent compacting, and complex extended surface geometries. To maintain high reliability, the simple, robust annular-finned-tube geometry with packed adsorbent is specified in this work. The effects of tube diameter, fin pitch and fin height on thermal conductance, metal/adsorbent mass ratio and COP are studied. 
As one might expect, many closely spaced fins, or high fin density, yields high thermal conductance; however, it is found that the increased inert metal mass associated with the high fin density diminishes COP. It is also found that thin adsorbent layers with low effective conduction resistance lead to high thermal conductance. As adsorbent layer thickness decreases, the relative importance of tube-side convective resistance rises, so mini-channel sized tubes are used. After selecting the proper tube geometry, an overall thermal conductance is calculated for use in a lumped-parameter sorption bed simulation. To evaluate the accuracy of the lumped-parameter approach, a distributed parameter sorption bed simulation is developed for comparison. Using the finite difference method, the distributed parameter model is used to track temperature and refrigerant distributions in the finned tube and adsorbent layer. The distributed-parameter tube model is shown to be in agreement with the lumped-parameter model, thus independently verifying the overall UA calculation and the lumped-parameter sorption bed model. After evaluating the accuracy of the lumped-parameter model, it is used to develop a system-level heat pump simulation. This simulation is used to investigate a non-recuperative two-bed heat pump containing activated carbon fiber-ethanol and silica gel-water working pairs. The two-bed configuration is investigated because it yields a desirable compromise between the number of components (heat exchangers, pumps, valves, etc.) and steady cooling rate. For non-recuperative two-bed adsorption heat pumps, the average COP prediction in the literature is 0.39 for experiments and 0.44 for models. It is important to improve the COP in mobile waste heat applications because without high COP, the available waste heat during startup or idle may be insufficient to deliver the desired cooling duty. 
In this thesis, a COP of 0.53 is predicted for the non-recuperative, silica gel-water chiller. If thermal energy recovery is incorporated into the cycle, a COP as high as 0.64 is predicted for a 90, 35 and 7.0°C source, ambient and average evaporator temperature, respectively. The improvement in COP over heat pumps appearing in the literature is attributed to the adsorbent particle size optimization and careful selection of sorption bed heat exchanger geometry.
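The linear driving force (LDF) approximation discussed in this abstract is commonly written in the following standard form (a generic sketch using the usual symbols of the adsorption literature, e.g. Glueckauf's correlation; the notation is not taken from the thesis itself):

```latex
% LDF approximation for intra-particle mass transfer:
% the uptake rate is proportional to the distance from equilibrium.
\[
\frac{d\bar{q}}{dt} = k_{\mathrm{LDF}}\left(q^{*} - \bar{q}\right),
\qquad
k_{\mathrm{LDF}} = \frac{15\,D_{e}}{r_{p}^{2}}
\]
% \bar{q}: average adsorbed-phase loading in the particle
% q^*: equilibrium loading at the current pressure and temperature
% D_e: effective intra-particle diffusivity, r_p: particle radius
```

This form makes the abstract's observation concrete: the LDF error depends on the particle radius through $r_p^{2}$ and on the working pair through $D_e$ and $q^{*}$.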
APA, Harvard, Vancouver, ISO, and other styles
21

Specht, Emilena. "An approach for embedded software generation based in declarative alloy models." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/22812.

Full text
Abstract:
Este trabalho propõe uma nova abordagem para o desenvolvimento de sistemas embarcados, através da combinação da abstração e propriedades de verificação de modelos da linguagem declarativa Alloy com a ampla aceitação de Java na indústria. A abordagem surge no contexto de que a automação de software no domínio embarcado tornou-se extremamente necessária, uma vez que atualmente a maior parte do tempo de desenvolvimento é gasta no projeto de software de produtos tão restritos em termos de recursos. As ferramentas de automação de software embarcado devem atender a demanda por produtividade e manutenibilidade, mas respeitar restrições naturais deste tipo de sistema, tais como espaço de memória, potência e desempenho. As ferramentas de automação de projeto lidam com produtividade e manutenibilidade ao permitir especificações de alto nível, tarefa difícil de atender no domínio embarcado devido ao comportamento misto de muitas aplicações embarcadas. Abordagens que promovem meios para verificação formal também são atrativas, embora geralmente sejam difíceis de usar, e por este motivo não são de grande auxílio na tarefa de reduzir o tempo de chegada ao mercado do produto. Através do uso de Alloy, baseada em lógica de primeira-ordem, é possível obter especificações em altonível e verificação formal de modelos com uma única linguagem. Este trabalho apresenta a poderosa abstração proporcionada pela linguagem Alloy em aplicações embarcadas, assim como regras para obter automaticamente código Java a partir de modelos Alloy. A geração de código Java a partir de modelos Alloy, combinada a uma ferramenta de estimativa, provê exploração de espaço de projeto, atendendo assim as fortes restrições do projeto de software embarcado, o que normalmente não é contemplado pela engenharia de software tradicional.
This work proposes a new approach for embedded software development, by combining the abstraction and model verification properties of the Alloy declarative language with the broad acceptance of Java in industry. The approach comes into play since software automation in the embedded domain has become a major need, as currently most of the development time is spent designing software for such resource-constrained products. Design automation tools for embedded systems must meet the demand for productivity and maintainability, but constraints such as memory, power and performance must still be considered. Design automation tools deal with productivity and maintainability by allowing high-level specifications, which is hard to accomplish in the embedded domain due to the mixed-behavior nature of many embedded applications. Approaches that provide means for formal verification are also attractive, but their usage is usually not straightforward, and for this reason they are not that helpful in dealing with time-to-market constraints. By using Alloy, based on first-order logic, it is possible to obtain high-level specifications and formal model verification with a single language. This work shows the powerful abstraction provided by the Alloy language for embedded applications, as well as rules for automatically obtaining Java code from Alloy models. The Java source code generation from Alloy models, combined with an estimation tool, provides design space exploration to match tight embedded software design constraints, which is usually not taken into account by standard software engineering techniques.
APA, Harvard, Vancouver, ISO, and other styles
22

Rullmann, Markus. "Models, Design Methods and Tools for Improved Partial Dynamic Reconfiguration." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-61526.

Full text
Abstract:
Partial dynamic reconfiguration of FPGAs has attracted great attention from both academia and industry in recent years. With this technique, the functionality of the programmable devices can be adapted at runtime to changing requirements. The approach allows designers to use FPGAs more efficiently: e.g., FPGA resources can be time-shared between different functions, and the functions themselves can be adapted to changing workloads at runtime. Thus partial dynamic reconfiguration enables a unique combination of software-like flexibility and hardware-like performance. Still there exists no common understanding of how to assess the overhead introduced by partial dynamic reconfiguration. This dissertation presents a new cost model for both the runtime and the memory overhead that results from partial dynamic reconfiguration. It is shown how the model can be incorporated into all stages of the design optimization for reconfigurable hardware. In particular, digital circuits can be mapped onto FPGAs such that only small fractions of the hardware must be reconfigured at runtime, which saves time, memory, and energy. The design optimization is most efficient if it is applied during high-level synthesis. This book describes how the cost model has been integrated into a new high-level synthesis tool. The tool allows the designer to trade off FPGA resource use against reconfiguration overhead. It is shown that partial reconfiguration causes only small overhead if the design is optimized with regard to reconfiguration cost. A wide range of experimental results is provided that demonstrates the benefits of the applied method.
Partielle dynamische Rekonfiguration von FPGAs hat in den letzten Jahren große Aufmerksamkeit von Wissenschaft und Industrie auf sich gezogen. Die Technik erlaubt es, die Funktionalität von progammierbaren Bausteinen zur Laufzeit an veränderte Anforderungen anzupassen. Dynamische Rekonfiguration erlaubt es Entwicklern, FPGAs effizienter einzusetzen: z.B. können Ressourcen für verschiedene Funktionen wiederverwendet werden und die Funktionen selbst können zur Laufzeit an veränderte Verarbeitungsschritte angepasst werden. Insgesamt erlaubt partielle dynamische Rekonfiguration eine einzigartige Kombination von software-artiger Flexibilität und hardware-artiger Leistungsfähigkeit. Bis heute gibt es keine Übereinkunft darüber, wie der zusätzliche Aufwand, der durch partielle dynamische Rekonfiguration verursacht wird, zu bewerten ist. Diese Dissertation führt ein neues Kostenmodell für Laufzeit und Speicherbedarf ein, welche durch partielle dynamische Rekonfiguration verursacht wird. Es wird aufgezeigt, wie das Modell in alle Ebenen der Entwurfsoptimierung für rekonfigurierbare Hardware einbezogen werden kann. Insbesondere wird gezeigt, wie digitale Schaltungen derart auf FPGAs abgebildet werden können, sodass nur wenig Ressourcen der Hardware zur Laufzeit rekonfiguriert werden müssen. Dadurch kann Zeit, Speicher und Energie eingespart werden. Die Entwurfsoptimierung ist am effektivsten, wenn sie auf der Ebene der High-Level-Synthese angewendet wird. Diese Arbeit beschreibt, wie das Kostenmodell in ein neuartiges Werkzeug für die High-Level-Synthese integriert wurde. Das Werkzeug erlaubt es, beim Entwurf die Nutzung von FPGA-Ressourcen gegen den Rekonfigurationsaufwand abzuwägen. Es wird gezeigt, dass partielle Rekonfiguration nur wenig Kosten verursacht, wenn der Entwurf bezüglich Rekonfigurationskosten optimiert wird. Eine Anzahl von Beispielen und experimentellen Ergebnissen belegt die Vorteile der angewendeten Methodik
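The runtime and memory side of such a reconfiguration cost model can be illustrated with a deliberately simplified sketch (hypothetical numbers and function names, not the dissertation's actual model): only the configuration frames that differ between two module variants must be rewritten, so mapping similar circuits onto similar resources shrinks both the partial bitstream and the reconfiguration time.

```python
def reconfig_overhead(frames_a, frames_b, bytes_per_frame=412, port_bytes_per_s=400e6):
    """Illustrative partial-reconfiguration cost: only frames that differ
    between configuration A and configuration B must be rewritten.
    frames_a / frames_b map frame addresses to their configuration payloads;
    bytes_per_frame and port throughput are hypothetical device parameters."""
    differing = [addr for addr in frames_b
                 if frames_a.get(addr) != frames_b[addr]]
    bitstream_bytes = len(differing) * bytes_per_frame   # memory overhead
    seconds = bitstream_bytes / port_bytes_per_s         # runtime overhead
    return bitstream_bytes, seconds

# Two module variants sharing most of their frames: only 2 of 5 frames differ.
a = {0: "add", 1: "mul", 2: "reg", 3: "bus", 4: "io"}
b = {0: "add", 1: "mul", 2: "sub", 3: "bus", 4: "uart"}
size, t = reconfig_overhead(a, b)
```

Under this toy model, optimizing the mapping so that more frames coincide directly reduces both overhead terms, which is the trade-off the synthesis tool exposes.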
APA, Harvard, Vancouver, ISO, and other styles
23

Patel, Hiren Dhanji. "HEMLOCK: HEterogeneous ModeL Of Computation Kernel for SystemC." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/9632.

Full text
Abstract:
As SystemC gains popularity as a System Level Design Language (SLDL) for System-On-Chip (SOC) designs, heterogeneous modelling and efficient simulation become increasingly important. The key in making an SLDL heterogeneous is the facility to express different Models Of Computation (MOC). Currently, all SystemC models employ a Discrete-Event simulation kernel, making it difficult to express most MOCs without specific designer guidelines. This often makes it unnatural to express different MOCs in SystemC. For the simulation framework, this sometimes results in unnecessary delta cycles for models away from the Discrete-Event MOC, hindering the simulation performance of the model. Our goal is to extend SystemC's simulation framework to allow for better modelling expressiveness and efficiency for the Synchronous Data Flow (SDF) MOC. The SDF MOC follows a paradigm where the production and consumption rates of data by a function block are known a priori. These systems are common in Digital Signal Processing applications where relative sample rates are specified for every component. Knowledge of these rates enables the use of static scheduling. When compared to dynamic scheduling of SDF models, we experience a noticeable improvement in simulation efficiency. We implement an extension to the SystemC kernel that exploits such static scheduling for SDF models and propose designer style guidelines for modelers to use this extension. The modelling paradigm becomes more natural for SDF, which results in better simulation efficiency. We will distribute our implementation to the SystemC community to demonstrate that SystemC can be a heterogeneous SLDL.
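The static scheduling this abstract exploits rests on solving the SDF balance equations for a repetition vector: for every edge, the tokens produced per schedule period must equal the tokens consumed. A minimal sketch of that computation (illustrative only, not the thesis's SystemC kernel extension):

```python
from fractions import Fraction
from math import gcd, lcm

def repetition_vector(actors, edges):
    """edges: (producer, consumer, tokens_produced, tokens_consumed).
    Solves q[src] * prod == q[dst] * cons for every edge of a connected
    SDF graph; a consistent solution means a finite static schedule exists."""
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:                      # propagate rates across the graph
        changed = False
        for src, dst, p, c in edges:
            if src in q and dst not in q:
                q[dst] = q[src] * p / c
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * c / p
                changed = True
    for src, dst, p, c in edges:        # rate-consistency check
        if q[src] * p != q[dst] * c:
            raise ValueError("inconsistent rates: no static schedule")
    scale = lcm(*(f.denominator for f in q.values()))
    counts = {a: int(f * scale) for a, f in q.items()}
    shrink = gcd(*counts.values())      # smallest integer firing counts
    return {a: n // shrink for a, n in counts.items()}

# A 2-to-3 rate converter feeding a 1-in-2-out stage:
sched = repetition_vector(["A", "B", "C"], [("A", "B", 2, 3), ("B", "C", 1, 2)])
```

Once the firing counts are known at elaboration time, the actors can be invoked in a fixed order without any delta-cycle bookkeeping, which is exactly where the simulation speedup comes from.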
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
24

Hu, Chih-Chieh. "Mechanistic modeling of evaporating thin liquid film instability on a bwr fuel rod with parallel and cross vapor flow." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28148.

Full text
Abstract:
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Abdel-Khalik, Said; Committee Member: Ammar, Mostafa H.; Committee Member: Ghiaasiaan, S. Mostafa; Committee Member: Hertel, Nolan E.; Committee Member: Liu, Yingjie.
APA, Harvard, Vancouver, ISO, and other styles
25

Saman, Nariman Goran. "A Framework for Secure Structural Adaptation." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78658.

Full text
Abstract:
A (self-)adaptive system is a system that can dynamically adapt its behavior or structure during execution in response to changes in its environment or in the system itself. From a security standpoint, there has been some research pertaining to (self-)adaptive systems in general, but not enough attention has been paid to the adaptation itself. The security of systems can be reasoned about using threat models to discover security issues in the system. Essentially, that entails abstracting away details not relevant to the security of the system in order to focus on the aspects important to security. Threat models often enable us to reason about the security of a system quantitatively using security metrics. The structural adaptation process of a (self-)adaptive system occurs based on a reconfiguration plan, a set of steps to follow from the initial state (configuration) to the final state. Usually, the reconfiguration plan consists of multiple strategies for the structural adaptation process, each comprising several steps, with each step representing a specific configuration of the (self-)adaptive system. Different reconfiguration strategies have different security levels, since each strategy passes through a different sequence of configurations, each with its own security level. To the best of our knowledge, no existing approaches aim to guide the reconfiguration process toward the most secure available reconfiguration strategy, and the security issues associated with the structural reconfiguration process itself have not been studied explicitly. In this work, based on an in-depth literature survey, we propose several metrics to measure the security of configurations, reconfiguration strategies and reconfiguration plans based on graph-based threat models. Additionally, we have implemented a prototype to demonstrate our approach and automate the process.
Finally, we have evaluated our approach based on a case study of our own making. The preliminary results expose certain security issues during the structural adaptation process and demonstrate the effectiveness of our proposed metrics.
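One concrete way to realize such a metric (an illustrative sketch, not the thesis's actual definitions) is to score each configuration by the length of the shortest attack path in its graph-based threat model, and to score a reconfiguration strategy by its weakest intermediate configuration:

```python
from collections import deque

def shortest_attack_path(attack_graph, entry, asset):
    """BFS over a directed attack graph {node: [reachable nodes]};
    returns the minimum number of attack steps, or None if unreachable."""
    seen, frontier = {entry}, deque([(entry, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == asset:
            return dist
        for nxt in attack_graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

def strategy_security(configurations, entry, asset):
    """A strategy is only as secure as the weakest configuration it passes
    through: a shorter attack path means a less secure configuration."""
    scores = [shortest_attack_path(g, entry, asset) for g in configurations]
    reachable = [s for s in scores if s is not None]
    return min(reachable) if reachable else float("inf")

# Strategy 1 briefly exposes the asset directly; strategy 2 never does.
s1 = [{"net": ["app"], "app": ["db"]}, {"net": ["db"]}]
s2 = [{"net": ["app"], "app": ["db"]}, {"net": ["app2"], "app2": ["db"]}]
```

Comparing `strategy_security(s1, ...)` and `strategy_security(s2, ...)` would then let a planner prefer the reconfiguration path whose worst intermediate step is still the most secure.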
APA, Harvard, Vancouver, ISO, and other styles
26

Park, Byung-Goo. "A system-level testability allocation model /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Nemouchi, Yakoub. "Model-based Testing of Operating System-Level Security Mechanisms." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS061/document.

Full text
Abstract:
Le test à base de modèle, en particulier test basé sur des assistants à la preuve, réduit de façon transparente l'écart entre la théorie, le modèle formel, et l’implémentation d'un système informatique. Actuellement, les techniques de tests offrent une possibilité d'interagir directement avec de "vrais" systèmes : via différentes propriétés formelles, les tests peuvent être dérivés et exécutés sur le système sous test. Convenablement, l'ensemble du processus peut être entièrement automatisé. Le but de cette thèse est de créer un environnement de test de séquence à base de modèle pour les programmes séquentiels et concurrents. Tout d'abord une théorie générique sur les monades est présentée, qui est indépendante de tout programme ou système informatique. Il se trouve que notre théorie basée sur les monades est assez expressive pour couvrir tous les comportements et les concepts de tests. En particulier, nous considérons ici : les exécutions séquentielles, les exécutions concurrentes, les exécutions synchronisées, les exécutions avec interruptions. Sur le plan conceptuel, la théorie apporte des notions comme la notion raffinement de test, les cas de tests abstraits, les cas de test concrets, les oracles de test, les scénarios de test, les données de tests, les pilotes de tests, les relations de conformités et les critères de couverture dans un cadre théorique et pratique. Dans ce cadre, des règles de raffinement de comportements et d'exécution symbolique sont élaborées pour le cas générique, puis affinées et utilisées pour des systèmes complexes spécifique. Comme application pour notre théorie, nous allons instancier notre environnement par un modèle séquentiel d'un microprocesseur appelé VAMP développé au cours du projet Verisoft. Pour le cas d'étude sur la concurrence, nous allons utiliser notre environnement pour modéliser et tester l'API IPC d'un système d'exploitation industriel appelé PikeOS.Notre environnement est implémenté en Isabelle / HOL. 
Ainsi, notre approche bénéficie directement des modèles, des outils et des preuves formelles de ce système
Formal methods can be understood as the art of applying mathematical reasoning to the modeling, analysis and verification of computer systems. Three main verification approaches can be distinguished: verification based on deductive proofs, model checking and model-based testing. Model-based testing, in particular in its radical form of theorem proving-based testing (Brucker et al., 2012), bridges seamlessly the gap between the theory, the formal model, and the implementation of a system. Actually, theorem proving based testing techniques offer a possibility to directly interact with "real" systems: via different formal properties, tests can be derived and executed on the system under test. Suitably supported, the entire process can be fully automated. The purpose of this thesis is to create a model-based sequence testing environment for both sequential and concurrent programs. First a generic testing theory based on monads is presented, which is independent of any concrete program or computer system. It turns out that it is still expressive enough to cover all common system behaviours and testing concepts. In particular, we consider here: sequential executions, concurrent executions, synchronised executions, executions with abort. On the conceptual side, it brings notions like test refinements, abstract test cases, concrete test cases, test oracles, test scenarios, test data, test drivers, conformance relations and coverage criteria into one theoretical and practical framework. In this framework, both behavioural refinement rules and symbolic execution rules are developed for the generic case and then refined and used for specific complex systems. As an application, we instantiate our framework with an existing sequential model of a microprocessor called VAMP developed during the Verisoft project. For the concurrent case, we use our framework to model and test the IPC API of a real industrial operating system called PikeOS. Our framework is implemented in Isabelle/HOL.
Thus, our approach directly benefits from the existing models, tools, and formal proofs in this system.
APA, Harvard, Vancouver, ISO, and other styles
28

Saha, Bhaskar. "A model-based reasoning architecture for system-level fault diagnosis." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22653.

Full text
Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Vachtsevanos, George; Committee Member: Liang, Steven; Committee Member: Michaels, Thomas; Committee Member: Vela, Patricio; Committee Member: Wardi, Yorai.
APA, Harvard, Vancouver, ISO, and other styles
29

Ammer, Michael Johannes [Verfasser], Linus [Akademischer Betreuer] Maurer, Linus [Gutachter] Maurer, Martin [Gutachter] Sauter, and Bernd [Gutachter] Deutschmann. "A Methodology to Generate Transient Behavioral Models of Complete ICs out of Design Data for ESD and Electrical Stress Simulation on System Level / Michael Johannes Ammer ; Gutachter: Linus Maurer, Martin Sauter, Bernd Deutschmann ; Akademischer Betreuer: Linus Maurer ; Universität der Bundeswehr München, Fakultät für Elektrotechnik und Informationstechnik." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2020. http://d-nb.info/1229997016/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Blömeling, Frank. "Multi-level substructuring methods for model order reduction." Berlin dissertation.de, 2008. http://d-nb.info/988537184/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Shi, Jianlin. "Model and tool integration in high level design of embedded system /." Stockholm : Maskinkontruktion, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4589.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Toczek, Tomasz. "Une approche fonctionnelle pour la conception et l'exploration architecturale de systèmes numériques." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00665104.

Full text
Abstract:
This manuscript presents a system-level design method based on typed functional programming, aimed at mitigating some of the problems that complicate the development of modern digital systems, such as their large size or the wide variety of the blocks that compose them. We propose a set of mechanisms allowing several distinct description formalisms ("models of computation"), potentially at different abstraction levels, to be mixed within a single design. In addition, we give the designer the ability to directly express the explorable parameters of each sub-part of the design, and then to determine acceptable values for them through a partially or fully automated exploration step carried out at the system scale. The gains brought by these new strategies are illustrated on several examples.
APA, Harvard, Vancouver, ISO, and other styles
33

Gladigau, Jens [Verfasser], and Teich [Akademischer Betreuer] Jürgen. "Combining Formal Model-Based System-Level Design with SystemC Transaction Level Modeling / Jens Gladigau. Betreuer: Teich Jürgen." Erlangen : Universitätsbibliothek der Universität Erlangen-Nürnberg, 2012. http://d-nb.info/1028958757/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Ran. "Decision Support Models for A Few Critical Problems in Transportation System Design and Operations." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6669.

Full text
Abstract:
The transportation system is one of the key functioning components of modern society and plays an important role in the circulation of commodities and the growth of the economy. It is not only a major factor in the efficiency of large-scale, complex industrial logistics, but is also closely related to everyone's daily life. The goals of an ideal transportation system include improving mobility, accessibility and safety, enhancing the coordination of different transportation modes and reducing the impact on the environment; all of these require sophisticated design and planning that considers different factors, balances tradeoffs and maintains efficiency. Hence, the design and planning of transportation systems are considered to be among the most critical problems in transportation research. Transportation system planning and design is a sequential procedure that generally contains two levels: strategic and operational. This dissertation conducts extensive research covering both levels: on the strategic planning level, two network design problems are studied, and on the operational level, routing and scheduling problems are analyzed. The main objective of this study is to utilize operations research techniques to provide managerial decision support for designing reliable and efficient transportation systems. Specifically, three practical problems in transportation system design and operations are explored. First, we collaborate with a public transit company to study the bus scheduling problem for a fleet with multiple types of vehicles. By considering their different cost characteristics, we develop an integer program and an exact algorithm to solve the problem efficiently. Next, we examine the network design problem in emergency medical services and develop a novel two-stage robust optimization framework to deal with uncertainty, then propose an approximate algorithm that is fast and efficient in solving practical instances.
Finally, we investigate a major drawback of the vehicle sharing program network design problem in previous research and provide a counterintuitive finding that could result in unrealistic solutions. A new pessimistic model as well as a customized computational scheme are then introduced. We benchmark the performance of the new model against the existing model on several prototypical network structures. The results show that our proposed models and solution methods offer powerful decision support tools for decision makers to design, build and maintain efficient and reliable transportation systems.
APA, Harvard, Vancouver, ISO, and other styles
35

Beltz, Jeffrey R. "Transitioning Middle Level Students Through a Tuition Model in Pennsylvania's Public School System." Youngstown State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1541160306624089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Muhammad, Naeem. "Suitability of the Requirements Abstraction Model (RAM) Requirements for High Level System Testing." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4127.

Full text
Abstract:
In market-driven requirements engineering, requirements are elicited from various internal and external sources. These sources may include engineers, marketing teams, customers, etc. This results in a collection of requirements at multiple levels of abstraction. The Requirements Abstraction Model (RAM) is a Market Driven Requirements Engineering (MDRE) model that helps in managing requirements by organizing them at four levels of abstraction (product, feature, function and component). The model is adaptable and can be tailored to meet the needs of various organizations; e.g., the number of abstraction levels can be changed according to the needs of the organization. Software requirements are an important source of information when developing high-level tests (acceptance and system-level tests). In order to place a requirement on a suitable level, workup activities (producing an abstraction or breaking down a requirement) can be performed on the requirement. Such activities can affect the test cases designed from the requirements. Organizations willing to adopt the RAM need to know how suitable RAM requirements are for designing high-level tests. This master thesis analyzes requirements at the product, feature, function and component levels to evaluate their suitability for supporting the creation of high-level system tests. The analysis includes designing test cases from requirements at different levels and evaluating how much of the information needed in the test cases is available in the RAM requirements. Test cases are graded on a 5 to 1 scale according to the level of detail they contain, 5 for well detailed and 1 for very incomplete. Twenty requirements have been selected for this document analysis: five from each level (product, feature, function and component). Organizations can utilize the results of this study when deciding whether to adopt the RAM model.
Decomposition of the tests developed from the requirements is another area explored in this study. Test decomposition involves dividing tests into sub-tests. Benefits of test decomposition include better resource utilization, meeting time-to-market goals and better test prioritization. This study explores how tests designed from the RAM requirements support test decomposition and help in realizing the benefits listed above.
APA, Harvard, Vancouver, ISO, and other styles
37

Harrath, Nesrine. "A stepwise compositional approach to model and analyze system C designs at the transactional level and the delta cycle level." Thesis, Paris, CNAM, 2014. http://www.theses.fr/2014CNAM0957/document.

Full text
Abstract:
Les systèmes embarqués sont de plus en plus intégrés dans les applications temps réel actuelles. Ils sont généralement constitués de composants matériels et logiciels profondément Intégrés mais hétérogènes. Ces composants sont développés sous des contraintes très strictes. En conséquence, le travail des ingénieurs de conception est devenu plus difficile. Pour répondre aux normes de haute qualité dans les systèmes embarqués de nos jours et pour satisfaire aux besoins quotidiens de l'industrie, l'automatisation du processus de développement de ces systèmes prend de plus en plus d'ampleur. Un défi majeur est de développer une approche automatisée qui peut être utilisée pour la vérification intégrée et la validation de systèmes complexes et hétérogènes.Dans le cadre de cette thèse, nous proposons une nouvelle approche compositionnelle pour la modélisation et la vérification des systèmes complexes décrits en langage SystemC. Cette approche est basée sur le modèle des SystemC Waiting State Automata (WSA). Les SystemC Waiting State Automata sont des automates permettant de modéliser le comportement abstrait des systèmes matériels et logiciels décrits en SystemC tout en préservant la sémantique de l'ordonnanceur SystemC au niveau des cycles temporels et au niveau des delta-cycles. Ce modèle permet de réduire la complexité de la modélisation des systèmes complexes due au problème de l'explosion combinatoire tout en restant fidèle au système initial. Ce modèle est compositionnel et supporte le rafinement. De plus, il est étendu par des paramètres temps ainsi que des compteurs afin de prendre en compte les aspects relatifs à la temporalité et aux propriétés fonctionnelles comme notamment la qualité de service. Nous proposons ensuite une chaîne de construction automatique des WSAs à partir de la description SystemC. Cette construction repose sur l'exécution symbolique et l'abstraction des prédicats. 
Nous proposons un ensemble d'algorithmes de composition et de réduction de ces automates afin de pouvoir étudier, analyser et vérifier les comportements concurrents des systèmes décrits ainsi que les échanges de données entre les différents composants. Nous proposons enfin d'appliquer notre approche dans le cadre de la modélisation et la simulation des systèmes complexes. Ensuite l'expérimenter pour donner une estimation du pire temps d'exécution (worst-case execution time (WCET)) en utilisant le modèle du Timed SystemC WSA. Enfin, on définit l'application des techniques du model checking pour prouver la correction de l'analyse abstraite de notre approche
Embedded systems are increasingly integrated into existing real-time applications. They are usually composed of deeply integrated but heterogeneous hardware and software components. These components are developed under strict constraints. Accordingly, the work of design engineers has become trickier and more challenging. To meet the high quality standards of today's embedded systems and to satisfy the rising industrial demands, the automation of the development process of those systems is gaining more and more importance. A major challenge is to develop an automated approach that can be used for the integrated verification and validation of complex and heterogeneous HW/SW systems. In this thesis, we propose a new compositional approach to model and verify hardware and software written in the SystemC language. This approach is based on the SystemC Waiting State Automata (WSA). The SystemC Waiting State Automata are used to model the abstract behavior of hardware or software systems described in SystemC. They preserve the semantics of the SystemC scheduler at the temporal and the delta-cycle level. This model reduces the complexity of modeling complex systems, caused by the state explosion problem, while remaining faithful to the original system. The SystemC waiting state automaton is also compositional and supports refinement. In addition, this model is extended with parameters such as time and counters in order to take into account further aspects like temporality and other extra-functional properties such as QoS. In this thesis, we propose a stepwise approach for automatically extracting the SystemC WSAs from SystemC descriptions. This construction is based on symbolic execution together with predicate abstraction. We propose a set of algorithms to symbolically compose and reduce the SystemC WSAs in order to study, analyze and verify the concurrent behavior of systems as well as the data exchange between various components.
We then propose to use the SystemC WSA to model and simulate hardware and software systems, and to compute the worst-case execution time (WCET) using the Timed SystemC WSA. Finally, we define how to apply model checking techniques to prove the correctness of the abstract analysis.
APA, Harvard, Vancouver, ISO, and other styles
38

Tamssaouet, Ferhat. "Towards system-level prognostics : modeling, uncertainty propagation and system remaining useful life prediction." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0079.

Full text
Abstract:
Prognostics is the process of predicting the remaining useful life (RUL) of components, subsystems or systems. Until now, however, prognostics has often been addressed at the component level, without taking into account the interactions between components and the impact of the environment, which can lead to mispredicting the failure time of complex systems. In this work, a system-level prognostics approach is proposed. It is based on a new modeling framework, the inoperability input-output model (IIM), which makes it possible to account for interactions between components and the effects of the mission profile, and which can be applied to heterogeneous systems. A new online methodology for parameter estimation (based on the gradient descent algorithm) and system-level RUL (SRUL) prediction using particle filters (PF) is then proposed. In detail, the health state of the system's components is estimated and predicted probabilistically using PF. In the event of consecutive divergence between the prior and posterior estimates of the system health state, the proposed estimation method is used to correct and adapt the IIM parameters. Finally, the developed methodology was applied to a realistic industrial system, the Tennessee Eastman Process, and allowed the SRUL to be predicted in a reasonable computing time.
Prognostics is the process of predicting the remaining useful life (RUL) of components, subsystems, or systems. Until now, however, prognostics has often been approached from a component view, without considering interactions between components and effects of the environment, leading to mispredictions of complex systems' failure times. In this work, a system-level prognostics approach is proposed. This approach is based on a new modeling framework, the inoperability input-output model (IIM), which tackles the issue of interactions between components and mission profile effects, and can be applied to heterogeneous systems. Then, a new methodology for online joint system RUL (SRUL) prediction and model parameter estimation is developed based on particle filtering (PF) and gradient descent (GD). In detail, the state of health of system components is estimated and predicted in a probabilistic manner using PF. In the case of consecutive discrepancies between the prior and posterior estimates of the system health state, the proposed estimation method is used to correct and adapt the IIM parameters. Finally, the developed methodology is verified on a realistic industrial system: the Tennessee Eastman Process. The results obtained highlight its effectiveness in predicting the SRUL in reasonable computing time.
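The particle-filter idea summarized in this abstract, estimating a latent health state from noisy observations and extrapolating it to a failure threshold, can be illustrated with a minimal sketch. This is not the thesis's IIM-based implementation: the scalar linear-drift degradation model, the noise levels and the failure threshold below are all illustrative assumptions.

```python
import math
import random

def particle_filter_rul(observations, n_particles=500, threshold=1.0,
                        drift=0.05, noise=0.02, obs_noise=0.05, seed=0):
    """Track a scalar health indicator with a particle filter, then predict
    the RUL as the mean number of future steps until the failure threshold
    is crossed (assuming the deterministic drift continues)."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles  # all particles start healthy
    for z in observations:
        # propagate: linear degradation drift plus process noise
        particles = [x + drift + rng.gauss(0.0, noise) for x in particles]
        # weight each particle by the Gaussian likelihood of the observation
        weights = [math.exp(-((z - x) ** 2) / (2.0 * obs_noise ** 2))
                   for x in particles]
        total = sum(weights) or 1e-300  # avoid division by zero
        weights = [w / total for w in weights]
        # systematic resampling keeps the particle set focused on likely states
        step = 1.0 / n_particles
        u = rng.random() * step
        cum, j, resampled = weights[0], 0, []
        for i in range(n_particles):
            p = u + i * step
            while p > cum and j < n_particles - 1:
                j += 1
                cum += weights[j]
            resampled.append(particles[j])
        particles = resampled
    # first-passage prediction: steps left until each particle hits threshold
    ruls = [max(0.0, (threshold - x) / drift) for x in particles]
    return sum(ruls) / n_particles
```

A system-level version would replace the scalar state with the coupled component states of the IIM and adapt its parameters online, as the thesis does with gradient descent.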
APA, Harvard, Vancouver, ISO, and other styles
39

Gupta, Vishakha. "Coordinated system level resource management for heterogeneous many-core platforms." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42750.

Full text
Abstract:
A challenge posed by future computer architectures is the efficient exploitation of their many and sometimes heterogeneous computational cores. This challenge is exacerbated by the multiple facilities for data movement and sharing across cores resident on such platforms. To answer the question of how systems software should treat heterogeneous resources, this dissertation describes an approach that (1) creates a common manageable pool for all the resources present in the platform, and then (2) provides virtual machines (VMs) with multiple `personalities', flexibly mapped to and efficiently run on the heterogeneous underlying hardware. A VM's personality is its execution context on the different types of available processing resources usable by the VM. We provide mechanisms for making such platforms manageable and evaluate coordinated scheduling policies for mapping different VM personalities on heterogeneous hardware. Towards that end, this dissertation contributes technologies that include (1) restructuring hypervisor and system functions to create high performance environments that enable flexibility of execution and data sharing, (2) scheduling and other resource management infrastructure for supporting diverse application needs and heterogeneous platform characteristics, and (3) hypervisor level policies to permit efficient and coordinated resource usage and sharing. Experimental evaluations on multiple heterogeneous platforms, like one comprised of x86-based cores with attached NVIDIA accelerators and others with asymmetric elements on chip, demonstrate the utility of the approach and its ability to efficiently host diverse applications and resource management methods.
APA, Harvard, Vancouver, ISO, and other styles
40

Invergo, Brandon M. 1982. "A system-level, molecular evolutionary analysis of mammalian phototransduction." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/145482.

Full text
Abstract:
Phototransduction is the biochemical process by which a light stimulus is converted to a neuronal signal. The process functions through complex interactions between many proteins, which work in concert to tightly control the dynamics of the photoresponse. The primary aim of this thesis is to describe how the topology and kinetics of these interactions have given rise to detectable patterns of molecular evolution. To this end, a secondary aim is to develop a comprehensive mathematical model of mammalian phototransduction, first through the improvement of an existing model of the amphibian system and then through the re-tuning of that model to fit mammalian data. The results show a striking importance of the signal recovery-related proteins in shaping the photoresponse. This is reflected in relaxed evolutionary constraint on those proteins that exert the greatest dynamic influence. Meanwhile, the proteins most central to the process, while less important dynamically, are strongly constrained due to their essentiality in proper signal transduction.
Phototransduction is the biochemical process by which a light stimulus is converted into a neuronal signal. The process operates through complex interactions among many proteins, which work together to tightly control the dynamics of the photoresponse. The main objective of this thesis is to describe how the topology and kinetics of these interactions have given rise to detectable patterns of molecular evolution. To this end, a secondary objective is the development of a comprehensive mathematical model of mammalian phototransduction, first by improving an existing model of the amphibian system and then by re-tuning that model to fit mammalian data. The results show a striking importance of the signal-recovery proteins in shaping the photoresponse. This is reflected in relaxed evolutionary constraint on the proteins that exert the greatest dynamic influence. Meanwhile, the proteins most central to the process, while less important dynamically, are strongly constrained due to their essentiality in proper signal transduction.
APA, Harvard, Vancouver, ISO, and other styles
41

Liang, Feng. "Modeling in Modelica and SysML of System Engineering at Scania Applied to Fuel Level Display." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84829.

Full text
Abstract:
The main objective of this thesis is to introduce a four-perspective structure in order to provide a solution for traceability and dependability in the system design phase. The traceability between different perspectives helps engineers form a clear picture of the whole system before moving on to the real implementation. The Fuel Level Display system from a Scania truck is used as a case study to offer insights into the approach. A four-perspective structure is built first in order to analyze the traceability between the different viewpoints. After implementing the Fuel Level Display system in Modelica, a verification scenario is specified to perform a complete requirement verification process for the system design against its requirements.
APA, Harvard, Vancouver, ISO, and other styles
42

Alshekhly, Zoubida, and Namra Gill. "Proof-of-concept of Model-based testing based on an UML-model of a water-level measurement system." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20548.

Full text
Abstract:
Software testing is a very important phase in software development as it minimizes risks in a software system; however, it consumes time and can be very expensive. With automatic test case generation, time consumption and cost can be reduced. Model-based testing is a method to test a software system with a model of the system's behaviour. Automatic test case generation is often considered a favorable support in model-based testing. In this work, the concept of model-based testing is explored along with testing the embedded part of a water-level measurement system (WLM) to investigate the efficiency of model-based testing on a software system. To this end, a model-based testing tool, MoMut::UML, is used to generate the test cases on the UML model of the WLM system, which is built in a UML modeling environment, Eclipse-Papyrus. However, MoMut::UML implements a special type of model-based testing technique, model-based mutation testing, which injects faults into the UML model and generates test data from the fault-based model. In this way, the behaviour of the system under test, only the UML model of the water-level measurement system, is tested.
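The core idea of mutation testing, injecting a small fault and checking whether any test input distinguishes the mutant from the original, can be sketched without MoMut::UML or a UML model. The water-level alarm function and its flipped-operator mutant below are illustrative assumptions, not code from the thesis.

```python
def water_level_alarm(level, high=80):
    """Original behaviour: alarm at or above the high mark."""
    return level >= high

def water_level_alarm_mutant(level, high=80):
    """Hand-made mutant: the relational operator is flipped (>= became >)."""
    return level > high

def kills_mutant(test_inputs):
    """A test suite 'kills' the mutant if some input makes the mutant's
    output differ from the original's."""
    return any(water_level_alarm(x) != water_level_alarm_mutant(x)
               for x in test_inputs)
```

Only a test at the boundary value (level exactly 80) distinguishes the two functions, which is precisely why fault-based generation tends to produce boundary-probing test data.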
APA, Harvard, Vancouver, ISO, and other styles
43

Gill, Janet Ann. "A model linking safety, threat and other critical causal factors to their system-level mitigators." Thesis, Queen Mary, University of London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sinclair, Frazer Hamilton. "Community level consequences of adaptive management through climate matching : oak galls as a model system." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7684.

Full text
Abstract:
In the present century, ecosystems across the globe will be subject to profound changes in climate. Forests are expected to be particularly sensitive to such change as the long life span of trees limits the potential for rapid adaptation. In order to preserve commercial viability and the essential ecosystem services provided by forests, there has been much interest in strategies for managing the adaptation of trees to their climatic environment. Climate Matching has emerged as one such strategy, whereby climate models are used to identify provenances (tree populations at a particular locality) with seed expected to be well adapted to the future conditions of a particular planting site. Debate continues about the feasibility and merit of this and other approaches, but it has yet to be demonstrated that the underlying assumptions of Climate Matching are valid for focal European tree species. Furthermore, a potentially major omission thus far has been consideration of how the Climate Matching strategy might influence associated organisms. Given the widely demonstrated bottom-up effects of foundation species genotype that have emerged from the field of community genetics, it is possible that planting seed of non-local provenance could affect forest organisms such as insect herbivores. In this thesis, I investigate the underlying assumptions of Climate Matching and its community level consequences using a model system of cynipid oak galls on Quercus petraea. Following a general introduction to Climate Matching and the study system, in Chapter 2 I use data from a provenance trial of Q. petraea in France to explore a central assumption of the Climate Matching strategy: that provenances of focal tree species show climate associated variation in adaptive phenotypic traits. In Chapter 3, I explore correlations between these phenotypic traits and the abundance, diversity, and community composition of an associated guild of specialist gall-inducing herbivores.
Tree phenological traits in particular showed strong patterns of adaptation to climatic gradients, and influenced the abundance and community structure of galling species. However, as the response to non-local tree provenances was not strongly negative, it was considered unlikely that mixed planting of local and Climate Matched provenances would have a severe impact on the gallwasp community. Having assessed the bottom-up effects of provenance phenotypic variation on the galling community, my ultimate aim is to extend the analysis to include associated hymenopteran inquilines and parasitoids. However, interpretation of effects at this level is hindered by taxonomic uncertainty, with a growing appreciation that morpho-taxa may not represent independently evolving lineages (i.e. 'true' species). In Chapters 4 & 5 I therefore develop approaches for addressing taxonomic uncertainty with this ultimate aim in mind. In Chapter 4, I apply a DNA barcoding approach to parasitoid and inquiline specimens reared from the provenance trial, and compare taxa based on barcodes with those based on morphology to identify points of taxonomic uncertainty. I also investigate the extent to which networks based on morphological and molecular taxa support contrasting conclusions about network properties. In Chapter 5 I explore the potential for molecular-based resolution of species-level taxonomic error in a challenging group of parasitoids: the genus Cecidostiba. Beginning with a framework of single-locus DNA barcoding, I use data from multiple nuclear loci to reveal the existence of cryptic species. Finally, in Chapter 6 I explore the practicalities of Climate Matching in light of my empirical results, and suggest fruitful avenues for further research.
APA, Harvard, Vancouver, ISO, and other styles
45

Chang, Biao. "Spatial analysis of sea level rise associated with climate change." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49062.

Full text
Abstract:
Sea level rise (SLR) is one of the most damaging impacts associated with climate change. The objective of this study is to develop a comprehensive framework to identify the spatial patterns of sea level in the historical records, project regional mean sea levels in the future, and assess the corresponding impacts on coastal communities. The first part of the study suggests a spatial pattern recognition methodology to characterize the spatial variations of sea level and to investigate the sea level footprints of climatic signals. A technique based on an artificial neural network is proposed to reconstruct average sea levels for the characteristic regions identified. In the second part of the study, a spatial dynamic system model (DSM) is developed to simulate and project the changes in regional sea levels and sea surface temperatures (SST) under different development scenarios of the world. The highest sea levels are predicted under scenario A1FI, ranging from 71 cm to 86 cm (relative to the 1990 global mean sea level); the lowest predicted sea levels are under scenario B1, ranging from 51 cm to 64 cm (relative to the 1990 global mean sea level). Predicted sea levels and SSTs of the Indian Ocean are significantly lower than those of the Pacific and the Atlantic Ocean under all six scenarios. The last part of this dissertation assesses the inundation impacts of projected regional SLR on three representative coastal U.S. states through a geographic information system (GIS) analysis. Critical issues in the inundation impact assessment process are identified and discussed.
APA, Harvard, Vancouver, ISO, and other styles
46

Ibrahim, Shire Mohammed. "Participatory system dynamics modelling approach to safe and efficient staffing level management within hospital pharmacies." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/34790.

Full text
Abstract:
With increasingly complex safety-critical systems like healthcare being developed and managed, there is a need for a tool that allows us to understand their complexity, design better strategies and guide effective change. System dynamics (SD) has been widely used in modelling across a range of applications from socio-economic to engineering systems, but its potential has not yet been fully realised as a tool for understanding trade-off dynamics between safety and efficiency in healthcare. SD has the potential to provide balanced and trustworthy insights into strategic decision making. Participatory SD modelling and learning is particularly important in healthcare, since problems in healthcare are difficult to comprehend due to their complexity, the involvement of multiple stakeholders in decision making and the fragmented structure of delivery systems. Participatory SD modelling triangulates stakeholder expertise, data and simulation of implementation plans prior to attempting change. It provides decision-makers with an evaluation and learning tool to analyse the impacts of changes and determine which input data is most likely to achieve desired outcomes. This thesis aims to examine the feasibility of applying a participatory SD modelling approach to safe and efficient staffing level management within hospital pharmacies, and to evaluate the utility and usability of the participatory SD modelling approach as a learning method. A case study was conducted looking at trade-offs between dispensing backlog (efficiency) and dispensing errors (safety) in a hospital pharmacy dispensary in an English teaching hospital. A participatory modelling approach was employed, in which stakeholders from the hospital pharmacy dispensary were engaged in developing an integrated qualitative conceptual model. The model was constructed using focus group sessions with 16 practitioners (labelling and checking practitioners), the literature, and hospital pharmacy databases.
Based on the conceptual model, a formal quantitative simulation model was then developed using an SD simulation approach, allowing different scenarios and strategies to be identified and tested. Besides the baseline or business-as-usual scenario, two additional scenarios (hospital winter pressures; various staffing arrangements, interruptions and fatigue) identified by the pharmacist team were simulated and tested using a custom simulation platform (Forio, a user-friendly GUI) to enable stakeholders to play out the likely consequences of the intervention scenarios. We carried out a focus group-based survey of 21 participants working in the hospital pharmacy dispensaries to evaluate the applicability, utility and usability of participatory SD, and how it enhanced group learning and the building of a shared vision for problems within the hospital dispensaries. Findings from the simulation illustrate the knock-on impact rework has on dispensing errors, which is often missing from traditional linear model-based approaches. This potentially downward-spiralling knock-on effect makes it more challenging to deal with demand variability, for example due to hospital winter pressures. The results provide pharmacy management with in-depth insights into the potential downward-spiralling knock-on effects of high workload and the potential challenges in dealing with demand variability. Results and simulated scenarios reveal that it is better to have a fixed, adequate staff number throughout the day to keep backlog and dispensing errors to a minimum than to call in additional staff to combat a growing backlog; and that whilst having a significant number of trainees might be cost efficient, it has a detrimental effect on dispensing errors (safety), as the amount of rework done to correct the errors increases and contributes to the growing backlog. Finally, capacity depletion initiated by high workload (over 85% of total workload), even in short bursts, has a significant effect on the amount of rework.
Evaluative feedback revealed that participatory SD modelling can help support consensus agreement, thus giving stakeholders a deeper understanding of the complex interactions in the systems they strive to manage. The model introduced an intervention to pharmacy management by changing their mental models of how hospital winter pressures, various staffing arrangements, interruptions and fatigue affect productivity and safety. Although the outcome of the process is the model as an artefact, we concluded that the main benefit is the significant mental-model change on how hospital winter pressures, various staffing arrangements, interruptions and fatigue are interconnected, as derived from the participants' involvement and their interactions with the GUI scenarios. The research contributes to the advancement of the participatory SD modelling approach within healthcare, which until recently has been dominated by linear reductionist approaches, by evaluating its utility and usability as a learning method. Methodologically, this is one of the few studies to apply a participatory SD approach as a modelling tool for understanding trade-off dynamics between safety and efficiency in healthcare. Practically, this research provides stakeholders, from pharmacists to managers, with decision support tools in the form of a GUI-based platform showcasing the integrated conceptual and simulation model for staffing level management in hospital pharmacy.
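The stock-and-flow reasoning described in this abstract, where a rework loop feeds dispensing errors back into the backlog, can be sketched in a few lines. The demand, capacity and error-rate figures below are illustrative assumptions, not parameters from the study's calibrated model.

```python
def simulate_dispensary(hours, demand, staff, error_rate=0.02,
                        capacity_per_staff=10.0):
    """Minimal stock-and-flow model: prescriptions accumulate in a backlog,
    staff dispense at limited capacity, and a fraction of dispensed items
    are errors that re-enter the backlog as rework."""
    backlog = 0.0
    total_errors = 0.0
    history = []
    for t in range(hours):
        backlog += demand(t)                                  # inflow
        dispensed = min(backlog, staff * capacity_per_staff)  # outflow
        errors = dispensed * error_rate                       # rework loop
        backlog += errors - dispensed
        total_errors += errors
        history.append(backlog)
    return history, total_errors
```

Even this toy version reproduces the qualitative finding: when demand exceeds capacity, the rework loop adds to an already growing backlog, while adequate fixed staffing keeps the backlog near zero.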
APA, Harvard, Vancouver, ISO, and other styles
47

Deb, Abhijit Kumar. "System Design for DSP Applications with the MASIC Methodology." Doctoral thesis, KTH, Microelectronics and Information Technology, IMIT, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3820.

Full text
Abstract:

The difficulties of system design are persistentlyincreasing due to the integration of more functionality on asystem, time-to-market pressure, productivity gap, andperformance requirements. To address the system designproblems, design methodologies build system models at higherabstraction level. However, the design task to map an abstractfunctional model on a system architecture is nontrivial becausethe architecture contains a wide variety of system componentsand interconnection topology, and a given functionality can berealized in various ways depending on cost-performancetradeoffs. Therefore, a system design methodology must provideadequate design steps to map the abstract functionality on adetailed architecture.

MASIC—Maths to ASIC—is a system design methodologytargeting DSP applications. In MASIC, we begin with afunctional model of the system. Next, the architecturaldecisions are captured to map the functionality on the systemarchitecture. We present a systematic approach to classify thearchitectural decisions in two categories: system leveldecisions (SLDs) and implementation level decisions (ILDs). Asa result of this categorization, we only need to consider asubset of the decisions at once. To capture these decisions inan abstract way, we present three transaction level models(TLMs) in the context of DSP systems. These TLMs capture thedesign decisions using abstract transactions where timing ismodeled only to describe the major synchronization events. As aresult the functionality can be mapped to the systemarchitecture without meticulous details. Also, the artifacts ofthe design decisions in terms of delay can be simulatedquickly. Thus the MASIC approach saves both modeling andsimulation time. It also facilitates the reuse of predesignedhardware and software components.

To capture and inject the architectural decisionsefficiently, we present the grammar based language of MASIC.This language effectively helps us to implement the stepspertaining to the methodology. A Petri net based simulationtechnique is developed, which avoids the need to compile theMASIC description to VHDL for the sake of simulation. We alsopresent a divide and conquer based approach to verify the MASICmodel of a system.

Keywords:System design methodology, Signal processingsystems, Design decision, Communication, Computation, Modeldevelopment, Transaction level model, System design language,Grammar, MASIC.

APA, Harvard, Vancouver, ISO, and other styles
48

Park, Nai Soo. "Frequency Management Database Model (FMDM) for the Korean Army Communication System at the regiment unit level." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25961.

Full text
Abstract:
This thesis provides a Frequency Management Database Model (FMDM) for the Republic of Korea Army (ROKA) Communication System. The FMDM uses a personal computer to increase the efficiency of the frequency management system at the regiment unit level in the Korean Army. A signal officer in the ROKA can use the FMDM for the allocation, planning, and distribution of radio frequencies, in order to achieve the optimum use of the frequency spectrum. A discussion of security has been included in this thesis so that both the hardware and software of the FMDM are protected. Keywords: Frequency management, Database model, Korean army communications, Communications traffic
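One core task a frequency-management database supports is assigning channels so that mutually interfering units never share one. A minimal greedy sketch of that allocation step is shown below; the unit names, conflict graph and channel list are illustrative assumptions, not details of the FMDM.

```python
def assign_frequencies(units, conflicts, channels):
    """Greedily give each unit the first channel not already used by any
    conflicting (mutually interfering) unit."""
    assignment = {}
    for unit in units:
        used = {assignment[v] for v in conflicts.get(unit, ())
                if v in assignment}
        for ch in channels:
            if ch not in used:
                assignment[unit] = ch
                break
        else:
            raise ValueError(f"no interference-free channel for {unit}")
    return assignment
```

This is the classic graph-coloring view of spectrum allocation: units are nodes, interference relations are edges, and channels are colors.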
APA, Harvard, Vancouver, ISO, and other styles
49

Yoon, Sungwon. "Limited-data tomography : a level-set reconstruction algorithm, a tomosynthesis system model, and an anthropomorphic phantom /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Cain, Mark J. "A GAMS-based model of the U.S. Army Wartime Ammunition Distribution System for the Corps level." Thesis, Monterey, California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/23244.

Full text
Abstract:
Approved for public release; distribution is unlimited
The U.S. Army Wartime Ammunition Distribution System (WADS) will experience an unprecedented demand for ammunition under the operational concept of Airland Battle. To meet demand, proper storage facility location and an efficient flow through the distribution network will be required. Using information from Army Field Manuals, maps and simulation data for demand, both a mixed integer program (MIP) and a sequential, optimization-based heuristic are developed to model the WADS. The Generalized Algebraic Modelling System is used to implement both models. The sequential heuristic locates ammunition facilities with a binary integer program and then directs ammunition through those facilities utilizing a network flow model with side constraints. The MIP integrates location and flow decisions in the same model. For a general scenario, the sequential heuristic locates a 21 node, 30 arc network with ammunition flows over 30 time periods in 22 CPU seconds on an IBM 3033AP. For the same scenario the MIP obtains a solution for only a 3 time period problem in 87 CPU seconds. Keywords: Ammunition, Integer programming, Heuristic, Networks
http://archive.org/details/gamsbasedmodelof00cain
Captain, United States Army
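The sequential heuristic described in this abstract first locates ammunition facilities and then routes flow through them. A toy version of the location stage, brute-force site selection followed by cheapest-site assignment in place of the thesis's GAMS binary integer program and network-flow model, can be sketched as follows; the site names and cost matrix are illustrative.

```python
from itertools import combinations

def locate_and_flow(sites, demands, cost, k):
    """Open k storage sites (exhaustive search over site subsets), then
    assign each demand point to its cheapest open site; returns the best
    (total cost, opened sites, assignment) found."""
    best = None
    for opened in combinations(sites, k):
        total = 0.0
        assignment = {}
        for point, qty in demands.items():
            site = min(opened, key=lambda s: cost[s][point])
            assignment[point] = site
            total += qty * cost[site][point]
        if best is None or total < best[0]:
            best = (total, opened, assignment)
    return best
```

Exhaustive enumeration only works for tiny instances; the appeal of the thesis's sequential heuristic is exactly that it replaces this combinatorial search with an optimization-based location model followed by a network-flow model with side constraints.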
APA, Harvard, Vancouver, ISO, and other styles