Dissertations / Theses on the topic 'Model-based systems e'




Consult the top 50 dissertations / theses for your research on the topic 'Model-based systems e.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Flanagan, Genevieve (Genevieve Elise Cregar). "Key challenges to model-based design : distinguishing model confidence from model validation." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/76492.

Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 93-97).
Model-based design is becoming more prevalent in industry due to increasing complexity in technology while schedules shorten and budgets tighten. Model-based design is a means to substantiate good design under these circumstances. Despite this, organizations often lack confidence in the use of models to make critical decisions. As a consequence, they often invest heavily in expensive test activities that may not yield substantially new or better information. On the other hand, models are often used beyond the bounds within which they had previously been calibrated and validated; their predictions in the new regime may be substantially in error, which can add substantial risk to a program. This thesis seeks to identify factors that cause either of these behaviors. Eight factors emerged as the key variables behind misaligned model confidence. These were found by studying three case studies to set up the problem space, followed by a review of the literature, with emphasis on model validation and assessment processes, to identify remaining gaps. These gaps include proper model validation processes, limited research from the perspective of the decision-maker, and a lack of understanding of the impact of the contextual variables surrounding a decision. The impact these eight factors have on model confidence and credibility was tested using a web-based experiment that included a simple model of a catapult and varying contextual details representing the factors. In total, 252 respondents interacted with the model and made a binary decision on a design problem to provide a measure of model confidence. Results from the testing showed that several factors caused an outright change in model confidence. One factor, a representation of model uncertainty, did not result in any difference in model confidence, despite support from the literature suggesting otherwise. Findings such as these were used to develop additional insights and recommendations to address the problem of misaligned model confidence. Recommendations included system-level approaches, improved quality of communication, and use of decision analysis techniques. Applying focus in these areas can help to alleviate pressures from the contextual factors involved in the decision-making process, allowing models to be used more effectively and thereby supporting model-based design efforts.
by Genevieve Flanagan.
S.M. in Engineering and Management
2

Kinder, Andrew M. K. "A model-based approach to System of Systems risk management." Thesis, Loughborough University, 2017. https://dspace.lboro.ac.uk/2134/27553.

Abstract:
The failure of many System of Systems (SoS) enterprises can be attributed to the inappropriate application of traditional Systems Engineering (SE) processes within the SoS domain, because of the mistaken belief that a SoS can be regarded as a single large, or complex, system. SoS Engineering (SoSE) is a sub-discipline of SE; Risk Management and Modelling and Simulation (M&S) are key areas within SoSE, both of which also lie within the traditional SE domain. Risk Management of SoS requires a different approach to that currently taken for individual systems; if risk is managed for each component system, it cannot be assumed that the aggregated effect will be to mitigate risk at the SoS level. A literature review was undertaken examining three themes: (1) SoS Engineering (SoSE), (2) M&S and (3) Risk. Theme 1 of the literature provided insight into the activities comprising SoSE and its difference from traditional SE, with risk management identified as a key activity. The second theme discussed the application of M&S to SoS, providing an output which supported the identification of appropriate techniques and concluding that the inherent complexity of a SoS requires the use of M&S in order to support SoSE activities. Current risk management approaches were reviewed in theme 3, as was the management of SoS risk. Although some specific examples of the management of SoS risk were found, no mature, general approach was identified, indicating a gap in current knowledge. However, it was noted that most of these examples were underpinned by M&S approaches. It was therefore concluded that a general approach to SoS risk management utilising M&S methods would be of benefit. In order to fill the gap identified in current knowledge, this research proposed a new model-based approach to Risk Management in which risk identification is supported by a framework that combines SoS system-of-interest dimensions with holistic risk types, and the resulting risks and contributing factors are captured in a causal network. Analysis of the causal network using a model technique selection tool, developed as part of this research, allowed the causal network to be simplified through the replacement of groups of elements within the network by appropriate supporting models. The Bayesian Belief Network (BBN) was identified as a suitable method to represent SoS risk. Supporting models run in Monte Carlo simulations allowed data to be generated from which the risk BBNs could learn, thereby providing a more quantitative approach to SoS risk management. A method was developed which provided context to the BBN risk output through comparison with worst- and best-case risk probabilities. The model-based approach to Risk Management was applied to two very different case studies: Close Air Support mission planning and the Wheat Supply Chain (UK National Food Security risks), demonstrating its effectiveness and adaptability. The research established that the SoS system of interest is essential for effective SoS risk identification and analysis of risk transfer; that effective SoS modelling requires a range of techniques whose suitability is determined by the problem context; that the responsibility for SoS Risk Management is related to the overall SoS classification; and that the model-based approach to SoS risk management was effective for both application case studies.
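The core quantitative step described above (supporting models run in Monte Carlo simulation generating data from which a risk BBN learns) can be illustrated with a minimal sketch. The risk factors, probabilities, and network structure below are hypothetical and invented for illustration; they are not taken from the thesis.

```python
import random
from collections import Counter

random.seed(1)

def simulate_sos_once():
    """One Monte Carlo run of two hypothetical supporting models feeding a SoS-level risk."""
    comms_degraded = random.random() < 0.15          # component-system model A
    supply_delayed = random.random() < 0.25          # component-system model B
    # SoS-level risk depends on the combination of component states
    p_mission_failure = 0.05 + 0.4 * comms_degraded + 0.3 * supply_delayed
    mission_failure = random.random() < p_mission_failure
    return comms_degraded, supply_delayed, mission_failure

# Generate training data from the supporting models
samples = [simulate_sos_once() for _ in range(50_000)]

# "Learn" the BBN's conditional probability table for the risk node,
# P(mission_failure | comms_degraded, supply_delayed), by frequency counting.
counts, failures = Counter(), Counter()
for comms, supply, fail in samples:
    counts[(comms, supply)] += 1
    failures[(comms, supply)] += fail

for parents in sorted(counts):
    p = failures[parents] / counts[parents]
    print(f"P(failure | comms_degraded={parents[0]}, supply_delayed={parents[1]}) = {p:.2f}")
```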
3

London, Brian (Brian N. ). "A model-based systems engineering framework for concept development." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/70822.

Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 148-151).
The development of increasingly complex, innovative systems under greater constraints has been the trend over the past several decades. In order to be successful, organizations must develop products that meet customer needs more effectively than the competitors' alternatives. The development of these concepts is based on a broad set of stakeholder objectives, from which alternative designs are developed and compared. When properly performed, this process helps those involved understand the benefits and drawbacks of each option. This is crucial, as firms need to explore many concepts quickly and effectively, and easily determine those most likely to succeed. It is generally accepted that a methodical design approach leads to a reduction in design flaws and cost over a product's life cycle. Several techniques have been developed to facilitate these efforts. However, the traditional tools and work products are isolated and require diligent manual inspection. It is expected that the effectiveness of high-level product design and development will improve dramatically through the adoption of computer-based modeling and simulation. This emerging capability can mitigate the challenges and risks imposed by complex systems by enforcing rigor and precision. Model-based systems engineering (MBSE) is a methodology for designing systems using interconnected computer models. The recent proliferation of MBSE is evidence of its ability to improve design fidelity and enhance communication among development teams. Existing descriptions of leveraging MBSE for deriving requirements and system design are prevalent. However, very few descriptions of model-based concept development have been presented. This may be due to the lack of MBSE methodologies for performing concept development. Teams that attempt a model-based approach without a well-defined, structured strategy are often unsuccessful. However, when MBSE is combined with a clear methodology, designs can be generated and evaluated more efficiently. While it may not be feasible to provide a "standard" methodology for concept development, a framework is envisioned that incorporates a variety of methods and techniques. This thesis proposes such a framework and presents an example based on a simulated concept development effort.
by Brian London.
S.M. in Engineering and Management
4

Quezada, Gomez Juan Manuel. "Model-based guidelines for automotive electronic systems software development." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100383.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 96-98).
The automobile has transformed human lifestyles ever since its introduction to the public, and over the last one hundred years successive technologies have been adopted to improve its performance characteristics. Yet a holistic approach is needed to appreciate that the automobile has shifted from being a mere assembly of mechanical parts to the multidisciplinary system that constitutes the modern vehicle. Thanks to the increased use of electronics and software in automobiles, consumers benefit from better gas mileage and more amenities and features, such as comfort, driving assistance, and entertainment. At the same time, the stability and performance of automobiles as systems have been deteriorating, and vehicle owners eventually find that features and functions become inoperative over time, causing frustration and loss of time and money. Reports of problems experienced by vehicle owners stem from causal factors of system defects that model-based systems engineering can reduce or eliminate. This research presents a model-based systems engineering approach to automobile electronic system design. The work is founded on a comprehensive OPM model and engineering guidelines for electronic control module software design. The purpose of the framework developed in this study is to support the development of complex vehicle software that allows flexibility for changing features and creating new ones, and enables software developers to pinpoint systemic faults more quickly and at earlier lifecycle phases, reducing rework, increasing safety, and providing for more effective resolution of such problems.
by Juan Manuel Quezada Gomez.
S.M. in Engineering and Management
5

Griesebner, Klaus. "Model-based Controller Development." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-34929.

Abstract:
Model-based design is a powerful design technique for embedded system development. The technique enables virtual prototyping, allowing controllers to be developed and debugged before touching real hardware. Many tools are available covering the distinct steps of the design cycle, including modeling, simulation, and implementation; unfortunately, none of them covers all three steps. This thesis proposes a formalism coupling the model and the implementation of a controller for equation-based simulation tools. The resulting formalism translates defined controller models to platform-specific code using a defined syntax. A case study of a line-following robot has been developed to illustrate the feasibility of the approach. The prototype has been tested and evaluated using a sequence of test scenarios of increasing difficulty. The final experiments suggest that the behaviors of the modeled and generated controllers are similar. The thesis concludes that the approach of model-implementation coupling of controllers, in its simplest form, is feasible for equation-based tools. This makes it possible to conduct the whole model-based design cycle within a single environment.
6

Torres, Edwin Ross. "Team Collaboration as a System of Systems Agent-Based Model." Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10743109.

Abstract:

There is a current need to study and understand the behaviors and characteristics of systems of systems. Studying a single system is relatively straightforward when compared to studying a system of systems. A system of systems has unique characteristics that distinguish it from a single system. The additional complexity in a system of systems leads to complicated models and advanced computer simulations. Although modeling and simulation are popular methods for researching a single system, there have been fewer attempts at modeling and simulating systems of systems. Agent-based modeling is an effective approach for researching systems of systems, but validation of agent-based models is difficult, especially if data are not available. Finally, communicating an agent-based model is more difficult than communicating an analytical model because analytical models use familiar mathematical notation. The purpose of this research is to increase the knowledge of system of systems engineering by developing, executing, and analyzing an agent-based model of team collaboration in a real-world, operational system of systems. This research has several goals. The first goal is to address a current need to increase the understanding of the behaviors and characteristics of systems of systems. More specifically, this research aims to model and explain how collaboration and integration in a real-world system of systems affect the achievability of the overall goal of the system of systems. There is an emphasis on the operations and integration of heterogeneous component systems of the collaborative system of systems. This includes understanding the behaviors, characteristics, and interactions among the component systems. The second goal is to develop and thoroughly document a new, repeatable agent-based model of the real-world system of systems. The final goal is to develop a useful tool for understanding and predicting the achievability of the overall goal of the system of systems. Specifically, this research explores team collaboration in a National Basketball Association offensive lineup. This lineup possesses the necessary characteristics to categorize it as a system of systems. Players are the individual, heterogeneous component systems that belong to and operate in the system of systems. This research introduces a new agent-based model and simulation to understand how the individual component systems affect the achievability of the system of systems goal. The NetLogo modeling platform provides an effective environment for executing the model. Data for initialization and validation come from the National Basketball Association. Results show that the overall goal of scoring is an emergent behavior of the collaborative system of systems. Top performing combinations of lineups and collaboration levels emerge. The heterogeneity and interactions of the component systems affect the achievability of the overall goal in different ways. Specific combinations of the collaboration levels and integration of individual component systems determine the scoring output. Observing the component systems individually offers no explanation for the achievability of the overall goal. Instead, it is necessary to view the component systems as a whole. Finally, the verified and validated agent-based model of the offensive lineup contributes to system of systems research, and it is an effective tool for understanding and exploring offensive lineups in the National Basketball Association.
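As a rough illustration of the kind of agent-based model described above, the sketch below runs a handful of 'player' agents whose collaboration levels jointly shape a scoring outcome that no single agent explains on its own. The attributes, probabilities, and scoring rule are invented for illustration and are not the thesis's NetLogo model.

```python
import random

random.seed(42)

class Player:
    """A component system: one player with an individual skill and a collaboration level."""
    def __init__(self, name, skill, collaboration):
        self.name = name
        self.skill = skill                  # probability of converting an individual attempt
        self.collaboration = collaboration  # tendency to pass rather than shoot

def simulate_possession(lineup):
    """One offensive possession: the ball moves while players collaborate, then someone shoots."""
    holder = random.choice(lineup)
    passes = 0
    while random.random() < holder.collaboration and passes < 5:
        holder = random.choice([p for p in lineup if p is not holder])
        passes += 1
    # Ball movement (an interaction effect) slightly improves the shot quality.
    return random.random() < min(0.95, holder.skill + 0.03 * passes)

lineup = [Player("A", 0.45, 0.6), Player("B", 0.50, 0.4),
          Player("C", 0.40, 0.7), Player("D", 0.35, 0.5), Player("E", 0.55, 0.3)]

made = sum(simulate_possession(lineup) for _ in range(10_000))
print(f"Scoring rate of the lineup (emergent, not attributable to any single agent): {made / 10_000:.3f}")
```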

7

Ramos, Ana Luísa Ferreira Andrade. "Model-based systems engineering: a system for traffic & environment." Doctoral thesis, Universidade de Aveiro, 2011. http://hdl.handle.net/10773/7273.

Abstract:
Doctorate in Industrial Management
The contemporary world is crowded with large, interdisciplinary, complex systems made of other systems, personnel, hardware, software, information, processes, and facilities. The Systems Engineering (SE) field proposes an integrated, holistic approach to tackle these socio-technical systems, which is crucial to take proper account of their multifaceted nature and numerous interrelationships, providing the means to enable their successful realization. Model-Based Systems Engineering (MBSE) is an emerging paradigm in the SE field and can be described as the formalized application of modelling principles, methods, languages, and tools to the entire lifecycle of those systems, enhancing communication and knowledge capture, shared understanding, design precision and integrity, and development traceability, while reducing development risks. This thesis is devoted to the application of the novel MBSE paradigm to the Urban Traffic & Environment domain. The proposed system, GUILTE (Guiding Urban Intelligent Traffic & Environment), deals with a present-day, real, challenging problem "at the agenda" of world leaders, national governments, local authorities, research agencies, academia, and the general public. The main purposes of the system are to provide an integrated development framework for municipalities and to support the (short-term and real-time) operations of urban traffic through Intelligent Transportation Systems, highlighting two fundamental aspects: the evaluation of the related environmental impacts (in particular, air pollution and noise), and the dissemination of information to citizens, endorsing their involvement and participation. These objectives are related to the high-level, complex challenge of developing sustainable urban transportation networks. The development process of the GUILTE system is supported by a new methodology, LITHE (Agile Systems Modelling Engineering), which aims to lighten the complexity and burden of existing methodologies by emphasizing agile principles such as continuous communication, feedback, stakeholder involvement, short iterations and rapid response. These principles are accomplished through a universal and intuitive SE process, the SIMILAR process model (redefined in the light of modern international standards), a lean MBSE method, and a coherent System Model developed through the benchmark graphical modelling languages SysML and OPDs/OPL. The main contributions of the work are, in their essence, models, and can be summarized as: a revised process model for the SE field, an agile methodology for MBSE development environments, a graphical tool to support the proposed methodology, and a System Model for the GUILTE system. The comprehensive literature reviews provided for the main scientific field of this research (SE/MBSE) and for the application domain (Traffic & Environment) can also be seen as a relevant contribution.
8

Ghosheh, Emad. "A novel model for improving the maintainability of web-based systems." Thesis, University of Westminster, 2010. https://westminsterresearch.westminster.ac.uk/item/905xy/a-novel-model-for-improving-the-maintainability-of-web-based-systems.

Abstract:
Web applications incorporate important business assets and offer a convenient way for businesses to promote their services through the internet. Many of these web applications have evolved from simple HTML pages to complex applications that have a high maintenance cost. This is due to the inherent characteristics of web applications, to the fast evolution of the internet and to the pressing market, which imposes short development cycles and frequent modifications. In order to control the maintenance cost, quantitative metrics and models for predicting web applications' maintainability must be used. Maintainability metrics and models can be useful for predicting maintenance cost and risky components, and can help in assessing and choosing between different software artifacts. Since web applications are different from traditional software systems, models and metrics for traditional systems cannot be applied with confidence to web applications. Web applications have special features, such as hypertext structure, dynamic code generation and heterogeneity, that cannot be captured by traditional and object-oriented metrics. This research explores empirically the relationships between new UML design metrics, based on Conallen's extension for web applications, and maintainability. UML web design metrics are used to gauge whether the maintainability of a system can be improved by comparing and correlating the results with different measures of maintainability. We studied the relationship between our UML metrics and the following maintainability measures: Understandability Time (the time spent on understanding the software artifact in order to complete the questionnaire), Modifiability Time (the time spent on identifying places for modification and making those modifications on the software artifact), LOC (absolute net value of the total number of lines added and deleted for components in a class diagram), and nRev (total number of revisions for components in a class diagram). Our results gave an indication that a relationship may exist between our metrics and modifiability time. However, the results did not show statistical significance for the effect of the metrics on understandability time. Our results showed that there is a relationship between our metrics and LOC (lines of code). We found that the metrics NAssoc, NClientScriptsComp, NServerScriptsComp, and CoupEntropy explained the effort measured by LOC, and that the NC and CoupEntropy metrics explained the effort measured by nRev (number of revisions). Our results give a first indication of the usefulness of the UML design metrics; they show that there is a reasonable chance that useful prediction models can be built from early UML design metrics.
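A small sketch of the kind of analysis described above, correlating one design metric with a maintainability measure, is given below. The metric values and modifiability times are invented for illustration; they are not the thesis's data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values of one UML design metric across six class diagrams,
# and the modifiability time (minutes) measured for each diagram.
n_server_scripts_comp = [2, 5, 3, 8, 6, 10]
modifiability_minutes = [14, 25, 18, 41, 30, 52]

r = pearson(n_server_scripts_comp, modifiability_minutes)
print(f"Pearson r between the metric and modifiability time: {r:.2f}")
```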
9

Wilmer, Greg. "OPM model-based integration of multiple data repositories." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100389.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 90).
Data integration is at the heart of a significant portion of current information system implementations. As companies continue to move towards a diverse, growing set of Commercial Off the Shelf (COTS) applications to fulfill their information technology needs, the need to integrate data between them continues to increase. In addition, these diverse application portfolios are becoming more geographically dispersed as more software is provided using the Software as a Service (SaaS) model, and companies continue the pattern of moving their internal data centers to cloud-based computing. As the growth of data integration activities continues, several prominent data integration patterns have emerged, and commercial software packages have been created that cover each of the patterns below: 1. Bulk and/or batch data extraction and delivery (ETL, ELT, etc.); 2. Messaging / message-oriented data movement; 3. Granular, low-latency data capture and propagation (data synchronization). As the data integration landscape within an organization, and between organizations, becomes larger and more complex, opportunities exist to streamline aspects of the data integration process not covered by current toolsets, including: 1. Extensibility by third parties (many COTS integration toolsets today are difficult if not impossible to extend by third parties); 2. Capabilities to handle different types of structured data, from relational to hierarchical to graph models; 3. Enhanced modeling capabilities through the use of data visualization and modeling techniques and tools; 4. Capabilities for automated unit testing of integrations; 5. A unified toolset that covers all three patterns, allowing an enterprise to implement the pattern that best suits business needs for the specific scenario; 6. A Web-based toolset that allows configuration, management and deployment via Web-based technologies, allowing geographical indifference for application deployment and integration. While discussing these challenges with a large Fortune 500 client, they expressed the need for an enhanced data integration toolset that would allow them to accomplish such tasks. Given this request, the Object Process Methodology (OPM) and the Opcat toolset were used to begin the design of a data integration toolset that could fulfill these needs. As part of this design process, lessons learned covering both the use of OPM in software design projects and enhancement requests for the Opcat toolset were documented.
by Greg Wilmer.
S.M. in Engineering and Management
10

Wright, Lynda. "Model-Based Systems Engineering: Status and Challenges." Digital Commons at Loyola Marymount University and Loyola Law School, 2014. https://digitalcommons.lmu.edu/etd/438.

Abstract:
Just as technology drove engineers to develop and implement systems engineering more than 60 years ago, the increasing complexity of today's systems is driving academia and industry to find better methods to design successful solutions. Traditional systems engineering is no longer enough to completely understand and communicate user needs, required system integrations, and design solutions. Model-Based Systems Engineering (MBSE) represents the next generation methodology for system design and verification. MBSE truly allows multi-disciplinary, parallel engineering design to occur. This paper will explore why traditional systems engineering must evolve to MBSE, the current state of MBSE, and the challenges that still need to be overcome before it can be fully instantiated throughout academia and the engineering community as a standard for systems engineering (SE).
11

Wimmel, Guido Oliver. "Model-based development of security-critical systems." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=979096634.

12

Gonzalez, Pavel. "Model checking GSM-based multi-agent systems." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/39038.

Abstract:
Business artifacts are a growing topic in service oriented computing. Artifact systems include both data and process descriptions at interface level thereby providing more sophisticated and powerful service inter-operation capabilities. The Guard-Stage-Milestone (GSM) language provides a novel framework for specifying artifact systems that features declarative descriptions of the intended behaviour without requiring an explicit specification of the control flow. While much of the research is focused on the design, deployment and maintenance of GSM programs, the verification of this formalism has received less attention. This thesis aims to contribute to the topic. We put forward a holistic methodology for the practical verification of GSM-based multi-agent systems via model checking. The formal verification faces several challenges: the declarative nature of GSM programs; the mechanisms for data hiding and access control; and the infinite state spaces inherent in the underlying data. We address them in stages. First, we develop a symbolic representation of GSM programs, which makes them amenable to model checking. We then extend GSM to multi-agent systems and map it into a variant of artifact-centric multi-agent systems (AC-MAS), a paradigm based on interpreted systems. This allows us to reason about the knowledge the agents have about the artifact system. Lastly, we investigate predicate abstraction as a key technique to overcome the difficulty of verifying infinite state spaces. We present a technique that lifts 3-valued abstraction to epistemic logic and makes GSM programs amenable to model checking against specifications written in a quantified version of temporal-epistemic logic. The theory serves as a basis for developing a symbolic model checker that implements SMT-based, 3-valued abstraction for GSM-based multi-agent systems. The feasibility of the implementation is demonstrated by verifying GSM programs for concrete applications from the service community.
13

Meira, Jorge Augusto. "Model-based stress testing for database systems." Repositório Institucional da UFPR, 2014. http://hdl.handle.net/1884/37344.

Abstract:
Advisor: Prof. Dr. Eduardo Cunha de Almeida
Co-advisor: Prof. Dr. Yves Le Traon
Doctoral thesis - Universidade Federal do Paraná, Setor de Tecnologia, Programa de Pós-Graduação em Ciências da Computação. Defended: Curitiba, 17/12/2014
Includes references
Abstract: Database Management Systems (DBMS) have been successful at processing transaction workloads over decades. But contemporary systems, including cloud computing, Internet-based systems, and sensors (i.e., the Internet of Things (IoT)), are challenging the architecture of the DBMS with burgeoning transaction workloads. The direct consequence is that the development agenda of the DBMS is now heavily concerned with meeting non-functional requirements, such as performance, robustness and scalability [85]. Otherwise, any stressing workload will make the DBMS lose control of simple functional requirements, such as responding to a transaction request [62]. While traditional DBMS, including DB2, Oracle, and PostgreSQL, require embedding new features to meet non-functional requirements, the contemporary DBMS known as NewSQL [56, 98, 65] present a completely new architecture. What is still lacking in the development agenda is a proper testing approach, coupled with burgeoning transaction workloads, for validating the DBMS with non-functional requirements in mind. The typical non-functional validation is carried out by performance benchmarks. However, they focus on metrics comparison instead of finding defects. In this thesis, we address this lack by presenting different contributions to the domain of DBMS stress testing. These contributions fit different testing objectives to challenge each specific architecture of traditional and contemporary DBMS. For instance, testing the earlier DBMS (e.g., DB2, Oracle) requires incremental performance tuning (i.e., from a simple setup to a complex one), while testing the latter DBMS (e.g., VoltDB, NuoDB) requires driving it into different performance states due to its self-tuning capabilities [85]. Overall, this thesis makes the following contributions: 1) Stress TEsting Methodology (STEM): a methodology to capture performance degradation and expose system defects in the internal code due to the combination of a stress workload and mistuning; 2) Model-based approach for Database Stress Testing (MoDaST): an approach to test NewSQL database systems. Supported by a Database State Machine (DSM), MoDaST infers internal states of the database based on performance observations under different workload levels; 3) Under Pressure Benchmark (UPB): a benchmark to assess the impact of availability mechanisms in NewSQL database systems. We validate our contributions with several popular DBMS. Among the outcomes, we highlight that our methodologies succeed in driving the DBMS up to stress-state conditions and expose several related defects, including a new major defect in a popular NewSQL DBMS.
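A minimal sketch of the database state machine idea described above: internal states are inferred from performance observations collected at increasing workload levels. The state names, thresholds, and transition rules are hypothetical illustrations, not the DSM defined in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Performance observation for one workload level (illustrative fields)."""
    submitted_tps: float   # transactions submitted per second
    completed_tps: float   # transactions completed per second
    error_rate: float      # fraction of requests rejected or failed

def infer_state(obs: Observation) -> str:
    """Map a performance observation to a coarse internal state (hypothetical thresholds)."""
    if obs.error_rate > 0.20:
        return "THRASHING"
    if obs.completed_tps < 0.7 * obs.submitted_tps:
        return "STRESS"
    if obs.completed_tps < 0.9 * obs.submitted_tps:
        return "UNDER_PRESSURE"
    return "WARM_UP_OR_STEADY"

# Drive the (simulated) DBMS through increasing workload levels and trace the inferred states.
trace = [
    Observation(1_000, 990, 0.00),
    Observation(5_000, 4_400, 0.02),
    Observation(20_000, 12_500, 0.08),
    Observation(50_000, 9_000, 0.35),
]
for level, obs in enumerate(trace, start=1):
    print(f"workload level {level}: inferred state = {infer_state(obs)}")
```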
14

Smith, Matthew William Ph D. Massachusetts Institute of Technology. "Model-based requirement definition for instrument systems." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90729.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 199-210).
Instrument systems such as imagers, radars, spectrometers, and radiometers are important to users in the astronomy, Earth science, defense, and intelligence communities. Relatively early in the development cycle, performance requirements are defined at the top level and allocated to various subsystems or components. This is a critical step, as poor requirement definition and resulting requirement instability has historically led to increased cost and, in some cases, program cancelation. Defining requirements for instrument systems is uniquely challenging in part due to the divide between system users (e.g. scientists) and system designers (e.g. engineers). The two groups frequently differ in terms of background, objectives, and priorities, and this disconnect often leads to difficulty in evaluating and resolving requirement trade-offs. The objective of this thesis is to develop a model-based approach to requirement definition that addresses the above challenges. The goal of the methodology is to map science objectives to a set of top-level engineering requirements in a manner that enables traceability in the requirement hierarchy and facilitates informed trades across the science/engineering interface. This is accomplished by casting the requirement definition process as an optimization problem. First, an executable instrument model is created to capture the forward mapping between engineering decisions and science capability. The model is then exercised to find an inverse mapping that produces multiple sets of top-level engineering requirements that all meet the performance objectives. A new heuristic optimization algorithm is developed to carry out this task efficiently and exhaustively. Termed the Level Set Genetic Algorithm (LSGA), this procedure identifies contours of equal performance in the design space using an elite-preserving selection operator to ensure convergence, together with a global diversity metric to ensure thorough exploration. LSGA is derivative-free, parallelizable, and compatible with mixed integer problems, making it applicable to a wide variety of modeling and simulation scenarios. As a case study, the model-based requirement definition methodology is applied to the Regolith X-ray Imaging Spectrometer (REXIS), an instrument currently in development at MIT and scheduled to launch on NASA's OSIRIS-REx asteroid sample return mission in the fall of 2016. REXIS will determine the elemental composition of the target asteroid by measuring the solar-induced fluorescence spectrum in the soft x-ray regime (0.5-7.5 keV). A parametric model of the instrument is created to simulate its end-to-end operation, including x-ray propagation and absorption, detector noise processes, pixel read-out, signal quantization, and spectrum reconstruction. After validating the model against laboratory data, LSGA is used to identify multiple sets of top-level engineering requirements that meet the REXIS science objectives with regard to spectral resolution. These results are compared to the existing baseline requirement set, providing insights into the alternatives enabled by the model-based approach. Several additional strategies are presented to quantify and mediate requirement trades that may occur later in the development cycle due to science creep or engineering push-back. Overall, these methods provide a means of synthesizing and then evaluating top-level engineering requirements based on given science objectives. By doing so in a comprehensive and traceable manner, this approach clarifies the trade-offs between scientists and engineers that inevitably arise during the design of instrument systems.
by Matthew William Smith.
Ph. D.
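The Level Set Genetic Algorithm described above identifies contours of equal performance in the design space using an elite-preserving selection operator. The toy sketch below illustrates only that core idea with a made-up quadratic performance model; it is not the thesis's LSGA, which also employs a global diversity metric and handles mixed-integer variables.

```python
import random

random.seed(0)

def performance(x, y):
    """Toy instrument-performance model over two design variables (illustrative only)."""
    return x**2 + 2 * y**2

TARGET = 4.0          # performance level whose contour we want to populate
POP, GENS = 200, 80

def fitness(ind):
    # Higher fitness for designs whose performance is close to the target level.
    return -abs(performance(*ind) - TARGET)

pop = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elites = pop[: POP // 4]                        # elite preservation for convergence
    children = []
    while len(children) < POP - len(elites):
        (x1, y1), (x2, y2) = random.sample(elites, 2)
        child = ((x1 + x2) / 2 + random.gauss(0, 0.2),   # crossover plus mutation keeps spread
                 (y1 + y2) / 2 + random.gauss(0, 0.2))
        children.append(child)
    pop = elites + children

level_set = [ind for ind in pop if abs(performance(*ind) - TARGET) < 0.05]
print(f"{len(level_set)} designs found near the performance contour f(x, y) = {TARGET}")
```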
15

Durak, Umut [Verfasser]. "Model-based simulation systems engineering / Umut Durak." Clausthal-Zellerfeld : Technische Universität Clausthal, 2018. http://d-nb.info/1230910069/34.

16

Koch, Oliver, and Jürgen Weber. "Model-Based Systems Engineering in Mobile Applications." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-200676.

Abstract:
Efficient system development needs reuse, traceability and understanding. Today, specifications are usually written in text documents. Reuse means copying and pasting suitable specifications. Traceability is a textual note that references the affected requirements. Achieving a full understanding of the context requires reading hundreds of pages in a variety of documents. Changing one textual requirement in a complex system can be very time-consuming. Model-based systems engineering (MBSE) addresses these issues. There, an integrated system model is used for the design, analysis, communication and specification of the system and shall contribute to handling the system complexity. This paper shows aspects of this approach in the development of a wheel loader's attachment system. Customer requirements are used to derive a specification model. Based on this, the authors introduce the system and software architecture. The connection between requirements and architecture leads to a traceable system design and demonstrates the major advantage of MBSE.
17

Kamdem, Simo Freddy. "Model-based federation of systems of modelling." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2374.

Abstract:
The engineering of complex systems and systems of systems often leads to complex modelling activities (MA). Some challenges exhibited by MA are: understanding the context in which they are carried out and their impacts on the lifecycles of the models they produce, and ultimately providing support for mastering them. How to address these challenges with a formal approach is the central question of this thesis. In this thesis, after discussing related work from systems engineering in general, and the co-engineering of the system to be made (the product) and the system for making it (the project) specifically, we position and develop a methodology named MODEF that aims to master the operation of MA. MODEF consists in: (1) characterizing MA as a system (and more globally as a federation of systems) in its own right; (2) iteratively architecting this system through the modelling of the conceptual content of the models produced by MA and their life cycles, and of the tasks carried out within MA and their effects on these life cycles; (3) specifying expectations over these life cycles; and (4) analysing models (of MA) against expectations (and possibly task constraints), to check how far expectations are achievable, via the synthesis of the acceptable behaviours. From a practical perspective, the exploitation of the results of the analysis allows figuring out what could happen with the modelling tasks and their impacts on the whole state of the models they handle. We show on two case studies (the operation of a supermarket and the modelling of the functional coverage of a system) how this exploitation provides insightful data on how the system is operated end to end and how it can behave. Based on this information, it is possible to take preventive or corrective actions on how the MA are carried out. From a foundational perspective, the formal semantics of the three kinds of models involved and the expectations formalism are first discussed. Then the analysis and exploitation algorithms are presented. Finally, this approach is briefly compared with model checking and system synthesis approaches. Last but not least, two enablers whose primary objective is to ease the implementation of MODEF are presented. The first one is a modular implementation of MODEF's building blocks. The second one is a federated architecture (FA) of models, which aims to ease working with formal models in practice. Although FA is formalised within the abstract framework of category theory, an attempt to bridge the gap between abstraction and implementation is sketched via some basic data structures and base algorithms. Several perspectives related to the different components of MODEF conclude this work.
18

Kozhakenov, Temirzhan. "MODEL-BASED SIMULATION OF AUTOMOTIVE SOFTWARE SYSTEMS." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48851.

Abstract:
The car is the most common vehicle in the world, and millions of cars are produced annually. In order for each car to find its buyer, car companies are forced to constantly improve their designs: modern models are emerging, and new car systems are being developed and implemented. All of this is accompanied by a huge flow of information in which it is easy to get lost. This master's thesis is devoted to trace analysis and to connecting two different files. The thesis proposes a trace-analysis algorithm, implemented in the C++ programming language, for some functions of the vehicle. The files used for the trace analysis relate to the model and to the final result of its simulation. EATOP is the tool with which a model based on the EAST-ADL language was developed, and Adapt is the event simulator with which our model of automotive functionality was simulated. The purpose of the study is to identify possible ways to meet timing requirements. The work was carried out in collaboration with Volvo Group Truck Technology, which provided the LogFile presenting the results of the simulation, as well as the model. We obtain a performance analysis and one way of tracing data and timing. The results of our implementation are presented and discussed.
19

Haschemi, Siamak. "Model-based testing of dynamic component systems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2015. http://dx.doi.org/10.18452/17273.

Abstract:
This dissertation addresses the question of whether the established technique of model-based testing (MBT) can be applied to a special type of software component system called dynamic component systems (DCSs). DCSs have the special characteristic that they support changes to the composition of component instances during the runtime of the system. In these systems, each component instance exhibits its own lifecycle. This makes it possible to update existing components, or add new components to the system, while it is running. Such changes mean that functionality provided by the component instances may become restricted or unavailable at any time. This characteristic of DCSs makes the development of components difficult, because required and used functionality is not available all the time. The goal of this dissertation is to develop a systematic testing approach which allows a component's tolerance to dynamic availability to be tested during development time. We analyze to what extent existing MBT approaches can be reused or adapted. The approaches of this dissertation have been implemented in a software prototype. This prototype has been used in a case study, and it has been shown that systematic test generation for DCSs can be done with the help of MBT.
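A minimal sketch of the model-based-testing idea for dynamic availability: a tiny lifecycle model of a required component is explored to enumerate feasible event sequences in which the provided service disappears and reappears. The states, events, and transitions are hypothetical and are not the dissertation's models.

```python
from itertools import product

# Hypothetical lifecycle model of a required component instance:
# states and events are illustrative, not taken from the dissertation.
TRANSITIONS = {
    ("RESOLVED", "start"): "ACTIVE",
    ("ACTIVE", "stop"): "RESOLVED",
    ("ACTIVE", "update"): "RESOLVED",   # an update makes the service temporarily unavailable
    ("RESOLVED", "uninstall"): "UNINSTALLED",
}
EVENTS = ["start", "stop", "update", "uninstall"]

def valid_sequences(initial="RESOLVED", length=3):
    """Enumerate event sequences feasible in the lifecycle model (basic MBT test generation)."""
    for events in product(EVENTS, repeat=length):
        state, ok = initial, True
        for e in events:
            nxt = TRANSITIONS.get((state, e))
            if nxt is None:
                ok = False
                break
            state = nxt
        if ok:
            yield events, state

# Each generated sequence becomes an abstract test case exercising the component
# under test while the availability of the required service changes.
for seq, final in valid_sequences():
    print(" -> ".join(seq), "| final state:", final)
```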
20

Kwon, Ky-Sang. "Multi-layer syntactical model transformation for model based systems engineering." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42835.

Abstract:
This dissertation develops a new model transformation approach that supports engineering model integration, which is essential to support contemporary interdisciplinary system design processes. We extend traditional model transformation, which has been primarily used for software engineering, to enable model-based systems engineering (MBSE) so that the model transformation can handle more general engineering models. We identify two issues that arise when applying the traditional model transformation to general engineering modeling domains. The first is instance data integration: the traditional model transformation theory does not deal with instance data, which is essential for executing engineering models in engineering tools. The second is syntactical inconsistency: various engineering tools represent engineering models in a proprietary syntax. However, the traditional model transformation cannot handle this syntactic diversity. In order to address these two issues, we propose a new multi-layer syntactical model transformation approach. For the instance integration issue, this approach generates model transformation rules for instance data from the result of a model transformation that is developed for user model integration, which is the normal purpose of traditional model transformation. For the syntactical inconsistency issue, we introduce the concept of the complete meta-model for defining how to represent a model syntactically as well as semantically. Our approach addresses the syntactical inconsistency issue by generating necessary complete meta-models using a special type of model transformation.
APA, Harvard, Vancouver, ISO, and other styles
21

Cavalin, Paulo Rodrigo. "Adaptive systems for hidden Markov model-based pattern recognition systems." Mémoire, École de technologie supérieure, 2011. http://espace.etsmtl.ca/976/1/CAVALIN_Paulo_Rodrigo.pdf.

Full text
Abstract:
This thesis studies adaptive systems for pattern recognition. Recognition systems usually rely on static knowledge of the problem to be solved, fixed for the lifetime of the system. There are, however, circumstances in which knowledge of the problem is only partial during the initial training at design time. For this reason, new-generation adaptive classification systems allow the base system to adapt by learning from new data and are also able to adapt to the environment during generalization. This thesis proposes a new definition of an adaptive recognition system, with HMMs (Hidden Markov Models) considered as the case study. The first part of the thesis presents an evaluation of the main incremental learning algorithms used to estimate HMM parameters. The objective of this study is to identify incremental learning strategies whose generalization performance approaches that obtained with offline (batch) learning. The results obtained on handwritten digit and letter recognition problems show the superiority of ensemble-of-models approaches. Moreover, we show the importance of keeping validation examples in a short-term memory, which yields a performance level that can even exceed that obtained in batch mode. The second part of the thesis is devoted to the formulation of a new approach for the dynamic selection of classifier ensembles. Inspired by the fusion concept known as multistage organizations, we formulate a variant of this concept, called DMO (dynamic multistage organization), which adapts the fusion function dynamically for each test sample to be classified. In addition, the DMO concept is integrated into the DSA method proposed by Dos Santos et al. for the dynamic selection of classifier ensembles. Two new variants, DSAm and DSAc, are proposed and evaluated. In the first case (DSAm), several selection functions allow a generalization of the DMO structure. In the DSAc variant, we use contextual information (represented by the decision profiles of the base classifiers) acquired by the system and associated with the validation set kept in a short-term memory. The evaluation of both approaches on small- and large-scale databases shows that DSAc dominates DSAm in most of the cases studied. This result shows that using contextual information yields better generalization performance than uninformed methods. An important property of the DSAc approach is that it can also be used to learn from new data over time, a crucial property for designing adaptive recognition systems in dynamic environments characterized by a high level of uncertainty about the problem to be solved. Finally, a new framework called LoGID (Local and Global Incremental Learning for Dynamic Selection) is proposed for designing an HMM-based adaptive recognition system capable of adapting over time during both the learning and generalization phases.
The system is composed of a pool of base classifiers, and adaptation during the generalization phase is performed by dynamically selecting the most competent members of the pool to classify each test sample. The dynamic selection mechanism is based on the K-nearest decision vectors algorithm, while adaptation during the learning phase consists of updating and adding base classifiers to the system. During the learning phase, two strategies are proposed to learn incrementally from new data: local learning and global learning. Local incremental learning updates the pool of base classifiers by adding new members, generated with the Learn++ algorithm. Global incremental learning updates the knowledge base composed of the decision vectors that are used during generalization for the dynamic selection of the most competent members. The LoGID system has been validated on several databases and the results compared with those published in the literature. In general, the proposed method dominates the other methods, including offline learning methods. Finally, the LoGID system evaluated in adaptive mode shows that it is able to learn new knowledge over time as new data become available. This adaptation capability is also very important when little data is available for learning.
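The selection step described above can be pictured with a small Python sketch. This is only an illustration under assumed data shapes; it does not reproduce the exact LoGID algorithm, and the toy data are random:

```python
import numpy as np

def dynamic_selection(decision_vector, knowledge_base, competence, k=5):
    """Pick base classifiers deemed competent in the local region of a test sample.

    decision_vector: base-classifier outputs for the test sample, shape (n_clf,)
    knowledge_base:  decision vectors of validation samples, shape (n_val, n_clf)
    competence:      1 where classifier j was correct on validation sample i, shape (n_val, n_clf)
    """
    # K nearest validation decision vectors (Euclidean distance).
    dist = np.linalg.norm(knowledge_base - decision_vector, axis=1)
    nearest = np.argsort(dist)[:k]
    # Local accuracy of each base classifier over those neighbours.
    local_acc = competence[nearest].mean(axis=0)
    # Keep the classifiers reaching the best local accuracy.
    return np.flatnonzero(local_acc == local_acc.max())

# Toy usage with 3 base classifiers and 6 validation samples.
rng = np.random.default_rng(0)
kb = rng.random((6, 3))
comp = rng.integers(0, 2, size=(6, 3))
print(dynamic_selection(rng.random(3), kb, comp, k=3))
```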
APA, Harvard, Vancouver, ISO, and other styles
22

Pietruschka, Dirk. "Model based control optimisation of renewable energy based HVAC Systems." Thesis, De Montfort University, 2010. http://hdl.handle.net/2086/4022.

Full text
Abstract:
During the last 10 years, solar cooling systems have attracted increasing interest, not only in research but also at the private and commercial level. Several demonstration plants have been installed in different European countries, and the first companies have started to commercialise small-scale absorption cooling machines. However, not all of the installed systems operate efficiently, and some are, from the primary energy point of view, even worse than conventional systems with a compression chiller. The main reason for this is poor system design combined with suboptimal control. Often several non-optimised components, each separately controlled, are put together to form a ‘cooling system’. To overcome these drawbacks, several attempts have been made within IEA Task 38 (International Energy Agency Solar Heating and Cooling Programme) to improve system design through optimised design guidelines supported by simulation-based design tools. Furthermore, guidelines for optimised control of different systems have been developed. In parallel, several companies, such as SolarNext AG in Rimsting, Germany, started the development of solar cooling kits with optimised components and optimised system controllers. To support this process, the following contributions are made within the present work: - For the design and dimensioning of solar driven absorption cooling systems, a detailed and structured simulation-based analysis highlights the main factors influencing the solar system size required to reach a defined solar fraction of the chiller's overall heating energy demand. These results offer useful guidelines for an energy- and cost-efficient system design. - Detailed system simulations of an installed solar cooling system focus on the influence of the system configuration, control strategy and system component control on the overall primary energy efficiency. From these results, a detailed set of clear recommendations for highly energy-efficient system configurations and control of solar driven absorption cooling systems is provided. - For optimised control of open desiccant evaporative cooling (DEC) systems, an innovative model-based system controller is developed and presented. This controller consists of an electricity-optimised sequence controller assisted by a primary energy optimisation tool. The optimisation tool is based on simplified simulation models and is intended to be operated as an online tool which continuously evaluates the optimum operation mode of the DEC system to ensure high primary energy efficiency. Tests of the controller in the simulation environment showed that, compared to a system with energy-optimised standard control, the innovative model-based system controller can further improve primary energy efficiency by 19 %.
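The model-based DEC controller described above continuously evaluates candidate operation modes with simplified models and switches to the mode with the lowest primary energy use that still meets the load. A schematic sketch of that decision step; the mode models, loads, and primary-energy factors below are invented for illustration:

```python
# Hypothetical simplified models: each mode reports whether it can meet the
# current cooling load and how much primary energy it would draw per hour.
PE_PER_KWH_EL = 2.6   # assumed primary-energy factor for electricity
PE_PER_KWH_GAS = 1.1  # assumed primary-energy factor for gas/heat backup

def mode_free_cooling(load_kw, t_out):
    feasible = t_out < 19.0 and load_kw < 8.0
    return feasible, 0.5 * PE_PER_KWH_EL                 # fans only

def mode_desiccant(load_kw, t_out):
    feasible = load_kw < 25.0
    return feasible, 1.5 * PE_PER_KWH_EL + 0.3 * load_kw * PE_PER_KWH_GAS

def mode_backup_chiller(load_kw, t_out):
    return True, (load_kw / 3.0) * PE_PER_KWH_EL         # COP of 3 assumed

def select_mode(load_kw, t_out):
    """Return the feasible operation mode with the lowest primary energy demand."""
    candidates = {
        "free_cooling": mode_free_cooling,
        "desiccant_evaporative": mode_desiccant,
        "backup_chiller": mode_backup_chiller,
    }
    feasible = []
    for name, model in candidates.items():
        ok, primary_energy = model(load_kw, t_out)
        if ok:
            feasible.append((primary_energy, name))
    return min(feasible)

print(select_mode(load_kw=12.0, t_out=28.0))
```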
APA, Harvard, Vancouver, ISO, and other styles
23

Howell, John. "Model-based fault diagnosis in information poor processes." Thesis, University of Glasgow, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Jin, Licheng. "Reachability and model prediction based system protection schemes for power systems." [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3355509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Jayousi, Rashid. "An agent-based co-operative preference model." Thesis, Keele University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288428.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Thiers, George. "A model-based systems engineering methodology to make engineering analysis of discrete-event logistics systems more cost-accessible." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52259.

Full text
Abstract:
This dissertation supports human decision-making with a Model-Based Systems Engineering methodology enabling engineering analysis, and in particular Operations Research analysis of discrete-event logistics systems, to be more widely used in a cost-effective and correct manner. A methodology is a collection of related processes, methods, and tools, and the process of interest is posing a question about a system model and then identifying and building answering analysis models. Methods and tools are the novelty of this dissertation, which when applied to the process will enable the dissertation's goal. One method which directly enables the goal is adding automation to analysis model-building. Another method is abstraction, to make explicit a frequently-used bridge to analysis and also expose analysis model-building repetition to justify automation. A third method is formalization, to capture knowledge for reuse and also enable automation without human interpreters. The methodology, which is itself a contribution, also includes two supporting tool contributions. A tool to support the abstraction method is a definition of a token-flow network, an abstract concept which generalizes many aspects of discrete-event logistics systems and underlies many analyses of them. Another tool to support the formalization method is a definition of a well-formed question, the result of an initial study of semantics, categories, and patterns in questions about models which induce engineering analysis. This is more general than queries about models in any specific modeling language, and also more general than queries answerable by navigating through a model and retrieving recorded information. A final contribution follows from investigating tools for the automation method. Analysis model-building is a model-to-model transformation, and languages and tools for model-to-model transformation already exist in Model-Driven Architecture of software. The contribution considers if and how these tools can be re-purposed by contrasting software object-oriented code generation and engineering analysis model-building. It is argued that both use cases share a common transformation paradigm but executed at different relative levels of abstraction, and the argument is supported by showing how several Operations Research analyses can be defined in an object-oriented way across multiple layered instance-of abstraction levels. Enabling Operations Research analysis of discrete-event logistics systems to be more widely used in a cost-effective and correct manner requires considering fundamental questions about what knowledge is required to answer a question about a system, how to formally capture that knowledge, and what that capture enables. Developments here are promising, but provide only limited answers and leave much room for future work.
APA, Harvard, Vancouver, ISO, and other styles
27

Marinescu, Raluca. "Model-checking and Model-based Testing of Automotive Embedded Systems : Starting from the System Architecture." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-26501.

Full text
Abstract:
Nowadays, modern vehicles are equipped with electrical and electronic systems that implement highly complex functions such as anti-lock braking or cruise control. The use of such embedded systems in the automotive domain requires a revised development process that addresses their particular features. In this context, architectural models have been introduced in system development as convenient abstractions of the system’s structure, represented as interacting components. To enjoy the full benefits of such abstractions, the architectural models should be complemented by an analysis framework that provides means for formal verification, and ideally also model-based testing, tailored to complex automotive systems. One major difficulty in developing such a framework lies in the fact that architectural models represent the system’s structure as well as inter-component communication, often without an actual description of the behavior. This entails the need to integrate the two “views” (structural and behavioral) into a formal framework for verification. In this thesis, we propose an integrated formal modeling and analysis methodology for automotive embedded systems that are originally described in the domain-specific architectural language EAST-ADL. Our analysis methodology relies on formal verification of the original EAST-ADL model by model-checking, with UPPAAL PORT for component-based analysis and UPPAAL SMC for statistical model-checking. To enable this, we first propose a formal description of the EAST-ADL components as networks of timed automata (TA), which constitute UPPAAL’s modeling language. Since the C code implementation is what is actually deployed on the vehicle, it is highly desirable to narrow the gap between the code and the architectural model, and also to test the implementation against various requirements. To accomplish the former, we define an executable semantics of the UPPAAL PORT components. To support testing of EAST-ADL based implementations, we take advantage of the model-checker’s ability to generate witness traces during verification of reachability properties. Consequently, we employ UPPAAL PORT to generate such traces, which become our abstract test cases. By pairing the automated model-based test-case generator with an automatic transformation from the abstract test cases to Python scripts, we enable the execution of the generated Python scripts (our concrete test cases) on the system under test. The entire formal analysis and model-based testing framework is one solution for analyzing EAST-ADL models by model-checking techniques. We show the framework’s applicability on an automotive industrial prototype, namely a Brake-by-Wire system.
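A central step in the framework above is turning an abstract witness trace from the model checker into an executable Python test script. The following sketch shows one way such a transformation could look; the trace format and the harness functions `send_stimulus` and `expect_signal` are hypothetical, not the thesis's actual tool chain:

```python
# Hypothetical abstract test case: a witness trace listing stimuli sent to the
# system under test and the observations expected in return.
abstract_trace = [
    ("stimulus", "BrakePedalPressed", {"force": 80}),
    ("observe", "BrakeTorqueRequest", {"min": 1500}),
    ("stimulus", "BrakePedalReleased", {}),
    ("observe", "BrakeTorqueRequest", {"max": 0}),
]

TEMPLATE = """# Auto-generated concrete test case (illustrative only)
from test_harness import send_stimulus, expect_signal  # assumed harness API

def test_generated():
{body}
"""

def to_python_script(trace):
    lines = []
    for kind, name, params in trace:
        if kind == "stimulus":
            lines.append(f"    send_stimulus({name!r}, **{params!r})")
        else:
            lines.append(f"    expect_signal({name!r}, **{params!r})")
    return TEMPLATE.format(body="\n".join(lines))

print(to_python_script(abstract_trace))
```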
APA, Harvard, Vancouver, ISO, and other styles
28

Pastrana, John. "Model-Based Systems Engineering Approach to Distributed and Hybrid Simulation Systems." Doctoral diss., University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6336.

Full text
Abstract:
INCOSE defines Model-Based Systems Engineering (MBSE) as "the formalized application of modeling to support system requirements, design, analysis, verification, and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases." One very important development is the utilization of MBSE to develop distributed and hybrid (discrete-continuous) simulation modeling systems. MBSE can help to describe the systems to be modeled and help make the right decisions and partitions to tame complexity. The ability to embrace conceptual modeling and interoperability techniques during systems specification and design presents a great advantage in distributed and hybrid simulation systems development efforts. Our research is aimed at the definition of a methodological framework that uses MBSE languages, methods and tools for the development of these simulation systems. A model-based composition approach is defined at the initial steps to identify distributed systems interoperability requirements and hybrid simulation systems characteristics. Guidelines are developed to adopt simulation interoperability standards and conceptual modeling techniques using MBSE methods and tools. Domain specific system complexity and behavior can be captured with model-based approaches during the system architecture and functional design requirements definition. MBSE can allow simulation engineers to formally model different aspects of a problem ranging from architectures to corresponding behavioral analysis, to functional decompositions and user requirements (Jobe, 2008).
Ph.D.
Doctorate
Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering
APA, Harvard, Vancouver, ISO, and other styles
29

McKean, David Keith. "Leveraging Model-Based Techniques for Component Level Architecture Analysis in Product-Based Systems." Thesis, The George Washington University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13812870.

Full text
Abstract:

System design at the component level seeks to construct a design trade space of alternate solutions comprising mapping(s) of system function(s) to physical hardware or software product components. The design space is analyzed to determine a near-optimal next-level allocated architecture solution that satisfies system function and quality requirements. Software product components are targeted to increasingly complex computer systems that provide heterogeneous combinations of processing resources. These processing technologies facilitate performance (speed) optimization via algorithm parallelization. However, speed optimization can conflict with electrical energy and thermal constraints. A multi-disciplinary architecture analysis method is presented that considers all attribute constraints required to synthesize a robust, optimum, extensible next-level solution. This paper presents an extensible, executable model-based architecture attribute framework that efficiently constructs a component-level design trade space. A proof-of-concept performance attribute model is introduced that targets single-CPU systems. The model produces static performance estimates that support optimization analysis and dynamic performance estimation values that support simulation analysis. This model-based approach replaces current spreadsheet-based architecture analysis-of-alternatives approaches. The capability to easily model computer resource alternatives and produce attribute estimates improves design-space exploration productivity. Performance estimation improvements save time and money through reduced prototype requirements. Credible architecture attribute estimates facilitate more informed design tradeoff discussions with specialty engineers. This paper presents initial validation of a model-based architecture attribute analysis method and model framework using a single computation thread application on two laptop computers with different CPU configurations. Execution time estimates are calibrated for several data input sizes using the first laptop. Actual execution times on the second laptop are shown to be within 10 percent of the execution time estimates for all data input sizes.

APA, Harvard, Vancouver, ISO, and other styles
30

Brug, Arnold van de. "A framework for model-based adaptive training." Thesis, Heriot-Watt University, 1996. http://hdl.handle.net/10399/1177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ruzicka, Theophil. "Model based Design of a Sailboat Autopilot." Thesis, Högskolan i Halmstad, Centrum för forskning om inbyggda system (CERES), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-34926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Thipphayathetthana, Somwang. "Model-based guidelines for user-centric satellite control software development." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/105320.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 45).
Three persistent common problems in satellite ground control software used by satellite controllers are obsolescence, lack of desired features and flexibility, and endless software bug fixing. The obsolescence problem occurs when computer and ground equipment hardware become obsolete, usually only one third of the way into the satellite mission lifetime. The satellite ground control software then needs to be updated to accommodate changes on the hardware side, requiring significant work by satellite operators to test, verify, and validate these software updates. Software updates can also result from a new software version that offers new features or simply fixes some bugs. To help solve these problems, an OPM model and guidelines for developing satellite ground control software are proposed. The system makes use of a database-driven application and concepts of object-process orientation and modularity. In the proposed framework, instead of coding each software function separately, common base functions are coded, and combining them in various ways provides the different required functions. The formation and combination of these base functions are governed by the main code, definitions, and database parameters. These design principles ensure that the new software framework provides satellite operators with the flexibility to create new features and enables software developers to find and fix bugs more quickly and effectively.
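The proposed framework composes ground-control functions from common base functions according to definitions held in a database. A toy sketch of that composition idea; the base functions, parameters, and "database" rows are invented for illustration:

```python
# Common base functions coded once.
def read_telemetry(ctx):
    ctx["frame"] = {"batt_v": 27.9, "mode": "NOMINAL"}  # stub telemetry frame

def check_limit(ctx, param, low, high):
    value = ctx["frame"][param]
    ctx.setdefault("alarms", []).append((param, low <= value <= high))

def log_result(ctx):
    print(ctx.get("alarms", []))

BASE_FUNCTIONS = {"read_telemetry": read_telemetry,
                  "check_limit": check_limit,
                  "log_result": log_result}

# "Database" rows defining a composite function as an ordered list of
# base-function calls with parameters (illustrative content).
BATTERY_MONITOR_DEF = [
    ("read_telemetry", {}),
    ("check_limit", {"param": "batt_v", "low": 24.0, "high": 33.0}),
    ("log_result", {}),
]

def run_composite(definition):
    """Execute a composite function driven purely by its stored definition."""
    ctx = {}
    for name, params in definition:
        BASE_FUNCTIONS[name](ctx, **params)
    return ctx

run_composite(BATTERY_MONITOR_DEF)
```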
by Somwang Thipphayathetthana.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
33

Storoshchuk, Orest Lev Poehlman William Frederick Skipper. "Model based synchronization of monitoring and control systems /." *McMaster only, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
34

Reimann, Carsten. "Model-Based Monitoring in Large-Scale Distributed Systems." Master's thesis, Universitätsbibliothek Chemnitz, 2002. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200200938.

Full text
Abstract:
Monitoring remains an important problem in computer science. This thesis describes which monitoring information is needed to analyze distributed service environments, how to obtain this information, and how to store it in a monitoring database. The resulting model is used to describe a distributed media content environment, and a simulation system that runs on the CLIC helps to generate measurements as they occur in real systems.
APA, Harvard, Vancouver, ISO, and other styles
35

Kuehnen, Stefan Alexander. "Model Based Conceptual Communication Design in Coordination Systems." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010405-170340.

Full text
Abstract:

KUEHNEN, STEFAN ALEXANDER. Model Based Conceptual Communication Design in Coordination Systems (under the direction of Dr. Padmini Srinivasan-Hands and Dr. Samuel C. Winchester). The purpose of this research has been to investigate the feasibility of developing a model-based method for conceptual communication design in coordination systems. Business process modeling methodologies are surveyed and the methodology of choice, Actionworkflow, is presented. As the basis for method development, the Language/Action and Speech Act theories underlying the Actionworkflow methodology are examined for concepts that could aid the development of the method. Their history and surrounding philosophies are presented. A critique of the Actionworkflow methodology is presented and discussed. The major focus of the research is the development of the model-based method to conceptually design communications in coordination systems. Its development, structure and components are presented and explained. The method is illustrated with a simple, everyday-life application example. Applications of the method to examine web-based e-commerce sites are presented, and the application to these environments proved insightful. The examples discussed are ebay, an auction provider; e-trade, an on-line broker; and priceline.com, a purchasing service applying a unique process for the purchase of services and goods. Subsequently, the method is applied to establish the feasibility of designing coordination support systems for textile new product development. Coordination model development and the design of communications are discussed in parallel. Application results show that the method can successfully be used for conceptually designing coordination support systems, although practical issues have to be investigated further. Finally, underlying assumptions are stated and discussed, model validation is provided, performance is evaluated against the goals set forth for the research, and recommendations for future research are given.

APA, Harvard, Vancouver, ISO, and other styles
36

Sen, Mainak. "Model-based hardware design for image processing systems." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/4181.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
37

Randall, A. "Adaptive model based control for steel rolling systems." Thesis, Coventry University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Aydal, Emine Gokce. "Model Based Robustness Testing of Black box Systems." Thesis, University of York, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Zhenheng. "Model based fault detection for two-dimensional systems." Thesis, Laurentian University of Sudbury, 2014. https://zone.biblio.laurentian.ca/dspace/handle/10219/2186.

Full text
Abstract:
Fault detection and isolation (FDI) are essential in ensuring safe and reliable operations in industrial systems. Extensive research has been carried out on FDI for one-dimensional (1-D) systems, where variables vary only with time. The existing FDI strategies are mainly focused on 1-D systems and can generally be classified as model based and process history data based methods. In many industrial systems, the state variables change with space and time (e.g., sheet forming, fixed bed reactors, and furnaces). These systems are termed distributed parameter systems (DPS) or two-dimensional (2-D) systems. 2-D systems have commonly been represented by the Roesser model and the F-M model. Fault detection and isolation for 2-D systems represents a great challenge in both theoretical development and applications, and only limited research results are available. In this thesis, model based fault detection strategies for 2-D systems are investigated based on the F-M and the Roesser models. A dead-beat observer based fault detection method is already available for the F-M model. In this work, an observer based fault detection strategy is investigated for systems modelled by the Roesser model. Using the 2-D polynomial matrix technique, a dead-beat observer is developed and the state estimate from the observer is then input to a residual generator to monitor the occurrence of faults. An enhanced realization technique is incorporated to achieve efficient fault detection with reduced computation. Simulation results indicate that the proposed method is effective in detecting faults for systems without disturbances as well as those affected by unknown disturbances. The dead-beat observer based fault detection has been shown to be effective for 2-D systems, but strict conditions are required in order for an observer and a residual generator to exist. These strict conditions may not be satisfied for some systems. The effect of process noise is also not considered in the observer based fault detection approaches for 2-D systems. To overcome these disadvantages, 2-D Kalman filter based fault detection algorithms are proposed in the thesis. A recursive 2-D Kalman filter is applied to obtain a state estimate minimizing the estimation error variances. Based on the state estimate from the Kalman filter, a residual is generated reflecting fault information. A model is formulated for the relation of the residual with faults over a moving evaluation window. Simulations are performed on two F-M models and results indicate that faults can be detected effectively and efficiently using the Kalman filter based fault detection. In the observer based and Kalman filter based fault detection approaches, the residual signals are used to determine whether a fault occurs. For systems with complicated fault information and/or noise, it is necessary to evaluate the residual signals using statistical techniques. Fault detection of 2-D systems is therefore proposed with the residuals evaluated using dynamic principal component analysis (DPCA). Based on historical data, the reference residuals are first generated using either the observer or the Kalman filter based approach. Based on the residual time-lagged data matrices for the reference data, the principal components are calculated and the threshold value obtained. In online applications, the T2 value of the residual signals is compared with the threshold value to determine fault occurrence. Simulation results show that applying DPCA to the evaluation of 2-D residuals is effective.
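As a much-simplified, one-dimensional illustration of the Kalman-filter-based scheme summarized above (the thesis works with 2-D F-M and Roesser models, which are considerably more involved), the sketch below uses the filter innovation as a residual and evaluates it over a moving window against a threshold estimated from fault-free reference data; all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
a, c, q, r = 0.9, 1.0, 0.01, 0.04        # assumed scalar model and noise variances
n = 200
x, y = np.zeros(n), np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0, np.sqrt(q))
    y[k] = c * x[k] + rng.normal(0, np.sqrt(r))
y[120:] += 1.5                            # additive sensor fault injected at k = 120

# Scalar Kalman filter; the innovation serves as the residual.
xhat, p = 0.0, 1.0
residual = np.zeros(n)
for k in range(n):
    xpred, ppred = a * xhat, a * p * a + q
    residual[k] = y[k] - c * xpred
    gain = ppred * c / (c * ppred * c + r)
    xhat = xpred + gain * residual[k]
    p = (1.0 - gain * c) * ppred

# Residual evaluation: moving average of |residual| against a threshold
# estimated from the fault-free reference segment.
window = 10
stat = np.convolve(np.abs(residual), np.ones(window) / window, mode="same")
threshold = stat[:100].mean() + 6.0 * stat[:100].std()
alarms = np.flatnonzero(stat > threshold)
print("first alarm at sample:", alarms[0] if alarms.size else None)
```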
APA, Harvard, Vancouver, ISO, and other styles
40

Ambraziūnas, Martas. "Enterprise model based MDA information systems engineering method." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20141111_114310-77387.

Full text
Abstract:
Although new methods of information systems engineering are constantly being researched and developed, they are empirical in nature. The problem-domain knowledge acquisition process relies heavily on the system analyst and the user; it is therefore not clear whether the knowledge of the problem domain is comprehensive. This may lead to logical gaps and misinterpretation of system requirements, causing issues for the project. This research work develops a new IS engineering method that allows the problem-domain knowledge to be validated against formal criteria. To create such a method, the basic principles of knowledge-based ISE and model-driven ISE were combined, yielding the Knowledge Based MDA method, which extends traditional MDA with an Enterprise Model. During the research, a prototype of a Knowledge Based MDA tool capable of partly automating the Knowledge Based MDA process was created. The efficiency of the Knowledge Based MDA method was validated by creating a real-life application for mobile devices (a postal shipment tracking application). The empirical study established that, by using the developed method, software requirements quality is improved and comprehensive documentation is created (due to Enterprise Model based validation), the occurrence of logical gaps between software development stakeholders is reduced, and the time needed to create applications for multi-platform systems is reduced (due to automated code generation and a shorter testing stage).
APA, Harvard, Vancouver, ISO, and other styles
41

Loer, Karsten. "Model-based automated analysis for dependable interactive systems." Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Kirwan, Ryan F. "Applying model checking to agent-based learning systems." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/5050/.

Full text
Abstract:
In this thesis we present a comprehensive approach for applying model checking to Agent-Based Learning (ABL) systems. Model checking faces a unique challenge with ABL systems, as the modelling of learning is thought to be outwith its scope. The practical work performed to model these systems is presented in the incremental stages by which it was carried out. This allows for a clearer understanding of the problems faced and of the progress made on traditional ABL system analysis. Our focus is on applying model checking to a specific type of system. It involves a biologically-inspired robot that uses Input Correlation learning to help it navigate environments. We present a highly detailed PROMELA model of this system, using embedded C code to avoid losing accuracy when modelling it. We also propose an abstraction method for this type of system: Agent-centric abstraction. Our abstraction is the main contribution of this thesis. It is defined in detail, and we provide a proof of its soundness in the form of a simulation relation. In addition to this, we use it to generate an abstract model of the system. We give a comparison between our models and traditional system analysis, specifically simulation. A strong case for using model checking to aid ABL system analysis is made by our comparison and the verification results we obtain from our models. Overall, we present a framework for analysing ABL systems that differs from the more common approach of simulation. We define this framework in detail, and provide results from practical work coupled with a discussion about drawbacks and future enhancements.
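The thesis performs its verification in PROMELA with embedded C; purely as an illustration of the underlying idea of explicit-state safety checking over an agent-centric abstraction, here is a toy breadth-first reachability check in Python (states, transitions, and the property are invented):

```python
from collections import deque

# Toy agent-centric abstraction: the robot's state is (position, weight_level),
# a 1-D corridor position plus a coarsely abstracted learned weight.
START = (0, 0)

def is_unsafe(state):
    return state[0] < 0                  # safety property: never leave the corridor

def successors(state):
    pos, w = state
    if w < 2:
        yield (pos, w + 1)               # learning may still strengthen the weight
    yield (min(pos + 1, 4), w)           # move forward
    if w == 0:
        yield (pos - 1, w)               # with an untrained weight the agent may drift back

def check_safety(start):
    """Explicit-state breadth-first search; returns a counterexample path or None."""
    seen, queue = {start}, deque([(start, [start])])
    while queue:
        state, path = queue.popleft()
        if is_unsafe(state):
            return path                  # counterexample: the property is violated
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

print(check_safety(START))               # prints a path ending in an unsafe state
```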
APA, Harvard, Vancouver, ISO, and other styles
43

de, Araujo Rodrigues Vieira Elisangela. "Automated model-based test generation for timed systems." Evry, Institut national des télécommunications, 2007. http://www.theses.fr/2007TELE0011.

Full text
Abstract:
Timed systems are systems with real-time constraints. The correctness of a timed system depends not only on the operations it performs but also on the times at which they are performed. Testing a system aims to guarantee its correctness. Model-based test generation is an approach that generates test cases from a formal model. Although many test generation methods have been proposed, their timed counterpart is still a young field, and most of the proposed solutions suffer from combinatorial explosion, which limits their applicability in practice. This explains why there are so few automatic formal methods for test generation, for both timed and untimed systems. This thesis presents an automatic test generation approach for timed systems based on a test-purpose algorithm. The test-purpose approach guarantees that test cases are generated for the critical parts of the system and avoids the state-explosion problem. In addition, we propose techniques to generate test sequences with timing-fault detection and with delayed and/or instantaneous transitions. To evaluate the applicability and efficiency of the proposed method, we implemented two prototype tools: one based on an industrial simulator for SDL specifications and the other using a free toolset based on IF models. Two real industrial applications are used as case studies: a Railroad Crossing system and a Vocal Service provided by France Telecom.
APA, Harvard, Vancouver, ISO, and other styles
44

Moseley, Charles Warren. "A Timescale Estimating Model for Rule-Based Systems." Thesis, North Texas State University, 1987. https://digital.library.unt.edu/ark:/67531/metadc332089/.

Full text
Abstract:
The purpose of this study was to explore the subject of timescale estimating for rule-based systems. A model for estimating the timescale necessary to build rule-based systems was built and then tested in a controlled environment.
APA, Harvard, Vancouver, ISO, and other styles
45

Filho, João Bosco Ferreira. "Leveraging model-based product lines for systems engineering." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S080/document.

Full text
Abstract:
Systems engineering is a complex and expensive activity in several kinds of companies; it requires stakeholders to deal with massive pieces of software and their integration with several hardware components. To ease the development of such systems, engineers adopt a divide-and-conquer approach: each concern of the system is engineered separately, with several domain-specific languages (DSLs) and stakeholders. The current practice for building DSLs is to rely on Model-Driven Engineering (MDE). On the other hand, systems engineering companies also need to construct slightly different versions/variants of the same system; these variants share commonalities and variabilities that can be managed using a Software Product Line (SPL) approach. A promising approach is to ally MDE with SPL, yielding model-based SPLs (MSPLs), in which the products of the SPL are expressed as models conforming to a metamodel and well-formedness rules. The Common Variability Language (CVL) has recently emerged as an effort to standardize and promote MSPLs. Engineering an MSPL is extremely complex: the number of possible products is exponential; the derived product models have to conform to numerous well-formedness and business rules; and the realization model that connects a variability model and a set of design models can be very expressive, especially in the case of CVL. Managing variability models and design models is a non-trivial activity; connecting both parts, and therefore managing all the models, is a daunting and error-prone task. Added to these challenges are the many different modeling languages of systems engineering: each time a new modeling language is used for developing an MSPL, the realization layer must be revised accordingly. The objective of this thesis is to assist the engineering of MSPLs in the systems engineering field, supporting it as early as possible and without compromising the existing development process. To achieve this, we provide a systematic and automated process, based on CVL, to randomly search the space of MSPLs for a given language, generating counterexamples that can serve as antipatterns. We then provide ways to specialize CVL's realization layer (and derivation engine) based on the knowledge acquired from the counterexamples. We validate our approach with four modeling languages, one of which comes from industry; the approach generates counterexamples efficiently, and we made initial progress in increasing the safety of the MSPL mechanisms for those languages by implementing antipattern detection rules. We also analyze large Java programs, assessing the adequacy of CVL for dealing with complex languages; this is also a first step toward qualitatively assessing the counterexamples. Finally, we provide a methodology to define the processes and roles needed to leverage MSPL engineering in an organization.
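The core of the counterexample-generation process described above is to sample configurations at random, derive the corresponding product model, and keep the derivations that violate a well-formedness rule. A schematic sketch with an invented variability model, derivation operator, and rule (it does not use CVL itself):

```python
import random

# Invented variability model: three optional features over a tiny base model.
FEATURES = ["Logging", "Encryption", "RemoteAccess"]

# Invented well-formedness rule of the modeling language:
# a product exposing RemoteAccess must also contain Encryption.
def well_formed(product_model):
    return not ("RemoteAccess" in product_model and "Encryption" not in product_model)

def derive(configuration):
    """Trivial derivation: the product model is the set of selected features."""
    return {f for f, selected in configuration.items() if selected}

def search_counterexamples(n_samples=50, seed=42):
    rng = random.Random(seed)
    counterexamples = []
    for _ in range(n_samples):
        cfg = {f: rng.random() < 0.5 for f in FEATURES}
        product = derive(cfg)
        if not well_formed(product):
            counterexamples.append((cfg, product))
    return counterexamples

for cfg, product in search_counterexamples()[:3]:
    print("ill-formed product", sorted(product), "from configuration", cfg)
```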
APA, Harvard, Vancouver, ISO, and other styles
46

Sellers, Benjamin D. "Physics-based refinement of proteins in model systems." Diss., Search in ProQuest Dissertations & Theses. UC Only, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3311345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Schulz, Stephan. "Model-based codesign for real-time embedded systems." Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/289712.

Full text
Abstract:
This dissertation presents a model-based codesign framework for real-time embedded systems applications. The research provides a theoretical modeling foundation for the construction of design models and their implementation. Whereas most current codesign approaches leverage a complete specification of an application design at the implementation level, a completely modular, implementation-independent system-level specification is pursued here. Benefits of the presented approach include: a stepwise refinement of abstract design models for complex applications, a larger design space of possible application implementations, a late design partitioning into hardware and software components, and the representation of concurrency inherent to the application as well as its implementation on a parallel processing platform. A formal abstraction for a general, system-level specification of real-time embedded systems is derived, which may be used with a variety of executable discrete-event modeling specifications. Furthermore, the construction of abstract design models from textual system specifications is discussed based on this abstraction, as well as their correct refinement. A set of analysis methods which evaluate system simulation results is introduced to validate and improve abstracted design model performance during each refinement step. A direct transition from the design model to an efficient implementation is addressed through a model compilation algorithm which validates alternative processing platforms for a detailed design model specification against real-time constraints. In addition, a formal mapping of system model component specifications to implementation specifications is given. Separately, this model continuity problem is also addressed through the definition of a multi-level approach to the testing of integrated application implementation prototypes. The presented model-based codesign concepts are illustrated with an embedded systems application example.
APA, Harvard, Vancouver, ISO, and other styles
48

Baur, Ulrike, and Peter Benner. "Gramian-Based Model Reduction for Data-Sparse Systems." Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200701952.

Full text
Abstract:
Model reduction is a common theme within the simulation, control and optimization of complex dynamical systems. For instance, in control problems for partial differential equations, the associated large-scale systems have to be solved very often. To attack these problems in reasonable time it is absolutely necessary to reduce the dimension of the underlying system. We focus on model reduction by balanced truncation where a system theoretical background provides some desirable properties of the reduced-order system. The major computational task in balanced truncation is the solution of large-scale Lyapunov equations, thus the method is of limited use for really large-scale applications. We develop an effective implementation of balancing-related model reduction methods in exploiting the structure of the underlying problem. This is done by a data-sparse approximation of the large-scale state matrix A using the hierarchical matrix format. Furthermore, we integrate the corresponding formatted arithmetic in the sign function method for computing approximate solution factors of the Lyapunov equations. This approach is well-suited for a class of practical relevant problems and allows the application of balanced truncation and related methods to systems coming from 2D and 3D FEM and BEM discretizations.
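Balanced truncation as summarized above amounts to solving the two Lyapunov equations for the controllability and observability Gramians and truncating the states with small Hankel singular values. Below is a dense, small-scale square-root sketch using SciPy; it is only an illustration and does not attempt the data-sparse, hierarchical-matrix and sign-function machinery that the thesis targets:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of x' = Ax + Bu, y = Cx to order r."""
    # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    S = cholesky(P, lower=True)          # P = S S^T
    R = cholesky(Q, lower=True)          # Q = R R^T
    U, hsv, Vt = svd(R.T @ S)            # Hankel singular values
    U1, V1, s1 = U[:, :r], Vt[:r, :].T, hsv[:r]
    T = S @ V1 / np.sqrt(s1)             # right projection
    W = R @ U1 / np.sqrt(s1)             # left projection, W^T T = I_r
    return W.T @ A @ T, W.T @ B, C @ T, hsv

# Toy stable system of order 5 reduced to order 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A - (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(5)   # shift to make A stable
B, C = rng.standard_normal((5, 1)), rng.standard_normal((1, 5))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", np.round(hsv, 4))
```

The square-root formulation avoids balancing the full system explicitly: the projection matrices are built from Cholesky factors of the Gramians and the SVD of their product, which is also the form typically combined with approximate low-rank or data-sparse Gramian factors.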
APA, Harvard, Vancouver, ISO, and other styles
49

Szabo, Andrew P. "System Identification and Model-Based Control of Quadcopter UAVs." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1553197265058507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Vasquez, Arvallo Agustin. "Condition-based maintenance of actuator systems using a model-based approach /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004389.

Full text
APA, Harvard, Vancouver, ISO, and other styles