Theses on the topic « Model-based analysis »

To see other types of publications on this topic, follow the link: Model-based analysis.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the top 50 theses for your research on the topic « Model-based analysis ».

Next to every source in the list of references there is an « Add to bibliography » button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

VIRGILI, LUCA. « Graphs behind data : A network-based approach to model different scenarios ». Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/295088.

Full text
Abstract:
Nowadays, the number and variety of scenarios that can benefit from techniques for extracting and managing knowledge from raw data have dramatically increased. As a result, the search for models capable of ensuring the representation and management of highly heterogeneous data is a hot topic in the data science literature. In this thesis, we propose a solution to address this issue. In particular, we believe that graphs, and more specifically complex networks, together with the concepts and approaches associated with them, can represent a valid solution: a single, unifying model to uniformly represent and handle extremely heterogeneous data. Based on this premise, we show how the same concepts and approaches have the potential to successfully address many open issues in different contexts.
2

SIVORI, DANIELE. « Ambient vibration tools supporting the model-based seismic assessment of existing buildings ». Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1045713.

Full text
Abstract:
The technological advancements of the last decades are making dynamic monitoring an efficient and widespread resource to investigate the safety and health of engineering structures. In the wake of these developments, the thesis proposes methodological tools supporting the seismic assessment of existing buildings through the use of ambient vibration tests. In this context, the literature highlights considerable room to broaden the ongoing research, especially regarding masonry buildings. The recent earthquakes, once again, highlighted the significant vulnerability of this structural typology, an important part of our built heritage, underlining the importance of risk mitigation strategies at the territorial scale. The thesis builds upon a simplified methodology recently proposed in the literature, conceived to assess the post-seismic serviceability of strategic buildings based on their operational modal parameters. The original contributions of the work pursue the theoretical and numerical validation of its basic simplifying assumptions, in structural modelling – such as in-plane rigid floor diaphragms – and in seismic analysis – related to the nonlinear fundamental frequency variations induced by earthquakes. These strategies are commonly employed in the seismic assessment of existing buildings, but require further development for masonry buildings. The novel proposal of the thesis takes advantage of ambient vibration data to establish direct and inverse mechanical problems in the frequency domain targeted at, first, qualitatively distinguishing between rigid and non-rigid diaphragms and, second, quantitatively identifying their in-plane shear stiffness, a mechanical feature that plays a primary role in the seismic behaviour of masonry buildings. The application of these tools to real case studies points out their relevance in the updating and validation of structural models for seismic assessment purposes. In light of these achievements, a model-based computational framework is proposed to develop frequency decay-damage control charts for masonry buildings, which exploit ambient vibration measurements for quick damage evaluations in post-earthquake scenarios. The results of the simulations, finally, highlight the generally conservative nature of ambient vibration-based simplified methodologies, confirming their suitability for the serviceability assessment of existing masonry buildings.
3

Cerneaz, Nicholas J. « Model-based analysis of mammograms ». Thesis, University of Oxford, 1994. http://ora.ox.ac.uk/objects/uuid:a8d91bb2-429c-4da3-9f1b-6209771c61b5.

Full text
Abstract:
Metastasised breast cancer kills. There is no known cure, there are no known preventative measures, and there are no drugs available with proven capacity to abate its effects. Early identification and excision of a malignancy prior to metastasis is the only method currently available for reducing the mortality due to breast disease. Automated analysis of mammograms has been proposed as a tool to aid radiologists in detecting breast disease earlier and with greater efficiency and success. This thesis addresses some of the major difficulties associated with the automated analysis of mammograms, in particular the difficulties caused by the high-frequency, relatively insignificant curvi-linear structures (CLS) comprising the blood vessels, milk-ducts and fibrous tissues. Previous attempts at automation have overlooked these structures, and the complexity resulting from that oversight has been handled inappropriately. We develop a model-based analysis of the CLS features, from the very anatomy of the breast, through mammography and digitisation, to the image intensities. The model immediately dictates an algorithm for extracting a high-level feature description of the CLS features. This high-level feature description allows a systematic treatment of these image features prior to searching for instances of breast disease. We demonstrate a procedure for implementing such prior treatment by 'removing' the CLS features from the images. Furthermore, we develop a model of the expected appearance of mammographic densities in the CLS-removed image, which leads directly to an algorithm for their identification. Unfortunately the model also extracts many regions of the image that are not significant mammographic densities, and this therefore requires a subsequent segmentation stage. Unlike previous attempts, which apply neural networks to this task and consequently suffer from the insufficient availability of data describing significant mammographic densities, we illustrate the application of a new statistical method (novelty analysis) for achieving a statistically significant segmentation of the mammographic densities from the plethora of candidates identified at the previous stage. We demonstrate the ability of the CLS feature description to identify instances of radial-scar in mammograms, and note the suitability of the CLS and density descriptions for assessment of bilateral and temporal asymmetry. Some additional potential applications of these feature descriptions in arenas other than mammogram analysis are also noted.
4

McGarry, Gregory John. « Model-based mammographic image analysis ». Thesis, Queensland University of Technology, 2002.

Find full text
5

Montrieux, Lionel. « Model-based analysis of role-based access control ». Thesis, Open University, 2013. http://oro.open.ac.uk/38672/.

Full text
Abstract:
Model-Driven Engineering (MDE) has been extensively studied. Many directions have been explored, sometimes with the dream of providing a fully integrated approach for designers, developers and other stakeholders to create, reason about and modify models representing software systems. Most, but not all, of the research in MDE has focused on general-purpose languages and models, such as Java and UML. Domain-specific and cross-cutting concerns, such as security, are increasingly essential parts of a software system, but are only treated as second-class citizens in the most popular modelling languages. Efforts have been made to give security, and in particular access control, a more prominent place in MDE, but most of these approaches require advanced knowledge in security, programming (often declarative), or both, making them difficult for less technically trained stakeholders to use. In this thesis, we propose an approach to modelling, analysing and automatically fixing role-based access control (RBAC) that does not require users to write code or queries themselves. To this end, we use two UML profiles and associated OCL constraints that provide the modelling and analysis features. We propose a taxonomy of OCL constraints and use it to define a partial order between categories of constraints, which we use to propose strategies to speed up the models' evaluation time. Finally, by representing OCL constraints as constraints on a graph, we propose an automated approach for generating lists of model changes that can be applied to an incorrect model in order to fix it. All these features have been fully integrated into a UML modelling IDE, IBM Rational Software Architect.
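
The kind of consistency check such an approach automates can be illustrated with a toy example. The sketch below is a minimal, hypothetical RBAC model in Python, not the thesis's UML/OCL machinery: it detects violations of a static separation-of-duty rule, one typical category of access-control constraint. All names and the policy itself are invented for illustration.

```python
# Minimal, hypothetical RBAC model: detect violations of a static
# separation-of-duty (SoD) constraint. Names and policy are illustrative.
user_roles = {"alice": {"clerk", "approver"}, "bob": {"clerk"}}
mutually_exclusive = [("clerk", "approver")]  # roles no user may hold together

def sod_violations(user_roles, exclusive_pairs):
    """Return (user, role_a, role_b) triples where a user holds two
    mutually exclusive roles."""
    return [(u, a, b)
            for u, roles in user_roles.items()
            for a, b in exclusive_pairs
            if a in roles and b in roles]

print(sod_violations(user_roles, mutually_exclusive))
# -> [('alice', 'clerk', 'approver')]
```
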
6

Zhang, Fan. « Model identification and model based analysis of membrane reactors ». Aachen : Shaker, 2008. http://d-nb.info/992051029/04.

Full text
7

Graham, Matthew R. « Extensions in model-based system analysis ». Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3273192.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2007.
Title from first page of PDF file (viewed August 31, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 116-123).
8

Woolrich, Mark. « Model-based approaches to FMRI analysis ». Thesis, University of Oxford, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249485.

Full text
9

Rekik, Saoussen. « Methodology for a model based timing analysis process for automotive systems ». PhD thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00647906.

Full text
Abstract:
Today, automotive applications have become more and more complex, with limited resources and more timing and safety constraints. Timing verification is currently performed very late in the automotive development process (after implementation, during the integration phase). To address the problems of automotive software development, several model-driven development approaches have been defined. These approaches provide languages, concepts and methodologies for describing the architecture of automotive systems. However, they give no methodological guidance for integrating timing analysis (in particular schedulability analysis) throughout the development process. This thesis develops a methodology describing a model-based timing analysis process. The methodology describes the different phases of the model-driven development process and how timing analysis is performed during each phase.
10

De Araujo Rodrigues Vieira, Elisangela. « Automated model-based test generation for timed systems ». Evry, Institut national des télécommunications, 2007. http://www.theses.fr/2007TELE0011.

Full text
Abstract:
Timed systems are systems with real-time constraints. The correctness of a timed system depends not only on the operations it performs but also on the times at which they are performed. Testing a system aims to guarantee its correctness. Model-based test generation is an approach that generates test cases from a formal model. Although many test generation methods have been proposed, their timed counterpart is still a young field. In addition, most of the proposed solutions suffer from combinatorial explosion, which still limits their applicability in practice. This explains why there are so few automatic formal methods for test generation, for both timed and untimed systems. This thesis presents an automatic test generation approach for timed systems using a test-purpose algorithm. The test-purpose approach guarantees the generation of test cases covering the critical parts of the system and avoids the state explosion problem. In addition, we propose techniques to generate test sequences with timing-fault detection and with delayed and/or instantaneous transitions. In order to evaluate the applicability and efficiency of the proposed method, we have implemented two prototype tools: one based on an industrial simulator for SDL specifications and the other using a free toolset based on IF models. Two real industrial applications are used as case studies: a railroad crossing and a vocal service provided by France Telecom.
11

Tantrum, Jeremy. « Model based and hybrid clustering of large datasets / ». Thesis, Connect to this title online ; UW restricted, 2003. http://hdl.handle.net/1773/8933.

Full text
12

Hatefi, Armin. « Mixture model analysis with rank-based samples ». Statistica Sinica, 2013. http://hdl.handle.net/1993/23849.

Full text
Abstract:
Simple random sampling (SRS) is the most commonly used sampling design in data collection. In many applications (e.g., in fisheries and medical research) quantification of the variable of interest is either time-consuming or expensive, but ranking a number of sampling units, without actual measurement on them, can be done relatively easily and at low cost. In these situations, one may use rank-based sampling (RBS) designs to obtain more representative samples from the underlying population and improve the efficiency of the statistical inference. In this thesis, we study the theory and application of the finite mixture models (FMMs) under RBS designs. In Chapter 2, we study the problems of Maximum Likelihood (ML) estimation and classification in a general class of FMMs under different ranked set sampling (RSS) designs. In Chapter 3, deriving the Fisher information (FI) content of different RSS data structures, including complete and incomplete RSS data, we show that the FI contained in each variation of the RSS data about different features of FMMs is larger than the FI contained in their SRS counterparts. There are situations where it is difficult to rank all the sampling units in a set with high confidence. Forcing rankers to assign unique ranks to the units (as in RSS) can lead to substantial ranking error and consequently to poor statistical inference. We hence focus on the partially rank-ordered set (PROS) sampling design, which is aimed at reducing the ranking error and the burden on rankers by allowing them to declare ties (partially ordered subsets) among the sampling units. Studying the information and uncertainty structures of the PROS data in a general class of distributions, in Chapter 4, we show the superiority of the PROS design in data analysis over RSS and SRS schemes. In Chapter 5, we also investigate the ML estimation and classification problems of FMMs under the PROS design. Finally, we apply our results to estimate the age structure of a short-lived fish species based on the length frequency data, using SRS, RSS and PROS designs.
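
To make the sampling designs concrete, here is a minimal sketch contrasting ranked set sampling with simple random sampling under the idealised assumption of perfect rankings; the set size, cycle count and population distribution are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def srs(pop, n):
    """Simple random sample of size n."""
    return rng.choice(pop, size=n, replace=False)

def rss(pop, k, cycles):
    """Ranked set sample: in each cycle draw k sets of k units, rank each set
    (here by the true value, i.e. perfect ranking), and measure only the
    r-th ranked unit of the r-th set. Yields k*cycles measured units."""
    out = []
    for _ in range(cycles):
        for r in range(k):
            s = rng.choice(pop, size=k, replace=False)
            out.append(np.sort(s)[r])
    return np.array(out)

pop = rng.lognormal(mean=0.0, sigma=0.75, size=100_000)
n, k = 30, 5
srs_means = [srs(pop, n).mean() for _ in range(2000)]
rss_means = [rss(pop, k, n // k).mean() for _ in range(2000)]
# the RSS sample mean is typically less variable than the SRS one
print(np.var(srs_means), np.var(rss_means))
```
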
13

Morrison, Steven. « Model based parameter estimation for image analysis ». Thesis, Heriot-Watt University, 1999. http://hdl.handle.net/10399/561.

Full text
14

Dehmeshki, Jamshid. « Stochastic model-based approach to image analysis ». Thesis, University of Nottingham, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363908.

Full text
15

Lin, Dong. « Model-based cluster analysis using Bayesian techniques ». To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
16

GOMES, Adriano José Oliveira. « Systematic model-based safety assessment via probabilistic model checking ». Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2651.

Full text
Abstract:
Safety assessment is a well-known process used to guarantee that the safety constraints of a critical system are met. Within it, quantitative safety analysis deals with these constraints in a numerical (probabilistic) context. Safety analysis methods, such as traditional Fault Tree Analysis (FTA), are used in the quantitative safety assessment process, following certification guidelines (for example, the ARP4761 aerospace recommended practices guide). However, this method is usually costly and requires much time and effort to validate a system as a whole, since on average some 10,000 fault trees are built for an aircraft, and because it depends heavily on human skill to cope with the time constraints that restrict the scope and level of detail the analysis and its results can reach. On the other hand, certification authorities also allow the use of Markov analysis; although Markov models are more powerful than fault trees, industry rarely adopts this analysis because its models are more complex and harder to handle. Consequently, FTA has been widely used in this process, mainly because it is conceptually simpler and easier to understand. As the complexity and time-to-market of systems increase, the interest in addressing safety issues during the early design phases, rather than in the intermediate/final phases, has made the adoption of model-based designs, tools and techniques common. Simulink is the standard example currently used in the aeronautics industry. However, even in this scenario, current solutions follow what engineers already used before. In contrast, formal methods, which are languages, tools and methods based on logic and discrete mathematics and do not follow traditional engineering approaches, can provide innovative, low-cost solutions for engineers. This dissertation defines a strategy for quantitative safety assessment based on Markov analysis. However, instead of dealing with Markov models directly, we use the formal language Prism (a Prism specification is semantically interpreted as a Markov model). Moreover, this Prism specification is systematically extracted from a high-level model (Simulink diagrams annotated with the system's fault logic) by applying translation rules. The quantitative verification of the system's safety requirements is performed using the Prism model checker, in which the safety requirements become probabilistic temporal-logic formulas. The immediate goal of our work is to avoid the effort of building several fault trees until a safety requirement is found to be violated. Prism does not build fault trees to reach this result; it simply checks at once whether a safety requirement is satisfied by the entire model. Finally, our strategy is illustrated with a simple but representative system (a pilot project) designed by Embraer.
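
As a flavour of the requirements verified in this setting, a quantitative safety requirement such as "the probability of reaching a failure state within the operating horizon T must not exceed 10^-9" takes, in probabilistic temporal logic of the kind the Prism model checker evaluates, roughly the following shape (threshold and horizon are illustrative, not taken from the dissertation):

```latex
P_{\le 10^{-9}}\left[\ \mathsf{F}^{\le T}\ \mathit{failure}\ \right]
```
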
17

Andalib, Maryam Alsadat. « Model-based Analysis of Diversity in Higher Education ». Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/96221.

Full text
Abstract:
U.S. higher education is an example of a large multi-organizational system within the service sector. Its performance regarding workforce development can be analyzed through the lens of industrial and systems engineering. In this three-essay dissertation, we seek the answer to the following question: How can the U.S. higher education system achieve an equal representation of female and minority members in its student and faculty populations? In essay 1, we model the education pipeline with a focus on the system's gender composition from k-12 to graduate school. We use a system dynamics approach to present a systems view of the mechanisms that affect the dynamics of higher education, replicate historical enrollment data, and forecast future trends of higher education's gender composition. Our results indicate that, in the next two decades, women will be the majority of advanced degree holders. In essay 2, we look at the support mechanisms for new-parent, tenure-track faculty in universities with a specific focus on tenure-clock extension policies. We construct a unique data set to answer questions around the effectiveness of removing the stigma connected with automatic tenure-clock policies. Our results show that such policies are successful in removing the stigma and that, overall, faculty members that have newborns and are employed by universities that adopt auto-TCE policies stay one year longer in their positions than other faculty members. In addition, although faculty employed at universities that adopt such policies are generally more satisfied with their jobs, there is no statistically significant effect of auto TCE policies on the chances of obtaining tenure. In essay 3, we focus on the effectiveness of training underrepresented minorities (e.g., African Americans and Hispanics) in U.S. higher education institutions using a Data Envelopment Analysis approach. Our results indicate that graduation rates, average GPAs, and post-graduate salaries of minority students are higher in selective universities and those located in more diverse towns/cities. Furthermore, the graduation rate of minority students in private universities and those with affirmative action programs is higher than in other institutions. Overall, this dissertation provides new insights into improving diversity within the science workforce at different organizational levels by using industrial and systems engineering and management sciences methods.
Ph. D.
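
As a flavour of the system dynamics modelling used in essay 1, the sketch below simulates a toy two-stock education pipeline (undergraduate and graduate enrolment) with assumed inflow and transition rates; all numbers are hypothetical and not taken from the dissertation.

```python
import numpy as np

# toy two-stock pipeline: undergraduate -> graduate enrolment (assumed rates)
years = 20
undergrad, grad = np.zeros(years), np.zeros(years)
undergrad[0], grad[0] = 100.0, 20.0
intake = 25.0   # new undergraduates per year
u2g = 0.15      # fraction of undergraduates moving on to graduate school
u_out = 0.20    # undergraduate graduation/attrition out of the pipeline
g_out = 0.25    # graduate completion/attrition

for t in range(1, years):
    undergrad[t] = undergrad[t - 1] + intake - (u2g + u_out) * undergrad[t - 1]
    grad[t] = grad[t - 1] + u2g * undergrad[t - 1] - g_out * grad[t - 1]

print(round(undergrad[-1], 1), round(grad[-1], 1))  # near-steady-state stocks
```
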
18

Coletti, Mark. « An analysis of a model-based evolutionary algorithm : Learnable Evolution Model ». Thesis, George Mason University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3625081.

Full text
Abstract:

An evolutionary algorithm (EA) is a biologically inspired metaheuristic that uses mutation, crossover, reproduction, and selection operators to evolve solutions for a given problem. Learnable Evolution Model (LEM) is an EA that has an evolutionary algorithm component that works in tandem with a machine learner to collaboratively create populations of individuals. The machine learner infers rules from best and least fit individuals, and then this knowledge is exploited to improve the quality of offspring.

Unfortunately, most of the extant work on LEM has been ad hoc, and so there does not exist a deep understanding of how LEM works. This lack of understanding, in turn, means that there is no set of best practices for implementing LEM. For example, most LEM implementations use rules that describe value ranges corresponding to areas of higher fitness in which offspring should be created. However, we do not know the efficacy of different approaches for sampling those intervals. Also, we do not have sufficient guidance for assembling training sets of positive and negative examples from populations from which the ML component can learn.

This research addresses those open issues by exploring three different rule interval sampling approaches as well as three different training set configurations on a number of test problems that are representative of the types of problems that practitioners may encounter. Using the machine learner to create offspring induces a unique emergent selection pressure, separate from the selection pressure that manifests from parent and survivor selection. One outcome of this research is a partial ordering of the impact that these rule interval sampling approaches and training set configurations have on this selection pressure, which practitioners can use for implementation guidance: a practitioner can modulate selection pressure by traversing a set of design configurations within a Hasse graph defined by the partially ordered selection pressure.
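
The LEM loop analysed here can be sketched in a few lines. The Python toy below uses a one-rule "learner" (per-gene intervals inferred from the current best individuals) to sample offspring, standing in for the rule learners studied in the thesis; the objective, population size and elite fraction are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                       # toy objective: sphere function (maximize)
    return -np.sum(x**2, axis=1)

def lem(pop_size=50, dim=5, gens=40, elite_frac=0.2):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    for _ in range(gens):
        fit = fitness(pop)
        elite = pop[np.argsort(fit)[-int(elite_frac * pop_size):]]
        lo, hi = elite.min(axis=0), elite.max(axis=0)  # "rule": per-gene interval
        pop = rng.uniform(lo, hi, (pop_size, dim))     # sample offspring in rule
    return pop[np.argmax(fitness(pop))]

print(lem())   # approaches the optimum at the origin
```
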

19

Zhang, Fan [Verfasser]. « Model Identification and Model Based Analysis of Membrane Reactors / Fan Zhang ». Aachen : Shaker, 2009. http://d-nb.info/1161308121/34.

Full text
20

Thiers, George. « A model-based systems engineering methodology to make engineering analysis of discrete-event logistics systems more cost-accessible ». Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52259.

Full text
Abstract:
This dissertation supports human decision-making with a Model-Based Systems Engineering methodology enabling engineering analysis, and in particular Operations Research analysis of discrete-event logistics systems, to be more widely used in a cost-effective and correct manner. A methodology is a collection of related processes, methods, and tools, and the process of interest is posing a question about a system model and then identifying and building answering analysis models. Methods and tools are the novelty of this dissertation, which when applied to the process will enable the dissertation's goal. One method which directly enables the goal is adding automation to analysis model-building. Another method is abstraction, to make explicit a frequently-used bridge to analysis and also expose analysis model-building repetition to justify automation. A third method is formalization, to capture knowledge for reuse and also enable automation without human interpreters. The methodology, which is itself a contribution, also includes two supporting tool contributions. A tool to support the abstraction method is a definition of a token-flow network, an abstract concept which generalizes many aspects of discrete-event logistics systems and underlies many analyses of them. Another tool to support the formalization method is a definition of a well-formed question, the result of an initial study of semantics, categories, and patterns in questions about models which induce engineering analysis. This is more general than queries about models in any specific modeling language, and also more general than queries answerable by navigating through a model and retrieving recorded information. A final contribution follows from investigating tools for the automation method. Analysis model-building is a model-to-model transformation, and languages and tools for model-to-model transformation already exist in Model-Driven Architecture of software. The contribution considers if and how these tools can be re-purposed by contrasting software object-oriented code generation and engineering analysis model-building. It is argued that both use cases share a common transformation paradigm but executed at different relative levels of abstraction, and the argument is supported by showing how several Operations Research analyses can be defined in an object-oriented way across multiple layered instance-of abstraction levels. Enabling Operations Research analysis of discrete-event logistics systems to be more widely used in a cost-effective and correct manner requires considering fundamental questions about what knowledge is required to answer a question about a system, how to formally capture that knowledge, and what that capture enables. Developments here are promising, but provide only limited answers and leave much room for future work.
21

Zhao, Yanbin. « Relaxed stability analysis of fuzzy-model-based control systems ». Thesis, King's College London (University of London), 2018. https://kclpure.kcl.ac.uk/portal/en/theses/relaxed-stability-analysis-of-fuzzymodelbased-control-systems(df7ec615-6b23-4344-844d-00300a43f975).html.

Full text
Abstract:
This thesis presents and extrapolates on the research works concerning the stability analysis of fuzzy-model-based (FMB) control systems. In this study, two types of FMB control systems are considered: 1) Takagi-Sugeno (T-S) FMB control systems; and 2) polynomial fuzzy-model-based (PFMB) control systems. The control scheme illustrated in this thesis has great design flexibility because it allows the number and/or shape of the membership functions of fuzzy controllers to be designed independently from the fuzzy models. However, in the wake of the imperfectly matched membership functions, the stability conditions of FMB control systems are typically very conservative when derived with traditional stability analysis methods. In this thesis, based on Lyapunov stability theory, membership-function-dependent (MFD) stability analysis methods are proposed to relax the stability conditions. Firstly, piecewise membership functions (PMFs) are utilised as approximate membership functions to carry out a relaxed stability analysis of T-S FMB control systems. Subsequently, PMF-based stability analysis is improved with the consideration of membership function boundary information. Based on the PMF method, we propose a lower-upper-PMF-based stability analysis method. Relaxed stability conditions are obtained in the form of linear matrix inequalities (LMIs) in consideration of the approximation accuracy of the membership functions. For the purpose of stability analysis of PFMB control systems, the other MFD method proposed is to extract the regional membership function information via operating domain partition. Two types of membership information are considered in each sub-domain: 1) the numerical relationship between all membership function overlap terms; and 2) the bounds of every single membership function overlap term. Thereafter, relaxed sum of squares (SOS)-based stability conditions are derived. In conjunction with these proposed MFD methods, sub-domain fuzzy controllers are utilised to enhance the capability of feedback compensation. In this thesis, all the LMI/SOS-based stability conditions obtained can be solved numerically using existing computational tools. Furthermore, simulation examples are provided to illustrate the validity and applicability of the proposed methods.
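
For a feel of how such LMI conditions are checked numerically, the sketch below tests the basic (non-relaxed) quadratic-stability condition for a two-rule T-S model, namely finding P > 0 with Ai'P + P Ai < 0 for every rule, using cvxpy; the vertex matrices are made up for illustration, and the thesis's relaxed MFD conditions would add many more membership-dependent terms.

```python
import cvxpy as cp
import numpy as np

# made-up vertex (rule) matrices of a two-rule T-S fuzzy model
A = [np.array([[-2.0, 1.0], [0.0, -1.0]]),
     np.array([[-1.5, 0.5], [0.3, -2.0]])]
n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
cons = [P >> eps * np.eye(n)]                 # P positive definite
for Ai in A:
    M = Ai.T @ P + P @ Ai                     # common quadratic Lyapunov LMI
    cons.append((M + M.T) / 2 << -eps * np.eye(n))  # symmetrized for the solver
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
print("feasible" if P.value is not None else "infeasible")
```
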
22

Malsiner-Walli, Gertraud, Sylvia Frühwirth-Schnatter and Bettina Grün. « Model-based clustering based on sparse finite Gaussian mixtures ». Springer, 2016. http://dx.doi.org/10.1007/s11222-014-9500-2.

Full text
Abstract:
In the framework of Bayesian model-based clustering based on a finite mixture of Gaussian distributions, we present a joint approach to estimate the number of mixture components and identify cluster-relevant variables simultaneously as well as to obtain an identified model. Our approach consists in specifying sparse hierarchical priors on the mixture weights and component means. In a deliberately overfitting mixture model the sparse prior on the weights empties superfluous components during MCMC. A straightforward estimator for the true number of components is given by the most frequent number of non-empty components visited during MCMC sampling. Specifying a shrinkage prior, namely the normal gamma prior, on the component means leads to improved parameter estimates as well as identification of cluster-relevant variables. After estimating the mixture model using MCMC methods based on data augmentation and Gibbs sampling, an identified model is obtained by relabeling the MCMC output in the point process representation of the draws. This is performed using K-centroids cluster analysis based on the Mahalanobis distance. We evaluate our proposed strategy in a simulation setup with artificial data and by applying it to benchmark data sets. (authors' abstract)
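
The core idea, deliberately overfitting the number of components and letting a sparse prior on the weights empty the superfluous ones, can be tried with off-the-shelf tools. The sketch below uses scikit-learn's variational Bayesian Gaussian mixture as a stand-in for the authors' MCMC sampler (no normal gamma shrinkage on the means here), reading the component count from the non-negligible weights; data and prior values are illustrative.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# three well-separated Gaussian clusters in 2-D
X = np.vstack([rng.normal(m, 0.3, (200, 2)) for m in ([0, 0], [3, 3], [0, 4])])

bgm = BayesianGaussianMixture(
    n_components=10,                                  # deliberately overfitting
    weight_concentration_prior_type="dirichlet_distribution",
    weight_concentration_prior=0.01,                  # sparse prior on weights
    max_iter=500, random_state=0,
).fit(X)

# superfluous components receive near-zero weight; count the active ones
print("estimated number of components:", int(np.sum(bgm.weights_ > 1e-2)))
```
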
23

McDonald, Adam. « An integrated UML based model for design analysis ». Pullman, Wash. : Washington State University, 2010. http://www.dissertations.wsu.edu/Thesis/Spring2010/a_mcdonald_041810.pdf.

Full text
Abstract:
Thesis (M.S. in computer science)--Washington State University, May 2010.
Title from PDF title page (viewed on June 23, 2010). "School of Engineering and Computer Science." Includes bibliographical references (p. 79-80).
24

Wang, Yuehe. « Model based dynamic analysis of human sleep electroencephalogram ». Thesis, University of Leicester, 1997. http://hdl.handle.net/2381/30210.

Full text
Abstract:
For sleep classification, automatic electroencephalogram (EEG) interpretation techniques are of interest because they are labour-saving, in contrast to manual (visual) methods. More importantly, some automatic methods, which offer a less subjective approach, can provide additional information that is not possible to obtain by manual analysis. An extensive literature review has been undertaken to investigate the background of automatic EEG analysis techniques. Frequency-domain and time-domain methods are considered and their limitations are summarised. The weaknesses in the R & K rules for visual classification, from which most automatic systems borrow heavily, are discussed. A new technique - model-based dynamic analysis - was developed in an attempt to classify the sleep EEG automatically. The technique comprises two phases: the modelling of EEG signals and the analysis of the model's coefficients using dynamic systems theory. Three techniques of modelling EEG signals are compared: the implementation of the non-linear prediction technique of Schaffer and Tidd (1990) based on chaos theory; Kalman filters; and a recursive version of a radial basis function for modelling and forecasting the EEG signals during sleep. The Kalman filter approach produced good results and was used in an attempt to classify the EEG automatically. For classifying the model's (Kalman filter's) coefficients, a new technique was developed using a state-space approach. A 'state variable' was defined based on the state changes of the EEG and was shown to be correlated with the depth of sleep. Furthermore it is shown that this technique may be useful for automatic sleep staging. Possible applications include automatic staging of sleep, detection of micro-arousals, anaesthesia monitoring and monitoring the alertness of workers in sensitive or potentially dangerous environments.
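
A minimal version of the modelling stage, a Kalman filter tracking time-varying autoregressive coefficients of a signal, can be sketched as follows; the AR order and noise variances are illustrative, not the thesis's values.

```python
import numpy as np

def kalman_ar(y, p=4, q=1e-4, r=1e-2):
    """Track time-varying AR(p) coefficients theta_t of a signal y with a
    random-walk Kalman filter:
        theta_t = theta_{t-1} + w_t,   y_t = phi_t . theta_t + v_t,
    where phi_t holds the previous p samples; q, r are process/measurement
    noise variances (illustrative values)."""
    n, theta, P = len(y), np.zeros(p), np.eye(p)
    coeffs = np.zeros((n, p))
    for t in range(p, n):
        phi = y[t - p:t][::-1]            # regressor: p most recent samples
        P = P + q * np.eye(p)             # predict (random-walk state)
        S = phi @ P @ phi + r             # innovation variance
        K = P @ phi / S                   # Kalman gain
        theta = theta + K * (y[t] - phi @ theta)
        P = P - np.outer(K, phi) @ P      # covariance update
        coeffs[t] = theta
    return coeffs

# usage on an EEG-like toy signal
t = np.arange(2000)
y = np.sin(2 * np.pi * 0.05 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(kalman_ar(y)[-1])                   # final AR coefficient estimates
```
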
25

Shah, Sohrab P. « Model based approaches to array CGH data analysis ». Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2808.

Full text
Abstract:
DNA copy number alterations (CNAs) are genetic changes that can produce adverse effects in numerous human diseases, including cancer. CNAs are segments of DNA that have been deleted or amplified and can range in size from one kilobase to whole chromosome arms. Development of array comparative genomic hybridization (aCGH) technology enables CNAs to be measured at sub-megabase resolution using tens of thousands of probes. However, aCGH data are noisy and result in continuous-valued measurements of the discrete CNAs. Consequently, the data must be processed through algorithmic and statistical techniques in order to derive meaningful biological insights. We introduce model-based approaches to analysis of aCGH data and develop state-of-the-art solutions to three distinct analytical problems. In the simplest scenario, the task is to infer CNAs from a single aCGH experiment. We apply a hidden Markov model (HMM) to accurately identify CNAs from aCGH data. We show that borrowing statistical strength across chromosomes and explicitly modeling outliers in the data improves on baseline models. In the second scenario, we wish to identify recurrent CNAs in a set of aCGH data derived from a patient cohort. These are locations in the genome altered in many patients, providing evidence for CNAs that may be playing important molecular roles in the disease. We develop a novel hierarchical HMM profiling method that explicitly models both statistical and biological noise in the data and is capable of producing a representative profile for a set of aCGH experiments. We demonstrate that our method is more accurate than simpler baselines on synthetic data, and show our model produces output that is more interpretable than other methods. Finally, we develop a model-based clustering framework to stratify a patient cohort, expected to be composed of a fixed set of molecular subtypes. We introduce a model that jointly infers CNAs, assigns patients to subgroups and infers the profiles that represent each subgroup. We show our model to be more accurate on synthetic data, and show in two patient cohorts how the model discovers putative novel subtypes and clinically relevant subgroups.
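
The single-experiment task, decoding discrete copy-number states from noisy continuous log-ratios, reduces to Viterbi decoding in an HMM. A minimal three-state sketch with illustrative Gaussian emission parameters (not the thesis's learned ones, and without its outlier modelling) looks like this:

```python
import numpy as np
from scipy.stats import norm

def viterbi_cna(logratio, means=(-0.5, 0.0, 0.5), sd=0.2, stay=0.999):
    """Most likely loss/neutral/gain state sequence for aCGH log-ratios under
    a 3-state Gaussian HMM (illustrative parameters, not a trained model)."""
    K, T = len(means), len(logratio)
    logA = np.full((K, K), np.log((1 - stay) / (K - 1)))  # transition matrix
    np.fill_diagonal(logA, np.log(stay))
    logB = np.array([norm.logpdf(logratio, m, sd) for m in means])  # K x T
    delta = np.log(np.full(K, 1.0 / K)) + logB[:, 0]
    psi = np.zeros((K, T), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + logA          # best predecessor per state
        psi[:, t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) + logB[:, t]
    states = np.empty(T, dtype=int)
    states[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):              # backtrack
        states[t - 1] = psi[states[t], t]
    return states                              # 0=loss, 1=neutral, 2=gain

x = np.r_[np.random.normal(0, 0.2, 100), np.random.normal(0.5, 0.2, 50)]
print(viterbi_cna(x))                          # a neutral run, then a gain
```
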
26

Loer, Karsten. « Model-based automated analysis for dependable interactive systems ». Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399265.

Full text
27

Arbab-Zavar, Banafshe. « On guided model-based analysis for ear biometrics ». Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/72062/.

Full text
Abstract:
Ears are a new biometric with a major advantage in that they appear to maintain their structure with increasing age. Current approaches have exploited 2D and 3D images of the ear in human identification. Contending that the ear is mainly a planar shape, we use 2D images, which are consistent with deployment in surveillance and other planar-image scenarios. So far, ear biometric approaches have mostly used general properties and the overall appearance of ear images in recognition, while the structure of the ear has not been discussed. In this thesis, we propose a new model-based approach to ear biometrics. Our model is a part-wise description of the ear structure. Using embryological evidence of ear development, we shall show that the ear is indeed a composite structure of individual components. Our model parts are derived by a stochastic clustering method on a set of scale-invariant features from a training set. We shall review different accounts of ear formation and consider some research into congenital ear anomalies which discusses apportioning various components to the ear's complex structure. We demonstrate that our model description is in accordance with these accounts. We extend our model description by proposing a new wavelet-based analysis with the specific aim of capturing information in the ear's outer structures. We shall show that this section of the ear is not sufficiently explored by the model, even though it exhibits large variations in shape and is therefore, intuitively, significant to the recognition process. In this new analysis, log-Gabor filters exploit the frequency content of the ear's outer structures. In recognition, ears are automatically enrolled via our new enrolment algorithm, which is based on the elliptical shape of ears in head profile images. These samples are then recognized via the parts selected by the model. The incorporation of the wavelet-based analysis of the outer ear structures forms an extended, hybrid method. The performance is evaluated on test sets selected from the XM2VTS database. The results, both in modelling and recognition, show that our new model-based approach is indeed promising; in particular, the recognition performance improved notably with the incorporation of our new wavelet-based analysis. The main obstacle hindering the deployment of ear biometrics is the potential occlusion by hair. A model-based approach has a further attraction, since it has an advantage in handling noise and occlusion. Also, by localization, a wavelet can offer performance advantages when handling occluded data. A robust matching technique is also added to restrict the influence of corrupted wavelet projections. Furthermore, our automatic enrolment is tolerant of occlusion in ear samples. We shall present a thorough evaluation of performance in occlusion, using PCA and a robust PCA for comparison purposes. Our hybrid method obtains promising results recognizing occluded ears. Our results have confirmed the validity of this approach, which guides a model-based analysis via anatomical knowledge, both in modelling and in recognition.
28

Crawford, Gordon Finlay. « Vision-based analysis, interpretation and segmentation of hand shape using six key marker points ». Thesis, University of Ulster, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243732.

Full text
29

Robinson, Elinirina Iréna. « Filtering and uncertainty propagation methods for model-based prognosis ». Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1189/document.

Full text
Abstract:
In this manuscript, contributions to the development of methods for on-line model-based prognosis are presented. Model-based prognosis aims at predicting the time before the monitored system reaches a failure state, using a physics-based model of the degradation. This time before failure is called the remaining useful life (RUL) of the system. Model-based prognosis is divided into two main steps: (i) current degradation state estimation and (ii) future degradation state prediction to predict the RUL. The first step, which consists in estimating the current degradation state using the measurements, is performed with filtering techniques. The second step is realized with uncertainty propagation methods. The main challenge in prognosis is to take the different uncertainty sources into account in order to obtain a measure of the RUL uncertainty. These are mainly model uncertainty, measurement uncertainty and future uncertainty (loading, operating conditions, etc.). Thus, probabilistic and set-membership methods for model-based prognosis are investigated in this thesis to tackle these uncertainties. The ability of an extended Kalman filter and a particle filter to perform RUL prognosis in the presence of model and measurement uncertainty is first studied using a nonlinear fatigue crack growth model based on the Paris' law and synthetic data. Then, the particle filter combined with a detection algorithm (cumulative sum algorithm) is applied to a more realistic case study: fatigue crack growth prognosis in composite materials under variable amplitude loading. This time, model uncertainty, measurement uncertainty and future loading uncertainty are taken into account, and real data are used. Then, two set-membership model-based prognosis methods, based on constraint satisfaction and an unknown-input interval observer for linear discrete-time systems, are presented. Finally, an extension of a reliability analysis method to model-based prognosis, namely the inverse first-order reliability method (Inverse FORM), is presented. In each case study, performance evaluation metrics (accuracy, precision and timeliness) are calculated in order to compare the proposed methods.
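
A stripped-down version of the probabilistic pipeline, a sampling-importance-resampling particle filter on a Paris'-law crack-growth state followed by Monte Carlo propagation to a critical crack length, is sketched below; all material constants, noise levels and the measurement schedule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
C, m, S = 1e-10, 3.0, 100.0       # Paris constants and stress range (illustrative)

def grow(a, cycles):
    """Euler step of Paris' law da/dN = C*(S*sqrt(pi*a))**m over `cycles` cycles."""
    return a + cycles * C * (S * np.sqrt(np.pi * a)) ** m

def rul_percentiles(measurements, n_part=500, a_crit=0.05,
                    sig_meas=5e-5, sig_proc=1e-6, block=100):
    a = rng.uniform(0.9e-3, 1.1e-3, n_part)            # initial crack lengths [m]
    for y in measurements:                              # --- SIR filtering ---
        a = grow(a, block) + rng.normal(0, sig_proc, n_part)   # propagate
        w = np.exp(-0.5 * ((y - a) / sig_meas) ** 2) + 1e-300  # likelihood
        w /= w.sum()
        a = a[rng.choice(n_part, n_part, p=w)]          # resample
    rul = np.zeros(n_part)                              # --- RUL prediction ---
    alive = a < a_crit
    while alive.any():
        a[alive] = grow(a[alive], block)
        rul[alive] += block
        alive = a < a_crit
    return np.percentile(rul, [5, 50, 95])              # RUL uncertainty interval

# synthetic measurement sequence: one noisy crack-length reading per block
true_a, meas = 1e-3, []
for _ in range(50):
    true_a = grow(true_a, 100)
    meas.append(true_a + rng.normal(0, 5e-5))
print(rul_percentiles(meas))   # [5th, 50th, 95th] percentile RUL in cycles
```
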
30

Liu, Chuang. « Relaxed stability analysis for fuzzy-model-based observer-control systems ». Thesis, King's College London (University of London), 2016. https://kclpure.kcl.ac.uk/portal/en/theses/relaxed-stability-analysis-for-fuzzymodelbased-observercontrol-systems(082673fa-9a83-4cda-8622-9358ed8d7118).html.

Full text
Abstract:
Fuzzy-model-based (FMB) control scheme is an efficient approach to conduct stability analysis for nonlinear systems. Both Takagi-Sugeno (T-S) FMB and polynomial fuzzy-model-based (PFMB) control systems have been widely investigated. In this thesis, the stability analysis of FMB control systems is conducted via Lyapunov stability theory. The main contribution of the thesis is improving the applicability of T-S FMB and PFMB control strategies by relaxing stability conditions and designing fuzzy observer-controller, which is presented in the following three parts: 1) The stability conditions of FMB control systems are relaxed such that the FMB control strategy can be applied to a wider range of nonlinear systems. For T-S FMB control systems, higher order derivatives of Lyapunov function (HODLF) are employed, which generalizes the commonly used first order derivative. For PFMB control systems, Taylor series membership functions (TSMF) are brought into stability conditions such that the relation between membership grades and system states is expressed. 2) Two types of T-S fuzzy observer-controller are designed such that the T-S FMB control strategy can be applied to systems with unmeasurable states. For the first type, the T-S fuzzy observer with unmeasurable premise variables is designed to estimate the system states and then the estimated states are employed for state-feedback control of nonlinear systems. Convex stability conditions are obtained through matrix decoupling technique. For the second type, the T-S fuzzy functional observer is designed to directly estimate the control input instead of the system states, which can reduce the order of the observer. A new form of fuzzy functional observer is proposed to facilitate the stability analysis such that the observer gains can be numerically obtained and the stability can be guaranteed simultaneously. 3) The polynomial fuzzy observer-controller with unmeasurable premise variables is designed for systems with unmeasurable states. Although the consideration of the polynomial fuzzy model and unmeasurable premise variables enhances the applicability of the FMB control strategy, it leads to non-convex stability conditions. Therefore, two methods are applied to derive convex stability conditions: refined completing square approach and matrix decoupling technique. Additionally, the designed polynomial fuzzy observer-controller is extended for systems where only sampled-output measurements are available. Furthermore, the membership functions of the designed polynomial observer-controller are optimized by the improved gradient descent method. Simulation examples are provided to demonstrate and verify the theoretical analysis.
31

Ponge, Julien Nicolas, Computer Science & Engineering, Faculty of Engineering, UNSW. « Model based analysis of time-aware web services interactions ». Publisher: University of New South Wales, Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/43525.

Full text
Abstract:
Web services are increasingly gaining acceptance as a framework for facilitating application-to-application interactions within and across enterprises. It is commonly accepted that a service description should include not only the interface, but also the business protocol supported by the service. The present work focuses on the formalization of the important category of protocols that include time-related constraints (called timed protocols), and the impact of time on compatibility and replaceability analysis. We formalized the following timing constraints: CInvoke constraints define time windows of availability while MInvoke constraints define expirations deadlines. We extended techniques for compatibility and replaceability analysis between timed protocols by using a semantic-preserving mapping between timed protocols and timed automata, leading to the novel class of protocol timed automata (PTA). Specifically, PTA exhibit silent transitions that cannot be removed in general, yet they are closed under complementation, making every type of compatibility or replaceability analysis decidable. Finally, we implemented our approach in the context of a larger project called ServiceMosaic, a model-driven framework for web service life-cycle management.
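
The flavour of the timing constraints being formalized can be shown with a small check. In the sketch below, a CInvoke-style constraint is modelled as an availability window (seconds from the start of the conversation) per operation, and a timed trace is validated against it; the protocol, operations and semantics are simplified illustrations, not the work's timed-automata construction.

```python
# Simplified illustration of a CInvoke-style availability-window check:
# each operation may only be invoked inside a time window measured from the
# start of the conversation. Protocol and trace are hypothetical.
def violations(trace, windows):
    """Return the (operation, time) pairs that fall outside their window."""
    return [(op, t) for op, t in trace
            if not windows[op][0] <= t <= windows[op][1]]

windows = {"login": (0, 60), "pay": (0, 300)}   # availability windows [s]
trace = [("login", 5), ("pay", 320)]            # timed execution trace
print(violations(trace, windows))               # -> [('pay', 320)]
```
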
32

Farooq, Usman. « Model based test suite minimization using metaheuristics ». Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2011. https://ro.ecu.edu.au/theses/409.

Full text
Abstract:
Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practised for almost four decades. Model-based testing (MBT) is a relatively new approach to software testing in which software models, as opposed to other artifacts (i.e. source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of that test suite. The Unified Modeling Language (UML) is the de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. AD is a behavioral UML model and, since the major revision in UML 2.x, it has new Petri-net-like semantics. It has a wide application scope, including embedded, workflow and web-service systems; for this reason the thesis concentrates on AD models. The informal semantics of UML in general, and of AD in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is transforming a UML model into an executable formal model. In the thesis, a three-step transformation methodology is proposed for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in testing processes. The flow-oriented semantics of AD suits the modeling of both sequential and concurrent systems. The thesis presents a novel technique to generate test cases from AD using a stochastic algorithm. In order to determine whether the generated test suite is adequate, two test suite adequacy analysis techniques, based on structural coverage and mutation, are proposed. In terms of structural coverage, two separate coverage criteria are proposed to evaluate the adequacy of the test suite from both the sequential and the concurrent perspective. Mutation analysis is a fault-based technique to determine whether the test suite is adequate for detecting particular types of faults; four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is improving test suite efficiency without compromising effectiveness. One way of achieving this is to identify and remove redundant test cases. It is shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary computation based test suite minimization technique is developed to address this problem, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is extended to include multi-objective minimization.
As redundancy is contextual, different criteria and their combinations can significantly change the solution test suite. Therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case-study models. Empirical results show that the techniques developed within the framework are effective for model-based test suite generation and optimization.
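To give a concrete flavour of the kind of combinatorial problem the thesis tackles, the sketch below shows a minimal evolutionary test suite minimization over a toy coverage relation. This is not Farooq's algorithm; the coverage data, fitness function and GA operators are all illustrative assumptions.

```python
import random

# A minimal sketch of evolutionary test suite minimization, NOT the thesis's
# algorithm: the coverage relation, fitness and operators are illustrative.
coverage = {  # hypothetical: test case id -> model elements it covers
    0: {"t1", "t2"}, 1: {"t2", "t3"}, 2: {"t3"},
    3: {"t1", "t3", "t4"}, 4: {"t4"}, 5: {"t2", "t4"},
}
required = set().union(*coverage.values())
N = len(coverage)

def covered(mask):
    sel = [coverage[i] for i in range(N) if mask[i]]
    return set().union(*sel) if sel else set()

def fitness(mask):
    if covered(mask) != required:
        return -1            # losing coverage makes a suite infeasible
    return N - sum(mask)     # fewer selected tests -> fitter suite

def evolve(pop_size=30, generations=100, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = [max(random.sample(pop, 3), key=fitness)  # tournament
                   for _ in range(pop_size)]
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, N)                    # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                nxt.append([g ^ 1 if random.random() < p_mut else g
                            for g in child])                # bit-flip mutation
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print("selected tests:", [i for i in range(N) if best[i]])
```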
Styles APA, Harvard, Vancouver, ISO, etc.
33

ROTA GRAZIOSI, ANDREA. « EVALUATION AND CHARACTERIZATION OF DIETARY STRATEGIES ON ENVIRONMENTAL SUSTAINABILITY OF DAIRY COW MILK PRODUCTION ». Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/924352.

Texte intégral
Résumé :
The livestock sector is facing different challenges, and the demand for higher sustainability seems to be one of the most urgent. This PhD project addressed, in particular, the environmental impacts related to ruminant nutrition, focusing on dairy cows, since nutrition is bound tightly to two of the most important sources of impact: enteric CH4 emission and land use change (LUC). Enteric CH4 emission from ruminants represents 29-38% of the total (anthropic + natural) emission of this powerful greenhouse gas (global warming potential 21 times that of CO2). The production of CH4 is a physiological process used by ruminants to discharge the [H] resulting from rumen fermentation. Different strategies can be implemented to mitigate this impact, and they can be roughly grouped into three main categories: animal and feed management, diet formulation, and rumen manipulation. The second issue investigated in the project is the high reliance of European livestock on soybean meal as a protein source for diet formulation. A total of 30 million tonnes of this feedstuff was imported into Europe in 2020. The main countries of origin are in South America (65% of total imports), where 20% of soybean meal production has been linked with deforestation (and consequently LUC) in recent decades. Clearing these areas means loss of carbon sink and emission of CO2 into the atmosphere. Other feedstuffs, such as grain legumes, oilseed meals alternative to soybean, and high-quality forages, could be considered to provide protein feed at a lower environmental cost. In this context, the PhD project was developed as follows:
- To address the problem of CH4 emission, plant essential oils were evaluated as modulators of rumen fermentation (Experiment 1). Furthermore, the effect on CH4 emission of different forages in the diet of dairy cows was investigated (Experiment 2). For validation of mitigation strategies and inventory computation of emissions at a national scale, country-specific equations to quantify CH4 emission were evaluated (Experiment 3).
- To address the problem of the environmental impact of soybean meal, soybean silage and responsible soybean meal (not connected with land use change) were evaluated as protein source alternatives to soybean meal in the diet of lactating cows (Experiments 4 and 5).

Enteric methane direct emission
In the first experiment, Achillea moschata essential oil and its main pure components, namely bornyl acetate, camphor, and eucalyptol, were evaluated in vitro. The trial comprised a short-term in vitro incubation (48 h), with 200 mg of compound per L of inoculum, and a long-term one in a continuous fermenter (9 d), with 100 mg/L of each compound. In the first incubation, no differences due to the treatments were found for in vitro gas production (on average, 30.4 mL/200 mg DM, P = 0.772 at 24 h and 45.2 mL/200 mg DM, P = 0.545 at 48 h). Camphor and eucalyptol reduced CH4 production when expressed as % of gas production at 48 h (P < 0.05): -7.4% and -7% compared to the control. In the second incubation, CH4 was reduced by eucalyptol (-18%, P < 0.05). Regarding volatile fatty acids, the main effects were a decrease in total production for camphor (-19.5%, P < 0.05) and an increase in acetate production at 9 d with bornyl acetate and camphor (+13% and +7.6%, respectively, P < 0.05) compared to the control. Total protozoa count was increased compared to the control (on average: +37%, P = 0.006, at 48 h and +48%, P < 0.001, at 9 d) with all the pure compounds tested.
In the short-term incubation, all the treatments reduced the abundances of Bacteroidetes (30.3%, on average, vs. 37.1% for the control, P = 0.014) and Firmicutes (26.3%, on average, vs. 30.7% for the control, P = 0.031) but increased that of Proteobacteria (36.0%, on average, vs. 22.5% for the control, P = 0.014). In the long-term incubation, eucalyptol increased the abundance of the genus Ruminococcus (2.60% vs. 1.18% for the control, P = 0.011). An adaptation at long incubation times was observed: with eucalyptol at 9 d, VFA production was reduced (26.8 vs. 33.3 mmol for the control, P < 0.05), contrary to the 48 h incubation (P = 0.189). Furthermore, the treatments affected the relative abundances of protozoa genera at 24 h (increased for Entodinium with all the treatments, P < 0.001, and reduced for Diplodinium, P = 0.001); at 9 d, instead, the relative abundances of protozoa genera were not affected by the treatments. The additives tested showed potential for reducing CH4 production without compromising overall fermentation efficiency. A meta-analysis (Experiment 2) investigated the effects of the main forage included in the diet on lactation performance and enteric CH4. In the dataset, composed of in vivo experiments, four main forage bases were evaluated: corn silage, alfalfa silage, grass silage, and green forage. Cows fed corn and alfalfa silages had the highest DMI (21.9 and 22.0 kg/d, P < 0.05) and milk yield (29.7 and 30.4 kg/d, P < 0.05). In contrast, NDF digestibility was higher for grass silage and green forage (67.6% and 73.1%, P < 0.05) than for corn and alfalfa silages (51.8% on average). CH4 production was lower (P < 0.05) for green forage (332 g/d) than for the silage diets (438 g/d on average), while corn silage and alfalfa silage gave the lowest CH4 per kg of milk yield (14.2 g/kg and 14.9 g/kg, P < 0.05). Considering CH4 per kg of DMI, the only difference was between corn silage and grass silage (19.7 g/kg vs. 21.3 g/kg, respectively, P < 0.05). Finally, prediction models for CH4 production were obtained through step-wise multiple regression; the fitted equations are set out below. The models predicting CH4 in g/d and in g/kg of milk yield showed high precision (R2 = 95.4% and 88.6%, respectively), but the best AIC value (320) was found for the model predicting CH4 in g/kg of DMI.
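For readability, the three fitted models can be set out as display equations, with coefficients ± standard errors exactly as reported above (DMI = dry matter intake, CP = crude protein, OMd = organic matter digestibility, Forage_tot / Forage_main = total and main forage inclusion on diet DM):

```latex
\begin{align*}
\mathrm{CH_4}\,[\mathrm{g/d}] ={}& -65.3\,(\pm 63.7) + 11.6\,(\pm 1.67)\,\mathrm{DMI}
  - 4.47\,(\pm 1.09)\,\mathrm{CP} - 0.86\,(\pm 0.33)\,\mathrm{Starch} \\
  & + 2.62\,(\pm 0.78)\,\mathrm{OMd} + 30.8\,(\pm 9.45)\,\mathrm{Milk\ fat} \\
\mathrm{CH_4/milk}\,[\mathrm{g/kg}] ={}& -55.5\,(\pm 20.1) - 0.37\,(\pm 0.13)\,\mathrm{DMI}
  + 0.18\,(\pm 0.05)\,\mathrm{Forage_{tot}} - 0.10\,(\pm 0.04)\,\mathrm{Forage_{main}} \\
  & + 0.48\,(\pm 0.21)\,\mathrm{OM} + 0.14\,(\pm 0.06)\,\mathrm{NDF}
  + 1.98\,(\pm 0.86)\,\mathrm{Milk\ fat} + 4.34\,(\pm 1.66)\,\mathrm{Milk\ protein} \\
\mathrm{CH_4/DMI}\,[\mathrm{g/kg}] ={}& \phantom{-}6.16\,(\pm 3.89) - 0.36\,(\pm 0.03)\,\mathrm{CP}
  + 0.12\,(\pm 0.05)\,\mathrm{OMd} + 3.77\,(\pm 0.56)\,\mathrm{Milk\ fat}
  - 3.94\,(\pm 1.07)\,\mathrm{Milk\ fat\ yield}
\end{align*}
```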
A dataset (66 observations in total) from three in vivo experiments conducted in Italy on lactating cows in respiration chambers was built to evaluate IPCC Tier 2 equations for estimating enteric CH4 production (Experiment 3). In this dataset, the CH4 conversion factor (conversion of gross energy intake into enteric CH4 energy) was lowest for a diet based on grass and alfalfa silages (5.05%, P < 0.05), while the other values ranged between 5.41% and 5.92%. On average, energy digestibility was 69.0% across the dataset, but the diet based on hays had a lower value (64.8%, P < 0.05). The IPCC (2019) Tier 2 (conversion factor = 5.7% or 6.1% for diets with NDF concentration < 35% or > 35%, respectively; digestible energy = 70%) gave, on average, a CH4 production value not statistically different from those measured in vivo (382 vs. 388 g/d in vivo, P > 0.05). The IPCC (2006) Tier 2 (conversion factor = 6.5%, digestible energy = 70%) over-predicted CH4 emission (428 vs. 388 g/d in vivo, P < 0.05; μ = -1.05). The most precise models were the two considering digestible energy equal to 70% with the average conversion factors of IPCC (2006) and IPCC (2019) (R = 0.630); the most accurate model was the one considering a conversion factor equal to 5.7% and energy digestibility measured in vivo (Cb = 0.995). Overall, the best performance among the prediction models tested was for the one based on a conversion factor equal to 5.7% and energy digestibility of 70% (CCC = 0.579 and RMSPE = 9.10%).

Use of alternative protein source to conventional soybean meal
The dietary inclusion of soybean silage in partial replacement of soybean meal was evaluated in vivo in lactating cow diets (Experiment 4). Cows were fed two diets, one with 12.4% of DM from soybean silage substituting 35% of the soybean meal of the control diet. The treatment did not affect DMI or milk yield (on average, 23.7 kg/d, P = 0.659, and 33.0 kg/d, P = 0.377, respectively). Cows fed the soybean silage diet had a lower milk protein concentration (3.43% vs. 3.55% for the control, P < 0.001) and higher milk urea (30.5 vs. 28.7 mg/dL, P = 0.002). The soybean silage diet also had lower nutrient digestibility than the control: DMD 65.2% vs. 68.6%, OMD 66.4% vs. 69.8%, NDFD 31.5% vs. 38.8% (respectively for the soybean silage and control diets; P < 0.001 for all). Regarding N balance, cows fed soybean silage excreted more nitrogen in urine (32.3% of N intake vs. 28.9%, P = 0.005) and less in milk (31.3% vs. 32.7%, P = 0.003) than the control. When used as a protein source alternative to soybean meal, soybean silage thus sustained comparable milk production, but NDF digestibility and N use efficiency should be improved. The environmental impact of the use of soybean silage, in comparison to a control diet with soybean meal as the main protein source, was evaluated through an LCA approach (Experiment 5). In addition, two scenarios were included in the study, considering the same two diets but with soybean meal not connected to LUC (responsible soybean meal). Regarding the single forages, soybean silage had a higher global warming potential than alfalfa hay (477 vs. 201 kg CO2eq/ton DM), also when expressed per tonne of protein production (2439 and 1034 kg CO2eq/ton CP, respectively), probably due to the lower contribution of the cultivation phase for alfalfa, a multi-year crop. The scenario with soybean silage reduced the global warming potential per kg of fat- and protein-corrected milk (1.17 kg CO2eq) compared to the control (1.38 kg CO2eq). Responsible soybean meal likewise reduced it (1.13 kg CO2eq/kg vs. 1.38 for the scenario with the control diet). Overall, the best result per kg of fat- and protein-corrected milk was obtained when responsible soybean meal and soybean silage were used in combination (1.01 kg CO2eq).
Moreover, when global warming potential was evaluated per daily TMR fed, the impact was lowest for the scenario with responsible soybean meal (13.4 kg CO2eq/d), due to the lower contribution of soybean meal to the total impact (11% vs. 43% for the control). Therefore, the two alternative protein sources tested should be preferred to conventional soybean meal when environmental impact is considered.
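Stepping back to Experiment 3: the Tier 2 estimate being benchmarked there is simple arithmetic once gross energy intake and the conversion factor Ym are fixed. A minimal sketch of that arithmetic follows; the intake value is an assumption, not a figure from the thesis, while 55.65 MJ/kg is the energy content of methane used by IPCC.

```python
# A sketch of the IPCC Tier 2 arithmetic compared in Experiment 3 above.
def ch4_g_per_day(gross_energy_mj_per_day: float, ym_percent: float) -> float:
    """Enteric CH4 (g/d) from gross energy intake and conversion factor Ym."""
    return gross_energy_mj_per_day * (ym_percent / 100.0) / 55.65 * 1000.0

gei = 370.0                     # hypothetical gross energy intake, MJ/d
print(ch4_g_per_day(gei, 6.5))  # IPCC (2006) default Ym -> ~432 g/d
print(ch4_g_per_day(gei, 5.7))  # IPCC (2019), NDF < 35% -> ~379 g/d
```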
Styles APA, Harvard, Vancouver, ISO, etc.
34

Wurzbacher, Tobias. « Vocal fold dynamics : quantification and model-based classification / ». Aachen : Shaker Verlag, 2008. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016315367&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
35

Lunsford, Ian M. « SUBSYSTEM FAILURE ANALYSIS WITHIN THE HORIZON SIMULATION FRAMEWORK ». DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1560.

Texte intégral
Résumé :
System design is an inherently expensive and time-consuming process, and engineers are constantly tasked to investigate new solutions for various programs. Model-based systems engineering (MBSE) is an emerging and successful method used to reduce the time spent during the design process: by utilizing simulations, MBSE can verify high-level system requirements quickly and at low cost early in the design process. The Horizon Simulation Framework (HSF) provides the capability of simulating a system and verifying its performance. This thesis outlines an improvement to the Horizon Simulation Framework that provides information to the user regarding schedule failures due to subsystem failures and constraint violations. Using the C# language, constraint violation rates and subsystem failure rates are organized by magnitude and written to .csv files. In addition, proper subsystem failure and constraint violation checking orders are stored for HSF to use as new evaluation sequences. The functionalities of the systemEval framework were verified with five test cases. The output information can be used to improve the system under study and possibly reduce the total run-time of the Horizon Simulation Framework.
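The reporting step described above is easy to picture in miniature. The sketch below is hypothetical (HSF itself is written in C#, and these subsystem names and rates are invented); it only illustrates ranking failure rates by magnitude and writing them to a .csv file.

```python
import csv

# invented subsystem failure rates for illustration only
failure_rates = {"ADCS": 0.12, "Power": 0.31, "Comms": 0.05, "Payload": 0.22}

# rank by magnitude: the most frequently failing subsystems come first,
# which also suggests a cheaper evaluation order for pruning bad schedules
ranked = sorted(failure_rates.items(), key=lambda kv: kv[1], reverse=True)

with open("subsystem_failures.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["subsystem", "failure_rate"])
    writer.writerows(ranked)

print([name for name, _ in ranked])  # e.g. ['Power', 'Payload', 'ADCS', 'Comms']
```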
Styles APA, Harvard, Vancouver, ISO, etc.
36

Li, Yan. « Analysis of complex survey data using robust model-based and model-assisted methods ». College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/4080.

Texte intégral
Résumé :
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Survey Methodology. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Styles APA, Harvard, Vancouver, ISO, etc.
37

Ponge, Julien. « Model based analysis of Time-aware Web service interactions ». PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00730187.

Texte intégral
Résumé :
Web services are gaining importance as a framework facilitating application integration within and across enterprise boundaries. It is accepted that a service description should include not only the service interface but also the business protocol supported by the service. In this work, we formalised the class of protocols that include time-related constraints (called timed protocols) and studied the impact of time on compatibility and replaceability analysis. We formalised the following constraints: C-Invoke constraints define windows of availability, while M-Invoke constraints define expiration deadlines. We extended the techniques for compatibility and replaceability analysis between timed protocols by means of a semantics-preserving mapping from timed protocols to timed automata, which defined the class of protocol timed automata (PTA). PTA feature silent transitions that cannot be removed in general, and yet they are closed under complementation, which makes the various types of compatibility and replaceability analysis decidable. Finally, we implemented our approach as part of the ServiceMosaic project, a platform for managing the web services life cycle.
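To give an operational flavour of what a timed protocol constraint means, here is a minimal, hypothetical sketch of availability-window checking. The state names, messages and window values are invented, and the thesis's actual formalism is timed automata, not this Python structure.

```python
from dataclasses import dataclass

# C-Invoke-style constraints define when a message may be invoked;
# outside the window the transition is simply not available.
@dataclass
class Transition:
    message: str
    target: str
    window: tuple  # (earliest, latest) clock values accepting the message

protocol = {
    "Start": [Transition("searchTrip", "Searching", (0.0, float("inf")))],
    "Searching": [Transition("bookTrip", "Booked", (0.0, 3600.0))],  # expires after 1 h
}

def step(state: str, message: str, t: float) -> str:
    for tr in protocol[state]:
        lo, hi = tr.window
        if tr.message == message and lo <= t <= hi:
            return tr.target
    raise ValueError(f"{message} not allowed in state {state} at t={t}")

print(step("Searching", "bookTrip", 1200.0))  # within the window -> "Booked"
```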
Styles APA, Harvard, Vancouver, ISO, etc.
38

Fernandez, Cuesta Roald. « Motion Analysis : Model Based Head Pose Estimation of Infants ». Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11935.

Texte intégral
Résumé :
This thesis presents a method for performing tracking and estimation of head position and orientation by means of template-based particle filtering. The implementation is designed to withstand high levels of occlusion and noise, and to allow for system dynamics to be accounted for. To accelerate the computation, GPGPU techniques are used to enable the GPU to function as a co-processor, resulting in real-time performance. A method is devised for dynamic creation of the feature points used in the particle filter. Furthermore, the graphics pipeline is used to overlay and visualize the tracking, as well as to play a key role in the dynamic template functionality. Finally, a benchmarking system is proposed and developed for carrying out controlled evaluation of tracking methods in general.
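A minimal sketch of the SIR particle filter loop underlying such a tracker follows, with the template matching and GPU acceleration abstracted into a placeholder likelihood; pose dimensions, noise levels and observations are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 500, 6  # particles; pose = (x, y, z, roll, pitch, yaw)

def likelihood(poses, obs):
    # stand-in for scoring each pose hypothesis against the current frame
    return np.exp(-0.5 * np.sum((poses - obs) ** 2, axis=1))

particles = rng.normal(0.0, 1.0, (n, dim))
weights = np.full(n, 1.0 / n)

for frame in range(10):
    obs = np.zeros(dim)                                  # measured pose cue
    particles += rng.normal(0.0, 0.05, particles.shape)  # predict: dynamics + noise
    weights *= likelihood(particles, obs)                # weight by image evidence
    weights /= weights.sum()
    idx = rng.choice(n, size=n, p=weights)               # resample (SIR)
    particles, weights = particles[idx], np.full(n, 1.0 / n)

print(particles.mean(axis=0))  # posterior-mean pose estimate
```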
Styles APA, Harvard, Vancouver, ISO, etc.
39

Dong, Jing. « Sparse analysis model based dictionary learning and signal reconstruction ». Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/811095/.

Texte intégral
Résumé :
Sparse representation has been studied extensively in the past decade for a variety of applications, such as denoising, source separation and classification. Earlier effort focused on the well-known synthesis model, where a signal is decomposed as a linear combination of a few atoms of a dictionary. The analysis model, a counterpart of the synthesis model, did not receive much attention until recent years. The analysis model takes a different viewpoint on sparse representation: it assumes that the product of an analysis dictionary and a signal is sparse. Compared with the synthesis model, this model tends to be more expressive, as a much richer union of subspaces can be described. This thesis focuses on the analysis model and aims to address its two main challenges: analysis dictionary learning (ADL) and signal reconstruction. In the ADL problem, the dictionary is learned from a set of training samples so that the signals can be represented sparsely under the analysis model, thus offering the potential to fit the signals better than pre-defined dictionaries. In existing ADL algorithms, such as the well-known Analysis K-SVD, the dictionary atoms are updated sequentially. The first part of this thesis presents two novel analysis dictionary learning algorithms that update the atoms simultaneously. Specifically, the Analysis Simultaneous Codeword Optimization (Analysis SimCO) algorithm is proposed by adapting the SimCO algorithm originally proposed for the synthesis model. In Analysis SimCO, the dictionary is updated using optimization on manifolds, under $\ell_2$-norm constraints on the dictionary atoms. This framework allows multiple dictionary atoms to be updated simultaneously in each iteration. However, similar to existing ADL algorithms, the dictionary learned by Analysis SimCO may contain similar atoms. To address this issue, Incoherent Analysis SimCO is proposed, employing a coherence constraint and introducing a decorrelation step to enforce it. The competitive performance of the proposed algorithms is demonstrated in experiments on recovering synthetic dictionaries and removing additive noise from images, as compared with existing ADL methods. The second part of this thesis studies how to reconstruct signals with learned dictionaries under the analysis model. This is demonstrated on a challenging application: multiplicative noise removal (MNR) in images. In existing sparsity-motivated methods, the MNR problem is addressed using pre-defined dictionaries, or dictionaries learned under the synthesis model; the potential of analysis dictionary learning for MNR had not been investigated. In this thesis, analysis dictionary learning is applied to MNR, leading to two new algorithms. In the first, a dictionary learned under the analysis model forms a regularization term that can preserve image details while removing multiplicative noise. In the second, in order to further improve the recovery quality of smooth areas in images, a smoothness regularizer is introduced into the reconstruction formulation. This regularizer can be seen as an enhanced Total Variation (TV) term with an additional parameter controlling the level of smoothness. To address the optimization problem of this model, the Alternating Direction Method of Multipliers (ADMM) is adapted and a relaxation technique is developed to allow variables to be updated flexibly.
Experimental results show the superior performance of the proposed algorithms compared with three sparsity- or TV-based algorithms over a range of noise levels.
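In the standard notation of the sparse representation literature, the contrast between the two models, for a signal x in R^d, can be written as:

```latex
\begin{align*}
\text{synthesis:}\quad & x = D\alpha, \quad D \in \mathbb{R}^{d \times n}, \quad \|\alpha\|_0 = k \ll n
  && \text{(few atoms combine to build } x\text{)} \\
\text{analysis:}\quad & z = \Omega x, \quad \Omega \in \mathbb{R}^{p \times d}, \quad \ell = p - \|\Omega x\|_0
  && \text{(cosparsity } \ell\text{: number of zeros in } \Omega x\text{)}
\end{align*}
```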
Styles APA, Harvard, Vancouver, ISO, etc.
40

Rutaganda, Remmy. « Automated Model-Based Reliability Prediction and Fault Tree Analysis ». Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-67240.

Texte intégral
Résumé :
This work was undertaken as a final-year project in Computer Engineering within the Department of Computer and Information Science at Linköping University. At the department, applications oriented at testing and analysis are developed to provide solution approaches to problems that arise in system product development. One of the applications currently being developed is the 'Systemics Analyst'. Its purpose is to provide system developers with an analysis tool offering insight into system reliability, system-critical components, how to improve the system, and the consequences and risks of a system failure. The purpose of the present thesis was to enhance the Systemics Analyst application by incorporating automated model-based reliability prediction and fault tree analysis modules. This enables reliability prediction and fault tree analysis diagrams to be generated automatically from data files and relieves the system developer from manual creation of the diagrams. The enhanced Systemics Analyst application presents the results in the respective models using the newly incorporated functionality. To accomplish the above tasks, the Systemics Analyst application was integrated with a library that handles automated model-based reliability prediction and fault tree analysis, which is described in this thesis. The reader is guided through the steps performed to accomplish the tasks, with illustrative figures, methods and code examples, in order to provide a closer view of the work performed.
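A minimal sketch of the reliability arithmetic such a library automates is shown below; the component reliabilities and event probabilities are assumed values, and independence of events is assumed throughout.

```python
import math

def r_series(rs):    # series block: every component must work
    return math.prod(rs)

def r_parallel(rs):  # parallel (redundant) block: at least one must work
    return 1 - math.prod(1 - r for r in rs)

def p_or(ps):        # fault tree OR gate over independent basic events
    return 1 - math.prod(1 - p for p in ps)

def p_and(ps):       # fault tree AND gate over independent basic events
    return math.prod(ps)

print(r_series([0.99, 0.95, 0.98]))        # ~0.922
print(r_parallel([0.9, 0.9]))              # 0.99
print(p_or([p_and([0.01, 0.02]), 0.001]))  # top event probability ~0.0012
```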
Styles APA, Harvard, Vancouver, ISO, etc.
41

Yin, Lijun. « Facial expression analysis and synthesis for model based coding ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0011/NQ59702.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Cunado, David. « Automatic gait recognition via model-based moving feature analysis ». Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297628.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
43

Beaumont, Paul James. « Model-based analysis of nuclear arms control verification processes ». Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/54653.

Texte intégral
Résumé :
Reduction of nuclear arms in a verifiable manner that is trusted by two or more parties is a hard but important problem. Nations and organisations that wish to engage in such arms control verification activities need to be able to design procedures and control mechanisms that let them compute pertinent degrees of belief. Crucially, they also need methods for reliably assessing their confidence in such beliefs in situations with little or no contextual data on which to apply data-driven learning techniques. This motivates the investigation of alternative methods of modelling beliefs. This thesis covers three key models: a probabilistic Bayesian Network (BN) model for an arms control inspection scenario; a dynamical system that models an arms race with dynamics reflecting verification activities; and mathematical games, used for understanding the design space of treaties that constrain inspection schedules. We extend our models beyond their conventional computational abilities and encode uncertainty over variables and probabilities within the models. The thesis explores the techniques required to enable such computations and to use them to answer questions of interest to decision making. In doing so, we also show that these abstractions can mitigate the risk that a lack of prior data represents for modelling and analysis. A main contribution of the thesis is not only to develop such methods for dealing with uncertainty, but also to extend these models with external constraints that reflect beliefs, knowledge or assumptions. We extend BNs to constrained Bayesian Networks and relax the requirement of declaring real-valued probabilities of events. This enables us to analyse marginal probabilities of interest symbolically, to develop metrics that check for agreement between the outputs of multiple different models, and even to optimise such metrics over the uncertainty. Whereas stochastic optimisation and other utility-based techniques would enable an analysis of likelihoods, this work employs robust optimisation: we assess 'best case', 'worst case' or 'is this ever possible' events, which are important to our arms control verification domain. For dynamical systems, we are able to leave initial parameters of the model unknown and then compute an optimal inspection routine (based on any arbitrarily set metric) that holds despite the uncertainty. This allows us to provide decision support regarding the best timings for rationing out a limited number of inspections, so that the inspection regime is the optimal one for the chosen metric. In game theory, we develop constrained symbolic games that include symbolic pay-offs, for which we can find Nash equilibria that vary as the symbolic terms change. This allows us to advise players on the mix of strategies to consider as the uncertain pay-offs vary, to optimise either the pay-offs or the use of particular strategies. Eventually, we are able to combine our approaches into an all-encompassing, yet fine-grained, model. Such integration accomplishes modelling of all aspects of an inspection process and of the regime that may call such a process; it also compensates for the shortcomings that individual mathematical techniques have and that other techniques can overcome. Our tools encode models in a Satisfiability Modulo Theories (SMT) solver; SMT solvers are powerful decision procedures for quantifier-free, first-order logic.
Solving our problems using SMT enables us to assess the sensitivity of, and our relative confidence in, particular models, as well as to optimise for variables of interest and test hypotheses even without full data. The practical difficulty lies in making SMT work for our large mathematical models, when SMT solvers can normally be relied on only for simple or small numbers of mathematical computations. Although the theory, formalisations and methodologies engineered here are not specific to this domain, we use a case study in nuclear arms control to evaluate our approach and to demonstrate the real-world insights gained. We conclude that the increased analytical capability from combining mathematical modelling and SMT allows us, in principle, to support the design or assessment of future bilateral arms control instruments by applying these models to instruments of interest.
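As a flavour of the SMT encoding style described here, the sketch below uses the Z3 Python API (z3-solver package) to ask a robust "is this ever possible" question over an interval-valued probability. The two-inspection scenario and all numbers are invented for illustration, not taken from the thesis.

```python
from z3 import Real, Solver, sat

p = Real("p")  # uncertain per-inspection detection probability
q = Real("q")  # probability that at least one of two inspections detects

s = Solver()
s.add(p >= 0.6, p <= 0.9)          # interval belief instead of a point value
s.add(q == 1 - (1 - p) * (1 - p))  # independence assumption between inspections
s.add(q < 0.95)                    # can confidence ever fall below a threshold?

if s.check() == sat:               # sat: yes, for some admissible p
    print(s.model())
```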
Styles APA, Harvard, Vancouver, ISO, etc.
44

Bailey, William. « Using model-based methods to support vehicle analysis planning ». Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50377.

Texte intégral
Résumé :
Vehicle system analysis models are becoming crucial to automotive designers wishing to better understand vehicle-level attributes and how they vary under different operating conditions. Such models require substantial planning and collaboration between multidisciplinary engineering teams. To improve the process used to create a vehicle system analysis model, the broader question of how to plan and develop any model should be addressed. Model-Based Systems Engineering (MBSE) is one approach that can be used to make such complex engineering tasks more efficient. MBSE can improve these tasks in several ways. It allows for more formal communication among stakeholders, avoids the ambiguity commonly found in document-based approaches to systems engineering, and allows stakeholders to all contribute to a single, integrated system model. Commonly, the Systems Modeling Language (SysML) is used to integrate existing analysis models with a system-level SysML model. This thesis, on the other hand, focuses on using MBSE to support the planning and development of the analysis models themselves. This thesis proposes an MBSE approach to improve the development of system models for Integrated Vehicle Analysis (IVA). There are several contributions of this approach. A formal process is proposed that can be used to plan and develop system analysis models. A comprehensive SysML model is used to capture both a descriptive model of a Vehicle Reference Architecture (VRA), as well as the requirements, specifications, and documentation needed to plan and develop vehicle system analysis models. The development of both the process and SysML model was performed alongside Ford engineers to investigate how their current practices can be improved. For the process and SysML model to be implemented effectively, a set of software tools is used to create a more intuitive user interface for the stakeholders involved. First, functionality is added to views and viewpoints in SysML so that they may be used to formally capture the concerns of different stakeholders as exportable XML files. Using these stakeholder-specific XML files, a custom template engine can be used to generate unique spreadsheets for each stakeholder. In this way, the concerns and responsibilities of each stakeholder can be defined within the context of a formally defined process. The capability of these two tools is illustrated through the use of examples which mimic current practices at Ford and can demonstrate the utility of such an approach.
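A minimal, hypothetical sketch of the export-and-generate step described above follows: stakeholder concerns captured as XML, turned into a per-stakeholder spreadsheet (CSV here). The XML schema, stakeholder name and properties are invented for illustration.

```python
import csv
import xml.etree.ElementTree as ET

xml_doc = """<viewpoint stakeholder="Powertrain Analyst">
  <concern element="Engine" property="maxTorque" unit="Nm"/>
  <concern element="Battery" property="capacity" unit="kWh"/>
</viewpoint>"""

root = ET.fromstring(xml_doc)
name = root.get("stakeholder").replace(" ", "_")
with open(f"{name}.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["element", "property", "unit"])
    for c in root.iter("concern"):
        writer.writerow([c.get(a) for a in ("element", "property", "unit")])
```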
Styles APA, Harvard, Vancouver, ISO, etc.
45

Bagheri, Mehrdad. « Analysis of Model-based Testing methods for Embedded Systems ». Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-300744.

Texte intégral
Résumé :
The work presented in this master's thesis is part of the Artemis-MBAT project. MBAT will provide European industry with a new leading-edge Validation and Verification technology in the form of a Reference Technology Platform (RTP) that will enable the production of high-quality and safe embedded systems at reduced cost in terms of time and money [1]. Model-Based Automated Testing is a new technique for automating the generation of test cases from system/software requirements. Unlike handcrafted tests, the test suite can be derived automatically in this approach by focusing on model behavior. The goal of this thesis is to analyze and prototype a tool whose scope is limited to analyzing a given Timed Automata model as input and generating the test suite accordingly. The output is intended to be used as input to Enea's Farkle Testbench. Farkle Testbench has already been implemented by Enea and has been integrated with the other Enea tools used for debugging embedded systems.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Aguilar, Chongtay María del Rocío. « Model based system for automated analysis of biomedical images ». Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/30059.

Texte intégral
Résumé :
This thesis is concerned with developing a probabilistic formulation of model-based vision using generalised flexible template models. It includes the design and implementation of a system which extends flexible template models to include grey-level information in the object representation for image interpretation. This system was designed to deal with microscope images, where the different stain and illumination conditions during image acquisition produce a strong correlation between density profile and geometric shape. The approach is based on statistical knowledge from a training set of examples. The variability of the shape/grey-level relationships is characterised by applying principal component analysis to the shape/grey-level vectors extracted from the training set. The main modes of variation of each object class are encoded in a generic object formulation constrained by the training set limits. This formulation adapts to the diversity and irregularities of shape and view during the object recognition process. The modes of variation are used to generate new object instances for the matching process on new image data. A genetic algorithm is used to find the best possible explanation of a candidate for a given model, based on the probability distribution of all possible matches. The approach is demonstrated by its application to microscope images of brain cells. It provides the means to obtain information such as brain cell density and distribution. This information could be useful in understanding the development and properties of some Central Nervous System (CNS) related diseases, for example in studies on the effects of HIV in the CNS, where neuronal loss is expected. The performance of the SGmodel system was compared with manual neuron counts from domain experts. The results show no significant difference between SGmodel and manual neuron estimates. The larger differences observed between the counts of the domain experts themselves underline the importance of the automated approach for performing an objective analysis.
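The principal component machinery described above can be sketched in a few lines. The training matrix below is random stand-in data (in the thesis each row would be a concatenated shape and density-profile vector), and the ±3√λ limits are the conventional training-set constraint used in statistical shape models.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))  # stand-in for 200 training feature vectors

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
eigvals = s**2 / (len(X) - 1)   # variance captured by each mode
modes = Vt                      # principal modes of variation

# retain enough modes to explain 95% of the training variance
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95)) + 1

# a new instance constrained to the training set limits: each coefficient
# b_i is confined to +/- 3 sqrt(lambda_i), the usual plausibility bound
b = rng.uniform(-1.0, 1.0, size=k) * 3.0 * np.sqrt(eigvals[:k])
instance = mean + b @ modes[:k]
```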
Styles APA, Harvard, Vancouver, ISO, etc.
47

Ponge, Julien. « Model based analysis of Time-aware Web Services Interactions ». Clermont-Ferrand 2, 2008. http://www.theses.fr/2008CLF21840.

Texte intégral
Résumé :
Web services are gaining importance as a framework facilitating application integration within and across enterprise boundaries. It is accepted that a service description should include not only the service interface but also the business protocol supported by the service. In this work, we formalised the class of protocols that include time-related constraints (called timed protocols) and studied the impact of time on compatibility and replaceability analysis. We formalised the following constraints: C-Invoke constraints define windows of availability, while M-Invoke constraints define expiration deadlines. We extended the techniques for compatibility and replaceability analysis between timed protocols by means of a semantics-preserving mapping from timed protocols to timed automata, which defined the class of protocol timed automata (PTA). PTA feature silent transitions that cannot be removed in general, and yet they are closed under complementation, which makes the various types of compatibility and replaceability analysis decidable. Finally, we implemented our approach as part of the ServiceMosaic project, a platform for managing the web services life cycle.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Coulibaly, Ibrahim. « Microcomputer based optimization model for photovoltaic system performance analysis ». Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/104314.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
49

Panas, Dagmara. « Model-based analysis of stability in networks of neurons ». Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28883.

Texte intégral
Résumé :
Neurons, the building blocks of the brain, are an astonishingly capable type of cell. Collectively they can store, manipulate and retrieve biologically important information, allowing animals to learn and adapt to environmental changes. This universal adaptability is widely believed to be due to plasticity: the readiness of neurons to manipulate and adjust their intrinsic properties and strengths of connections to other cells. It is through such modifications that associations between neurons can be made, giving rise to memory representations; for example, linking a neuron responding to the smell of pancakes with neurons encoding sweet taste and general gustatory pleasure. However, this malleability inherent to neuronal cells poses a dilemma from the point of view of stability: how is the brain able to maintain stable operation while in the state of constant flux? First of all, won’t there occur purely technical problems akin to short-circuiting or runaway activity? And second of all, if the neurons are so easily plastic and changeable, how can they provide a reliable description of the environment? Of course, evidence abounds to testify to the robustness of brains, both from everyday experience and scientific experiments. How does this robustness come about? Firstly, many control feedback mechanisms are in place to ensure that neurons do not enter wild regimes of behaviour. These mechanisms are collectively known as homeostatic plasticity, since they ensure functional homeostasis through plastic changes. One well-known example is synaptic scaling, a type of plasticity ensuring that a single neuron does not get overexcited by its inputs: whenever learning occurs and connections between cells get strengthened, subsequently all the neurons’ inputs get downscaled to maintain a stable level of net incoming signals. And secondly, as hinted by other researchers and directly explored in this work, networks of neurons exhibit a property present in many complex systems called sloppiness. That is, they produce very similar behaviour under a wide range of parameters. This principle appears to operate on many scales and is highly useful (perhaps even unavoidable), as it permits for variation between individuals and for robustness to mutations and developmental perturbations: since there are many combinations of parameters resulting in similar operational behaviour, a disturbance of a single, or even several, parameters does not need to lead to dysfunction. It is also that same property that permits networks of neurons to flexibly reorganize and learn without becoming unstable. As an illustrative example, consider encountering maple syrup for the first time and associating it with pancakes; thanks to sloppiness, this new link can be added without causing the network to fire excessively. As has been found in previous experimental studies, consistent multi-neuron activity patterns arise across organisms, despite the interindividual differences in firing profiles of single cells and precise values of connection strengths. Such activity patterns, as has been furthermore shown, can be maintained despite pharmacological perturbation, as neurons compensate for the perturbed parameters by adjusting others; however, not all pharmacological perturbations can be thus amended. 
In the present work, it is for the first time directly demonstrated that groups of neurons are, as a rule, sloppy; their collective parameter space is mapped to reveal which parameter combinations are sensitive and which are insensitive; and it is shown that the majority of spontaneous fluctuations over time primarily affect the insensitive parameters. In order to demonstrate the above, hippocampal neurons of the rat were grown in culture over multi-electrode arrays and recorded from for several days. Subsequently, statistical models were fit to the activity patterns of groups of neurons to obtain a mathematically tractable description of their collective behaviour at each time point. These models provide robust fits to the data and allow for a principled sensitivity analysis with the use of information-theoretic tools. This analysis revealed that groups of neurons tend to be governed by a few leader units. Furthermore, it appears that it was the stability of these key neurons and their connections that ensured the stability of collective firing patterns across time. The remaining units, in turn, were free to undergo plastic changes without risking destabilizing the collective behaviour. Together with what has been observed by other researchers, the findings of the present work suggest that the impressively adaptable yet robust functioning of the brain is made possible by the interplay of feedback control of a few crucial properties of neurons and the generally sloppy design of networks. It has, in fact, been hypothesised that any complex system subject to evolution is bound to rely on such design: in order to cope with natural selection under changing environmental circumstances, it would be difficult for a system to rely on tightly controlled parameters. It might be, therefore, that all life is just, by nature, sloppy.
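The sensitivity analysis described here rests on the spectrum of the Fisher Information Matrix (FIM). A minimal sketch follows, with synthetic score vectors (gradients of the log-likelihood) standing in for those of a fitted model of neural activity; the parameter scales are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic score samples for a 6-parameter model, with invented scales
scores = rng.normal(size=(5000, 6)) @ np.diag([30.0, 10.0, 3.0, 1.0, 0.3, 0.01])

fim = scores.T @ scores / len(scores)  # FIM ~ covariance of the score
eigvals = np.linalg.eigvalsh(fim)

# eigenvalues spanning many orders of magnitude are the signature of a
# sloppy model: stiff combinations (large eigenvalues) are pinned down by
# the data, sloppy ones (small eigenvalues) can drift without changing fit
print(np.round(np.log10(eigvals[::-1]), 1))
```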
Styles APA, Harvard, Vancouver, ISO, etc.
50

Deosthale, Eeshan Vijay. « Model-Based Fault Diagnosis of Automatic Transmissions ». The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1542631227815892.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.