Theses on the topic "Automated Modal Analysis"

Consult the 50 best theses for your research on the topic "Automated Modal Analysis".

Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Agee, Barry L. "Development of a laser-based automated mechanical mobility measurement system for one-dimensional experimental modal analysis". Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-12042009-020017/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yorgason, Robert Ivan. "Heteromorphic to Homeomorphic Shape Match Conversion Toward Fully Automated Mesh Morphing to Match Manufactured Geometry". BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6414.

Full text
Abstract
The modern engineering design process includes computer software packages that require approximations to be made when representing geometries. These approximations lead to inherent discrepancies between the design geometry of a part or assembly and the corresponding manufactured geometry. Further approximations are made during the analysis portion of the design process. Manufacturing defects can also occur, which increase the discrepancies between the design and manufactured geometry. These approximations combined with manufacturing defects lead to discrepancies which, for high precision parts, such as jet engine compressor blades, can affect the modal analysis results. In order to account for the manufacturing defects during analysis, mesh morphing is used to morph a structural finite element analysis mesh to match the geometry of compressor blades with simulated manufacturing defects. The mesh morphing process is improved by providing a novel method to convert heteromorphic shape matching within Sculptor to homeomorphic shape matching. This novel method is automated using Java and the NX API. The heteromorphic to homeomorphic conversion method is determined to be valid due to its post-mesh morphing maximum deviations being on the same order as the post-mesh morphing maximum deviations of the ideal homeomorphic case. The usefulness of the automated heteromorphic to homeomorphic conversion method is demonstrated by simulating manufacturing defects on the pressure surface of a compressor blade model, morphing a structural finite element analysis mesh to match the geometry of compressor blades with simulated manufacturing defects, performing a modal analysis, and making observations on the effect of the simulated manufacturing defects on the modal characteristics of the compressor blade.
APA, Harvard, Vancouver, ISO, and other styles
3

Kodikara, Kodikara Arachchige Tharindu Lakshitha. "Structural health monitoring through advanced model updating incorporating uncertainties". Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/110811/1/Kodikara%20Arachchige%20Tharindu%20Lakshitha_Kodikara_Thesis.pdf.

Full text
Abstract
This research developed comprehensive model updating systems for real structures, including a hybrid approach that enhances existing deterministic model updating techniques by incorporating uncertainties in a computationally efficient way compared to probabilistic model updating approaches. Further, utilizing the developed hybrid approach, a methodology was developed to assess the deterioration of reinforced concrete buildings under serviceability loading conditions. The developed methodologies were successfully validated using two real benchmark structures at Queensland University of Technology equipped with continuous monitoring systems.
APA, Harvard, Vancouver, ISO, and other styles
4

Loer, Karsten. "Model-based automated analysis for dependable interactive systems". Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Abdul, Sani Asmiza. "Towards automated formal analysis of model transformation specifications". Thesis, University of York, 2013. http://etheses.whiterose.ac.uk/8641/.

Full text
Abstract
In Model-Driven Engineering, model transformation is a key model management operation, used to translate models between notations. Model transformation can be used for many engineering activities, for instance as a preliminary to merging models from different metamodels, or to generate code from diagrammatic models. A mapping model needs to be developed (the transformation specification) to represent relations between concepts from the metamodels. The evaluation of the mapping model creates new challenges, both for conventional verification and validation, and also in guaranteeing that models generated by applying the transformation specification to source models still retain the intention of the initial transformation requirements. Most model transformation work creates and evaluates a transformation specification in an ad hoc manner. The specifications are usually unstructured, and the quality of the transformations can only be assessed when the transformations are used. Analysis is not systematically applied even when the transformations are in use, so there is no way to determine whether the transformations are correct and consistent. This thesis addresses the problem of systematic creation and analysis of model transformation, via a facility for planning and designing model transformations which have conceptual-level properties that are tractable to formal analysis. We propose a framework that provides steps to systematically build a model transformation specification, a visual notation for specifying model transformation, and a template-based approach for producing a formal specification that is not just structure-equivalent but also amenable to formal analysis. The framework allows evaluation of syntactic and semantic correctness of generated models, metamodel coverage, and semantic correctness of the transformations themselves, with the help of snapshot analysis using patterns.
APA, Harvard, Vancouver, ISO, and other styles
6

Rutaganda, Remmy. "Automated Model-Based Reliability Prediction and Fault Tree Analysis". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-67240.

Full text
Abstract
This work was undertaken as a final-year project in Computer Engineering, within the Department of Computer and Information Science at Linköping University. At the department, work oriented toward testing and analyzing applications is carried out to provide solution approaches to problems that arise in system product development. One of the applications currently being developed is the 'Systemics Analyst'. The purpose of the application is to provide system developers with an analysis tool permitting insights into system reliability, system-critical components, how to improve the system, and the consequences as well as risks of a system failure. The purpose of the present thesis was to enhance the Systemics Analyst application by incorporating automated model-based reliability prediction and fault tree analysis modules. This enables reliability prediction and fault tree analysis diagrams to be generated automatically from the data files and relieves the system developer from manual creation of the diagrams. The enhanced Systemics Analyst application presents the results in the respective models using the newly incorporated functionality. To accomplish the above tasks, the Systemics Analyst application was integrated with a library that handles automated model-based reliability prediction and fault tree analysis, which is described in this thesis. The reader is guided through the steps performed to accomplish these tasks, with illustrative figures, methods and code examples, in order to provide a closer view of the work performed.
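At its core, the fault tree analysis such a library automates is the propagation of basic-event probabilities through AND/OR gates. The sketch below is a generic illustration under an independence assumption; the gate structure and probabilities are invented for the example, not taken from the Systemics Analyst.

```python
# Toy fault-tree evaluation. Basic events are assumed independent;
# gate structure and probabilities are invented for illustration.

def or_gate(*probs):
    """P(at least one input event occurs), inputs independent."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(*probs):
    """P(all input events occur), inputs independent."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Top event: system failure = (pump fails AND backup fails) OR valve fails.
p_top = or_gate(and_gate(0.01, 0.05), 0.002)
print(f"top-event probability: {p_top:.6f}")
```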
APA, Harvard, Vancouver, ISO, and other styles
7

Aguilar, Chongtay María del Rocío. "Model based system for automated analysis of biomedical images". Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/30059.

Full text
Abstract
This thesis is concerned with developing a probabilistic formulation of model-based vision using generalised flexible template models. It includes the design and implementation of a system which extends flexible template models to include grey-level information in the object representation for image interpretation. The system was designed to deal with microscope images, where the different stain and illumination conditions during image acquisition produce a strong correlation between density profile and geometric shape. The approach is based on statistical knowledge from a training set of examples. The variability of the shape-grey level relationships is characterised by applying principal component analysis to the shape-grey level vectors extracted from the training set. The main modes of variation of each object class are encoded with a generic object formulation constrained by the training set limits. This formulation adapts to the diversity and irregularities of shape and view during the object recognition process. The modes of variation are used to generate new object instances for the matching process on new image data. A genetic algorithm is used to find the best possible explanation for a candidate of a given model, based on the probability distribution of all possible matches. The approach is demonstrated by its application to microscope images of brain cells, providing the means to obtain information such as brain cell density and distribution. This information could be useful in understanding the development and properties of some Central Nervous System (CNS) related diseases, for example in studies on the effects of HIV in the CNS, where neuronal loss is expected. The performance of the SGmodel system was compared with manual neuron counts from domain experts. The results show no significant difference between SGmodel and manual neuron estimates, while the larger differences observed between the counts of the domain experts themselves underline the importance of the automated approach for performing objective analysis.
APA, Harvard, Vancouver, ISO, and other styles
8

Tanuan, Meyer C. "Automated Analysis of Unified Modeling Language (UML) Specifications". Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1140.

Full text
Abstract
The Unified Modeling Language (UML) is a standard language adopted by the Object Management Group (OMG) for writing object-oriented (OO) descriptions of software systems. UML allows the analyst to add class-level and system-level constraints. However, UML does not describe how to check the correctness of these constraints. Recent studies have shown that Symbolic Model Checking can effectively verify large software specifications. In this thesis, we investigate how to use model checking to verify constraints of UML specifications. We describe the process of specifying, translating and verifying UML specifications for an elevator example. We use the Cadence Symbolic Model Verifier (SMV) to verify the system properties. We demonstrate how to write a UML specification that can be easily translated to SMV. We propose a set of rules and guidelines to translate UML specifications to SMV, and then use these to translate a non-trivial UML elevator specification to SMV. We look at errors detected throughout the specification, translation and verification process, to see how well they reveal errors, ambiguities and omissions in the user requirements.
APA, Harvard, Vancouver, ISO, and other styles
9

Aho, P. (Pekka). "Automated state model extraction, testing and change detection through graphical user interface". Doctoral thesis, Oulun yliopisto, 2019. http://urn.fi/urn:isbn:9789526224060.

Full text
Abstract
Testing is an important part of quality assurance, and the use of agile processes, continuous integration and DevOps is increasing the pressure to automate all aspects of testing. Testing through graphical user interfaces (GUIs) is commonly automated by scripts that are captured or manually created with a script editor, automating the execution of test cases. A major challenge with script-based GUI test automation is the manual effort required for maintaining the scripts when the GUI changes. Model-based testing (MBT) is an approach for automating also the design of test cases. Traditionally, models for MBT are designed manually with a modelling tool, and an MBT tool is used for generating abstract test cases from the model. Then, an adapter is implemented to translate the abstract test cases into concrete test cases that can be executed on the system under test (SUT). When the GUI changes, the model has to be updated and the test cases can be generated from the updated model, reducing the maintenance effort. However, designing models and implementing adapters requires effort and specialized expertise. The main research questions of this thesis are 1) how to automatically extract state-based models of software systems with a GUI, and 2) how to use the extracted models to automate testing. Our focus is on using dynamic analysis through the GUI during automated exploration of the system, and we concentrate on desktop applications. Our results show that extracting state models through the GUI is possible and the models can be used to generate regression test cases, but a more promising approach is to use model comparison on extracted models of consecutive system versions to automatically detect changes between the versions.
APA, Harvard, Vancouver, ISO, and other styles
10

Blom, Rikard. "Advanced metering infrastructure reference model with automated cyber security analysis". Thesis, KTH, Elkraftteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204910.

Full text
Abstract
The European Union has set a target to install nearly 200 million smart meters across Europe before 2020. This leads to a vast increase in sensitive information flow for Distribution System Operators (DSOs) and, simultaneously, to raised cyber security threats. The incoming and outgoing information of the DSO needs to be processed and stored by different Information Technology (IT) and Operational Technology (OT) systems, depending on the information. High demands are therefore placed on enterprise cyber security to protect the enterprise IT and OT systems. Sensitive customer information and a variety of services and functionality are examples of what could be fatal to a DSO if compromised. For instance, someone with bad intentions might have the possibility to tinker with your electricity while you are away on holiday. If they succeed with the attack and shut down the house electricity, the food stored in your fridge and freezer would most likely rot; in addition, leaking defrost water could cause severe damage to walls and floors. In this thesis, a detailed reference model of the advanced metering infrastructure (AMI) has been produced to support enterprises involved in the process of implementing smart meter architecture and to adapt to new requirements regarding cyber security. This has been conducted using foreseeti's tool securiCAD; foreseeti is a proactive cyber security company using architecture management. SecuriCAD is a modeling tool that can conduct cyber security analysis, where the user can see how long it would take a professional penetration tester to penetrate the systems in the model, depending on the set-up and defense attributes of the architecture. By varying the defense mechanisms of the systems, four scenarios have been defined and used to formulate recommendations based on calculations on the advanced metering architecture. Recommendations in brief: use small and distinct network zones with strict communication rules between them; apply diligent security arrangements for the system administrator's PC; and use an Intrusion Protection System (IPS) in the right fashion, which can delay the attacker by 46% or more.
APA, Harvard, Vancouver, ISO, and other styles
11

Mårtensson, Jonas. "Geometric analysis of stochastic model errors in system identification". Doctoral thesis, KTH, Reglerteknik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4506.

Full text
Abstract
Models of dynamical systems are important in many disciplines of science, ranging from physics and traditional mechanical and electrical engineering to life sciences, computer science and economics. Engineers, for example, use models for development, analysis and control of complex technical systems. Dynamical models can be derived from physical insights, for example some known laws of nature, (which are models themselves), or, as considered here, by fitting unknown model parameters to measurements from an experiment. The latter approach is what we call system identification. A model is always (at best) an approximation of the true system, and for a model to be useful, we need some characterization of how large the model error is. In this thesis we consider model errors originating from stochastic (random) disturbances that the system was subject to during the experiment. Stochastic model errors, known as variance-errors, are usually analyzed under the assumption of an infinite number of data. In this context the variance-error can be expressed as a (complicated) function of the spectra (and cross-spectra) of the disturbances and the excitation signals, a description of the true system, and the model structure (i.e., the parametrization of the model). The primary contribution of this thesis is an alternative geometric interpretation of this expression. This geometric approach consists in viewing the asymptotic variance as an orthogonal projection on a vector space that to a large extent is defined from the model structure. This approach is useful in several ways. Primarily, it facilitates structural analysis of how, for example, model structure and model order, and possible feedback mechanisms, affect the variance-error. Moreover, simple upper bounds on the variance-error can be obtained, which are independent of the employed model structure. The accuracy of estimated poles and zeros of linear time-invariant systems can also be analyzed using results closely related to the approach described above. One fundamental conclusion is that the accuracy of estimates of unstable poles and zeros is little affected by the model order, while the accuracy deteriorates fast with the model order for stable poles and zeros. The geometric approach has also shown potential in input design, which treats how the excitation signal (input signal) should be chosen to yield informative experiments. For example, we show cases when the input signal can be chosen so that the variance-error does not depend on the model order or the model structure. Perhaps the most important contribution of this thesis, and of the geometric approach, is the analysis method as such. Hopefully the methodology presented in this work will be useful in future research on the accuracy of identified models; in particular non-linear models and models with multiple inputs and outputs, for which there are relatively few results at present.
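For context, the variance-error expression mentioned above builds on a standard prediction-error identification result, shown here in common textbook notation; this is background, not the thesis's own derivation.

```latex
% Asymptotic covariance of the prediction-error estimate \hat{\theta}_N:
% \psi is the gradient of the one-step-ahead predictor with respect to
% the parameters, and \lambda_0 is the innovation variance.
\operatorname{AsCov}\bigl(\hat{\theta}_N\bigr)
  = \lambda_0 \left( \overline{\mathbb{E}}\,
    \psi(t,\theta_0)\,\psi(t,\theta_0)^{\mathsf{T}} \right)^{-1}
```

The geometric reading described in the abstract then expresses the asymptotic variance of a scalar quantity of interest as the squared norm of an orthogonal projection onto a subspace spanned by such gradients.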
APA, Harvard, Vancouver, ISO, and other styles
12

Kara, Ismihan Refika. "Automated Navigation Model Extraction For Web Load Testing". Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613992/index.pdf.

Full text
Abstract
Web pages serve a huge number of internet users in nearly every area, and adequate testing is needed to address the problems of web domains and provide more efficient and accurate services. We present an automated tool to test web applications against execution errors and the errors that occur when many users connect to the same server concurrently. Our tool, called NaMoX, obtains the clickables of the web pages and creates a model using a depth-first search algorithm. NaMoX simulates a number of users, parses the developed model, and tests the model by branch coverage analysis. We have performed experiments on five web sites, reporting the response times when a click operation is performed, and found 188 errors in total. Quality metrics are extracted and applied to the case studies.
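A minimal sketch of the depth-first model extraction the abstract describes. The `browser` object and its `get_clickables()`/`click()` methods are hypothetical stand-ins; the abstract does not specify NaMoX's actual interfaces.

```python
# Build a navigation model: page id -> {clickable id: target page id}.
# `browser` is a hypothetical driver, not NaMoX's real API.

def extract_model(browser, start_page):
    model = {}

    def dfs(page):
        if page in model:                     # already explored
            return
        model[page] = {}
        for c in browser.get_clickables(page):
            target = browser.click(page, c)   # follow the clickable
            model[page][c] = target
            dfs(target)                       # explore reached page depth-first

    dfs(start_page)
    return model
```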
APA, Harvard, Vancouver, ISO, and other styles
13

Chan, Carlos Chun Ming. "Speaker model adaptation in automatic speech recognition". Thesis, Robert Gordon University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Ronval, Gilles P. L. "Automatic modal analysis and taxonomy for vibration signature recognition". Thesis, University of Huddersfield, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Ahmedt, Aristizabal David Esteban. "Multi-modal analysis for the automatic evaluation of epilepsy". Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/132537/1/David_Ahmedt%20Aristizabal_Thesis.pdf.

Full text
Abstract
Motion recognition technology is proposed to support neurologists in the study of patients' behaviour during epileptic seizures. The system can provide clues about the sub-type of epilepsy that patients have, identify unusual manifestations that require further investigation, and support a better understanding of the temporal evolution of seizures, from their onset through to termination. The incorporation of quantitative methods would assist in developing and formulating a diagnosis in situations where clinical expertise is unavailable. This research provides important supplementary and unbiased data to assist with seizure localization, and is a vital complementary resource in the era of seizure detection based on electrophysiological data.
APA, Harvard, Vancouver, ISO, and other styles
16

Delgado, Diogo Miguel Melo. "Automated illustration of multimedia stories". Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/4478.

Full text
Abstract
Submitted in part fulfillment of the requirements for the degree of Master in Computer Science
We have all had the problem of forgetting what we read just a few sentences before. This stems from the problem of attention and is more common among children and the elderly, who feel either bored or distracted by something more interesting. The challenge is: how can multimedia systems assist users in reading and remembering stories? One solution is to use pictures to illustrate stories as a means of captivating one's interest, since a picture either tells a story or makes the viewer imagine one. This thesis researches the problem of automated story illustration as a method to increase readers' interest and attention. We formulate the hypothesis that an automated multimedia system can help users read a story by stimulating their reading memory with adequate visual illustrations. We propose a framework that tells a story and attempts to capture the readers' attention by providing illustrations that spark the readers' imagination. The framework automatically creates a multimedia presentation of the news story by (1) rendering news text in a sentence-by-sentence fashion, (2) providing mechanisms to select the best illustration for each sentence and (3) selecting the set of illustrations that guarantees the best sequence. These mechanisms are rooted in image and text retrieval techniques. To further improve users' attention, users may also activate a text-to-speech functionality according to their preference or reading difficulties. First experiments show how Flickr images can illustrate BBC news articles and provide a better experience to news readers. On top of the illustration methods, a user feedback feature was implemented to perfect the illustration selection; with this feature users can help the framework select more accurate results. Finally, empirical evaluations were performed in order to test the user interface, the image/sentence association algorithms and the user feedback functionality. The respective results are discussed.
APA, Harvard, Vancouver, ISO, and other styles
17

Karlapudi, Janakiram. "Analysis on automatic generation of BEPS model from BIM model". Verlag der Technischen Universität Graz, 2020. https://tud.qucosa.de/id/qucosa%3A73547.

Full text
Abstract
The interlinking of enriched BIM data to Building Energy Performance Simulation (BEPS) models facilitates the data flow throughout the building life cycle. This seamless data transfer from BIM to BEPS models increases design efficiency. To investigate the interoperability between these models, this paper analyses different data transfer methodologies along with input data requirements for the simulation process. Based on the analysed knowledge, a methodology is adopted and demonstrated to identify the quality of the data transfer process. Furthermore, discussions are provided on identified efficiency gaps and future work.
APA, Harvard, Vancouver, ISO, and other styles
18

Kypuros, Javier Angel. "Variable structure model synthesis for switched systems /". Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Nasser-Barakat, Fatima. "Automatic modal variation tracking via a filter-free random decrement technique application to ambient vibration recordings on high-rise buildings". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT044/document.

Full text
Abstract
This thesis proposes a novel approach to automatically monitor the variations of the frequencies and the damping ratios of actual high-rise buildings subjected to real-world ambient vibrations. The approach aims at dealing simultaneously with the following challenges: multi-component signals recorded over the aforementioned buildings, having closely spaced frequency modes with low, exponentially damped amplitudes of their impulse responses, and contaminated with high additive noise. The approach relies on the application of the Random Decrement Technique directly over the multi-component signal under study, which leads to the extraction of a Multi-mode Random Decrement Signature equivalent to the system impulse response. To characterize such a signature, we propose a signal model based on the physical structure of the building, from which the modal parameters can be estimated. For the purpose of non-biased modal estimates, we propose an iterative method based on a Maximum-Likelihood Estimation optimized by a simulated annealing technique. In order to initialize the parameters of the latter, a first step is designed which can be considered as an independent estimator of the modal parameters. The originality of this step lies in its ability to automatically define the number of modes of the estimated signature through the use of the statistical properties of a Welch spectrum. The modal parameters estimated by the spectral-based initialization step are finally refined by the Maximum-Likelihood Estimation step, which reduces the bias in the estimation and yields more reliable and robust results. All these steps are defined so as to be able to automatically monitor the health of a building via long-term, real-time tracking of the modal variations over time, without the need for any user intervention. In addition, the proposed approach pays special attention to the automatic estimation of the most problematic modal parameter, i.e., the damping ratio. These are two of its original features compared to existing techniques. The adaptability and functionality of AMBA are validated over six actual buildings excited by real-world ambient vibrations. From the obtained results, AMBA proved highly efficient in automatically estimating the frequencies and, moreover, the damping ratios in the case of closely spaced frequency modes and very low signal-to-noise ratio levels. AMBA likewise demonstrated good performance in tracking the modal variations over time.
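The averaging step at the heart of the Random Decrement Technique is compact enough to sketch. This is a generic single-channel, level-crossing-triggered version for illustration, not AMBA's multi-mode implementation:

```python
import numpy as np

def random_decrement(x, trigger_level, seg_len):
    """Estimate a free-decay (random decrement) signature by averaging
    the segments that follow each upward crossing of trigger_level."""
    # indices where the signal crosses the trigger level upwards
    idx = np.where((x[:-1] < trigger_level) & (x[1:] >= trigger_level))[0] + 1
    idx = idx[idx + seg_len <= len(x)]        # keep full-length segments only
    if len(idx) == 0:
        raise ValueError("no trigger points found")
    return np.mean([x[i:i + seg_len] for i in idx], axis=0)

# usage: signature of a noisy decaying oscillation
t = np.linspace(0, 60, 6000)
x = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.random.randn(t.size)
signature = random_decrement(x, trigger_level=x.std(), seg_len=500)
```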
APA, Harvard, Vancouver, ISO, and other styles
20

Ghosh, Krishnendu. "Formal Analysis of Automated Model Abstractions under Uncertainty: Applications in Systems Biology". University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1330024977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Keil, Mitchel J. "Automatic generation of interference-free geometric models of spatial mechanisms". Diss., This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-08252008-162631/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Yizhi. "Automated Analysis of Astrocyte Activities from Large-scale Time-lapse Microscopic Imaging Data". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/95988.

Full text
Abstract
The advent of multi-photon microscopes and highly sensitive protein sensors enables the recording of astrocyte activities in a large population of cells over a long time period in vivo. Existing tools cannot fully characterize these activities, both within single cells and at the population level, because current region-of-interest-based approaches are insufficient to describe activity that is often spatially unfixed, size-varying, and propagative. Here, we present Astrocyte Quantitative Analysis (AQuA), an analytical framework that releases astrocyte biologists from the ROI-based paradigm. The framework takes an event-based perspective to model and accurately quantify the complex activity in astrocyte imaging datasets, with an event defined jointly by its spatial occupancy and temporal dynamics. To model signal propagation in astrocytes, we developed graphical time warping (GTW) to align curves with graph-structured constraints and integrated it into AQuA. To make AQuA easy to use, we designed a comprehensive software package. The software implements the detection pipeline in an intuitive step-by-step GUI with visual feedback, and also supports proof-reading and the incorporation of morphology information. With synthetic data, we showed that AQuA performs much more accurately than existing methods developed for astrocytic and neuronal data. We applied AQuA to a range of ex vivo and in vivo imaging datasets. Since AQuA is data-driven and based on machine learning principles, it can be applied across model organisms, fluorescent indicators, experimental modes, and imaging resolutions and speeds, enabling researchers to elucidate fundamental astrocyte physiology.
Doctor of Philosophy
Astrocytes are an important type of glial cell in the brain. Unlike neurons, astrocytes cannot be electrically excited. However, the concentrations of many different molecules inside and near astrocytes change over space and time and show complex patterns. Recording, analyzing, and deciphering these activity patterns enables the understanding of the various roles astrocytes may play in the nervous system. Many of these important roles, such as sensory-motor integration and brain state modulation, were traditionally considered the territory of neurons, but have recently been found to be related to astrocytes. These activities can be monitored in the intracellular and extracellular spaces in either brain slices or living animals, thanks to the advancement of microscopes and genetically encoded fluorescent sensors. However, sophisticated analytical tools lag far behind the impressive capability of generating the data. The major reason is that existing tools are all based on the region-of-interest (ROI) approach. This approach assumes the field of view can be segmented into many regions whose pixels are all active together. In neuronal activity analysis, all pixels in an ROI correspond to a neuron and are assumed to share a common activity pattern (curve). This is not true for astrocyte activity data, because astrocyte activities are spatially unfixed, size-varying, and propagative. In this dissertation, we developed a framework called AQuA to detect the activities directly. We designed an accurate and flexible detection pipeline that works with different types of astrocyte activity data sets, designed a machine learning model to characterize signal propagation for the pipeline, and implemented a comprehensive and user-friendly software package. The advantage of AQuA is confirmed in both simulation studies and three different types of real data sets.
APA, Harvard, Vancouver, ISO, and other styles
23

Johnson, John Peter. "Automatic sensitivity analysis for an Army modernization optimization model". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA302947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Cunado, David. "Automatic gait recognition via model-based moving feature analysis". Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Sproston, Jeremy James. "Model checking of probabilistic timed and hybrid systems". Thesis, University of Birmingham, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Wei, Ran. "An extensible static analysis framework for automated analysis, validation and performance improvement of model management programs". Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/14375/.

Full text
Abstract
Model Driven Engineering (MDE) is a state-of-the-art software engineering approach which adopts models as first-class artefacts. In MDE, modelling tools and task-specific model management languages are used to reason about the system under development and to (automatically) produce software artefacts such as working code and documentation. Existing tools providing state-of-the-art model management languages lack support for automatic static analysis, both for error detection (especially when models defined in various modelling technologies are involved within a multi-step MDE development process) and for performance optimisation (especially when very large models are involved in model management operations). This thesis investigates the hypothesis that static analysis of model management programs in the context of MDE can help with the detection of potential runtime errors and can also be used to achieve automated performance optimisation of such programs. To assess the validity of this hypothesis, a static analysis framework for the Epsilon family of model management languages is designed and implemented. The static analysis framework is evaluated in terms of its support for analysis of task-specific model management programs involving models defined in different modelling technologies, and its ability to improve the performance of model management programs operating on large models.
APA, Harvard, Vancouver, ISO, and other styles
27

Zheng, Yilei. "IFSO: A Integrated Framework For Automatic/Semi-automatic Software Refactoring and Analysis". Digital WPI, 2004. https://digitalcommons.wpi.edu/etd-theses/241.

Full text
Abstract
To automatically or semi-automatically improve the internal structure of a legacy system, there are several challenges: most available software analysis algorithms focus on only one particular granularity level (e.g., method level, class level) without considering possible side effects on other levels during the process; the quality of a software system cannot be judged by a single algorithm; and software analysis is a time-consuming process which typically requires lengthy interactions. In this thesis, we present a framework, IFSO (Integrated Framework for automatic/semi-automatic Software refactoring and analysis), as a foundation for automatic/semi-automatic software refactoring and analysis. Our proposed conceptual model, LSR (Layered Software Representation Model), defines an abstract representation of software using a layered approach, where each layer corresponds to a granularity level. The IFSO framework, which is built upon the LSR model for component-based software, represents software at the system, component, class, method and logic unit levels. Each level can be customized independently by different algorithms, such as cohesion metrics, design heuristics, design problem detection and operations. By cooperating across levels, IFSO presents a global view and an interactive environment for software refactoring and analysis. A prototype was implemented for the evaluation of our technology, and three case studies were developed on top of it: three metrics, dead code removal, and low-coupled unit detection.
APA, Harvard, Vancouver, ISO, and other styles
28

Deosthale, Eeshan Vijay. "Model-Based Fault Diagnosis of Automatic Transmissions". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1542631227815892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Al, Ramadhani Saif Ahmed. "A gap analysis of the automated speed enforcement operations and regulations in Oman". Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/134252/1/Saif_Al%20Ramadhani_Thesis.pdf.

Full text
Abstract
This research is the first of its kind to examine the conceptual approach to automated speed enforcement and its operational practices in Oman, from the perspectives of police and policy-makers. Two gap analysis tools were used to identify the gaps within the conceptual approach and operational practices: the Congruence Model and Benchmarking. Suggestions are provided for improving the conceptual and operational aspects of the automated speed enforcement program in Oman, which can also be adopted in other neighbouring countries of the Gulf Cooperation Council.
APA, Harvard, Vancouver, ISO, and other styles
30

Meiklejohn, Mark. "Automated software development and model generation by means of syntactic and semantic analysis". Thesis, University of Strathclyde, 2014. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=24855.

Full text
Abstract
Software development is a global activity, and the development of a software system starts from some requirement that describes the problem domain. These requirements need to be communicated so that the software system can be fully engineered, and in the majority of cases the communication of software requirements takes the form of written text, which is difficult to transform into a model of the software system and consumes an inordinate amount of project effort. This thesis proposes and evaluates a fully automated analysis and model creation technique that exploits the syntactic and semantic information contained within an English natural language requirements specification to construct a Unified Modelling Language (UML) model of the software requirements. The thesis provides a detailed description of the related literature, a thorough description of the Common Semantic Model (CSM) and Syntactic Analysis Model (SAM), and the results of a qualitative and comparative evaluation given realistic requirement specifications and ideal models. The research findings confirm that the CSM and SAM models can identify classes, relationships, multiplicities, operations, parameters and attributes, all from the written natural language requirements specification, which is subsequently transformed into a UML model. Furthermore, this transformation is undertaken without the need for manual intervention or manipulation of the requirements specification.
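As a toy illustration of the syntactic side of such an analysis, part-of-speech tags can propose nouns as candidate classes and verbs as candidate operations. This stand-in uses NLTK and is not the thesis's CSM/SAM pipeline:

```python
# Naive POS-tag-driven extraction of modelling candidates.
# Requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk

def candidates(sentence):
    tags = nltk.pos_tag(nltk.word_tokenize(sentence))
    classes = [w for w, t in tags if t.startswith("NN")]     # nouns
    operations = [w for w, t in tags if t.startswith("VB")]  # verbs
    return classes, operations

print(candidates("The library lends a book to a registered member."))
```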
APA, Harvard, Vancouver, ISO, and other styles
31

CABBOI, ALESSANDRO. "Automatic operational modal analysis: challenges and applications to historic structures and infrastructures". Doctoral thesis, Università degli Studi di Cagliari, 2014. http://hdl.handle.net/11584/266404.

Full text
Abstract
The core of the work revolves around the capability to automate Operational Modal Analysis (OMA) methods for permanent dynamic monitoring systems. In general, the application of OMA methods requires an engineer experienced in experimental dynamics and modal analysis; in addition, a lot of time is usually spent on manual analysis, necessary to ensure the best estimation of modal parameters. These features are at odds with permanent dynamic monitoring, which requires algorithms that efficiently manage the huge amount of recorded data in a short time while ensuring an acceptable quality of results. Therefore, the use of parametric identification methods, such as Stochastic Subspace Identification (SSI) methods, is explored and some recommendations concerning their application are provided. The identification process is combined with the automatic interpretation of stabilization diagrams based on a damping ratio check and on modal complexity inspection. Finally, a clustering method for the identified modes and a modal tracking strategy are suggested and discussed. The whole procedure is validated with a one-month and a one-year set of "manually identified" modal parameters, which constitutes a quite unique set of validation data in the literature. Two monitoring case studies are examined: a railway iron arch bridge (1889) and a masonry bell tower (12th century). Within this framework, classical and new strategies to handle the huge amount of recorded and identified data are proposed and compared for structural anomaly detection. The classical strategies are mainly based on the inspection of any irreversible frequency variation; to this purpose, an extensive correlation study with the environmental and operational factors which affect the frequencies of the vibration modes is mandatory. Conversely, one of the proposed strategies aims to detect structural anomalies using alternative dynamic features that are not sensitive to environmental factors, such as mode shapes or modal complexity, instead of frequency parameters. A further strategy aims to eliminate the environmentally induced effects on frequency without knowledge or measurements of such factors; the procedure is mainly based on the combination of a simple regression model with the results obtained by a Principal Component Analysis. Furthermore, two automated Operational Modal Analysis procedures are compared for Structural Health Monitoring (SHM) purposes: the first is based on SSI methods, while the second involves a non-parametric technique, the Frequency Domain Decomposition (FDD) method. In conclusion, a model updating strategy for historic structures using Ambient Vibration Test and long-term monitoring results is presented. The main goal is to integrate the information provided by a FE model with the information continuously extracted by a dynamic monitoring system, so that any detection of structural anomalies is based on the variation of the uncertain structural parameters.
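The damping-ratio check and frequency clustering used to interpret stabilization diagrams can be sketched as follows; the thresholds are illustrative defaults, not the values used in the thesis:

```python
import numpy as np

def plausible_modes(freqs, zetas, zeta_max=0.10, rel_tol=0.01):
    """Drop identified poles with non-physical damping ratios, then merge
    the surviving frequencies into clusters (ideally one per physical mode)."""
    freqs, zetas = np.asarray(freqs, float), np.asarray(zetas, float)
    keep = (zetas > 0.0) & (zetas < zeta_max)   # damping-ratio check
    f = np.sort(freqs[keep])
    if f.size == 0:
        return []
    clusters, current = [], [f[0]]
    for fi in f[1:]:
        if (fi - current[-1]) / current[-1] < rel_tol:
            current.append(fi)                  # close in frequency: same mode
        else:
            clusters.append(current)            # gap: start a new mode
            current = [fi]
    clusters.append(current)
    return [float(np.mean(c)) for c in clusters]
```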
APA, Harvard, Vancouver, ISO, and other styles
32

Kautz, Oliver [Verfasser]. "Model Analyses Based on Semantic Differencing and Automatic Model Repair / Oliver Kautz". Düren : Shaker, 2021. http://d-nb.info/1233548298/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Ohlsson, Henrik. "Mathematical Analysis of a Biological Clock Model". Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6750.

Full text
Abstract

Have you thought of why you get tired or why you get hungry? Something in your body keeps track of time. It is almost like you have a clock that tells you all those things.

And indeed, in the suprachiasmatic region of our hypothalamus reside cells which each act like an oscillator, and together form a coherent circadian rhythm to help our body keep track of time. In fact, such circadian clocks are not limited to mammals but can be found in many organisms, including single-cell organisms, reptiles and birds. The study of such rhythms constitutes a field of biology, chronobiology, and forms the background for my research and this thesis.

Pioneers of chronobiology, Pittendrigh and Aschoff, studied biological clocks from an input-output view across a range of organisms, by observing and analyzing their overt activity in response to stimuli such as light. Their study was made without recourse to knowledge of the biological underpinnings of the circadian pacemaker. The advent of the new biology has now made it possible to "break open the box" and identify biological feedback systems comprised of gene transcription and protein translation as the core mechanism of a biological clock.

My research has focused on a simple transcription-translation clock model which nevertheless possesses many of the features of a circadian pacemaker, including its entrainability by light. This model consists of two nonlinear, coupled and delayed differential equations. Light pulses can reset the phase of this clock, whereas constant light of different intensity can speed it up or slow it down. This latter property is a signature property of circadian clocks and is referred to in chronobiology as "Aschoff's rule". The discussion in this thesis focuses on developing a connection with, and an understanding of, how constant light affects this clock model.
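For illustration, a transcription-translation clock of this kind is often written as a pair of delayed differential equations; the generic Goodwin-type form below is an assumption for exposition, not necessarily the thesis's exact model:

```latex
% m(t): mRNA concentration, p(t): protein concentration.
% Transcription is repressed by delayed protein; translation uses
% delayed mRNA. All symbols are generic illustration parameters.
\frac{dm}{dt} = \frac{\alpha}{1 + \left( p(t-\tau_1)/K \right)^{n}} - \mu\, m(t),
\qquad
\frac{dp}{dt} = \beta\, m(t-\tau_2) - \nu\, p(t)
```

Constant light can then be modelled as a parametric change (for instance in the transcription rate alpha), which speeds the oscillator up or slows it down in the spirit of Aschoff's rule.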

APA, Harvard, Vancouver, ISO, and other styles
34

Alexsson, Andrei. "Unsupervised hidden Markov model for automatic analysis of expressed sequence tags". Thesis, Linköpings universitet, Bioinformatik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69575.

Full text
Abstract
This thesis provides an in-depth analysis of expressed sequence tags (ESTs), which represent pieces of eukaryotic mRNA, using an unsupervised hidden Markov model (HMM). ESTs are short nucleotide sequences that are used primarily for rapid identification of new genes with potential coding regions (CDS). ESTs are made by sequencing on double-stranded cDNA, and the synthesized ESTs are stored in digital form, usually in FASTA format. Since sequencing is often randomized and parts of mRNA contain non-coding regions, some ESTs will not represent CDS. It is desirable to remove these unwanted ESTs if the purpose is to identify genes associated with CDS. Application of a stochastic HMM allows identification of the region contents of an EST. Software such as ESTScan uses HMMs trained by supervised learning with annotated data. However, because annotated data are not always at hand, this thesis focuses on the ability to train an HMM with unsupervised learning on data containing ESTs both with and without CDS, where the training data are not annotated, i.e., the regions that an EST consists of are unknown. In this thesis a new HMM is introduced, whose parameters are chosen to be reasonably consistent with biologically important regions of an mRNA, such as the Kozak sequence, poly(A)-signals and poly(A)-tails, so as to guide the training and decoding of ESTs to the proper states in the HMM. Transition probabilities in the HMM have been adapted so that they represent the mean length and distribution of the different regions in mRNA. Testing of the HMM's specificity and sensitivity has been performed via BLAST, by blasting each EST and comparing the BLAST results with the HMM prediction results. A regression analysis shows that the length of the ESTs used when training the HMM is significantly important: the longer, the better. The final results show that it is possible to train an HMM with unsupervised machine learning, but to be comparable to supervised approaches such as ESTScan, further expansion of the HMM is necessary, such as frame-shift correction of ESTs by improving the HMM's ability to choose correctly positioned start codons or nucleotides. Usually the false positive results are due to incorrectly positioned start codons leading to too-short CDS lengths; since no frame-shift correction is implemented, short predicted CDS lengths are not acceptable and are hence not counted as coding regions during prediction. However, when there is a lack of supervised models, an unsupervised HMM is a potential replacement with stable performance, able to be adapted for any eukaryotic organism.
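To make the HMM machinery concrete, here is a toy two-state nucleotide HMM with a scaled forward pass. The states, probabilities and dimensions are illustrative only; the thesis's model would be fitted by unsupervised (Baum-Welch-style) re-estimation rather than fixed by hand:

```python
import numpy as np

NUC = {"A": 0, "C": 1, "G": 2, "T": 3}
A = np.array([[0.95, 0.05],                # transitions: noncoding, coding
              [0.10, 0.90]])
B = np.array([[0.25, 0.25, 0.25, 0.25],    # emissions per state (A,C,G,T)
              [0.20, 0.30, 0.30, 0.20]])
pi = np.array([0.5, 0.5])                  # initial state distribution

def forward_loglik(seq):
    """Log-likelihood of a nucleotide string under the toy HMM,
    computed with the scaled forward algorithm."""
    obs = [NUC[c] for c in seq]
    alpha = pi * B[:, obs[0]]
    c = alpha.sum(); loglik = np.log(c); alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # forward recursion
        c = alpha.sum(); loglik += np.log(c); alpha /= c
    return loglik

print(forward_loglik("ATGGCCATTGTA"))
```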
APA, Harvard, Vancouver, ISO, and other styles
35

Bakir, Mehmet Emin. "Automatic selection of statistical model checkers for analysis of biological models". Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/20216/.

Full text
Abstract
Statistical Model Checking (SMC) blends the speed of simulation with the rigorous analytical capabilities of model checking, and its success has prompted researchers to implement a number of SMC tools whose availability provides flexibility and fine-tuned control over model analysis. However, each tool has its own practical limitations, and different tools have different requirements and performance characteristics. The performance of different tools may also depend on the specific features of the input model or the type of query to be verified. Consequently, choosing the most suitable tool for verifying any given model requires a significant degree of experience, and in most cases it is challenging to predict the right one. The aim of our research has been to simplify the model checking process for researchers in biological systems modelling by simplifying and rationalising the model selection process. This has been achieved through the key contributions listed below.
• We have developed a software component for verification of kernel P (kP) system models, using the NuSMV model checker, and integrated it into a larger software platform (www.kpworkbench.org).
• We surveyed five popular SMC tools, comparing their modelling languages, external dependencies, expressibility of specification languages, and performance. To the best of our knowledge, this is the first known attempt to categorise the performance of SMC tools based on the commonly used property specifications (property patterns) for model checking.
• We have proposed a set of model features which can be used for predicting the fastest SMC tool for biological model verification, and have shown, moreover, that the proposed features both reduce computation time and increase predictive power.
• We used machine learning algorithms for predicting the fastest SMC tool for verification of biological models, and have shown that this approach can successfully predict the fastest SMC tool with over 90% accuracy (see the sketch after this list).
• We have developed a software tool, SMC Predictor, that predicts the fastest SMC tool for a given model and property query, and have made this freely available to the wider research community (www.smcpredictor.com).
Our results show that using our methodology can generate significant savings in the amount of time and resources required for model verification.
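A hedged sketch of the prediction step referenced in the list above: train a classifier on model features and estimate accuracy by cross-validation. The features, tool labels and data below are synthetic placeholders, not the thesis's feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Illustrative features per model, e.g. [n_species, n_reactions, query_type]
X = rng.integers(1, 100, size=(60, 3))
# Illustrative labels: which of three SMC tools was fastest on each model
y = rng.choice(["PRISM", "PLASMA", "MC2"], size=60)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print("cross-validated accuracy:", scores.mean())
```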
APA, Harvard, Vancouver, ISO, and other styles
36

Kovanovic, Vitomir. "Assessing cognitive presence using automated learning analytics methods". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28759.

Full text
Abstract
With the increasing pace of technological change in modern society, there has been growing interest from educators, business leaders, and policymakers in teaching the important higher-order skills identified as necessary for thriving in the present-day globalized economy. In this regard, one of the most widely discussed higher-order skills is critical thinking, whose importance in shaping problem solving, decision making, and logical thinking has been recognized. Within the domain of distance and online education, the Community of Inquiry (CoI) model provides a pedagogical framework for understanding the critical dimensions of student learning and the factors which impact the development of student critical thinking. The CoI model follows the social-constructivist perspective on learning, in which learning is seen as happening both in the individual minds of learners and through the discourse within the group of learners. Central to the CoI model is the construct of cognitive presence, which captures students' cognitive engagement and the development of critical thinking and deep thinking skills. However, the assessment of cognitive presence is a challenging task, particularly given its latent nature and the inherent physical and time separation between students and instructors in distance education settings. One way to address this problem is to make use of the vast amounts of learning data collected by learning systems. This thesis presents novel methods for understanding and assessing the levels of cognitive presence based on learning analytics techniques and the data collected by learning environments. We first outline a comprehensive model for cognitive presence assessment which builds on the well-established evidence-centered design (ECD) assessment framework. The proposed assessment model provides the foundation of the thesis, showing how the developed analytical models and their components fit together and how they can be adjusted for new learning contexts. The thesis presents two distinct and complementary analytical methods for assessing students' cognitive presence and its development. The first method is based on the automated classification of student discussion messages and captures learning as it is observed in student dialogue. The second method relies on the analysis of log data of students' use of the learning platform and captures the individual dimension of the learning process. The developed analytics also extend the current theoretical understanding of the cognitive presence construct through a data-informed operationalization of cognitive presence with different quantitative measures extracted from student use of online discussions. We also examine the methodological challenges of assessing cognitive presence and other forms of cognitive engagement through the analysis of trace data. Finally, with the intent of enabling wider adoption of the CoI model for new online learning modalities, the last two chapters examine the use of the developed analytics within the context of Massive Open Online Courses (MOOCs). Given the substantial differences between traditional online and MOOC contexts, we first evaluate the suitability of the CoI model for MOOC settings and then assess students' cognitive presence using the data collected by the MOOC platform. We conclude the thesis with a discussion of the practical application and impact of the present work and directions for future research.
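
A minimal sketch of the first method, automated classification of discussion messages, might look as follows; the example messages, phase labels and pipeline are illustrative assumptions, not the thesis' actual classifier or feature set.

```python
# Sketch: classify discussion messages into cognitive presence phases
# (triggering, exploration, integration, resolution). Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "I don't understand why the model fails here",   # triggering
    "Maybe we could try a different parameter",      # exploration
    "Combining both ideas explains the results",     # integration
    "We tested it and the solution works",           # resolution
]
phases = ["triggering", "exploration", "integration", "resolution"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(messages, phases)
print(model.predict(["perhaps another approach would help"]))
```
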
APA, Harvard, Vancouver, ISO, and other styles
37

Schmitt, Eugene David. "Control synthesis and stability analysis of a fuzzy Sugeno Model system". Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/18878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Aldokhail, Abdullah M. "Automated Signal to Noise Ratio Analysis for Magnetic Resonance Imaging Using a Noise Distribution Model". University of Toledo Health Science Campus / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=mco1469557255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Germeys, Jasper. "Supervision of the Air Loop in the Columbus Module of the International Space Station". Thesis, Linköpings universitet, Fordonssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133926.

Full text
Abstract
Failure detection and isolation (FDI) is essential for reliable operation of complex autonomous systems and other systems where continuous observation or maintenance is either very costly or otherwise not easily accessible. A benefit of model-based FDI is that, in contrast to design by data clustering, no fault data are needed to detect and isolate a fault. It is, however, limited by the accuracy and complexity of the model used. As models grow more complex, or have multiple interconnections, problems with the traditional methods for FDI emerge. The main objective of this thesis is to utilise the automated methodology presented in [Svärd, 2012] to create a model-based FDI system for the Columbus air loop, a small but crucial part of the life support system on board the European space laboratory Columbus. The process of creating a model-based FDI system, from the creation and validation of the model equations to the design of residuals, test quantities and evaluation logic, is handled in this work, although the latter parts only briefly, which leaves room for future work. This work indicates that the methodology is capable of producing quite decent model-based FDI systems even with poor sensor placement and limited information about the actual design. [Svärd, 2012] Carl Svärd. Methods for Automated Design of Fault Detection and Isolation Systems with Automotive Applications. PhD thesis, Linköping University, Vehicular Systems, The Institute of Technology, 2012.
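
The core idea of model-based FDI, comparing sensor readings with model predictions and thresholding the residual, can be sketched as follows; the first-order model, injected fault and threshold are invented for illustration and are unrelated to the actual Columbus air loop model.

```python
# Sketch: a model-based residual r = y - y_hat with a thresholded test
# quantity. Model, fault and numbers are made up for illustration.
import numpy as np

def model_prediction(u, x_prev, a=0.9, b=0.1):
    # Simple first-order model x[k] = a*x[k-1] + b*u[k]
    return a * x_prev + b * u

u = np.ones(100)                       # actuator command (e.g. fan speed)
y = 0.1 / (1 - 0.9) * np.ones(100)     # fault-free steady-state reading
y[60:] += 0.3                          # inject a sensor fault at k = 60

x = y[0]
alarms = []
for k in range(1, len(u)):
    x = model_prediction(u[k], x)
    r = y[k] - x                       # residual
    if abs(r) > 0.1:                   # thresholded test quantity
        alarms.append(k)

print("fault detected at samples:", alarms[:5])
```
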
APA, Harvard, Vancouver, ISO, and other styles
40

Sohaib, Muhammad. "Parameterized Automated Generic Model for Aircraft Wing Structural Design and Mesh Generation for Finite Element Analysis". Thesis, Linköpings universitet, Maskinkonstruktion, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-71264.

Full text
Abstract
This master's thesis presents the development of a parameterized automated generic model for the structural design of an aircraft wing. Furthermore, in order to perform finite element analysis on the aircraft wing geometry, the process of finite element mesh generation is automated. Aircraft conceptual design is inherently a multi-disciplinary design process which involves a large number of disciplines and areas of expertise. This thesis investigates how high-end CAD software can be used in the early stages of an aircraft design process, especially for the design of an aircraft wing and its structural entities, wing spars and wing ribs. The generic model developed in this regard is able to automate the process of creation and modification of the aircraft wing geometry based on a series of parameters which define the geometrical characteristics of the wing panels, wing spars and wing ribs. Two different approaches are used for the creation of the generic model of an aircraft wing, "Knowledge Pattern" and "PowerCopy with Visual Basic Scripting", both using the CATIA V5 software, and a performance comparison of the generic wing model based on these two approaches is also performed. In the early stages of the aircraft design process, an estimate of the structural characteristics of the aircraft wing is desirable, for which a surface structural analysis (using 2D mesh elements) is more suitable. In this regard, the process of finite element mesh generation for the generic wing model is automated. The finite element mesh is generated for the wing panels, wing spars and wing ribs. Furthermore, the finite element mesh is updated on any change in the geometry or shape of the wing panels, wing spars or wing ribs, ensuring that all mesh elements are always properly connected at the nodes. The automatically generated FE mesh can be used for performing structural analysis on an aircraft wing.
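
The parameter-driven idea, a small set of parameters fully determining the wing layout so the geometry can be regenerated on any change, can be sketched outside CATIA as follows; the parameter names and values are illustrative assumptions, not the thesis' actual model.

```python
# Sketch: a handful of parameters determine rib stations and spar positions,
# so the layout regenerates automatically when a parameter changes.
from dataclasses import dataclass

@dataclass
class WingParams:
    span: float              # half-span in metres (assumed value below)
    root_chord: float
    tip_chord: float
    n_ribs: int
    front_spar_frac: float   # chordwise spar positions as chord fractions
    rear_spar_frac: float

def rib_stations(p: WingParams):
    """Yield spanwise position, local chord and spar positions per rib."""
    for i in range(p.n_ribs):
        frac = i / (p.n_ribs - 1)
        y = frac * p.span
        chord = p.root_chord + frac * (p.tip_chord - p.root_chord)
        yield y, chord, p.front_spar_frac * chord, p.rear_spar_frac * chord

wing = WingParams(span=16.0, root_chord=5.0, tip_chord=1.5,
                  n_ribs=10, front_spar_frac=0.2, rear_spar_frac=0.65)
for y, c, fs, rs in rib_stations(wing):
    print(f"rib at y={y:5.2f} m, chord={c:4.2f} m, spars at {fs:.2f}/{rs:.2f} m")
```
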
APA, Harvard, Vancouver, ISO, and other styles
41

Moffitt, Kevin Christopher. "Toward Enhancing Automated Credibility Assessment: A Model for Question Type Classification and Tools for Linguistic Analysis". Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145456.

Full text
Abstract
The three objectives of this dissertation were to develop a question type model for predicting linguistic features of responses to interview questions, to create a tool for linguistic analysis of documents, and to use lexical bundle analysis to identify linguistic differences between fraudulent and non-fraudulent financial reports. First, the Moffitt Question Type Model (MQTM) was developed to aid in predicting linguistic features of responses to questions. It focuses on three context-independent features of questions: tense (past vs. present vs. future), perspective (introspective vs. extrospective), and abstractness (concrete vs. conjectural). The MQTM was tested on responses to real-world pre-polygraph examination questions in which guilty (n = 27) and innocent (n = 20) interviewees were interviewed. The responses were grouped according to question type, and the linguistic cues from each group's transcripts were compared using independent-samples t-tests, with the following results: future tense questions elicited more future tense words than either past or present tense questions, and present tense questions elicited more present tense words than past tense questions; introspective questions elicited more cognitive process words and affective words than extrospective questions; and conjectural questions elicited more auxiliary verbs, tentativeness words, and cognitive process words than concrete questions. Second, a tool for linguistic analysis of text documents, Structured Programming for Linguistic Cue Extraction (SPLICE), was developed to help researchers and software developers compute linguistic values for dictionary-based cues and cues that require natural language processing techniques. SPLICE implements a GUI for researchers and an API for developers. Finally, an analysis of 560 lexical bundles detected linguistic differences between 101 fraudulent and 101 non-fraudulent 10-K filings. Phrases such as "the fair value of" and "goodwill and other intangible assets" were used at a much higher rate in fraudulent 10-Ks. A principal component analysis reduced the number of variables to 88 orthogonal components, which were used in a discriminant analysis that classified the documents with 71% accuracy. The findings of this dissertation suggest that the MQTM could be used to predict features of interviewee responses in most contexts and that lexical bundle analysis is a viable tool for discriminating between fraudulent and non-fraudulent text.
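
The group comparison described above rests on independent-samples t-tests; a minimal sketch with synthetic cue counts might look like this (the data are invented, not the pre-polygraph transcripts).

```python
# Sketch: independent-samples t-test on a linguistic cue between two
# question-type groups. Counts below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical counts of future-tense words per response.
future_tense_qs = rng.poisson(4.0, size=25)   # future-tense questions
past_tense_qs = rng.poisson(1.5, size=25)     # past-tense questions

t, p = stats.ttest_ind(future_tense_qs, past_tense_qs, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```
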
APA, Harvard, Vancouver, ISO, and other styles
42

Larsson, Jonatan. "Automatic Test Generation and Mutation Analysis using UPPAAL SMC". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-36415.

Full text
Abstract
Software testing is an important process for ensuring the quality of software. As the complexity of software increases, traditional manual testing becomes increasingly complex and time consuming. In most embedded systems, designing software with as few errors as possible is critical, and resource usage is also of concern for proper behaviour because of the very nature of embedded systems. To design reliable and energy-efficient systems, methods are needed to detect hot spots of resource consumption and correct them prior to deployment. To reduce testing effort, model-based testing can be used, a testing method that allows for automatic testing of model-based systems. Model-based testing has not been investigated extensively for revealing resource usage anomalies in embedded systems. UPPAAL SMC is a statistical model checking tool which can be used to model a system's resource usage, but it currently lacks support for automatic test generation and test selection. In this thesis we provide this support with a framework for automatic test generation and test selection using mutation analysis, a method for minimizing the generated test suite while maximizing the fault coverage, and a tool implementing the framework on top of UPPAAL SMC. The thesis also evaluates the framework on a Brake-by-Wire industrial system. Our results show that, for a Brake-by-Wire system simulated on a consumer processor with five mutants, we could in the best case find a test case that achieved a 100% mutation score within one minute, and confidently identify at least one test case that achieved a full mutation score within five minutes. The evaluation shows that this framework is applicable and relatively efficient on an industrial system for reducing the effort of testing targeted at continuous resource usage.
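
Mutation-analysis-based test selection can be illustrated in miniature: given a record of which tests kill which mutants, a greedy pass selects a small suite with maximal mutation score. The kill matrix below is a made-up example, not the Brake-by-Wire evaluation.

```python
# Sketch: greedy test selection from a (hypothetical) kill matrix,
# maximizing mutation score with as few tests as possible.
kills = {                      # test id -> set of mutants it kills
    "t1": {1, 2},
    "t2": {2, 3, 4},
    "t3": {5},
    "t4": {1, 3},
}
all_mutants = set().union(*kills.values())

selected, killed = [], set()
while killed != all_mutants:
    best = max(kills, key=lambda t: len(kills[t] - killed))
    if not kills[best] - killed:
        break                  # remaining mutants are not killable
    selected.append(best)
    killed |= kills[best]

score = len(killed) / len(all_mutants)
print(selected, f"mutation score = {score:.0%}")
```
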
APA, Harvard, Vancouver, ISO, and other styles
43

Sanderson, Conrad, and conradsand@ieee.org. "Automatic Person Verification Using Speech and Face Information". Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030422.105519.

Full text
Abstract
Identity verification systems are an important part of our everyday life. A typical example is the Automatic Teller Machine (ATM), which employs a simple identity verification scheme: the user is asked to enter their secret password after inserting their ATM card; if the password matches the one prescribed to the card, the user is allowed access to their bank account. This scheme suffers from a major drawback: only the validity of the combination of a certain possession (the ATM card) and certain knowledge (the password) is verified. The ATM card can be lost or stolen, and the password can be compromised. Thus new verification methods have emerged, where the password has either been replaced by, or used in addition to, biometrics such as the person's speech, face image or fingerprints. Apart from the ATM example described above, biometrics can be applied to other areas, such as telephone & internet based banking, airline reservations & check-in, as well as forensic work and law enforcement applications. Biometric systems based on face images and/or speech signals have been shown to be quite effective. However, their performance easily degrades in the presence of a mismatch between training and testing conditions. For speech-based systems this is usually in the form of channel distortion and/or ambient noise; for face-based systems it can be in the form of a change in the illumination direction. A system which uses more than one biometric at the same time is known as a multi-modal verification system; it is often comprised of several modality experts and a decision stage. Since a multi-modal system uses complementary discriminative information, lower error rates can be achieved; moreover, such a system can also be more robust, since the contribution of the modality affected by environmental conditions can be decreased. This thesis makes several contributions aimed at increasing the robustness of single- and multi-modal verification systems. Some of the major contributions are listed below. The robustness of a speech-based system to ambient noise is increased by using Maximum Auto-Correlation Value (MACV) features, which utilize information from the source part of the speech signal. A new facial feature extraction technique is proposed (termed DCT-mod2), which utilizes polynomial coefficients derived from 2D Discrete Cosine Transform (DCT) coefficients of spatially neighbouring blocks. The DCT-mod2 features are shown to be robust to an illumination direction change as well as being over 80 times quicker to compute than 2D Gabor wavelet derived features. The fragility of Principal Component Analysis (PCA) derived features to an illumination direction change is solved by introducing a pre-processing step utilizing the DCT-mod2 feature extraction. We show that the enhanced PCA technique retains all the positive aspects of traditional PCA (that is, robustness to compression artefacts and white Gaussian noise) while also being robust to the illumination direction change. Several new methods for use in fusion of speech and face information under noisy conditions are proposed; these include a weight adjustment procedure, which explicitly measures the quality of the speech signal, and a decision stage comprised of a structurally noise-resistant piece-wise linear classifier, which attempts to minimize the effects of noisy conditions via structural constraints on the decision boundary.
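
The block-based DCT feature idea behind DCT-mod2 can be sketched roughly as follows; the block size, number of retained coefficients, coefficient ordering and delta scheme are simplified assumptions, not the exact DCT-mod2 definition.

```python
# Simplified sketch in the spirit of DCT-mod2: take the 2D DCT of each
# 8x8 block and replace low-order coefficients with deltas to the
# horizontally neighbouring block. Details are deliberately simplified.
import numpy as np
from scipy.fft import dctn

def block_dct_features(img, block=8, n_coef=6):
    h, w = img.shape
    feats = {}
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            d = dctn(img[r:r + block, c:c + block], norm="ortho")
            feats[(r, c)] = d.flatten()[:n_coef]   # illustrative ordering
    out = {}
    for (r, c), f in feats.items():
        right = feats.get((r, c + block))
        out[(r, c)] = f if right is None else np.concatenate(
            [right[:3] - f[:3], f[3:]])            # horizontal deltas
    return out

img = np.random.default_rng(2).random((32, 32))
print(len(block_dct_features(img)))  # 16 blocks of 8x8 in a 32x32 image
```
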
APA, Harvard, Vancouver, ISO, and other styles
44

Andersson, Jonny. "Automatic test vector generation and coverage analysis in model-based software development". Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5204.

Full text
Abstract

Thorough testing of software is necessary to assure the quality of a product before it is released. The testing process requires substantial resources in software development. Model-based software development provides new possibilities to automate parts of the testing process; by automating tests, valuable time can be saved. This thesis focuses on different ways to utilize models for automatic generation of test vectors and on how test coverage analysis can be used to assure the quality of a test suite or to find "dead code" in a model. Different test-automation techniques have been investigated and applied to a model of an adaptive cruise control (ACC) system used at Scania. Source code has been generated automatically from the model, and model coverage and code coverage have therefore been compared. The work on this thesis resulted in a new method for creating test vectors for models, based on a combinatorial test technique.
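
A combinatorial test technique in its simplest form enumerates combinations of input-signal values as test vectors, as in the sketch below; the ACC-style signal names and values are invented, and pairwise techniques would shrink the resulting set further.

```python
# Sketch: exhaustive combinatorial test-vector generation over a small
# (hypothetical) input space.
from itertools import product

signals = {
    "target_speed": [0, 30, 90],      # km/h
    "lead_vehicle": [True, False],
    "driver_brake": [True, False],
}

test_vectors = [dict(zip(signals, combo)) for combo in product(*signals.values())]
for v in test_vectors[:4]:
    print(v)
print(f"{len(test_vectors)} vectors in total")
```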

APA, Harvard, Vancouver, ISO, and other styles
45

Sturgill, David Matthew. "Comparative Genome Analysis of Three Brucella spp. and a Data Model for Automated Multiple Genome Comparison". Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/10163.

Full text
Abstract
Comparative analysis of multiple genomes presents many challenges, ranging from the management of information about thousands of local similarities to the definition of features by combining evidence from multiple analyses and experiments. This research represents the development stage of a database-backed pipeline for comparative analysis of multiple genomes. The genomes of three recently sequenced species of Brucella were compared, and a superset of known and hypothetical coding sequences was identified to be used in the design of a discriminatory genomic cDNA array for comparative functional genomics experiments. Comparisons were made of coding regions from the public, annotated sequence of B. melitensis (GenBank) to the annotated sequence of B. suis (TIGR) and to the newly sequenced B. abortus (personal communication, S. Halling, National Animal Disease Center, USDA). A systematic approach to the analysis of multiple genome sequences is described, including a data model for storing defined features together with the necessary descriptive information, such as input parameters and scores from the methods used to define them. A collection of adjacency relationships between features is also stored, creating a unified database that can be mined for patterns of features which repeat among or within genomes. The biological utility of the data model was demonstrated by a detailed analysis of the multiple genome comparison used to create the sample data set. This examination of genetic differences between three Brucella species with different virulence patterns and host preferences enabled investigation of the genomic basis of virulence. In the B. suis genome, seventy-one differentiating genes were found, including a contiguous 17.6 kb region unique to the species. Although only one unique species-specific gene was identified in the B. melitensis genome and none in the B. abortus genome, seventy-nine differentiating genes were found to be present in only two of the three Brucella species. These differentiating features may be significant in explaining differences in virulence or host specificity. RT-PCR analysis was performed to determine whether these genes are transcribed in vitro. Detailed comparisons were performed on a putative B. suis pathogenicity island (PAI). An overview of these genomic differences and a discussion of their significance in the context of host preference and virulence is presented.
Master of Science
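
The data-model idea, features stored with their provenance plus an adjacency relation between neighbouring features, can be sketched with a toy relational schema; the tables, columns and rows below are illustrative assumptions, not the thesis' actual schema.

```python
# Sketch: store defined features with method/score provenance and an
# adjacency relation, then query across them. Schema is illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE feature (
    id INTEGER PRIMARY KEY,
    genome TEXT, start INTEGER, stop INTEGER,
    method TEXT, score REAL
);
CREATE TABLE adjacency (
    left_id INTEGER REFERENCES feature(id),
    right_id INTEGER REFERENCES feature(id)
);
""")
con.execute("INSERT INTO feature VALUES (1,'B. suis',100,940,'glimmer',0.97)")
con.execute("INSERT INTO feature VALUES (2,'B. suis',1010,2200,'blast',0.88)")
con.execute("INSERT INTO adjacency VALUES (1, 2)")

row = con.execute("""
SELECT f1.start, f2.stop FROM adjacency
JOIN feature f1 ON f1.id = left_id
JOIN feature f2 ON f2.id = right_id
""").fetchone()
print(row)   # span covered by the adjacent pair
```
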
APA, Harvard, Vancouver, ISO, and other styles
46

Badenhorst, Jacob Andreas Cornelius. "Data sufficiency analysis for automatic speech recognition / by J.A.C. Badenhorst". Thesis, North-West University, 2009. http://hdl.handle.net/10394/3994.

Full text
Abstract
The languages spoken in developing countries are diverse, and most are currently under-resourced from an automatic speech recognition (ASR) perspective. In South Africa alone, 10 of the 11 official languages belong to this category. Given the potential for future applications of speech-based information systems such as spoken dialogue systems (SDSs) in these countries, the design of minimal ASR audio corpora is an important research area. Specifically, current ASR systems utilise acoustic models to represent acoustic variability, and effective ASR corpus design aims to optimise the amount of relevant variation within training data while minimising the size of the corpus. An investigation of the effect that different amounts and types of training data have on these models is therefore needed. This dissertation gives specific consideration to the data sufficiency principles that apply to the training of acoustic models. The investigation of this task led to the following main achievements: 1) We define a new stability measurement protocol that provides the capability to view the variability of ASR training data. 2) This protocol allows for the investigation of the effect that various acoustic model complexities and ASR normalisation techniques have on ASR training data requirements. Specific trends with regard to the data requirements for different phone categories, and how these are affected by various modelling strategies, are observed. 3) Based on this analysis, acoustic distances between phones are estimated across language borders, paving the way for further research in cross-language data sharing. Finally, the knowledge obtained from these experiments is applied to perform a data sufficiency analysis of a new speech recognition corpus of South African languages: the Lwazi ASR corpus. The findings correlate well with initial phone recognition results and yield insight into the number of speakers required for the development of minimal telephone ASR corpora.
Thesis (M. Ing. (Computer and Electronic Engineering))--North-West University, Potchefstroom Campus, 2009.
APA, Harvard, Vancouver, ISO, and other styles
47

Ponge, Julien Nicolas, Computer Science & Engineering, Faculty of Engineering, UNSW. "Model based analysis of time-aware web services interactions". Publisher: University of New South Wales. Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/43525.

Full text
Abstract
Web services are increasingly gaining acceptance as a framework for facilitating application-to-application interactions within and across enterprises. It is commonly accepted that a service description should include not only the interface, but also the business protocol supported by the service. The present work focuses on the formalization of the important category of protocols that include time-related constraints (called timed protocols), and the impact of time on compatibility and replaceability analysis. We formalized the following timing constraints: CInvoke constraints define time windows of availability, while MInvoke constraints define expiration deadlines. We extended techniques for compatibility and replaceability analysis between timed protocols by using a semantics-preserving mapping between timed protocols and timed automata, leading to the novel class of protocol timed automata (PTA). Specifically, PTA exhibit silent transitions that cannot be removed in general, yet they are closed under complementation, making every type of compatibility or replaceability analysis decidable. Finally, we implemented our approach in the context of a larger project called ServiceMosaic, a model-driven framework for web service life-cycle management.
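
The two constraint kinds can be illustrated with a toy availability check; the encoding below is a heavy simplification of the formal timed-protocol semantics, and the transition names and times are invented.

```python
# Sketch: CInvoke as a time window in which a transition may fire,
# MInvoke as a deadline after which it expires. Toy encoding only.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Transition:
    name: str
    window: Optional[Tuple[float, float]] = None  # CInvoke: (earliest, latest)
    deadline: Optional[float] = None              # MInvoke: expires after this

def can_fire(t: Transition, clock: float) -> bool:
    """True if the transition may fire at the given clock value."""
    if t.window is not None and not (t.window[0] <= clock <= t.window[1]):
        return False
    if t.deadline is not None and clock > t.deadline:
        return False
    return True

order = Transition("submitOrder", window=(0.0, 3600.0))
cancel = Transition("cancelOrder", deadline=86400.0)
print(can_fire(order, 1800.0), can_fire(cancel, 90000.0))  # True False
```
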
APA, Harvard, Vancouver, ISO, and other styles
48

Svärd, Carl, and Henrik Wassén. "Development of Methods for Automatic Design of Residual Generators". Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7931.

Full text
Abstract

Legislation requires substantially lowered emissions and that all manufactured trucks be equipped with an On-Board Diagnosis (OBD) system. One approach to designing an OBD system is to use model-based diagnosis and residual generation. At Scania CV AB, a method for automatic design of a diagnosis system from a model has been developed, but there are still possibilities for improvement to obtain more and better residual generators. The main objective of this thesis is to analyze and improve the existing method.

A theoretical outline of two methods using different causality assumptions is presented, and the differences are analyzed and discussed. The stability of residual generators is analyzed, and a method for constructing stable residual generators, together with its consequences for the diagnosis system, is presented.

Methods using integral and derivative causality are found not to be equivalent for all dynamic systems, with the result that a diagnosis system utilizing both methods is preferable for detectability reasons. A stable residual generator can be constructed from an unstable residual generator. The method for stabilizing a residual generator affects the fault sensitivity of the residual generator and the fault detectability properties of the diagnosis system.


APA, Harvard, Vancouver, ISO, and other styles
49

Martinez, Salvador. "Automatic reconstruction and analysis of security policies from deployed security components". PhD thesis, Ecole des Mines de Nantes, 2014. http://tel.archives-ouvertes.fr/tel-01065944.

Full text
Abstract
Security is a critical concern for any information system. Security properties such as confidentiality, integrity and availability need to be enforced in order to make systems safe. In complex environments, where information systems are composed of a number of heterogeneous subsystems, each subsystem plays a key role in the global system security. For the specific case of access control, access-control policies may be found in several components (databases, networks and applications), all, supposedly, working together. Nevertheless, since most of the time these policies have been manually implemented and/or evolved separately, they easily become inconsistent. In this context, discovering and understanding which security policies are actually being enforced by the information system emerges as a critical necessity. The main challenge is bridging the gap between the vendor-dependent security features and a higher-level representation that expresses these policies in a way that abstracts from the specificities of concrete system components, and is thus easier to understand and reason about. This high-level representation would also allow us to implement all evolution/refactoring/manipulation operations on the security policies in a reusable way. In this work we propose such a reverse engineering and integration mechanism for access-control policies, relying on model-driven technologies to achieve this goal.
APA, Harvard, Vancouver, ISO, and other styles
50

Thongmal, Larsson Marie. "A model for material handling improvements when using automated storage systems: A case study". Thesis, Linnaeus University, School of Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-6350.

Full text
Abstract

The purpose of this thesis is to create a model of how to organize the placement of articles in an automated storage system in order to reduce the time and cost related to extractions. The model was developed during an investigation at a case company, where a comprehensive study of the material handling identified bottlenecks, of which one was chosen for further investigation: the automated storage system. The automated storage system is newly installed equipment, which required new working methods to be incorporated into the existing working environment. An ABC analysis was used to motivate how the articles should be placed in the automated storage. The goal of the new way of handling material was to spend as little time as possible on the extraction of material, since material handling processes are a large contributor to waste activities. This resulted in the development of the model, and the suggestion given to the case company is to place the most frequently extracted articles close to the users, as sketched below. However, the advantages of rearrangement must be weighed against its disadvantages, since smaller improvements will not eliminate material handling entirely.
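
A minimal sketch of the ABC placement rule, assuming typical 80/95% class boundaries and invented extraction frequencies:

```python
# Sketch: rank articles by extraction frequency and place class-A items
# closest to the users. Frequencies and 80/95% thresholds are assumed.
extractions = {"A12": 520, "B07": 260, "C33": 130, "D91": 60, "E45": 30}

total = sum(extractions.values())
cum = 0.0
placement = {}
for art, freq in sorted(extractions.items(), key=lambda kv: -kv[1]):
    cum += freq / total
    zone = "A (nearest)" if cum <= 0.80 else "B" if cum <= 0.95 else "C (farthest)"
    placement[art] = zone

print(placement)
```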



APA, Harvard, Vancouver, ISO, and other styles
