Dissertations / Theses on the topic 'Automated model transformation'


Consult the top 32 dissertations / theses for your research on the topic 'Automated model transformation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Lin, Yuehua. "A model transformation approach to automated model evolution." Birmingham, Ala. : University of Alabama at Birmingham, 2007. http://www.mhsl.uab.edu/dt/2007p/lin.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Holm, Gustav. "Automated Model Transformation for Cyber-Physical Power System Models." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214750.

Full text
Abstract:
Standardized information and mathematical models, which model the characteristics of the power generation and power transmission systems, are requirements for future development and maintenance of different applications to operate the electrical grid. Available databases such as Nordpool provide large amounts of data for power supply and demand [1]. The typical misconception with open availability of data is that existing power system software tools can interact with and process this data. Difficulties occur mainly for two reasons. The first one is the amount of data produced. When the topology of the electrical grid changes, e.g. when a switch opens or closes, the flow of electrical power changes. This event produces changes in generation, transmission and distribution of the energy, and different data sets are produced. The second problem is the representation of information [2]. There are a limited number of software tools that can analyze this data, but each software tool requires a specific data format structure to run. Dealing with these difficulties requires an effective way to transform the provided data representation into new data structures that can be used in different execution platforms. This project aims to create a generic Model-to-Text (M2T) transformation capable of transforming standardized power system information models into input files executable by the Power System Analysis Tool (PSAT). During this project, a working M2T transformation was never achieved, as missing functionality in some programs connected to sub-processes resulted in unexpected problems. This led to a new task of updating the information model interpreter PyCIM. This task is partially completed, and the interpreter can load basic power system information models.
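For readers unfamiliar with what such a Model-to-Text step looks like in practice, the following is a minimal sketch, assuming PyCIM's `cimread` reader mentioned in the abstract; the output format is a simplified placeholder, not real PSAT input syntax, and the file names are hypothetical.

```python
# Sketch of a Model-to-Text (M2T) step in the spirit of the abstract:
# read a CIM power-system model with PyCIM and serialize selected
# elements as text. The output format is a simplified stand-in, NOT
# real PSAT input syntax; the file names are hypothetical.
from PyCIM import cimread  # PyCIM's CIM RDF/XML reader

def cim_to_text(cim_file, out_file):
    objects = cimread(cim_file)  # dict: RDF ID -> CIM object
    with open(out_file, "w") as out:
        for uid, obj in objects.items():
            # A real M2T template would map each CIM class (Bus,
            # Line, Generator, ...) to the corresponding PSAT row.
            kind = type(obj).__name__
            name = getattr(obj, "name", "?")
            out.write(f"{kind}\t{uid}\t{name}\n")

if __name__ == "__main__":
    cim_to_text("grid_model.xml", "grid_model.txt")
```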
APA, Harvard, Vancouver, ISO, and other styles
3

Abdul, Sani Asmiza. "Towards automated formal analysis of model transformation specifications." Thesis, University of York, 2013. http://etheses.whiterose.ac.uk/8641/.

Full text
Abstract:
In Model-Driven Engineering, model transformation is a key model management operation, used to translate models between notations. Model transformation can be used for many engineering activities, for instance as a preliminary to merging models from different metamodels, or to generate code from diagrammatic models. A mapping model needs to be developed (the transformation specification) to represent relations between concepts from the metamodels. The evaluation of the mapping model creates new challenges, both for conventional verification and validation, and also in guaranteeing that models generated by applying the transformation specification to source models still retain the intention of the initial transformation requirements. Most model transformation work creates and evaluates a transformation specification in an ad-hoc manner. The specifications are usually unstructured, and the quality of the transformations can only be assessed when the transformations are used. Analysis is not systematically applied even when the transformations are in use, so there is no way to determine whether the transformations are correct and consistent. This thesis addresses the problem of systematic creation and analysis of model transformation, via a facility for planning and designing model transformations which have conceptual-level properties that are tractable to formal analysis. We propose a framework that provides steps to systematically build a model transformation specification, a visual notation for specifying model transformation, and a template-based approach for producing a formal specification that is not just structure-equivalent but also amenable to formal analysis. The framework allows evaluation of syntactic and semantic correctness of generated models, metamodel coverage, and semantic correctness of the transformations themselves, with the help of snapshot analysis using patterns.
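As a toy illustration of one of the analyses named in the abstract (metamodel coverage), the sketch below checks which concepts of two invented metamodels are left untouched by a set of mapping rules; all names are made up for the example.

```python
# Toy version of one analysis named above: metamodel coverage.
# Given mapping rules between source and target concepts, report
# which concepts no rule touches. All names are invented.
source_metamodel = {"Class", "Attribute", "Association"}
target_metamodel = {"Table", "Column", "ForeignKey"}
rules = [("Class", "Table"), ("Attribute", "Column")]

covered_src = {s for s, _ in rules}
covered_tgt = {t for _, t in rules}
print("uncovered source concepts:", source_metamodel - covered_src)
print("uncovered target concepts:", target_metamodel - covered_tgt)
```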
APA, Harvard, Vancouver, ISO, and other styles
4

Shah, Seyyed Madasar Ali. "Model transformation dependability evaluation by the automated creation of model generators." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3407/.

Full text
Abstract:
This thesis is on the automatic creation of model generators to assist the validation of model transformations. The model-driven software development methodology advocates models as the main artefact to represent software during development. Such models are automatically converted, by transformation tools, to apply in different stages of development. In one application of the method, it becomes possible to synthesise software implementations from design models. However, the transformations used to convert models are man-made, and so prone to development error. An error in a transformation can be transmitted to the created software, potentially creating many invalid systems. Evaluating that model transformations are reliable is fundamental to the success of modelling as a principal software development practice. Models generated via the technique presented in this thesis can be applied to validate transformations. In several existing transformation validation techniques, some form of conversion is employed. However, those techniques do not apply to validate the conversions used therein. A defining feature of the present technique is the utilization of transformations, making the technique self-hosting. That is, an implementation of the presented technique can create generators to assist the validation of model transformations, and to assist the validation of that implementation of the technique itself.
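To make the idea of a model generator concrete, here is a minimal, hypothetical sketch: it produces random instances of a toy metamodel that could be fed to a transformation under validation. Real generators would respect the full metamodel and its well-formedness constraints.

```python
# Hypothetical model generator: emit random instances of a toy
# metamodel (typed nodes plus containment edges) that a transformation
# under validation could consume. Real generators enforce the full
# metamodel and its constraints.
import random

def generate_model(n_nodes, types=("Class", "Attribute"), seed=None):
    rng = random.Random(seed)
    nodes = [(i, rng.choice(types)) for i in range(n_nodes)]
    # every Attribute is owned by some randomly chosen Class
    classes = [i for i, t in nodes if t == "Class"] or [0]
    edges = [(rng.choice(classes), i) for i, t in nodes if t == "Attribute"]
    return {"nodes": nodes, "edges": edges}

for k in range(3):  # three random test models
    print(generate_model(5, seed=k))
```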
APA, Harvard, Vancouver, ISO, and other styles
5

Liang, Dong. "Automatic generation of software applications." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2014. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-149742.

Full text
Abstract:
The Model Driven Architecture (MDA) allows moving software development from the time-consuming and error-prone level of writing program code to the next higher level of modeling. In order to gain benefit from this innovative technology, it is necessary to satisfy two requirements: first, the creation of compact, complete and correct platform independent models (PIM), and second, the development of a flexible and extensible model transformation framework taking into account frequent changes of the target platform. In this thesis a platform-based methodology is developed to create PIMs by abstracting common modeling elements into a platform independent modeling library called the Design Platform Model (DPM). The DPM contains OCL-based types for modeling primitive and collection types, a platform independent GUI toolkit, as well as other common modeling elements, such as those for IO operations. Furthermore, a DPM profile containing diverse domain-specific and design-pattern-based stereotypes is also developed to create PIMs with high-level semantics. The behavior in PIMs is specified using an OCL-like action language called eXecutable OCL (XOCL), which is also developed in this thesis. For model transformation, the model compiler MOCCA is developed based on a flexible and extensible architecture. The model mapper components in the current version of MOCCA are able to map desktop applications onto the JSE platform, and both the business object layer and the persistence layer of three-layered enterprise applications onto the JEE platform and the SAP ABAP platform. The entire model transformation process finishes with complete code generation.
APA, Harvard, Vancouver, ISO, and other styles
6

Sonntag, Christian [Verfasser]. "Model Transformations for the Engineering of Complex Automated Systems / Christian Sonntag." Aachen : Shaker, 2018. http://d-nb.info/1188550268/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Brown, Stephen Anthony. "Models for automatic differentiation : a conceptual framework for exploiting program transformation." Thesis, University of Hertfordshire, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Jinyu. "Soft margin estimation for automatic speech recognition." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26613.

Full text
Abstract:
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Dr. Chin-Hui Lee; Committee Member: Dr. Anthony Joseph Yezzi; Committee Member: Dr. Biing-Hwang (Fred) Juang; Committee Member: Dr. Mark Clements; Committee Member: Dr. Ming Yuan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Tiexin. "A study to define an automatic model transformation approach based on semantic and syntactic comparisons." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2015. http://www.theses.fr/2015EMAC0015/document.

Full text
Abstract:
Models are increasingly used, both to describe a viewpoint of a complex system and to exchange information. However, sharing information by transferring it from one model to another is an issue related to the interoperability of systems. This problem can be approached in three ways: integrated (all models identical), unified (all models refer to a pivot model), or federated (no specific rules on the models). Although standards exist, they are rarely respected rigorously, so the federated approach seems to be the most realistic one. However, this approach is complex, because different models, even when they share common concepts, may use very heterogeneous structures and vocabularies to describe the same concept. Therefore, the common concepts of the different models must be identified before defining the transformation rules for converting one format into another. This thesis proposes a methodology to achieve these goals. It is based, on the one hand, on a meta-meta-model that unifies the description of model structure, i.e. the meta-model, and, on the other hand, on computing a distance between each pair of model elements, from which the transformation rules are deduced. This distance measure reflects both syntactic distance (different spellings of the same term) and semantic distance (related to the use of synonyms). The search for synonyms relies on a knowledge base represented as an ontology, such as WordNet.
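The combined syntactic/semantic distance described above can be sketched as follows, using Python's difflib for string similarity and NLTK's WordNet interface for synonymy; the 50/50 weights and the Wu-Palmer measure are illustrative assumptions, not the thesis' exact formula.

```python
# Sketch of a combined distance: a syntactic part (string similarity)
# plus a semantic part (WordNet-based synonymy) via NLTK. The weights
# and the Wu-Palmer measure are illustrative assumptions.
from difflib import SequenceMatcher
from nltk.corpus import wordnet as wn  # needs nltk.download('wordnet')

def syntactic_sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_sim(a, b):
    best = 0.0
    for s1 in wn.synsets(a):
        for s2 in wn.synsets(b):
            best = max(best, s1.wup_similarity(s2) or 0.0)
    return best

def element_distance(a, b, w_syn=0.5, w_sem=0.5):
    return 1.0 - (w_syn * syntactic_sim(a, b) + w_sem * semantic_sim(a, b))

print(element_distance("car", "automobile"))  # small: strong synonyms
print(element_distance("color", "colour"))    # small: spelling variants
```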
APA, Harvard, Vancouver, ISO, and other styles
10

Amer, Hoda. "Automatic transformation of UML software specification into LQN performance models using graph grammar techniques." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ61015.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Provost, Julien. "Test de conformité de contrôleurs logiques spécifiés en grafcet." Thesis, Cachan, Ecole normale supérieure, 2011. http://www.theses.fr/2011DENS0029/document.

Full text
Abstract:
The work presented in this PhD thesis deals with the generation and implementation of test sequences for conformance testing of logic controllers. Within this work, Grafcet (IEC 60848 (2002)), a graphical specification language used in industry, has been selected as the specification model. The logic controllers mainly considered are Programmable Logic Controllers (PLCs). In order to validate the execution of conformance tests of critical control systems, this thesis presents: - A formalization of the Grafcet specification language. To apply the usual verification and validation methods, the behavior must be expressed through formal models. However, in industry, the models used to describe functional specifications are chosen for their expressive power and usability, and these models rarely have a formal semantics. - A study of test sequence execution and an analysis of the verdicts obtained when several logical inputs are changed simultaneously. A series of experiments made it possible to quantify, for different configurations of the implementation under test, the rate of erroneous verdicts due to these simultaneous changes. - A definition of the SIC-testability criterion for an implementation. This criterion, determined from the Grafcet specification, defines the ability of an implementation to be tested without any erroneous verdict. The automatic generation of test sequences that minimize the risk of erroneous verdicts is then studied.
APA, Harvard, Vancouver, ISO, and other styles
12

Vitorino, dos Santos Filho Jairson. "CHROME: a model-driven component-based rule engine." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1638.

Full text
Vitorino dos Santos Filho, Jairson; Pierre Louis Robin, Jacques. CHROME: a model-driven component-based rule engine. 2009. Tese (Doutorado). Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2009.
APA, Harvard, Vancouver, ISO, and other styles
13

Koc, San Dilek. "Approaches For Automatic Urban Building Extraction And Updating From High Resolution Satellite Imagery." Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12610501/index.pdf.

Full text
Abstract:
Approaches were developed for building extraction and updating from high resolution satellite imagery. The developed approaches include two main stages: (i) detecting the building patches and (ii) delineating the building boundaries. The building patches are detected from high resolution satellite imagery using the Support Vector Machines (SVM) classification, which is performed for both the building extraction and updating approaches. In the building extraction part of the study, the previously detected building patches are delineated using the Hough transform and boundary tracing based techniques. In the Hough transform based technique, the boundary delineation is carried out using the processing operations of edge detection, Hough transformation, and perceptual grouping. In the boundary tracing based technique, the detected edges are vectorized using the boundary tracing algorithm. The results are then refined through line simplification and vector filters. In the building updating part of the study, the destroyed buildings are determined through analyzing the existing building boundaries and the previously detected building patches. The new buildings are delineated using the developed model based approach, in which the building models are selected from an existing building database by utilizing the shape parameters. The developed approaches were tested in the Batikent district of Ankara, Turkey, using the IKONOS panchromatic and pan-sharpened stereo images (2002) and existing vector database (1999). The results indicate that the proposed approaches are quite satisfactory with the accuracies computed in the range from 68.60% to 98.26% for building extraction, and from 82.44% to 88.95% for building updating.
APA, Harvard, Vancouver, ISO, and other styles
14

Richa, Elie. "Qualification des générateurs de code source dans le domaine de l'avionique : le test automatisé des chaines de transformation de modèles." Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0082/document.

Full text
Abstract:
In the avionics industry, Automatic Code Generators (ACGs) are increasingly used to produce parts of embedded software. Since the generated code is part of critical software, safety standards require a thorough verification of the ACG, called qualification. In this thesis, conducted in collaboration with AdaCore, we seek to reduce the cost of testing activities through automatic and effective methods. The first part of the thesis addresses the topic of unit testing, which ensures exhaustiveness but is difficult to achieve for ACGs. We propose a method that guarantees the same level of exhaustiveness using only integration tests, which are easier to carry out. First, we propose a formalization of ATL, the language in which the ACG is defined, in the theory of Algebraic Graph Transformation. We then define a translation of postconditions expressing the exhaustiveness of unit testing into equivalent preconditions that ultimately support the production of integration tests providing the same level of exhaustiveness. Finally, we optimize the complex algorithm of our analysis using simplification strategies whose effectiveness we assess experimentally. The second part of the work addresses the oracles of ACG tests, i.e. the means of validating the code generated by the ACG during a test. We propose a language for specifying textual constraints able to automatically check the validity of the generated code. This approach is experimentally deployed at AdaCore for QGen, a Simulink® to Ada/C code generator.
APA, Harvard, Vancouver, ISO, and other styles
15

Kumar, Rahul. "Using Live Sequence Chart Specifications for Formal Verification." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1500.

Full text
Abstract:
Formal methods play an important part in the development as well as testing stages of software and hardware systems. A significant and often overlooked part of the process is the development of specifications and correctness requirements for the system under test. Traditionally, English has been used as the specification language, which has resulted in verbose and difficult to use specification documents that are usually abandoned during product development. This research focuses on investigating the use of Live Sequence Charts (LSCs), a graphical and intuitive language directly suited for expressing communication behaviors of a system as the specification language for a system under test. The research presents two methods for using LSCs as a specification language: first, by translating LSCs to temporal logic, and second, by translating LSCs to an automaton structure that is directly suited for formal verification of systems. The research first presents the translation for each method and further, identifies the pros and cons for each verification method.
APA, Harvard, Vancouver, ISO, and other styles
16

Trifunovic, Konrad. "Efficient search-based strategies for polyhedral compilation : algorithms and experience in a production compiler." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00661334.

Full text
Abstract:
In order to exploit the performance advantages of current multicore and heterogeneous architectures, compilers are required to perform more and more complex program transformations. The search space of possible program optimizations is huge and unstructured. Selecting the best transformation and predicting its potential performance benefits is the major problem in today's optimizing compilers. A promising approach to handling program optimizations is to focus on automatic loop optimizations expressed in the polyhedral model. Current approaches for optimizing programs in the polyhedral model broadly fall into two classes. The first class of methods is based on linear optimization of an analytical cost function; the second is based on exhaustive iterative search. While the first approach is fast, it can easily miss the optimal solution. The iterative approach is more precise, but its running time might be prohibitively expensive. In this thesis we present a novel search-based approach to program transformations in the polyhedral model. The new method combines the benefits - effectiveness and precision - of the current approaches, while trying to minimize their drawbacks. Our approach is based on enumerating the evaluations of a precise, nonlinear performance-predicting cost function. The current practice is to use the polyhedral model in the context of source-to-source compilers. We have implemented our techniques in a GCC framework that is based on the low-level three-address-code representation. We show that the chosen level of abstraction for the intermediate representation poses scalability challenges, and we show ways to overcome those problems. On the other hand, we show that the low-level IR abstraction opens new degrees of freedom that are beneficial for search-based transformation strategies and for polyhedral compilation in general.
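A toy rendering of this search strategy: enumerate candidate loop orders (a stand-in for polyhedral schedules) and rank them with a nonlinear predicted-cost function. The cost model below is invented purely for illustration.

```python
# Toy rendering of the search strategy: enumerate candidate loop
# orders (a stand-in for polyhedral schedules) and rank them with a
# nonlinear predicted-cost model. The cost function is invented.
from itertools import permutations

loops = ("i", "j", "k")

def predicted_cost(order):
    cost = 10.0
    cost -= 5.0 if order[-1] == "j" else 0.0  # reward stride-1 innermost
    cost += 3.0 if order[0] == "k" else 0.0   # penalize "k" outermost
    return cost

for order in sorted(permutations(loops), key=predicted_cost):
    print(order, predicted_cost(order))
print("selected schedule:", min(permutations(loops), key=predicted_cost))
```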
APA, Harvard, Vancouver, ISO, and other styles
17

Biscuola, Vinicius Bertolazzi. "Modelo matemático híbrido determinístico-estocástico para a previsão da macroestrutura de grãos bruta de solidificação." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3133/tde-04042011-121401/.

Full text
Abstract:
Processing variables determine many properties of products obtained by casting and welding processes, and these properties are strongly affected by the as-cast micro- and macrostructure. In particular, the position of the columnar-to-equiaxed transition (CET), which determines the amount of columnar and equiaxed grains in the macrostructure, has an important effect on the properties of as-cast parts. Therefore, understanding the physical phenomena that cause and affect the formation of the CET plays a crucial role in predicting the as-cast macrostructure. To predict CET formation, empirical methods and mathematical models have been developed. These models are frequently divided into two main groups: deterministic and stochastic. Both groups have been thoroughly studied, but a comparison between them had never been attempted, especially regarding the prediction of the CET position. One of the main objectives of the present work is to fill this gap by carefully comparing these models. Nevertheless, the most important objective is to propose, implement, and validate a hybrid stochastic-deterministic model, referred to as CADE (Cellular Automaton Deterministic), that combines important and well-known features of each model. Initially, a model from the stochastic group was implemented and validated against results available in the literature, and then used to analyze the effects of some processing variables on the CET prediction. To carry out these analyses, a criterion based on the aspect ratio of the grains was proposed and developed to identify the CET region from the macrostructure images calculated by the model. The results were compared with those obtained by deterministic models, which finally led to the development of the new proposed model. Deterministic models based on a mechanical blocking criterion to block columnar grains and define the CET position showed, for the most part, larger columnar regions than those predicted by the stochastic model. A deterministic model with a solutal blocking criterion to predict the CET showed results similar to those calculated with the stochastic model. The model proposed in the present work (CADE) was able to predict the as-cast macrostructure using the well-established deterministic equations, without the need for an extra method to track columnar grains or predict their blocking by equiaxed grains.
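The aspect-ratio criterion mentioned in the abstract can be sketched as follows: grains in a simulated macrostructure are classified as columnar or equiaxed from their bounding-box aspect ratio. The threshold value here is an assumption, not the one calibrated in the thesis.

```python
# Sketch of an aspect-ratio CET criterion: classify grains in a
# simulated macrostructure as columnar or equiaxed from their
# bounding-box aspect ratio. The threshold of 2.0 is an assumption.
import numpy as np

def grain_aspect_ratio(mask):
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return max(h, w) / min(h, w)

def classify_grain(mask, threshold=2.0):
    return "columnar" if grain_aspect_ratio(mask) >= threshold else "equiaxed"

grid = np.zeros((20, 20), dtype=int)
grid[2:18, 3:5] = 1     # elongated grain -> columnar
grid[10:14, 10:14] = 2  # compact grain   -> equiaxed
for gid in (1, 2):
    print(gid, classify_grain(grid == gid))
```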
APA, Harvard, Vancouver, ISO, and other styles
18

Valderas, Aranda Pedro José. "A requirements engineering approach for the development of web applications." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/1997.

Full text
Abstract:
One of the most important problems that Web Engineering set out to solve when it appeared was the lack of techniques for specifying the requirements of Web applications. Although several proposals have been presented that provide methodological support for the development of Web applications, most of them focus on defining conceptual models that allow a Web application to be represented abstractly; the activities related to requirements specification are only vaguely addressed by these proposals. Moreover, traditional requirements specification techniques do not provide adequate support for considering characteristics specific to Web applications, such as navigation. This thesis presents a Requirements Engineering approach for specifying the requirements of Web applications. This approach includes mechanisms based on the task metaphor to specify not only the requirements related to structural and behavioral aspects of a Web application, but also the requirements related to navigational aspects. However, a requirements specification is of little use if we are not able to transform it into the appropriate software artefacts. This is a classic problem that the Software Engineering community has tried to solve since its beginnings: how to move from the problem space (user requirements) to the solution space (design and implementation) following a clear and precise methodological guide. This thesis presents a strategy that, based on graph transformations and supported by a set of tools, allows us to automatically perform transformations between task-based requirements specifications and Web conceptual schemas. In addition, this strategy has been integrated with a Web Engineering method with automatic code generation capabilities. This integration allows us to provide a mechanism
Valderas Aranda, PJ. (2008). A requirements engineering approach for the development of web applications [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1997
APA, Harvard, Vancouver, ISO, and other styles
19

Jabri, Sana. "Génération de scénarios de tests pour la vérification de systèmes complexes et répartis : application au système européen de signalisation ferroviaire (ERTMS)." Phd thesis, Ecole Centrale de Lille, 2010. http://tel.archives-ouvertes.fr/tel-00584308.

Full text
Abstract:
In the 1990s, the European Commission called for the development of a control, command and signalling system common to the railway networks of all member states: the ERTMS ("European Railway Traffic Management System"). This is a complex distributed system whose full deployment is long and costly. The overall objective is to reduce the validation and certification costs associated with the implementation of this new system in Europe. The scientific problem lies in the formal modeling of the specification in order to enable the automatic generation of test scenarios. The scientific challenges addressed in this thesis concern, on the one hand, the transformation of a semi-formal model into a formal model while preserving the structural and functional properties of the reactive components of the distributed system, and, on the other hand, the coverage of the automatically generated tests. The components are black boxes; the objective is to test them against the ERTMS specification. We developed a modeling approach based on coupling semi-formal models (UML) with formal models (Petri nets). This coupling is achieved through a model transformation technique. We then developed a method for automatically generating conformance test scenarios from the Petri net models. Test scenarios were treated as filtered and then reduced firing sequences of the interpreted Petri net representing the specification. These scenarios were executed on our ERTMS simulation platform.
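As a hint of how test scenarios can be derived from a Petri net model, the following sketch enumerates bounded firing sequences of a toy place/transition net; the thesis works with interpreted Petri nets obtained from UML, so this is only the skeleton of the idea.

```python
# Minimal sketch of deriving test sequences from a Petri net:
# enumerate firing sequences of a toy place/transition net up to a
# bounded depth. The net itself is invented for the example.
def enabled(marking, pre):
    return [t for t, places in pre.items()
            if all(marking.get(p, 0) >= n for p, n in places.items())]

def fire(marking, t, pre, post):
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] -= n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

def sequences(marking, pre, post, depth):
    if depth == 0:
        return [[]]
    seqs = []
    for t in enabled(marking, pre):
        for tail in sequences(fire(marking, t, pre, post), pre, post, depth - 1):
            seqs.append([t] + tail)
    return seqs or [[]]

pre  = {"t1": {"p0": 1}, "t2": {"p1": 1}}
post = {"t1": {"p1": 1}, "t2": {"p2": 1}}
print(sequences({"p0": 1}, pre, post, depth=2))  # [['t1', 't2']]
```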
APA, Harvard, Vancouver, ISO, and other styles
20

Prokopetc, Kristina. "Precise Mapping for Retinal Photocoagulation in SLIM (Slit-Lamp Image Mosaicing)." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC093/document.

Full text
Abstract:
This thesis arises from a Convention Industrielle de Formation par la REcherche (CIFRE) agreement between the Endoscopy and Computer Vision (EnCoV) research group at Université Clermont Auvergne and the company Quantel Medical (www.quantel-medical.fr), which specializes in the development of innovative ultrasound and laser products in ophthalmology. It presents research directed at the application of computer-aided diagnosis and treatment of retinal diseases using the TrackScan industrial prototype developed at Quantel Medical. More specifically, it contributes to the problem of precise Slit-Lamp Image Mosaicing (SLIM) and automatic multi-modal registration of SLIM with Fluorescein Angiography (FA) to assist navigated pan-retinal photocoagulation. We address three different problems.

The first is the problem of accumulated registration errors in SLIM, namely the mosaicing drift. A common approach to image mosaicing is to compute transformations only between temporally consecutive images in a sequence and then to combine them to obtain the transformation between non-consecutive views. Many existing algorithms follow this approach. Despite the low computational cost and simplicity of such methods, due to their 'chaining' nature, alignment errors tend to accumulate, causing images to drift in the mosaic. We propose to use recent advances in key-frame Bundle Adjustment methods and present a drift-reduction framework that is specifically designed for SLIM. We also introduce a new local refinement procedure.

Secondly, we tackle the problem of various types of light-related imaging artifacts common in SLIM, which significantly degrade the geometric and photometric quality of the mosaic. Existing solutions manage to deal with strong glares which corrupt the retinal content entirely, while leaving aside the correction of semi-transparent specular highlights and lens flare. This introduces ghosting and information loss. Moreover, related generic methods do not produce satisfactory results in SLIM. Therefore, we propose a better alternative by designing a method based on a fast single-image technique to remove glares, and on the notion of the type of semi-transparent specular highlights and motion cues for intelligent correction of lens flare.

Finally, we solve the problem of automatic multi-modal registration of FA and SLIM. A number of related works exist on multi-modal registration of various retinal image modalities. However, the majority of existing methods require a detection of feature points in both image modalities, which is a very difficult task for SLIM and FA, and they do not account for accurate registration in the macula area - the priority landmark. Moreover, none has provided a fully automatic solution for SLIM and FA. In this thesis, we propose the first method that is able to register these two modalities without manual input, by detecting retinal features on only one image, and that ensures accurate registration in the macula area. A description of the extensive experiments that were used to demonstrate the effectiveness of each of the proposed methods is also provided.

Our results show that (i) our new local refinement procedure significantly improves drift reduction, allowing us to achieve an improvement in precision over the current solution employed in the TrackScan; (ii) the proposed methodology for correction of light-related artifacts exhibits good efficiency, significantly outperforming related works in SLIM; and (iii) although our solution for multi-modal registration builds on existing methods, with the various specific modifications made, it is fully automatic, effective, and improves the baseline registration method currently used on the TrackScan.
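The drift phenomenon addressed in the first contribution is easy to reproduce numerically: composing noisy pairwise transforms accumulates error with chain length, as the synthetic sketch below shows (pure translations stand in for the full homographies used on real SLIM frames).

```python
# Synthetic illustration of mosaicing drift: composing noisy pairwise
# transforms accumulates error with chain length. Pure translations
# stand in for the full homographies used on real SLIM frames.
import numpy as np

rng = np.random.default_rng(0)

def noisy_translation(eps=0.1):
    T = np.eye(3)
    T[:2, 2] = rng.normal(0.0, eps, size=2)  # true motion is identity
    return T

chain = np.eye(3)
for k in range(1, 101):
    chain = chain @ noisy_translation()
    if k % 25 == 0:
        drift = np.linalg.norm(chain[:2, 2])  # distance from ground truth
        print(f"after {k:3d} frames, drift = {drift:.2f} px")
```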
APA, Harvard, Vancouver, ISO, and other styles
21

Li, Yunming. "Machine vision algorithms for mining equipment automation." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
22

CARNEVALI, LAURA. "Formal methods in the development life cycle of real-time software." Doctoral thesis, 2010. http://hdl.handle.net/2158/521924.

Full text
Abstract:
Preemptive Time Petri Nets (pTPNs) support modeling and analysis of concurrent timed software components running under fixed-priority preemptive scheduling. The model is supported by a well-established theory based on symbolic state-space analysis through Difference Bounds Matrices (DBM), with specific contributions on compositional modularization, trace analysis, and efficient over-approximation and clean-up in the management of suspension deriving from preemptive behavior. The aim of this dissertation is to devise and implement a framework that brings the theory to application. To this end, the theory is cast into an organic tailoring of design, coding, and testing activities within a V-Model software life cycle, in accordance with the principles of regulatory standards applied to the construction of safety-critical software components. To implement the toolchain subtended by the overall approach in a Model Driven Development (MDD) framework, the theory of state-space analysis is complemented with methods and techniques supporting semi-formal specification and automated compilation into pTPN models and real-time code, measurement-based execution time estimation, test-case selection and sensitization, and coverage evaluation.
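The Difference Bounds Matrix machinery underlying this analysis can be sketched compactly: a DBM stores bounds on clock differences, and its canonical (tightest) form is obtained by a Floyd-Warshall-style closure, as below; the example clock constraints are invented.

```python
# Sketch of the Difference Bounds Matrix (DBM) data structure: a DBM
# stores bounds x_i - x_j <= D[i][j] on clock differences; the
# canonical (tightest) form is computed by Floyd-Warshall closure.
INF = float("inf")

def canonical(D):
    n = len(D)
    D = [row[:] for row in D]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# index 0 is the reference "zero" clock; 1..2 are real clocks
D = [[0,  -1, INF],   # 0 - x1 <= -1  (i.e., x1 >= 1)
     [5,   0, INF],   # x1 - 0 <= 5
     [7,   2,   0]]   # x2 <= 7, x2 - x1 <= 2
for row in canonical(D):
    print(row)
```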
APA, Harvard, Vancouver, ISO, and other styles
23

Deters, Isabel Kristin. "Automating shopping for consumer goods : the potential of IoT-enabled replenishments as a new business model in FMCG." Master's thesis, 2021. http://hdl.handle.net/10400.14/34889.

Full text
Abstract:
Consumers increasingly value convenience and innovation when it comes to purchasing decisions. Companies are aware that technology and digitalization play an important role in individuals' lives. In FMCG this trend is visible through increasing e-commerce sales. The next step for online and offline retailers and manufacturers to increase market share is to automate repurchasing by offering replenishment solutions. Through IoT technology, devices become smart: they can interact and communicate when a product is running low, so that the purchase is automatically initiated without the customer ordering manually. This thesis elaborates on how a replenishment technology can constitute a new business model in FMCG and whether it has the potential to become a commonly used model. The research consists of several expert interviews and an online survey. Both methods seek to evaluate the development of such a model, from the consumer's perspective and from the perspectives of industry and topical experts. The results reveal that consumers are not yet ready to replace conventional shopping, especially when it comes to certain types of products. Additionally, the technology is not yet fully ready for replenishments to take over physical and online shopping. However, certain products in FMCG are suitable to run on such a model, and it is likely that, with progressing digitalization and consumer acceptance, the model will develop further and find its way into individuals' everyday lives.
APA, Harvard, Vancouver, ISO, and other styles
24

Rebac, Laura Katja. "The future of traditional grocery retailers : the impact of enabling trends on grocery retailers´ business model." Master's thesis, 2021. http://hdl.handle.net/10400.14/35403.

Full text
Abstract:
The grocery retail industry is transforming, driven by changing consumer expectations, new cutting-edge technologies, and increasing competition from new market entrants. Slow adaptation to external circumstances places traditional grocery retailers under increasing pressure, which adds urgency to strategic choices between a selection of digital and non-digital activities. This study analyses the digital and non-digital trends that are defining the future of the grocery retail industry. Combining secondary data with a consumer survey and expert interviews, the major enabling trends for traditional grocery retailers are identified to provide insights into how future business models need to evolve strategically. The results reveal that several technologies along the entire value chain will be crucial for a holistic transformation, especially the automation of warehousing and distribution processes and the launch of digital customer applications as an omni-channel solution. On this basis, customer profiling can be enhanced to generate valuable customer insights. In the future, major key performance indicators will focus on increasing efficiency and reducing personnel costs, which makes the implementation of these technologies essential.
APA, Harvard, Vancouver, ISO, and other styles
25

Soares, João António Custódio. "Automatic Model Transformation from UML Sequence Diagrams to Coloured Petri Nets." Master's thesis, 2017. https://repositorio-aberto.up.pt/handle/10216/106158.

Full text
Abstract:
The dependence of our society on ever more complex software systems makes the task of testing and validating this software increasingly important and challenging. In many cases, multiple independent and heterogeneous systems form a system of systems responsible for providing services to users, and current test automation tools and techniques provide little support for this task. This dissertation is part of a larger-scale project that aims to produce a Model-based Testing tool that will automate the process of testing distributed systems from UML sequence diagrams. These diagrams graphically define the interaction between the different modules of a system and its actors in a sequential way, facilitating the understanding of the system's operation and allowing the definition of critical sections of distributed systems, such as situations of concurrency and parallelism. This dissertation develops one of the components of this project, which is in charge of converting the descriptive diagrams of the system into Petri nets. Petri nets are a modeling formalism well suited to describing distributed systems, thanks to their ability to define communication and synchronization tasks and the possibility of executing them at runtime using tools such as CPN Tools. The objective is to define Model-to-Model translation rules that allow the conversion of models, in order to enable integration with the target system, taking advantage of existing model transformation frameworks (e.g. EMF - Eclipse Modeling Framework). With this, we are able to hide the complexity of the system analysis from the user (the software tester), introducing the possibility of automation, generation and execution of tests from the test-case diagrams, and presenting the results (errors and code coverage) visually. This document is divided into four sections. The first section introduces the context and motivation for the dissertation and defines the problem and goals. The second section summarizes the concepts required to understand this dissertation, the state of the art in this domain, and an analysis of the tools used to implement the solution. The third section explains the architecture and technological choices for the proposed solution. Finally, the last section presents the conclusions of this study and defines the future work plan.
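One plausible shape of such a Model-to-Model rule, sketched in plain Python rather than the EMF tooling the project targets: each message of a sequence diagram becomes a transition, and the 'lifeline ready' states become places. All class and place names here are invented, and a real Coloured Petri Net would also carry colour sets and arc inscriptions.

```python
# Hedged sketch of one Model-to-Model rule: each UML sequence-diagram
# message becomes a transition; "ready" states become places. Names
# are invented; the real project targets EMF and Coloured Petri Nets.
from dataclasses import dataclass

@dataclass
class Message:          # toy stand-in for a sequence-diagram message
    sender: str
    receiver: str
    name: str

def messages_to_petri_net(messages):
    places, transitions, arcs = set(), [], []
    for i, m in enumerate(messages):
        p_in, p_out = f"ready_{i}", f"ready_{i+1}"
        t = f"{m.sender}->{m.receiver}:{m.name}"
        places.update({p_in, p_out})
        transitions.append(t)
        arcs += [(p_in, t), (t, p_out)]  # sequencing = chain of transitions
    return places, transitions, arcs

net = messages_to_petri_net([Message("Client", "Server", "login"),
                             Message("Server", "Client", "ack")])
print(net)
```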
APA, Harvard, Vancouver, ISO, and other styles
26

Soares, João António Custódio. "Automatic Model Transformation from UML Sequence Diagrams to Coloured Petri Nets." Dissertation, 2017. https://repositorio-aberto.up.pt/handle/10216/106158.

Full text
Abstract:
The dependence of our society on ever more complex software systems makes the task of testing and validating this software increasingly important and challenging. In many cases, multiple independent and heterogeneous systems form a system of systems responsible for providing services to users, and current test automation tools and techniques offer little support for this task. This dissertation is part of a larger project that aims to produce a Model-based Testing tool that automates the testing of distributed systems from UML sequence diagrams. These diagrams graphically define the interaction between the different modules of a system and its actors in a sequential way, making the system's operation easier to understand and allowing critical sections of distributed systems, such as situations of concurrency and parallelism, to be specified. This dissertation develops the component of the project responsible for converting the system's descriptive diagrams into Petri Nets. Petri Nets are a modeling formalism well suited to describing distributed systems, given their ability to express communication and synchronization tasks and the possibility of executing them with tools such as CPN Tools. The objective is to define Model-to-Model translation rules that convert the models, enabling integration with the target system while taking advantage of existing model transformation frameworks (e.g. EMF - Eclipse Modeling Framework). This hides the complexity of the system analysis from the user (the Software Tester), introducing automated generation and execution of tests from test case diagrams and presenting the results (errors and code coverage) visually. This document is divided into four sections. The first section introduces the context and motivation for the dissertation and defines the problem and goals. The second section summarizes the concepts required to understand the dissertation, surveys the state of the art in this domain, and analyzes the tools available to implement the solution. The third section explains the architecture and technological choices of the proposed solution. Finally, the last section presents the conclusions of this study and defines the plan for future work.
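To make the idea of such a translation rule concrete, here is a minimal Python sketch that maps a sequence-diagram message onto Petri net elements. It is not taken from the dissertation: the metamodel classes, field names, and naming scheme are illustrative assumptions, and a real implementation would operate on EMF models rather than plain dataclasses.

from dataclasses import dataclass, field

@dataclass
class Message:                      # simplified source metamodel element
    name: str
    sender: str
    receiver: str

@dataclass
class PetriNet:                     # simplified target metamodel
    places: list = field(default_factory=list)
    transitions: list = field(default_factory=list)
    arcs: list = field(default_factory=list)      # (source, target) pairs

def transform(messages):
    """M2M rule sketch: each Message becomes an input place (sender ready),
    a transition, and an output place (receiver activated)."""
    net = PetriNet()
    for m in messages:
        p_in, p_out = f"{m.sender}_ready", f"{m.receiver}_active"
        t = f"send_{m.name}"
        net.places += [p_in, p_out]
        net.transitions.append(t)
        net.arcs += [(p_in, t), (t, p_out)]
    return net

net = transform([Message("login", "User", "AuthService")])
print(net.transitions)   # ['send_login']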
APA, Harvard, Vancouver, ISO, and other styles
27

Chang, Yi-Fan (張依帆). "Automatic 2D Virtual Face Generation by 3D Model Transformation Techniques and Applications." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/64659203981015260316.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Multimedia Engineering
Academic year 95 (ROC calendar, 2006-2007)
In this study, a system for the automatic generation of talking cartoon faces is proposed, comprising four processes: cartoon face creation, speech analysis, facial expression and lip movement synthesis, and animation generation. A face model with 72 facial feature points is adopted. A method for constructing a 3D local coordinate system for the cartoon face is proposed, and a transformation between the global and local coordinate systems is performed using a knowledge-based coordinate-system transformation method. A 3D rotation technique is applied to the cartoon face model, with some additional points, to draw the face in different poses. Control points are assigned to animate the cartoon face with different facial expressions, and a statistical method is proposed to simulate the timing of various facial expressions. For lip synchronization, a sentence utterance segmentation algorithm is proposed and a syllable alignment technique is applied; twelve basic mouth shapes for spoken Mandarin are defined to synthesize lip movements. A frame interpolation method is used to generate the animation. Finally, an editable, open, vector-based XML format, Scalable Vector Graphics (SVG), is used to render the cartoon face and synchronize it with speech. Two kinds of interesting applications are implemented. Good experimental results show the feasibility and applicability of the proposed methods.
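As an illustration of the frame interpolation step mentioned in the abstract, the following Python sketch blends two control-point configurations linearly to produce in-between frames. The point coordinates and mouth shapes are invented for the example and are not the thesis's actual face model data.

def interpolate(points_a, points_b, t):
    """Blend two lists of (x, y) control points, with 0.0 <= t <= 1.0."""
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(points_a, points_b)]

neutral = [(0.0, 0.0), (1.0, 0.0)]        # closed mouth (illustrative)
open_a  = [(0.0, -0.3), (1.0, -0.3)]      # mouth shape for /a/ (illustrative)

# ten frames from closed to open, as a frame interpolation method would emit
frames = [interpolate(neutral, open_a, i / 9) for i in range(10)]
print(frames[5])   # a half-open in-between frame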
APA, Harvard, Vancouver, ISO, and other styles
28

Liang, Dong. "Automatic generation of software applications: a platform-based MDA approach." Doctoral thesis, 2013. https://tubaf.qucosa.de/id/qucosa%3A22931.

Full text
Abstract:
The Model Driven Architecture (MDA) allows software development to move from the time-consuming and error-prone level of writing program code to the next higher level of modeling. To benefit from this technology, two requirements must be satisfied: first, the creation of compact, complete, and correct platform independent models (PIM), and second, the development of a flexible and extensible model transformation framework that takes into account frequent changes of the target platform. In this thesis, a platform-based methodology is developed to create PIM by abstracting common modeling elements into a platform independent modeling library called the Design Platform Model (DPM). The DPM contains OCL-based types for modeling primitive and collection types, a platform independent GUI toolkit, and other common modeling elements, such as those for IO operations. Furthermore, a DPM profile containing diverse domain-specific and design-pattern-based stereotypes is developed to create PIM with high-level semantics. Behavior in the PIM is specified using an OCL-like action language called eXecutable OCL (XOCL), which is also developed in this thesis. For model transformation, the model compiler MOCCA is developed on a flexible and extensible architecture. The model mapper components in the current version of MOCCA can map desktop applications onto the JSE platform, and both the business object layer and the persistence layer of three-layered enterprise applications onto the JEE and SAP ABAP platforms. The entire model transformation process concludes with complete code generation.
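The final model-to-text step of such an MDA pipeline can be pictured with a small Python sketch that emits Java source from a platform independent class description. MOCCA's real mappers are far more elaborate; the model format and the template below are assumptions made purely for illustration.

# Illustrative PIM element: a class with typed attributes (assumed format).
PIM_CLASS = {
    "name": "Customer",
    "attributes": [("id", "int"), ("name", "String")],
}

def to_java(cls):
    """Template-based mapper sketch: PIM class description -> Java skeleton."""
    fields = "\n".join(f"    private {t} {n};" for n, t in cls["attributes"])
    return f"public class {cls['name']} {{\n{fields}\n}}\n"

print(to_java(PIM_CLASS))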
APA, Harvard, Vancouver, ISO, and other styles
29

ZHAO, XULIN. "A BUSINESS PROCESS DRIVEN APPROACH FOR AUTOMATIC GENERATION OF BUSINESS APPLICATIONS." Thesis, 2011. http://hdl.handle.net/1974/6296.

Full text
Abstract:
Business processes describe a set of tasks for accomplishing the business objectives of an organization. Business applications automate business processes to improve the productivity of business users. Nowadays, the business environment changes fast due to rapid market growth and technological innovation. Business processes are continuously updated to reflect new business initiatives, and business applications are frequently evolved to add features that meet new requirements and fix defects. Quite often, business processes and business applications evolve independently, without direct reference to each other; over time, it becomes more and more challenging to maintain consistency between a business application and the corresponding business processes. Moreover, existing development approaches rely on software developers' craftsmanship to design and implement business applications, a paradigm that is inefficient and leads to inconsistency between business processes and business applications. To facilitate the alignment of business applications with business processes, we present an approach that automatically generates the software architecture and code skeletons of business applications from business processes. We identify architectural components from business processes by analyzing dependencies among tasks. To verify the achievement of quality requirements, we extend a set of existing product metrics to automatically evaluate the quality of the generated software architecture designs. Finally, we apply refactoring strategies, such as software architectural styles or design patterns, to optimize the generated designs and resolve identified quality problems. We also present an approach to automatically refine the software architecture into design models and code skeletons of business applications. The effectiveness of our proposed approach is illustrated through case studies.
Thesis (Ph.D., Electrical & Computer Engineering), Queen's University, 2011.
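One plausible way to read "identifying architectural components by analyzing dependencies among tasks" is to group tasks that depend on each other into candidate components, for instance via connected components of the dependency graph. The Python sketch below does exactly that; the task names and the grouping criterion are illustrative assumptions, not the thesis's actual algorithm.

from collections import defaultdict

# Assumed input: pairwise dependencies between business-process tasks.
deps = [("checkStock", "reserveItem"), ("reserveItem", "ship"),
        ("invoice", "sendReceipt")]

graph = defaultdict(set)
for a, b in deps:            # build an undirected dependency graph
    graph[a].add(b)
    graph[b].add(a)

def components(graph):
    """Return the connected components: each one a candidate component."""
    seen, result = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(graph[n] - comp)
        seen |= comp
        result.append(comp)
    return result

print(components(graph))     # two candidate architectural components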
APA, Harvard, Vancouver, ISO, and other styles
30

Gamboa, Miguel. "Using Workflows to Automate Activities in MDE Tools." Thèse, 2016. http://hdl.handle.net/1866/18758.

Full text
Abstract:
Software engineering aims to create software tools that allow people to solve particular problems easily and efficiently. In this regard, Model-Driven Engineering (MDE) makes it possible to generate software tools by systematically modeling and transforming models. To do so, MDE relies on language workbenches: Integrated Development Environments (IDEs) for engineering modeling languages, designing models, and executing and verifying them. However, using these tools is far from efficient. Typical MDE activities, such as creating a domain-specific language or developing a model transformation, are complex and often require repetitive operations, which needlessly increases development time. The goal of this thesis is to increase the productivity of modelers in their daily activities by automating, as much as possible, the tasks performed in current MDE tools. I propose an MDE-based solution in which the user defines a reusable workflow that can be parameterized at run-time and executed. The solution is implemented in an IDE for graphical modeling. Two empirical evaluations show that users' productivity is improved.
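A run-time-parameterized, reusable workflow of the kind described can be sketched in a few lines of Python; the step functions and parameter names below are placeholders, since the actual solution is implemented inside a graphical modeling IDE and invokes language-workbench operations.

class Workflow:
    """A reusable sequence of steps, parameterized when run."""
    def __init__(self, steps):
        self.steps = steps                   # callables taking a params dict

    def run(self, **params):
        for step in self.steps:
            step(params)                     # each step reads/writes params

# Placeholder activities standing in for real MDE operations.
def load_metamodel(p):  p["mm"] = f"loaded:{p['metamodel_path']}"
def generate_editor(p): print(f"editor generated for {p['mm']}")

wf = Workflow([load_metamodel, generate_editor])
wf.run(metamodel_path="StateMachine.ecore")  # parameterized at run-time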
APA, Harvard, Vancouver, ISO, and other styles
31

Yan, Haowen. "Theory of Spatial Similarity Relations and Its Applications in Automated Map Generalization." Thesis, 2014. http://hdl.handle.net/10012/8317.

Full text
Abstract:
Automated map generalization is a necessary technique for constructing the multi-scale vector map databases that are crucial components of the spatial data infrastructure of cities, provinces, and countries. It remains out of reach, however, because many map feature generalization algorithms are not parameter-free and therefore require human intervention. One major reason is that map generalization is a process of spatial similarity transformation in multi-scale map spaces, yet no theory exists to support such a transformation. This thesis focuses on the theory of spatial similarity relations in multi-scale map spaces, aiming to propose approaches and models that can be used to automate relevant map generalization algorithms. After a systematic review of existing achievements, including the definitions and features of similarity in various communities, a classification system of spatial similarity relations, the calculation models of similarity relations in psychology, computer science, music, and geography, and a number of raster-based approaches for calculating similarity degrees between images, the thesis makes the following contributions. First, the fundamental issues of spatial similarity relations are explored: (1) a classification system is proposed that classifies the objects processed by map generalization algorithms into ten categories; (2) Set Theory-based definitions of similarity, spatial similarity, and spatial similarity relations in multi-scale map spaces are given; (3) the features of spatial similarity relations in multi-scale map spaces are described in mathematical language; (4) the factors that affect human judgments of spatial similarity relations are identified, and their weights are obtained through psychological experiments; and (5) a classification system for spatial similarity relations in multi-scale map spaces is proposed. Second, models that can calculate spatial similarity degrees for the ten types of objects in multi-scale map spaces are proposed, and their validity is tested by psychological experiments. Given a map (or an individual object, or an object group) and its generalized counterpart, the models can calculate the spatial similarity degrees between them. Third, the proposed models are used to solve problems in map generalization: (1) ten formulae are constructed that calculate spatial similarity degrees from map scale changes in map generalization; (2) an approach based on spatial similarity degree is proposed to determine when a map generalization system or algorithm should terminate while generalizing map objects, which may fully automate some algorithms and thereby improve the efficiency of map generalization; and (3) an approach is proposed to calculate the distance tolerance of the Douglas-Peucker Algorithm, so that the algorithm may become fully automatic. Nevertheless, the theory and approaches proposed in this study have two limitations that need further exploration:
• More experiments should be done to improve the accuracy and adaptability of the proposed models and formulae. The new experiments should select more typical maps and map objects as samples, and recruit more subjects with different cultural backgrounds.
• Whether the ten models/formulae for calculating spatial similarity degrees can be integrated into a single model/formula needs further investigation. In addition, it is important to identify other algorithms that, like the Douglas-Peucker Algorithm, are not parameter-free and are closely related to spatial similarity relations, and to explore approaches for calculating the parameters used in these algorithms with the help of the models and formulae proposed in this thesis.
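For context, the Douglas-Peucker line simplification algorithm mentioned in contribution (3) can be written in plain Python as below. Deriving the distance tolerance from the scale change is the thesis's contribution; the fixed tolerance in this sketch is only a stand-in.

import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L = math.hypot(dx, dy)
    if L == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / L

def douglas_peucker(points, tol):
    """Keep endpoints; split at the farthest point if it exceeds tol."""
    if len(points) < 3:
        return points
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], tol)
    return left[:-1] + douglas_peucker(points[i:], tol)

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(douglas_peucker(line, tol=1.0))  # tol would be derived from map scale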
APA, Harvard, Vancouver, ISO, and other styles
32

Bhaskaracharya, Somashekaracharya G. "Automatic Storage Optimization of Arrays in Affine Loop Nests." Thesis, 2016. http://hdl.handle.net/2005/3208.

Full text
Abstract:
Efficient memory usage is crucial for data-intensive applications, as a smaller memory footprint ensures better cache performance and allows one to run a larger problem size given a fixed amount of main memory. The solutions found by existing techniques for automatic storage optimization of arrays in affine loop nests, which minimize the storage requirements of the arrays, are often far from optimal and can even miss nearly all storage optimization potential. In this work, we present a new automatic storage optimization framework and techniques that achieve intra-array as well as inter-array storage reuse within affine loop nests with a pre-determined schedule. Over the last two decades, several heuristics have been developed for achieving complex transformations of affine loop nests using the polyhedral model. However, there are no comparably strong heuristics for the problem of automatic memory footprint optimization. We tackle storage optimization for arrays by formulating it as the problem of finding the right storage partitioning hyperplanes: each storage partition corresponds to a single storage location. Statement-wise storage partitioning hyperplanes are determined that partition a unified global array space so that values with overlapping live ranges are not mapped to the same partition. Our integrated heuristic for exploiting intra-array as well as inter-array reuse opportunities is driven by a fourfold objective function that not only minimizes the dimensionality and storage requirements of the arrays required for each high-level statement, but also maximizes inter-statement storage reuse. We built an automatic polyhedral storage optimizer called SMO using our storage partitioning approach. Storage reduction factors and other results we report from SMO demonstrate the effectiveness of our approach on several benchmarks drawn from image processing, stencil computations, high-performance computing, and the class of tiled codes in general. The reductions in storage requirement over previous approaches range from a constant factor to asymptotic in the loop blocking factor or array extents, the latter being a dramatic improvement for practical purposes. As an incidental and related topic, we also studied the problem of polyhedral compilation of graphical dataflow programs. While polyhedral techniques for program transformation are now used in several proprietary and open-source compilers, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or of a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this work, we attempt to bridge this gap by presenting techniques that can be used to extract a polyhedral representation from dataflow programs and to synthesize dataflow programs from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs that we built using our techniques and other popular research tools such as Clan and Pluto.
For the purpose of experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework can outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30, while running on an 8-core Intel system.
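A toy Python example conveys the flavor of intra-array storage reuse of the kind the thesis automates: a time-iterated 1-D stencil whose natural (T+1) x N array contracts to 2 x N, because each row's live range spans a single time step; the modulo mapping t -> t % 2 plays the role of a storage partitioning hyperplane. This is an illustration of the general idea, not SMO's actual output.

N, T = 8, 4
a = [[0.0] * N for _ in range(2)]          # 2 x N instead of (T+1) x N
a[0] = [float(i) for i in range(N)]        # initial values at time step 0

for t in range(1, T + 1):
    cur, prev = a[t % 2], a[(t - 1) % 2]   # modulo storage mapping
    for i in range(1, N - 1):
        # three-point stencil: only the previous time step is ever live
        cur[i] = (prev[i - 1] + prev[i] + prev[i + 1]) / 3.0
    cur[0], cur[N - 1] = prev[0], prev[N - 1]   # carry boundary values over

print(a[T % 2])   # final time step, computed with constant extra storage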
APA, Harvard, Vancouver, ISO, and other styles