Journal articles on the topic 'Automated model transformation'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Automated model transformation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Murenecs, Timofejs, and Erika Asnina. "Automated Derivation of Use Case Model from Topological Functioning Model." Scientific Journal of Riga Technical University. Computer Sciences 44, no. 1 (January 1, 2011): 91–100. http://dx.doi.org/10.2478/v10143-011-0026-1.

Abstract:
The first important step in Model Driven Architecture software development is qualitative analysis and specification of the structure and behavior of a business and its supporting information system, as well as of software requirements. We continue the research on achieving a qualitative software requirements model, the Use Case Model (UCM), based on a formal business model, the Topological Functioning Model (TFM), by using formal model transformations. This paper discusses the results of implementing the transformation from TFM to UCM using Query/View/Transformation Relations supported by mediniQvt.
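The mapping the abstract describes can be sketched in plain Python. This is a deliberately simplified, hypothetical stand-in for the authors' QVT Relations rules: each externally visible functional feature of the topological model becomes a candidate use case linked to its responsible entity.

```python
# Simplified sketch of a TFM -> Use Case Model transformation.
# The dataclasses and the single mapping rule are illustrative
# assumptions, not the authors' actual QVT Relations implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class FunctionalFeature:          # a TFM node
    name: str
    responsible_entity: str       # entity performing the function
    external: bool                # interacts with the system boundary?

@dataclass(frozen=True)
class UseCase:                    # a UCM element
    name: str
    actor: str

def tfm_to_ucm(features):
    """One declarative rule: every externally visible functional
    feature maps to a use case owned by its responsible entity."""
    return [UseCase(name=f.name, actor=f.responsible_entity)
            for f in features if f.external]

tfm = [
    FunctionalFeature("register order", "Customer", True),
    FunctionalFeature("update stock ledger", "System", False),
    FunctionalFeature("ship goods", "Warehouse clerk", True),
]
ucm = tfm_to_ucm(tfm)
for uc in ucm:
    print(f"{uc.actor} -> {uc.name}")
```

In a real QVT-R specification this rule would be written declaratively with `checkonly` and `enforce` domains; the list comprehension above only conveys the one-feature-to-one-use-case shape of the transformation.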
2

Lano, K., S. Kolahdouz-Rahimi, and S. Fang. "Model Transformation Development Using Automated Requirements Analysis, Metamodel Matching, and Transformation by Example." ACM Transactions on Software Engineering and Methodology 31, no. 2 (April 30, 2022): 1–71. http://dx.doi.org/10.1145/3471907.

Abstract:
In this article, we address how the production of model transformations (MT) can be accelerated by automation of transformation synthesis from requirements, examples, and metamodels. We introduce a synthesis process based on metamodel matching, correspondence patterns between metamodels, and completeness and consistency analysis of matches. We describe how the limitations of metamodel matching can be addressed by combining matching with automated requirements analysis and model transformation by example (MTBE) techniques. We show that in practical examples a large percentage of required transformation functionality can usually be constructed automatically, thus potentially reducing development effort. We also evaluate the efficiency of synthesised transformations. Our novel contributions are: The concept of correspondence patterns between metamodels of a transformation. Requirements analysis of transformations using natural language processing (NLP) and machine learning (ML). Symbolic MTBE using “predictive specification” to infer transformations from examples. Transformation generation in multiple MT languages and in Java, from an abstract intermediate language.
3

Brdjanin, Drazen, Danijela Banjac, Goran Banjac, and Slavko Maric. "Automated two-phase business model-driven synthesis of conceptual database models." Computer Science and Information Systems 16, no. 2 (2019): 657–88. http://dx.doi.org/10.2298/csis181010014b.

Abstract:
Existing approaches to business process model-driven synthesis of data models are characterized by a direct synthesis of a target model based on source models represented by concrete notations, where the synthesis is supported by monolithic (semi)automatic transformation programs. This article presents an approach to automated two-phase business process model-driven synthesis of conceptual database models. It is based on the introduction of a domain specific language (DSL) as an intermediate layer between different source notations and the target notation, which splits the synthesis into two phases: (i) automatic extraction of specific concepts from the source model and their DSL-based representation, and (ii) automated generation of the target model based on the DSL-based representation of the extracted concepts. The proposed approach enables development of modular transformation tools for automatic synthesis of the target model based on business process models represented by different concrete notations. In this article we present an online generator, which implements the proposed approach. The generator is implemented as a web-based, service-oriented tool, which enables automatic generation of the initial conceptual database model represented by the UML class diagram, based on business models represented by two concrete notations.
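The two-phase idea above can be illustrated with a short Python sketch. The record format of the intermediate DSL and the extraction rules are assumptions made for illustration; the point is only the split into (i) notation-specific extraction and (ii) notation-independent generation.

```python
# Two-phase synthesis sketch: source notation -> DSL -> target model.
# The DSL record layout and the toy BPMN-like input are illustrative
# assumptions, not the paper's actual DSL.

def extract_concepts(bpmn_model):
    """Phase (i): extract specific concepts from a (toy) BPMN-like
    source model into a notation-neutral DSL representation."""
    concepts = []
    for p in bpmn_model["participants"]:
        concepts.append({"kind": "participant", "name": p})
    for d in bpmn_model["data_objects"]:
        concepts.append({"kind": "data", "name": d})
    return concepts

def generate_class_model(concepts):
    """Phase (ii): generate the target model (here, just a list of
    UML class names) from the DSL representation alone, with no
    knowledge of the source notation."""
    return sorted(c["name"] for c in concepts)

source = {"participants": ["Customer", "Supplier"],
          "data_objects": ["Order", "Invoice"]}
classes = generate_class_model(extract_concepts(source))
print(classes)
```

Because phase (ii) consumes only the DSL, a second extractor for another source notation could reuse `generate_class_model` unchanged, which is exactly the modularity argument the abstract makes.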
4

Tuma, Jakub, and Petr Hanzlik. "Automated model transformation method from BORM to BPMN." Applied Mathematical Sciences 9 (2015): 5769–77. http://dx.doi.org/10.12988/ams.2015.54291.

5

Mészáros, Tamás, Gergely Mezei, Tihamér Levendovszky, and Márk Asztalos. "Manual and automated performance optimization of model transformation systems." International Journal on Software Tools for Technology Transfer 12, no. 3-4 (April 11, 2010): 231–43. http://dx.doi.org/10.1007/s10009-010-0151-0.

6

Elmounadi, Abdelali, Naoual Berbiche, Nacer Sefiani, and Nawfal El Moukhi. "ADM-Based Hybrid Model Transformation for Obtaining UML Models from PHP Code." International Journal of Recent Contributions from Engineering, Science & IT (iJES) 7, no. 1 (March 22, 2019): 32. http://dx.doi.org/10.3991/ijes.v7i1.10052.

Abstract:
In this paper, we present a hybrid model transformation, following the Architecture Driven Modernization (ADM) approach, intended for obtaining UML (Unified Modeling Language) models from PHP (Hypertext Preprocessor) code. The latter is achieved by offering tool support for automated generation of UML platform-independent models from PHP ASTM (Abstract Syntax Tree Metamodel) representations, which are platform-specific models. The model transformation rules are expressed in ATL (Atlas Transformation Language), a widely used model transformation language based on the hybrid approach. This work aims to fill the gap between the maintenance of web-based applications, especially PHP-based implementations, and model transformation processes in the ADM context.
7

Carpenter, Chris. "Digital Transformation Enables Automated Real-Time Torque-and-Drag Modeling." Journal of Petroleum Technology 73, no. 01 (January 1, 2021): 69–70. http://dx.doi.org/10.2118/0121-0069-jpt.

Abstract:
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 199670, "Digital Transformation Strategy Enables Automated Real-Time Torque-and-Drag Modeling," by Dingzhou Cao, Occidental Petroleum; Don Hender, SPE, IPCOS; and Sam Ariabod, Apex Systems, et al., prepared for the 2020 IADC/SPE International Drilling Conference, Galveston, Texas, 3-5 March. The paper has not been peer reviewed. Automated real-time torque-and-drag (RT-T&D) analysis compares real-time measurements with evergreen models to monitor and manage downhole wellbore friction, improving drilling performance and safety. Enabling RT-T&D modeling with contextual well data, rig-state detection, and RT-interval event filters poses significant challenges. The complete paper presents a solution that integrates a physics-based T&D stiff/soft string model with a real-time drilling (RTD) analytics system using a custom-built extract, transform, and load (ETL) translator and digital-transformation applications to automate the T&D modeling workflow. Methodology: A T&D representational state transfer (REST) application program interface (API) was integrated with an RTD analytics system capable of receiving and processing both real-time (hookload, torque, and rig-state) and digitized (drillstring and casing components, trajectory profiles, and mud-property) well data across multiple platforms. The strategy consists of four parts: (1) digital transformation apps, ETL, and translator; (2) a physics-based stiff/soft string T&D model API; (3) pre-existing data infrastructure; and (4) the RTD analytics system. The data-flow architecture is flexible in the sense that it can accommodate different types of T&D models or any other physics-based REST API models (e.g., drillstring buckling or drilling hydraulics) and can be accessed offline for pre-job/post-job planning. Drilling engineers can also leverage the RTD system's historical database to perform recalculations, comparative analysis, and friction calibrations. The RT-T&D model can also be deployed in a cloud environment to ensure horizontal scalability.
8

Dorodnykh, N. O., O. A. Nikolaychuk, and A. Yu Yurin. "METAMODEL ENGINEERING FOR SUPPORTING FUZZY KNOWLEDGE BASE SYNTHESIS." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 187 (2020): 34–47. http://dx.doi.org/10.14489/vkit.2020.01.pp.034-047.

Abstract:
The paper is devoted to the problem of fuzzy knowledge base engineering. The effectiveness of this process can be improved by automated generation of source code and analysis of data presented in different forms, in particular in the form of conceptual models describing a certain subject domain. Knowledge base code generation is based on the transformation of conceptual models following the model-based approach and the use of metamodels. Metamodeling provides the description of the source and target formalisms of conceptual modeling and knowledge representation. We present an approach to fuzzy knowledge base engineering based on model transformations. In particular, metamodels for describing fuzzy rule-based models and fuzzy ontologies, and a method for automated metamodel generation, are presented.
9

Hamioud, Sohaib, and Fadila Atil. "Model-driven Java code refactoring." Computer Science and Information Systems 12, no. 2 (2015): 375–403. http://dx.doi.org/10.2298/csis141025015h.

Abstract:
Refactoring is an important technique for restructuring code to improve its design and increase programmer productivity and code reuse. Performing refactorings manually, however, is tedious, time consuming and error-prone. Thus, providing an automated support for them is necessary. Unfortunately even in our days, such automation is still not easily achieved and requires formal specifications of the refactoring process. Moreover, extensibility and tool development automation are factors that should be taken into consideration when designing and implementing automated refactorings. In this paper, we introduce a model-driven approach where refactoring features, such as code representation, analysis and transformation adopt models as first-class artifacts. We aim at exploring the value of model transformation and code generation when formalizing refactorings and developing tool support. The presented approach is applied to the refactoring of Java code using a prototypical implementation based on the Eclipse Modeling Framework, a language workbench, a Java metamodel and a set of OMG standards.
10

Rafe, Vahid, and Adel T. Rahmani. "Towards automated software model checking using graph transformation systems and Bogor." Journal of Zhejiang University-SCIENCE A 10, no. 8 (August 2009): 1093–105. http://dx.doi.org/10.1631/jzus.a0820415.

11

Basciani, Francesco, Mattia D'Emidio, Davide Di Ruscio, Daniele Frigioni, Ludovico Iovino, and Alfonso Pierantonio. "Automated Selection of Optimal Model Transformation Chains via Shortest-Path Algorithms." IEEE Transactions on Software Engineering 46, no. 3 (March 1, 2020): 251–79. http://dx.doi.org/10.1109/tse.2018.2846223.

12

Vieira da Silva, Luis Miguel, René Heesch, Aljosha Köcher, and Alexander Fay. "Transformation eines Fähigkeitsmodells in einen PDDL-Planungsansatz." at - Automatisierungstechnik 71, no. 2 (February 1, 2023): 105–15. http://dx.doi.org/10.1515/auto-2022-0112.

Abstract:
Automated planning approaches provide robust and efficient methods to automatically find plans for a given problem and a set of possible actions. However, due to the rather high effort required to create planning models, these approaches cannot be used for adaptable manufacturing plants. In this contribution, we present a method to automatically generate a planning problem in the form of PDDL from an existing capability model. This method eliminates the additional effort required to model a planning problem, making planning approaches usable for adaptable manufacturing plants.
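The generation step described above can be sketched as a small template expansion in Python. The capability dictionary layout and the `drill-hole` example are assumptions for illustration; the paper derives PDDL from a formal capability model, not from ad-hoc dictionaries.

```python
# Sketch: generating a PDDL :action block from a capability record.
# The dictionary schema below is an illustrative assumption.

def capability_to_pddl(cap):
    """Render one capability as a PDDL action definition."""
    params = " ".join(f"?{p} - {t}" for p, t in cap["parameters"])
    pre = " ".join(f"({c})" for c in cap["preconditions"])
    eff = " ".join(f"({c})" for c in cap["effects"])
    return (f"(:action {cap['name']}\n"
            f"  :parameters ({params})\n"
            f"  :precondition (and {pre})\n"
            f"  :effect (and {eff}))")

drill = {
    "name": "drill-hole",
    "parameters": [("w", "workpiece"), ("m", "machine")],
    "preconditions": ["at ?w ?m", "idle ?m"],
    "effects": ["has-hole ?w", "not (idle ?m)"],
}
print(capability_to_pddl(drill))
```

A planner-ready domain file would additionally need `(define (domain …))`, typing declarations, and predicate declarations; the snippet only shows the per-capability action mapping.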
13

Liu, Peng, Qinghua Wang, Yanli Luo, Zhiguo He, and Wei Luo. "Study on a New Transient Productivity Model of Horizontal Well Coupled with Seepage and Wellbore Flow." Processes 9, no. 12 (December 14, 2021): 2257. http://dx.doi.org/10.3390/pr9122257.

Abstract:
Digital transformation has become one of the major themes of the development of the global oil industry today. With the development of digital transformation, on-site production will achieve further automated management, that is, automatic collection of on-site production data; real-time tracking, diagnosis, and optimization; and remote control of on-site automatic adjustment devices. In this process, real-time optimization based on massive data collection needs to be carried out in combination with oil and gas well transient simulation. Therefore, research on transient models for horizontal well productivity prediction is one of the important foundations of oil and gas digital transformation. In this paper, the method and process of establishing the transient calculation model of single-phase flow in horizontal wells are introduced in detail from three aspects: reservoir seepage, horizontal wellbore flow (taking one kind of flow as an example), and the coupling model of the two flows. The model is shown to be reliable through verification against pressure recovery data from multiple field logs. The transient model of single-phase seepage in horizontal wells will lay the foundation for the establishment of transient models of oil-gas two-phase seepage and oil-gas-water three-phase seepage.
14

Sheik, N. A., G. Deruyter, and P. Veelaert. "AUTOMATED REGISTRATION OF BUILDING SCAN WITH BIM THROUGH DETECTION OF CONGRUENT CORNER POINTS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-4/W3-2022 (December 2, 2022): 179–85. http://dx.doi.org/10.5194/isprs-archives-xlviii-4-w3-2022-179-2022.

Abstract:
Current methods of construction progress monitoring involve manual data collection and processing, which are time-consuming and labor-intensive, with a dominant human presence entailing several flaws such as missing or inaccurate information. Recent research efforts for automated progress monitoring have largely focused on model-based assessment methods that are dependent on a pre-requisite step known as registration, which is still performed manually due to numerous challenges. This study proposes a novel automated coarse registration method that utilizes the BIM model as the as-planned model to align it with the corresponding as-built model using their geometrical features. First, it extracts the corner points in both models using their planar features and then identifies the conjugate corner points based on different geometric invariants. Later, the transformations are determined from those conjugate points and the most accurate transformation parameter is finalized in the end. The proposed method is validated on different datasets.
15

Kobyshev, Kirill, and Sergey Molodyakov. "An algorithm of test generation from functional specification using Open IE model and clustering." Proceedings of the Institute for System Programming of the RAS 34, no. 2 (2022): 17–24. http://dx.doi.org/10.15514/ispras-2022-34(2)-2.

Abstract:
Automated test coverage is now a widespread practice in long-lived software development projects. According to this test development approach, each automated test should reuse functions implemented in a test framework. The presented research aims at improving the test framework development approach using natural language processing methods. The algorithm includes the following steps: preparation of test scenarios; transformation of scenario paragraphs into syntax trees using a pretrained OpenIE model; comparison of test steps with test framework interfaces using the GloVe model; and transformation of the resulting semantic tree into Kotlin code. The paper describes a prototype system that automatically generates Kotlin tests from natural language specifications.
16

Asnina, Erika. "Notion of causal relations of the topological functioning model." Applied Computer Systems 13, no. 1 (November 8, 2012): 68–73. http://dx.doi.org/10.2478/v10312-012-0009-z.

Abstract:
The paper discusses application of the topological functioning model (TFM) of a system for its automated transformation to behavioural specifications such as UML Activity Diagrams, BPMN diagrams, scenarios, etc. The paper addresses a lack of formal specification of causal relations between functional features of the TFM by using inference means suggested by classical logic. The result is reduced human participation in the transformation as well as an additional check of the analysis and specification of the system.
17

Yang, Chia-han (John), and Valeriy Vyatkin. "Automated Model Transformation between MATLAB Simulink/Stateflow and IEC 61499 Function Blocks." IFAC Proceedings Volumes 42, no. 4 (2009): 205–10. http://dx.doi.org/10.3182/20090603-3-ru-2001.0302.

18

Zhereb, K. A. "Improving performance of Python code using rewriting rules technique." PROBLEMS IN PROGRAMMING, no. 2-3 (September 2020): 115–25. http://dx.doi.org/10.15407/pp2020.02-03.115.

Abstract:
Python is a popular programming language used in many areas, but its performance is significantly lower than many compiled languages. We propose an approach to increasing performance of Python code by transforming fragments of code to more efficient languages such as Cython and C++. We use high-level algebraic models and rewriting rules technique for semi-automated code transformation. Performance-critical fragments of code are transformed into a low-level syntax model using Python parser. Then this low-level model is further transformed into a high-level algebraic model that is language-independent and easier to work with. The transformation is automated using rewriting rules implemented in Termware system. We also improve the constructed high-level model by deducing additional information such as data types and constraints. From this enhanced high-level model of code we generate equivalent fragments of code using code generators for Cython and C++ languages. Cython code is seamlessly integrated with Python code, and for C++ code we generate a small utility file in Cython that also integrates this code with Python. This way, the bulk of program code can stay in Python and benefit from its facilities, but performance-critical fragments of code are transformed into more efficient equivalents, improving the performance of resulting program. Comparison of execution times between initial version of Python code, different versions of transformed code and using automatic tools such as Cython compiler and PyPy demonstrates the benefits of our approach – we have achieved performance gains of over 50x compared to the initial version written in Python, and over 2x compared to the best automatic tool we have tested.
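The rewriting-rules idea above can be demonstrated with Python's own `ast` module: parse source into a tree, apply a rule, and regenerate code. The single rule shown (`x ** 2` becomes `x * x`) is an illustrative assumption for this sketch, not one of the Termware rules from the paper, and the sketch regenerates Python rather than Cython or C++.

```python
# A minimal rewriting-rule sketch: parse Python to an AST, apply one
# performance-motivated rule, and regenerate equivalent source code.
import ast

class SquareRule(ast.NodeTransformer):
    """Rewrite pow-by-two into a plain multiplication."""
    def visit_BinOp(self, node):
        self.generic_visit(node)          # rewrite children first
        if (isinstance(node.op, ast.Pow)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 2):
            return ast.BinOp(left=node.left, op=ast.Mult(),
                             right=node.left)
        return node

src = "def f(a):\n    return a ** 2 + 1\n"
tree = SquareRule().visit(ast.parse(src))
ast.fix_missing_locations(tree)
out = ast.unparse(tree)                   # Python 3.9+
print(out)
```

The paper's pipeline adds a second, language-independent algebraic model between the parse and the code generators, which is what lets one rule set target several output languages; this low-level sketch collapses those layers into one pass.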
19

Liu, Si, Jose Meseguer, Peter Csaba Ölveczky, Min Zhang, and David Basin. "Bridging the semantic gap between qualitative and quantitative models of distributed systems." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (October 31, 2022): 315–44. http://dx.doi.org/10.1145/3563299.

Abstract:
Today’s distributed systems must satisfy both qualitative and quantitative properties. These properties are analyzed using very different formal frameworks: expressive untimed and non-probabilistic frameworks, such as TLA+ and Hoare/separation logics, for qualitative properties; and timed/probabilistic-automaton-based ones, such as Uppaal and Prism, for quantitative ones. This requires developing two quite different models of the same system, without guarantees of semantic consistency between them. Furthermore, it is very hard or impossible to represent intrinsic features of distributed object systems—such as unbounded data structures, dynamic object creation, and an unbounded number of messages—using finite automata. In this paper we bridge this semantic gap, overcome the problem of manually having to develop two different models of a system, and solve the representation problem by: (i) defining a transformation from a very general class of distributed systems (a generalization of Agha’s actor model) that maps an untimed non-probabilistic distributed system model suitable for qualitative analysis to a probabilistic timed model suitable for quantitative analysis; and (ii) proving the two models semantically consistent. We formalize our models in rewriting logic, and can therefore use the Maude tool to analyze qualitative properties, and statistical model checking with PVeStA to analyze quantitative properties. We have automated this transformation and integrated it, together with the PVeStA statistical model checker, into the Actors2PMaude tool. We illustrate the expressiveness of our framework and our tool’s ease of use by automatically transforming untimed, qualitative models of numerous distributed system designs—including an industrial data store and a state-of-the-art transaction system—into quantitative models to analyze and compare the performance of different designs.
20

Liang, Bo, Yuangang Liu, Yanlin Shao, Qing Wang, Naidan Zhang, and Shaohua Li. "3D Quantitative Characterization of Fractures and Cavities in Digital Outcrop Texture Model Based on Lidar." Energies 15, no. 5 (February 22, 2022): 1627. http://dx.doi.org/10.3390/en15051627.

Abstract:
The combination of lidar and digital photography provides a new technology for creating a high-resolution 3D digital outcrop model. The digital outcrop model can accurately and conveniently depict the surface 3D properties of an outcrop profile, making up for the shortcomings of traditional outcrop research techniques. However, the advent of digital outcrop poses additional challenges to the 3D spatial analysis of virtual outcrop models, particularly in the interpretation of geological characteristics. In this study, the detailed workflow of automated interpretation of geological characteristics of fractures and cavities on a 3D digital outcrop texture model is described. Firstly, advanced automatic image analysis technology is used to detect the 2D contour of the fractures and cavities in the picture. Then, to obtain an accurate representation of the 3D structure of the fractures and cavities on the digital outcrop model, a projection method for converting 2D coordinates to 3D space based on geometric transformations such as affine transformation and linear interpolation is proposed. Quantitative data on the size, shape, and distribution of geological features are calculated using this information. Finally, a novel and comprehensive automated 3D quantitative characterization technique for fractures and cavities on the 3D digital outcrop texture model is developed. The proposed technology has been applied to the 3D mapping and quantitative characterization of fractures and cavities on the outcrop profile for the Dengying Formation (second member), providing a foundation for profile reservoir appraisal in the research region. Furthermore, this approach may be extended to the 3D characterization and analysis of any point, line, and surface objects derived from outcrop photos, hence increasing the application value of the 3D digital outcrop model.
21

Kaiser, T., C. Clemen, and H. G. Maas. "AUTOMATED ALIGNMENT OF LOCAL POINT CLOUDS IN DIGITAL BUILDING MODELS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-5/W2 (September 20, 2019): 35–39. http://dx.doi.org/10.5194/isprs-archives-xlii-5-w2-35-2019.

Abstract:
For the correct usage and analysis within a BIM environment, image-based point clouds that were created with Structure from Motion (SfM) tools have to be transformed into the building coordinate system via a seven-parameter Helmert transformation. Usually control points are used for the estimation of the transformation parameters. In this paper we present a novel, highly automated approach to calculate these transformation parameters without the use of control points. The process relies on the relationship between wall respectively plane information of the BIM and three-dimensional line data that is extracted from the image data. In a first step, 3D lines are extracted from the oriented input images using the tool Line3D++. These lines are defined by the 3D coordinates of the start and end points. Afterwards the lines are matched to the planes originating from the BIM model representing the walls, floors and ceilings. Besides finding a suitable functional and stochastic model for the observation equations and the adjustment calculation, the most critical aspect is finding a correct match for the lines and the planes. We therefore developed a RANSAC-inspired matching algorithm to get a correct assignment between elements of the two data sources. Synthetic test data sets have been created for evaluating the methodology.
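The core line-to-plane matching test described above can be sketched briefly: a 3D line supports a wall plane when both of its endpoints are (near-)coplanar with it. The plane and line encodings and the tolerance value are illustrative assumptions; the paper embeds such a test in a RANSAC-inspired loop followed by a full Helmert adjustment.

```python
# Sketch of the line/plane matching idea: assign each extracted 3D
# line to a BIM plane containing both of its endpoints.
# Encodings and tolerance are illustrative assumptions.

def point_plane_dist(p, plane):
    a, b, c, d = plane                    # plane: ax + by + cz + d = 0
    norm = (a * a + b * b + c * c) ** 0.5
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / norm

def match_lines(lines, planes, tol=0.05):
    """Map line index -> index of the first plane that contains both
    endpoints within tol metres; unmatched lines are omitted."""
    matches = {}
    for i, (p0, p1) in enumerate(lines):
        for j, plane in enumerate(planes):
            if (point_plane_dist(p0, plane) < tol
                    and point_plane_dist(p1, plane) < tol):
                matches[i] = j
                break
    return matches

walls = [(1, 0, 0, 0), (0, 1, 0, -3)]     # planes x = 0 and y = 3
lines = [((0, 0, 0), (0, 2, 1)),          # lies in the plane x = 0
         ((1, 3, 0), (4, 3, 2))]          # lies in the plane y = 3
print(match_lines(lines, walls))
```

A greedy first-hit assignment like this is ambiguous near plane intersections, which is exactly why the authors resort to a RANSAC-style search over candidate assignments instead.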
22

Kim, H., W. Yoon, and T. Kim. "AUTOMATED MOSAICKING OF MULTIPLE 3D POINT CLOUDS GENERATED FROM A DEPTH CAMERA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 269–72. http://dx.doi.org/10.5194/isprs-archives-xli-b3-269-2016.

Abstract:
In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data by using the ToF (Time of Flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was a SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimetre precision. The generated intensity map contains noisy texture data. We used texture maps for extracting tiepoints, and depth maps for assigning z coordinates to tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps. They were converted into 3D tiepoints using depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single depth map mosaic was calculated and depth values per mosaic pixel were determined by a ray tracing method. For experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
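The third step of the pipeline above, converting pairwise local transformations into global ones relative to a reference cloud, amounts to chaining homogeneous transforms. A minimal sketch, with translation-only example data as an illustrative assumption:

```python
# Chaining pairwise (local) 3D transformations into global ones
# relative to cloud 0. Transforms are 4x4 homogeneous matrices as
# nested lists; the translation-only demo data is an assumption.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4))
             for j in range(4)] for i in range(4)]

def to_global(local_transforms):
    """local_transforms[i] maps cloud i+1 into cloud i's frame;
    returns transforms mapping every cloud into cloud 0's frame."""
    identity = [[float(i == j) for j in range(4)] for i in range(4)]
    out = [identity]
    for t in local_transforms:
        out.append(matmul(out[-1], t))
    return out

def translation(dx, dy, dz):
    return [[1.0, 0.0, 0.0, dx],
            [0.0, 1.0, 0.0, dy],
            [0.0, 0.0, 1.0, dz],
            [0.0, 0.0, 0.0, 1.0]]

# Three clouds, each shifted 1 m along x from the previous one:
globals_ = to_global([translation(1, 0, 0), translation(1, 0, 0)])
print(globals_[2][0][3])   # x offset of cloud 2 in cloud 0's frame
```

In the paper the local transforms are 3D similarity transformations (rotation, translation, and scale) estimated from tiepoints, but the composition into a global frame works the same way.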
24

Lee, Hyeoksoo, Jiwoo Hong, and Jongpil Jeong. "MARL-Based Dual Reward Model on Segmented Actions for Multiple Mobile Robots in Automated Warehouse Environment." Applied Sciences 12, no. 9 (May 7, 2022): 4703. http://dx.doi.org/10.3390/app12094703.

Abstract:
The simple and labor-intensive tasks of workers on the job site are rapidly becoming digital. In the work environment of logistics warehouses and manufacturing plants, moving goods to a designated place is a typical labor-intensive task for workers. These tasks are rapidly undergoing digital transformation by leveraging mobile robots in automated warehouses. In this paper, we studied and tested realistically necessary conditions to operate mobile robots in an automated warehouse. In particular, considering conditions for operating multiple mobile robots in an automated warehouse, we added more complex actions and various routes and proposed a method for mitigating sparse reward problems when learning paths in a warehouse with reinforcement learning. Multi-Agent Reinforcement Learning (MARL) experiments were conducted with multiple mobile robots in an automated warehouse simulation environment, and it was confirmed that the proposed reward model makes learning start earlier even when there is a sparse reward problem, and that learning progress is maintained stably. We expect this study to further the understanding of the actual operation of mobile robots in an automated warehouse.
APA, Harvard, Vancouver, ISO, and other styles
25

Shailesh, Tanuja, Ashalatha Nayak, and Devi Prasad. "An UML Based Performance Evaluation of Real-Time Systems Using Timed Petri Net." Computers 9, no. 4 (November 27, 2020): 94. http://dx.doi.org/10.3390/computers9040094.

Full text
Abstract:
Performance is a critical non-functional parameter for real-time systems, and performance analysis is an important and increasingly challenging task for complex real-time systems. Performance analysis is mostly performed after system development, but early-stage analysis and validation of performance using system models can improve system quality. In this paper, we present an early-stage automated performance evaluation methodology to analyse system performance using the UML sequence diagram model annotated with the Modeling and Analysis of Real-Time and Embedded systems (MARTE) profile. MARTE offers a performance domain sub-profile that is used for representing real-time system properties essential for performance evaluation. In this paper, a transformation technique and transformation rules are proposed to map the UML sequence diagram model into a Generalized Stochastic Timed Petri net model. All the transformation rules are implemented using a metamodel-based approach and the Atlas Transformation Language (ATL). A case study from the manufacturing domain, a Kanban system, is used to validate the proposed technique.
APA, Harvard, Vancouver, ISO, and other styles
26

Dickerson, Charles E., Rosmira Roslan, and Siyuan Ji. "A Formal Transformation Method for Automated Fault Tree Generation From a UML Activity Model." IEEE Transactions on Reliability 67, no. 3 (September 2018): 1219–36. http://dx.doi.org/10.1109/tr.2018.2849013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Belghaddar, Yassine, Carole Delenne, Nanée Chahinian, Abderrahmane Seriai, and Ahlame Begdouri. "Parametrization of a wastewater hydraulic model under incomplete data constraint." IOP Conference Series: Earth and Environmental Science 1136, no. 1 (January 1, 2023): 012053. http://dx.doi.org/10.1088/1755-1315/1136/1/012053.

Full text
Abstract:
Hydraulic simulation is a powerful tool for studying wastewater networks. To achieve this, hydraulic software requires a set of parameters such as pipe slopes, roughness, diameters, etc. However, these pieces of information are rarely known for each and every pipe. Moreover, underground networks are frequently expanded, repaired, and improved, and these changes are not always reported in databases. The task of completing the required data is the most time-consuming part of model implementation. In this context, we present algorithms that complete the missing data required by hydraulic software. We automated this data insertion and its transformation into the SWMM format to make it quicker and easier for the user. This automated solution was compared with manually estimated inputs. The simulation results show coherent hydraulic behaviour.
APA, Harvard, Vancouver, ISO, and other styles
28

Górski, Tomasz, and Grzegorz Ziemski. "UML activity diagram transformation into BPEL integration flow." Bulletin of the Military University of Technology 67, no. 3 (September 28, 2018): 15–45. http://dx.doi.org/10.5604/01.3001.0012.6587.

Full text
Abstract:
The growing interest of companies in integration and interoperability between information systems has increased the significance of Service-Oriented Architecture, which provides tools for Enterprise Application Integration. In that architecture, the Enterprise Service Bus provides the technical means of communication between IT systems, and a key element of that communication is the integration flow. Objective: The aim of this article is to present a new transformation, Integration2BPEL, which automates the development of an executable integration flow expressed in the Web Services Business Process Execution Language (WS-BPEL) from a model of the integration flow expressed as a Unified Modelling Language (UML) activity diagram. Method: The author proposes a model-to-code transformation that generates an integration flow expressed in WS-BPEL, which can be executed in any BPEL-compliant process engine. The integration flow is modelled using a UML activity diagram with stereotypes from the 'UML Profile for Integration Flows' in IBM Rational Software Architect (RSA). Using the Integration2BPEL transformation, a complete, executable integration flow is generated, composed of many mediation mechanisms. The generated integration flows have been executed on OpenESB. Results: The ability to generate a complete integration flow in BPEL that can be run on an enterprise service bus without any additions. The implementation phase of integration flow construction was automated, and every integration flow is implemented according to the same rules. In addition, this helps avoid mistakes made by designers and programmers. Conclusions: Model-Driven Development is an approach that leads to the automation of the design and programming phases. The Integration2BPEL transformation is a uniform mechanism for designing integration flows, and it can potentially prevent implementation errors.
Keywords: Web Services Business Process Execution Language (BPEL), Enterprise Service Bus (ESB), Unified Modelling Language (UML), UML activity diagram, Model-Driven Development (MDD), Transformation.
APA, Harvard, Vancouver, ISO, and other styles
29

Vakulenko, Darya Vitalievna, and Alla Grigorievna Kravets. "REENGINEERING OF BUSINESS PROCESSES OF AGROINDUSTRIAL ENTERPRISES IN CONDITIONS OF THROUGH DIGITAL TRANSFORMATION." Vestnik of Astrakhan State Technical University. Series: Management, computer science and informatics 2021, no. 3 (July 30, 2021): 115–25. http://dx.doi.org/10.24143/2072-9502-2021-3-115-125.

Full text
Abstract:
The article describes the high dynamism and uncertainty of the external and internal environment, which make the implementation of innovative technologies in the management of business processes in the agro-industrial complex (AIC) urgent. Attention is focused on the strategic nature of the transformation of agricultural production within the digital ecosystem. Promising technologies for collecting and processing remote sensing data obtained from various satellite sensors, unmanned vehicles, and weather stations, as well as geographic information systems and global positioning systems, are considered. The fundamental elements of the reengineering of business processes in the context of digital transformation, and the creation of control systems for monitoring the development of agricultural crops using streaming processing of remote sensing data, are considered. The factors restraining and catalyzing production processes in the AIC are substantiated, and the features of the elements of the organization of the digital spatial environment, which largely determine the transition to a unified information support system for an agro-industrial enterprise, are revealed. A structural and functional model of business process reengineering is proposed, aimed at ensuring sustainability in managerial decision-making. As part of the reengineering process, it is planned to create an information environment for an agricultural enterprise consisting of interconnected procedures for merging information from its component functional systems: an automated monitoring system, a system for automated recognition of the state of plant surface elements, and an automated analytical decision support system for selecting agrotechnological techniques.
Reengineering of business processes according to the proposed model will reduce risks in terms of compliance with time factors and increase the production volumes and profitability of an agricultural enterprise, owing to the transition to digital technologies for automated collection and processing of big data, the ability to make decisions based on automated analytical systems, and the ability to store in the knowledge base the generated chains of agrotechnological operations for the needs of future periods.
APA, Harvard, Vancouver, ISO, and other styles
30

Freeman, J. M., and D. G. Ford. "Automated error analysis of serial manipulators and servo heads." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 217, no. 9 (September 1, 2003): 1077–84. http://dx.doi.org/10.1243/095440603322407308.

Full text
Abstract:
This paper presents a general mathematical treatment of serial manipulators, an important example of which is the servo head. The paper includes validation by application to the angle head, via comparison with the previously known transformations, and a new application to the error analysis of the angle head. The usual approach to the error analysis of a servo head is to develop a geometrical model from elementary geometrical considerations using trigonometric relationships and various simplifying assumptions. This approach is very error-prone, difficult to verify, and extremely time-consuming. The techniques described here are matrix methods that have been programmed in a general way to derive automatically the analytical equations relating the angles of rotation of the head, and alignment errors in the head, to the position of the tool and the errors in that position. The approach is to use rotation and transformation matrices to evaluate the influence of various errors such as offsets and angular errors. A general approach to the sign convention and notation for angular errors is presented in an attempt to reduce the possibility of errors of definition.
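The matrix approach described above can be illustrated numerically: compose rotation and translation matrices for a two-axis head, then propagate a small angular error to the tool position. The head geometry (offsets L1, L2) and the error magnitude below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def rot_x(a):
    """Homogeneous rotation about the X axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def rot_z(a):
    """Homogeneous rotation about the Z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans(x, y, z):
    """Homogeneous translation by (x, y, z)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def tool_position(C, A, L1=0.2, L2=0.1):
    """Tool tip of a hypothetical C-axis / A-axis head: rotate about Z,
    offset along Z by L1, rotate about X, offset along Z by L2."""
    T = rot_z(C) @ trans(0, 0, L1) @ rot_x(A) @ trans(0, 0, L2)
    return (T @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]

# A small error eps in the A axis moves the tool by roughly L2 * eps.
eps = 1e-4
dev = np.linalg.norm(tool_position(0.3, 0.7 + eps) - tool_position(0.3, 0.7))
```

Differentiating the chain with respect to each error parameter, as the paper automates symbolically, yields the same first-order sensitivities that this finite-difference check approximates.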
APA, Harvard, Vancouver, ISO, and other styles
31

Rappaport, Jack M., Stephen B. Richter, and Dennis T. Kennedy. "A Strategic Perspective on Using Symbolic Transformation in STEM Education." International Journal of Strategic Decision Sciences 7, no. 1 (January 2016): 39–75. http://dx.doi.org/10.4018/ijsds.2016010103.

Full text
Abstract:
This paper describes and implements an innovative model for teaching Science, Technology, Engineering and Mathematics (STEM) that enhances the decision-making process of students considering a major or a career in STEM fields. The model can also be used as a decision-making tool for educators interested in stressing the importance of STEM for career enhancement and for society as a whole. The model creates analogies and metaphors for various STEM topics using the contents of popular music videos. Theories of neuroscience, the interdisciplinary study of the nervous system, are used to describe and validate our decision-making model. Concepts such as embodied cognition, mirror neurons, and the connection between emotion and cognition are used to explain how the brain processes the information and multi-modal stimuli generated by our model. The model was implemented using the topic of automated decision processes in robotics and automation with a group of university and high school students and teachers. The impact of the model was evaluated using the National Science Foundation (NSF) frameworks for evaluating informal science projects. The results indicate that the model, using symbolic transformation to teach STEM, can have a significant impact on students' attitudes towards STEM and their decision-making process about their careers.
APA, Harvard, Vancouver, ISO, and other styles
32

Agyemang, Malena, Julie Linsey, and Cameron J. Turner. "Transforming functional models to critical chain models via expert knowledge and automatic parsing rules for design analogy identification." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 31, no. 4 (September 14, 2017): 501–11. http://dx.doi.org/10.1017/s0890060417000488.

Full text
Abstract:
Critical chains composed of critical flows and functions have been demonstrated as an effective qualitative analogy retrieval approach based on performance metrics. In prior work, engineers used expert knowledge to transform functional models into critical chain models, which are abstractions of the functional model. Automating this transformation process is highly desirable so as to provide for a robust transformation method. Within this paper, two paradigms for functional modeling abstraction are compared. A series of pruning rules provide an automated transformation approach, and this is compared to the results generated previously through an expert knowledge approach. These two approaches are evaluated against a set of published functional models. The similarity of the resulting transformation of the functional models into critical chain models is evaluated using a functional chain similarity metric, developed in previous work. Once critical chain models are identified, additional model evaluation criteria are used to evaluate the utility of the critical chain models for design analogy identification. Since the functional vocabulary acts as a common language among designers and engineers to abstract and represent critical design artifact information, analogous matching can be made about the functional vocabulary. Thus, the transformation of functional models into critical chain models enables engineers to use functional abstraction as a mechanism to identify design analogies. The critical flow rule is the most effective first step when automatically transforming a functional model to a critical chain model. Further research into more complex critical chain model architectures and the interactions between criteria is merited.
APA, Harvard, Vancouver, ISO, and other styles
33

Yu, Yijun, Haruhiko Kaiya, Nobukazu Yoshioka, Zhenjiang Hu, Hironori Washizaki, Yingfei Xiong, and Amin Hosseinian-Far. "Goal Modelling for Security Problem Matching and Pattern Enforcement." International Journal of Secure Software Engineering 8, no. 3 (July 2017): 42–57. http://dx.doi.org/10.4018/ijsse.2017070103.

Full text
Abstract:
This article describes how earlier detection of security problems and the implementation of solutions would be a cost-effective approach for developing secure software systems. Developing, gathering and sharing similar repeatable programming knowledge and solutions has led to the introduction of Patterns in the 90's. The same concept has been adopted to realise reoccurring security knowledge and hence security patterns. Detecting a security problem using the patterns in requirements models may lead to its early prevention. In this article, the authors have provided an overview of security patterns in the past two decades, followed by a summary of i*/Tropos goal modelling framework. Section 2 outlines model-driven development, meta-models and model transformation, within the context of requirements engineering. They have summarised security access control types, and formally described role-based access control (RBAC) in particular as a pattern that may occur in the stakeholder requirements models. Then the authors used the i* modelling language and some elements from its constructs - model-driven queries and transformations - to describe the pattern enforcement. This is applied to a number of requirements models within the literature, and the pattern-based transformation tool they designed has automated the detection and resolution of this security pattern in several goal-oriented stakeholder requirements. Finally, the article also reflects on a variety of existing applications and future work.
APA, Harvard, Vancouver, ISO, and other styles
34

EL-GENDY, HAZEM, and NABIL EL-KADHI. "FORMAL METHOD FOR AUTOMATED TRANSFORMATION OF LOTOS SPECIFICATIONS TO ESTELLE SPECIFICATIONS." International Journal of Software Engineering and Knowledge Engineering 15, no. 05 (October 2005): 873–91. http://dx.doi.org/10.1142/s0218194005002567.

Full text
Abstract:
ISO and IEC have jointly developed two Formal Description Techniques (FDTs) for specifying distributed real-time systems such as computer/telecommunications protocols: Lotos and Estelle. In this paper, a formal method for the automated transformation of a Lotos specification to an Estelle specification is presented. The method is applicable to various Lotos specification styles and to various communications protocols of the ISO OSI layers. Our method has applications in conformance testing of such systems and in building a common semantic model for the various FDTs. In this paper, we develop an algorithm for constructing a 'Data Oriented'-Restricted Behavior Tree T that represents both the control flow and data flow aspects of the system. We then develop an algorithm for constructing the Estelle specification from T. A minimization rule is also developed to optimize the size of the Estelle specification by reducing both the number of states and the number of transitions.
APA, Harvard, Vancouver, ISO, and other styles
35

Legat, Benoît, Oscar Dowson, Joaquim Dias Garcia, and Miles Lubin. "MathOptInterface: A Data Structure for Mathematical Optimization Problems." INFORMS Journal on Computing 34, no. 2 (March 2022): 672–89. http://dx.doi.org/10.1287/ijoc.2021.1067.

Full text
Abstract:
We introduce MathOptInterface, an abstract data structure for representing mathematical optimization problems based on combining predefined functions and sets. MathOptInterface is significantly more general than existing data structures in the literature, encompassing, for example, a spectrum of problem classes from integer programming with indicator constraints to bilinear semidefinite programming. We also outline an automated rewriting system between equivalent formulations of a constraint. MathOptInterface has been implemented in practice, forming the foundation of a recent rewrite of JuMP, an open-source algebraic modeling language in the Julia language. The regularity of the MathOptInterface representation leads naturally to a general file format for mathematical optimization we call MathOptFormat. In addition, the automated rewriting system provides modeling power to users while making it easy to connect new solvers to JuMP. Summary of Contribution: This paper describes a new abstract data structure for representing mathematical optimization models with a corresponding file format and automatic transformation system. The advances are useful for algebraic modeling languages, allowing practitioners to model problems more naturally and more generally than before.
APA, Harvard, Vancouver, ISO, and other styles
36

Iloon, Tayebeh, Ramin Barati, and Hamid Azad. "Siamese Network-Based Feature Transformation for Improved Automated Epileptic Seizure Detection." Complexity 2022 (December 6, 2022): 1–14. http://dx.doi.org/10.1155/2022/9161827.

Full text
Abstract:
Epilepsy is a common electrophysiological disorder of the brain, detected mainly through electroencephalogram (EEG) signals. Since correctly diagnosing epileptic seizures by monitoring the EEG signal is very tedious and time-consuming for a neurologist, a growing number of studies have been conducted on developing automated epileptic seizure detection (AESD). In this work, a novel supervised model is proposed for AESD. Initially, the EEG signals are collected from the Bonn University EEG (BU-EEG) database. Then, empirical mode decomposition and feature extraction (a combination of entropy, frequency, and statistical features) are applied to extract the features. Furthermore, a Siamese network is utilized to reduce the number of extracted features and obtain the most discriminative ones. These features are then used to classify seizure and non-seizure EEG signals with a support vector machine classifier. This paper examines the Siamese network's contribution as a learning-based feature transformation in improving seizure detection performance. The numerical results confirm that the proposed framework can achieve perfect classification performance (100% accuracy). This approach can constructively help doctors detect epileptic seizure activity and reduce their workload.
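The feature-extraction stage described (entropy plus statistical descriptors computed per EEG segment) can be sketched minimally. The specific features below are illustrative stand-ins, not the paper's exact feature set:

```python
import numpy as np

def eeg_features(x, bins=16):
    """Toy per-segment feature vector: Shannon entropy of the amplitude
    histogram plus simple statistical descriptors (illustrative only)."""
    x = np.asarray(x, float)
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins
    entropy = -(p * np.log2(p)).sum()             # Shannon entropy in bits
    mean_abs_diff = np.abs(np.diff(x)).mean()     # crude activity measure
    return np.array([entropy, x.mean(), x.std(), mean_abs_diff])
```

Vectors like these would be stacked per segment and fed to a downstream feature transformation and classifier; a flat (constant) segment yields zero entropy and zero activity, while a noisy seizure-like segment scores high on both.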
APA, Harvard, Vancouver, ISO, and other styles
37

Ivanov, Toni, Aleksandar Simonovic, Nebojsa Petrovic, Vasko Fotev, and Ivan Kostic. "Influence of selected turbulence model on the optimization of a Class-Shape Transformation parameterized airfoil." Thermal Science 21, suppl. 3 (2017): 737–44. http://dx.doi.org/10.2298/tsci160209194i.

Full text
Abstract:
An airfoil was parameterized using the class-shape transformation technique and then optimized via a genetic algorithm. The aerodynamic characteristics of the airfoil were obtained using CFD software. The automated numerical technique was validated against available experimental data, and the optimization procedure was then repeated for a few different turbulence models. The resulting optimized airfoils were compared in order to gain some insight into the influence of the different turbulence models on the optimization result.
APA, Harvard, Vancouver, ISO, and other styles
38

Benabbou, Amel, and Safia Nait-Bahloul. "Automated Context Formalization for Context-aware Specification Approach." International Journal of Information System Modeling and Design 9, no. 3 (July 2018): 23–47. http://dx.doi.org/10.4018/ijismd.2018070102.

Full text
Abstract:
Requirement specification is a key element in model-checking verification. The context-aware approach is an effective technique for automating the specification of requirements under specific environmental conditions. Most existing approaches provide no support for this crucial task and rely mainly on considerable effort and expertise from engineers. A domain-specific language, called CDL, has been proposed to facilitate requirement specification by formalizing contexts. However, feedback has shown that manually writing CDL is hard, error-prone, and difficult to grasp for complex systems. In this article, the authors propose an approach to automatically generate CDL models using (IODs) elaborated through transformation chains from textual use cases. They offer an intermediate formalism between informal use case scenarios and CDL models, allowing engineers to work with familiar artifacts. Thanks to such a high-level formalism, the gap between informal and formal requirements is reduced; consequently, requirement specification is facilitated.
APA, Harvard, Vancouver, ISO, and other styles
39

Kemerbaev, Nurgan T., and Andrej A. Sholomickij. "NEW SURVEYING TASKS IN AUTOMATED INDUSTRIAL ENTERPRISE MANAGEMENT SYSTEM." Interexpo GEO-Siberia 1 (May 21, 2021): 35–39. http://dx.doi.org/10.33764/2618-981x-2021-1-35-39.

Full text
Abstract:
In 2018, the Pavlodar petrochemical plant introduced a modern maintenance and repair system designed to modernize and improve the production process. The maintenance system, consisting of subsystems, allows:
- organizing a system for managing the condition of the company's fixed assets, taking into account the current condition of the equipment;
- receiving information about performed maintenance, repair, or replacement works, and about reducing the risks of unplanned production downtime;
- determining the feasibility of repairing or replacing equipment based on the total cost of maintenance;
- establishing the validity of planning the procurement of spare parts and materials for scheduled repairs;
- reducing the risk of missing spare parts in case of an emergency shutdown of equipment.
The process of implementing and expanding the maintenance subsystems is ongoing. The Engineering Data Creation and Management (EDCM) system is being introduced using AVEVA solutions. The introduction and integration of EDCM is carried out with the IBM Maximo subsystem of the maintenance system, based on a three-dimensional digital plant model created in the AVEVA Everything 3D software. A brief description is given of the process of creating a three-dimensional model of an industrial enterprise, its connection with the maintenance system, and the transformation of surveying information when creating digital twins of industrial enterprises, using the example of creating a 3D model of PNHZ for an engineering data management system. As a result of the development of 3D modeling within the widespread introduction of digitalization in manufacturing, surveying work has received a new impetus for development.
It should be noted that surveying measurements are the key to the success of a complex and multi-stage technological process for creating and operating a digital spatial twin, or 3D plant model, in a TOPO system.
APA, Harvard, Vancouver, ISO, and other styles
40

HRYMAK, ROMAN, OLEKSANDR PASICHNYK, TETIANA SKRYPNYK, and EDUARD MANZIUK. "INFORMATION TECHNOLOGY OF MAKING CONTROLLED CRITICALLY SAFE DECISIONS ABOUT MODEL PARAMETERS CONVERSION AT TRANSFER BETWEEN VISUALIZATION SYSTEMS." HERALD OF KHMELNYTSKYI NATIONAL UNIVERSITY 299, no. 4 (October 2021): 35–42. http://dx.doi.org/10.31891/2307-5732-2021-299-4-35-42.

Full text
Abstract:
In modern production, computer-aided design (CAD) systems have become widespread, making it possible to create technological processes with less time and engineering effort. A computer-aided design system is an organizational and technical set of software tools designed to automate the design process; it consists of staff and a group of technical, software, and other means of automating their activities. Computer-aided design systems are an important link in industrial design, widely used in many industries, including the automotive, shipbuilding, and aerospace industries, industrial and architectural design, prosthetics, and many others. CAD is also widely used in computer animation for special effects in movies, commercials, and technical materials, often referred to as digital content. Due to its economic importance, computer-aided design has become a main driving force of research in computational geometry, computer graphics (both hardware and software), and discrete differential geometry. In today's automated manufacturing market, most designers use additional engineering software. As a rule, such add-ins are used in the functional infrastructure of a specialized set of solutions that implement the principle of Building Information Modeling (BIM). The most common system of this type is Autodesk Revit, a platform that provides three-dimensional modeling of building elements and flat drawing of design elements, intended for architects, designers, and design engineers. This research presents information technology for the Autodesk Revit computer-aided design system, based on the 2019-2021 packages, which allows users of the architectural visualization platform to use functions for viewing, processing, mathematical transformation, and serialization on elements of 3D building models.
APA, Harvard, Vancouver, ISO, and other styles
41

Sinagra, Marco, Carmelo Nasello, Tullio Tucciarelli, Silvia Barbetta, Christian Massari, and Tommaso Moramarco. "A Self-Contained and Automated Method for Flood Hazard Maps Prediction in Urban Areas." Water 12, no. 5 (April 29, 2020): 1266. http://dx.doi.org/10.3390/w12051266.

Full text
Abstract:
Water depths and velocities predicted inside urban areas during severe storms are traditionally the final result of a chain of hydrologic and hydraulic models. The use of a single model embedding all the components of the rainfall-runoff transformation, including the flux concentration in the river network, can reduce the subjectivity and, as a consequence, the final uncertainty of the computed water depths and velocities. In the model construction, a crucial issue is the management of the topographic data. The information given by a Digital Elevation Model (DEM) available on a regular grid, as well as all the other elevation data provided by single points or contour lines, allows the creation of a Triangulated Irregular Network (TIN) based unstructured digital terrain model, which provides the spatial discretization for both the hydraulic and the hydrologic models. The procedure is split into four steps: (1) correction of the elevation z* measured in the nodes of a preliminary network connecting the edges with all the DEM cell centers; (2) selection of a suitable hydrographic network where at least one edge of each node has a strictly descending elevation; (3) generation of the computational mesh, whose edges include all the edges of the hydrographic network as well as other lines following internal boundaries provided by roads or other infrastructure; and (4) estimation of the elevation of the nodes of the computational mesh. A suitable rainfall-runoff transformation model is finally applied to each cell of the identified computational mesh. The proposed methodology is applied to the Sovara stream basin, in central Italy, for two flood events: one used for parameter calibration and the other for validation purposes. The comparison between the simulated and observed flooded areas for the validation event shows a good reconstruction of the urban flooding.
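The drainage condition in step (2), that every node has at least one strictly descending edge, can be sketched as a naive fixpoint that raises each pit node just above its lowest neighbour. This is an illustrative simplification on a toy node graph, not the authors' algorithm:

```python
def find_pits(elev, adj, outlet):
    """Nodes (other than the outlet) with no strictly descending edge."""
    return [n for n in adj if n != outlet
            and min(elev[m] for m in adj[n]) >= elev[n]]

def enforce_drainage(elev, adj, outlet, eps=0.01):
    """Raise each pit just above its lowest neighbour, and repeat until
    every node other than the outlet has a strictly descending edge."""
    elev = dict(elev)                     # do not mutate the caller's data
    while True:
        pits = find_pits(elev, adj, outlet)
        if not pits:
            return elev
        for n in pits:
            elev[n] = min(elev[m] for m in adj[n]) + eps

# Hypothetical chain o-a-b-c with a local depression at b
elev = {"o": 0.0, "a": 1.0, "b": 0.5, "c": 2.0}
adj = {"o": ["a"], "a": ["o", "b"], "b": ["a", "c"], "c": ["b"]}
fixed = enforce_drainage(elev, adj, outlet="o")
```

After the fix, node b is raised just above node a and drains through it toward the outlet; real terrain processing would instead resolve depressions on the full TIN with an efficient priority-flood style traversal.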
APA, Harvard, Vancouver, ISO, and other styles
42

Zarembo, Imants. "Automatic Transformation of Relational Database Schema into OWL Ontologies." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 3 (June 16, 2015): 217. http://dx.doi.org/10.17770/etr2015vol3.170.

Full text
Abstract:
Ontology alignment, or ontology matching, is a technique to map different concepts between ontologies, and requires at least two ontologies. In certain scenarios, such as data integration, heterogeneous database integration, and data model compatibility evaluation, a need can arise to transform a relational database schema into an ontology. To conduct a successful transformation it is necessary to identify the differences between relational database schema and ontology information representation methods, and then to define transformation rules. The most straightforward but time-consuming way to carry out the transformation is to do it manually. Often this is not an option due to the size of the data to be transformed, so an automated solution is needed. The automatic transformation of an OWL ontology from a relational database schema is presented in this paper; the data representation differences between relational database schemas and OWL ontologies are described; the transformation rules are defined; and a prototype transformation tool is developed to perform the described transformation.
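Transformation rules of the kind defined in such work commonly map table to owl:Class, plain column to owl:DatatypeProperty, and foreign key to owl:ObjectProperty. A minimal sketch of that rule set follows; the schema encoding and the emitted Turtle-like output are simplified assumptions, not the paper's tool:

```python
def schema_to_owl(schema, base="http://example.org/onto#"):
    """Apply simple mapping rules: table -> owl:Class, non-key column ->
    owl:DatatypeProperty, foreign key -> owl:ObjectProperty.
    `schema["tables"]` is assumed to list only non-foreign-key columns."""
    lines = [
        f"@prefix : <{base}> .",
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
    ]
    for table, cols in schema["tables"].items():
        lines.append(f":{table} a owl:Class .")
        for col in cols:
            lines.append(f":{table}_{col} a owl:DatatypeProperty ; "
                         f"rdfs:domain :{table} .")
    for table, col, target in schema.get("foreign_keys", []):
        lines.append(f":{table}_{col} a owl:ObjectProperty ; "
                     f"rdfs:domain :{table} ; rdfs:range :{target} .")
    return "\n".join(lines)

# Hypothetical two-table schema with one foreign key
schema = {"tables": {"Person": ["name"], "Order": ["date"]},
          "foreign_keys": [("Order", "buyer", "Person")]}
ttl = schema_to_owl(schema)
```

A production mapping would also handle primary keys (e.g. as IRI templates), nullability, and datatype ranges per column type.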
APA, Harvard, Vancouver, ISO, and other styles
43

Baev, Aleksey V., Aleksandr V. Samonov, Vadim M. Safonov, Sergey V. Krasnov, and Sergey R. Malyshev. "METHODS AND TOOLS FOR THE DEVELOPMENT AND VERIFICATION OF UML MODELS OF A SET OF REQUIREMENTS TO AUTOMATED MANAGEMENT SYSTEMS FOR ORGANIZATIONAL AND TECHNICAL SYSTEMS." Автоматизация процессов управления 4, no. 66 (2021): 95–103. http://dx.doi.org/10.35752/1991-2927-2021-4-66-95-103.

Full text
Abstract:
The article describes methods and tools for developing and verifying a formal model of a set of requirements for automated management systems of complex organizational and technical systems, in accordance with the methodology of the model-oriented approach. A classification of the quality characteristics of a set of requirements is proposed. The methodology and means of automated development of a formal model of a set of requirements, its transformation, and its loading into the Neo4j graph database environment are presented. Methods and means of verification and validation of a set of requirements by creating and executing test tasks and queries in the Cypher language are proposed. Promising areas of further research are identified.
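The "transform and load into Neo4j" step described above can be sketched by generating Cypher statements from a flat requirements list. The input format and the `Requirement`/`REFINES` vocabulary are invented for the example, not taken from the paper:

```python
# Minimal sketch: turn a requirements model into Cypher CREATE statements
# for loading into a Neo4j-style graph database (hypothetical schema).

def to_cypher(requirements):
    """requirements: list of (id, text, parent_id_or_None)."""
    stmts = []
    for rid, text, _parent in requirements:
        # One node per requirement.
        stmts.append(f"CREATE (r{rid}:Requirement {{id: {rid}, text: '{text}'}})")
    for rid, _text, parent in requirements:
        if parent is not None:
            # Child requirement refines its parent.
            stmts.append(f"CREATE (r{rid})-[:REFINES]->(r{parent})")
    return stmts

reqs = [
    (1, "System shall log events", None),
    (2, "Logs shall be timestamped", 1),
]
cypher = to_cypher(reqs)
```

Once loaded, verification queries of the kind the paper proposes (e.g. finding unrefined or orphaned requirements) become Cypher `MATCH` patterns over this graph.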
44

Zou, Guoxia. "Designing a Credit Bank Model Based on Blockchain Technology." Scientific and Social Research 4, no. 4 (April 28, 2022): 42–49. http://dx.doi.org/10.26689/ssr.v4i4.3779.

Abstract:
In the implementation of a credit bank, the transformation of learning accomplishments cannot be automated, and the workload of credit-achievement management is large; moreover, credits cannot interact freely across different credit-banking systems. To solve these problems, this study proposes the use of alliance chain technology to overcome the technical challenges encountered in establishing a credit bank. In line with the basic framework of the alliance chain, a credit bank model based on blockchain technology is designed. At present, only the model design has been completed; the implementation of the model will take place at a later stage.
45

Gorbatsevich, V., Yu Vizilter, V. Knyaz, and S. Zheltov. "Face Pose Recognition Based on Monocular Digital Imagery and Stereo-Based Estimation of its Precision." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 257–63. http://dx.doi.org/10.5194/isprsarchives-xl-5-257-2014.

Abstract:
A technique for automated face detection and pose estimation from a single image is developed. The algorithm includes face detection, facial-feature localization, face/background segmentation, face pose estimation, and image transformation to a frontal view. Automatic face/background segmentation is performed by an original graph-cut technique based on the detected feature points. The precision of face-orientation estimation from monocular digital imagery is addressed: the approach to precision estimation is based on comparing synthesized 2D facial images with a scanned 3D face model. Software for modelling and measurement was developed, and a special system for non-contact measurements was created; the required set of real 3D face models and colour facial textures was obtained with this system. The precision-estimation results demonstrate that the accuracy of face pose estimation is sufficient for subsequent successful face recognition.
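A crude intuition for the pose-estimation step above: for small rotations, head yaw can be approximated from the horizontal offset of the nose tip relative to the eye midpoint. The landmark coordinates and the small-angle model below are invented for illustration; the paper's pipeline is far more elaborate:

```python
import math

def estimate_yaw(left_eye, right_eye, nose_tip):
    """Rough yaw (degrees) from three 2D landmarks; toy model only."""
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2.0
    half_eye_dist = (right_eye[0] - left_eye[0]) / 2.0
    # Normalised nose offset approximates sin(yaw) for small rotations.
    offset = (nose_tip[0] - eye_mid_x) / half_eye_dist
    offset = max(-1.0, min(1.0, offset))
    return math.degrees(math.asin(offset))

# Frontal face: nose tip centred between the eyes -> yaw near 0 degrees.
frontal_yaw = estimate_yaw((30, 50), (70, 50), (50, 65))
# Nose shifted right by half the half-eye-distance -> about 30 degrees.
turned_yaw = estimate_yaw((30, 50), (70, 50), (60, 65))
```

Production systems instead fit a 3D face model to many landmarks (e.g. via perspective-n-point solving), which is what enables the stereo-based precision evaluation the paper performs.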
46

Thangaraj, Jagadeeswaran, and Senthilkumaran Ulaganathan. "A Comparative Study on Transformation of UML/OCL to Other Specifications." Recent Advances in Computer Science and Communications 13, no. 2 (June 3, 2020): 256–64. http://dx.doi.org/10.2174/2213275912666190129121059.

Abstract:
Background: Static verification is a sound programming methodology that permits automated reasoning about the correctness of an implementation with respect to its formal specification before execution. The Unified Modelling Language (UML) is the most commonly used modelling language for describing a client's requirements, and the Object Constraint Language (OCL) is a formal language that allows users to express textual constraints over a UML model. UML/OCL therefore expresses a formal specification and helps developers implement code according to the client's requirements through software design. Objective: This paper compares the existing approaches that generate Java, C++, or C# code, or JML or Spec# specifications, from UML/OCL. Methods: Nowadays, software systems are developed via automatic code generation from design to implementation, using formal specification and static analysis. This study considers transformation from design to implementation and vice versa, using model transformation, code generation, or other techniques. Results: The related code-generation tools do not support verification at the implementation phase; conversely, the specification-generation tools do not generate all the properties needed for verification at the implementation phase. Conclusion: If the generated system supported verification with all the required properties, developers would need less effort to produce correct software. This study therefore recommends a new framework that acts as an interface between design and implementation to generate verified software systems.
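The design-to-implementation gap the survey above discusses can be illustrated by turning an OCL-style invariant into a runtime check. The class and the constraint are invented for this example and are not taken from the paper:

```python
# OCL: context Account inv: self.balance >= 0
# A code generator in the style the survey compares would emit a check
# like _check_invariants() after every state-changing operation.

class Account:
    def __init__(self, balance):
        self.balance = balance
        self._check_invariants()

    def withdraw(self, amount):
        self.balance -= amount
        self._check_invariants()

    def _check_invariants(self):
        # Generated from the OCL invariant above.
        assert self.balance >= 0, "invariant violated: balance >= 0"

acc = Account(100)
acc.withdraw(30)
remaining = acc.balance
```

Static verifiers (the JML/Spec# route in the survey) check the same invariant at compile time instead of raising at run time, which is why the generated properties must be complete.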
47

Fuchs, S., M. Witbrock, J. Dimyadi, and R. Amor. "Neural Semantic Parsing of Building Regulations for Compliance Checking." IOP Conference Series: Earth and Environmental Science 1101, no. 9 (November 1, 2022): 092022. http://dx.doi.org/10.1088/1755-1315/1101/9/092022.

Abstract:
Computerising building regulations to allow reasoning is one of the main challenges in automated compliance checking in the built environment. While regulations have long been translated manually, in recent years natural language processing (NLP) has been used to support or automate this task. While rule- and ontology-based information extraction and transformation approaches have achieved accurate translations for narrow domains and specific regulation types, machine learning (ML) promises increased scalability and adaptability to new regulation styles. Since ML usually requires many annotated examples as training data, we take advantage of the long history of building-code computerisation and use a corpus of manually translated regulations to train a transformer-based encoder-decoder model. Given a relatively small corpus, the model learns to predict the logical structure and extracts entities and relations reasonably well. While the translation quality is not yet adequate to fully automate the process, the model shows potential to serve as an auto-completion system and to identify manually translated regulations that need to be reviewed.
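The kind of regulation-to-logic translation the paper above automates with a neural model can be illustrated with a rule-based baseline. The clause pattern and the output tuple format are invented for the sketch; real regulations require far more than one pattern, which is exactly the scalability argument for ML:

```python
import re

# Toy semantic parser: "X shall be not less/more than V U" -> (entity, op, value, unit)

CLAUSE = re.compile(
    r"(?P<entity>[\w ]+?) shall (?:be )?not (?P<cmp>less|more) than "
    r"(?P<value>[\d.]+)\s*(?P<unit>\w+)"
)

def parse_clause(text):
    m = CLAUSE.search(text)
    if not m:
        return None  # clause style not covered by this single rule
    op = ">=" if m.group("cmp") == "less" else "<="
    return (m.group("entity").strip().lower(), op,
            float(m.group("value")), m.group("unit"))

rule = parse_clause("The ceiling height shall be not less than 2.4 m")
```

An encoder-decoder model, as trained in the paper, learns such mappings from manually translated examples instead of hand-written patterns, trading precision on known styles for coverage of new ones.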
48

Kalla, Hamoudi, David Berner, and Jean-Pierre Talpin. "Automated Generation of Synchronous Formal Models from SystemC Descriptions." Journal of Circuits, Systems and Computers 28, no. 04 (March 31, 2019): 1950061. http://dx.doi.org/10.1142/s0218126619500610.

Abstract:
SystemC is one of the most popular electronic system-level design languages, and it is embraced by a growing community seeking to move to a higher level of abstraction. It lacks, however, a standard way of integrating formal methods and formal verification techniques into a SystemC design flow. In this paper, we show how SystemC descriptions are automatically transformed into the formal synchronous language Signal, conserving the original structure and enabling the application of formal verification techniques. Signal provides simple semantics of concurrency and time, and allows verification with an existing theorem prover and model checker. The proposed approach consists of two steps: extraction of the structure and transformation of the behavior. In the first step, the SystemC model is analyzed and the structural information is extracted. In the second step, for each SystemC module, the corresponding Signal behavior is generated and filled into the already prepared Signal structure.
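The first of the two steps above, structure extraction, can be sketched with a naive pattern match over a toy SystemC module. A real tool would use a proper C++ front end; the module and the regular expressions here are purely illustrative:

```python
import re

# A toy SystemC module to extract structure from (illustrative only).
SC_SOURCE = """
SC_MODULE(counter) {
    sc_in<bool>  clk;
    sc_in<bool>  reset;
    sc_out<int>  value;
};
"""

def extract_structure(src):
    """Return the module name and its port list (name, direction, type)."""
    module = re.search(r"SC_MODULE\((\w+)\)", src).group(1)
    ports = re.findall(r"sc_(in|out)<([\w ]+)>\s+(\w+);", src)
    return {
        "module": module,
        "ports": [(name, direction, dtype) for direction, dtype, name in ports],
    }

structure = extract_structure(SC_SOURCE)
```

In the paper's flow, this structural skeleton becomes the prepared Signal process interface into which the translated behavior of each module is then filled.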
49

RAFE, VAHID, ADEL T. RAHMANI, and REZA RAFEH. "FORMAL ANALYSIS OF UML 2.0 ACTIVITIES USING GRAPH TRANSFORMATION SYSTEMS." International Journal of Software Engineering and Knowledge Engineering 20, no. 05 (August 2010): 679–94. http://dx.doi.org/10.1142/s0218194010004918.

Abstract:
Graph transformation is a general visual modeling language suitable for formally stating the dynamic semantics of designed models. We present a highly understandable yet precise approach to formally defining the behavioral semantics of UML 2.0 Activity diagrams using graph transformation, taking both control-flow and data-flow semantics into account. Our proposed semantics is based on token-like semantics and traverse-to-completion. The main advantage of our approach is automated formal verification and analysis of UML Activities. We use AGG to design Activities and apply our previous approach to model checking graph transformation systems, so that designers can verify and analyze the designed Activity diagrams. Since workflow modeling is one of the main application areas of Activities, we use the proposed semantics for modeling and verification of workflows to illustrate our approach.
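The token-like semantics mentioned above can be given a minimal operational flavour with a toy activity graph. The node names and the breadth-first traversal are invented for illustration; in particular, a faithful join node would synchronise its incoming tokens, which this sketch does not model:

```python
# Toy token flow through an activity graph: a token starts at the initial
# node and is offered along every outgoing edge (fork duplicates it).

def run_activity(edges, start="initial"):
    """edges: {node: [successor, ...]}; returns nodes in visit order."""
    visited, frontier = [], [start]
    while frontier:
        node = frontier.pop(0)  # consume the oldest pending token
        if node in visited:
            continue            # crude stand-in for join synchronisation
        visited.append(node)
        frontier.extend(edges.get(node, []))
    return visited

activity = {
    "initial": ["fork"],
    "fork": ["check stock", "check credit"],  # concurrent flows
    "check stock": ["join"],
    "check credit": ["join"],
    "join": ["final"],
}
order = run_activity(activity)
```

The paper instead encodes such steps as graph transformation rules, so that the state space of token placements can be model checked rather than merely simulated.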
50

Nemova, Natalya A., Aleksander V. Reznik, and Vladimir N. Kapov. "MODELING OF GEOMECHANICAL PROCESSES IN THE FIELDS IN THE CONDITIONS OF DIGITAL TRANSFORMATION OF MINING ENTERPRISES." Interexpo GEO-Siberia 2, no. 3 (May 21, 2021): 332–41. http://dx.doi.org/10.33764/2618-981x-2021-2-3-332-341.

Abstract:
The integrated digitalization of a mining enterprise can provide a positive effect through optimization of the relationships between the elements of the mining system. Digitalization involves the transition from fragmentary automation of individual stages or processes to fully automated production, controlled in real time by intelligent systems. 3D models, including geological-structural analysis and geodynamic modeling of the geological structures of the deposit, should be used to obtain a more accurate and objective view of the state of the rock mass and its changes over time. The creation of an integrated geomechanical model of the field, including a geological model; models of rock-mass and rock properties; and structural, hydrogeological, and other models, enables current monitoring and long-term forecasting of the stress-strain state of the geological medium of mining enterprises.