Journal articles on the topic 'Software re-engineering; Legacy systems'

Consult the top 50 journal articles for your research on the topic 'Software re-engineering; Legacy systems.'

1. Guo, Jiang. "Software reuse through re-engineering the legacy systems." Information and Software Technology 45, no. 9 (June 2003): 597–609. http://dx.doi.org/10.1016/s0950-5849(03)00047-8.

2. Guo, Jiang, and Luqi. "Object Modeling to Re-engineer Legacy Systems." International Journal of Software Engineering and Knowledge Engineering 10, no. 04 (August 2000): 471–85. http://dx.doi.org/10.1142/s0218194000000225.

Abstract:
This paper summarizes our experiences in using computer-supported methods to develop a software architecture to support the re-engineering of the Janus Combat Simulation System. We have analyzed the Janus FORTRAN source code, interviewed Janus domain experts, developed an object-oriented architecture for the Janus Combat Simulation subsystem, and validated the architecture with an executable prototype. In this paper, we propose methods to facilitate the evolution of the software component of these systems by recovering the behavior of the systems using systematic methods, and illustrate their use in the context of the Janus System.
3. Alamin M, Hind, and Hany H. Ammar. "Concerns-Based Reverse Engineering for Partial Software Architecture Visualization." JOIV: International Journal on Informatics Visualization 4, no. 2 (May 26, 2020): 58. http://dx.doi.org/10.30630/joiv.4.2.357.

Abstract:
Recently, reverse engineering (RE) has become one of the essential engineering trends for software evolution and maintenance. RE is used to support the process of analyzing and recapturing the design information of legacy or complex systems during the maintenance phase. The major problem stakeholders face in understanding the architecture of existing software systems is that architecture information is difficult to obtain: systems are large, and the existing architecture document is often missing or does not match the current implementation of the source code. Therefore, much effort and time are needed from multiple stakeholders, such as developers, maintainers, and architects, to obtain, re-document, and visualize the architecture of a target system from its source code files. Current work is mainly focused on the developer viewpoint. In this paper, we present an RE methodology for visualizing architectural information for multiple stakeholders and viewpoints, based on applying the RE process to specific parts of the source code. The process is driven by eliciting stakeholders' concerns on specific architectural viewpoints to obtain and visualize architectural information related to these concerns. Our contributions are threefold: (1) the RE methodology is based on the IEEE 1471 standard for architectural description and supports stakeholder concerns, including those of the end-user and maintainer; (2) it supports the visualization of a particular part of the target system by providing a visual model of the architectural representation which highlights the main components needed to execute specific functionality of the target system; (3) the methodology also uses architecture styles to organize the visual architecture information. We illustrate the methodology using a case study of a legacy web application system.
4. Shaikh, Mohsin, and Chan-Gun Lee. "Aspect Oriented Re-engineering of Legacy Software Using Cross-Cutting Concern Characterization and Significant Code Smells Detection." International Journal of Software Engineering and Knowledge Engineering 26, no. 03 (April 2016): 513–36. http://dx.doi.org/10.1142/s0218194016500212.

Abstract:
Although object-oriented programming (OOP) methodologies immensely promote reusable and well-factored decomposition of complex source code, legacy software systems often show symptoms of deteriorating design over time due to lack of maintenance. Software systems may have different business and application contexts, but most of them require a similar maintenance mechanism of understanding, analysis, and transformation. As a consequence, intensive re-engineering efforts based on a model-driven approach can be effective, ensuring that best practices are followed during maintenance and eventually reducing development cost. In this paper, we suggest a detailed re-engineering framework which includes: (i) a rigorous and automated source code analysis technique for identification, characterization, and prioritization of the most prominent and threatening design flaws in legacy software; (ii) migration of the existing code to aspect-oriented programming (AOP) code by exploiting the current state of the art in aspect mining and incorporating behavioral knowledge of cross-cutting concerns. To exemplify how the approach works, a case study has been conducted to experimentally validate the idea and analyze the effect of the process on a specific software quality spectrum. An explicit analysis of prevalent work on the subject, with critical reviews, is also presented to further support the proposed re-engineering framework.
5. Conejero, José M., Roberto Rodríguez-Echeverría, Fernando Sánchez-Figueroa, Marino Linaje, Juan C. Preciado, and Pedro J. Clemente. "Re-engineering legacy Web applications into RIAs by aligning modernization requirements, patterns and RIA features." Journal of Systems and Software 86, no. 12 (December 2013): 2981–94. http://dx.doi.org/10.1016/j.jss.2013.04.053.

6. Moraga, Maximiliano, and Yang-Yang Zhao. "Reverse engineering a legacy software in a complex system: A systems engineering approach." INCOSE International Symposium 28, no. 1 (July 2018): 1250–64. http://dx.doi.org/10.1002/j.2334-5837.2018.00546.x.

7. Schmidt, Frederick, Stephen MacDonell, and Andy M. Connor. "Multi-Objective Reconstruction of Software Architecture." International Journal of Software Engineering and Knowledge Engineering 28, no. 06 (June 2018): 869–92. http://dx.doi.org/10.1142/s0218194018500262.

Abstract:
Design erosion is a persistent problem within the software engineering discipline. Software designs tend to deteriorate over time and there is a need for tools and techniques that support software architects when dealing with legacy systems. This paper presents an evaluation of a search-based software engineering (SBSE) approach intended to recover high-level architecture designs of software systems by structuring low-level artifacts into high-level architecture artifact configurations. In particular, this paper describes the performance evaluation of a number of metaheuristic search algorithms applied to architecture reconstruction problems with high dimensionality in terms of objectives. These problems have been selected as representative of the typical challenges faced by software architects dealing with legacy systems and the results inform the ongoing development of a software tool that supports the analysis of trade-offs between different reconstructed architectures.
8. Dong, Jing, Yajing Zhao, and Tu Peng. "A Review of Design Pattern Mining Techniques." International Journal of Software Engineering and Knowledge Engineering 19, no. 06 (September 2009): 823–55. http://dx.doi.org/10.1142/s021819400900443x.

Abstract:
The quality of a software system highly depends on its architectural design. High quality software systems typically apply expert design experience which has been captured as design patterns. As demonstrated solutions to recurring problems, design patterns help to reuse expert experience in software system design. They have been extensively applied in the industry. Mining the instances of design patterns from the source code of software systems can assist in the understanding of the systems and the process of re-engineering them. More importantly, it also helps to trace back to the original design decisions, which are typically missing in legacy systems. This paper presents a review on current techniques and tools for mining design patterns from source code or design of software systems. We classify different approaches and analyze their results in a comparative study. We also examine the disparity of the discovery results of different approaches and analyze possible reasons with some insight.
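The structural side of such pattern mining can be sketched with a toy matcher. The sketch below assumes a hypothetical fact schema recovered from source code (none of these names come from the surveyed tools) and tests classes for a Singleton-like shape: a private constructor, a static field of the class's own type, and a static accessor.

```python
# Hypothetical class facts, as a pattern miner might recover them from
# source code; the schema and names are invented for illustration.
model = {
    "Config": {"ctor_private": True,
               "static_field_of_own_type": True,
               "static_accessors": ["getInstance"]},
    "Parser": {"ctor_private": False,
               "static_field_of_own_type": False,
               "static_accessors": []},
}

def looks_like_singleton(facts):
    """Structural test for a Singleton-like shape."""
    return (facts["ctor_private"]
            and facts["static_field_of_own_type"]
            and bool(facts["static_accessors"]))

candidates = [name for name, facts in model.items()
              if looks_like_singleton(facts)]
print(candidates)  # → ['Config']
```

Real miners combine structural checks like this with behavioral analysis, which is one reason the survey reports disparities between the results of different tools.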
9. Alkhalil, Adel. "Evolution of existing software to mobile computing platforms: Framework support and case study." International Journal of Advanced and Applied Sciences 8, no. 3 (March 2021): 100–111. http://dx.doi.org/10.21833/ijaas.2021.03.013.

Abstract:
Mobile computing, as a ubiquitous and pervasive technology, supports portable and context-aware computation. To date, a significant number of traditional computing systems, running on web and/or workstation-based platforms, lack features of mobile computing, including but not limited to ubiquity, context sensing, and high interactivity. Software that executes on these traditional computing systems is referred to as legacy software and can be upgraded to exploit the features of mobile technologies. However, legacy software may contain critical data, logic, and processes that cannot easily be replaced. One solution is to evolve legacy software systems by (a) upgrading their functionality while (b) preserving their data and logic. Recent research and development efforts have focused on modernizing legacy systems for service and cloud-based platforms. However, no existing research supports a systematic modernization of legacy software for mobile platforms. We propose a framework named Legacy-to-Mobile as a solution that supports an incremental and process-driven evolution of legacy software to mobile computing software. The proposed Legacy-to-Mobile framework unifies the concepts of software reverse engineering (recovering software artifacts) and software change (upgrading software artifacts) to support legacy evolution. The framework follows an incremental approach with four processes: (i) evolution planning, (ii) architecture modeling, (iii) architecture change, and (iv) software validation of mobile computing software. The framework provides the foundation, as part of future research, for a tool prototype that supports automation and user decision support for incremental and process-driven evolution of legacy software to mobile computing platforms.
10. Li, Jing Lei. "Retrieval and Modelling of Software Evolution Process Component." Applied Mechanics and Materials 241-244 (December 2012): 2867–71. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.2867.

Abstract:
As more and more successful software systems become legacy systems, the importance and popularity of software evolution increase [1]. The goal is to package mature software systems as components, so that they can be re-assembled and maintained like automobile parts. Against this background, a formal definition of the software evolution process component is designed, and a component model is then defined. Starting from the problems in component retrieval, faceted classification of components and tree matching algorithms are discussed and analyzed. Retrieval of software evolution process components is then designed on the basis of tree matching, so as to support software evolution process modelling.
11. Kölsch, Ulrike, and Jürgen Laschewski. "A Framework for Object-Oriented Reverse Engineering of Legacy Information Systems." International Journal of Software Engineering and Knowledge Engineering 09, no. 01 (February 1999): 27–54. http://dx.doi.org/10.1142/s0218194099000048.

Abstract:
There is every indication that an object-oriented view of an information system is a solid foundation for understanding its legacy organization, for relating it to the environment in which it is embedded, and for guiding its reengineering. In this paper we present a framework based upon the formal object-oriented specification language TROLL, which provides an object-oriented view of legacy information systems. The aim is to combine existing methods and keep results in a common and suitable description base which provides the appropriate form for deriving object specifications from the legacy IS. We use the language TROLL not only as a description language, but also as a framework to support maintenance engineers in their reverse engineering tasks by giving hints about what to do next to complete the object specifications. The result of the approach is a formal object-oriented specification of the legacy IS that is suitable both for developing a new IS and for reengineering the legacy system.
12. Bera, Debjyoti, Mathijs Schuts, Jozef Hooman, and Ivan Kurtev. "Reverse engineering models of software interfaces." Computer Science and Information Systems 18, no. 3 (2021): 657–86. http://dx.doi.org/10.2298/csis200131013b.

Abstract:
Cyber-physical systems consist of many hardware and software components. Over the lifetime of these systems their components are often replaced or updated. To avoid integration problems, formal specifications of component interface behavior are crucial. Such a formal specification captures not only the set of provided operations but also the order of using them and the constraints on their timing behavior. Usually the order of operations is expressed in terms of a state machine. For new components such a formal specification can be derived from requirements. However, for legacy components such interface descriptions are usually not available, so they have to be reverse engineered from existing event logs and source code. This costs a lot of time and does not scale very well. To improve the efficiency of this process, we present a passive learning technique for interface models inspired by process mining techniques. The approach is based on representing causal relations between events present in an event log, together with their timing information, as a timed-causal graph. The graph is further processed and eventually transformed into a state machine and a set of timing constraints. Compared to other approaches in the literature, which focus on the general problem of inferring state-based behavior, we exploit patterns of client-server interactions in event logs.
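In heavily simplified form, the causal-relation step can be sketched by collecting direct-follows relations from an event log as candidate state-machine transitions. The log format and event names below are hypothetical, and the sketch deliberately ignores the timing information that the paper's timed-causal graph also captures.

```python
from collections import defaultdict

def mine_transitions(event_log):
    """Collect direct-follows relations between events in each trace as
    candidate transitions of an interface state machine."""
    transitions = defaultdict(set)
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            transitions[a].add(b)
    return {event: sorted(succ) for event, succ in transitions.items()}

# Hypothetical traces of client-server calls on a legacy interface.
log = [
    ["open", "read", "read", "close"],
    ["open", "write", "close"],
]
print(mine_transitions(log))
# → {'open': ['read', 'write'], 'read': ['close', 'read'], 'write': ['close']}
```

A real miner would then prune spurious relations and annotate each transition with observed timing bounds.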
13. de Cesare, Sergio, and Chris Partridge. "BORO as a Foundation to Enterprise Ontology." Journal of Information Systems 30, no. 2 (February 1, 2016): 83–112. http://dx.doi.org/10.2308/isys-51428.

Abstract:
Modern business organizations experience increasing challenges in the development and evolution of their enterprise systems. Typical problems include legacy re-engineering, systems integration/interoperability, and the architecting of the enterprise. At the heart of all these problems is enterprise modeling. Many enterprise modeling approaches have been proposed in the literature with some based on ontology. Few however adopt a foundational ontology to underpin a range of enterprise models in a consistent and coherent manner. Fewer still take data-driven re-engineering as their natural starting point for modeling. This is the approach taken by Business Object Reference Ontology (BORO). It has two closely intertwined components: a foundational ontology and a re-engineering methodology. These were originally developed for the re-engineering of enterprise systems and subsequently evolved into approaches to enterprise architecture and systems integration. Together these components are used to systematically unearth reusable and generalized business patterns from existing data. Most of these patterns have been developed for the enterprise context and have been successfully applied in several commercial projects within the financial, defense, and oil and gas industries. BORO's foundational ontology is grounded in philosophy and its metaontological choices (including perdurantism, extensionalism, and possible worlds) follow well-established theories. BORO's re-engineering methodology is rooted in the philosophical notion of grounding; it emerged from the practice of deploying its foundational ontology and has been refined over the last 25 years. This paper presents BORO and its application to enterprise modeling.
14. Hull, M. E. C., Y. Bi, and P. N. Nicholl. "Approaches to component technologies for software reuse of legacy systems." Computing & Control Engineering Journal 12, no. 6 (December 1, 2001): 281–87. http://dx.doi.org/10.1049/cce:20010605.

15. Ward, M. P., and K. H. Bennett. "Formal Methods to Aid the Evolution of Software." International Journal of Software Engineering and Knowledge Engineering 05, no. 01 (March 1995): 25–47. http://dx.doi.org/10.1142/s0218194095000034.

Abstract:
There is a vast collection of operational software systems which are vitally important to their users, yet are becoming increasingly difficult to maintain, enhance, and keep up to date with rapidly changing requirements. For many of these so-called legacy systems, the option of throwing the system away and rewriting it from scratch is not economically viable. Methods are therefore urgently required which enable these systems to evolve in a controlled manner. The approach described in this paper uses formal proven program transformations, which preserve or refine the semantics of a program while changing its form. These transformations are applied to restructure and simplify the legacy systems and to extract higher-level representations. By using an appropriate sequence of transformations, the extracted representation is guaranteed to be equivalent to the code. The method is based on a formal wide spectrum language, called WSL, with an accompanying formal method. Over the last ten years we have developed a large catalog of proven transformations, together with mechanically verifiable applicability conditions. These have been applied to many software development, reverse engineering, and maintenance problems. In this paper, we focus on the results of using this approach in the reverse engineering of medium scale, industrial software, written mostly in languages such as assembler and JOVIAL. Results from both benchmark algorithms and heavily modified, geriatric software are summarized. We conclude that formal methods have an important practical role in software evolution.
16. Srinivas, Malladi, G. Rama Krishna, K. Rajasekhara Rao, and E. Suresh Babu. "GATALSS: A Generic Automated Tool for Analysing the Legacy Software Systems." Research Journal of Applied Sciences, Engineering and Technology 12, no. 3 (February 5, 2016): 361–65. http://dx.doi.org/10.19026/rjaset.12.2344.

17. Iqbal, Nayyar, Jun Sang, Jing Chen, and Xiaofeng Xia. "Measuring Software Maintainability with Naïve Bayes Classifier." Entropy 23, no. 2 (January 22, 2021): 136. http://dx.doi.org/10.3390/e23020136.

Abstract:
Software products in the market are changing due to changes in business processes, technology, or new requirements from customers. Maintainability of legacy systems has always been a challenging task for software companies. In order to determine whether software requires maintenance via a reverse engineering or a forward engineering approach, a system assessment was done from diverse perspectives: quality, business value, type of errors, etc. In this research, the changes required in the existing software components of the legacy system were identified using a supervised learning approach. New interfaces for the software components were redesigned according to the new requirements and/or type of errors. Software maintainability was measured by applying a machine learning technique, i.e., a Naïve Bayes classifier. The dataset was designed based on observations such as component state, success or error type of the component, line of code of the error in the component, component business value, and whether changes are required for the component. The results generated by the Waikato Environment for Knowledge Analysis (WEKA) software confirm the effectiveness of the introduced methodology, with an accuracy of 97.18%.
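As a rough illustration of the classification step only (not the paper's dataset or its WEKA configuration), the sketch below trains a small categorical Naïve Bayes classifier with add-one smoothing on invented component observations.

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Categorical Naive Bayes with add-one smoothing (one extra
    probability slot is reserved for values unseen in training)."""
    class_counts = Counter(labels)
    value_counts = defaultdict(Counter)  # (feature index, class) -> value counts
    for row, label in zip(rows, labels):
        for i, value in enumerate(row):
            value_counts[(i, label)][value] += 1

    def predict(row):
        best_label, best_logp = None, -math.inf
        for label, n in class_counts.items():
            logp = math.log(n / len(labels))  # log prior
            for i, value in enumerate(row):
                seen = value_counts[(i, label)]
                logp += math.log((seen[value] + 1) / (n + len(seen) + 1))
            if logp > best_logp:
                best_label, best_logp = label, logp
        return best_label

    return predict

# Invented observations: (component state, error type, business value).
X = [("stable", "none", "low"), ("faulty", "logic", "high"),
     ("faulty", "syntax", "high"), ("stable", "none", "medium")]
y = ["no", "yes", "yes", "no"]  # change required?
predict = train_naive_bayes(X, y)
print(predict(("faulty", "logic", "medium")))  # → yes
```

The paper's features (line of code of the error, component state, business value) fit this categorical scheme directly once discretized.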
18. Canfora, Gerardo, Andrea De Lucia, and Giuseppe A. Di Lucca. "An Incremental Object-Oriented Migration Strategy for RPG Legacy Systems." International Journal of Software Engineering and Knowledge Engineering 09, no. 01 (February 1999): 5–25. http://dx.doi.org/10.1142/s0218194099000036.

Abstract:
We present a strategy for incrementally migrating legacy systems to object-oriented platforms. The migration process consists of six sequential phases and encompasses reverse engineering and reengineering activities. The aim of reverse engineering is to decompose programs into components implementing the user interface and components implementing application domain objects. The identification of objects is centred around persistent data stores and exploits object-oriented design metrics. Wrapping is the core of the reengineering activities. It makes new systems able to exploit existing resources, thus allowing an incremental and selective replacement of the identified objects. The migration strategy has been defined and experimented within the project ERCOLE (Encapsulation, Reengineering and Coexistence of Object with Legacy) on legacy systems developed in RPG for the IBM AS/400 environment.
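The wrapping step can be sketched as an object facade over procedural legacy routines: callers depend only on the new interface, so the routines behind it can be replaced one at a time. The names below are invented for illustration and are not taken from ERCOLE or RPG.

```python
# Stand-ins for legacy routines operating on a shared data store.
_LEDGER = {}

def legacy_read_balance(account_id):
    return _LEDGER.get(account_id, 0)

def legacy_post_entry(account_id, amount):
    _LEDGER[account_id] = _LEDGER.get(account_id, 0) + amount

class Account:
    """Object wrapper: new code talks to this interface while the
    legacy implementation behind it is replaced incrementally."""

    def __init__(self, account_id):
        self._id = account_id

    @property
    def balance(self):
        return legacy_read_balance(self._id)

    def post(self, amount):
        legacy_post_entry(self._id, amount)

account = Account("A-1")
account.post(100)
account.post(-30)
print(account.balance)  # → 70
```

Because the wrapper owns the interface, a rewritten `Account` backed by a new persistence layer can later replace the delegating one without touching client code.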
19. Budgen, David, and James E. Tomayko. "The SEI curriculum modules and their influence: Norm Gibbs' legacy to software engineering education." Journal of Systems and Software 75, no. 1-2 (February 2005): 55–62. http://dx.doi.org/10.1016/j.jss.2004.02.027.

20. Jahnke, Jens H., and Andrew Walenstein. "Evaluating Theories for Managing Imperfect Knowledge in Human-Centric Database Reengineering Environments." International Journal of Software Engineering and Knowledge Engineering 12, no. 01 (February 2002): 77–102. http://dx.doi.org/10.1142/s0218194002000834.

Abstract:
Modernizing heavily evolved and poorly documented information systems is a central software engineering problem in the current IT industry. It is often necessary to reverse engineer the design documentation of such legacy systems. Several interactive CASE tools have been developed to support this human-intensive process. However, practical experience indicates that their applicability is limited because they do not adequately handle imperfect knowledge about legacy systems. In this paper, we investigate the applicability of several major theories of imperfect knowledge management in the area of soft computing and approximate reasoning. The theories are evaluated with respect to how well they meet requirements for building effective human-centred reverse engineering environments. The requirements were elicited with help from practical case studies in the area of database reverse engineering. A particular theory called "possibilistic logic" was found to meet these requirements most comprehensively. This evaluation highlights important challenges for the designers of knowledge management techniques, and should help reverse engineering tool implementers select appropriate technologies.
21. Ananthavijayan, Ramesh, Prabhakar Karthikeyan Shanmugam, Sanjeevikumar Padmanaban, Jens Holm-Nielsen, Frede Blaabjerg, and Viliam Fedak. "Software Architectures for Smart Grid System—A Bibliographical Survey." Energies 12, no. 6 (March 26, 2019): 1183. http://dx.doi.org/10.3390/en12061183.

Abstract:
Smart grid software interconnects multiple engineering disciplines (power systems, communication, software and hardware technology, instrumentation, big data, etc.). The software architecture is an evolving concept in smart grid systems, in which system architecture development is a challenging process. The architecture has to accommodate the complex legacy power grid systems and cope with current Information and Communication Technologies (ICT). The distributed generation in a smart grid environment expects the software architecture to be distributed and to enable local control. Smart grid architecture should also be modular, flexible, and adaptable to technology upgrades. In this paper, the authors have made a comprehensive review of architectures for smart grids. An in-depth analysis of layered and agent-based architectures based on the National Institute of Standards and Technology (NIST) conceptual model is presented. Also presented is a set of smart grid Reference Architectures dealing with cross-domain technology.
22. Manole, E. M. "Detecting the Most Important Classes from Software Systems with Self Organizing Maps." Studia Universitatis Babeș-Bolyai Informatica 66, no. 1 (July 1, 2021): 54. http://dx.doi.org/10.24193/subbi.2021.1.04.

Abstract:
Self-Organizing Maps (SOM) are unsupervised neural networks suited for visualisation purposes and clustering analysis. This study uses SOM to solve a software engineering problem: detecting the most important (key) classes of software projects. Key classes link the most valuable concepts of a software system, and in general they are found in the solution documentation. UML models created in the design phase become outdated over time and tend to be a source of confusion for large legacy software. Therefore, developers try to reconstruct class diagrams from the source code using reverse engineering. However, the resulting diagram is often very cluttered and difficult to understand. There is interest in automatic tools for building concise class diagrams, but the machine learning possibilities are not fully explored at the moment. This paper proposes two possible algorithms for turning SOM into a classification algorithm for this task, which involves separating the important classes - those that should be on the diagrams - from the other, less important ones. Moreover, SOM is a reliable visualization tool which is able to provide insight into the structure of the analysed projects.
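A heavily simplified, purely illustrative sketch of the idea (not the paper's algorithms, metrics, or dataset): a tiny 1-D SOM clusters invented per-class metric vectors, and classes mapped to the same unit would then be candidates for the same role on a concise diagram.

```python
import random

def distance(unit, x):
    """Squared Euclidean distance between a map unit and a sample."""
    return sum((unit[d] - x[d]) ** 2 for d in range(len(x)))

def assign(units, x):
    """Index of the best-matching unit for a sample."""
    return min(range(len(units)), key=lambda u: distance(units[u], x))

def train_som(data, n_units=2, epochs=200, seed=0):
    """Train a minimal 1-D self-organizing map on vectors in [0, 1]."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)  # decaying learning rate
        x = rng.choice(data)
        bmu = assign(units, x)
        for u in range(n_units):
            # neighborhood: full pull on the winner, weaker on neighbors
            h = 1.0 if u == bmu else (0.3 if abs(u - bmu) == 1 else 0.0)
            for d in range(dim):
                units[u][d] += lr * h * (x[d] - units[u][d])
    return units

# Invented normalized metrics per class: (fan-in, fan-out).
metrics = [(0.9, 0.8), (0.85, 0.9), (0.1, 0.2), (0.15, 0.1)]
som = train_som(metrics)
print([assign(som, m) for m in metrics])
```

A classifier in the paper's spirit would additionally label each unit from known key/non-key examples; this sketch stops at the clustering step.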
23. Arevalo, C., I. Ramos, J. Gutiérrez, and M. Cruz. "Practical Experiences in the Use of Pattern-Recognition Strategies to Transform Software Project Plans into Software Business Processes of Information Technology Companies." Scientific Programming 2019 (May 2, 2019): 1–21. http://dx.doi.org/10.1155/2019/7973289.

Abstract:
Business process management (BPM) is a strategic advantage for all kinds of organizations, including information technology companies (ITCs), which cannot stay out of the BPM approach. ITCs manage business processes as projects to create and maintain software. Although Project Management Systems (PMSs), such as Microsoft™ Project Server® (MPS®), are considered non-process-aware information systems (Non-PAISs), they may be a source from which to generate processes. In this paper, we propose a reverse engineering approach which uses patterns to transform software projects stored in MPS® legacy databases into software business processes. For this, we build on the model-driven engineering paradigm and deal with the time perspective of the processes. Such experiences are scarce or almost nonexistent, so we present the AQUA-WS project case study, which uses MPS® as the source system and software process modeling languages as target systems. ITCs can benefit from this research by gathering knowledge about perspectives of their processes that would otherwise be wasted, such as executed projects or expired documents used in Non-PAISs. This can become a key factor for ITCs, which can increase their competitiveness and reduce software costs, as part of the BPM lifecycle of continuous improvement.
24. Rodriguez, Guillermo. "An Information Retrieval Approach for Assisting Users in Software Engineering Processes." Interfaces Científicas - Exatas e Tecnológicas 3, no. 1 (June 26, 2018): 9–18. http://dx.doi.org/10.17564/2359-4942.2018v3n1p9-18.

Abstract:
Documents written in natural language constitute a major part of the artifacts produced during the software engineering life cycle. There is growing interest in creating tools that can assist users in all phases of the software life cycle. Such assistance requires techniques that go beyond traditional static and dynamic analysis. An example of such a technique is the application of information retrieval (IR), which exploits information found in the documents of a software engineering process. The increased availability of data created as part of the software development process allows managers to apply novel analysis techniques to the data and use the results to guide the project's stakeholders. These data are then used to predict defects, gain insight into a project's life cycle, and support other tasks. This work proposes an IR approach to assist users in software engineering processes according to their profile. The approach consists in recommending documents related to a retrieved one, so that users understand and follow the process correctly. Furthermore, the assistance concentrates on legacy systems, in which engineers must acquire knowledge generated by others. An implementation of the approach and an overview of its evaluation are also summarized.
25. Petrie, Charles J., Teresa A. Webster, and Mark R. Cutkosky. "Using Pareto optimality to coordinate distributed agents." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 9, no. 4 (September 1995): 269–81. http://dx.doi.org/10.1017/s0890060400002821.

Abstract:
Pareto optimality is a domain-independent property that can be used to coordinate distributed engineering agents. Within a model of design called Redux, some aspects of dependency-directed backtracking can be interpreted as tracking Pareto optimality. These concepts are implemented in a framework, called Next-Link, that coordinates legacy engineering systems. This framework allows existing software tools to communicate with each other and a Redux agent over the Internet. The functionality is illustrated with examples from the domain of electrical cable harness design.
APA, Harvard, Vancouver, ISO, and other styles
26

Ristic, Sonja, Slavica Aleksic, Milan Celikovic, and Ivan Lukovic. "Generic and standard database constraint meta-models." Computer Science and Information Systems 11, no. 2 (2014): 679–96. http://dx.doi.org/10.2298/csis140216037r.

Full text
Abstract:
Many software engineering activities entail dealing with legacy information systems. When these systems become too costly to maintain, or when new technologies need to be incorporated, they need to be replaced or somehow reengineered. This can be done with significantly reduced amount of effort and cost if the conceptual models of these systems are available. Reverse engineering is the process of analyzing a subject system to create representations of the system at a higher level of abstraction. Relational databases are a common source of reverse engineering. Starting from a physical database schema, that is recorded into relational database schema data repository, the conceptual database schema or logical database schema could be extracted. The extraction process may be seen as a chain of model-to-model transformations that trace model elements from a model at the lower level of abstraction to a model at the higher level of abstraction, achieved through meta-modeling. In the paper we present generic and standard database constraint meta-models, focusing on multi-relational database constraints captured in a legacy database. These meta-models are aimed at support of model transformations to create conceptual models, as a useful source for the system reengineering process.
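The chain of model-to-model transformations this abstract describes can be illustrated by a toy transformation that lifts foreign-key constraints from a relational schema into relationships of a conceptual model. The schema dictionary format below is an assumption made for the sketch, not the paper's meta-model.

```python
def to_conceptual(schema):
    """Lift a relational schema (tables plus foreign keys) into a
    conceptual model of entities and relationships."""
    entities = {t: {"attributes": cols} for t, cols in schema["tables"].items()}
    relationships = []
    for fk in schema["foreign_keys"]:
        relationships.append({
            "from": fk["table"],        # referencing entity
            "to": fk["references"],     # referenced entity
            "via": fk["columns"],       # attributes realizing the link
        })
    return {"entities": entities, "relationships": relationships}

legacy_schema = {
    "tables": {
        "ORDER": ["id", "cust_id", "date"],
        "CUSTOMER": ["id", "name"],
    },
    "foreign_keys": [
        {"table": "ORDER", "columns": ["cust_id"], "references": "CUSTOMER"},
    ],
}
model = to_conceptual(legacy_schema)
print(model["relationships"])
```

Each multi-relational constraint in the legacy database becomes a traceable element at the higher abstraction level, which is the essence of the extraction chain.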
APA, Harvard, Vancouver, ISO, and other styles
27

Timmers, T., and E. M. van Mulligen. "Trends in Integrated Clinical Workstations." Yearbook of Medical Informatics 05, no. 01 (August 1996): 101–7. http://dx.doi.org/10.1055/s-0038-1638051.

Full text
Abstract:
During the last decade, several projects aiming at integrated clinical workstations have been described and several prototypes have been demonstrated. In most of these projects, the clinical workstation accesses information and functionality provided by the present proprietary legacy systems of health-care institutions. We discuss trends in integrated clinical workstations from the viewpoints of software engineering and integration, considering that the clinical workstation itself basically consists of three layers: a presentation layer, a data integration layer, and a communication layer. The software engineering view on clinical workstations focuses on the development of basic building blocks from which clinical workstations, specific to a particular medical domain, can be composed. The integration view on clinical workstations addresses methods and techniques to deal with the, in general, intrinsically closed information systems in health-care institutions.
APA, Harvard, Vancouver, ISO, and other styles
28

Ören, Tuncer I., and Bernard P. Zeigler. "System theoretic foundations of modeling and simulation: a historic perspective and the legacy of A Wayne Wymore." SIMULATION 88, no. 9 (June 27, 2012): 1033–46. http://dx.doi.org/10.1177/0037549712450360.

Full text
Abstract:
AW Wymore, the founder of the world’s first systems engineering department at the University of Arizona, has been at the origin of the system theoretic foundations of modeling and simulation. Wymore’s intellectual family tree, which goes back to Gauss and Weierstrass, is given. How the authors met, cooperated, and advocated system theory for the advancement of modeling and simulation are explained. The concept of model-based simulation was also one of the outcomes of this cooperation. This article reviews the emergence of systems-theory-based modeling and simulation languages and environments, such as the General System Theory implementor and Discrete Event System Specification, and their relation to Wymore’s concepts. We also discuss the application of powerful software development frameworks to support user-friendly access to systems concepts and to increase the power to support systems design and engineering.
APA, Harvard, Vancouver, ISO, and other styles
29

Dawadi, Babu R., Abhishek Thapa, Roshan Guragain, Dilochan Karki, Sandesh P. Upadhaya, and Shashidhar R. Joshi. "Routing Performance Evaluation of a Multi-Domain Hybrid SDN for Its Implementation in Carrier Grade ISP Networks." Applied System Innovation 4, no. 3 (July 21, 2021): 46. http://dx.doi.org/10.3390/asi4030046.

Full text
Abstract:
Legacy IPv4 networks are strenuous to manage and operate. Network operators are in need of minimizing the capital and operational expenditure of running network infrastructure. The implementation of software-defined networking (SDN) addresses those issues by minimizing the expenditures in the long run. Legacy networks need to integrate with the SDN networks for smooth migration towards the fully functional SDN environment. In this paper, we compare the network performance of the legacy network with the SDN network for IP routing in order to determine the feasibility of the SDN deployment in the Internet Service provider (ISP) network. The simulation of the network is performed in the Mininet test-bed and the network traffic is generated using a distributed Internet traffic generator. An open network operating system is used as a controller for the SDN network, in which the SDN-IP application is used for IP routing. Round trip time, bandwidth, and packet transmission rate from both SDN and legacy networks are first collected and then the comparison is made. We found that SDN-IP performs better in terms of bandwidth and latency as compared to legacy routing. The experimental analysis of interoperability between SDN and legacy networks shows that SDN implementation in a production level carrier-grade ISP network is viable and progressive.
APA, Harvard, Vancouver, ISO, and other styles
30

Silverman, Barry G., Gnana Bharathy, Kevin O'Brien, and Jason Cornwell. "Human Behavior Models for Agents in Simulators and Games: Part II: Gamebot Engineering with PMFserv." Presence: Teleoperators and Virtual Environments 15, no. 2 (April 2006): 163–85. http://dx.doi.org/10.1162/pres.2006.15.2.163.

Full text
Abstract:
Many producers and consumers of legacy training simulator and game environments are beginning to envision a new era where psycho-socio-physiologic models could be interoperated to enhance their environments' simulation of human agents. This paper explores whether we could embed our behavior modeling framework (described in the companion paper, Part 1) behind a legacy first person shooter 3D game environment to recreate portions of the Black Hawk Down scenario. Section 1 amplifies the interoperability needs and challenges confronting the field, presents the questions that are examined, and describes the test scenario. Sections 2 and 3 review the software and knowledge engineering methodology, respectively, needed to create the system and populate it with bots. Results (Section 4) and discussion (Section 5) reveal that we were able to generate plausible and adaptive recreations of Somalian crowds, militia, women acting as shields, suicide bombers, and more. Also, there are specific lessons learned about ways to advance the field so that such interoperabilities will become more affordable and widespread.
APA, Harvard, Vancouver, ISO, and other styles
31

Adams, Bram, Kris De Schutter, Andy Zaidman, Serge Demeyer, Herman Tromp, and Wolfgang De Meuter. "Using aspect orientation in legacy environments for reverse engineering using dynamic analysis—An industrial experience report." Journal of Systems and Software 82, no. 4 (April 2009): 668–84. http://dx.doi.org/10.1016/j.jss.2008.09.031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

GALL, HARALD C., RENÉ R. KLÖSCH, and ROLAND T. MITTERMEIR. "USING DOMAIN KNOWLEDGE TO IMPROVE REVERSE ENGINEERING." International Journal of Software Engineering and Knowledge Engineering 06, no. 03 (September 1996): 477–505. http://dx.doi.org/10.1142/s021819409600020x.

Full text
Abstract:
Integrating application domain knowledge into reverse engineering is an important step to overcome the shortcomings of conventional reverse engineering approaches that are based exclusively on information derivable from source code. In this paper, we show the basic concepts of a program transformation process from a conventional to an object-oriented architecture which incorporates extraneous higher-level knowledge in its process. To which degree this knowledge might stem from some general domain knowledge, and to which extent it needs to be introduced as application dependent knowledge by a human expert is discussed. The paper discusses these issues in the context of the architectural transformation of legacy systems to an object-oriented architecture.
APA, Harvard, Vancouver, ISO, and other styles
33

Abdelkader, Mostefai, Mimoun Malki, and Sidi Mohamed Benslimane. "A heuristic approach to locate candidate web service in legacy software." International Journal of Computer Applications in Technology 47, no. 2/3 (2013): 152. http://dx.doi.org/10.1504/ijcat.2013.054348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Popov, Peter. "Bayesian reliability assessment of legacy safety-critical systems upgraded with fault-tolerant off-the-shelf software." Reliability Engineering & System Safety 117 (September 2013): 98–113. http://dx.doi.org/10.1016/j.ress.2013.03.017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, JongHyup, and Taekyoung Kwon. "Distributed Watchdogs Based on Blockchain for Securing Industrial Internet of Things." Sensors 21, no. 13 (June 26, 2021): 4393. http://dx.doi.org/10.3390/s21134393.

Full text
Abstract:
The Industrial Internet of Things (IIoT) could enhance automation and analytics in industrial environments. Despite the promising benefits of IIoT, securely managing software updates is a challenging problem for these critical applications, due at least in part to the intrinsic lack of software protection mechanisms in legacy industrial systems. In this paper, to address the challenges in building a secure software supply chain for industrial environments, we propose a new approach that leverages distributed watchdogs with blockchain systems in protecting software supply chains. For this purpose, we bind every entity with a unique identity in the blockchain and employ the blockchain as a delegated authenticator by mapping every reporting action to a non-fungible token transfer. Moreover, we present a detailed specification to clearly define the behavior of systems and to apply model checking.
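The core idea, binding each entity to an identity and recording every reporting action as a token transfer on a ledger, can be sketched as a hash-chained log. This is a toy stand-in for a real blockchain; the watchdog names, fields, and verdicts are illustrative assumptions, not the paper's specification.

```python
import hashlib
import json

class ReportLedger:
    """Toy hash-chained ledger: each watchdog report is recorded as a
    token transfer from the reporter's identity to the subject's."""

    def __init__(self):
        self.blocks = []

    def report(self, reporter_id, subject_id, verdict):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"from": reporter_id, "to": subject_id,
                   "verdict": verdict, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.blocks.append({**payload, "hash": digest})
        return digest

    def verify(self):
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for b in self.blocks:
            payload = {k: b[k] for k in ("from", "to", "verdict", "prev")}
            if b["prev"] != prev or b["hash"] != hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()).hexdigest():
                return False
            prev = b["hash"]
        return True

ledger = ReportLedger()
ledger.report("watchdog-1", "plc-7", "firmware-ok")
ledger.report("watchdog-2", "plc-7", "firmware-mismatch")
print(ledger.verify())  # → True
```

Because each block commits to its predecessor's hash, altering any recorded report invalidates the whole chain, which is what lets the blockchain act as a delegated authenticator.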
APA, Harvard, Vancouver, ISO, and other styles
36

Moutaouakkil, Amine, and Samir Mbarki. "PHP modernization approach generating KDM models from PHP legacy code." Bulletin of Electrical Engineering and Informatics 9, no. 1 (February 1, 2020): 247–55. http://dx.doi.org/10.11591/eei.v9i1.1269.

Full text
Abstract:
With the rise of new web technologies such as Web 2.0, jQuery, and Bootstrap, modernizing legacy web systems to benefit from the advantages of these new technologies is more and more relevant. The migration of a system from one environment to another is a time- and effort-consuming process; it involves a complete rewrite of the application adapted to the target platform. To realize this migration in an automated and standardized way, many approaches have tried to define standardized engineering processes. Architecture Driven Modernization (ADM) defines an approach to standardize and automate the reengineering process. We defined an ADM approach to represent PHP web applications as models at the highest level of abstraction. To do this, we used software artifacts as an entry point. This paper describes the extraction process, which permits discovery and understanding of the legacy system and generates models that represent the system in an abstract way.
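The discovery step, parsing PHP sources into an abstract model, can be sketched with a regex-based extractor. This is a deliberate simplification: real ADM tooling would use a proper parser and emit KDM-conformant models, and the PHP snippet below is illustrative only.

```python
import re

CLASS_RE = re.compile(r"\bclass\s+(\w+)")
FUNC_RE = re.compile(r"\bfunction\s+(\w+)\s*\(")

def extract_model(php_source):
    """Discover classes and functions in PHP code and return an
    abstract (KDM-like) model of the legacy system."""
    return {
        "classes": CLASS_RE.findall(php_source),
        "functions": FUNC_RE.findall(php_source),
    }

legacy_php = """
<?php
class OrderController {
    function listOrders() { /* ... */ }
    function addOrder($o) { /* ... */ }
}
function helper() { /* ... */ }
"""
print(extract_model(legacy_php))
```

The resulting dictionary plays the role of the code-level model from which higher-abstraction KDM models would be derived.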
APA, Harvard, Vancouver, ISO, and other styles
37

Porter, J. David, Richard E. Billo, and Robert Rucker. "Architectures for integrating legacy information systems with modern bar code technology." Journal of Manufacturing Systems 23, no. 3 (January 2004): 256–65. http://dx.doi.org/10.1016/s0278-6125(04)80038-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Flores, Rick, David Rosa, and Mohan Murugesan. "Using Model Transformation/Code Generation Technology to Migrate Legacy Software Assets to AUTOSAR." SAE International Journal of Passenger Cars - Electronic and Electrical Systems 4, no. 1 (April 12, 2011): 10–16. http://dx.doi.org/10.4271/2011-01-1264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lu, Bo, and Michael Piasecki. "Community modeling systems: classification and relevance to hydrologic modeling." Journal of Hydroinformatics 14, no. 4 (May 29, 2012): 840–56. http://dx.doi.org/10.2166/hydro.2012.060.

Full text
Abstract:
Numerical modeling in the water sciences has been shifting from developing single or specific purpose-oriented tightly intertwined model applications to integrated model systems addressing more complex and interlinked geo-physical, -chemical and -biological processes across all strata of the critical zone geo-volume. This is a response to a number of important issues that span from preservation of legacy software, to a higher degree of development cost efficiency, to the realization that processes in one strata depend on others, to harmonizing software system usage, and to improving code provenance and repeatability of model runs. Consequently, a number of community modeling systems (CMS) have either been proposed or are being developed with individual communities typically taking the lead to develop a CMS for their constituency. While the development of CMS is a major step forward in trying to harmonize modeling efforts, chosen approaches vary with numerous efforts underway to arrive at a workable and functional CMS. This review seeks to provide an overview of these efforts, with a focus on those that address processes located in the critical zone, and tries to assess their degree of success based on some general criteria for the development of CMS.
APA, Harvard, Vancouver, ISO, and other styles
40

Buzzoni, Enrico, Fabio Forlani, Carlo Giannelli, Matteo Mazzotti, Stefano Parisotto, Alessandro Pomponio, and Cesare Stefanelli. "The Advent of the Internet of Things in Airfield Lightning Systems: Paving the Way from a Legacy Environment to an Open World." Sensors 19, no. 21 (October 31, 2019): 4724. http://dx.doi.org/10.3390/s19214724.

Full text
Abstract:
This paper discusses the design and prototype implementation of a software solution facilitating the interaction of third-party developers with a legacy monitoring and control system in the airfield environment. By following the Internet of Things (IoT) approach and adopting open standards and paradigms such as REpresentational State Transfer (REST) and Advanced Message Queuing Protocol (AMQP) for message dispatching, the work aims at paving the way towards a more open world in the airfield industrial sector. The paper also presents performance results achieved by extending legacy components to support IoT standards. Quantitative results not only demonstrate the feasibility of the proposed solution, but also its suitability in terms of prompt message dispatching and increased fault tolerance.
APA, Harvard, Vancouver, ISO, and other styles
41

CUI, JianFeng, and HeungSeok CHAE. "Component Identification and Evaluation for Legacy Systems —— An Empirical Study ——." IEICE Transactions on Information and Systems E93-D, no. 12 (2010): 3306–20. http://dx.doi.org/10.1587/transinf.e93.d.3306.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Shaofeng, Alex H. B. Duffy, Robert Ian Whitfield, Iain M. Boyle, and Iain McKenna. "Towards the Realization of an Integrated Decision Support Environment for Organizational Decision Making." International Journal of Decision Support System Technology 1, no. 4 (October 2009): 38–58. http://dx.doi.org/10.4018/jdsst.2009062603.

Full text
Abstract:
Traditional decision support systems are based on the paradigm of a single decision maker working at a standalone computer or terminal who has a specific decision to make with a specific goal in mind. Organizational decision support systems aim to support decision makers at all levels of an organization (from executives and middle managers to operators), who have a variety of decisions to make, with different priorities, often in a distributed and dynamic environment. Such systems need to be designed and developed with extra functionality to meet challenges such as collaborative working. This article proposes an Integrated Decision Support Environment (IDSE) for organizational decision making. The IDSE distinguishes itself from traditional decision support systems in that it can flexibly configure and re-configure its functions to support various decision applications. IDSE is an open software platform which allows its users to define their own decision processes and choose their own existing decision tools to be integrated into the platform. The IDSE is designed and developed based on distributed client/server networking, with a multi-tier integration framework for consistent information exchange and sharing, seamless process co-ordination and synchronisation, and quick access to packaged and legacy systems. The prototype of the IDSE demonstrates good performance in agile response to fast-changing decision situations.
APA, Harvard, Vancouver, ISO, and other styles
43

Stepashko, Vladimir S., and Vsevold M. Kuntsevich. "On Scientific Legacy of Academician of National Academy of Sciences of Ukraine A.G. Ivakhnenko." Journal of Automation and Information Sciences 40, no. 3 (2008): 1–3. http://dx.doi.org/10.1615/jautomatinfscien.v40.i3.10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ray, Wendel A., and Molly R. Govener. "Legacy: lessons from the Bateson team meetings." Kybernetes 36, no. 7/8 (August 14, 2007): 1026–36. http://dx.doi.org/10.1108/03684920710777801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kumar, Saurav, Adil N. Godrej, and Thomas J. Grizzard. "A web-based environmental decision support system for legacy models." Journal of Hydroinformatics 17, no. 6 (July 16, 2015): 874–90. http://dx.doi.org/10.2166/hydro.2015.007.

Full text
Abstract:
An environmental decision support system (EDSS) was designed for the Occoquan system in Northern Virginia, USA. This EDSS is available through the internet using web-browsers, and enables stakeholders to interact with complexly-linked water resources models for the Occoquan system based on seven implementations of HSPF and two implementations of CE-QUAL-W2 software. Using the web-interface of the EDSS, users may delineate land use changes and simulate the water quality impact of such changes by remotely executing the water resources models. The EDSS utilizes a server cluster to share the computational load of simultaneously executing multiple instances of the linked Occoquan system models along with methods to limit ‘similar’ model executions. The server cluster was assembled from disparate machines with spare computing resource available on the local network, thereby eliminating the need for any additional hardware to execute an increased number of model simulations. It is expected that the enhanced accessibility to the water resources models through the EDSS may allow stakeholders to use the models as a planning and educational resource, without direct expert modeler's involvement. Further, this EDSS is comprised of modules that may be extended to other watersheds with similar legacy, calibrated modeling systems.
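The EDSS's method of limiting "similar" model executions amounts to memoizing runs on a canonical hash of their inputs, so identical scenarios submitted by different stakeholders execute only once. A minimal sketch follows; the scenario format and the toy runoff model are illustrative assumptions, not the HSPF/CE-QUAL-W2 interfaces.

```python
import hashlib
import json

class RunCache:
    """Skip re-execution when an identical scenario was already simulated."""

    def __init__(self, model):
        self.model = model       # callable: scenario dict -> result
        self.cache = {}
        self.executions = 0

    def run(self, scenario):
        # sort_keys makes the hash independent of dict key order.
        key = hashlib.sha256(
            json.dumps(scenario, sort_keys=True).encode()).hexdigest()
        if key not in self.cache:
            self.executions += 1
            self.cache[key] = self.model(scenario)
        return self.cache[key]

def toy_runoff_model(scenario):
    # Stand-in for an HSPF run: runoff grows with impervious area.
    return round(scenario["rain_mm"] * scenario["impervious_frac"], 2)

cluster = RunCache(toy_runoff_model)
cluster.run({"rain_mm": 40, "impervious_frac": 0.3})
cluster.run({"impervious_frac": 0.3, "rain_mm": 40})  # same scenario, reordered
cluster.run({"rain_mm": 40, "impervious_frac": 0.5})
print(cluster.executions)  # → 2
```

On a server cluster, the same key can also serve as the routing token that assigns a scenario to the node that already holds its result.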
APA, Harvard, Vancouver, ISO, and other styles
46

Janßen-Tapken, Damir, and Andreas Pfnür. "Critical success factors of ERP benefits in CREM: evidence from Austria, Germany and Switzerland." Journal of Corporate Real Estate 18, no. 4 (November 14, 2016): 287–310. http://dx.doi.org/10.1108/jcre-10-2015-0032.

Full text
Abstract:
Purpose The purpose of this study is to find answers to the question of whether a fully-integrated real estate (RE) solution within an Enterprise Resource Planning (ERP) landscape delivers a visible and measurable contribution to organizational efficiency in corporate real estate management (CREM), a field still dominated by specialized but stand-alone software packages. Design/methodology/approach The authors set up a model of CREM with the ERP systems being the hinge between the RE strategies and organizational efficiency. The model was tested by a written questionnaire to collect responses on the expectations of ERP benefits. Findings In many cases, the results show a negative gap between expectations and realized benefits. The authors identified benefit stars and dogs within the sample. Stars realizing high benefit ratios on average have more often chosen the form of a shared service center for their CREM department, have reengineered the business processes more intensively, had more often a legacy system as a predecessor of the SAP ERP, trained employees more intensively, and showed a higher degree of customization of the RE module than the benefit dogs of the sample. Practical implications Newly formed CREM departments looking for optimal IT solutions find decision support regarding the best fit for their IT landscape. Already institutionalized CREM units running an ERP system will find concrete evidence for improvement. Originality/value This is the first study of benefits and critical success factors of ERP implementation and operation for modern CREM. It is an attempt to bridge the gap between business and IT, showing the enabler role of ERP systems for efficient business processes, satisfied corporate users, and motivated employees.
APA, Harvard, Vancouver, ISO, and other styles
47

Alssaheli, Omran M. A., Z. Zainal Abidin, N. A. Zakaria, and Z. Abal Abas. "Implementation of Network Traffic Monitoring using Software Defined Networking Ryu Controller." WSEAS TRANSACTIONS ON SYSTEMS AND CONTROL 16 (May 25, 2021): 270–77. http://dx.doi.org/10.37394/23203.2021.16.23.

Full text
Abstract:
Network traffic monitoring is vital for enhancing overall network performance and for optimizing traffic flows. However, the growing use of cloud services, the Internet of Things, blockchain, and data analytics demands more of hardware-based network controllers as network architectures expand. Therefore, Software Defined Networking (SDN) offers a new solution in terms of scalability, usability, and a programmable software-based network controller for legacy network infrastructure. In fact, SDN provides a dynamic platform for network traffic monitoring using international standards. In this study, the SDN setup and installation use a Mininet emulator containing a Ryu controller with a switching hub component, OpenFlow switches, and nodes. The number of nodes is increased until it reaches 16, and the network is evaluated through different scenarios (single, linear, and tree topologies). Findings show that the single topology outperforms the other topologies. The SDN implementation is measured with performance parameters such as throughput, jitter, bandwidth, and round-trip time between scenarios using the Ryu controller. Future research will explore the performance of SDN in larger networks and investigate the efficiency and effectiveness of SDN implementation in a mesh topology.
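The throughput and jitter figures collected in experiments like this one are simple functions of packet sizes and timestamps. The sketch below shows how they are typically computed (jitter as the mean absolute change between consecutive inter-arrival gaps); the packet trace is synthetic, not data from the study.

```python
def throughput_bps(bytes_received, seconds):
    """Average throughput in bits per second."""
    return bytes_received * 8 / seconds

def mean_jitter(arrival_times):
    """Mean absolute deviation between consecutive inter-arrival gaps,
    a common packet-jitter estimate."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    if len(gaps) < 2:
        return 0.0
    return sum(abs(b - a) for a, b in zip(gaps, gaps[1:])) / (len(gaps) - 1)

# Synthetic trace: five 1500-byte packets arriving over one second.
arrivals = [0.00, 0.10, 0.21, 0.30, 0.42]
print(throughput_bps(5 * 1500, 1.0))   # → 60000.0
print(round(mean_jitter(arrivals), 3))
```

Comparing these two numbers across single, linear, and tree topologies is exactly the kind of measurement the study reports for the Ryu controller.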
APA, Harvard, Vancouver, ISO, and other styles
48

Gorban, Alexander N., Richard Burton, Ilya Romanenko, and Ivan Yu Tyukin. "One-trial correction of legacy AI systems and stochastic separation theorems." Information Sciences 484 (May 2019): 237–54. http://dx.doi.org/10.1016/j.ins.2019.02.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Thomas, Frank N., Rebekah A. Waits, and Gail L. Hartsfield. "The influence of Gregory Bateson: legacy or vestige?" Kybernetes 36, no. 7/8 (August 14, 2007): 871–83. http://dx.doi.org/10.1108/03684920710777397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Gupta, M. P., and Deepak Bhatia. "Reworking with a Legacy Financial Accounting System: Lessons from a Pharma Company." Vikalpa: The Journal for Decision Makers 30, no. 3 (July 2005): 79–92. http://dx.doi.org/10.1177/0256090920050307.

Full text
Abstract:
The issues of legacy systems become more pronounced at the time of a major IT upheaval such as implementation of ERP or business process reengineering (BPR) exercise. In this changing scenario, there is a need to update the systems and skills and integrate them with the emerging enterprisewide infrastructure. The main problems with a legacy system are that it remains insulated from the update attempt that largely follows market trend thus rendering it outdated and also that its documentation is poor. In this paper, the authors share the experiences of a project undertaken in one of India's leading multinational pharmaceutical companies (MPC) which was to rework on the existing legacy system and design a new application. The legacy system referred to here is the company's financial accounting system which was developed in 1993. Originally designed in COBOL, it was subsequently improved as and when the finance department put forth its requirements. The major downside of the system was that it had virtually no documentation and no one from the original team that developed the system was still working with the company. This made it all the more difficult to understand and document the system. Also, the system had a high response time thus leading to lower productivity of the data entry staff and other users. Further, it had a limited reporting capability and was basically used for storing financial data. When this project was undertaken for rework, the MPC was in the process of implementing an ERP package for its manufacturing and, therefore, it was necessary to bring all its applications to the same database structure. The most obvious question was whether to discard the legacy system and implement ERP's accounting module. The management, however, decided to retain and rework on the legacy system with the intention of integrating the new system with ERP. 
The driving point in favour of this decision was the realization that the legacy system was regarded as very critical for the accounting function and also that the users had become conversant with the system despite it being not very user-friendly. Also, there was no risk of failure. Incidentally, the review of the legacy system and ERP implementation coincided, thereby easing out concerns of managing organizational changes as the company already had its strategy and preparedness in place for the scenario emerging out of ERP implementation. The computer-aided systems engineering (CASE) tool was chosen for designing the new system because of its inherent advantages in handling software projects, which are as follows:
- The well-documented new system simplifies the maintenance jobs and, therefore, fewer people are required for its maintenance (this was the major problem with the previous system).
- It has removed the dependence of the management on a small set of people who specialized in the maintenance of an undocumented system.
- Financial reporting has become easier and better.
The experience on this project made it amply clear that top management support can make or mar a project. This is one of the most popular hypotheses in the information systems literature, which has been found to be true in the case of the MPC.
APA, Harvard, Vancouver, ISO, and other styles