
Journal articles on the topic 'Support Code Refactorings'

Consult the top 37 journal articles for your research on the topic 'Support Code Refactorings.'

1

Hamioud, Sohaib, and Fadila Atil. "Model-driven Java code refactoring." Computer Science and Information Systems 12, no. 2 (2015): 375–403. http://dx.doi.org/10.2298/csis141025015h.

Abstract:
Refactoring is an important technique for restructuring code to improve its design and increase programmer productivity and code reuse. Performing refactorings manually, however, is tedious, time-consuming, and error-prone, so providing automated support for them is necessary. Unfortunately, even today such automation is not easily achieved and requires formal specifications of the refactoring process. Moreover, extensibility and tool-development automation should be taken into consideration when designing and implementing automated refactorings. In this paper, we introduce a model-driven approach in which refactoring features, such as code representation, analysis, and transformation, adopt models as first-class artifacts. We aim to explore the value of model transformation and code generation when formalizing refactorings and developing tool support. The presented approach is applied to the refactoring of Java code using a prototypical implementation based on the Eclipse Modeling Framework, a language workbench, a Java metamodel, and a set of OMG standards.
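The idea of treating code as a first-class model that transformations operate on can be shown in miniature. The paper targets Java via the Eclipse Modeling Framework; the sketch below is only a Python analogue using the standard `ast` module, with a hypothetical rename refactoring applied to the syntax tree rather than to the raw text:

```python
import ast

class RenameFunction(ast.NodeTransformer):
    """Rename a function definition and every reference to it by
    transforming the syntax tree (the 'model'), not the source text."""

    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_FunctionDef(self, node):
        if node.name == self.old:
            node.name = self.new
        self.generic_visit(node)      # keep walking into the body
        return node

    def visit_Name(self, node):
        if node.id == self.old:       # call sites and other references
            node.id = self.new
        return node

source = """\
def area(r):
    return 3.14159 * r * r

print(area(2))
"""

tree = RenameFunction("area", "circle_area").visit(ast.parse(source))
refactored = ast.unparse(tree)        # regenerate code from the model
print(refactored)
```

Because the transformation runs on the tree, it renames only genuine references, which is the essential benefit a model-based representation gives over textual search-and-replace.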
2

THOMPSON, SIMON, and HUIQING LI. "Refactoring tools for functional languages." Journal of Functional Programming 23, no. 3 (May 2013): 293–350. http://dx.doi.org/10.1017/s0956796813000117.

Abstract:
Refactoring is the process of changing the design of a program without changing what it does. Typical refactorings, such as function extraction and generalisation, are intended to make a program more amenable to extension, more comprehensible and so on. Refactorings differ from other sorts of program transformation in being applied to source code, rather than to a ‘core’ language within a compiler, and also in having an effect across a code base, rather than to a single function definition, say. Because of this, there is a need to give automated support to the process. This paper reflects on our experience of building tools to refactor functional programs written in Haskell (HaRe) and Erlang (Wrangler). We begin by discussing what refactoring means for functional programming languages, first in theory, and then in the context of a larger example. Next, we address system design and details of system implementation as well as contrasting the style of refactoring and tooling for Haskell and Erlang. Building both tools led to reflections about what particular refactorings mean, as well as requiring analyses of various kinds, and we discuss both of these. We also discuss various extensions to the core tools, including integrating the tools with test frameworks; facilities for detecting and eliminating code clones; and facilities to make the systems extensible by users. We then reflect on our work by drawing some general conclusions, some of which apply particularly to functional languages, while many others are of general value.
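Function extraction and generalisation, named above as typical refactorings, are easy to show in miniature. HaRe and Wrangler perform them on Haskell and Erlang; the following is merely an illustrative Python sketch with invented names:

```python
# Before: the tax computation is buried inline, and the fixed 0.2 rate
# cannot be reused or tested on its own.
def order_total_before(prices):
    return sum(prices) + sum(prices) * 0.2

# After 'function extraction': the computation gets a name of its own.
# After 'generalisation': the fixed rate becomes a parameter.
def with_tax(subtotal, rate=0.2):
    return subtotal + subtotal * rate

def order_total_after(prices):
    return with_tax(sum(prices))

# A behaviour-preserving refactoring must leave observable results unchanged.
assert order_total_before([10, 20]) == order_total_after([10, 20]) == 36.0
print(order_total_after([10, 20]))
```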
3

VAN HORN, DAVID, and MATTHEW MIGHT. "Systematic abstraction of abstract machines." Journal of Functional Programming 22, no. 4-5 (August 15, 2012): 705–46. http://dx.doi.org/10.1017/s0956796812000238.

Abstract:
We describe a derivational approach to abstract interpretation that yields novel and transparently sound static analyses when applied to well-established abstract machines for higher-order and imperative programming languages. To demonstrate the technique and support our claim, we transform the CEK machine of Felleisen and Friedman (Proc. of the 14th ACM SIGACT-SIGPLAN Symp. Prin. Program. Langs, 1987, pp. 314–325), a lazy variant of Krivine's machine (Higher-Order Symb. Comput. Vol 20, 2007, pp. 199–207), and the stack-inspecting CM machine of Clements and Felleisen (ACM Trans. Program. Lang. Syst. Vol 26, 2004, pp. 1029–1052) into abstract interpretations of themselves. The resulting analyses bound temporal ordering of program events; predict return-flow and stack-inspection behavior; and approximate the flow and evaluation of by-need parameters. For all of these machines, we find that a series of well-known concrete machine refactorings, plus a technique of store-allocated continuations, leads to machines that abstract into static analyses simply by bounding their stores. These machines are parameterized by allocation functions that tune performance and precision and substantially expand the space of analyses that this framework can represent. We demonstrate that the technique scales up uniformly to allow static analysis of realistic language features, including tail calls, conditionals, mutation, exceptions, first-class continuations, and even garbage collection. In order to close the gap between formalism and implementation, we provide translations of the mathematics as running Haskell code for the initial development of our method.
4

Szőke, Gábor. "Automating the Refactoring Process." Acta Cybernetica 23, no. 2 (2017): 715–35. http://dx.doi.org/10.14232/actacyb.23.2.2017.16.

Abstract:
To decrease software maintenance cost, software development companies use static source code analysis techniques. Static analysis tools are capable of finding potential bugs, anti-patterns, and coding rule violations, and they can also enforce coding style standards. Although there are several static analyzers to choose from, they only support issue detection; eliminating the issues is still performed manually by developers. Here, we propose a process that supports the automatic elimination of coding issues in Java. We introduce a tool that takes the output of a third-party static analyzer as input and automatically fixes the detected issues for developers. Our tool uses a special technique, called reverse AST-search, to locate source code elements in a syntax tree based only on location information. The tool was evaluated and tested in a two-year project with six software development companies, where thousands of code smells were identified and fixed in five systems comprising altogether over five million lines of code.
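The reverse AST-search idea — mapping the (line, column) location reported by a third-party analyzer back to a node in the syntax tree — can be approximated in a few lines. The following is only a simplified Python sketch of the general technique, not the tool's actual Java implementation:

```python
import ast

def node_at(source, line, col):
    """Return the innermost AST node whose source span covers (line, col).
    Span width is compared crudely; enough for a single-file sketch."""
    best, best_span = None, None
    for node in ast.walk(ast.parse(source)):
        if not hasattr(node, "lineno"):
            continue                      # e.g. the Module node has no location
        start = (node.lineno, node.col_offset)
        end = (node.end_lineno, node.end_col_offset)
        if start <= (line, col) < end:
            span = (end[0] - start[0], end[1] - start[1])
            if best is None or span <= best_span:
                best, best_span = node, span  # smaller span = more deeply nested
    return best

src = "def f(x):\n    return x + 1\n"
print(type(node_at(src, 2, 11)).__name__)  # position of 'x' inside 'x + 1'
print(type(node_at(src, 1, 0)).__name__)   # position of the definition itself
```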
5

Mei, Xin Yun, and Jian Bin Liu. "A Refactoring Framework of Program Model Based on Procedure Blueprint." Applied Mechanics and Materials 198-199 (September 2012): 490–94. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.490.

Abstract:
Refactoring has been studied for a long time, and model refactoring in particular has become a hot research topic in recent years. However, the differences between source-based and model-based refactoring make it hard to keep the target code consistent with model refactoring operations. To address this problem, this paper presents a refactoring framework for program models based on Procedure Blueprint and describes a prototype tool for program-model refactoring. By seamlessly connecting source code to the program model established by the procedure blueprint, the formalized framework unifies source-based and program-model-based refactoring. The framework supports visual representation of the program model and validation of behavior preservation for graph transformations, which reduces the complexity of refactoring analysis and software maintenance costs.
6

Sagar, Priyadarshni Suresh, Eman Abdulah AlOmar, Mohamed Wiem Mkaouer, Ali Ouni, and Christian D. Newman. "Comparing Commit Messages and Source Code Metrics for the Prediction Refactoring Activities." Algorithms 14, no. 10 (September 30, 2021): 289. http://dx.doi.org/10.3390/a14100289.

Abstract:
Understanding how developers refactor their code is critical to supporting the design improvement process of software. This paper investigates to what extent code metrics are good indicators for predicting refactoring activity in the source code. To do so, we formulated the prediction of refactoring operation types as a multi-class classification problem. Our solution relies on measuring metrics extracted from committed code changes in order to extract the corresponding features (i.e., metric variations) that best represent each class (i.e., refactoring type), in order to automatically predict, for a given commit, the method-level type of refactoring being applied, namely Move Method, Rename Method, Extract Method, Inline Method, Pull-up Method, and Push-down Method. We compared various classifiers in terms of their prediction performance, using a dataset of 5004 commits extracted from 800 Java projects. Our main findings show that the random forest model trained with code metrics achieved the best average accuracy, 75%. However, we detected variation in the results per class, which means that some refactoring types are harder to detect than others.
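The feature representation described here — per-metric variation between the code before and after a commit — reduces to a simple difference of metric vectors. A minimal sketch with hypothetical metric names and values (the study uses a much larger metric suite):

```python
# Metrics measured on the code touched by a commit (illustrative values).
before = {"loc": 120, "methods": 10, "coupling": 7}
after  = {"loc": 95,  "methods": 11, "coupling": 5}

# Feature vector: metric variations, the representation fed to the
# multi-class classifier that predicts the refactoring type.
variations = {name: after[name] - before[name] for name in before}
print(variations)

# A drop in lines of code together with a drop in coupling might, for
# example, co-occur with an Extract Method refactoring; the classifier
# learns such associations from labelled commits.
```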
7

Ferenzini Martins Sirqueira, Tassio, Allan Henrique Moreira Brandl, Evandro Jose Pereira Pedro, Ramon de Souza Silva, and Marco Antonio Pereira Araujo. "Code Smell Analyzer: A Tool To Teaching Support Of Refactoring Techniques Source Code." IEEE Latin America Transactions 14, no. 2 (February 2016): 877–84. http://dx.doi.org/10.1109/tla.2016.7437235.

8

Ivanov, R. A., and T. F. Valeev. "Automatic Refactoring of Java Code Using the Stream API." Vestnik NSU. Series: Information Technologies 17, no. 2 (2019): 49–60. http://dx.doi.org/10.25205/1818-7900-2019-17-2-49-60.

Abstract:
For a long time, functional programming in Java was not possible. However, lambda expressions appeared in version 8 of the Java language, and with the support of standard library classes (Stream, Optional, etc.) it became possible to describe transformations over data in a functional style. Java is a rather old language with a large imperative code base. To take advantage of the new approach, it is necessary to perform a non-trivial refactoring, which can be very tedious and error-prone when applied manually. Fortunately, in a sufficiently large number of situations this refactoring can be performed automatically and safely. Using IntelliJ IDEA as a platform, a software tool was developed that finds places where imperative code can be automatically converted to equivalent code using the Stream API, together with an automatic fix that performs the replacement. The refactoring uses the IntelliJ IDEA framework to analyze Java code and integrates into the IDE itself. One of the main criteria for the correct operation of the algorithm is the safety of the transformation: the user cannot trust the tool if the transformation can change the semantics of the code. This article discusses various constraints that are imposed on code patterns so that transformation without distortion of semantics is possible. The refactoring has been tested on various libraries to verify that semantics are preserved, by checking test results before and after applying it. This article does not describe the impact of using the Stream API on application performance.
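The shape of the transformation the tool automates is easy to show. In Java it rewrites an accumulating loop into a `stream().filter(...).map(...).collect(...)` chain; here is the equivalent before/after pattern, sketched in Python purely for illustration:

```python
data = [1, 2, 3, 4, 5, 6]

# Imperative form: the loop pattern such an inspection detects.
result_loop = []
for n in data:
    if n % 2 == 0:
        result_loop.append(n * n)

# Functional pipeline the refactoring would produce; in Java terms:
# data.stream().filter(n -> n % 2 == 0).map(n -> n * n).collect(toList())
result_pipeline = [n * n for n in data if n % 2 == 0]

# The rewrite is only safe when the loop body has no side effects on
# surrounding state -- one of the semantic constraints the article discusses.
assert result_loop == result_pipeline
print(result_pipeline)
```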
9

Tkachuk, Andrii, and Bogdan Bulakh. "Research of possibilities of default refactoring actions in Swift language." Technology audit and production reserves 5, no. 2(67) (October 24, 2022): 6–10. http://dx.doi.org/10.15587/2706-5448.2022.266061.

Abstract:
The object of research in this paper is the built-in refactoring mechanism of the Swift programming language. Swift has gained a lot of popularity recently, which is why there are many new challenges associated with the need to support and modify source code written in this programming language. The problem is that the more powerful refactoring mechanism that can be applied to Swift is proprietary and cannot be used by other software. Moreover, even closed-source refactoring tools are not capable of performing more complex queries. To explore the possibilities of extending the built-in refactoring, it is suggested to investigate the software implementation of the sourcekit component of the Swift toolchain, which is responsible for working with «raw» source code, and to implement a new refactoring action in practice. To carry out the research plan, one refactoring action not present in the existing refactoring utilities (adding an implementation of the Equatable protocol) was chosen. Its implementation was developed using the components and resources provided within the sourcekit component. To check its correctness and compliance with the development conditions, several tests were created and conducted. It was discovered that both refactoring mechanisms supported by the Swift programming language have a limited context and a limited scope of application. That is why extensions of the functionality should operate not at the local level of code processing but at the upper level, where several source files can be combined, as often happens in projects. The work was directed to the development of our own refactoring action in order to obtain a clear picture of the advantages and disadvantages of the existing component. As a result, a new approach to refactoring was proposed that solves the problems described above.
10

Ying, Ming, and James Miller. "ART-Improving Execution Time for Flash Applications." International Journal of Systems and Service-Oriented Engineering 2, no. 1 (January 2011): 1–20. http://dx.doi.org/10.4018/jssoe.2011010101.

Abstract:
Rich Internet Applications (RIA) require fast execution time and allow richer, faster, and more interactive experiences. Flash is a common technology for building RIAs. Flash programmers usually specialize in graphic design rather than programming. In addition, the tight schedule of projects makes the Flash programmers ignore non-functional characteristics such as the efficiency of their systems; yet, to enhance Flash users’ experiences, writing efficient ActionScript code is a key requirement. Flash programmers require automated support to assist with this key requirement. This paper proposes a refactoring system called ART (ActionScript Refactoring Tool) to provide automatic support for Flash programmers by rewriting their ActionScript code to make their applications faster.
11

Binkley, D., M. Ceccato, M. Harman, F. Ricca, and P. Tonella. "Tool-Supported Refactoring of Existing Object-Oriented Code into Aspects." IEEE Transactions on Software Engineering 32, no. 9 (September 2006): 698–717. http://dx.doi.org/10.1109/tse.2006.95.

12

Jhamat, Naveed, Zeeshan Arshad, and Kashif Riaz. "Towards Automatic Updates of Software Dependencies based on Artificial Intelligence." Global Social Sciences Review V, no. III (September 30, 2020): 174–80. http://dx.doi.org/10.31703/gssr.2020(v-iii).19.

Abstract:
Software reusability encourages developers to rely heavily on a variety of third-party libraries and packages, resulting in dependent software products. Although often ignored by developers due to the risk of breakage, dependent software has to adopt security and performance updates in its external dependencies. Existing work advocates a shift towards automatically updating dependent software code to implement dependency updates. Emerging automatic dependency management tools notify developers of new updates, detect their impact on dependent software, and identify potential breakages or other vulnerabilities. However, support for automatic source code refactoring to fix potential breaking changes is (to the best of our knowledge) missing from these tools. This paper presents a prototype tool, DepRefactor, that assists in the programmatic refactoring of software code required by automatic updates of its dependencies. To measure the accuracy and effectiveness of DepRefactor, we tested it on various student projects developed in C#.
13

Fulop, Endre, Attila Gyen, and Norbert Pataki. "Monaco Support for an Improved Exception Specification in C++." IPSI Transactions on Internet Research 19, no. 01 (January 1, 2023): 24–31. http://dx.doi.org/10.58245/ipsi.tir.2301.05.

Abstract:
Exception handling is a beneficial language construct in modern programming languages. However, C++'s type system does not really accommodate these elements. As a consequence, developers have to pay attention to avoid mistakes because of the missing compiler support. Moreover, C++11 provides an approach in which exceptions appear in the function's signature in an inverse manner compared to the earlier standards. Static analysis is an approach in which we reason about a program based on its source code, with no execution of the analyzed code. It can be used for many purposes, for instance, finding bugs, refactoring code, or measuring code complexity. In this paper, we analyze how older-style exception specifications can be rejuvenated into modern idioms. Explicitly marking functions as having a guaranteed exception-free execution is the primary approach since C++11. We develop a static analyzer tool that provides hints for these specifications and evaluate our method by analyzing open-source projects. Based on this evaluation, not using the strictest possible exception specification is a problem that occurred in every analyzed project. We would like to assist developers in identifying instances of this problem by providing an integrated comprehension tool that enables them to use the exception analysis results interactively in their IDEs.
14

Baez, Abelardo, Himar Fabelo, Samuel Ortega, Giordana Florimbi, Emanuele Torti, Abian Hernandez, Francesco Leporati, Giovanni Danese, Gustavo M. Callico, and Roberto Sarmiento. "High-Level Synthesis of Multiclass SVM Using Code Refactoring to Classify Brain Cancer from Hyperspectral Images." Electronics 8, no. 12 (December 6, 2019): 1494. http://dx.doi.org/10.3390/electronics8121494.

Abstract:
Currently, high-level synthesis (HLS) methods and tools are a highly relevant area in the strategy of several leading companies in the field of systems-on-chip (SoCs) and field programmable gate arrays (FPGAs). HLS facilitates the work of system developers, who benefit from integrated and automated design workflows, considerably reducing the design time. Although many advances have been made in this research field, there are still some uncertainties about the quality and performance of the designs generated with the use of HLS methodologies. In this paper, we propose an optimization of the HLS methodology by code refactoring using Xilinx SDSoC™ (Software-Defined System-On-Chip). Several options were analyzed for each alternative through code refactoring of a multiclass support vector machine (SVM) classifier written in C, using two different Zynq®-7000 SoC devices from Xilinx, the ZC7020 (ZedBoard) and the ZC7045 (ZC706). The classifier was evaluated using a brain cancer database of hyperspectral images. The proposed methodology not only reduces the required resources, using less than 20% of the FPGA, but also reduces power consumption by 23% compared to the full implementation. The speedup obtained, 2.86× (ZC7045), is the highest found in the literature for SVM hardware implementations.
15

BUSONIU, PAULA-ANDRA, JOHANNES OETSCH, JÖRG PÜHRER, PETER SKOČOVSKÝ, and HANS TOMPITS. "SeaLion: An eclipse-based IDE for answer-set programming with advanced debugging support." Theory and Practice of Logic Programming 13, no. 4-5 (July 2013): 657–73. http://dx.doi.org/10.1017/s1471068413000410.

Abstract:
In this paper, we present SeaLion, an integrated development environment (IDE) for answer-set programming (ASP). SeaLion provides source-code editors for the languages of Gringo and DLV and offers popular amenities like syntax highlighting, syntax checking, code completion, visual program outline, and refactoring functionality. The tool has been realised in the context of a research project whose goal is the development of techniques to support the practical coding process of answer-set programs. In this respect, SeaLion is the first IDE for ASP that provides debugging features that work for real-world answer-set programs and supports the rich languages of modern answer-set solvers. Indeed, SeaLion implements a stepping-based debugging approach that allows the developer to quickly track down programming errors by simply following his or her intuitions on the intended semantics. Besides that, SeaLion supports ASP development using model-driven engineering techniques including domain modelling with extended UML class diagrams and visualisation of answer sets in corresponding instance diagrams. Moreover, customised visualisation as well as visual editing of answer sets is realised by the Kara plugin of SeaLion. Further implemented features are a documentation generator based on the Lana annotation language, support for external solvers, and interoperability with external tools. SeaLion comes as a plugin of the popular Eclipse platform and provides interfaces for future extensions of the IDE.
16

Akour, Mohammed, Mamdouh Alenezi, and Hiba Alsghaier. "Software Refactoring Prediction Using SVM and Optimization Algorithms." Processes 10, no. 8 (August 15, 2022): 1611. http://dx.doi.org/10.3390/pr10081611.

Abstract:
Test suite code coverage is often used as an indicator of a test suite's capability to detect faults. However, earlier studies that explored the correlation between code coverage and test suite effectiveness have not addressed this correlation evolutionarily. Moreover, some of these works have only addressed small systems, or systems from the same domain, which makes it unclear how the results generalize to systems from other domains. Software refactoring promotes positive consequences in terms of software maintainability and understandability. It aims to enhance software quality by modifying the internal structure of systems without affecting their external behavior. However, identifying what needs refactoring, and at which level it should be executed, is still a big challenge for software developers. In this paper, the authors explore the effectiveness of employing a support vector machine along with two optimization algorithms to predict software refactoring at the class level. In particular, the SVM was trained with genetic and whale optimization algorithms. A well-known dataset belonging to open-source software systems (i.e., ANTLR4, JUnit, MapDB, and McMMO) was used in this study. All experiments achieved a promising accuracy range between 84% for the SVM–JUnit system and 93% for McMMO − GA + Whale + SVM, so clear added value was gained from merging the SVM with the two optimization algorithms. All experiments also achieved a promising F-measure range between 86% for the SVM–Antlr4 system and 96% for the McMMO − GA + Whale + SVM system. Moreover, the results of the proposed approach were compared with those of four well-known ML algorithms (Naïve Bayes, instance-based IBk, Random Tree, and Random Forest); the proposed approach outperformed the prediction performance of all of them.
17

Panigrahi, Rasmita, Sanjay Kumar Kuanar, Sanjay Misra, and Lov Kumar. "Class-Level Refactoring Prediction by Ensemble Learning with Various Feature Selection Techniques." Applied Sciences 12, no. 23 (November 29, 2022): 12217. http://dx.doi.org/10.3390/app122312217.

Abstract:
Background: Refactoring means changing a software system without affecting its functionality. Current research aims to identify the appropriate methods or classes that need to be refactored in object-oriented software. Ensemble learning helps to reduce prediction errors by amalgamating different classifiers and their respective performances over the original feature data. This paper additionally considers several ensemble learners, error measures, sampling techniques, and feature selection techniques for refactoring prediction at the class level.
Objective: This work aims to develop an ensemble-based refactoring prediction model with structural identification of source code metrics, using different feature selection techniques and data sampling techniques to distribute the data uniformly. Our model finds the best classifier by achieving fewer errors during refactoring prediction at the class level.
Methodology: First, our proposed model extracts a total of 125 software metrics computed from object-oriented software systems, processed by a robust multi-phased feature selection method encompassing a Wilcoxon significance test, a Pearson correlation test, and principal component analysis (PCA). The multi-phased feature selection method retains the optimal features characterizing inheritance, size, coupling, cohesion, and complexity. After obtaining the optimal set of software metrics, a novel heterogeneous ensemble classifier is developed using ANN-Gradient Descent, ANN-Levenberg Marquardt, ANN-GDX, and ANN-Radial Basis Function; support vector machines with different kernel functions (LSSVM-Linear, LSSVM-Polynomial, LSSVM-RBF); the Decision Tree algorithm; the Logistic Regression algorithm; and an extreme learning machine (ELM) model as base classifiers. We calculate four different errors: Mean Absolute Error (MAE), Mean Magnitude of Relative Error (MORE), Root Mean Square Error (RMSE), and Standard Error of the Mean (SEM).
Result: In our proposed model, the maximum voting ensemble (MVE) achieves better accuracy, recall, precision, and F-measure values (99.76, 99.93, 98.96, 98.44) compared to the base trained ensemble (BTE), and it experiences fewer errors (MAE = 0.0057, MORE = 0.0701, RMSE = 0.0068, and SEM = 0.0107) during refactoring prediction.
Conclusions: Our experimental results suggest that MVE with upsampling can improve the performance of the refactoring prediction model at the class level. Furthermore, the performance of our model with different data sampling and feature selection techniques is shown as boxplot diagrams of accuracy, F-measure, precision, recall, and area under the curve (AUC).
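The four error measures reported above are standard and easy to restate; here is a self-contained sketch of their computation on small illustrative numbers (not the paper's data):

```python
import math

def error_metrics(actual, predicted):
    """MAE, mean magnitude of relative error (MORE), RMSE, and standard
    error of the mean (SEM) of the residuals."""
    n = len(actual)
    residuals = [p - a for a, p in zip(actual, predicted)]
    mae = sum(abs(r) for r in residuals) / n
    more = sum(abs(r) / abs(a) for a, r in zip(actual, residuals)) / n
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    mean_r = sum(residuals) / n
    sd = math.sqrt(sum((r - mean_r) ** 2 for r in residuals) / (n - 1))
    sem = sd / math.sqrt(n)
    return mae, more, rmse, sem

actual = [1.0, 2.0, 4.0, 8.0]
predicted = [1.1, 1.9, 4.2, 7.8]
mae, more, rmse, sem = error_metrics(actual, predicted)
print(f"MAE={mae:.4f} MORE={more:.4f} RMSE={rmse:.4f} SEM={sem:.4f}")
```

Note that RMSE penalizes large residuals more heavily than MAE, which is why the two are reported together.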
18

Brown, Christopher, Vladimir Janjic, M. Goli, and J. McCall. "Programming Heterogeneous Parallel Machines Using Refactoring and Monte–Carlo Tree Search." International Journal of Parallel Programming 48, no. 4 (June 10, 2020): 583–602. http://dx.doi.org/10.1007/s10766-020-00665-z.

Abstract:
This paper presents a new technique for introducing and tuning parallelism for heterogeneous shared-memory systems (comprising a mixture of CPUs and GPUs), using a combination of algorithmic skeletons (such as farms and pipelines), Monte–Carlo tree search for deriving mappings of tasks to available hardware resources, and refactoring tool support for applying the patterns and mappings in an easy and effective way. Using our approach, we demonstrate easily obtainable, significant and scalable speedups on a number of case studies showing speedups of up to 41 over the sequential code on a 24-core machine with one GPU. We also demonstrate that the speedups obtained by mappings derived by the MCTS algorithm are within 5–15% of the best-obtained manual parallelisation.
19

Arshinoff, Bradley I., Gregory A. Cary, Kamran Karimi, Saoirse Foley, Sergei Agalakov, Francisco Delgado, Vaneet S. Lotay, et al. "Echinobase: leveraging an extant model organism database to build a knowledgebase supporting research on the genomics and biology of echinoderms." Nucleic Acids Research 50, no. D1 (November 17, 2021): D970—D979. http://dx.doi.org/10.1093/nar/gkab1005.

Abstract:
Echinobase (www.echinobase.org) is a third-generation web resource supporting genomic research on echinoderms. The new version was built by cloning the mature Xenopus model organism knowledgebase, Xenbase, refactoring data ingestion pipelines and modifying the user interface to adapt to multispecies echinoderm content. This approach leveraged over 15 years of previous database and web application development to generate a new fully featured informatics resource in a single year. In addition to the software stack, Echinobase uses the private cloud and physical hosts that support Xenbase. Echinobase currently supports six echinoderm species, focused on those used for genomics, developmental biology and gene regulatory network analyses. Over 38 000 gene pages, 18 000 publications, new improved genome assemblies, JBrowse genome browser and BLAST+ services are available and supported by the development of a new echinoderm anatomical ontology, uniformly applied formal gene nomenclature, and consistent orthology predictions. A novel feature of Echinobase is integrating support for multiple, disparate species. New genomes from the diverse echinoderm phylum will be added and supported as data becomes available. The common code development design of the integrated knowledgebases ensures parallel improvements as each resource evolves. This approach is widely applicable for developing new model organism informatics resources.
20

Zhang, Shaoqing, Haohuan Fu, Lixin Wu, Yuxuan Li, Hong Wang, Yunhui Zeng, Xiaohui Duan, et al. "Optimizing high-resolution Community Earth System Model on a heterogeneous many-core supercomputing platform." Geoscientific Model Development 13, no. 10 (October 8, 2020): 4809–29. http://dx.doi.org/10.5194/gmd-13-4809-2020.

Abstract:
With semiconductor technology gradually approaching its physical and thermal limits, recent supercomputers have adopted major architectural changes to continue increasing the performance through more power-efficient heterogeneous many-core systems. Examples include Sunway TaihuLight that has four management processing elements (MPEs) and 256 computing processing elements (CPEs) inside one processor and Summit that has two central processing units (CPUs) and six graphics processing units (GPUs) inside one node. Meanwhile, current high-resolution Earth system models that desperately require more computing power generally consist of millions of lines of legacy code developed for traditional homogeneous multicore processors and cannot automatically benefit from the advancement of supercomputer hardware. As a result, refactoring and optimizing the legacy models for new architectures become key challenges along the road of taking advantage of greener and faster supercomputers, providing better support for the global climate research community and contributing to the long-lasting societal task of addressing long-term climate change. This article reports the efforts of a large group in the International Laboratory for High-Resolution Earth System Prediction (iHESP) that was established by the cooperation of Qingdao Pilot National Laboratory for Marine Science and Technology (QNLM), Texas A&M University (TAMU), and the National Center for Atmospheric Research (NCAR), with the goal of enabling highly efficient simulations of the high-resolution (25 km atmosphere and 10 km ocean) Community Earth System Model (CESM-HR) on Sunway TaihuLight. The refactoring and optimizing efforts have improved the simulation speed of CESM-HR from 1 SYPD (simulation years per day) to 3.4 SYPD (with output disabled) and supported several hundred years of pre-industrial control simulations. With further strategies on deeper refactoring and optimizing for remaining computing hotspots, as well as redesigning architecture-oriented algorithms, we expect an equivalent or even better efficiency to be gained on the new platform than traditional homogeneous CPU platforms. The refactoring and optimizing processes detailed in this paper on the Sunway system should have implications for similar efforts on other heterogeneous many-core systems such as GPU-based high-performance computing (HPC) systems.
APA, Harvard, Vancouver, ISO, and other styles
21

Franky, Maria Consuelo, Jaime A. Pavlich-Mariscal, Maria Catalina Acero, Angee Zambrano, John C. Olarte, Jorge Camargo, and Nicolás Pinzón. "ISML-MDE." International Journal of Web Information Systems 12, no. 4 (November 7, 2016): 533–56. http://dx.doi.org/10.1108/ijwis-04-2016-0025.

Full text
Abstract:
Purpose The purpose of this paper is to present ISML-MDE, a model-driven environment that includes ISML, a platform-independent modeling language for enterprise applications; ISML-GEN, a code generation framework to automatically generate code from models; and LionWizard, a tool to automatically integrate different components into a unified codebase. Design/methodology/approach The development comprises five stages: standardizing architecture; refactoring and adapting existing components; automating their integration; developing a modeling language; and creating code generators. After development, model-to-code ratios in ISML-MDE are measured for different applications. Findings The average model-to-code ratio is approximately 1:4.6 when using the code generators from arbitrary models. If a model transformation is performed prior to code generation, this ratio rises to 1:115. The current validation efforts show that ISML properly supports several DSL essential characteristics described by Kahraman and Bilgen (2015). Research limitations/implications ISML-MDE was tested on relatively small applications. Further validation of the approach requires measurement of development times and their comparison with previous similar projects, to determine the gains in productivity. Originality/value The value of ISML-MDE can be summarized as follows: ISML-MDE has the potential to significantly reduce development times, because of an adequate use of models and transformations. The design of ISML-MDE addresses real-world development requirements, obtained from a tight interaction between the researchers and the software development company. The underlying process has been thoroughly documented and it is believed it can be used as a reference for future developments of MDE tools under similar conditions.
APA, Harvard, Vancouver, ISO, and other styles
22

Volkanin, L. S., and A. Yu Khachay. "FUNCTIONAL REWORK OF THE 1C:UNIVERSITY PROF USING THE CONFIGURATION EXTENSION MECHANISM." Informatics and education, no. 3 (May 10, 2019): 33–46. http://dx.doi.org/10.32517/0234-0453-2019-34-3-33-46.

Full text
Abstract:
The 1C:University PROF cannot be implemented without adaptation to the characteristics of a particular university. New functionality, however, brings problems of further maintenance and of installing new versions. Well-proven rework practices do not always solve the problem, because the vendor's standard code cannot be changed without removing the configuration from support, which makes fully automatic updates impossible. Merging changes is most difficult after a serious refactoring of the vendor's code. If the university has no specialist of its own, this may cause rejection of improvements and a significant slowdown of the implementation. The article considers a way of extending the functionality of a configuration without losing the possibility of automatic updates, suitable for all modern 1C solutions, including those used in educational activity: 1C:University and 1C:University PROF, 1C:College, 1C:Educational Institution, 1C:Management of Training Center. Information is provided on the evolution of the configuration extension mechanism, with examples of improvements that can be implemented using this mechanism. The examples are based on experience from automation projects in universities of the Ural Federal District.
APA, Harvard, Vancouver, ISO, and other styles
23

Nikitenko, Ye V., and N. V. Ometsynska. "A search system for media content in telegram messenger." Mathematical machines and systems 1 (2021): 42–51. http://dx.doi.org/10.34121/1028-9763-2021-1-42-51.

Full text
Abstract:
An autonomous Telegram bot aimed at searching for media content (movies, cartoons and TV series) is developed; it displays video titles, categories, main tags, descriptions, cast, ratings, links to third-party resources and a list of similar content. The Telegram bot has a special interface for the administrator with usage statistics and the ability to send messages. The WebStorm development environment was used to implement the chatbot. The TypeScript language on the NodeJS platform was chosen as the programming language, because it is compatible with JavaScript and compiles to it. After compilation, a TypeScript program can be run in any modern browser or used in conjunction with the server platform. TypeScript differs from JavaScript in its ability to explicitly and statically define types, its support for full-fledged classes and its support for connecting modules, which is designed to increase the speed of development, facilitate readability, refactoring and reuse of the code, help with debugging at the development and compilation stages, and possibly speed up program execution. The Telegram bot was created and configured to implement the client part; this feature is implemented inside the messenger itself. To speed up development and simplify interaction with Telegram, a modern framework for NodeJS – Telegraph – was used. Cloud Firestore was used as the database. It is a flexible, scalable cloud-based NoSQL database from Firebase and Google Cloud Platform for the Internet, mobile platforms and server applications. Cloud Firestore supports flexible, hierarchical data structures and stores data in documents which, in turn, are stored in collections. Documents can have attachments, objects and subcollections.
APA, Harvard, Vancouver, ISO, and other styles
24

Pissanetzky, Sergio. "Reasoning with Computer Code: a new Mathematical Logic." Journal of Artificial General Intelligence 3, no. 3 (January 4, 2013): 11–42. http://dx.doi.org/10.2478/v10229-011-0020-6.

Full text
Abstract:
Abstract A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical Physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code, that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean.
We argue that self-programming and the fundamental question about the origin of algorithms are inextricably linked. We discuss previously published, fully automated applications to self-programming, and present a virtual machine that supports the logic, an algorithm that allows for the virtual machine to be simulated on a digital computer, and a fully explained neural network implementation of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
25

Forsberg, Björn, Marco Solieri, Marko Bertogna, Luca Benini, and Andrea Marongiu. "The Predictable Execution Model in Practice." ACM Transactions on Embedded Computing Systems 20, no. 5 (July 2021): 1–25. http://dx.doi.org/10.1145/3465370.

Full text
Abstract:
Adoption of multi- and many-core processors in real-time systems has so far been slowed down, if not totally barred, due to the difficulty in providing analytical real-time guarantees on worst-case execution times. The Predictable Execution Model (PREM) has been proposed to solve this problem, but its practical support requires significant code refactoring, a task better suited for a compilation tool chain than for human programmers. Implementing a PREM compiler presents significant challenges to conform to PREM requirements, such as guaranteed upper bounds on memory footprint and the generation of efficient schedulable non-preemptive regions. This article presents a comprehensive description of how a PREM compiler can be implemented, based on several years of experience from the community. We provide accumulated insights on how to best balance conformance to real-time requirements and performance and present novel techniques that extend the applicability from simple benchmark suites to real-world applications. We show that code transformed by the PREM compiler enables timing predictable execution on modern commercial off-the-shelf hardware, providing novel insights on how PREM can protect 99.4% of memory accesses on random replacement policy caches at only 16% performance loss on benchmarks from the PolyBench benchmark suite. Finally, we show that the requirements imposed on the programming model are well-aligned with current coding guidelines for timing critical software, promoting easy adoption.
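As an illustration of the kind of refactoring PREM calls for, the hedged Java sketch below splits a loop into a memory phase that copies a tile into a local buffer and a compute phase that touches only that buffer. All names are invented for illustration; real PREM compilers operate on C/C++ for embedded targets, and this sketch only conveys the phase-separation idea.

```java
// Illustrative-only sketch of a PREM-style transformation (names invented).
class PremSketch {
    // Original form: loads from main memory and arithmetic are interleaved,
    // so cache misses can occur at unpredictable points in the loop.
    static long sumInterleaved(int[] data) {
        long s = 0;
        for (int i = 0; i < data.length; i++) s += data[i];
        return s;
    }

    // PREM-style form: each tile is first brought into a local buffer
    // (memory phase), then processed without further main-memory traffic
    // (compute phase), making the access pattern analyzable.
    static long sumPremStyle(int[] data, int tile) {
        long s = 0;
        int[] buf = new int[tile];
        for (int base = 0; base < data.length; base += tile) {
            int n = Math.min(tile, data.length - base);
            System.arraycopy(data, base, buf, 0, n); // memory phase
            for (int i = 0; i < n; i++) s += buf[i]; // compute phase
        }
        return s;
    }
}
```

Both versions compute the same result; only the ordering of memory accesses changes, which is precisely what makes the transformed code amenable to worst-case timing analysis.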
APA, Harvard, Vancouver, ISO, and other styles
26

Christensen, Paul, Sishir Subedi, Heather Hendrickson, Jessica Thomas, Zejuan Li, S. Wesley Long, and Randall Olsen. "Updates to a Lab-Developed Bioinformatics Pipeline and Application for Interpretation of Clinical Next-Generation Sequencing Panels." American Journal of Clinical Pathology 152, Supplement_1 (September 11, 2019): S9—S10. http://dx.doi.org/10.1093/ajcp/aqz112.019.

Full text
Abstract:
Abstract Objectives Our goal was to enhance our next-generation sequencing (NGS) molecular oncology workflow from sequencing to analysis through improvements to our custom-built and previously described NGS application. Methods Over 1 year, we collected feedback regarding workflow pain-points and feature requests from all end users of our NGS application. The application consists of a series of scripted pipelines, a MySQL database, and a Java Graphic User Interface (GUI); the end users include molecular pathologists (MPs), medical technologist/medical laboratory technologists (MTs/MLTs), and the molecular laboratory manager. These feedback data were used to engineer significant changes to the pipelines and software architecture. These architecture changes provided the backbone to a suite of feature enhancements aimed to improve turnaround time, decrease manual processes, and increase efficiency for the molecular laboratory staff and directors. Summary The key software architecture changes include implementing support for multiple environments, refactoring common code in the different pipelines, migrating from a per-run pipeline model to a per-sample pipeline model, and key updates to the MySQL database. These changes enabled development of many technical and user experience improvements. We eliminated the need for the pipelines to be launched manually from the Linux command line. Multiple pipelines can be executed concurrently. We created a per-sample pipeline status monitor. Sample entry is integrated with our Laboratory Information System (LIS) barcodes, thus reducing the possibility of transcription errors. We developed quality assurance reports. Socket-based integration with Integrated Genomics Viewer (IGV) was enhanced. We enabled rapid loading of key alignment data into IGV over a wireless network. Features to support resident and fellow driven variant and gene annotation reporting were developed. Support for additional clinical databases was implemented. 
Conclusions The designed feature enhancements to our previously reported NGS application have added significant sophistication and safety to our clinical NGS workflow. For example, our NGS consensus conference can be held in a conference room over a wireless network, and a trainee can prepare and present each case without ever leaving the application. To date, we have analyzed 2,540 samples using three different assays (TruSight Myeloid Sequencing Panel, AmpliSeq Cancer Hotspot Panel, GlioSeq) and four sequencing instruments (NextSeq, MiSeq, Proton, PGM) in this application. The code is freely available on GitHub.
APA, Harvard, Vancouver, ISO, and other styles
27

"UML Activity Diagram Use for Functional Test Suit Generation and Redundancy Removal Supported Model Driven Testing." International Journal of Engineering and Advanced Technology 8, no. 6 (August 30, 2019): 2391–96. http://dx.doi.org/10.35940/ijeat.f8370.088619.

Full text
Abstract:
The method combines the extended finite state machine and the UML activity diagram to generate the test model. Here we have considered different coverage criteria for generating test paths from the model, giving good coverage of all probable scenarios. The activity diagram describes the operation of the system and the transitions of decision nodes from one action state to another; the flow of control is also represented. This emphasizes the sequence and conditions of flow and gives an idea of the internal nodes. Refactoring is the process of altering an application’s source code without changing its external behavior. The purpose of code refactoring is to improve nonfunctional properties of the code, such as readability, complexity, maintainability and extensibility. Refactoring can extend the life of source code, preventing it from becoming legacy code, and makes future enhancements to such code a more pleasant experience. Refactoring is also known as reengineering. Test suites tend to be massive in size, as redundant test cases are generated due to the presence of code smells; hence the need to reduce these smells. Methods/Statistical Analysis: This analysis adopts a proactive approach of reducing redundancy by detecting the lazy-class code smell, based on the cohesion and dependency of the code, and applying the inline-class refactoring before test case generation, thereby significantly avoiding the generation of redundant test cases.
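The lazy-class smell and the inline-class refactoring discussed above can be sketched in a few lines of Java. This is a minimal illustration with invented names, not code from the paper:

```java
// Before: LazyRate is a "lazy class" -- it wraps a single constant and adds
// no behaviour of its own, so any tests written against it are redundant.
class LazyRate {
    static final double VAT = 0.2;
}

// After the inline-class refactoring, the constant lives in the class that
// actually uses it, and LazyRate (with its redundant tests) disappears.
class Invoice {
    static final double VAT = 0.2; // inlined from LazyRate

    static double totalWithVat(double net) {
        return net * (1.0 + VAT);
    }
}
```

Removing the lazy class before test generation means no test paths are produced for a class that has no behaviour of its own, which is how the approach avoids redundant test cases.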
APA, Harvard, Vancouver, ISO, and other styles
28

Mastropaolo, Antonio, Emad Aghajani, Luca Pascarella, and Gabriele Bavota. "Automated variable renaming: are we there yet?" Empirical Software Engineering 28, no. 2 (February 14, 2023). http://dx.doi.org/10.1007/s10664-022-10274-8.

Full text
Abstract:
Abstract. Identifiers, such as method and variable names, form a large portion of source code. Therefore, low-quality identifiers can substantially hinder code comprehension. To support developers in using meaningful identifiers, several (semi-)automatic techniques have been proposed, mostly being data-driven (e.g., statistical language models, deep learning models) or relying on static code analysis. Still, limited empirical investigations have been performed on the effectiveness of such techniques for recommending meaningful identifiers to developers, possibly resulting in rename refactoring operations. We present a large-scale study investigating the potential of data-driven approaches to support automated variable renaming. We experiment with three state-of-the-art techniques: a statistical language model and two DL-based models. The three approaches have been trained and tested on three datasets we built with the goal of evaluating their ability to recommend meaningful variable identifiers. Our quantitative and qualitative analyses show the potential of such techniques that, under specific conditions, can provide valuable recommendations and are ready to be integrated in rename refactoring tools. Nonetheless, our results also highlight limitations of the experimented approaches that call for further research in this field.
APA, Harvard, Vancouver, ISO, and other styles
29

Biswas, Som. "Role of ChatGPT in Computer Programming." Mesopotamian Journal of Computer Science, January 15, 2022, 20–28. http://dx.doi.org/10.58496/mjcsc/2022/004.

Full text
Abstract:
Purpose: The purpose of this abstract is to outline the role and capabilities of ChatGPT, a language model developed by OpenAI for computer programming. Methodology: ChatGPT is a large language model that has been trained on a diverse range of texts and can perform a variety of programming-related tasks. These tasks include code completion and correction, code snippet prediction and suggestion, automatic syntax error fixing, code optimization and refactoring suggestions, missing code generation, document generation, chatbot development, text-to-code generation, and answering technical queries. Results: ChatGPT can provide users with explanations, examples, and guidance to help them understand complex concepts and technologies, find relevant resources, and diagnose and resolve technical problems. Its use can improve overall satisfaction with support services and help organizations build a reputation for expertise and reliability. Conclusions: ChatGPT is a powerful and versatile tool for computer programming that can support developers and users in a wide range of tasks. Its ability to provide explanations, examples, and guidance makes it a valuable resource for technical support, while its ability to perform programming-related tasks can improve efficiency and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
30

Biswas, Som. "Role of ChatGPT in Computer Programming." Mesopotamian Journal of Computer Science, January 15, 2023, 8–16. http://dx.doi.org/10.58496/mjcsc/2023/002.

Full text
Abstract:
Purpose: The purpose of this abstract is to outline the role and capabilities of ChatGPT, a language model developed by OpenAI for computer programming. Methodology: ChatGPT is a large language model that has been trained on a diverse range of texts and can perform a variety of programming-related tasks. These tasks include code completion and correction, code snippet prediction and suggestion, automatic syntax error fixing, code optimization and refactoring suggestions, missing code generation, document generation, chatbot development, text-to-code generation, and answering technical queries. Results: ChatGPT can provide users with explanations, examples, and guidance to help them understand complex concepts and technologies, find relevant resources, and diagnose and resolve technical problems. Its use can improve overall satisfaction with support services and help organizations build a reputation for expertise and reliability. Conclusions: ChatGPT is a powerful and versatile tool for computer programming that can support developers and users in a wide range of tasks. Its ability to provide explanations, examples, and guidance makes it a valuable resource for technical support, while its ability to perform programming-related tasks can improve efficiency and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
31

Ralhan, Chavi, Rakesh Bishnoi, Ankit, Anjali, and Hitesh Kumar. "Analysis of Software Clones." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, April 20, 2021, 439–50. http://dx.doi.org/10.32628/cseit217290.

Full text
Abstract:
Duplicated code, or code clones, is a kind of code that negatively affects the development and maintenance of software systems. Software clone research in the past mostly centered on the detection and analysis of code clones, while research in recent years extends to the whole spectrum of clone management. In the last decade, three surveys appeared in the literature covering the detection, analysis and evolutionary characteristics of code clones. This paper presents a comprehensive survey of the state of the art in clone management, with in-depth analysis of clone management activities (e.g., tracking, refactoring, cost-benefit analysis) beyond detection and analysis. This is the first survey on clone management, in which we highlight the achievements so far and reveal avenues for further research necessary towards an integrated clone management system. We believe that we have done a thorough job of surveying the area of clone management and that this work may serve as a roadmap for future research in the area.
APA, Harvard, Vancouver, ISO, and other styles
32

Sujadi, Sendy Ferdian. "Evaluasi Deteksi Smell Code dan Anti Pattern pada Aplikasi Berbasis Java." Jurnal Teknik Informatika dan Sistem Informasi 5, no. 3 (January 26, 2020). http://dx.doi.org/10.28932/jutisi.v5i3.1981.

Full text
Abstract:
This paper presents the results of an evaluation of code smell and anti-pattern detection in Java-based application development. The main objectives of this research are to determine a proper way to detect code smells and anti-patterns in the development of Java-based software, and to evaluate the impact of using code inspection tools and software metrics on code refactoring in Java-based software development. The code smells to be detected in this research are Long Parameter List, Large Class, Lazy Class, Feature Envy, Long Method, and Dead Code; the anti-patterns to be detected are The Blob / God Class and Lava Flow. The selection of code smells and anti-patterns is based on their definitions, characteristics, detection factors, and software metrics. The research is supported by an evaluation stage in which a case-study Java application serves as a sample for code inspection, detection of code smells and anti-patterns, and calculation of software metrics. The selected case study is an e-commerce application with functionality for master-data management of goods and customers as well as management of sales and payment transactions. Detection of code smells and anti-patterns in the case study is done in stages so that it can be determined whether or not to refactor, and ensures that the programming technique better fits the characteristics and rules of object-oriented programming.
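One of the smells listed above, Long Parameter List, and its common fix, the Introduce Parameter Object refactoring, can be illustrated with a small Java sketch (class and method names are invented, not taken from the paper's case study):

```java
// Smelly version: every caller must pass four loosely related values,
// which inspection tools typically flag as a Long Parameter List.
class OrderServiceSmelly {
    static double total(double price, int qty, double discount, double tax) {
        return price * qty * (1.0 - discount) * (1.0 + tax);
    }
}

// Refactored version: the parameters are grouped into one cohesive object
// (Introduce Parameter Object), shrinking every call site's signature.
class OrderLine {
    final double price;
    final int qty;
    final double discount;
    final double tax;

    OrderLine(double price, int qty, double discount, double tax) {
        this.price = price;
        this.qty = qty;
        this.discount = discount;
        this.tax = tax;
    }

    double total() {
        return price * qty * (1.0 - discount) * (1.0 + tax);
    }
}
```

Both versions compute the same total; the refactoring changes only the shape of the API, which is why metrics such as parameter count improve while behaviour is preserved.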
APA, Harvard, Vancouver, ISO, and other styles
33

Pasquini, Marta, and Marco Stenta. "LinChemIn: SynGraph—a data model and a toolkit to analyze and compare synthetic routes." Journal of Cheminformatics 15, no. 1 (April 1, 2023). http://dx.doi.org/10.1186/s13321-023-00714-y.

Full text
Abstract:
Abstract Background The increasing amount of chemical reaction data makes traditional ways to navigate its corpus less effective, while the demand for novel approaches and instruments is rising. Recent data science and machine learning techniques support the development of new ways to extract value from the available reaction data. On the one side, Computer-Aided Synthesis Planning tools can predict synthetic routes in a model-driven approach; on the other side, experimental routes can be extracted from the Network of Organic Chemistry, in which reaction data are linked in a network. In this context, the need to combine, compare and analyze synthetic routes generated by different sources arises naturally. Results Here we present LinChemIn, a python toolkit that allows chemoinformatics operations on synthetic routes and reaction networks. Wrapping some third-party packages for handling graph arithmetic and chemoinformatics and implementing new data models and functionalities, LinChemIn allows the interconversion between data formats and data models and enables route-level analysis and operations, including route comparison and descriptors calculation. Object-Oriented Design principles inspire the software architecture, and the modules are structured to maximize code reusability and support code testing and refactoring. The code structure should facilitate external contributions, thus encouraging open and collaborative software development. Conclusions The current version of LinChemIn allows users to combine synthetic routes generated from various tools and analyze them, and constitutes an open and extensible framework capable of incorporating contributions from the community and fostering scientific discussion. Our roadmap envisages the development of sophisticated metrics for routes evaluation, a multi-parameter scoring system, and the implementation of an entire “ecosystem” of functionalities operating on synthetic routes. 
LinChemIn is freely available at https://github.com/syngenta/linchemin. Graphical Abstract
APA, Harvard, Vancouver, ISO, and other styles
34

Mendonça, Walter Lucas Monteiro, José Fortes, Francisco Vitor Lopes, Diego Marcílio, Rodrigo Bonifácio De Almeida, Edna Dias Canedo, Fernanda Lima, and João Saraiva. "Understanding the Impact of Introducing Lambda Expressions in Java Programs." Journal of Software Engineering Research and Development 8 (October 11, 2020). http://dx.doi.org/10.5753/jserd.2020.744.

Full text
Abstract:
Background: The Java programming language version eight introduced several features that encourage the functional style of programming, including the support for lambda expressions and the Stream API. Currently, there is common wisdom that refactoring legacy code to introduce lambda expressions, besides other potential benefits, simplifies the code and improves program comprehension. Aims: The purpose of this work is to investigate this belief, conducting an in-depth study to evaluate the effect of introducing lambda expressions on program comprehension. Method: We conduct this research using a mixed-method study. For the quantitative method, we quantitatively analyze 166 pairs of code snippets extracted directly either from GitHub or from recommendations from three tools (RJTL, NetBeans, and IntelliJ). We also surveyed practitioners to collect their perceptions about the benefits to program comprehension when introducing lambda expressions. We asked practitioners to evaluate and rate sets of pairs of code snippets. Results: We found contradictory results in our research. Based on the quantitative assessment, we could not find evidence that the introduction of lambda expressions improves software readability—one of the components of program comprehension. Our results suggest that the transformations recommended by the aforementioned tools decrease program comprehension when assessed by two state-of-the-art models to estimate readability. In contrast, the findings of our qualitative assessment suggest that the introduction of lambda expressions improves program comprehension in three scenarios: when we convert anonymous inner classes to a lambda expression, structural loops with an inner conditional to an anyMatch operator, and structural loops to a filter operator combined with a collect method.
Implications: We argue in this paper that one can improve program comprehension when she applies particular transformations to introduce lambda expressions (e.g., replacing anonymous inner classes with lambda expressions). Also, the opinion of the participants shines the opportunities in which a transformation for introducing lambda might be advantageous. This might support the implementation of effective tools for automatic program transformations.
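Two of the three transformations the abstract names, loop-with-inner-conditional to anyMatch and structural loop to filter plus collect, can be sketched as before/after pairs in Java 8. This is a minimal illustration with invented method names, not code from the study:

```java
import java.util.List;
import java.util.stream.Collectors;

class LambdaRefactorings {
    // Before: structural loop with an inner conditional.
    static boolean hasNegativeLoop(List<Integer> xs) {
        for (int x : xs) {
            if (x < 0) return true;
        }
        return false;
    }

    // After: the same check expressed with anyMatch and a lambda.
    static boolean hasNegativeStream(List<Integer> xs) {
        return xs.stream().anyMatch(x -> x < 0);
    }

    // Before: structural loop that accumulates into a result list.
    static java.util.List<Integer> positivesLoop(List<Integer> xs) {
        java.util.List<Integer> out = new java.util.ArrayList<>();
        for (int x : xs) {
            if (x > 0) out.add(x);
        }
        return out;
    }

    // After: the loop replaced by filter combined with collect.
    static List<Integer> positivesStream(List<Integer> xs) {
        return xs.stream().filter(x -> x > 0).collect(Collectors.toList());
    }
}
```

Each pair is behaviour-preserving; whether the stream form actually reads better is exactly the question the study puts to practitioners.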
APA, Harvard, Vancouver, ISO, and other styles
35

Kim, Taeyoung, Suntae Kim, Duksan Ryu, and Jaehyuk Cho. "Deep Tasks Summarization for Comprehending Mixed Tasks in a Commit." International Journal of Software Engineering and Knowledge Engineering, December 10, 2022, 1–23. http://dx.doi.org/10.1142/s0218194022500711.

Full text
Abstract:
In Version Control System (VCS), a developer frequently uploads multiple tasks such as adding features, code refactoring, and fixing bugs, into a single commit and crumbles each task’s summary when writing a commit message. It causes code readers to feel challenged in understanding the developer’s past tasks within the commit history. To resolve this issue, we propose an automatic approach to generating a task summary to help comprehend multiple mixed tasks in a commit and developed tool support named Task summary Generator (TsGen). In our approach, we use the commit with a single task as input and identify the task to sort its elements sequentially. Then we generate feature vectors from each sorted element to train the Neural Machine Translation (NMT) model. Based on the trained NMT model, we generate the feature vector from each task of a commit with multiple tasks and put each of them into the model to provide the task summary. In evaluation, we compared the performance of TsGen with two existing methods for nine open-source projects. As a result, TsGen outperformed CoDiSum and Jiang’s NMT by 52.08% and 28.07% in BiLingual Evaluation Understudy (BLEU) scores. In addition, the human evaluation was carried out to demonstrate that TsGen helps understand mixed tasks in a commit and gained a 0.27 higher preference than the actual commit message.
APA, Harvard, Vancouver, ISO, and other styles
36

Semenets, A. V., and V. P. Martsenyuk. "РЕФАКТОРІНГ ПРОГРАМНОГО КОДУ ДІАЛОГОВОГО КОМПОНЕНТУ ПЛАТФОРМИ СИСТЕМИ ПІДТРИМКИ ПРИЙНЯТТЯ РІШЕННЯ ДЛЯ МЕДИЧНОЇ ІНФОРМАЦІЙНОЇ СИСТЕМИ З ВІДКРИТИМ КОДОМ OpenMRS" [Refactoring of the program code of the dialog component of the decision support system platform for the open-source medical information system OpenMRS]. Medical Informatics and Engineering, no. 2 (August 9, 2016). http://dx.doi.org/10.11603/mie.1996-1960.2016.2.6487.

Full text
Abstract:
The importance of Medical Information Systems (MIS) for medical practice is emphasized, and the wide usage of Electronic Medical Records (EMR) software is shown. The importance of, and alternative approaches to, the implementation of MIS in the Ukrainian healthcare system are discussed. The benefits of open-source MIS usage are shown, and the effectiveness of applying a Clinical Decision Support System (CDSS) in the medical decision making process is emphasized. The developer tools and software API of the open-source MIS OpenMRS are reviewed. The results of code refactoring of the dialog subsystem of the CDSS platform, implemented as a module for the open-source MIS OpenMRS, are presented. The structure of the information model of the database of the CDSS dialog subsystem was updated in accordance with MIS OpenMRS requirements. The Model-View-Controller (MVC) based approach to the CDSS dialog subsystem architecture was re-implemented in the Java programming language using the Spring and Hibernate frameworks. The MIS OpenMRS Encounter portlet form for CDSS dialog subsystem integration is developed as an extension. The administrative module of the CDSS platform was recreated. The data exchange formats and methods for interaction between the OpenMRS CDSS dialog subsystem module and the DecisionTree GAE service were re-implemented with the help of AJAX technology via the jQuery library.
37

Glöckler, Falko, James Macklin, David Shorthouse, Christian Bölling, Satpal Bilkhu, and Christian Gendreau. "DINA—Development of open source and open services for natural history collections & research." Biodiversity Information Science and Standards 4 (October 6, 2020). http://dx.doi.org/10.3897/biss.4.59070.

Full text
Abstract:
The DINA Consortium (DINA = “DIgital information system for NAtural history data”, https://dina-project.net) is a framework for like-minded practitioners of natural history collections to collaborate on the development of distributed, open source software that empowers and sustains collections management. Target collections include zoology, botany, mycology, geology, paleontology, and living collections. The DINA software will also permit the compilation of biodiversity inventories and will robustly support both observation and molecular data. The DINA Consortium focuses on an open source software philosophy and on community-driven open development. Contributors share their development resources and expertise for the benefit of all participants. The DINA System is explicitly designed as a loosely coupled set of web-enabled modules. At its core, this modular ecosystem includes strict guidelines for the structure of Web application programming interfaces (APIs), which guarantees the interoperability of all components (https://github.com/DINA-Web). Important to the DINA philosophy is that users (e.g., collection managers, curators) be actively engaged in an agile development process. This ensures that the product is pleasing for everyday use, includes efficient yet flexible workflows, and implements best practices in specimen data capture and management. There are three options for developing a DINA module: create a new module compliant with the specifications (Fig. 1), modify an existing code-base to attain compliance (Fig. 2), or wrap a compliant API around existing code that cannot be or may not be modified (e.g., infeasible, dependencies on other systems, closed code) (Fig. 3).
All three of these scenarios have been applied in the modules recently developed: a module for molecular data (SeqDB), modules for multimedia, documents and agents data and a service module for printing labels and reports: The SeqDB collection management and molecular tracking system (Bilkhu et al. 2017) has evolved through two of these scenarios. Originally, the required architectural changes were going to be added into the codebase, but after some time, the development team recognised that the technical debt inherent in the project wasn’t worth the effort of modification and refactoring. Instead a new codebase was created bringing forward the best parts of the system oriented around the molecular data model for Sanger Sequencing and Next Generation Sequencing (NGS) workflows. In the case of the Multimedia and Document Store module and the Agents module, a brand new codebase was established whose technology choices were aligned with the DINA vision. These two modules have been created from fundamental use cases for collection management and digitization workflows and will continue to evolve as more modules come online and broaden their scope. The DINA Labels & Reporting module is a generic service for transforming data in arbitrary printable layouts based on customizable templates. In order to use the module in combination with data managed in collection management software Specify (http://specifysoftware.org) for printing labels of collection objects, we wrapped the Specify 7 API with a DINA-compliant API layer called the “DINA Specify Broker”. This allows for using the easy-to-use web-based template engine within the DINA Labels & Reports module without changing Specify’s codebase. In our presentation we will explain the DINA development philosophy and will outline benefits for different stakeholders who directly or indirectly use collections data and related research data in their daily workflows.
We will also highlight opportunities for joining the DINA Consortium and how to best engage with members of DINA who share their expertise in natural science, biodiversity informatics and geoinformatics.
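The third integration option the abstract describes, wrapping an unmodifiable system behind a compliant facade as the "DINA Specify Broker" does, can be sketched in miniature. All endpoint names, field mappings, and the response layout below are hypothetical; the real broker targets the Specify 7 API, whose actual schema is not given here:

```python
# A minimal sketch of the adapter/facade pattern behind the broker idea:
# translate a legacy response into a compliant attribute layout without
# touching the legacy code. Field names are invented for illustration.

def specify_label_data(record_id: int) -> dict:
    """Stand-in for a response from the existing (unmodifiable) system."""
    return {"CatalogNumber": f"SP-{record_id}", "FullTaxonName": "Quercus robur"}

def broker_get(record_id: int) -> dict:
    """Wrap the legacy response in a compliant resource-object structure."""
    legacy = specify_label_data(record_id)
    return {
        "data": {
            "id": str(record_id),
            "type": "collection-object",
            "attributes": {
                "catalogNumber": legacy["CatalogNumber"],
                "scientificName": legacy["FullTaxonName"],
            },
        }
    }

print(broker_get(42)["data"]["attributes"]["catalogNumber"])  # SP-42
```

The design point is that consumers only ever see the compliant shape, so the legacy system can later be replaced by a native module without changing any client.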