Journal articles on the topic 'Code source (informatique) – Documentation'

To see the other types of publications on this topic, follow the link: Code source (informatique) – Documentation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Code source (informatique) – Documentation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Bingjie 王冰洁, Joel Leja, Ivo Labbé, Rachel Bezanson, Katherine E. Whitaker, Gabriel Brammer, Lukas J. Furtak, et al. "The UNCOVER Survey: A First-look HST+JWST Catalog of Galaxy Redshifts and Stellar Population Properties Spanning 0.2 ≲ z ≲ 15." Astrophysical Journal Supplement Series 270, no. 1 (December 28, 2023): 12. http://dx.doi.org/10.3847/1538-4365/ad0846.

Abstract:
The recent UNCOVER survey with the James Webb Space Telescope (JWST) exploits the nearby cluster A2744 to create the deepest view of our Universe to date by leveraging strong gravitational lensing. In this work, we perform photometric fitting of more than 50,000 robustly detected sources out to z ∼ 15. We show the redshift evolution of stellar ages, star formation rates, and rest-frame colors across the full range of 0.2 ≲ z ≲ 15. The galaxy properties are inferred using the Prospector Bayesian inference framework using informative Prospector-β priors on the masses and star formation histories to produce joint redshift and stellar population posteriors. Additionally, lensing magnification is performed on the fly to ensure consistency with the scale-dependent priors. We show that this approach produces excellent photometric redshifts with σNMAD ∼ 0.03, of a similar quality to the established photometric redshift code EAzY. In line with the open-source scientific objective of this Treasury survey, we publicly release the stellar population catalog with this paper, derived from our photometric catalog adapting aperture sizes based on source profiles. This release (the catalog and all related documentation are accessible via the UNCOVER survey web page: https://jwst-uncover.github.io/DR2.html#SPSCatalogs, with a copy deposited to Zenodo at doi:10.5281/zenodo.8401181) includes posterior moments, maximum likelihood spectra, star formation histories, and full posterior distributions, offering a rich data set to explore the processes governing galaxy formation and evolution over a parameter space now accessible by JWST.
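
A note on the headline metric (the definition below is the standard photo-z convention, not quoted from the paper itself): σNMAD is the normalized median absolute deviation of the redshift residuals,

```latex
\sigma_{\mathrm{NMAD}} \;=\; 1.48 \times \operatorname{median}\!\left( \frac{\left|\Delta z - \operatorname{median}(\Delta z)\right|}{1 + z_{\mathrm{spec}}} \right), \qquad \Delta z = z_{\mathrm{phot}} - z_{\mathrm{spec}},
```

so σNMAD ∼ 0.03 corresponds to a typical scatter of about 3% in (1 + z).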
2

Pan, Lu, Huy Q. Dinh, Yudi Pawitan, and Trung Nghia Vu. "Isoform-level quantification for single-cell RNA sequencing." Bioinformatics 38, no. 5 (December 2, 2021): 1287–94. http://dx.doi.org/10.1093/bioinformatics/btab807.

Abstract:
Motivation: RNA expression at isoform level is biologically more informative than at gene level and can potentially reveal cellular subsets and corresponding biomarkers that are not visible at gene level. However, due to the strong 3ʹ bias sequencing protocol, mRNA quantification for high-throughput single-cell RNA sequencing such as Chromium Single Cell 3ʹ 10× Genomics is currently performed at the gene level. Results: We have developed an isoform-level quantification method for high-throughput single-cell RNA sequencing by exploiting the concepts of transcription clusters and isoform paralogs. The method, called Scasa, compares well in simulations against competing approaches including Alevin, Cellranger, Kallisto, Salmon, Terminus and STARsolo at both isoform- and gene-level expression. The reanalysis of a CITE-Seq dataset with isoform-based Scasa reveals a subgroup of CD14 monocytes missed by gene-based methods. Availability and implementation: Implementation of Scasa including source code, documentation, tutorials and test data supporting this study is available at GitHub: https://github.com/eudoraleer/scasa and Zenodo: https://doi.org/10.5281/zenodo.5712503. Supplementary information: Supplementary data are available at Bioinformatics online.
3

Couture, Stéphane. "L'écriture collective du code source informatique." Revue d'anthropologie des connaissances 6, 1, no. 1 (2012): 21. http://dx.doi.org/10.3917/rac.015.0061.

4

Boumann, K. "Terminologische Databank En Geautomatiseerde Informatie En Documentatie." Vertalen in theorie en praktijk 21 (January 1, 1985): 128–33. http://dx.doi.org/10.1075/ttwia.21.16bou.

Abstract:
This paper is essentially a progress report on: the European Community terminology data bank, known as Eurodicautom; the machine translation projects Systran and Eurotra; and access to Community, European and American data banks. Eurodicautom is now a fully developed electronic dictionary, containing about 334,000 terminological units (single words, phrases and abbreviations) in English, 318,000 in French, 239,000 in German, 150,000 in Italian, 145,000 in Danish, 136,000 in Dutch, 64,000 in Spanish, 10,000 in Portuguese and 700 in Latin (5 November 1984). Translations are accompanied by descriptive, linguistic and documentary information (viz. definition, context, source, originating office, author, subject code and reliability rating). Eurodicautom is also available to the public on-line (for details apply to Echo, Customer Service, 15 avenue de La Faïencerie, L-1510 Luxemburg, tel. 352-20764). At present Systran provides machine translations for the language pairs English-French, French-English and English-Italian. English-German is being introduced and French-German will become available shortly. Consideration is being given to developing systems from either French or English into Greek, Danish or Dutch (report Ian M. Pigott, 29 May 1984). Rapid post-editing (emphasis on accuracy) or full post-editing (thorough revision) is always required. Work on Eurotra, a machine translation system of advanced design, is now well under way (the two-year preparatory phase is all but complete). Unlike Systran (language pairs, one way), Eurotra will be set up to supply translations from any source language available in the system into a number of target languages. The first results are due by 1989, after a phase of basic and applied linguistic research (2 years) and a phase of stabilization of the linguistic models and evaluation of results (18 months). There is a brief outline of the objectives and the programme of work in Council Decision 82/752/EEC, Official Journal of the European Communities, 1982 No L 317, pp. 19-23. Translators can now be assisted by Information Officers to consult titles, abstracts and full articles in some 140 Community, European and American data systems.
5

Al-Msie'deen, Ra'Fat, and Anas H. Blasi. "Supporting software documentation with source code summarization." International Journal of ADVANCED AND APPLIED SCIENCES 6, no. 1 (January 2019): 59–67. http://dx.doi.org/10.21833/ijaas.2019.01.008.

6

Sulír, Matúš, and Jaroslav Porubän. "Source Code Documentation Generation Using Program Execution." Information 8, no. 4 (November 17, 2017): 148. http://dx.doi.org/10.3390/info8040148.

7

Wu, Y. C., and T. P. Baker. "A source code documentation system for Ada." ACM SIGAda Ada Letters IX, no. 5 (July 1989): 84–88. http://dx.doi.org/10.1145/71340.71344.

8

Arthur, Menaka Pushpa. "Automatic Source Code Documentation using Code Summarization Technique of NLP." Procedia Computer Science 171 (2020): 2522–31. http://dx.doi.org/10.1016/j.procs.2020.04.273.

9

Carvalho, Nuno, Alberto Simões, and José Almeida. "DMOSS: Open source software documentation assessment." Computer Science and Information Systems 11, no. 4 (2014): 1197–207. http://dx.doi.org/10.2298/csis131005027c.

Abstract:
Besides source code, the fundamental source of information about open source software lies in documentation and other non-source-code files, like README, INSTALL, or How-To files, commonly available in the software ecosystem. These documents, written in natural language, provide valuable information during the software development stage, but also in future maintenance and evolution tasks. DMOSS is a toolkit designed to systematically assess the quality of non-source-code content found in software packages. The toolkit handles a package as an attribute tree and performs several tree traversal algorithms through a set of plugins, specialized in retrieving specific metrics from text, gathering information about the software. These metrics are later used to infer knowledge about the software, and are composed together to build reports that assess the quality of specific features. This paper discusses the motivations for this work, continues with a description of the toolkit's implementation and design goals, and follows with an example of its usage to process a software package and the produced report.
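
A loose sketch of that attribute-tree-plus-plugins design (the node structure and the toy metric are illustrative assumptions, not DMOSS's actual API):

```python
# Loose sketch of the attribute-tree-plus-plugins design; Node and the toy
# metric are illustrative assumptions, not DMOSS's actual API.
from dataclasses import dataclass, field

@dataclass
class Node:                       # a package file or directory with its text
    name: str
    text: str = ""
    children: list["Node"] = field(default_factory=list)

def ascii_word_ratio(text: str) -> float:
    """Toy plugin: fraction of plain-ASCII words, a stand-in for a real metric."""
    words = text.split()
    return sum(w.isascii() for w in words) / len(words) if words else 1.0

def traverse(node: Node, plugins: list) -> dict:
    """Run every plugin on every node and collect the raw metric values."""
    report = {node.name: [plugin(node.text) for plugin in plugins]}
    for child in node.children:
        report |= traverse(child, plugins)
    return report

pkg = Node("pkg", children=[Node("README", "Builds fine on Linux."),
                            Node("INSTALL", "Run make install.")])
print(traverse(pkg, [ascii_word_ratio]))
```

Real plugins would presumably compute richer text metrics (spelling, readability, completeness), but the traverse-and-report shape is the same.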
10

Wang, April Yi, Dakuo Wang, Jaimie Drozdal, Michael Muller, Soya Park, Justin D. Weisz, Xuye Liu, Lingfei Wu, and Casey Dugan. "Documentation Matters: Human-Centered AI System to Assist Data Science Code Documentation in Computational Notebooks." ACM Transactions on Computer-Human Interaction 29, no. 2 (April 30, 2022): 1–33. http://dx.doi.org/10.1145/3489465.

Abstract:
Computational notebooks allow data scientists to express their ideas through a combination of code and documentation. However, data scientists often pay attention only to the code, and neglect creating or updating their documentation during quick iterations. Inspired by human documentation practices learned from 80 highly-voted Kaggle notebooks, we design and implement Themisto, an automated documentation generation system to explore how human-centered AI systems can support human data scientists in the machine learning code documentation scenario. Themisto facilitates the creation of documentation via three approaches: a deep-learning-based approach to generate documentation for source code, a query-based approach to retrieve online API documentation for source code, and a user prompt approach to nudge users to write documentation. We evaluated Themisto in a within-subjects experiment with 24 data science practitioners, and found that automated documentation generation techniques reduced the time for writing documentation, reminded participants to document code they would have ignored, and improved participants’ satisfaction with their computational notebook.
11

Kuhn, Tobias, and Alexandre Bergel. "Verifiable source code documentation in controlled natural language." Science of Computer Programming 96 (December 2014): 121–40. http://dx.doi.org/10.1016/j.scico.2014.01.002.

12

Righolt, Christiaan H., Barret A. Monchka, and Salaheddin M. Mahmud. "From source code to publication: Code Diary, an automatic documentation parser for SAS." SoftwareX 7 (January 2018): 222–25. http://dx.doi.org/10.1016/j.softx.2018.07.002.

13

de Queiroz Lafetá, Raquel Fialho, Thiago Fialho de Queiroz Lafetá, and Marcelo de Almeida Maia. "An Automated Approach for Constructing Framework Instantiation Documentation." International Journal of Software Engineering and Knowledge Engineering 30, no. 04 (April 2020): 575–601. http://dx.doi.org/10.1142/s0218194020500205.

Abstract:
A substantial effort, in general, is required for understanding APIs of application frameworks. High-quality API documentation may alleviate the effort, but the production of such documentation still poses a major challenge for modern frameworks. To facilitate the production of framework instantiation documentation, we hypothesize that the framework code itself and the code of existing instantiations provide useful information. However, given the size and complexity of existing code, automated approaches are required to assist the documentation production. Our goal is to assess an automated approach for constructing relevant documentation for framework instantiation based on source code analysis of the framework itself and of existing instantiations. The criterion for defining whether documentation is relevant is to compare it with traditional framework documentation, considering the time spent and correctness during instantiation activities, information usefulness, complexity of the activity, navigation, satisfaction, information localization and clarity. The proposed approach generates documentation in a cookbook style, where the recipes are programming activities using the necessary API elements, driven by the framework features. We performed an empirical study, consisting of three experiments with 44 human subjects executing real framework instantiations, aimed at comparing the use of the proposed cookbooks to traditional manual framework documentation (baseline). Our empirical assessment shows that the generated cookbooks performed better or, at least, with no significant difference when compared to the traditional documentation, evidencing the effectiveness of the approach.
14

Lemos, Filipe, Filipe F. Correia, Ademar Aguiar, and Paulo G. G. Queiroz. "Live software documentation of design pattern instances." PeerJ Computer Science 10 (August 16, 2024): e2090. http://dx.doi.org/10.7717/peerj-cs.2090.

Abstract:
Background: Approaches to documenting the software patterns of a system can support intentionally and manually documenting them or automatically extracting them from the source code. Some of the approaches that we review do not maintain proximity between code and documentation. Others do not update the documentation after the code is changed. All of them present a low level of liveness. Approach: This work proposes an approach to improve the understandability of a software system by documenting the design patterns it uses. We regard the creation and the documentation of software as part of the same process and attempt to streamline the two activities. We achieve this by increasing the feedback about the pattern instances present in the code during development—i.e., by increasing liveness. Moreover, our approach maintains proximity between code and documentation and allows us to visualize the pattern instances under the same environment. We developed a prototype—DesignPatternDoc—for IntelliJ IDEA that continuously identifies pattern instances in the code, suggests them to the developer, generates the respective pattern-instance documentation, and enables live editing and visualization of that documentation. Results: To evaluate this approach, we conducted a controlled experiment with 21 novice developers. We asked participants to complete three tasks that involved understanding and evolving small software systems—up to six classes and 100 lines of code—and recorded the duration and the number of context switches. The results show that our approach helps developers spend less time understanding and documenting a software system when compared to using tools with a lower degree of liveness. Additionally, embedding documentation in the IDE and maintaining it close to the source code reduces context switching significantly.
15

Klieber, Werner, Michael Granitzer, Mansuet Gaisbauer, and Klaus Tochtermann. "Semantically Enhanced Software Documentation Processes." Serdica Journal of Computing 4, no. 2 (July 20, 2010): 243–62. http://dx.doi.org/10.55630/sjc.2010.4.243-262.

Abstract:
High-quality software documentation is a substantial issue for understanding software systems. Shorter time-to-market software cycles increase the importance of automation for keeping the documentation up to date. In this paper, we describe the automatic support of the software documentation process using semantic technologies. We introduce a software documentation ontology as an underlying knowledge base. The defined ontology is populated automatically by analysing source code, software documentation and code execution. Through selected results we demonstrate that the use of such semantic systems can support software documentation processes efficiently.
16

MARCUS, ANDRIAN, JONATHAN I. MALETIC, and ANDREY SERGEYEV. "RECOVERY OF TRACEABILITY LINKS BETWEEN SOFTWARE DOCUMENTATION AND SOURCE CODE." International Journal of Software Engineering and Knowledge Engineering 15, no. 05 (October 2005): 811–36. http://dx.doi.org/10.1142/s0218194005002543.

Abstract:
An approach for the semi-automated recovery of traceability links between software documentation and source code is presented. The methodology is based on the application of information retrieval techniques to extract and analyze the semantic information from the source code and associated documentation. A semi-automatic process is defined based on the proposed methodology. The paper advocates the use of latent semantic indexing (LSI) as the supporting information retrieval technique. Two case studies using existing software are presented comparing this approach with others. The case studies show positive results for the proposed approach, especially considering the flexibility of the methods used.
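
To make the idea concrete, here is a rough sketch of LSI-based link recovery (the corpus strings, component count, and use of scikit-learn are placeholder assumptions, not the authors' setup):

```python
# Rough sketch of LSI link recovery: embed documentation sections and source
# files in a shared latent space, then rank candidate trace links by cosine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = ["user login and session handling",            # documentation sections
        "exporting reports to pdf files"]
code = ["class LoginManager handles user sessions",   # identifiers and comments
        "class PdfExporter writes report files",
        "class Cache stores recently used objects"]

tfidf = TfidfVectorizer()
matrix = tfidf.fit_transform(docs + code)
lsi = TruncatedSVD(n_components=2).fit(matrix)         # the latent semantic space

d_vecs = lsi.transform(tfidf.transform(docs))
c_vecs = lsi.transform(tfidf.transform(code))
links = cosine_similarity(d_vecs, c_vecs)              # links[i, j]: doc i vs. file j
print(links.round(2))
```

In a semi-automatic process like the one the paper describes, an analyst would inspect the top-ranked pairs rather than accept them blindly.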
17

Shinde, Swapnil, Vishnu Suryawanshi, Varsha Jadhav, Nakul Sharma, and Mandar Diwakar. "Graph-Based Keyphrase Extraction for Software Traceability in Source Code and Documentation Mapping." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 832–36. http://dx.doi.org/10.17762/ijritcc.v11i9.8973.

Abstract:
Natural Language Processing (NLP) forms the basis of several computational tasks. However, when applied to software systems, NLP yields many irrelevant features, and noise gets mixed in during feature extraction. As the scale of software systems increases, different metrics are needed to assess them. Diagrammatic and visual representation of a software engineering (SE) project's code forms an essential component of Source Code Analysis (SCA). These SE projects can be analyzed neither by traditional source code analysis methods nor by traditional diagrammatic representation. Hence, there is a need to modify the traditional approaches in light of changing environments, to reduce the learning gap for developers and traceability engineers. The traditional approaches fall short in addressing specific metrics in terms of document similarity and graph dependency approaches. In terms of source code analysis, the dependency graph can be used to find relevant key terms and keyphrases as they occur not just intra-document but also inter-document. In this work, a context-based similarity measure is proposed that can be employed to find traceability links between the source code metrics and the API documents present in a package. A probabilistic graph-based keyphrase extraction approach is used for searching across the different project files.
18

Gallaway, Michael Shayne, Bin Huang, Quan Chen, Tom Tucker, Jaclyn McDowell, Eric Durbin, David Siegel, and Eric Tai. "Identifying Smoking Status and Smoking Cessation Using a Data Linkage Between the Kentucky Cancer Registry and Health Claims Data." JCO Clinical Cancer Informatics, no. 3 (December 2019): 1–8. http://dx.doi.org/10.1200/cci.19.00011.

Abstract:
PURPOSE: Linkage of cancer registry data with complementary data sources can be an informative way to expand what is known about patients and their treatment and improve delivery of care. The purpose of this study was to explore whether patient smoking status and smoking-cessation modality data in the Kentucky Cancer Registry (KCR) could be augmented by linkage with health claims data. METHODS: The KCR conducted a data linkage with health claims data from Medicare, Medicaid, state employee insurance, Humana, and Anthem. Smoking status was defined as documentation of personal history of tobacco use (International Classification of Diseases, Ninth Revision [ICD-9] code V15.82) or tobacco use disorder (ICD-9 305.1) before and after a cancer diagnosis. Use of smoking-cessation treatments before and after the cancer diagnosis was defined as documentation of smoking-cessation counseling (Healthcare Common Procedure Coding System codes 99406, 99407, G0375, and G0376) or pharmacotherapy (eg, nicotine replacement therapy, bupropion, varenicline). RESULTS: From 2007 to 2011, among 23,703 patients in the KCR, we discerned a valid prediagnosis smoking status for 78%. KCR data only (72%), claims data only (6%), and a combination of both data sources (22%) were used to determine valid smoking status. Approximately 4% of patients with cancer identified as smokers (n = 11,968) were provided smoking-cessation counseling, and 3% were prescribed pharmacotherapy for smoking cessation. CONCLUSION: Augmenting KCR data with medical claims data increased capture of smoking status and use of smoking-cessation modalities. Cancer registries interested in exploring smoking status to influence treatment and research activities could consider a similar approach, particularly if their registry does not capture smoking status for a majority of patients.
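
The coding rules above amount to set-membership tests over claim records; a hedged sketch (the claim-record shape is hypothetical, the codes are those quoted in the abstract):

```python
# Hedged sketch of the study's coding rules; the claim-record layout is
# hypothetical, the codes are those quoted in the abstract.
SMOKER_DX = {"V15.82", "305.1"}          # tobacco use history / tobacco use disorder
CESSATION_PROC = {"99406", "99407", "G0375", "G0376"}  # cessation counseling (HCPCS)

def classify_patient(claims: list[dict]) -> dict:
    """Flag smoking status and cessation counseling from a patient's claims."""
    return {
        "smoker": any(c.get("dx") in SMOKER_DX for c in claims),
        "counseled": any(c.get("proc") in CESSATION_PROC for c in claims),
    }

print(classify_patient([{"dx": "V15.82"}, {"proc": "99406"}]))
# -> {'smoker': True, 'counseled': True}
```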
19

Aljumah, Sarah, and Lamia Berriche. "Bi-LSTM-Based Neural Source Code Summarization." Applied Sciences 12, no. 24 (December 8, 2022): 12587. http://dx.doi.org/10.3390/app122412587.

Abstract:
Code summarization is a task that is often employed by software developers for fixing code or reusing code. Software documentation is essential when it comes to software maintenance. The highest cost in software development goes to maintenance because of the difficulty of code modification. To help in reducing the cost and time spent on software development and maintenance, we introduce an automated comment summarization and commenting technique using state-of-the-art techniques in summarization. We use deep neural networks, specifically bidirectional long short-term memory (Bi-LSTM), combined with an attention model to enhance performance. In this study, we propose two different scenarios: one that uses the code text and the structure of the code represented in an abstract syntax tree (AST) and another that uses only code text. We propose two encoder-based models for the first scenario that encodes the code text and the AST independently. Previous works have used different techniques in deep neural networks to generate comments. This study’s proposed methodologies scored higher than previous works based on the gated recurrent unit encoder. We conducted our experiment on a dataset of 2.1 million pairs of Java methods and comments. Additionally, we showed that the code structure is beneficial for methods’ signatures featuring unclear words.
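
As an illustration of the encoder the abstract describes, here is a minimal sketch of a bidirectional LSTM over code tokens with additive attention (PyTorch; the dimensions, single-encoder setup, and single-step output head are illustrative assumptions, not the paper's exact architecture):

```python
# Minimal sketch, not the paper's implementation: a Bi-LSTM encoder over code
# tokens with additive attention pooling, producing logits for one summary token.
import torch
import torch.nn as nn

class BiLSTMSummarizer(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)     # additive attention scores
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, code_tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.encoder(self.embed(code_tokens))  # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over the T tokens
        context = (weights * h).sum(dim=1)            # (B, 2H) pooled code vector
        return self.out(context)                      # logits over the vocabulary

model = BiLSTMSummarizer(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (4, 50)))     # 4 methods, 50 tokens each
```

The paper's two-encoder scenario would add a second such encoder for the AST token sequence and combine the two context vectors before decoding the comment.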
20

Zanoni. "COMENTE+: A TOOL FOR IMPROVING SOURCE CODE DOCUMENTATION USING INFORMATION RETRIEVAL." Journal of Computer Science 10, no. 5 (May 1, 2014): 755–62. http://dx.doi.org/10.3844/jcssp.2014.755.762.

21

Hamidović, Haris. "SOFTWARE ESCROW CONTRACTS." FBIM Transactions 10, no. 1 (April 15, 2022): 27–32. http://dx.doi.org/10.12709/fbim.10.10.01.03.

Abstract:
Because continuous operation and maintenance of custom software are crucial for many companies, they usually want to ensure that this continues even if the software licensor cannot operate in the future, for example, due to bankruptcy. The easiest way for the licensee to overcome this is to get a copy of the updated source code. Software developers are understandably reluctant to give a copy of their proprietary source code to the customer. The most significant asset of software developers is usually their source code, which can contain valuable trade secrets. One of the ways a software developer can deal with a client that requires access to the source code is to agree to store the source code using a three-party escrow agreement. Under the source code escrow agreement, the developer provides a copy of the source code and documentation to a neutral party for safekeeping. The third party will hand over the source code to the buyer only after certain conditions defined by the contract have been met, such as the bankruptcy of the software developer. That keeps the developer's source code confidential while, in theory, giving the user access to it if necessary. In this paper, we present the term software escrow contract.
22

Shinkarev, A. A. "Role of Open Source Software in Modern Development of Enterprise Information Systems." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 21, no. 2 (May 2021): 16–22. http://dx.doi.org/10.14529/ctcr210202.

Abstract:
At the moment there are many open source software products and packages, and their number is increasing every day. So it can be concluded that publishing source code is becoming more and more popular in the world of software development. When publishing the source code of a software solution or software package for use in the developer community, special attention should be given to the license type – this affects which scenarios will be available for use of the published package or software solution. It is also necessary to draw up full and detailed documentation and decide on the ways to promote the published package among developers. The purpose of the study was to justify the feasibility and necessity of publishing software products, packages and libraries for their use by other developers to build their own systems and services. The author meant to describe the major open source licenses, identify their features and differences, and those situations for which this or that type of license is suitable, as well as to demonstrate the need of writing documentation and describe ways to promote and popularize published software products, packages, and libraries in the developer community. Materials and methods. The paper considers official license documents describing conditions of use, reproduction, and distribution. The author analyzes the main ways and means to promote open source software products. Results. The article substantiates the relevance of publishing and using the source code of a software product, package or library. The author describes the main provisions of the most common licenses and gives advice on choosing the type of license when publishing source code for free use. The necessity of writing documentation for the published software product is substantiated. The article also describes some of the ways to promote published packages, such as the choice of name, speaking at conferences, and publishing articles with case studies.
23

Wali, Muhammad, and Lukman Ahmad. "Perancangan Aplikasi Source code library Sebagai Solusi Pembelajaran Pengembangan Perangkat Lunak." Jurnal JTIK (Jurnal Teknologi Informasi dan Komunikasi) 1, no. 1 (July 1, 2017): 39. http://dx.doi.org/10.35870/jtik.v1i1.32.

Abstract:
A source code library allows teachers, programmers, students, and software developers to obtain reference code for a programming language and to evaluate software. Today, most source code libraries for learning software development take the form of documentation on the use of a programming language, accessible through the official website of the language's developer, forums, and various blogs. Because of the complexity of their features, most web-based source code libraries can only be accessed through a website, while for others documentation is provided by each software vendor from the developer company. This research builds a model of a source code library application that can be used as a form of learning documentation for the use of various programming languages, flexibly, both online and offline. The application allows the data/content of the source code library to be updated while an Internet connection is available or when the user is in an area without Internet access. The research was carried out in three stages: pre-development data collection, development and implementation, and post-development data collection. Pre-development data collection provides a preliminary study of the core problem at hand, while the development and implementation stage focuses on modeling the software design in diagrams and writing the program code that implements that design. Post-development data collection serves to refine the resulting application, draw conclusions, and suggest topics for further research. Keywords: application, source code library, software development.
24

Juliarta, I. Made, and Putu Eka Suardana. "AN ANALYSIS OF USING CODE MIXING FOUND IN THE FEMINA MAGAZINE." Journal of English Language Teaching, Literatures, Applied Linguistic (JELTLAL) 1, no. 1 (June 29, 2023): 1–14. http://dx.doi.org/10.69820/jeltlal.v1i1.25.

Abstract:
This research focuses on code mixing found in Femina magazine. The objective of this research was to find out the types and levels of code mixing found in the magazine. Intra-sentential code mixing refers to mixing that occurs within a phrase, clause, or sentence boundary in a speaker's utterance. This research applied a descriptive qualitative method, and the researcher is the main instrument of the research. To collect the data, this research applied the documentation method. The study applied content analysis, focusing on the types of code mixing as defined by Hoffman and the levels of code mixing as argued by Suwito. Intra-sentential code mixing in the form of words and phrases was found in the data source: 9 instances in the form of words and 9 instances in the form of phrases.
25

Ovchinnikova, Viktoria. "Obtaining and Visualization of the Topological Functioning Model from the UML Model." Applied Computer Systems 18, no. 1 (December 1, 2015): 43–51. http://dx.doi.org/10.1515/acss-2015-0018.

Abstract:
A domain model can provide compact information about its corresponding software system for business people. If the software system exists without its domain model and documentation, it is time-consuming to understand its behavior and structure from the code alone. Reverse Engineering (RE) tools can be used for obtaining the behavior and structure of the software system from source code. After that, the domain model can be created. A short overview and an example of obtaining the domain model, the Topological Functioning Model (TFM), from source code are provided in the paper. Positive and negative effects of the process of TFM backward derivation are also discussed.
26

Mahfoud, Asmaa, Abu Bakar Sultan, Abdul Azim Abd, Norhayati Mohd Ali, and Novia Admodisastro. "Code Obfuscation. Where is it Heading?" International Journal of Engineering & Technology 7, no. 4.1 (September 12, 2018): 22. http://dx.doi.org/10.14419/ijet.v7i4.1.19485.

Abstract:
Reverse engineering is the process of revealing the hidden code in a class file, converting it back into readable source text. The main purpose of reverse engineering is to recover the code when the documentation is poor, the source file is missing, or the developer is no longer available to provide the original source file. A hacker uses reverse engineering to attack the class file and uncover the code. The code can then be reused for other purposes without any permission from the original author. The class file contains all the information and business rules that will be revealed once a reverse engineering attack succeeds. Anti-reverse-engineering techniques are developed to stop, delay, and prevent reverse engineering; one of the most common techniques is obfuscation. It has many forms of protection, such as changing the names of classes and variables, hiding classes, and changing the form of the code. In this paper, an appraisal of current obfuscation techniques is conducted. This research proposes a new hybrid technique based on obfuscation; the technique uses mathematics, Unicode, and an unknown language to convert the source file of a Java application into a garbage running file that performs the same task as the normal source file.
27

Jardine, Bartholomew, Gary M. Raymond, and James B. Bassingthwaighte. "Semi-automated Modular Program Constructor for physiological modeling: Building cell and organ models." F1000Research 4 (December 16, 2015): 1461. http://dx.doi.org/10.12688/f1000research.7476.1.

Abstract:
The Modular Program Constructor (MPC) is an open-source Java based utility, built upon JSim's Mathematical Modeling Language (MML) (http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex programs for modeling physiological processes is the large amount of time it takes to code the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task by code-generating algorithms that take the code from several different modules and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
28

Chen, Xiaofan, John Hosking, John Grundy, and Robert Amor. "DCTracVis: a system retrieving and visualizing traceability links between source code and documentation." Automated Software Engineering 25, no. 4 (July 11, 2018): 703–41. http://dx.doi.org/10.1007/s10515-018-0243-8.

29

Jabłońska, Patrycja. "Developing application in JavaScript - comparison of commercial and open source solution." Journal of Computer Sciences Institute 7 (September 30, 2018): 126–31. http://dx.doi.org/10.35784/jcsi.660.

Abstract:
The subject of this article is a comparative analysis of two popular JavaScript frameworks: AngularJS (open source) and Ext JS (a commercial package). Two original applications, each implemented in one of the frameworks, were used for this study. The structure of the applications, the difficulty of implementing GUI components, code metrics, documentation availability, and community support were compared. Results are presented in charts.
30

Malihatuz Zuhriyah Istianti, Mukhlis Mukhlis, and HR Utami. "Campur Kode Dan Alih Kode Dalam Novel Milea Suara Dari Dilan Karya Pidi Baiq." Pragmatik : Jurnal Rumpun Ilmu Bahasa dan Pendidikan 2, no. 1 (December 21, 2023): 131–36. http://dx.doi.org/10.61132/pragmatik.v2i1.178.

Abstract:
The research entitled "Code Mixing and Code Switching in the Novel Milea Suara dari Dilan by Pidi Baiq" aims to describe the types of code mixing and code switching in the novel Milea Suara dari Dilan, with the novel itself as the data source. Documentation techniques were used to gather the research data, and content analysis techniques were used to analyze it. The results contain 77 instances: 54 of code mixing in the form of words, 5 of code mixing in the form of phrases, 3 of code mixing in the form of clauses, 1 of code mixing in the form of basters, 11 of internal code switching, and 3 of external code switching.
31

Xu, Aiqiao. "Software Engineering Code Workshop Based on B-RRT ∗ FND Algorithm for Deep Program Understanding Perspective." Journal of Sensors 2022 (September 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/1564178.

Abstract:
Developers perform many search behaviors in their daily work, searching for reusable code fragments, solutions to specific problems, algorithm designs, software documentation, and software tools from public repositories (including open source communities and forum blogs) or private repositories (internal software repositories, source code platforms, communities, etc.) to make full use of existing software development resources and experience. This paper approaches the software development process from a deep program understanding perspective. First, it defines the software engineering code search task from that perspective. Second, it summarizes two research paradigms of deep software engineering code search and organizes the related research results. It also summarizes and organizes the common evaluation methods for software engineering code search tasks. Finally, the results are combined with an outlook on future research.
32

Arrijal Nagara Yanottama and Siti Rochimah. "Analisis Dampak Perubahan Artefak Kebutuhan Berdasarkan Kedekatan Semantik Pada Pengembangan XP." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 5, no. 4 (August 20, 2021): 721–28. http://dx.doi.org/10.29207/resti.v5i4.3281.

Abstract:
The Extreme Programming (XP) development method is popular because of the flexibility of its development process: it can accommodate changes quickly. But the method has a weakness in terms of documentation. It is expected that discovering which parts of the source code need to be changed can be sped up considerably by analyzing the impact of changes on the requirements document. In this study, a change-impact analysis method is proposed that traces changes in the requirements artifacts to the affected source code. Language-based methods and semantic approaches are used: based on semantic proximity, measured with the Spearman correlation coefficient, the elements of the source code affected by a requirements change are identified. The test dataset in this study consists of source code in the PHP programming language together with the functional requirements of the software. The requirements change list was generated by expert analysis of the two latest versions of the source code, and the changed requirements are described in a user story document. Based on the test results, the average precision was 0.1725 and the average recall was 0.6041.
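
For reference (standard statistics, not specific to this paper), the Spearman rank correlation used to measure semantic proximity is, for n paired rankings with rank differences d_i:

```latex
\rho \;=\; 1 - \frac{6 \sum_{i=1}^{n} d_i^{2}}{n\,(n^{2} - 1)}.
```

A ρ close to 1 indicates that two rankings, here of terms or code elements by relevance, agree closely.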
33

Koznov, Dmitry Vladimirovich, Ekaterina Iurevna Ledeneva, Dmitry Vadimovich Luciv, and Pavel Isaakovich Braslavski. "Evaluation of Similarity of Javadoc Comments." Proceedings of the Institute for System Programming of the RAS 35, no. 4 (2023): 177–86. http://dx.doi.org/10.15514/ispras-2023-35(4)-10.

Abstract:
Code comments are an essential part of software documentation. Many software projects suffer from low-quality comments, often produced by copy-paste. In the case of similar methods, classes, etc., copy-pasted comments with minor modifications are justified. However, in many cases this approach leads to degraded documentation quality and, subsequently, to problematic maintenance and development of the project. In this study, we address the problem of near-duplicate code comment detection, which can potentially improve software documentation. We have conducted a thorough evaluation of traditional string similarity metrics and modern machine learning methods. In our experiment, we use a collection of Javadoc comments from four industrial open-source Java projects. We have found that LCS (Longest Common Subsequence) is the best similarity algorithm, taking into account both quality (precision 94%, recall 74%) and performance.
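
A minimal sketch of LCS-based comment similarity (word-level tokens and the length normalization are assumptions; the paper's exact preprocessing may differ):

```python
# Minimal sketch of LCS-based similarity between two comments, normalized to
# [0, 1]; tokenization and normalization are assumptions, not the paper's setup.
def lcs_length(a: list[str], b: list[str]) -> int:
    # Classic dynamic programming over token sequences.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def lcs_similarity(c1: str, c2: str) -> float:
    a, b = c1.split(), c2.split()
    return 2 * lcs_length(a, b) / (len(a) + len(b)) if a or b else 1.0

print(lcs_similarity("Returns the user id.", "Returns the group id."))  # 0.75
```

Pairs whose similarity exceeds some threshold would be flagged as near-duplicates for review.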
34

Wang, Xiaobo, Guanhui Lai, and Chao Liu. "Recovering Relationships between Documentation and Source Code based on the Characteristics of Software Engineering." Electronic Notes in Theoretical Computer Science 243 (July 2009): 121–37. http://dx.doi.org/10.1016/j.entcs.2009.07.009.

35

Zalukhu, Ade Andi Firman, Rebecca Evelyn Laiya, and Mohammad Yunus Laia. "ANALYSIS OF INDONESIAN-ENGLISH CODE SWITCHING AND CODE MIXING ON FACEBOOK." Research on English Language Education 3, no. 2 (October 25, 2021): 1–10. http://dx.doi.org/10.57094/relation.v3i2.387.

Abstract:
This research aimed to analyze the types of code switching and code mixing and to describe the reasons for using them on Facebook. The research was designed as qualitative research with a content analysis approach. The data were collected by documentation and interview. Based on the data analysis, there were 47 instances of code switching and 61 of code mixing. The types of code switching comprised 14 instances of intra-sentential switching, 3 of tag switching, and 30 of inter-sentential switching. The types of code mixing comprised 43 instances of insertion, 14 of alternation, and 4 of congruent lexicalization. The reasons that affected the use of code switching and code mixing were indicating the level of education, showing prestige, and drawing attention. The research findings show that inter-sentential switching and insertion are the most frequent types used on Facebook. The researcher proposes that students enrich their knowledge of sociolinguistics concerning code switching and code mixing, and offers this study as a source for future researchers and readers.
36

Kuszczyński, Kajetan, and Michał Walkowski. "Comparative Analysis of Open-Source Tools for Conducting Static Code Analysis." Sensors 23, no. 18 (September 19, 2023): 7978. http://dx.doi.org/10.3390/s23187978.

Abstract:
The increasing complexity of web applications and systems, driven by ongoing digitalization, has made software security testing a necessary and critical activity in the software development lifecycle. This article compares the performance of open-source tools for conducting static code analysis for security purposes. Eleven different tools were evaluated in this study, scanning 16 vulnerable web applications. The vulnerable web applications were selected for having the best available documentation of their security vulnerabilities, so as to obtain reliable results. The static code analysis tools used in this paper can, in principle, also be applied to other types of applications, such as embedded systems. Based on the results obtained and the conducted analysis, recommendations for the use of these types of solutions were proposed to achieve the best possible results. The analysis of the tested tools revealed that there is no perfect tool. For example, Semgrep performed better on applications developed in JavaScript but had worse results on applications developed in PHP.
37

Jardine, Bartholomew, Gary M. Raymond, and James B. Bassingthwaighte. "Semi-automated Modular Program Constructor for physiological modeling: Building cell and organ models." F1000Research 4 (April 6, 2016): 1461. http://dx.doi.org/10.12688/f1000research.7476.2.

Abstract:
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) (http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
38

Jardine, Bartholomew, Gary M. Raymond, and James B. Bassingthwaighte. "Semi-automated Modular Program Constructor for physiological modeling: Building cell and organ models." F1000Research 4 (June 16, 2016): 1461. http://dx.doi.org/10.12688/f1000research.7476.3.

Abstract:
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
39

Xu, Ruoyu, Zhenyu Xu, Gaoxiang Li, and Victor S. Sheng. "Bridging the Gap between Source Code and Requirements Using GPT (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23686–87. http://dx.doi.org/10.1609/aaai.v38i21.30526.

Abstract:
Reverse engineering involves analyzing the design, architecture, and functionality of systems, and is crucial for legacy systems. Legacy systems are outdated software systems that are still in use and often lack proper documentation, which makes their maintenance and evolution challenging. To address this, we introduce SC2Req, utilizing the Generative Pre-trained Transformer (GPT) for automated code analysis and requirement generation. This approach aims to convert source code into understandable requirements and bridge the gap between those two. Through experiments on diverse software projects, SC2Req shows the potential to enhance the accuracy and efficiency of the translation process. This approach not only facilitates faster software development and easier maintenance of legacy systems but also lays a strong foundation for future research, promoting better understanding and communication in software development.
40

Bertini, Marco, and Mathias Lux. "nteract." ACM SIGMultimedia Records 12, no. 2 (June 2020): 1. http://dx.doi.org/10.1145/3548562.3548569.

Abstract:
Writing source code for programs with lightweight text editors or fully featured integrated development environments is considered the main method of programming. Notebooks, however, are an extremely practical tool. In contrast to IDEs, projects are set up more easily, and notebooks allow for running programs in a read-eval-print loop (REPL) environment. The Jupyter Notebooks Quick Start Guide [1] describes notebook documents as "… both human-readable documents containing the analysis description and the results (figures, tables, etc..) as well as executable documents which can be run to perform data analysis." Basically, markdown text can be mixed with program source code in a sequence of sections, each dedicated to either programming or description and documentation. Source code sections can be executed, and the output is appended to the section, even formatted in the form of graphs, diagrams, or tables. REPL- and notebook-based environments have proven to be useful in many scenarios, including exploring new libraries and frameworks, prototyping code, and creating interactive lecture material as an educational tool. A prominent example is Jupyter, the highly successful project behind Jupyter Notebooks and the recent JupyterLab, i.e., web-based systems to run and share notebooks that contain code, equations, data visualizations, data exploration, and narrative text.
41

Kacmajor, Magdalena, and John Kelleher. "Automatic Acquisition of Annotated Training Corpora for Test-Code Generation." Information 10, no. 2 (February 17, 2019): 66. http://dx.doi.org/10.3390/info10020066.

Abstract:
Open software repositories make large amounts of source code publicly available. Potentially, this source code could be used as training data to develop new, machine learning-based programming tools. For many applications, however, raw code scraped from online repositories does not constitute an adequate training dataset. Building on the recent and rapid improvements in machine translation (MT), one possibly very interesting application is code generation from natural language descriptions. One of the bottlenecks in developing these MT-inspired systems is the acquisition of parallel text-code corpora required for training code-generative models. This paper addresses the problem of automatically synthetizing parallel text-code corpora in the software testing domain. Our approach is based on the observation that self-documentation through descriptive method names is widely adopted in test automation, in particular for unit testing. Therefore, we propose synthesizing parallel corpora comprised of parsed test function names serving as code descriptions, aligned with the corresponding function bodies. We present the results of applying one of the state-of-the-art MT methods on such a generated dataset. Our experiments show that a neural MT model trained on our dataset can generate syntactically correct and semantically relevant short Java functions from quasi-natural language descriptions of functionality.
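
The corpus-synthesis idea rests on splitting descriptive test method names into quasi-natural-language text; a hypothetical sketch of that parsing step (the naming convention handled here is an assumption):

```python
# Hypothetical sketch: turn a descriptive unit-test name into the kind of
# quasi-natural-language description the paper aligns with function bodies.
import re

def test_name_to_description(name: str) -> str:
    name = re.sub(r"^test_?", "", name, flags=re.IGNORECASE).replace("_", " ")
    # Split camelCase, ALL-CAPS runs, and digits into separate words.
    words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)
    return " ".join(w.lower() for w in words)

print(test_name_to_description("testShouldReturnEmptyListWhenInputIsNull"))
# -> "should return empty list when input is null"
```

Each such description, paired with its test function body, yields one parallel text-code training example.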
42

Gusenko, Mikhail YUr'evich. "Creating a common notation of the x86 processor software interface for automated disassembler construction." Программные системы и вычислительные методы, no. 2 (February 2024): 119–46. http://dx.doi.org/10.7256/2454-0714.2024.2.70951.

Abstract:
The subject of the study is the process of reverse engineering programs in order to obtain their source code in low- or high-level languages for processors with the x86 architecture, whose software interface is defined by Intel and AMD. The object of the study is the technical specifications in the documentation produced by these companies. The pace at which processor documentation is updated is investigated, and the need is justified for technological approaches to automated disassembler construction that take into account the frequent, regularly released updates of the processor software interface. The article presents a method for processing the documentation in order to obtain a generalized, formalized and uniform specification of processor commands for further automated translation into disassembler program code. The article presents two main results: the first is an analysis of the various ways commands are described in the Intel and AMD documentation, and a concise reduction of these descriptions to a uniform representation; the second is a comprehensive syntactic analysis of machine code description notations and of the representation of each command in assembly language. Together with some additional details of the command descriptions, for example, the permissible operating mode of the processor when executing a command, this made it possible to create a generalized description of each command for translation into disassembler code. The results of the study include the identification of a number of errors both in the documentation texts and in the operation of existing industrial disassemblers, which, as the analysis of their implementation showed, were built using manual coding. The identification of such errors in existing reverse engineering tools is an indirect result of the author's research.
43

Gupta, Aditi, and Rinkaj Goyal. "Identifying High-Level Concept Clones in Software Programs Using Method’s Descriptive Documentation." Symmetry 13, no. 3 (March 10, 2021): 447. http://dx.doi.org/10.3390/sym13030447.

Abstract:
Software clones are code fragments with similar or nearly similar functionality or structures. These clones are introduced into a project either accidentally or deliberately during the software development or maintenance process. The presence of clones poses a significant threat to the maintenance of software systems and is at the top of the list of code smell types. Clones can be simple (fine-grained) or high-level (coarse-grained), depending on the chosen granularity of code for clone detection. Simple clones are generally viewed at the line/statement level, whereas high-level clones have granularity of a block, method, class, or file. High-level clones are said to be composed of multiple simple clones. This study aims to detect high-level conceptual code clones (with granularity at the level of Java methods) in Java-based projects, in a way that also extends to projects developed in other languages. Conceptual code clones are those implementing a similar higher-level abstraction, such as an Abstract Data Type (ADT) list. Based on the assumption that "similar documentation implies similar methods", the proposed mechanism uses the documentation associated with methods to identify method-level concept clones. As complete documentation does not contribute to the method's semantics, we extracted only the description part of the method's documentation, which led to two benefits: increased efficiency and reduced text corpus size. Further, we used Latent Semantic Indexing (LSI) with different combinations of weight and similarity measures to identify similar descriptions in the text corpus. To show the efficacy of the proposed approach, we validated it using three open source Java systems of sufficient size. The findings suggest that the proposed mechanism can detect methods implementing similar high-level concepts with improved recall values.
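
The description-extraction step might look like the following sketch for Javadoc input (hypothetical; the paper's actual preprocessing pipeline is not shown here):

```python
# Hypothetical sketch: keep only the description part of a Javadoc comment,
# i.e. the text before the first @tag; not the paper's actual pipeline.
import re

def description_of(javadoc: str) -> str:
    body = re.sub(r"^\s*/\*\*|\*/\s*$", "", javadoc)   # strip /** ... */ delimiters
    desc = []
    for line in body.splitlines():
        line = re.sub(r"^\s*\*\s?", "", line).strip()  # strip leading asterisks
        if line.startswith("@"):        # @param, @return, ... end the description
            break
        if line:
            desc.append(line)
    return " ".join(desc)

print(description_of("/** Adds an item to the list.\n * @param item the item\n */"))
# -> "Adds an item to the list."
```

The resulting descriptions form the text corpus that LSI then compares for similarity.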
APA, Harvard, Vancouver, ISO, and other styles
44

Domander, Richard, Alessandro A. Felder, and Michael Doube. "BoneJ2 - refactoring established research software." Wellcome Open Research 6 (February 22, 2021): 37. http://dx.doi.org/10.12688/wellcomeopenres.16619.1.

Full text
Abstract:
Research software is often developed with expedience as a core development objective because experimental results, but not the software, are specified and resourced as a project output. While such code can help find answers to specific research questions, it may lack longevity and flexibility to make it reusable. We reimplemented BoneJ, our software for skeletal biology image analysis, to address design limitations that put it at risk of becoming unusable. We improved the quality of BoneJ code by following contemporary best programming practices. These include separation of concerns, dependency management, thorough testing, continuous integration and deployment, source code management, code reviews, issue and task ticketing, and user and developer documentation. The resulting BoneJ2 represents a generational shift in development technology and integrates with the ImageJ2 plugin ecosystem.
APA, Harvard, Vancouver, ISO, and other styles
45

Domander, Richard, Alessandro A. Felder, and Michael Doube. "BoneJ2 - refactoring established research software." Wellcome Open Research 6 (April 28, 2021): 37. http://dx.doi.org/10.12688/wellcomeopenres.16619.2.

Full text
Abstract:
Research software is often developed with expedience as a core development objective because experimental results, but not the software, are specified and resourced as a project output. While such code can help find answers to specific research questions, it may lack longevity and flexibility to make it reusable. We reimplemented BoneJ, our software for skeletal biology image analysis, to address design limitations that put it at risk of becoming unusable. We improved the quality of BoneJ code by following contemporary best programming practices. These include separation of concerns, dependency management, thorough testing, continuous integration and deployment, source code management, code reviews, issue and task ticketing, and user and developer documentation. The resulting BoneJ2 represents a generational shift in development technology and integrates with the ImageJ2 plugin ecosystem.
APA, Harvard, Vancouver, ISO, and other styles
46

Van Huffel, Kirsten, Michiel Stock, and Bernard De Baets. "BioCCP.jl: collecting coupons in combinatorial biotechnology." Bioinformatics 38, no. 4 (November 11, 2021): 1144–45. http://dx.doi.org/10.1093/bioinformatics/btab775.

Full text
Abstract:
Summary: In combinatorial biotechnology, it is crucial for screening experiments to sufficiently cover the design space. In the BioCCP.jl package (Julia), we provide functions for minimum sample size determination based on the mathematical framework known as the Coupon Collector Problem. Availability and implementation: BioCCP.jl, including source code, documentation and Pluto notebooks, is available at https://github.com/kirstvh/BioCCP.jl.
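The underlying mathematics is easy to illustrate. For n equally likely designs, the classical Coupon Collector result gives an expected number of random draws of E[T] = n · H_n, where H_n is the n-th harmonic number. The sketch below computes this expectation in Python; it shows the classical special case only and is not a rendering of BioCCP.jl's actual (Julia) API, which also handles unequal abundances.

```python
# Classical Coupon Collector expectation: E[T] = n * H_n for n equally
# likely designs. A minimal sketch, not BioCCP.jl's interface.
from fractions import Fraction

def expected_samples(n: int) -> float:
    """Expected number of random draws needed to observe all n designs."""
    harmonic = sum(Fraction(1, k) for k in range(1, n + 1))
    return float(n * harmonic)

print(expected_samples(100))  # ~518.7 draws to cover 100 designs once
```

The practical reading: even a modest library of 100 designs needs roughly five times as many screened samples as designs to be fully covered in expectation, which is why principled sample size determination matters.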
APA, Harvard, Vancouver, ISO, and other styles
47

Jeba, Tahmim, Tarek Mahmud, Pritom S. Akash, and Nadia Nahar. "God Class Refactoring Recommendation and Extraction Using Context based Grouping." International Journal of Information Technology and Computer Science 12, no. 5 (October 8, 2020): 14–37. http://dx.doi.org/10.5815/ijitcs.2020.05.02.

Full text
Abstract:
Code smells are indicators of flaws in the design and development phases that decrease the maintainability and reusability of a system. One of the most hazardous code smells, the God Class, yields a system with an uneven distribution of responsibilities among its classes. To address this threat, an Extract Class refactoring technique is proposed that incorporates both the cohesion and the contextual aspects of a class. In this work, greater emphasis is placed on code documentation to extract classes with higher contextual similarity. First, the source code is analyzed to generate one set of clusters of extracted methods. Second, another set of clusters is generated by analyzing the code documentation. These two sets are then merged into a final cluster set used to extract the God Class. Finally, an automatic refactoring approach is followed to build the newly identified classes. Using two different metrics, a comparative analysis shows that cohesion among the classes increases when context is added to the refactoring process. Moreover, a manual inspection confirms that the methods of the refactored classes are contextually organized. This recommendation for God Class extraction can significantly reduce the burden on developers of refactoring on their own and help them maintain software systems.
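The combination of structural cohesion and documentation context can be sketched as a clustering over a blended similarity matrix. The snippet below, using SciPy, is an illustrative reconstruction of that idea; the 50/50 weighting, the toy matrices and the clustering algorithm are our choices, not the article's exact procedure.

```python
# A minimal sketch: merge structural cohesion with documentation-based
# context, then cluster methods; weights and data are illustrative only.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Pairwise similarity of four methods by shared fields/calls (cohesion)...
structural = np.array([[1.0, 0.9, 0.1, 0.2],
                       [0.9, 1.0, 0.2, 0.1],
                       [0.1, 0.2, 1.0, 0.8],
                       [0.2, 0.1, 0.8, 1.0]])
# ...and by similarity of their documentation (context).
contextual = np.array([[1.0, 0.8, 0.3, 0.1],
                       [0.8, 1.0, 0.2, 0.2],
                       [0.3, 0.2, 1.0, 0.9],
                       [0.1, 0.2, 0.9, 1.0]])

combined = 0.5 * structural + 0.5 * contextual
distance = 1.0 - combined

# Condensed upper-triangle distances for hierarchical clustering.
iu = np.triu_indices(4, k=1)
labels = fcluster(linkage(distance[iu], method="average"),
                  t=0.5, criterion="distance")
print(labels)  # e.g. [1 1 2 2]: two candidate classes to extract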
APA, Harvard, Vancouver, ISO, and other styles
48

Engel, Deena, and Glenn Wharton. "Reading between the lines: Source code documentation as a conservation strategy for software-based art." Studies in Conservation 59, no. 6 (January 8, 2014): 404–15. http://dx.doi.org/10.1179/2047058413y.0000000115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Dietrich, Jan Philipp, Benjamin Leon Bodirsky, Florian Humpenöder, Isabelle Weindl, Miodrag Stevanović, Kristine Karstens, Ulrich Kreidenweis, et al. "MAgPIE 4 – a modular open-source framework for modeling global land systems." Geoscientific Model Development 12, no. 4 (April 3, 2019): 1299–317. http://dx.doi.org/10.5194/gmd-12-1299-2019.

Full text
Abstract:
The open-source modeling framework MAgPIE (Model of Agricultural Production and its Impact on the Environment) combines economic and biophysical approaches to simulate spatially explicit global scenarios of land use within the 21st century and the respective interactions with the environment. Among various other projects, it was used to simulate marker scenarios of the Shared Socioeconomic Pathways (SSPs) and contributed substantially to multiple IPCC assessments. However, with growing scope and detail, the non-linear model has become increasingly complex, computationally intensive and non-transparent, requiring structured approaches to improve the development and evaluation of the model. Here, we provide an overview of version 4 of MAgPIE and how it addresses these issues of increasing complexity with new technical features: a modular structure with exchangeable module implementations, flexible spatial resolution, in-code documentation, automated code checking, model/output evaluation and open accessibility. Application examples provide insights into model evaluation, modular flexibility and region-specific analysis approaches. While this article focuses on the general framework as such, the publication is accompanied by detailed model documentation describing contents and equations, and by model evaluation documents giving insights into model performance for a broad range of variables. With the open-source release of the MAgPIE 4 framework, we hope to contribute to more transparent, reproducible and collaborative research in the field. Owing to its modularity and spatial flexibility, it should provide a basis for a broad range of land-related research with an economic or biophysical, global or regional focus.
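The key architectural feature, exchangeable module implementations behind a fixed interface, is language-independent and can be sketched compactly. The snippet below is a minimal Python illustration of that pattern only; the module names and numbers are hypothetical and do not reflect MAgPIE's actual (GAMS-based) code.

```python
# A minimal sketch of exchangeable module implementations behind a
# fixed interface; all names and figures here are hypothetical.
from abc import ABC, abstractmethod

class LandUseModule(ABC):
    @abstractmethod
    def simulate(self, demand: float) -> float:
        """Return land area required to satisfy the given demand."""

class SimpleYieldModule(LandUseModule):
    def simulate(self, demand: float) -> float:
        return demand / 3.0  # constant yield of 3 units per hectare

class IntensificationModule(LandUseModule):
    def simulate(self, demand: float) -> float:
        return demand / (3.0 + 0.1 * demand)  # yield rises with demand

def run(module: LandUseModule, demand: float) -> float:
    # The framework depends only on the interface, so implementations
    # can be swapped without touching the calling code.
    return module.simulate(demand)

print(run(SimpleYieldModule(), 30.0))      # 10.0
print(run(IntensificationModule(), 30.0))  # 5.0
```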
APA, Harvard, Vancouver, ISO, and other styles
50

Theunissen, Theo, Stijn Hoppenbrouwers, and Sietse Overbeek. "Approaches for Documentation in Continuous Software Development." Complex Systems Informatics and Modeling Quarterly, no. 32 (October 28, 2022): 1–27. http://dx.doi.org/10.7250/csimq.2022-32.01.

Full text
Abstract:
It is common practice for practitioners in industry as well as for ICT/CS students to keep writing – and reading – about software products to a bare minimum. However, refraining from documentation may result in severe issues concerning the vaporization of knowledge regarding decisions made during the design, build, and maintenance phases. In this article, we distinguish between knowledge required upfront to start a project or iteration, knowledge required to complete a project or iteration, and knowledge required to operate and maintain software products. By 'knowledge', we refer to actionable information. We propose three approaches to keep up with modern development methods and prevent the risk of knowledge vaporization in software projects: 'Just Enough Upfront' documentation, 'Executable Knowledge', and 'Automated Text Analytics', which help record, substantiate, manage and retrieve design decisions in the aforementioned phases. The main characteristic of 'Just Enough Upfront' documentation is that the knowledge required upfront comprises shaping thoughts/ideas, a codified interface description between (sub)systems, and a plan. For building the software and making maximum use of progressive insights, updating the specifications is sufficient. Knowledge required by others to use, operate and maintain the product includes a detailed design and accountability of results. 'Executable Knowledge' refers to any executable artifact other than the source code itself. Primary artifacts include Test-Driven Development methods and infrastructure-as-code, including continuous integration scripts. The third approach, 'Automated Text Analytics', uses Text Mining and Deep Learning to retrieve design decisions.
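'Executable Knowledge' is the most concrete of the three approaches to demonstrate: documentation that the toolchain itself verifies cannot silently drift out of date. The sketch below uses Python's standard doctest module as one such mechanism; the function is a hypothetical example, not taken from the article.

```python
# A minimal sketch of 'Executable Knowledge' via Python's doctest:
# the documented behaviour is executed, so it stays true to the code.
def net_price(gross: float, vat_rate: float = 0.21) -> float:
    """Return the price without VAT.

    The example below is run by the test runner on every build:

    >>> round(net_price(121.0), 2)
    100.0
    """
    return gross / (1.0 + vat_rate)

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

Infrastructure-as-code and continuous integration scripts, the other artifacts the authors name, serve the same purpose at the system level: the description of the environment is the thing that builds it.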
APA, Harvard, Vancouver, ISO, and other styles