Journal articles on the topic 'Source code management systems'


Consult the top 50 journal articles for your research on the topic 'Source code management systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Yesin, Vitalii, Mikolaj Karpinski, Maryna Yesina, Vladyslav Vilihura, and Kornel Warwas. "Hiding the Source Code of Stored Database Programs." Information 11, no. 12 (December 9, 2020): 576. http://dx.doi.org/10.3390/info11120576.

Full text
Abstract:
The objective of the article is to present an approach to hiding the code of programs stored in a database. The essence of this approach is the combined use of two methods: random permutation of the code symbols of a specific stored program, which are located in several rows of some attribute of a database system table, and substitution, in which each character obtained after the permutation may be replaced by another character randomly selected from the Unicode standard. A legitimate user with the appropriate privileges retains access to the source code of the stored program, owing to the ability to quickly perform the inverse of the masking transformation and overwrite the program code in the database. All other users, and attackers without knowledge of certain information, can only read the codes of stored programs masked with format preservation. The proposed solution is more efficient than the existing methods of hiding stored-program code provided by the developers of some modern database management systems (DBMS), since an attacker will need much greater computational and time resources to disclose the source code of the stored programs.
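
The paper itself gives no code, but the permutation-plus-substitution idea is easy to illustrate. The Python sketch below is a toy stand-in, not the authors' actual scheme (which also spreads the code across several table rows): it masks a stored-program string with a key-derived permutation and character substitution, both of which a privileged holder of the key can invert.

```python
import random

def _streams(key: int, n: int):
    """Derive a reproducible permutation and per-character offsets from the key."""
    rng = random.Random(key)
    perm = list(range(n))
    rng.shuffle(perm)
    offsets = [rng.randrange(1, 0x2000) for _ in range(n)]
    return perm, offsets

def mask(code: str, key: int) -> str:
    """Permute the program's characters, then substitute each with another
    code point (a toy stand-in for 'randomly selected from Unicode')."""
    perm, offsets = _streams(key, len(code))
    shuffled = [code[i] for i in perm]
    # Mod 0xD800 keeps results below the surrogate range, so every value is valid.
    return "".join(chr((ord(c) + o) % 0xD800) for c, o in zip(shuffled, offsets))

def unmask(masked: str, key: int) -> str:
    """Invert the substitution, then invert the permutation."""
    perm, offsets = _streams(key, len(masked))
    shuffled = [chr((ord(c) - o) % 0xD800) for c, o in zip(masked, offsets)]
    code = [""] * len(masked)
    for dst, src in enumerate(perm):
        code[src] = shuffled[dst]  # shuffled[dst] came from position perm[dst]
    return "".join(code)

procedure = "CREATE PROCEDURE pay_raise AS UPDATE staff SET salary = salary * 1.1;"
assert unmask(mask(procedure, key=2024), key=2024) == procedure
```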
APA, Harvard, Vancouver, ISO, and other styles
2

Gothard, A. "Back to the source [automatic code generation]." Information Professional 2, no. 4 (August 1, 2005): 38–42. http://dx.doi.org/10.1049/inp:20050407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Althar, Raghavendra Rao, Abdulrahman Alahmadi, Debabrata Samanta, Mohammad Zubair Khan, and Ahmed H. Alahmadi. "Mathematical foundations based statistical modeling of software source code for software system evolution." Mathematical Biosciences and Engineering 19, no. 4 (2022): 3701–19. http://dx.doi.org/10.3934/mbe.2022170.

Full text
Abstract:
Source code is the heart of a software system; it holds a wealth of knowledge that can be tapped for intelligent software systems and leveraged for software reuse. This work explores making use of the patterns hidden in various software development processes and artifacts. The module described is part of a smart requirements management system that is intended to be built; that system will have multiple modules to make the software requirements management phase more secure against vulnerabilities. Some of the critical challenges facing the software development community are discussed. The background of machine learning approaches and their application in software development practices is explored. Work done on modeling source code and on approaches for understanding vulnerabilities in software systems is reviewed. Program representation is explored to understand some of the principles that help in understanding the subject well. Source code modeling possibilities are then explored in greater depth, and machine learning best practices are examined in line with software source code modeling.
APA, Harvard, Vancouver, ISO, and other styles
4

Lawal Abba, Hadiza, Abubakar Roko, Aminu B. Muhammad, Abdulgafar Usman, and Abba Almu. "Enhanced Semantic Similarity Detection of Program Code Using Siamese Neural Network." International Journal of Advanced Networking and Applications 14, no. 02 (2022): 5353–60. http://dx.doi.org/10.35444/ijana.2022.14205.

Full text
Abstract:
Although there are various source code plagiarism detection approaches, most of them are concerned only with lexical similarity attacks, under the assumption that plagiarism is conducted only by students who are not proficient in programming. However, plagiarism is often conducted not only because of student incapability but also because of bad time management; thus, semantic similarity attacks should also be detected and evaluated. This research proposes a source code semantic similarity detection approach that can detect most source code similarities by representing the source code as an Abstract Syntax Tree (AST) and evaluating similarity using a Siamese neural network. Since the AST is a language-dependent feature, the SOCO dataset, which consists of C++ program codes, is selected. Based on the evaluation, our approach is more effective than most existing systems for detecting source code plagiarism. The proposed strategy was implemented, and an experimental study based on the AI-SOCO dataset revealed that the proposed similarity measure achieved better performance in terms of precision, recall, and F1 score by 15%, 10%, and 22%, respectively, on the 100,000-record dataset. In the future, the system could be improved by detecting inter-language source code similarity.
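
To see why an AST representation resists identifier-renaming attacks, consider this minimal Python sketch. The paper works on C++ ASTs and learns the comparison with a trained Siamese network; a generic sequence ratio stands in for the network here, so treat it only as an analogy.

```python
import ast
from difflib import SequenceMatcher

def ast_signature(source: str) -> list:
    """Represent a program by the node-type sequence of an AST walk.
    Identifier names and literals are discarded, so renaming variables
    (a common semantics-preserving plagiarism edit) leaves it unchanged."""
    return [type(node).__name__ for node in ast.walk(ast.parse(source))]

def similarity(a: str, b: str) -> float:
    """Sequence similarity in [0, 1]; the paper's Siamese network plays
    this comparison role over C++ ASTs instead."""
    return SequenceMatcher(None, ast_signature(a), ast_signature(b)).ratio()

original    = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
plagiarised = "def acc(vals):\n    r = 0\n    for v in vals:\n        r += v\n    return r"
print(similarity(original, plagiarised))  # close to 1.0 despite renamed identifiers
```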
APA, Harvard, Vancouver, ISO, and other styles
5

Ganesh, Sundarakrishnan, Francis Palma, and Tobias Olsson. "Are Source Code Metrics “Good Enough” in Predicting Security Vulnerabilities?" Data 7, no. 9 (September 7, 2022): 127. http://dx.doi.org/10.3390/data7090127.

Full text
Abstract:
Modern systems produce and handle a large volume of sensitive enterprise data. Therefore, security vulnerabilities in software systems must be identified and resolved early to prevent security breaches and failures. Predicting security vulnerabilities is an alternative to identifying them as developers write code. In this study, we studied the ability of several machine learning algorithms to predict security vulnerabilities. We created two datasets containing security vulnerability information from two open-source systems: (1) Apache Tomcat (versions 4.x) and (2) Apache Struts (five 2.5.x minor versions). We also computed source code metrics for these versions of both systems. We examined four classifiers, namely Naive Bayes, Decision Tree, XGBoost, and Logistic Regression, to show their ability to predict security vulnerabilities. Moreover, an ensemble learner was introduced using a stacking classifier to see whether prediction performance could be improved. We performed cross-version and cross-project predictions to assess the effectiveness of the best-performing model. Our results showed that the XGBoost classifier performed best compared to the other learners, with an average accuracy of 97% on both datasets. The stacking classifier achieved an average accuracy of 92% in Struts and 71% in Tomcat. Our best-performing model, XGBoost, could predict with an average accuracy of 87% in Tomcat and 99% in Struts in a cross-version setup.
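
A hedged sketch of the classifier line-up the abstract names, on synthetic data: the metric names in the comments are illustrative assumptions, not the paper's exact feature set, and the labels are invented.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

# Synthetic stand-in for per-file source code metrics and vulnerability labels;
# the study's real features come from Tomcat and Struts releases.
rng = np.random.default_rng(0)
X = rng.random((400, 4))                   # e.g. LOC, complexity, fan-out, churn
y = (X[:, 1] + X[:, 3] > 1.1).astype(int)  # 1 = file had a known vulnerability

base_learners = [
    ("nb", GaussianNB()),
    ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("xgb", XGBClassifier(n_estimators=100, eval_metric="logloss")),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
print("stacked accuracy: %.2f" % cross_val_score(stack, X, y, cv=5).mean())
```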
APA, Harvard, Vancouver, ISO, and other styles
6

Kanellopoulos, Y., C. Makris, and C. Tjortjis. "An improved methodology on information distillation by mining program source code." Data & Knowledge Engineering 61, no. 2 (May 2007): 359–83. http://dx.doi.org/10.1016/j.datak.2006.06.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Naga Malleswari, D., and K. Subrahmanyam. "SIS Framework for Risk Assessment Through Quantitative Analysis." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 367. http://dx.doi.org/10.14419/ijet.v7i2.32.15715.

Full text
Abstract:
Nowadays, risk management plays a very important role in information systems, and various risk assessment techniques exist. When a system analyses source code, disputes may arise automatically, depending on various factors. These disputes may give rise to risks in the information system that can lead to the loss of data. To avoid this, in this paper we implement a framework for source code analysis that is used for a brief assessment of risk and includes guidance on risk minimization. In this framework, source-based risk assessment is performed through source code analysis. To assess the risk arising from the source code, we first calculate the complexity of the source code in the information system. Finally, the complexity produced by this framework indicates the risk intensity of the source code.
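
The framework hinges on deriving a risk indicator from source code complexity. A minimal sketch of that step, assuming cyclomatic complexity as the measure and hypothetical risk thresholds (the paper does not specify either):

```python
import ast

# Branching constructs counted by McCabe's cyclomatic complexity; an
# illustrative choice, since the paper does not name its complexity measure.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try,
                  ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """1 plus the number of decision points found in the parsed source."""
    return 1 + sum(isinstance(n, DECISION_NODES) for n in ast.walk(ast.parse(source)))

def risk_intensity(source: str) -> str:
    """Map complexity to a coarse risk band, mirroring the framework's idea
    that source code complexity indicates risk intensity."""
    c = cyclomatic_complexity(source)
    return "low" if c <= 10 else "medium" if c <= 20 else "high"

print(risk_intensity("for i in range(3):\n    if i % 2:\n        print(i)"))  # low
```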
APA, Harvard, Vancouver, ISO, and other styles
8

Alawneh, Ali, Iyad M. Alazzam, and Khadijah Shatnawi. "Locating Source Code Bugs in Software Information Systems Using Information Retrieval Techniques." Big Data and Cognitive Computing 6, no. 4 (December 13, 2022): 156. http://dx.doi.org/10.3390/bdcc6040156.

Full text
Abstract:
Bug localization is the process through which the buggy source code files related to a certain bug report are located. It is an overwhelming and time-consuming process, and automating it is key to helping developers and increasing their productivity. Expanding bug reports with more semantics and increasing software understanding using information retrieval and natural language techniques is a way to locate the buggy source code file, in which the bug report acts as a query and the source code as the search space. This research investigates the effect of segmenting open source files into executable code and comments, as they have conflicting natures; examines the effect of synonyms on the accuracy of bug localization; and assesses the effect of part-of-speech techniques on reducing the manual inspection needed to find appropriate synonyms. This research aims to show that such methods improve the accuracy of bug localization tasks. The approach was evaluated on three Java open source systems, namely Eclipse 3.1, AspectJ 1.0, and SWT 3.1; we implemented a dedicated Java tool to apply our methodology and conducted several experiments on each system. The experimental results reveal a considerable improvement in recall and precision levels, and the developed methods display an accuracy improvement of 4–10% compared with state-of-the-art approaches.
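
The core retrieval step, with the bug report as query and source files as documents, can be sketched in a few lines. The file contents are toy stand-ins, and the paper's code/comment segmentation, synonym expansion, and part-of-speech filtering are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: each "file" is reduced to the terms it contains.
source_files = {
    "Parser.java":   "parse token stream syntax tree error recovery",
    "Renderer.java": "draw widget layout repaint screen buffer",
    "Cache.java":    "cache eviction stale entry memory limit",
}
bug_report = "stale results returned because cache entries are never evicted"

# Vectorize files and report together so they share one vocabulary.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(source_files.values()) + [bug_report])

# Rank files by cosine similarity between report vector and file vectors.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
ranking = sorted(zip(source_files, scores), key=lambda pair: -pair[1])
print(ranking[0][0])  # Cache.java ranks first
```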
APA, Harvard, Vancouver, ISO, and other styles
9

Jadalla, Ameera, and Ashraf Elnagar. "PDE4Java: Plagiarism Detection Engine for Java source code: a clustering approach." International Journal of Business Intelligence and Data Mining 3, no. 2 (2008): 121. http://dx.doi.org/10.1504/ijbidm.2008.020514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wong, W. Eric, and Jenny Li. "Redesigning Legacy Systems into the Object-Oriented Paradigm." International Journal of Software Engineering and Knowledge Engineering 14, no. 03 (June 2004): 255–76. http://dx.doi.org/10.1142/s0218194004001634.

Full text
Abstract:
Object-oriented languages support many modern programming concepts such as information hiding, inheritance, polymorphism, and dynamic binding. As a result, software systems implemented in OO languages are in general more reusable and reliable than others. Many legacy software systems, created before OO programming became popular, need to be redesigned and updated to OO programs. The process of abstracting OO designs from the procedural source code has often been done with limited assistance from program structural diagrams. Most reengineering focuses on the functionality of the original program, and the OO redesign often results in a completely new design based on the designers' understanding of the original program. Such an approach is not sufficient because it may take a significant amount of time and effort for designers to comprehend the original program. This paper presents a computer-aided semi-automatic method that abstracts OO designs from the original procedural source code. More specifically, it is a method for OO redesign based on program structural diagrams, visualization, and execution slices. We conducted a case study by applying this method to an inventory management software system. Results indicate that our method can effectively and efficiently abstract an appropriate OO design out of the original C code. In addition, some of the code from the original system can be automatically identified and reused in the new OO system.
APA, Harvard, Vancouver, ISO, and other styles
11

Eid, Salma, Soha Makady, and Manal Ismail. "Detecting software performance problems using source code analysis techniques." Egyptian Informatics Journal 21, no. 4 (December 2020): 219–29. http://dx.doi.org/10.1016/j.eij.2020.02.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Eichinger, Frank, David Kramer, Klemens Böhm, and Wolfgang Karl. "From source code to runtime behaviour: Software metrics help to select the computer architecture." Knowledge-Based Systems 23, no. 4 (May 2010): 343–49. http://dx.doi.org/10.1016/j.knosys.2009.11.014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Badurowicz, Marcin. "Detection of Source Code in Internet Texts Using Automatically Generated Machine Learning Models." Applied Computer Science 18, no. 1 (March 30, 2022): 89–98. http://dx.doi.org/10.35784/acs-2022-7.

Full text
Abstract:
In this paper, the authors present the outcome of web scraping software allowing for the automated classification of source code. The software system was prepared for a discussion forum for software developers, to find fragments of source code that were published without being marked as code snippets. The analyzer uses a machine learning binary classification model to differentiate between programming language source code and highly technical text about software. The analyzer model was prepared using an AutoML subsystem, without human intervention or fine-tuning, and its accuracy on the described problem exceeds 95%. The analyzer based on the automatically generated model has been deployed, and after the first year of continuous operation its false positive rate is less than 3%. A similar process may be introduced into document management in the software development process, where automatic tagging and searching for code or pseudo-code may be useful for archiving purposes.
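
A minimal hand-rolled counterpart to the AutoML-generated model, assuming character n-grams as features (the paper's actual model and features were selected automatically, so this is only a sketch of the task, with a toy stand-in for the forum corpus):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = ["for (int i = 0; i < n; i++) sum += a[i];",
            "def area(r): return 3.14159 * r * r",
            "The garbage collector compacts the heap in two phases.",
            "Latency rose sharply once the cache was disabled."]
labels = [1, 1, 0, 0]  # 1 = source code, 0 = highly technical prose

model = make_pipeline(
    # Character n-grams capture the punctuation-rich texture of code.
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression())
model.fit(snippets, labels)
print(model.predict(["while (head != NULL) head = head->next;"]))  # [1]
```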
APA, Harvard, Vancouver, ISO, and other styles
14

Chen, Boyuan, and Zhen Ming (Jack) Jiang. "A Survey of Software Log Instrumentation." ACM Computing Surveys 54, no. 4 (May 2021): 1–34. http://dx.doi.org/10.1145/3448976.

Full text
Abstract:
Log messages have been used widely in many software systems for a variety of purposes during software development and field operation. There are two phases in software logging: log instrumentation and log management. Log instrumentation refers to the practice in which developers insert logging code into source code to record runtime information. Log management refers to the practice in which operators collect the generated log messages and apply data analysis techniques to provide valuable insights into runtime behavior. There are many open source and commercial log management tools available. However, their effectiveness highly depends on the quality of the instrumented logging code, as log messages generated by high-quality logging code can greatly ease various log analysis tasks (e.g., monitoring, failure diagnosis, and auditing). Hence, in this article, we conducted a systematic survey of state-of-the-art research on log instrumentation by studying 69 papers published between 1997 and 2019. In particular, we focused on the challenges and proposed solutions in the three steps of log instrumentation: (1) logging approach; (2) logging utility integration; and (3) logging code composition. This survey will be useful to DevOps practitioners and researchers who are interested in software logging.
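
The survey's instrumentation/management split is easy to see in code: everything below is the instrumentation half, i.e., developer-inserted logging statements that management tooling later collects. A generic Python illustration, not taken from any surveyed system:

```python
import logging

# Logging utility integration: pick a library and configure it once.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(name)s %(levelname)s %(message)s")
logger = logging.getLogger("orders")

def place_order(order_id: str, amount: float) -> None:
    # Logging code composition: what to record, at which level, with context.
    logger.info("placing order id=%s amount=%.2f", order_id, amount)
    try:
        if amount <= 0:
            raise ValueError("non-positive amount")
    except ValueError:
        # logger.exception attaches the stack trace, easing failure diagnosis.
        logger.exception("order %s rejected", order_id)
        raise

place_order("A-17", 42.50)
```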
APA, Harvard, Vancouver, ISO, and other styles
15

Bukhari, Khulood, and Rana Malek. "Open-source automated insulin delivery systems for the management of type 1 diabetes during pregnancy." BMJ Case Reports 14, no. 9 (September 2021): e243522. http://dx.doi.org/10.1136/bcr-2021-243522.

Full text
Abstract:
A 40-year-old woman used an open-source automated insulin delivery system to manage her type 1 diabetes (T1D) prior to conception. The code for building the iPhone application called ‘Loop’ that carried the software for the hybrid closed-loop controller was available online. Her glycated hemoglobin before conception was 6.4%. Between 6 and 12 weeks gestation, she spent 66% time-in-range (TIR), 28% time-above-range (TAR) and 6% time-below-range (TBR). Between 18 and 24 weeks gestation, she spent 68% TIR, 27% TAR and 5% TBR. During her third trimester, she spent 72% TIR, 21% TAR and 7% TBR. She delivered a healthy infant with no neonatal complications. Clinicians should be aware of this technology as it gains traction in the T1D community and seeks Food and Drug Administration approval.
APA, Harvard, Vancouver, ISO, and other styles
16

Lecher, Alanna L. "Open-Source Code for Radium-Derived Ocean-Groundwater Modeling: Project Open RaDOM." Hydrology 9, no. 6 (June 14, 2022): 106. http://dx.doi.org/10.3390/hydrology9060106.

Full text
Abstract:
Radium has been commonly used as a tracer of submarine groundwater discharge to the ocean and embayments, as radium activities are commonly input into box models to calculate a groundwater flux. Similarly, isotopes of radium (224Ra, 223Ra, 226Ra, 228Ra) have been used to calculate water mass ages, which serve as a proxy for residence times. Less commonly, radium and other tracers have been used in mixing models to determine the relative contribution of groundwater to a marine system. In the literature, all of these methods have almost exclusively been solved using analytical methods prone to large errors and other issues. Project Open RaDOM, introduced here, is a collection of open-source R scripts that numerically solve for groundwater flux, residence time, and the relative contribution of groundwater to coastal systems. Solving these models numerically allows systems to be over-constrained, which increases accuracy and forces real solutions. The scripts are written to be user-friendly, even for scientists unfamiliar with R. This communication includes a description of the scripts in Project Open RaDOM, a discussion of examples from the literature, and case studies of the scripts using previously published data.
APA, Harvard, Vancouver, ISO, and other styles
17

Weig, Eric C., and Michael Slone. "SPOKEdb: open-source information management system for oral history." Digital Library Perspectives 34, no. 2 (May 14, 2018): 101–16. http://dx.doi.org/10.1108/dlp-03-2017-0012.

Full text
Abstract:
Purpose This paper aims to examine how an open-source information management system was developed to manage a collection of more than 10,000 oral history interviews at the University of Kentucky Libraries’ Louie B. Nunn Center for Oral History. Design/methodology/approach Digital library architects at the University of Kentucky Libraries built an open-source information management system for oral history using the open-source tools Omeka and Blacklight. Additional open-source code was developed to facilitate interaction between these tools. Findings Information management systems that address needs of libraries and archives can be built by combining existing open-source tools in complementary ways. Originality/value This work at the University of Kentucky Libraries serves as a proof of concept for other institutions to examine as a potential model to follow or adapt for their own local needs. The SPOKEdb framework can be replicated elsewhere, as the major and minor components are open-source. SPOKEdb at its conceptual level is a unique information management system based on its tailored approach to serving the needs of oral history management at various user levels including both administrative and public.
APA, Harvard, Vancouver, ISO, and other styles
18

Yermolenko, Andrei, and Yuriy Golchevskiy. "Developing Web Content Management Systems – from the Past to the Future." SHS Web of Conferences 110 (2021): 05007. http://dx.doi.org/10.1051/shsconf/202111005007.

Full text
Abstract:
The article presents an overview of the main stages in the development and formation of content management systems as a software component, in conjunction with their material base. The need of society for the creation and development of such systems is identified. The main elements of these systems, their functionality, and their user groups are described, and the features of modern content management systems are presented. The change in approach to the development of such systems, from monolithic to distributed, is outlined. The further development of Web content management systems is seen in open source systems using Web 3.0, refined tools for analyzing content quality, support for B2B and B2C business, interface development, active use of headless CMS, support for a new level of SEO, and the implementation of artificial intelligence algorithms. Keywords: content management systems.
APA, Harvard, Vancouver, ISO, and other styles
19

Das Neves, D., D. Fenn, and P. Sulcas. "Selection of enterprise resource planning (ERP) systems." South African Journal of Business Management 35, no. 1 (March 31, 2004): 45–52. http://dx.doi.org/10.4102/sajbm.v35i1.651.

Full text
Abstract:
In order to determine the process organisations go through in selecting an Enterprise Resource Planning (ERP) system, a field study was undertaken on eleven cases. Based on the findings, a best practice selection process is proposed. A benchmark selection criteria checklist was drawn up as part of this investigation, and each of the criteria listed should be considered prior to final selection. Other issues discussed in this article include the original motivation and justification for ERP purchase, change management, customisation of source code, and the roles of the selection committee, consultants and vendors.
APA, Harvard, Vancouver, ISO, and other styles
20

Lee, Yangsun. "A Study on Intermediate Code Generation for Security Weakness Analysis of Smart Contract Chaincode." Webology 19, no. 1 (January 20, 2022): 4745–60. http://dx.doi.org/10.14704/web/v19i1/web19318.

Full text
Abstract:
Hyperledger Fabric is a modular blockchain framework used by private companies to develop blockchain-based products, solutions, and applications using plug-and-play components. The smart contracts operating in this framework are created by implementing a chaincode. When implementing a chaincode, there may be security weaknesses inside the code, which are the root cause of security vulnerabilities. However, once the contract is completed and the block is created, the chaincode cannot be arbitrarily modified, so security weaknesses must be analyzed before execution. This paper presents a study on chaincode intermediate code generation for the security weakness analysis of chaincode operating in the Hyperledger Fabric blockchain framework. Analyzing security weaknesses at the source code level is not easy, because the code logic is not explicit and the complexity is high. In contrast, security weakness analysis at the intermediate code level is easier, because the code logic of the source code is clearly represented and the complexity is lower than that of the source code.
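
The claim that intermediate code makes logic explicit can be seen with a toy lowering pass. This hedged Python sketch flattens a nested expression into three-address instructions; real chaincode is written in Go, and the paper's intermediate representation is far richer than this.

```python
import ast
import itertools

counter = itertools.count()

def lower(node) -> str:
    """Emit three-address code for an expression AST; return the value's name.
    Every nested operation becomes one explicit, analyzable instruction."""
    if isinstance(node, ast.BinOp):
        left, right = lower(node.left), lower(node.right)
        tmp = f"t{next(counter)}"
        op = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}[type(node.op)]
        print(f"{tmp} = {left} {op} {right}")
        return tmp
    if isinstance(node, ast.Name):
        return node.id
    return repr(node.value)  # constants

expr = ast.parse("balance - amount * (1 + fee)", mode="eval").body
lower(expr)
# t0 = 1 + fee
# t1 = amount * t0
# t2 = balance - t1
```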
APA, Harvard, Vancouver, ISO, and other styles
21

Abdulsalam, Abeer, and Nazre Abdul Rashid. "Comparative Analysis of Machine Learning Techniques for Splitting Identifiers within Source Code." Webology 17, no. 2 (December 21, 2020): 776–87. http://dx.doi.org/10.14704/web/v17i2/web17066.

Full text
Abstract:
Feature location is the process of extracting identifiers within source code. In software engineering, it is a usual procedure to upgrade software by adding new features. To facilitate this process for developers, feature location has been proposed to extract the significant components within the source code, namely the identifiers. One of the challenging issues facing the feature location task is handling multi-word identifiers, where developers may use different types of separation between the words. Different research studies have used various techniques; however, recent studies have shown interest in Machine Learning Techniques (MLTs) due to their substantial performance. Given the diversity of MLTs, there is a vital need to identify the most accurate one in terms of splitting identifiers correctly. Therefore, this study provides a comparative analysis of different MLTs, including Naïve Bayes, Support Vector Machine, and J48. The dataset used in the experiment is benchmark data that contains a vast amount of source code along with numerous identifiers. Results showed that the best accuracy was achieved by the J48 classifier, with an F-measure of 66%.
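
For context, the easy part of this task is rule-based; a sketch handling explicit separators and camel-case boundaries follows. The paper's classifiers (Naïve Bayes, SVM, J48) target the ambiguous same-case concatenations, such as "parsefile", that rules like these cannot split.

```python
import re

def split_identifier(name: str) -> list:
    """Split an identifier on explicit separators and camel-case boundaries."""
    parts = re.split(r"[_\-$]+", name)  # snake_case, kebab-case, $-separated
    words = []
    for part in parts:
        # Acronym/word boundaries (XMLParser), lower->Upper boundaries, digits.
        words += re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", part)
    return [w.lower() for w in words if w]

print(split_identifier("getXMLHttpRequest_count"))
# ['get', 'xml', 'http', 'request', 'count']
```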
APA, Harvard, Vancouver, ISO, and other styles
22

Saini, Munish, Sandeep Mehmi, and Kuljit Kaur Chahal. "Understanding Open Source Software Evolution Using Fuzzy Data Mining Algorithm for Time Series Data." Advances in Fuzzy Systems 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/1479692.

Full text
Abstract:
Source code management systems (such as Concurrent Versions System (CVS), Subversion, and Git) record changes to the code repositories of open source software projects. This study explores a fuzzy data mining algorithm for time series data to generate association rules for evaluating the existing trend and regularity in the evolution of an open source software project. The choice of a fuzzy data mining algorithm for time series data is motivated by the stochastic nature of the open source software development process. The commit activity of an open source project indicates the activeness of its development community, and an active development community is a strong contributor to the success of an open source project. Therefore, commit activity analysis, along with trend and regularity analysis of the commit activity of an open source software project, acts as an important indicator to project managers and analysts regarding the evolutionary prospects of the project in the future.
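
Such an analysis starts from a commit-activity time series. Below is a sketch of extracting a monthly series from any local clone with `git log`; the repository path is hypothetical, and the fuzzy rule mining itself is beyond a short example.

```python
import subprocess
from collections import Counter

def monthly_commits(repo: str) -> Counter:
    """Count commits per YYYY-MM using author dates from `git log`."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--date=format:%Y-%m", "--pretty=%ad"],
        capture_output=True, text=True, check=True).stdout
    return Counter(out.split())  # one YYYY-MM token per commit

series = monthly_commits("/path/to/open-source-clone")  # hypothetical path
for month, commits in sorted(series.items()):
    print(month, commits)
```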
APA, Harvard, Vancouver, ISO, and other styles
23

Kanwal, Jaweria, Onaiza Maqbool, Hamid Abdul Basit, Muddassar Azam Sindhu, and Katsuro Inoue. "Historical perspective of code clone refactorings in evolving software." PLOS ONE 17, no. 12 (December 1, 2022): e0277216. http://dx.doi.org/10.1371/journal.pone.0277216.

Full text
Abstract:
Cloning in software is generally perceived as a threat to its maintenance and that is why it needs to be managed properly. Understanding clones from a historical perspective is essential for effective clone management. Analysis of code refactorings performed on clones in previous releases will help developers in taking decisions about clone refactoring in future releases. In this paper we perform a longitudinal study on the evolution of clone refactorings in various versions of five software systems. To perform a systematic study on clone refactoring evolution, we define clone evolution patterns for studying refactorings in a formal notation. Our results show that only a small proportion of code clones are refactored between the versions and most of the refactorings are inconsistent within clone classes. Moreover, clone refactorings may cause clone removal. Analysis of the source code of refactored clones reveals similar reasons of inconsistent refactorings and clone removal for five Java systems. This analysis will help in devising appropriate strategies for managing clone refactorings in software and hence provide foundation for devising better clone management tools.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Wangda, Junyoung Kim, Kenneth A. Ross, Eric Sedlar, and Lukas Stadler. "Adaptive code generation for data-intensive analytics." Proceedings of the VLDB Endowment 14, no. 6 (February 2021): 929–42. http://dx.doi.org/10.14778/3447689.3447697.

Full text
Abstract:
Modern database management systems employ sophisticated query optimization techniques that enable the generation of efficient plans for queries over very large data sets. A variety of other applications also process large data sets, but cannot leverage database-style query optimization for their code. We therefore identify an opportunity to enhance an open-source programming language compiler with database-style query optimization. Our system dynamically generates execution plans at query time, and runs those plans on chunks of data at a time. Based on feedback from earlier chunks, alternative plans might be used for later chunks. The compiler extension could be used for a variety of data-intensive applications, allowing all of them to benefit from this class of performance optimizations.
APA, Harvard, Vancouver, ISO, and other styles
25

Boretto, Marco, Wojciech Brylinski, Giovanna Lehmann Miotto, Enrico Gamberini, Roland Sipos, and Viktor Vilhelm Sonesten. "DAQling: an open-source data acquisition framework." EPJ Web of Conferences 245 (2020): 01026. http://dx.doi.org/10.1051/epjconf/202024501026.

Full text
Abstract:
The Data AcQuisition (DAQ) software for most applications in high energy physics is composed of common building blocks, such as a networking layer, plug-in loading, configuration, and process management. These are often re-invented and developed from scratch for each project or experiment around specific needs. In some cases, time and available resources can be limited and make development requirements difficult or impossible to meet. Moved by these premises, our team developed an open-source lightweight C++ software framework called DAQling, to be used as the core for the DAQ systems of small and medium-sized experiments and collaborations. The framework offers a complete DAQ ecosystem, including a communication layer based on the widespread ZeroMQ messaging library, configuration management based on the JSON format, control of distributed applications, extendable operational monitoring with web-based visualisation, and a set of generic utilities. The framework comes with minimal dependencies, and provides automated host and build environment setup based on the Ansible automation tool. Finally, the end-user code is wrapped in so-called “Modules”, that can be loaded at configuration time, and implement specific roles. Several collaborations already chose DAQling as the core for their DAQ systems, such as FASER, RD51, and NA61/SHINE. We will present the framework and project-specific implementations and experiences.
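
To give a flavour of the module model the abstract describes (this is not DAQling's actual C++ API), here is a hedged Python/pyzmq sketch of a producer module that reads its endpoint from a JSON configuration and publishes readout fragments over ZeroMQ:

```python
import json
import time
import zmq

# JSON-based configuration, as in the framework's configuration management.
config = json.loads('{"endpoint": "tcp://127.0.0.1:5556", "source_id": 7}')

context = zmq.Context()
publisher = context.socket(zmq.PUB)   # communication layer: ZeroMQ publish side
publisher.bind(config["endpoint"])

for seq in range(3):
    fragment = {"source_id": config["source_id"], "seq": seq, "payload": [seq] * 4}
    publisher.send_json(fragment)     # a subscriber module would assemble events
    time.sleep(0.1)

publisher.close()
context.term()
```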
APA, Harvard, Vancouver, ISO, and other styles
26

Mahomodally, A. Fatwimah Humairaa, and Geerish Suddul. "An Enhanced Freelancer Management System with Machine Learning-based Hiring." Shanlax International Journal of Arts, Science and Humanities 9, no. 3 (January 1, 2022): 34–41. http://dx.doi.org/10.34293/sijash.v9i3.4405.

Full text
Abstract:
Existing freelancer management systems are not adequately efficient, inconveniencing to a certain degree the freelance workforce, which comprises around 1.1 billion freelancers globally. This paper aims to resolve the impediments of similar existing systems. Regarding methodology, qualitative analysis was adopted: interviews, participant observation, interface analysis, workshop documents, research papers, books, and articles were used to gather data about similar applications. A web application was implemented to fulfil the objectives, using WAMP as a local development server, Visual Studio Code as a source code editor, and HTML, PHP, Python, SQL, JavaScript, and CSS as programming languages, along with Ajax for request-handling functionality, already available APIs, and jQuery and Python libraries. The contributions brought forth are: providing a shortlist of the best-qualified freelancers for each project via a machine learning technique; generating an automated invoice and payment as soon as an entrepreneur supplies a monetary figure when approving the deliverable of a project; and enabling freelancers to sign contracts electronically to comply with business terms on one centralised repository, unlike existing systems, which do not support these three features together on the same platform. The multivariate regression model used for intelligent hiring performs satisfactorily, yielding an R2 of around 0.9993.
APA, Harvard, Vancouver, ISO, and other styles
27

Rábová, Ivana, and Michal Hodinka. "Business rules in knowledge management and in information systems – methodology of mining business rules and knowledge." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 59, no. 4 (2011): 267–72. http://dx.doi.org/10.11118/actaun201159040267.

Full text
Abstract:
Every organisation works according to principles that define the business logic controlling all of its business processes. However, many basic rules of business logic are hidden in companies' guidelines and directives and in the informal techniques of experts, process owners, and specialists. The aim of all managers should be to replace this incoherent set of information with a set of clear and unambiguous terms that describe the way the company is controlled from inside. This notion is the foundation of the control and administration of business knowledge. Contemporary practices in the development of information systems demand openness and adaptability to constant change. As the complexity of an information system grows, so do the amount of work and its level of difficulty. The rules of business logic are transferred to application logic and implemented in source code; therefore, a change to a business rule requires a change to the code, which means recompilation or replacement of the code. Adopting the new approach described here would allow information systems to be adjusted to a dynamic environment more effectively through changes to the rules. The article deals with mining methods and the subsequent structuring of business rules for easier implementation into an information system. A business rules base is at the same time a business knowledge base, which can serve purposes other than the development and use of information systems.
APA, Harvard, Vancouver, ISO, and other styles
28

Domingos, Dulce, and Francisco Martins. "Using BPMN to model Internet of Things behavior within business process." International Journal of Information Systems and Project Management 5, no. 4 (January 31, 2022): 39–51. http://dx.doi.org/10.12821/ijispm050403.

Full text
Abstract:
Whereas business processes have traditionally used the Internet of Things (IoT) as a distributed source of information, the increasing computational capabilities of IoT devices provide them with the means to also execute parts of the business logic, reducing the amount of exchanged data and central processing. Current approaches based on Business Process Model and Notation (BPMN) already support modelers in defining both business processes and IoT device behavior at the same level of abstraction. However, they are not restricted to standard BPMN elements, and they generate IoT-device-specific low-level code. The work we present in this paper exclusively uses standard BPMN to define the central as well as the IoT behavior of business processes. In addition, the BPMN that defines the IoT behavior is translated to platform-neutral programming code. The deployment and execution environments use Web services to support the communication between the process execution engine and IoT devices.
APA, Harvard, Vancouver, ISO, and other styles
29

Lee, Sangwoo, and Jungwon Cho. "Malware Authorship Attribution Model using Runtime Modules based on Automated Analysis." JOIV : International Journal on Informatics Visualization 6, no. 1-2 (May 31, 2022): 214. http://dx.doi.org/10.30630/joiv.6.1-2.941.

Full text
Abstract:
Malware authorship attribution is a research field that identifies the author of malware by extracting and analyzing features that relate authors to the source code or binary code of the malware. Currently, it is used as a detection technique in malware forensics and for identifying patterns of continuous attacks such as APT attacks. There are two analysis methods for identifying the author: source code-based analysis, which extracts features from the source code, and binary-based analysis, which extracts features from the binary. However, given the modularization and increasing volume of malicious code, both time and manpower are insufficient to determine the characteristics of malware with these methods. Therefore, we propose a model for malware authorship attribution that rapidly extracts and analyzes features using automated analysis. Automated analysis uses a tool, can work from a malware file and its hash values without experts, and is the fastest of the malware analysis methods. We experimented by applying various machine learning classification algorithms to six malware author groups, and the runtime modules and Kernel32.dll APIs extracted from the automated analysis were selected as features for author identification. The results show higher accuracy than previous studies. Using automated analysis, features of malware are extracted faster than with source code-based and binary-based analysis methods.
APA, Harvard, Vancouver, ISO, and other styles
30

Yudachev, S. S., P. A. Monakhov, and N. A. Gordienko. "Industry 4.0 Digital Technologies for data collection and control." Glavnyj mekhanik (Chief Mechanic), no. 6 (May 25, 2021): 43–58. http://dx.doi.org/10.33920/pro-2-2106-04.

Full text
Abstract:
This article describes an attempt to create an open-source equivalent of LabVIEW data collection and control software. The proposed solution uses GNU Radio, OpenCV, Scilab, Xcos, and Comedi in Linux. GNU Radio provides a user-friendly graphical interface. GNU Radio is also a software-defined radio that conducts experiments in practice using software rather than the usual hardware implementation. Blocks for data propagation and code deletion, with and without code tracking, are created using the zero correlation zone code (ZCZ, a combination of ternary codes equal to 1, 0, and –1, which is specified in the program). Unlike MATLAB Simulink, GNU Radio is open source, i.e. free, and its concepts can be easily grasped by people without much programming experience using pre-written blocks. Calculations can be performed using OpenCV or Scilab and Xcos. Xcos is an application that is part of the Scilab mathematical modeling system; it provides developers with the ability to design systems in the fields of mechanics, hydraulics, and electronics, as well as queuing systems. Xcos is a graphical interactive environment based on block modeling, designed for the dynamic and situational modeling of systems, processes, and devices, as well as for testing and analyzing these systems. In this case, the modeled object (a system, device, or process) is represented graphically by its functional parametric block diagram, which includes blocks of system elements and the connections between them. The device drivers listed in Comedi are used for real-time data access. We also present an improved PyGTK-based graphical user interface for GNU Radio. An English version of the article is available at URL: https://panor.ru/articles/industry-40-digital-technology-for-data-collection-and-management/65216.html
APA, Harvard, Vancouver, ISO, and other styles
31

Kovács, Attila, and Kristóf Szabados. "Internal quality evolution of a large test system – an industrial study." Acta Universitatis Sapientiae, Informatica 8, no. 2 (December 1, 2016): 216–40. http://dx.doi.org/10.1515/ausi-2016-0010.

Full text
Abstract:
This paper presents our empirical observations related to the evolution of a large automated test system. The system observed is used in industry as a test tool for complex telecommunication systems and itself consists of more than one million lines of source code. This study evaluates how different changes during development have changed the number of observed code smells in the test system. We monitored the development of the test scripts and measured code quality characteristics over a five-year period. The observations show that the introduction of continuous integration, the existence of tool support for quality improvements in itself, changing the development methodology (from waterfall to agile), and changing the technical and line management structure and personnel caused no measurable change in the trends of the observed code smells. Internal quality improvements were achieved mainly through individuals' intrinsic motivation. Our measurements show similarities with earlier results on software system evolution presented by Lehman.
APA, Harvard, Vancouver, ISO, and other styles
32

Ng, Celeste See-Pui. "Exploring Relationships in Tailoring Option, Task Category, and Effort in ERP Software Maintenance." International Journal of Enterprise Information Systems 9, no. 2 (April 2013): 83–105. http://dx.doi.org/10.4018/jeis.2013040105.

Full text
Abstract:
Maintenance of in-house applications is often done by modifying source code; packaged applications, however, also enable certain maintenance to be done through changes to configuration parameters rather than through changes to the source code. This research presents preliminary evidence from the field to fill this gap in the empirical understanding of ERP maintenance. Using data from 503 ERP maintenance requests, the author's results suggest that the relative maintenance effort distributions for all maintenance categories and tailoring options are not normal distributions but heavy-tailed, positively skewed distributions. Comparing ERP systems to in-house developed software, the author found a larger proportion of corrective maintenance requests than adaptive requests. Enhancement and corrective task categories that use the programming tailoring option show an increasing trend over time in the moving median of relative maintenance effort per request. In contrast, enhancement and adaptive task categories that use the configuration tailoring option show a decreasing trend over time in the moving median of relative maintenance effort per request. The number of maintenance requests for all tailoring options and task categories was increasingly high in the four months after the introduction of a new module. Comparatively, over the same period, there was a relatively higher number of maintenance requests in the enhancement task category than in other task categories, indicating that unique or orthogonal requirements were not available in the ERP system.
APA, Harvard, Vancouver, ISO, and other styles
33

Greiner, Sandra, and Thomas Buchmann. "Round-trip Engineering UML Class Models and Java Models." International Journal of Information System Modeling and Design 7, no. 3 (July 2016): 72–92. http://dx.doi.org/10.4018/ijismd.2016070104.

Full text
Abstract:
Model transformations constitute the key technology for model-driven software development, a software engineering discipline which became more and more important during the last decade. While tool support for unidirectional batch transformations is rather mature, bidirectional and incremental transformations are only weakly investigated. Nevertheless, several usage scenarios demand for incremental and bidirectional transformations, like round-trip engineering between UML class models and Java source code. This paper presents a bidirectional transformation between UML class models and a Java model which is obtained from Java source code. The transformation is written in QVT Relations, a declarative model transformation language provided by the OMG. While the case study demonstrates that it is possible to specify bidirectional transformations between heterogeneous metamodels in a single relational specification, it also reveals some inherent limitations of the language and the corresponding tool support.
APA, Harvard, Vancouver, ISO, and other styles
34

Jacobs, Gabriel, and Cliona O’Neill. "On the reliability (or otherwise) of SIC codes." European Business Review 15, no. 3 (June 1, 2003): 164–69. http://dx.doi.org/10.1108/09555340310474668.

Full text
Abstract:
Researchers often make use of SIC (Standard Industrial Classification) codes when gathering and analysing data about the activities of companies. The use of these codes is, however, fraught with potential difficulties, errors easily creeping in and consequently distorting results. This paper outlines the major SIC code systems in use on both sides of the Atlantic – which, despite efforts to standardise them (and thus to make them worthy of their name), still present levels of inconsistency and unreliability both internally and comparatively – and discusses various problems associated with using the codes as data sources.
APA, Harvard, Vancouver, ISO, and other styles
35

Althar, Raghavendra Rao, Debabrata Samanta, Manjit Kaur, Abeer Ali Alnuaim, Nouf Aljaffan, and Mohammad Aman Ullah. "Software Systems Security Vulnerabilities Management by Exploring the Capabilities of Language Models Using NLP." Computational Intelligence and Neuroscience 2021 (December 27, 2021): 1–19. http://dx.doi.org/10.1155/2021/8522839.

Full text
Abstract:
Security of the software system is a prime focus area for software development teams. This paper explores some data science methods to build a knowledge management system that can assist the software development team to ensure a secure software system is being developed. Various approaches in this context are explored using data of insurance domain-based software development. These approaches will facilitate an easy understanding of the practical challenges associated with actual-world implementation. This paper also discusses the capabilities of language modeling and its role in the knowledge system. The source code is modeled to build a deep software security analysis model. The proposed model can help software engineers build secure software by assessing the software security during software development time. Extensive experiments show that the proposed models can efficiently explore the software language modeling capabilities to classify software systems’ security vulnerabilities.
APA, Harvard, Vancouver, ISO, and other styles
36

Maruping, Likoebe M., Sherae L. Daniel, and Marcelo Cataldo. "Developer Centrality and the Impact of Value Congruence and Incongruence on Commitment and Code Contribution Activity in Open Source Software Communities." MIS Quarterly 43, no. 3 (January 1, 2019): 951–76. http://dx.doi.org/10.25300/misq/2019/13928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ye, Sheng, Jing Wang, Sikandar Ali, Hasan Ali Khattak, Chenhong Guo, and Zhongguo Yang. "Recovering Latent Data Flow from Business Process Model Automatically." Wireless Communications and Mobile Computing 2022 (June 20, 2022): 1–11. http://dx.doi.org/10.1155/2022/7579515.

Full text
Abstract:
Process-driven applications evolve rapidly through the interaction between executable BPMN (Business Process Model and Notation) models, business tasks, and external services. Given that these components operate on shared process data, it is imperative to recover the latent data-access relations among the tasks, known as the data flow. The data flow benefits typical applications including data flow anomaly checking and data privacy protection. However, in most cases, the complete data flow in a business process is not explicitly defined but hidden in model elements such as form declarations, variable declarations, and program code. Existing methods for recovering data flow based on analysis of the process model or the source code have drawbacks; for example, for security reasons, users may not want to provide source code but only encapsulated methods, making the data flow difficult to analyze. We propose a method that generates running logs and combines them with static code analysis to produce a complete picture of the data flow. This method retains the simple, easy-to-use character of static code analysis while compensating for its inability to handle complex business processes, for which statically analyzed data flow is inaccurate. Moreover, a holistic framework is proposed to generate the data flow graph. A prototype system built on the Camunda and Flowable BPM (business process management) engines demonstrates the applicability of the solution, and the effectiveness of our method is validated on the prototype system.
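
The static half of such an approach can be sketched for task scripts written in Python (the paper targets BPMN engines, so this is only an analogy): recover each task's read and write sets, from which inter-task data flow follows whenever one task writes a process variable another reads.

```python
import ast

def reads_and_writes(task_code: str):
    """Statically recover which variables a task's code reads and writes;
    names written by one task and read by another form the data flow."""
    reads, writes = set(), set()
    for node in ast.walk(ast.parse(task_code)):
        if isinstance(node, ast.Name):
            (writes if isinstance(node.ctx, ast.Store) else reads).add(node.id)
    return reads, writes

task = "total = price * quantity\ndiscount = total * rate"
r, w = reads_and_writes(task)
print("reads:", sorted(r), "| writes:", sorted(w))
# reads: ['price', 'quantity', 'rate', 'total'] | writes: ['discount', 'total']
```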
APA, Harvard, Vancouver, ISO, and other styles
38

Pendharkar, Parag C. "Scale economies and production function estimation for object-oriented software component and source code documentation size." European Journal of Operational Research 172, no. 3 (August 2006): 1040–50. http://dx.doi.org/10.1016/j.ejor.2004.10.023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Singh Negi, Dheeraj. "Open source software using NewGenLib: a case study of International Management Institute Bhubaneswar." Library Hi Tech News 31, no. 9 (October 28, 2014): 9–10. http://dx.doi.org/10.1108/lhtn-07-2014-0056.

Full text
Abstract:
Purpose – The purpose of this study was to develop and update the database of books at the International Management Institute Bhubaneswar. The study presents the status of automation at the institute. A properly computerized library will provide its users with quick services. Library automation refers to the mechanization of library housekeeping operations, predominantly by computerization. An automated system was implemented using the NewGenLib (NGL) integrated open source library software to carry out the functions of the circulation section more effectively, to provide various search options for checking the availability of books in the library, and to generate the list of books due from a particular member along with overdue charges. NGL is an integrated software system with the required modules for small to very large libraries. Being open source, any library that wants to automate its housekeeping operations can make use of this software. Design/methodology/approach – Open source is both a software development model and a software distribution model. In this model, the source code of programs is made freely available with the software itself, so that anyone can see, change, and distribute it provided they abide by the accompanying license. In this sense, open source is similar to peer review, which is used to strengthen the progress of scholarly communication. Open source software differs from closed source or proprietary software, which may only be obtained by some form of payment, either by purchase or by leasing; the primary difference between the two is the freedom to modify the software. An open system is a design philosophy antithetical to proprietary solutions. The idea behind it is that institutions, such as libraries, can build a combination of components and deliver services that include several vendors' offerings. Thus, for instance, a library might use an integrated library system from one of the major vendors in combination with an open source product, developed by another library or by itself, to better meet its internal or users' requirements. Findings – NGL free software is constantly being updated, changed, and customized to meet the library's needs. While all of this sounds like a win-win solution for your library, there are still pitfalls and hurdles to overcome; hopefully, this article provides some introductory information on how to wean your library off traditional computing products and dive into the pool of open source resources available today. Libraries in developing countries are able to support electronic access, digital libraries, and resource sharing because they are able to use open source software (OSS). Even libraries in well-developed countries are becoming more inclined toward OSS to improve their services. Originality/value – To develop and update the database of books and other online/printed resources of the International Management Institute Bhubaneswar; to implement an automated system using the NGL integrated open source library software; to carry out the charging and discharging functions of the circulation section; and to provide various search options for checking the availability of books in the library.
APA, Harvard, Vancouver, ISO, and other styles
40

Kumar, Supriya, Matthew Arnold, Glen James, and Rema Padman. "Developing a common data model approach for DISCOVER CKD: A retrospective, global cohort of real-world patients with chronic kidney disease." PLOS ONE 17, no. 9 (September 29, 2022): e0274131. http://dx.doi.org/10.1371/journal.pone.0274131.

Full text
Abstract:
Objectives To describe a flexible common data model (CDM) approach that can be efficiently tailored to study-specific needs to facilitate pooled patient-level analysis and aggregated/meta-analysis of routinely collected retrospective patient data from disparate data sources; and to detail the application of this CDM approach to the DISCOVER CKD retrospective cohort, a longitudinal database of routinely collected (secondary) patient data of individuals with chronic kidney disease (CKD). Methods The flexible CDM approach incorporated three independent, exchangeable components that preceded data mapping and data model implementation: (1) standardized code lists (unifying medical events from different coding systems); (2) laboratory unit harmonization tables; and (3) base cohort definitions. Events between different coding vocabularies were not mapped code-to-code; for each data source, code lists of labels were curated at the entity/event level. A study team of epidemiologists, clinicians, informaticists, and data scientists were included within the validation of each component. Results Applying the CDM to the DISCOVER CKD retrospective cohort, secondary data from 1,857,593 patients with CKD were harmonized from five data sources, across three countries, into a discrete database for rapid real-world evidence generation. Conclusions This flexible CDM approach facilitates evidence generation from real-world data within the DISCOVER CKD retrospective cohort, providing novel insights into the epidemiology of CKD that may expedite improvements in diagnosis, prognosis, early intervention, and disease management. The adaptable architecture of this CDM approach ensures scalable, fast, and efficient application within other therapy areas to facilitate the combined analysis of different types of secondary data from multiple, heterogeneous sources.
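
The CDM's first component is essentially a curated mapping from an entity/event label to per-vocabulary code sets, so pooled queries match events at the entity level rather than code-to-code. A minimal sketch with illustrative codes (the codes shown are examples, not the study's curated lists):

```python
# One entity/event label, backed by code sets per coding vocabulary.
CODE_LISTS = {
    "ckd_stage_3": {
        "ICD-10": {"N18.3"},        # illustrative ICD-10 code for CKD stage 3
        "SNOMED": {"433144002"},    # illustrative SNOMED CT concept
    },
}

def matches(entity: str, vocabulary: str, code: str) -> bool:
    """Does a raw (vocabulary, code) record belong to the harmonized entity?"""
    return code in CODE_LISTS.get(entity, {}).get(vocabulary, set())

print(matches("ckd_stage_3", "ICD-10", "N18.3"))   # True
print(matches("ckd_stage_3", "SNOMED", "12345"))   # False
```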
APA, Harvard, Vancouver, ISO, and other styles
41

Vijaykumar, Nandita, Ataberk Olgun, Konstantinos Kanellopoulos, F. Nisa Bostanci, Hasan Hassan, Mehrshad Lotfi, Phillip B. Gibbons, and Onur Mutlu. "MetaSys: A Practical Open-source Metadata Management System to Implement and Evaluate Cross-layer Optimizations." ACM Transactions on Architecture and Code Optimization 19, no. 2 (June 30, 2022): 1–29. http://dx.doi.org/10.1145/3505250.

Full text
Abstract:
This article introduces the first open-source FPGA-based infrastructure, MetaSys, with a prototype in a RISC-V system, to enable the rapid implementation and evaluation of a wide range of cross-layer techniques in real hardware. Hardware-software cooperative techniques are powerful approaches to improving the performance, quality of service, and security of general-purpose processors. They are, however, typically challenging to rapidly implement and evaluate in real hardware as they require full-stack changes to the hardware, system software, and instruction-set architecture (ISA). MetaSys implements a rich hardware-software interface and lightweight metadata support that can be used as a common basis to rapidly implement and evaluate new cross-layer techniques. We demonstrate MetaSys’s versatility and ease-of-use by implementing and evaluating three cross-layer techniques for: (i) prefetching in graph analytics; (ii) bounds checking in memory unsafe languages, and (iii) return address protection in stack frames; each technique requiring only ~100 lines of Chisel code over MetaSys. Using MetaSys, we perform the first detailed experimental study to quantify the performance overheads of using a single metadata management system to enable multiple cross-layer optimizations in CPUs. We identify the key sources of bottlenecks and system inefficiency of a general metadata management system. We design MetaSys to minimize these inefficiencies and provide increased versatility compared to previously proposed metadata systems. Using three use cases and a detailed characterization, we demonstrate that a common metadata management system can be used to efficiently support diverse cross-layer techniques in CPUs. MetaSys is completely and freely available at https://github.com/CMU-SAFARI/MetaSys .
APA, Harvard, Vancouver, ISO, and other styles
42

Yadav, Dharmveer Kumar, and Sandip Kumar Dutta. "Test Case Prioritization Using Clustering Approach for Object Oriented Software." International Journal of Information System Modeling and Design 10, no. 3 (July 2019): 92–109. http://dx.doi.org/10.4018/ijismd.2019070106.

Full text
Abstract:
In software maintenance, regression testing is performed to validate modified source code and to ensure that the modifications do not adversely affect previously tested parts of the program. Because resources and time are constrained, regression testing is a time-consuming and expensive activity. During regression testing, existing test cases are reused. To minimize the cost of regression testing, the authors propose a test case prioritization approach based on clustering techniques. In recent years, research on regression testing has made significant progress for object-oriented software. The empirical results show the effectiveness of the K-means clustering algorithm for this purpose, and the experiments indicate that the proposed approach achieves a higher fault-detection rate than other approaches.
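A minimal sketch of clustering-based test case prioritization in this spirit might look as follows; the coverage vectors, cluster count, and round-robin ordering heuristic are illustrative choices, not necessarily the paper's exact features or algorithm.

```python
# Sketch: cluster test cases by statement-coverage similarity with K-means,
# then pick one test per cluster in turn so dissimilar tests run early.
import numpy as np
from sklearn.cluster import KMeans

# Rows: test cases; columns: covered statements (1 = covered). Toy data.
coverage = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
])

k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coverage)

# Round-robin over clusters: maximizes diversity among the earliest tests.
clusters = {c: [i for i, lab in enumerate(labels) if lab == c] for c in range(k)}
order = []
while any(clusters.values()):
    for c in range(k):
        if clusters[c]:
            order.append(clusters[c].pop(0))

print("prioritized test order:", order)
```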
APA, Harvard, Vancouver, ISO, and other styles
43

Schlurmann, Torsten, Widjo Kongko, Nils Goseberg, Danny Hilman Natawidjaja, and Kerry Sieh. "NEAR-FIELD TSUNAMI HAZARD MAP PADANG, WEST SUMATRA: UTILIZING HIGH RESOLUTION GEOSPATIAL DATA AND REASONABLE SOURCE SCENARIOS." Coastal Engineering Proceedings 1, no. 32 (January 19, 2011): 26. http://dx.doi.org/10.9753/icce.v32.management.26.

Full text
Abstract:
Near-field tsunami propagation, both in shallow-water environments and as bore-like wave propagation on land, is studied here to obtain fundamental knowledge of the tsunami hazard potential in the city of Padang, West Sumatra, Republic of Indonesia. As the region exhibits a huge seismic moment deficit that has progressively accumulated since the last recorded major earthquakes in 1797 and 1833, this investigation focuses on the most plausible seismic sources and the nearshore tsunamis they could trigger, in order to develop upgraded disaster mitigation programs in this densely populated urban agglomeration on the western shore of Sumatra Island. Observations from continuous Global Positioning Satellite (cGPS) systems and supplementary coral growth studies confirm a much greater probability that a major earthquake and subsequent tsunami will strike the region in the near future. Newly surveyed and processed sets of geodata have been collected and used to derive the most plausible rupture scenarios and to approximate the extent and magnitude of a future earthquake. Based on this novel understanding, the present analysis applies two hydronumerical codes to simulate the most probable tsunami run-up and the subsequent inundation of the city of Padang at very fine resolution. Run-up heights and flow depths are determined from these most plausible rupture scenarios. The outcome and performance of both numerical tools regarding the impact of surge flow and bore-like wave fronts encountering the coast and inundating the city are thoroughly evaluated. Results are discussed not only for further scientific purposes, i.e. benchmark tests, but also to disseminate the main findings to the responsible authorities in Padang, with the objective of distributing the most probable dataset of plausible tsunami inundations as well as providing valuable insights and knowledge for effective countermeasures, i.e. evacuation routes and shelter building. Subsequent evacuation simulations based on rational assumptions and simplifications reveal an alarming result: about 260,000 people live in the highly exposed potential tsunami inundation area of Padang, of whom more than 90,000 would need more than 30 minutes to evacuate to safe areas.
APA, Harvard, Vancouver, ISO, and other styles
44

Ali-Hassan, Hossam. "Social Capital in Management Information Systems Literature." Journal of Information Technology Research 6, no. 4 (October 2013): 1–17. http://dx.doi.org/10.4018/jitr.2013100101.

Full text
Abstract:
Social capital represents resources or assets rooted in an individual’s or group’s network of social relations. It is a multidimensional and multilevel concept characterized by diverse definitions and conceptualizations, all of which focus on the structure and/or the content of social relations. A common conceptualization of social capital in information systems research consists of a structural, a relational, and a cognitive dimension. The structural dimension represents the configuration of the social network and the characteristics of its ties. The relational dimension encompasses assets embedded in the social relations, such as trust, obligations, and norms of reciprocity. The cognitive dimension is created by common codes, languages, and narratives, and represents a shared context that facilitates interaction. To individual or collective network members, social capital can be a source of solidarity, information, cooperation, collaboration, and influence. Ultimately, social capital has been and will remain a sound theoretical grounding upon which to study information systems affected by social relationships and their embedded assets.
APA, Harvard, Vancouver, ISO, and other styles
45

Quan, Wei, Wenwen Fu, Jinli Yan, and Zhigang Sun. "OpenTSN: an open-source project for time-sensitive networking system development." CCF Transactions on Networking 3, no. 1 (August 5, 2020): 51–65. http://dx.doi.org/10.1007/s42045-020-00029-8.

Full text
Abstract:
Time-sensitive networking (TSN) is a promising technique in many fields, such as industrial automation and autonomous driving. The standardization of TSN has advanced rapidly under the IEEE 802.1 TSN working group, which has formed a comprehensive standard system with a wide range of choices. However, there is a large gap between TSN standards and application-specific TSN systems: designers need to determine the required TSN standards and their implementation methods based on the application’s transmission performance and reliability requirements. Therefore, an easy-to-use development platform for rapid TSN system prototyping and evaluation plays a vital role in the application of TSN technologies. This article introduces OpenTSN, an open-source project that supports rapid TSN system customization. The project has three features for building an efficient TSN system: an SDN-based TSN network control mechanism, a time-sensitive management protocol, and a time-sensitive switching model. OpenTSN opens all of its hardware and software source code so that designers can quickly and flexibly customize a TSN system according to their own needs, maximizing the reuse of existing code and reducing customization complexity. Using this project, two FPGA-based prototyping examples with star and ring topologies are presented in the experimental section. The experimental results show that the synchronization precision of the entire test network is under 32 ns and that the transmission performance matches the theoretical analysis of the Cyclic Queuing and Forwarding (CQF)-based test network.
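As background on the shaping mechanism used in that evaluation, a toy software model of Cyclic Queuing and Forwarding (CQF) is sketched below; the cycle length and frame names are illustrative, and the real OpenTSN logic is implemented in hardware.

```python
# CQF in miniature: two queues swap roles every cycle. Frames received
# during cycle i are transmitted during cycle i+1, which bounds per-hop
# delay to between one and two cycle times.
from collections import deque

CYCLE_US = 100               # hypothetical cycle length in microseconds
queues = [deque(), deque()]  # the two alternating queues

def receive(frame, cycle: int) -> None:
    """Frames received in cycle i are queued for transmission in cycle i+1."""
    queues[(cycle + 1) % 2].append(frame)

def transmit(cycle: int) -> list:
    """Drain the queue assigned to transmit in the current cycle."""
    q = queues[cycle % 2]
    sent = list(q)
    q.clear()
    return sent

receive("frame-A", cycle=0)
receive("frame-B", cycle=0)
print(transmit(cycle=1))  # ['frame-A', 'frame-B'] -- forwarded one cycle later
```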
APA, Harvard, Vancouver, ISO, and other styles
46

Mashingaidze, Sivave. "Ethical intelligence: Espousing African Ubuntu philosophical business approach with Jewish business ethics systems as panacea for corporate failure in Africa." Corporate Ownership and Control 12, no. 1 (2014): 473–89. http://dx.doi.org/10.22495/cocv12i1c5p3.

Full text
Abstract:
The objective of this article is to espouse the Jewish ethics system, combined with the African Ubuntu philosophy, as a panacea for reducing corporate failure in Africa. This is demonstrated in the article with heavy reliance on descriptive phenomenology and secondary data sources as the methodology. The article found that, although the national codes of corporate governance in Africa exist on paper, none of them emphasizes the need to actively manage the ethical performance of companies. The article recommends the urgent enforcement of the Jewish ethical system and the King III code, together with the implementation of laws, the enforcement of sanctions, and the strengthening of institutions of governance on a continuous basis.
APA, Harvard, Vancouver, ISO, and other styles
47

Li, Tong, Shiheng Wang, David Lillis, and Zhen Yang. "Combining Machine Learning and Logical Reasoning to Improve Requirements Traceability Recovery." Applied Sciences 10, no. 20 (October 16, 2020): 7253. http://dx.doi.org/10.3390/app10207253.

Full text
Abstract:
Maintaining the traceability links of software systems is a crucial task for software management and development. Unfortunately, dealing with traceability links is typically treated as an afterthought due to time pressure. Some studies attempt to use information retrieval-based methods to automate this task, but they concentrate only on calculating the textual similarity between various software artifacts and do not take the properties of such artifacts into account. In this paper, we propose a novel traceability link recovery approach, which comprehensively measures the similarity between use cases and source code by exploring their particular properties. To this end, we leverage and combine machine learning and logical reasoning techniques. On the one hand, our method extracts features by considering the semantics of the use cases and source code, and uses a classification algorithm to train the classifier. On the other hand, we utilize the relationships between artifacts and define a series of rules to recover traceability links. In particular, we not only leverage the source code’s structural information but also take into account the interrelationships between use cases. We have conducted a series of experiments on multiple datasets to evaluate our approach against existing approaches; the results show that our approach is substantially better than other methods.
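A minimal sketch of how textual similarity and a logical rule over code relationships can be combined might look as follows; the threshold, the TF-IDF features, and the single call-graph rule are illustrative simplifications, not the paper's actual classifier or rule set.

```python
# Sketch: score use-case/source-file pairs by textual similarity, then
# apply one logical rule that propagates links along call relations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

use_case = "user logs in with username and password"
sources = {
    "LoginController.java": "authenticate user login password session",
    "ReportPrinter.java": "render monthly report pdf export",
}
calls = {"LoginController.java": ["SessionManager.java"]}  # call-graph edges

vec = TfidfVectorizer()
docs = [use_case] + list(sources.values())
sims = cosine_similarity(vec.fit_transform(docs))[0, 1:]  # use case vs. files

links = {name for name, s in zip(sources, sims) if s > 0.1}

# Rule: if a use case links to a class, also link the classes it calls.
for name in list(links):
    links.update(calls.get(name, []))

print("recovered links:", links)
```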
APA, Harvard, Vancouver, ISO, and other styles
48

Mann, Suman, Nitish Pathak, Neelam Sharma, Raju Kumar, Rabins Porwal, Sheelesh Kr Sharma, and Saw Mon Yee Aung. "Study of Energy-Efficient Optimization Techniques for High-Level Homogeneous Resource Management." Wireless Communications and Mobile Computing 2022 (July 27, 2022): 1–12. http://dx.doi.org/10.1155/2022/1953510.

Full text
Abstract:
Resource management efficiency can be a beneficial step toward optimizing power consumption in software-hardware integrated systems. Languages such as C, C++, and Fortran have been extremely popular for dealing with optimization, memory management, and other resource management. We investigate novel algorithmic architectures capable of optimizing resource requirements and increasing energy efficiency. The experimental results obtained with C++ can be extended to other programming languages as well. We emphasize the inherent drawbacks of the standard memory management operators: they are intended to be extremely generic in their application, just as the concept of dynamic memory is, and as a result they cannot take advantage of the optimization techniques and opportunities that specific use cases present. Each source code file exhibits its own distinct memory usage pattern, which can be exploited to speed up memory management routines. Such techniques are frequently time-consuming and costly to implement; consequently, they are not the primary concern of application developers, as they require manual development and integration. We intend to address this gap by providing a suite of memory management algorithms that enable dramatic performance improvements at the source code level while allowing for seamless integration across multiple use cases. The techniques have been evaluated on several performance parameters, and the results are presented. In this paper, we compare a variety of memory allocation techniques in terms of their space and energy efficiency. Three strategies, SSDAM, SSDAM-E, and DLLOM, have been evaluated and compared against the baseline performance of the new and delete operators. SSDAM-E, SSDAM with the new/delete operators, and DLLOM improve memory consumption by factors of 8.01, 7.0, and 4.0, respectively. In the worst case, SSDAM-E gave an average running time of 5.650 s, faster than the DLLOM average of 7.496 s. In terms of energy efficiency, SSDAM-Original and SSDAM-E-Original attain 100%, compared with the baseline efficiency of 12.48% characterized by the new/delete operators.
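The paper's allocators are C++ source-level techniques; as a language-neutral illustration of the underlying free-list/pool idea (reusing pre-allocated fixed-size blocks instead of invoking the general-purpose allocator on every request), here is a simple object pool in Python. All names are ours; this is not the paper's SSDAM or DLLOM implementation.

```python
class Pool:
    """Free-list object pool: serve requests from pre-allocated blocks
    and recycle released blocks instead of freeing them."""

    def __init__(self, factory, size: int):
        self._factory = factory
        self._free = [factory() for _ in range(size)]  # pre-allocated blocks

    def allocate(self):
        # Reuse a free block if one exists; otherwise fall back to the heap.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj) -> None:
        # Return the block to the free list for later reuse.
        self._free.append(obj)

pool = Pool(factory=lambda: bytearray(64), size=1024)
block = pool.allocate()   # served from the pool, no fresh allocation
pool.release(block)       # recycled rather than freed
```

The design choice the paper highlights, exploiting a file's or use case's specific allocation pattern rather than relying on a fully generic allocator, is exactly what a fixed-size pool like this captures in its simplest form.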
APA, Harvard, Vancouver, ISO, and other styles
49

Vorobyova, Anna E., Inna E. Fedyunina, and Ekaterina A. Vinogradova. "LINGUISTIC AND MENTAL ASPECTS OF THE TRANSLATION OF OFFICIAL TEXTS (BASED ON THE OFFICIAL DOCUMENTS OF PRESS SERVICES AND NEWS PORTALS)." Sovremennye issledovaniya sotsialnykh problem 14, no. 4 (December 29, 2022): 372–87. http://dx.doi.org/10.12731/2077-1770-2022-14-4-372-387.

Full text
Abstract:
Background. The translation process should be considered not merely as cross-language but also as cross-cultural interaction, since two language systems and two cultures operate simultaneously. The translator’s mission is to adequately transfer the source code of the information into the target language and to adapt the national and cultural features of the original text; this involves mental processing of the underlying message implications and presuppositions. Strategies and methods for translating official documents are determined not only by their linguistic characteristics but also by their national-specific features. These aspects form the background of the study. Purpose. The article analyzes the lexical and syntactic features of the translation of official business texts through the prism of the national cultural code, and reveals the influence of the mental features of various linguistic and cultural communities on the linguistic, structural, and style-forming characteristics of business texts. Materials and methods. The empirical material for the study comprises 70 official documents of international organizations in English (UNESCO, the Minsk Declaration, U.S. Mission Russia, etc.) and their corresponding Russian translations. The documents are published in open access on the official websites of press services and news portals. In analyzing the source material, general scientific methods of analysis and synthesis, cognitive analysis, and content analysis were used. Results. The results of the study proved that national-cultural specificity has a direct impact on the linguistic, structural, and style-forming characteristics of official business texts. To ensure the adequacy of a translation in the recipient language, cognitive processing of the source code through the prism of the semantic space of the translator’s culture is required. Recurrent methods for translating official documents are the techniques of generalization, specification, addition, zero translation, and integration. Conclusion. Official business texts are marked by certain functional characteristics that must be taken into consideration in the process of translation to ensure its adequacy and communicative effect.
APA, Harvard, Vancouver, ISO, and other styles
50

Deswary, Dwi, and Ary Sutanto. "DEVELOPMENT OF MANAGEMENT INFORMATION SYSTEM IN MASTER STUDY PROGRAM ON EDUCATION MANAGEMENT GRADUATE PROGRAM OF JAKARTA STATE UNIVERSITY." IJER - INDONESIAN JOURNAL OF EDUCATIONAL REVIEW 4, no. 1 (July 3, 2017): 44. http://dx.doi.org/10.21009/ijer.04.01.05.

Full text
Abstract:
This research aims to describe: the needs for a management information system in the Master Program of Education Management, Graduate School, State University of Jakarta (UNJ); the planning of the management information system model that is needed; the development of the management information system model; and the testing of the management information system model. The method used is research and development. Data were collected through observation, interviews, documentation study, and audio-visual material. The research findings show that in the Master Program of Education Management, data and information management is currently carried out using information systems for the processing and storage of data and information. The needs of the study program include tools, software, and human resources. Equipment needs such as computers, networks, and other supporting tools have been prepared. Software requirements include information system software, while human resource needs include the provision of administrative and IT staff competent for the job. Planning covers the design objectives, aspects of education management, procedures, and the design models. The design model consists of a data flowchart and a menu structure that illustrate the systems and sub-systems of the designed information system. Development of the model involved graphic design, source code engineering, local web server installation, and layout design, based on the needs analysis and planning. Testing was carried out in three stages: the first stage tested the operation of the supporting components on the local computer system; the second stage tested the operation of the designed system after it was deployed on a local web server; and the third stage tested the operation of the overall information system.
APA, Harvard, Vancouver, ISO, and other styles