Journal articles on the topic 'Software quality, processes and metrics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Software quality, processes and metrics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Mishra, Alok, Raed Shatnawi, Cagatay Catal, and Akhan Akbulut. "Techniques for Calculating Software Product Metrics Threshold Values: A Systematic Mapping Study." Applied Sciences 11, no. 23 (December 1, 2021): 11377. http://dx.doi.org/10.3390/app112311377.

Abstract:
Several aspects of software product quality can be assessed and measured using product metrics. Without software metric threshold values, it is difficult to evaluate different aspects of quality. To this end, the interest in research studies that focus on identifying and deriving threshold values is growing, given the advantage of applying software metric threshold values to evaluate various software projects during their software development life cycle phases. The aim of this paper is to systematically investigate research on software metric threshold calculation techniques. In this study, electronic databases were systematically searched for relevant papers; 45 publications were selected based on inclusion/exclusion criteria, and research questions were answered. The results demonstrate the following important characteristics of studies: (a) both empirical and theoretical studies were conducted, a majority of which depends on empirical analysis; (b) the majority of papers apply statistical techniques to derive object-oriented metrics threshold values; (c) Chidamber and Kemerer (CK) metrics were studied in most of the papers, and are widely used to assess the quality of software systems; and (d) there is a considerable number of studies that have not validated metric threshold values in terms of quality attributes. From both the academic and practitioner points of view, the results of this review present a catalog and body of knowledge on metric threshold calculation techniques. The results set new research directions, such as conducting mixed studies on statistical and quality-related studies, studying an extensive number of metrics and studying interactions among metrics, studying more quality attributes, and considering multivariate threshold derivation.
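
Illustrative sketch: many of the statistical derivation techniques this mapping study catalogs reduce to percentile- or distribution-based cutoffs computed over a benchmark corpus of measured metric values. The snippet below shows one common quantile-based variant in Python; the metric name, percentiles, and benchmark data are invented for illustration, not taken from the paper.

```python
import numpy as np

def quantile_thresholds(values, quantiles=(0.70, 0.80, 0.90)):
    """Derive moderate/high/very-high thresholds for one software metric
    from a benchmark sample of measured values (illustrative choice)."""
    return {f"p{int(q * 100)}": float(np.quantile(values, q)) for q in quantiles}

# Hypothetical benchmark: coupling-between-objects (CBO) values measured
# across the classes of many reference systems (skewed, as metrics often are).
rng = np.random.default_rng(7)
cbo_sample = rng.lognormal(mean=1.2, sigma=0.6, size=5000)
print(quantile_thresholds(cbo_sample))  # e.g. {'p70': 4.5, 'p80': 5.5, 'p90': 7.2}
```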
2

Alenezi, Mamdouh. "Internal Quality Evolution of Open-Source Software Systems." Applied Sciences 11, no. 12 (June 19, 2021): 5690. http://dx.doi.org/10.3390/app11125690.

Abstract:
The evolution of software is necessary for the success of software systems, and studying and understanding software evolution is a focal topic in software engineering. One of the primary concepts of software evolution is that the internal quality of a software system declines as it evolves. In this paper, the evolution of the internal quality of object-oriented open-source software systems is examined by applying a software metrics approach. More specifically, we analyze how software systems evolve over versions with respect to size and the relationship between size and different internal quality metrics. The results and observations of this research include: (i) there is a significant difference between different systems concerning the LOC variable; (ii) there is a significant correlation between all pairwise comparisons of internal quality metrics; and (iii) the effect of complexity and inheritance on LOC was positive and significant, while the effect of coupling and cohesion was not significant.
3

Kataieva, Yevheniia, Svetlana Odokienko, Maya Luta, and Yaroslav Savchenko. "PRACTICAL QUALITY ANALYSIS OF OPEN SOURCE SOFTWARE." Management of Development of Complex Systems, no. 44 (November 30, 2020): 49–55. http://dx.doi.org/10.32347/2412-9933.2020.44.49-55.

Abstract:
The success of any project is determined by its ability to meet the needs of the consumer, so ensuring a high level of quality is a necessary task of any production, including software engineering. Insufficient quality of the created software forces many IT organizations to reserve up to 70% of an information system's budget for the maintenance stage, with up to 60% of all software modifications performed to eliminate errors and only the remaining 40% devoted to adapting the software to changed business processes, improving particular quality indicators, or preventing potential problems. Software quality is a complex concept. Standards distinguish the quality of development processes, the internal and external quality of the software product, and the quality of the software product in use. For each of these components of quality, a set of metrics can be named that determine the quality of the software product; the resulting structure is called a software quality model. A software metric is a measure that yields a numerical value for a property of the software or its specifications, together with the method of its calculation. Of particular interest are software complexity metrics. Complexity is an important factor on which other software quality attributes depend, such as accuracy, correctness, reliability, and maintainability. The existence of methods and algorithms for automatically calculating software complexity metrics makes it possible to obtain a comprehensive formal report on software quality in a short time. This allows objective monitoring of software quality throughout the project life cycle, adjustment of the project plan, and timely decisions about the need for refactoring.
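
Illustrative sketch: one of the classic, automatically computable complexity metrics the abstract alludes to is McCabe's cyclomatic complexity, V(G) = E - N + 2P for a control-flow graph with E edges, N nodes, and P connected components. A common shortcut counts decision points in the syntax tree, as below (Python's ast module; this approximation is our illustration, not a method from the paper).

```python
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """Approximate V(G) as 1 + the number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

code = """
def classify(x):
    if x < 0:
        return "negative"
    for d in range(2, x):
        if x % d == 0:
            return "composite"
    return "prime or small"
"""
print(cyclomatic_complexity(code))  # 4 = base 1 + if + for + if
```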
4

BARMAK, ALEXANDER, VIKTOR KUDRIAVTSEV, YRII FORKUN, and OKSANA YASHYNA. "SOFTWARE CODE ANALYSIS SYSTEM FOR RISK ASSESSMENT AND QUALITY ASSURANCE OF SOFTWARE." HERALD OF KHMELNYTSKYI NATIONAL UNIVERSITY 297, no. 3 (July 2, 2021): 25–29. http://dx.doi.org/10.31891/2307-5732-2021-297-3-25-29.

Abstract:
The paper presents the results of research on various standards, rules, and methods of writing software code, and an analysis of their impact on software quality and on the likelihood of technical risks associated with information processes within the system. Most of the risks that arise while developing software products are due to errors in building the system architecture or writing code. As a solution to such problems, it is proposed to apply the developed set of rules and methods to build the system architecture and assess the quality of software objects. Metrics have been developed to estimate the size and complexity of a module by combining elements of the Halstead and Chepin metrics. A set of principles for optimizing the structure of the system, known as the SOLID principles, was also presented. The application of these principles for system construction and analysis was substantiated in order to minimize risks, ensure the quality of the software system, and provide for easy extensibility of the project. Using these methods will optimize the project both for use and for further development. The value of such optimization for risk management is that the clearer the system and the easier it is to extend, the less likely it is that errors will occur when new functionality is added in the future.
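
Illustrative sketch: the Halstead metrics mentioned above are computed from operator/operand counts. With n1, n2 the distinct operators and operands and N1, N2 their total occurrences, vocabulary n = n1 + n2, length N = N1 + N2, volume V = N log2(n), difficulty D = (n1/2)(N2/n2), and effort E = D*V. The snippet shows only those standard formulas; the token classification is deliberately naive, and the paper's combination with Chepin's metric is not reproduced.

```python
import math
from collections import Counter

def halstead(operators, operands):
    """Core Halstead metrics from lists of token occurrences."""
    ops, opnds = Counter(operators), Counter(operands)
    n1, n2 = len(ops), len(opnds)                    # distinct counts
    N1, N2 = sum(ops.values()), sum(opnds.values())  # total occurrences
    n, N = n1 + n2, N1 + N2
    volume = N * math.log2(n)
    difficulty = (n1 / 2) * (N2 / n2)
    return {"vocabulary": n, "length": N, "volume": volume,
            "difficulty": difficulty, "effort": difficulty * volume}

# Tokens of a hypothetical statement: result = a + b * a
print(halstead(operators=["=", "+", "*"],
               operands=["result", "a", "b", "a"]))
```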
5

Patnaik, K. Sridhar, and Pooja Jha. "Proposed Metrics for Process Capability Analysis in Improving Software Quality: An Empirical Study." International Journal of Software Engineering and Technologies (IJSET) 1, no. 3 (December 1, 2016): 152. http://dx.doi.org/10.11591/ijset.v1i3.4578.

Abstract:
A software project incurs its largest expense on defect removal, which delays schedules, and there has been increasing demand for high-quality software. Here, high-quality software means delivering defect-free software and meeting predictable results within time and cost constraints. Software defect prediction strives to improve software quality and testing efficiency. The research work presented here is an empirical study that analyzes the importance of different metrics used in the organization. The paper examines the impact of the lower and upper specification limits (LSL and USL), known as organizational baselines, on various projects, and proposes four metrics for process capability analysis. These can prove beneficial for characterizing the software development process: they aim to improve the ongoing development process and help determine the quality of these processes in terms of their specification limits. The paper also examines whether the collected data are plausibly described by a normal (Gaussian) distribution.
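
Illustrative sketch: classical process capability analysis compares the spread of a process measurement against its specification limits, Cp = (USL - LSL) / 6σ and Cpk = min(USL - μ, μ - LSL) / 3σ, under an assumed normal distribution. The paper's four proposed metrics are not spelled out in the abstract, so only this standard baseline is shown, with invented data.

```python
import statistics

def capability_indices(samples, lsl, usl):
    """Classical Cp/Cpk for a process metric with spec limits LSL/USL."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)               # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # capability given centring
    return cp, cpk

# Hypothetical defect-density measurements (defects/KLOC) against
# organizational baselines LSL = 0.5 and USL = 4.0.
defect_density = [1.8, 2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 1.7]
print(capability_indices(defect_density, lsl=0.5, usl=4.0))
```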
6

Parthasarathy, Sudhaman, C. Sridharan, Thangavel Chandrakumar, and S. Sridevi. "Quality Assessment of Standard and Customized COTS Products." International Journal of Information Technology Project Management 11, no. 3 (July 2020): 1–13. http://dx.doi.org/10.4018/ijitpm.2020070101.

Abstract:
Software quality is a very important aspect in evolving strategy for IT vendors involved in commercial off-the-shelf (COTS, also referred to as packaged software) product development. Software metrics are widely accepted measures for monitoring and managing quality in software projects. Enterprise resource planning (ERP) systems are COTS products that attempt to integrate data and processes in organizations and often require extensive customization. Using software quality metrics already established in the literature, the software quality attributes defined by the ISO/IEC 9126 quality model were evaluated for a standard and a customized ERP product. This helps the ERP team identify the specific quality attributes that were affected by customization. This research study infers that ERP system customization has a considerable impact on the quality of the ERP product. The implications of the findings for both practice and research are discussed, and possible areas of future research are identified.
7

Azuma, Motoei. "Software products evaluation system: quality models, metrics and processes—International Standards and Japanese practice." Information and Software Technology 38, no. 3 (March 1996): 145–54. http://dx.doi.org/10.1016/0950-5849(95)01069-6.

8

Damasevicius, Robertas, and Vytautas Stuikys. "Metrics for evaluation of metaprogram complexity." Computer Science and Information Systems 7, no. 4 (2010): 769–87. http://dx.doi.org/10.2298/csis090315004d.

Abstract:
The concept of complexity is used in many areas of computer science and software engineering. Software complexity metrics can be used to evaluate and compare quality of software development and maintenance processes and their products. Complexity management and measurement is especially important in novel programming technologies and paradigms, such as aspect-oriented programming, generative programming, and metaprogramming, where complex multilanguage and multi-aspect program specifications are developed and used. This paper analyzes complexity management and measurement techniques, and proposes five complexity metrics (Relative Kolmogorov Complexity, Metalanguage Richness, Cyclomatic Complexity, Normalized Difficulty, Cognitive Difficulty) for measuring complexity of metaprograms at information, metalanguage, graph, algorithm, and cognitive dimensions.
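
Illustrative sketch: of the five proposed metrics, Relative Kolmogorov Complexity is the most directly approachable in code, since compressed size is the standard practical proxy for the (uncomputable) Kolmogorov complexity. The snippet estimates a specification's information-level complexity as the ratio of compressed to original size; zlib as the compressor is our assumption, not necessarily the authors' exact definition.

```python
import zlib

def relative_complexity(text: str) -> float:
    """Estimate relative Kolmogorov complexity as compressed/original size."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

boilerplate = "print('x')\n" * 100                              # highly regular
varied = "".join(f"v{i} = {i * 31 % 97}\n" for i in range(100)) # more information
print(relative_complexity(boilerplate))  # close to 0
print(relative_complexity(varied))       # noticeably higher
```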
9

Venkatraman, Prakash, and Goplakrishnan Sethumadhavan. "An Efficient Regression Testing Test Suite Optimization System with Quality Metrics." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 6754–63. http://dx.doi.org/10.1166/jctn.2016.5624.

Abstract:
To ensure software quality, software testing is one of the most significant processes in the Software Development Life Cycle (SDLC). Software systems evolve regularly to offer the necessary functionalities and to adapt to ever-changing customer needs. Regression testing refers to the portion of the test cycle in which a program is tested to ensure that changes, such as adding new features or adapting existing ones, do not affect features that are not supposed to be affected. Regression testing is applied to modified versions of the software to confirm that the modified features perform as intended and that the changes did not introduce unexpected faults, also known as regression errors; software regression testing is necessary in order to detect such errors. The main aim of the suggested system is to perform software regression testing in less time and at lower cost without compromising quality. To this end, we assess the quality of the software in terms of quality metrics such as dependability and maintainability after optimizing the test suite. The test suite optimization is performed using an Improved Particle Swarm Optimization (IPSO) algorithm, and the quality metrics are computed with tools such as JDepend and Rayleigh's model.
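
Illustrative sketch: PSO-based test suite optimization typically encodes a suite as a bit vector (test selected or not) and scores it on coverage retained versus cost. The paper's IPSO improvements are not described in the abstract, so below is a plain binary PSO for that encoding; the coverage matrix, costs, and fitness weights are invented.

```python
import math
import random

# coverage[t] = requirements covered by test t; cost[t] = its runtime
# (hypothetical example data, not from the paper)
coverage = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {4}]
cost = [3, 2, 2, 4, 1]

def fitness(bits):
    selected = [coverage[i] for i, b in enumerate(bits) if b]
    covered = set().union(*selected) if selected else set()
    total_cost = sum(c for c, b in zip(cost, bits) if b)
    return 10 * len(covered) - total_cost   # arbitrary weighting

def binary_pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    dim = len(coverage)
    pos = [[random.random() < 0.5 for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # sigmoid transfer turns velocity into a selection probability
                pos[i][d] = random.random() < 1 / (1 + math.exp(-vel[i][d]))
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest + [gbest], key=fitness)
    return gbest

suite = binary_pso()
print(suite, fitness(suite))  # a cheap subset that keeps requirement coverage
```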
10

Wang, Zhen Qi, and Dan Kai Zhang. "Improved CK Metrics for Object Oriented Design Measurement." Advanced Engineering Forum 6-7 (September 2012): 333–36. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.333.

Abstract:
In order to better understand and control the software development process and improve software quality, suitable measurement is needed. At the same time, traditional measurement methods no longer fit some of the unique features of object-oriented software; this has accelerated research on object-oriented software metrics, and considerable progress has been made. This paper describes the object-oriented software measurement method proposed by Chidamber and Kemerer (C&K). To address its shortcomings, we improve the C&K method, combine it with features of the Java language, and apply existing software measurement tools to implement the measurement indicators, helping developers guide software development processes to better meet the needs of users.
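
Illustrative sketch: the C&K suite comprises WMC (weighted methods per class), DIT (depth of inheritance tree), NOC (number of children), CBO (coupling between objects), RFC (response for a class), and LCOM (lack of cohesion in methods). The paper targets Java, but the flavor of such tooling can be shown against Python classes; unit method weights for WMC and MRO-based DIT are simplifying assumptions here.

```python
import ast

def wmc(class_source: str) -> int:
    """WMC with unit weights: the number of methods in the class."""
    cls = ast.parse(class_source).body[0]
    return sum(isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
               for n in cls.body)

def dit(cls: type) -> int:
    """Depth of inheritance tree: inheritance edges above the class."""
    return len(cls.__mro__) - 1

src = """
class Stack:
    def push(self, x): ...
    def pop(self): ...
    def peek(self): ...
"""
print(wmc(src))  # 3

class A: pass
class B(A): pass
print(dit(B))    # 2 (B -> A -> object)
```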
11

Roldan-Molina, Gabriela R., Jose R. Mendez, Iryna Yevseyeva, and Vitor Basto-Fernandes. "Ontology Fixing by Using Software Engineering Technology." Applied Sciences 10, no. 18 (September 11, 2020): 6328. http://dx.doi.org/10.3390/app10186328.

Abstract:
This paper presents OntologyFixer, a web-based tool that supports a methodology to build, assess, and improve the quality of ontology web language (OWL) ontologies. Using our software, knowledge engineers are able to fix low-quality OWL ontologies (such as those created from natural language documents using ontology learning processes). The fixing process is guided by a set of metrics and fixing mechanisms provided by the tool, and executed primarily through automated changes (inspired by quick fix actions used in the software engineering domain). To evaluate the quality, the tool supports numerical and graphical quality assessments, focusing on ontology content and structure attributes. This tool follows principles, and provides features, typical of scientific software, including user parameter requests, logging, multithreading execution, and experiment repeatability, among others. OntologyFixer architecture takes advantage of model view controller (MVC), strategy, template, and factory design patterns; and decouples graphical user interfaces (GUI) from ontology quality metrics, ontology fixing, and REST (REpresentational State Transfer) API (Application Programming Interface) components (used for pitfall identification, and ontology evaluation). We also separate part of the OntologyFixer functionality into a new package called OntoMetrics, which focuses on the identification of symptoms and the evaluation of the quality of ontologies. Finally, OntologyFixer provides mechanisms to easily develop and integrate new quick fix methods.
12

Nidagundi, Padmaraj, and Leonids Novickis. "Introduction to Lean Canvas Transformation Models and Metrics in Software Testing." Applied Computer Systems 19, no. 1 (May 1, 2016): 30–36. http://dx.doi.org/10.1515/acss-2016-0004.

Abstract:
Software plays a key role nowadays in all fields, from simple up to cutting-edge technologies, and most technological devices now run on software. Verification and validation in software development have become very important for producing high-quality software that meets business stakeholders' requirements. Different software development methodologies have given a new dimension to software testing. In traditional waterfall development, software testing comes at the end: it begins with resource planning, a test plan is designed, and test criteria are defined for acceptance testing. In this process, most of the test plan is thoroughly documented, which leads to time-consuming processes. For modern software development methodologies such as agile, where long test processes and heavy documentation are not followed strictly due to the short iterations of development and testing, lean canvas transformation models can be a solution. This paper explores the possibilities of adopting lean transformation models and metrics in the software test plan to simplify the test process and to support further use of these test metrics on the canvas.
13

Rosli, Marshima Mohd, and Nor Shahida Mohamad Yusop. "Evaluating the effectiveness of data quality framework in software engineering." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (December 1, 2022): 6410. http://dx.doi.org/10.11591/ijece.v12i6.pp6410-6422.

Abstract:
The quality of data is important in research working with data sets because poor data quality may lead to invalid results. Data sets contain measurements that are associated with metrics and entities; however, in some data sets, it is not always clear which entities have been measured and exactly which metrics have been used. This means that measurements could be misinterpreted. In this study, we develop a framework for data quality assessment that determines whether a data set has sufficient information to support the correct interpretation of data for analysis in empirical research. The framework incorporates a dataset metamodel and a quality assessment process to evaluate the data set quality. To evaluate the effectiveness of our framework, we conducted a user study. We used observations, a questionnaire and a think-aloud approach to provide insights into the framework through participant thought processes while applying the framework. The results of our study provide evidence that most participants successfully applied the definitions of dataset category elements and the formal definitions of data quality issues to the datasets. Further work is needed to reproduce our results with more participants, and to determine whether the data quality framework is generalizable to other types of data sets.
14

Rodríguez-Hernández, V., M. C. Espino-Gudiño, J. L. González-Pérez, J. Gudiño-Bazaldúa, and Victor Castano. "Assessing quality in software development: An agile methodology approach." Journal of Advanced Computer Science & Technology 4, no. 2 (June 9, 2015): 225. http://dx.doi.org/10.14419/jacst.v4i2.4173.

Abstract:
A novel methodology, the result of 10 years of in-field testing, which makes possible the convergence of different types of models and quality standards for Engineering and Computer Science Faculties, is presented. Since most software-developing companies are small and medium sized, the projects developed must focus on SCRUM and Extreme Programming (XP), as opposed to RUP, which is quite heavy, as well as on the Personal Software Process (PSP) and Team Software Process (TSP), which provide students with competences and a structured framework. The ISO 90003:2004 standard is employed to define the processes by means of a quality system without adding new requirements or changing the existing ones. The model is also based on ISO/IEC 25000 (which draws on ISO/IEC 9126 and ISO/IEC 14598) to allow comparing software built using different metrics.
15

KHOSHGOFTAAR, TAGHI M., KEHAN GAO, and AMRI NAPOLITANO. "AN EMPIRICAL STUDY OF FEATURE RANKING TECHNIQUES FOR SOFTWARE QUALITY PREDICTION." International Journal of Software Engineering and Knowledge Engineering 22, no. 02 (March 2012): 161–83. http://dx.doi.org/10.1142/s0218194012400013.

Abstract:
The primary goal of software quality engineering is to produce a high quality software product through the use of some specific techniques and processes. One strategy is applying data mining techniques to software metric and defect data collected during the software development process to identify potential low-quality program modules. In this paper, we investigate the use of feature selection in the context of software quality estimation (also referred to as software defect prediction), where a classification model is used to predict whether program modules (instances) are fault-prone or not-fault-prone. Seven filter-based feature ranking techniques are examined. Among them, six are commonly used, and the other one, named signal to noise ratio (SNR), is rarely employed. The objective of the paper is to compare these seven techniques for various software data sets and assess their effectiveness for software quality modeling. A case study is performed on 16 software data sets, and classification models are built with five different learners and evaluated with two performance metrics. Our experimental results are summarized based on statistical tests for significance. The main conclusion is that the SNR technique performs as well as the best performer of the six commonly used techniques.
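
Illustrative sketch: the signal-to-noise ratio ranker scores each feature by class separation relative to within-class spread, SNR = |μ+ − μ−| / (σ+ + σ−), and keeps the top-scoring software metrics. The snippet below uses synthetic fault data; it mirrors the general technique, not the paper's exact experimental setup.

```python
import numpy as np

def snr_scores(X, y):
    """Per-feature SNR for binary labels y in {0, 1}."""
    pos, neg = X[y == 1], X[y == 0]
    return np.abs(pos.mean(axis=0) - neg.mean(axis=0)) / (
        pos.std(axis=0) + neg.std(axis=0))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))          # 100 modules x 4 software metrics
y = (X[:, 0] > 0).astype(int)          # fault-proneness driven by feature 0
X[:, 0] += 2 * y                       # strengthen that signal
print(np.argsort(snr_scores(X, y))[::-1])  # feature 0 should rank first
```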
16

Costa, Ana Rita, Carla Barbosa, Gilberto Santos, and M. Rui Alves. "Six Sigma: Main Metrics and R Based Software for Training Purposes and Practical Industrial Quality Control." Quality Innovation Prosperity 23, no. 2 (July 31, 2019): 83. http://dx.doi.org/10.12776/qip.v23i2.1278.

Abstract:
Purpose: To clarify the different types of data likely to occur in any service or industrial process, the main applicable statistics for each type of data, and the Six Sigma metrics that allow characterising and benchmarking organisational processes. Methodology/Approach: A short reference to statistical process control is carried out, from Shewhart's works to Motorola's achievements, followed by a short discussion of the use of Six Sigma tools as a part of today's total quality approaches, and by a discussion of the continuous, attribute and counting data worlds and their main applications in process analysis. Because many quality professionals may have difficulties dealing with engineering perspectives, a review of the main classic and Six Sigma process metrics is done with examples. Complementing the discussion, four functions written in the R language are presented, which can deal with real organisational data or can be used for training purposes. Findings: The functions developed provide useful graphical displays and calculate all necessary metrics, having the ability to let the user provide theoretical values for training activities. Real and simulated case studies help in understanding data worlds and the respective Six Sigma metrics. Research Limitation/Implication: This paper reports an intentionally simple theoretical perspective on Six Sigma metrics and friendly software, which is available to all interested professionals on request to the authors. Originality/Value of paper: The paper presents clear definitions of the main data types and metrics and is supported by a set of four new functions that can be used by any researcher with a minimum knowledge of the R software.
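
Illustrative sketch: for counting (defect) data, the central Six Sigma quantities are defects per million opportunities, DPMO = 10^6 * D / (U * O), and the short-term sigma level, conventionally the standard-normal quantile of the yield plus a 1.5 sigma shift. The paper's software is a set of R functions available from the authors; the snippet below merely mirrors this standard arithmetic in Python.

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    return 1_000_000 * defects / (units * opportunities_per_unit)

def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    """Short-term sigma level with the conventional 1.5-sigma shift."""
    yield_rate = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_rate) + shift

d = dpmo(defects=38, units=1000, opportunities_per_unit=5)
print(d, sigma_level(d))  # 7600.0 DPMO, roughly a 3.9 sigma process
```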
17

Agárdi, Anita, and László Kovács. "Property-Based Quality Measures in Ontology Modeling." Applied Sciences 12, no. 23 (December 6, 2022): 12475. http://dx.doi.org/10.3390/app122312475.

Abstract:
The development of an appropriate ontology model is usually a hard task. One of the main issues is that ontology developers usually concentrate on classes and neglect the role of properties. This paper analyzes the role of an appropriate property set in providing multi-purpose ontology models with a high level of re-usability in different areas. In this paper, novel quality metrics related to property components are introduced and a conversion method is presented to map the base ontology into models for software development. The benefits of the proposed quality metrics and the usability of the proposed conversion methods are demonstrated by examples from the field of knowledge modeling.
18

Alsulami, Musleh. "Exploring the Use of Software Metrics in Saudi Enterprises." Tehnički glasnik 16, no. 2 (May 11, 2022): 155–60. http://dx.doi.org/10.31803/tg-20220124143638.

Abstract:
This study aimed to evaluate the application of software metrics by software enterprises in Saudi Arabia. An extensive literature review, spanning approximately two decades, was conducted to comprehend the current body of knowledge on the use of software metrics in Saudi enterprises. Based on the drawbacks, shortcomings, and fallacies of the existing studies, a series of interview questionnaires was developed, and interviews were conducted to collect real-time, actual data. Seven Saudi enterprises were selected; each enterprise was regarded as a unit for our case study, with its manager acting on its behalf. Forty managers were interviewed, and their responses were analyzed. The responses indicate that the software is useful enough to support business processes. In assessing the complexity of implementing this software, the feedback received suggests a lack of communication between developers and management's intent. Moreover, the findings of this study show that organizations need to give more attention to quality and productivity management. In addition, the results indicate that when agile development is supported by effective software, the enterprise's services are implemented appropriately.
19

Saini, Neha, Prof Indu Chhabra, and Dr Ajay Guleria. "Machine Learning Based Approach for Evaluating Agile Based Methods to Enhance Software Quality." International Journal of Engineering and Advanced Technology 12, no. 2 (December 30, 2022): 123–27. http://dx.doi.org/10.35940/ijeat.b3956.1212222.

Abstract:
Developing a quality software product is an essential need of the software industry. Software quality comprises various factors and therefore cannot be measured on the basis of a single variable. Several agile software development methods have evolved around the world over time, contributing to new and improved software practices. Agile processes have spread through the software development industry because they deliver good-quality software in minimal time. As modern evaluation metrics have changed, changes have been observed in agile-oriented quality evaluation methods as well. This paper presents a machine learning based approach for evaluating agile methods for enhancing software quality. The proposed mechanism for processing the data attributes is inspired by SWARA and FDD. Validation and evaluation have been performed using statistical and quantitative parameters.
20

S Al- Tarawneh, Enar, and Lana kh Al-Tarawneh. "Introducing Comprehensive Software Quality Model for Evaluating and Development E-Government Websites in Jordan Using Fuzzy Analytical Hierarchy Process." Webology 19, no. 1 (January 20, 2022): 890–902. http://dx.doi.org/10.14704/web/v19i1/web19061.

Abstract:
E-government is a new principle, mainly employed to improve the processes of government and make it more efficient, responsive, and transparent. These processes reflect the way that both enterprises and citizens will ultimately interact with their government units, as well as the way that government units cooperate and communicate. The main goal of this contribution is to build a new quality model dedicated to evaluating e-government websites. In this work, a comparative analysis of existing software quality models has been carried out. A good number of quality models for evaluating e-government websites have been published; each model is intended to represent the totality of quality factors and issues relevant to a particular notion of website quality. Such quality models may be used to improve new websites, to develop ways of measuring the quality of existing websites, or to guide the development of existing websites. Mechanisms are therefore needed to choose the right model, of highest intrinsic quality and greatest relevance. In this research, mechanisms are proposed for the comparative assessment of e-government website quality models. The introduced model has been further enhanced by expanding its general quality characteristics and identifying metrics to measure their attributes. Finally, the Fuzzy Analytical Hierarchy Process (FAHP) has been applied to the model as a decision-making tool in order to execute an empirical study verifying the model's efficiency.
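
Illustrative sketch: FAHP replaces AHP's crisp pairwise judgments with fuzzy numbers. The abstract does not say which FAHP variant is used, so the snippet shows Buckley's widely used geometric-mean method with triangular fuzzy numbers; the criteria and comparison matrix are invented.

```python
import math

def buckley_weights(M):
    """M[i][j] = (l, m, u): triangular fuzzy comparison of criterion i vs j."""
    n = len(M)
    # fuzzy geometric mean of each row
    r = [tuple(math.prod(M[i][j][k] for j in range(n)) ** (1 / n)
               for k in range(3)) for i in range(n)]
    total = tuple(sum(ri[k] for ri in r) for k in range(3))
    # fuzzy weight = r_i (x) (sum of r)^-1, where (l, m, u)^-1 = (1/u, 1/m, 1/l)
    fuzzy_w = [(ri[0] / total[2], ri[1] / total[1], ri[2] / total[0]) for ri in r]
    crisp = [sum(w) / 3 for w in fuzzy_w]   # centre-of-area defuzzification
    s = sum(crisp)
    return [c / s for c in crisp]

eq = (1, 1, 1)  # a criterion compared with itself
# Invented judgments over three website quality criteria:
# usability vs reliability vs security.
weights = buckley_weights([
    [eq,              (2, 3, 4), (1, 2, 3)],
    [(1/4, 1/3, 1/2), eq,        (1/3, 1/2, 1)],
    [(1/3, 1/2, 1),   (1, 2, 3), eq],
])
print(weights)  # normalized importance weights, summing to 1
```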
21

Greshnikov, I. I., L. S. Kuravsky, and G. A. Yuriev. "Principles of Developing a Software and Hardware Complex for Crew Intelligent Support and Training Level Assessment." Моделирование и анализ данных 11, no. 2 (2021): 5–30. http://dx.doi.org/10.17759/mda.2021110201.

Abstract:
Presented is a new approach to aircraft crew intelligent support, which is based on comparing flight fragments (maneuvers) under study with the relevant patterns contained in the database and representing the system “empirical intelligence”. Principal components of this approach are four new metrics for comparing flight fragments, viz.: the Euclidean metric in the space of wavelet coefficients; the likelihood metric of eigenvalue trajectories for transformations of activity parameters; the Kohonen metric in the space of wavelet coefficients; the likelihood metric for comparing gaze trajectories. Features of the presented approach are: the presence of an “intelligent component” that is contained in empirical data and can be flexibly changed as they accumulate; the use of integral comparisons of the flight fragments under study and video oculography data with relevant patterns of various types and performance quality from a specialized database, with transferring characteristics of the nearest pattern from this specialized database to the fragment under study; applying a complex combination of the methods for stochastic processes analysis and multivariate statistical techniques.
22

Zheng, Mengze, Islam Zada, Sara Shahzad, Javed Iqbal, Muhammad Shafiq, Muhammad Zeeshan, and Amjad Ali. "Key Performance Indicators for the Integration of the Service-Oriented Architecture and Scrum Process Model for IOT." Scientific Programming 2021 (February 2, 2021): 1–11. http://dx.doi.org/10.1155/2021/6613579.

Abstract:
An important aspect of any business process lifecycle is performance management, where performance requirements on business processes are specified as Key Performance Indicators (KPIs) with target values to be achieved in a certain analysis period. A KPI is a business metric used to measure and evaluate the capability, maturity, complexity, and agility of a business process in the development environment. This study designed four general KPIs for the integration of SOA and scrum to advance these approaches for IIoT. The study also identified some common metrics that will help software developers, especially those who want to apply SOA and scrum integration. These metrics play a critical role in bridging improvement strategies and concepts with operational activities. The identified KPIs help measure the business agility, quality and value, team efficiency, and complexity of scrum- and SOA-based projects. Software development organizations can also use these KPIs to know where to focus their resources to deliver the ultimate business profit, and software business organizations can better align their projects and IT investments with rapid market changes and deliveries.
23

Torres-Sospedra, Joaquín, Begoña Martínez-Salvador, Cristina Campos Sancho, and Mar Marcos. "Process Model Metrics for Quality Assessment of Computer-Interpretable Guidelines in PROforma." Applied Sciences 11, no. 7 (March 25, 2021): 2922. http://dx.doi.org/10.3390/app11072922.

Abstract:
Background: Clinical Practice Guidelines (CPGs) include recommendations to optimize patient care and thus have the potential to improve the quality and outcomes of healthcare. To achieve this, CPG recommendations are usually formalized in terms of Computer-Interpretable Guideline (CIG) languages. However, a clear understanding of CIG models may prove complicated, due to the inherent complexity of CPGs and the specificities of CIG languages. Drawing a parallel with the Business Process Management (BPM) and the Software Engineering fields, understandability and modifiability of CIG models can be regarded as primary quality attributes, in order to facilitate their validation, as well as their adaptation to accommodate evolving clinical evidence, by modelers (typically teams made up of clinical and IT experts). This constitutes a novel approach in this area of CIG development, where understandability and modifiability aspects have not been considered to date. Objective: In this paper, we define a comprehensive set of process model metrics for CIGs described in the PROforma CIG language, with the main objective of providing tools for quality assessment of CIG models in this language. Methods: To this end, we first reinterpret a set of metrics from the BPM field in terms of PROforma and then we define new metrics to capture the singularities of PROforma models. Additionally, we report on a set of experiments to assess the relationship between the structural and logical properties of CIG models, as measured by the proposed metrics, and their understandability and modifiability from the point of view of modelers, both clinicians and IT staff. For the analysis of the experiment results, we perform statistical analysis based on a generalized linear mixed model with binary logistic regression. Results: Our contribution includes the definition of a comprehensive set of metrics that allow measuring model quality aspects of PROforma CIG models, the implementation of tools and algorithms to assess the metrics for PROforma models, and the empirical validation of the proposed metrics as quality indicators. Conclusions: In light of the results, we conclude that the proposed metrics can be of great value, as they capture the PROforma-specific features in addition to those inspired by the general-purpose BPM metrics in the literature. In particular, the newly defined metrics for PROforma prevail as statistically significant when the whole CIG model is considered, which means that they better characterize its complexity. Consequently, the proposed metrics can be used as quality indicators of the understandability, and thereby maintainability, of PROforma CIGs.
24

BELLINI, CARLO GABRIEL PORTO, RITA DE CÁSSIA DE FARIA PEREIRA, and JOÃO LUIZ BECKER. "MEASUREMENT IN SOFTWARE ENGINEERING: FROM THE ROADMAP TO THE CROSSROADS." International Journal of Software Engineering and Knowledge Engineering 18, no. 01 (February 2008): 37–64. http://dx.doi.org/10.1142/s021819400800357x.

Abstract:
Research on software measurement can be organized around five key conceptual and methodological issues: how to apply measurement theory to software, how to frame software metrics, how to develop metrics, how to collect core measures, and how to analyze measures. The subject is of special concern for the industry, which is interested in improving practices, mainly in developing countries, where the software industry represents an opportunity for growth and usually receives institutional support for matching international quality standards. Academics are also in need of understanding and developing more effective methods for managing the software process and assessing the success of products and services, as a result of an enhanced awareness about the urgency of aligning business processes and information systems. This paper unveils the fundamentals of measurement in software engineering and discusses current issues and foreseeable trends for the subject. A literature review was performed within major academic publications in the last decade, and findings suggest a sensible shift of measurement interests towards managing the software process as a whole, without losing sight of the customary focus on hard issues like algorithm efficiency and worker productivity.
25

Isyaku, Babangida, Kamalrulnizam Abu Bakar, Mohd Soperi Mohd Zahid, Eman H. Alkhammash, Faisal Saeed, and Fuad A. Ghaleb. "Route Path Selection Optimization Scheme Based Link Quality Estimation and Critical Switch Awareness for Software Defined Networks." Applied Sciences 11, no. 19 (September 29, 2021): 9100. http://dx.doi.org/10.3390/app11199100.

Abstract:
Software-defined networking (SDN) is a new paradigm that decouples the control plane and the data plane, offering a more flexible way to manage the network efficiently. However, the increasing traffic due to the proliferation of Internet of Things (IoT) devices increases the number of flow arrivals, which in turn causes flow rules to change more often and path setup requests to increase. These events require route path computation to take place immediately to cope with the new network changes. Searching for an optimal route can be costly in terms of the time required to calculate a new path and update the corresponding switches. Yet current path selection schemes consider only a single routing metric, either link or switch operation; incorporating both link quality and the switch's role in path selection decisions has not been considered. This paper proposes Route Path Selection Optimization (RPSO) with multiple constraints. RPSO introduces joint parameters based on links and switches, such as Link Latency (LL), Link Delivery Ratio (LDR), and Critical Switch Frequency Score (CWFscore). These metrics encourage path selection with better link quality and a minimal number of critical switches. The experimental results show that the proposed scheme reduced path stretch by 37% and path setup latency by 73%, thereby improving throughput by 55.73% and the packet delivery ratio by 12.5% compared to the baseline work.
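
Illustrative sketch: the core idea of combining link latency, link delivery ratio, and a penalty for critical switches can be expressed as a shortest-path search over a composite edge cost. The weighting and topology below are invented stand-ins for the paper's actual RPSO formulation.

```python
import heapq

def composite_cost(latency_ms, delivery_ratio, dst_is_critical,
                   a=1.0, b=5.0, c=10.0):
    """Lower is better: penalize latency, loss, and critical switches."""
    return a * latency_ms + b * (1 - delivery_ratio) + c * dst_is_critical

def best_path(graph, critical, src, dst):
    """graph[u] = [(v, latency_ms, delivery_ratio), ...]; Dijkstra search."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, lat, ldr in graph.get(u, []):
            nd = d + composite_cost(lat, ldr, v in critical)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:        # walk predecessors back to the source
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

g = {"s1": [("s2", 2, 0.99), ("s3", 1, 0.97)],
     "s2": [("s4", 2, 0.99)],
     "s3": [("s4", 1, 0.90)]}
print(best_path(g, critical={"s3"}, src="s1", dst="s4"))  # avoids s3
```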
26

Luther, Wolfram, Ekaterina Auer, and Benjamin Weyers. "Reliable Visual Analytics, a Prerequisite for Outcome Assessment of Engineering Systems." Acta Cybernetica 24, no. 3 (March 16, 2020): 287–314. http://dx.doi.org/10.14232/actacyb.24.3.2020.3.

Abstract:
Various evaluation approaches exist for multi-purpose visual analytics (VA) frameworks. They are based on empirical studies in information visualization or on community activities, for example, VA Science and Technology Challenge (2006-2014) created as a community evaluation resource to 'decide upon the right metrics to use, and the appropriate implementation of those metrics including datasets and evaluators'. In this paper, we propose to use evaluated VA environments for computer-based processes or systems with the main goal of aligning user plans, system models and software results. For this purpose, trust in VA outcome should be established, which can be done by following the (meta-)design principles of a human-centered verification and validation assessment and also in dependence on users' task models and interaction styles, since the possibility to work with the visualization interactively is an integral part of VA. To define reliable VA, we point out various dimensions of reliability along with their quality criteria, requirements, attributes and metrics. Several software packages are used to illustrate the concepts.
27

Noël, René, Carla Taramasco, and Gastón Márquez. "Standards, Processes, and Tools Used to Evaluate the Quality of Health Information Systems: Systematic Literature Review." Journal of Medical Internet Research 24, no. 3 (March 8, 2022): e26577. http://dx.doi.org/10.2196/26577.

Abstract:
Background Evaluating health information system (HIS) quality is strategically advantageous for improving the quality of patient care. Nevertheless, few systematic studies have reported what methods, such as standards, processes, and tools, were proposed to evaluate HIS quality. Objective This study aimed to identify and discuss the existing literature that describes standards, processes, and tools used to evaluate HIS quality. Methods We conducted a systematic literature review using review guidelines focused on software and systems. We examined seven electronic databases—Scopus, ACM (Association for Computing Machinery), ScienceDirect, Google Scholar, IEEE Xplore, Web of Science, and PubMed—to search for and select primary studies. Results Out of 782 papers, we identified 17 (2.2%) primary studies. We found that most of the primary studies addressed quality evaluation from a management perspective. On the other hand, there was little explicit and pragmatic evidence on the processes and tools that allowed for the evaluation of HIS quality. Conclusions To promote quality evaluation of HISs, it is necessary to define mechanisms and methods that operationalize the standards in HISs. Additionally, it is necessary to create metrics that measure the quality of the most critical components and processes of HISs.
28

Tibaut, Andrej, and Sara Guerra de Oliveira. "A Framework for the Evaluation of the Cultural Heritage Information Ontology." Applied Sciences 12, no. 2 (January 13, 2022): 795. http://dx.doi.org/10.3390/app12020795.

Abstract:
The intelligent management of built cultural heritage, including heritage buildings, requires common semantics in the form of standardized ontologies to achieve semantic interoperability. Foundational ontologies should be reused when building new ontologies, as they provide high-level terms; however, candidate foundational ontologies should be evaluated for quality and pitfalls. Simple metrics (e.g., number of concepts) are easy to obtain with existing tools. Complex metrics such as quality of ontology structure, functional adequacy, transferability, reliability, compatibility, maintainability, and operability, are defined in recent ontology evaluation frameworks; however, these do not evaluate interoperability features. The paper proposes an improved framework for an automated ontology evaluation based on the OQuaRE framework. Our approach improved some of the metrics of the OQuaRE framework and introduced three metrics for assessing the interoperability of the ontology in question (Externes, Composability, and Aggregability). In the experimental section, the framework is validated in an evaluation of cultural heritage information ontology (CIDOC CRM—ISO 12217:2014) with the use of new software for ontology evaluation. The detailed results reveal that the ontology is minimally acceptable and that the improved evaluation framework efficiently integrated interoperability metrics. Recommendations for the improvement of the cultural heritage information ontology are described in the Discussion and Conclusions section.
29

Çelikkaya, Güner. "Manufacturing Execution Systems And A Sectoral Application." KnE Social Sciences 1, no. 2 (March 19, 2017): 73. http://dx.doi.org/10.18502/kss.v1i2.648.

Abstract:
Information systems are in widespread use in manufacturing systems. Most companies need to track production processes with information systems to stay prominent in a highly competitive environment. Specific metrics need to be followed, and it is important that these data be accessed on time and be reliable. These metrics cover all manufacturing and quality processes; in this context, efficiency, line stoppages, malfunctions and interruptions, and body and supplied-part traceability information are needed. In this study, MES (Manufacturing Execution Systems), a system developed for tracking production processes, is presented. MES works from receipt of an order to delivery of the product, covering the essentials of production activities, in order to capture the real-time status of all manufacturing data. In this way, all information that is not recorded manually, or that takes a long time to record, can be tracked instantaneously, and these records can be retrieved and analyzed in the future; the system also provides accurate data. This study depicts all phases, from design to implementation, of an automotive sub-industry firm's MES application designed on the basis of integrated software process requirements. The system's integrated software will be implemented on the Module Line. The study will allow tracking of each related product's complete process, decision-making, and traceability data with serial numbers.
30

Irawati, Anie Rose, Didik Kurniawan, and Anastasya Cindy Grissherin. "IMPLEMENTASI SOFTWARE METRICS UNTUK PENGUKURAN PERFORMA PADA SISTEM INFORMASI AKADEMIK UNIVERSITAS LAMPUNG (SIAKAD V.4.1 UNILA)." Jurnal Pepadun 3, no. 1 (April 1, 2022): 97–107. http://dx.doi.org/10.23960/pepadun.v3i1.100.

Abstract:
The University of Lampung Academic Information System (SIAKAD) is an information system developed to support the university in managing student academic processes. The absence of a feedback section means that users, especially students, are unable to deliver opinions about the system, so that even though the system is running well, the real quality of the system based on user experience is unknown. Performance is one aspect of quality that is important to know, but it cannot be measured directly. The study uses an information system functional scoreboard to assess three aspects of the system, namely system performance, information effectiveness, and service performance. The questionnaire was distributed to 100 respondents from all faculties as a sample, expected to represent answers from the entire university. Based on the results, the performance of SIAKAD V.4.1 UNILA is good to a moderate extent. Things that need to be improved include the stages of service procedures; the addition of other academic information such as graphs of students' grades, transparency of grades, an updated course schedule, and the latest campus information displayed clearly and well; integration with the graduation registration system and with other systems at the University of Lampung; and better server replacement and maintenance.
31

Handler, David C. L., Flora Cheng, Abdulrahman M. Shathili, and Paul A. Haynes. "PeptideWitch–A Software Package to Produce High-Stringency Proteomics Data Visualizations from Label-Free Shotgun Proteomics Data." Proteomes 8, no. 3 (August 21, 2020): 21. http://dx.doi.org/10.3390/proteomes8030021.

Abstract:
PeptideWitch is a python-based web module that introduces several key graphical and technical improvements to the Scrappy software platform, which is designed for label-free quantitative shotgun proteomics analysis using normalised spectral abundance factors. The program inputs are low stringency protein identification lists output from peptide-to-spectrum matching search engines for ‘control’ and ‘treated’ samples. Through a combination of spectral count summation and inner joins, PeptideWitch processes low stringency data, and outputs high stringency data that are suitable for downstream quantitation. Data quality metrics are generated, and a series of statistical analyses and graphical representations are presented, aimed at defining and presenting the difference between the two sample proteomes.
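
Illustrative sketch: a normalised spectral abundance factor weights each protein's spectral counts by protein length and normalizes across the sample, NSAF_i = (SpC_i / L_i) / sum_j (SpC_j / L_j). The snippet shows only that calculation; PeptideWitch's inner joins and stringency filtering are not reproduced, and the counts are invented.

```python
def nsaf(spectral_counts, lengths):
    """NSAF_i = (SpC_i / L_i) / sum_j (SpC_j / L_j)."""
    saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
    total = sum(saf.values())
    return {p: v / total for p, v in saf.items()}

counts = {"P1": 40, "P2": 10, "P3": 5}      # hypothetical spectral counts
lengths = {"P1": 400, "P2": 250, "P3": 90}  # protein lengths (residues)
print(nsaf(counts, lengths))                 # values sum to 1.0
```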
32

Dewangan, Seema, Rajwant Singh Rao, Alok Mishra, and Manjari Gupta. "Code Smell Detection Using Ensemble Machine Learning Algorithms." Applied Sciences 12, no. 20 (October 13, 2022): 10321. http://dx.doi.org/10.3390/app122010321.

Abstract:
Code smells are the result of not following software engineering principles during software development, especially in the design and coding phases, and they lead to low maintainability. Code smell detection can help in evaluating the quality of software and its maintainability. Many machine learning algorithms are being used to detect code smells. In this study, we applied five ensemble machine learning and two deep learning algorithms to detect code smells in four datasets: the Data class, the God class, the Feature-envy, and the Long-method datasets. In previous work, machine learning and stacking ensemble learning algorithms were applied to these datasets and the results were acceptable, but there is scope for improvement. A class balancing technique (SMOTE) was applied to handle the class imbalance problem in the datasets, and the Chi-square feature extraction technique was applied to select the more relevant features in each dataset. All five algorithms obtained the highest accuracy, 100%, for the Long-method dataset with the different selected sets of metrics, while the poorest accuracy, 91.45%, was achieved by the Max voting method on the Feature-envy dataset with the selected set of twelve metrics.
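
Illustrative sketch: the study's recipe of class balancing, feature selection, and an ensemble detector maps naturally onto scikit-learn and imbalanced-learn: SMOTE, chi-squared scoring, and a max-voting classifier. The dataset, columns, and hyperparameters below are placeholders, not the paper's settings.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a code smell dataset: rows are classes/methods,
# columns are software metrics, y marks smelly instances (imbalanced).
X, y = make_classification(n_samples=600, n_features=20, weights=[0.85],
                           random_state=42)

ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(random_state=42)),
    ("dt", DecisionTreeClassifier(random_state=42)),
    ("lr", LogisticRegression(max_iter=1000)),
], voting="hard")                       # max voting, as in the study

pipe = Pipeline([
    ("scale", MinMaxScaler()),          # chi2 requires non-negative features
    ("smote", SMOTE(random_state=42)),  # balance smelly vs non-smelly
    ("chi2", SelectKBest(chi2, k=12)),  # keep the 12 top-scoring metrics
    ("clf", ensemble),
])
print(cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean())
```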
33

Ozadam, Hakan, Michael Geng, and Can Cenik. "RiboFlow, RiboR and RiboPy: an ecosystem for analyzing ribosome profiling data at read length resolution." Bioinformatics 36, no. 9 (January 13, 2020): 2929–31. http://dx.doi.org/10.1093/bioinformatics/btaa028.

Abstract:
Summary: Ribosome occupancy measurements enable protein abundance estimation and infer mechanisms of translation. Recent studies have revealed that sequence read lengths in ribosome profiling data are highly variable and carry critical information. Consequently, data analyses require the computation and storage of multiple metrics for a wide range of ribosome footprint lengths. We developed a software ecosystem including a new efficient binary file format named ‘ribo’. Ribo files store all essential data grouped by ribosome footprint lengths. Users can assemble ribo files using our RiboFlow pipeline that processes raw ribosomal profiling sequencing data. RiboFlow is highly portable and customizable across a large number of computational environments with built-in capabilities for parallelization. We also developed interfaces for writing and reading ribo files in the R (RiboR) and Python (RiboPy) environments. Using RiboR and RiboPy, users can efficiently access ribosome profiling quality control metrics, generate essential plots and carry out analyses. Altogether, these components create a software ecosystem for researchers to study translation through ribosome profiling. Availability and implementation: For a quickstart, please see https://ribosomeprofiling.github.io. Source code, installation instructions and links to documentation are available on GitHub: https://github.com/ribosomeprofiling. Supplementary information: Supplementary data are available at Bioinformatics online.
34

Rekoske, John M., Eric M. Thompson, Morgan P. Moschetti, Mike G. Hearne, Brad T. Aagaard, and Grace A. Parker. "The 2019 Ridgecrest, California, Earthquake Sequence Ground Motions: Processed Records and Derived Intensity Metrics." Seismological Research Letters 91, no. 4 (February 12, 2020): 2010–23. http://dx.doi.org/10.1785/0220190292.

Abstract:
Abstract Following the 2019 Ridgecrest, California, earthquake sequence, we compiled ground-motion records from multiple data centers and processed these records using newly developed ground-motion processing software that performs quality assurance checks, performs standard time series processing steps, and computes a wide range of ground-motion metrics. In addition, we compute station and waveform metrics such as the time-averaged shear-wave velocity to 30 m depth (VS30), finite-rupture distances, and spectral accelerations. This data set includes 22,708 records from 133 events from 4 July 2019 (UTC) to 18 October 2019 with a magnitude range from 3.6 to 7.1. We expect that the rapid collection and dissemination of this information will facilitate detailed studies of these ground motions. In this article, we describe the data selection, processing steps, and how to access the data.
35

Nascimento de Lima, Giuseppe Anthony, Thalles Henrique Do Nascimento Araújo, Luciano Ferreira de Azevedo, and Francisco Fernandes de Araújo Neto. "Um metamodelo para elaboração, aplicação e análise de autoavaliações institucionais em conformidade com o SINAES." Revista Principia - Divulgação Científica e Tecnológica do IFPB 1, no. 44 (April 4, 2019): 122. http://dx.doi.org/10.18265/1517-03062015v1n44p122-131.

Abstract:
The legislation that establishes the National Higher Education Evaluation System (SINAES) states that the institutional self-evaluation process in Brazilian Higher Education Institutions (BHEI) must be carried out periodically and compulsorily. However, the legislation, regulations, and technical notes do not provide a toolset (evaluative instrumentation, qualitative and quantitative metrics, diagnostic criteria, etc.) for the conception, application, and data analysis of self-evaluation processes. Among the various techniques and instruments used by many BHEI, this research sought to collect the most productive and efficient ones compatible with SINAES, so as to obtain a metamodel able to guide the assembly and management of computerized internal institutional evaluations; software interface prototyping techniques were employed for its validation.
36

Unudulmaz, Ahmet, Mustafa Özgür Cingiz, and Oya Kalıpsız. "Adaptation of the Four Levels of Test Maturity Model Integration with Agile and Risk-Based Test Techniques." Electronics 11, no. 13 (June 24, 2022): 1985. http://dx.doi.org/10.3390/electronics11131985.

Full text
Abstract:
Projects that end in failure, erroneously managed processes, products and projects delivered late, runaway costs, and an inability to analyze customer requests correctly have all paved the way for agile software development methods and have made test processes increasingly important. The situation is complicated further when testing processes and risks are handled poorly under time and cost pressure, when software development methods differ between projects, and when the risk management and risk analysis conducted within a company or institution are not integrated with its software development methods. Agile process methods and Test Maturity Model integration (TMMI), together with risk-based testing and user scenario testing techniques, are recommended to eliminate such problems. In this study, the agile transformation of a company operating in factory automation systems was followed for two and a half years. The study addresses a gap in the literature on integrating TMMI level 2, TMMI level 3, and TMMI level 4 with the SAFe methodology and agile processes. Our research covers the use of all TMMI level sub-steps with both agile process practices and selected test practices (risk-based testing and user scenario testing techniques). TMMI coverage was determined as 92.85% for TMMI level 2, 92.9% for TMMI level 3, and 100% for TMMI level 4. In addition, agile process adaptation metrics and their measurements between project versions are shown, and their contribution to quality is discussed.
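The arithmetic behind a coverage percentage is simply the share of satisfied items per maturity level. The sub-practice counts below are invented, since the abstract does not report them; they are chosen only so the ratios land near the reported figures.

```python
# Illustrative arithmetic only: coverage = satisfied sub-practices / total.
# These counts are made up, not taken from the study.
assessed = {
    "TMMI level 2": {"satisfied": 26, "total": 28},
    "TMMI level 3": {"satisfied": 39, "total": 42},
    "TMMI level 4": {"satisfied": 17, "total": 17},
}
for level, c in assessed.items():
    coverage = 100.0 * c["satisfied"] / c["total"]
    print(f"{level}: {coverage:.1f}% coverage")
```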
APA, Harvard, Vancouver, ISO, and other styles
37

Robles-Schrader, Grisel M., Keith A. Herzog, and Josefina Serrato. "3347 Developing Relevant Community Engagement Metrics to Evaluate Engagement Support and Outcomes." Journal of Clinical and Translational Science 3, s1 (March 2019): 87–88. http://dx.doi.org/10.1017/cts.2019.201.

Full text
Abstract:
OBJECTIVES/SPECIFIC AIMS: The goals in this project were two-fold:. Develop metrics that assessed community engagement support the center provides, and. Systematically document the fluid and time-intensive nature of providing community engaged research support, as well as key outcomes. METHODS/STUDY POPULATION: The CCH utilized REDCap software in combination with Excel, to create and implement a data collection system to monitor and report on the full spectrum of engagement activities offered by the center. Center staff collaborated in identifying relevant metrics, developing the data collection instruments, and beta-testing instruments with real examples. This facilitated the integration of contextual factors (defined as factors such as the history, size, and diversity of the community, the organizational mission, the structure and size of the CE team, the number of years a university has been supporting community-engaged research work, etc.). Taking a collaborative approach in developing the center’s evaluation plan offered the added benefit of facilitating staff/faculty buy-in, building staff capacity, and engaging the team in understanding concepts related to performance measurement versus management. RESULTS/ANTICIPATED RESULTS: Key benefits of these engagement tracking systems include: consolidating data into a central location, standardizing tracking processes and critical definitions, and supporting more automated reporting systems (e.g., dashboards) that facilitate quality improvement and highlight success stories. Data were compiled and reported via on-line dashboard (REDCap and Tableau) to help center leadership and staff analyze:. Quality improvement issues (How quickly are we responding to a request for support? Are we providing resources that meet the needs of community partners? Academics? Community-academic partnerships?);. Qualitative process analysis (In what research phase are we typically receiving requests for support (e.g. proposal development phase, implementation phase, etc.)? What types of projects are applying for seed grants? After the seed grant ends, are the community-academic partnerships continuing to partner on research activities?);. Outcomes (Are new partnerships stemming from our support? Are supported research projects leading to new policies, practices, programs?). DISCUSSION/SIGNIFICANCE OF IMPACT: There is a gap in the literature regarding meaningful, actionable, and feasible community engaged metrics that capture critical processes and outcomes. This project identified many more relevant metrics and demonstrates that it is worthwhile to take a collaborative, inclusive approach to identifying, tracking, and reporting on key process and outcome metrics in order to convey a more comprehensive picture of community engagement activities and to inform continuous improvement efforts. Community engagement centers across CTSIs offer a similar range of programs and services. At the same time, much of the community-engaged research literature describes metrics related to community-academic grant submissions, funds awarded, and peer-reviewed publications. Experts that work in the arena of providing community engagement support recognize that these metrics are sufficient in understanding the spectrum of engagement opportunities. Community engagement (CE) teams nationally can utilize these metrics in developing their evaluation infrastructure. At the national level, NCATS can utilize the metrics for CE common metrics related to these programs and services. 
Critical to this process:. Leveraging resources that will facilitate collecting generalizable data (national metrics) while allowing sites to continue collecting nuanced data (local programs and services). Gathering input from CE teams, stakeholders, and researchers to further refine these metrics and data collection methods. Utilizing REDCap, Tableau and other resources that can facilitate data collection and analysis efforts.
APA, Harvard, Vancouver, ISO, and other styles
38

Rusinaru, Denisa, Claudiu Popirlan, Gabriel Stoian, Cosmin Buzatu, Andrei Negoita, Leonardo Geo Manescu, Adrian Cojoaca, and Mihai Mircea. "Extended Capabilities of the Power Quality Management System of the Power Distribution Grid for Data Exchange." MATEC Web of Conferences 210 (2018): 02045. http://dx.doi.org/10.1051/matecconf/201821002045.

Full text
Abstract:
The operation of modern power distribution systems requires high power quality (PQ), which fully justifies investment in improving the metrics of power grid performance. However, maintaining a PQ data infrastructure across numerous locations is time-consuming and prohibitively expensive, and each PQ monitor produces its own data format. These considerations justify a central PQ management system able to handle incomplete and time-discontinuous information. The paper presents the characteristics of a web-based application designed to collect data measured by different monitoring technologies and translate them into a common PQ data interchange format, allowing comprehensive, long-duration assessment of grid operation. The uniformly formatted data generated by this customized software tool can be further processed by a proprietary PQ management platform owned by the network operator. The present version of this conversion tool is applicable only to one product family operating in the local power distribution grid; further development is planned to integrate two other monitor vendors/families.
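The core of such a conversion tool is a per-vendor mapping from a proprietary export format onto a common interchange record. The sketch below illustrates the idea only; the field names, the delimiter, and the common schema are all hypothetical, since the paper does not publish its formats.

```python
# Hypothetical vendor-to-common-schema mapping; all field names are invented.
import csv
import io
from datetime import datetime, timezone

vendor_csv = """ts;u_rms;thd_u
2018-06-01 10:00:00;229.8;2.1
2018-06-01 10:10:00;231.2;2.4
"""

def to_common(row):
    """Translate one vendor-specific row into a common PQ record."""
    return {
        "timestamp_utc": datetime.strptime(row["ts"], "%Y-%m-%d %H:%M:%S")
                                 .replace(tzinfo=timezone.utc).isoformat(),
        "voltage_rms_V": float(row["u_rms"]),
        "voltage_thd_pct": float(row["thd_u"]),
    }

reader = csv.DictReader(io.StringIO(vendor_csv), delimiter=";")
records = [to_common(r) for r in reader]
print(records[0])
```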
APA, Harvard, Vancouver, ISO, and other styles
39

Gupta, Varun, Jose Maria Fernandez-Crehuet, Chetna Gupta, and Thomas Hanne. "Freelancing Models for Fostering Innovation and Problem Solving in Software Startups: An Empirical Comparative Study." Sustainability 12, no. 23 (December 3, 2020): 10106. http://dx.doi.org/10.3390/su122310106.

Full text
Abstract:
Context: freelancers and startups can provide each other with promising opportunities that lead to mutual growth by improving software development metrics such as cost, time, and quality. Niche skills possessed by freelancers can help startups reduce the uncertainties associated with development and markets, giving them the ability to address market issues quickly and with higher quality. This requires associations between freelancers and startups that are long-term, based on trust, and governed by promising agreements driven by motivations that lead to the growth of both parties. Freelancers can help startups foster innovation and undertake software development tasks better than in-house teams if they are selected through informed decision-making. Objectives: the paper has three objectives: (1) to explore the strategies of startups for outsourcing software development tasks to freelancers (termed freelancing association strategies); (2) to identify challenges in such outsourcing; and (3) to identify the impacts of outsourcing tasks to freelancers on overall project metrics. The overall objective is to understand strategies for involving freelancers in the software development process throughout the startup lifecycle, together with the associated challenges and the impacts that help foster innovation (to maintain competitive advantage). Method: the paper reports empirical studies consisting of case studies of three software startups located in Italy, France, and India, followed by a survey of 54 freelancers. The results are analyzed and compared to identify association models, issues, challenges, and the reported outcomes of such associations. The case study results were validated through member checking with the research participants, which showed a high level of agreement. Results: the results indicate that freelancer association strategies are task-based, panel-based, or hybrid. The associations are constrained by issues such as setting prices and deadlines, difficulty in finding good freelancers, quality issues with software artefacts, and the effort needed to assess freelancer work submissions for reward. The associations have a positive impact on software development when good freelancers are available and the association lasts across various tasks. The paper finally provides a freelancing model framework and recommends activities that could make the situation beneficial to both parties and streamline such associations. Fostering innovation in startups is thus a trade-off, limited and supported by many conflicting parameters.
APA, Harvard, Vancouver, ISO, and other styles
40

Das, Ankush, Di Wang, and Jan Hoffmann. "Probabilistic Resource-Aware Session Types." Proceedings of the ACM on Programming Languages 7, POPL (January 9, 2023): 1925–56. http://dx.doi.org/10.1145/3571259.

Full text
Abstract:
Session types guarantee that message-passing processes adhere to predefined communication protocols. Prior work on session types has focused on deterministic languages but many message-passing systems, such as Markov chains and randomized distributed algorithms, are probabilistic. To implement and analyze such systems, this article develops the meta theory of probabilistic session types with an application focus on automatic expected resource analysis. Probabilistic session types describe probability distributions over messages and are a conservative extension of intuitionistic (binary) session types. To send on a probabilistic channel, processes have to utilize internal randomness from a probabilistic branching or external randomness from receiving on a probabilistic channel. The analysis for expected resource bounds is smoothly integrated with the type system and is a variant of automatic amortized resource analysis. Type inference relies on linear constraint solving to automatically derive symbolic bounds for various cost metrics. The technical contributions include the meta theory that is based on a novel nested multiverse semantics and a type-reconstruction algorithm that allows flexible mixing of different sources of randomness without burdening the programmer with complex type annotations. The type system has been implemented in the language NomosPro with linear-time type checking. Experiments demonstrate that NomosPro is applicable in different domains such as cost analysis of randomized distributed algorithms, analysis of Markov chains, probabilistic analysis of amortized data structures and digital contracts. NomosPro is also shown to be scalable by (i) implementing two broadcast and a bounded retransmission protocol where messages are dropped with a fixed probability, and (ii) verifying the limiting distribution of a Markov chain with 64 states and 420 transitions.
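NomosPro derives expected cost bounds symbolically through its type system; as a plain-Python sanity check of the kind of quantity involved, the sketch below computes the expected number of sends in a bounded retransmission protocol (one of the paper's case studies) both analytically and by simulation. This illustrates the target quantity only, not session types or the inference algorithm itself.

```python
# Expected sends when each message is dropped with probability p and the
# sender retries at most n times: sum_{k=1..n} p^(k-1) = (1 - p^n) / (1 - p).
import random

def expected_sends(p, n):
    return (1 - p**n) / (1 - p)

def simulate(p, n, trials=200_000, seed=1):
    random.seed(seed)
    total = 0
    for _ in range(trials):
        for _attempt in range(n):
            total += 1                  # one send costs one unit
            if random.random() >= p:    # delivered: stop retrying
                break
    return total / trials

p, n = 0.3, 5
print(expected_sends(p, n))   # analytic bound: ~1.4251
print(simulate(p, n))         # empirical estimate, should agree closely
```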
APA, Harvard, Vancouver, ISO, and other styles
41

Nemati, Hani, Seyed Vahid Azhari, Mahsa Shakeri, and Michel Dagenais. "Host-Based Virtual Machine Workload Characterization Using Hypervisor Trace Mining." ACM Transactions on Modeling and Performance Evaluation of Computing Systems 6, no. 1 (June 2021): 1–25. http://dx.doi.org/10.1145/3460197.

Full text
Abstract:
Cloud computing is a fast-growing technology that provides on-demand access to a pool of shared resources. This type of distributed and complex environment requires advanced resource management solutions that can model virtual machine (VM) behavior. Different workload measurements, such as CPU, memory, disk, and network usage, are usually derived from each VM to model resource utilization and group similar VMs. However, these coarse workload metrics require internal access to each VM with an available performance analysis toolkit, which is not feasible under the privacy policies of many cloud environments. In this article, we propose non-intrusive, host-based virtual machine workload characterization using hypervisor tracing. VM blocking durations, along with virtual interrupt injection rates, are derived as features to reveal multiple levels of resource intensiveness. In addition, the VM exit reason is considered, as well as the rate of resource contention due to the host and other VMs. Moreover, the process and thread preemption rates in each VM are extracted from the collected tracing logs. Our approach further improves the selected features by exploiting a page-ranking-based algorithm to filter out unimportant processes running on each VM. Once the metric features are defined, a two-stage VM clustering technique is employed to perform both coarse- and fine-grained workload characterization. The inter-cluster and intra-cluster similarity metrics of the silhouette score are used to reveal distinct VM workload groups, as well as those with significant overlap. The proposed framework provides a detailed view of the underlying behavior of running VMs, which can assist infrastructure administrators in efficient resource management as well as root cause analysis.
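The two-stage clustering step can be sketched with scikit-learn: a coarse k-means pass over per-VM trace features, then a fine-grained pass inside each coarse group, with silhouette scores gauging separation. The features below are synthetic stand-ins, not real hypervisor trace metrics.

```python
# Two-stage clustering sketch on synthetic per-VM features:
# [blocking duration, virtual IRQ injection rate, VM exit rate].
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc, 0.3, size=(40, 3))
               for loc in ([0, 0, 0], [3, 3, 0], [3, 0, 3])])

coarse = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("coarse silhouette:", silhouette_score(X, coarse.labels_))

for c in range(3):                  # fine-grained pass inside each coarse group
    members = X[coarse.labels_ == c]
    fine = KMeans(n_clusters=2, n_init=10, random_state=0).fit(members)
    # A low fine-grained score flags a group without meaningful substructure.
    print(f"group {c}: fine silhouette =",
          silhouette_score(members, fine.labels_))
```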
APA, Harvard, Vancouver, ISO, and other styles
42

Moreno, Jose Luis, Jose Fernando Ortega, Miguel Ángel Moreno, and Rocío Ballesteros. "Using an unmanned aerial vehicle (UAV) for lake management: ecological status, lake regime shift and stratification processes in a small Mediterranean karstic lake." Limnetica 41, no. 2 (June 15, 2022): 1. http://dx.doi.org/10.23818/limn.41.21.

Full text
Abstract:
High-resolution remote sensing imagery from unmanned aerial vehicles (UAVs) has been used as a tool for the environmental management of natural resources. Monitoring programmes that evaluate the ecological status of water bodies according to the Water Framework Directive (WFD) involve significant costs and sampling efforts that can be reduced by using UAVs. UAV imagery was used to measure some metrics of the "macrophytes and phytobenthos" biological quality element (BQE), which is required to assess the ecological status of European lakes, e.g. the percentage cover of hydrophytes and helophytes. Eight UAV flights took place during an annual cycle (July 2016 to July 2017) over a small karstic lake located in southeast Spain. Limnological surveys of physicochemical (temperature, conductivity, dissolved oxygen, pH) and biological (pigment) parameters were performed simultaneously to correctly interpret the UAV images. For each flight, an orthomosaic of georeferenced RGB images was obtained, and the different features of interest were monitored and quantified by an automated identification and classification system (the LAIC software). The UAV images allowed us not only to evaluate the lake's ecological status by measuring macrophyte metrics, but also to detect ecological events relevant for environmental management. A gradual burial of charophyte meadows by the proliferation of periphytic cyanobacteria was detected at an early stage in the UAV images. Stratification processes, such as hypolimnetic sulphur bacteria blooms and metalimnetic white colloidal layers, were also observed in the UAV imagery. We conclude that UAV imagery is a useful tool for environmental lake management.
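One of the WFD metrics mentioned, percentage cover, reduces to counting classified pixels. The sketch below assumes a hypothetical classified raster, with invented class codes, standing in for the LAIC classification output.

```python
# Percentage cover from a classified orthomosaic; raster and classes are toy.
import numpy as np

CLASSES = {0: "water", 1: "hydrophytes", 2: "helophytes"}
rng = np.random.default_rng(7)
raster = rng.choice(list(CLASSES), size=(500, 500), p=[0.6, 0.25, 0.15])

total = raster.size
for code, name in CLASSES.items():
    cover = 100.0 * np.count_nonzero(raster == code) / total
    print(f"{name}: {cover:.1f}% cover")
```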
APA, Harvard, Vancouver, ISO, and other styles
43

Söbke, Heinrich. "Exploring (Collaborative) Generation and Exploitation of Multiple Choice Questions: Likes as Quality Proxy Metric." Education Sciences 12, no. 5 (April 21, 2022): 297. http://dx.doi.org/10.3390/educsci12050297.

Full text
Abstract:
Multiple Choice Questions (MCQs) are an established medium in formal educational contexts. The collaborative generation of MCQs by students follows the perspectives of constructionist and situated learning and is an activity that fosters learning processes. Besides the learning processes themselves, the MCQs generated are further outcomes of collaborative generation. Quality MCQs are a valuable resource, so collaboratively generated quality MCQs might also be exploited in further educational scenarios. However, the quality MCQs first need to be identified within the corpus of all generated MCQs. This article investigates whether Likes distributed by students when answering MCQs are a viable metric for identifying quality MCQs. Additionally, this study explores whether the process of collaboratively generating MCQs and using the resulting quality MCQs in commercial quiz apps is achievable without additional extrinsic motivators. Accordingly, this article describes the results of a two-stage field study. The first stage investigates whether quality MCQs can be identified through collaborative inputs. For this purpose, the Reading Game (RG), a gamified, web-based software application for collaborative MCQ generation, was employed as a semester-long learning activity in a bachelor course on Urban Water Management. The reliability of a proxy metric for quality, calculated as the ratio of Likes received to appearances in quizzes, is compared with quality estimations by domain experts for selected MCQs. The selection comprised the ten best and the ten worst rated MCQs, each rated along five dimensions. The results support the assumption that the RG-derived quality metric allows identification of well-designed MCQs. In the second stage, MCQs created in the RG were provided in a commercial quiz app (QuizUp) in a voluntary educational scenario. Despite the prevailing pressure to learn, neither the motivational effects of the RG nor those of the app were found sufficient to encourage students to use them voluntarily on a regular basis. Besides confirming that quality MCQs can be generated with collaborative software, the results indicate that Likes may serve as a proxy metric for the quality of collaboratively generated MCQs.
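The proxy metric itself is a one-line computation, the ratio of Likes received to quiz appearances, as this sketch shows with invented counts.

```python
# Likes-based proxy metric: quality ~ likes / appearances per MCQ (toy data).
mcqs = {
    "Q1": {"likes": 18, "appearances": 40},
    "Q2": {"likes": 3,  "appearances": 35},
    "Q3": {"likes": 9,  "appearances": 12},
}

def like_ratio(stats):
    return stats["likes"] / stats["appearances"] if stats["appearances"] else 0.0

ranked = sorted(mcqs, key=lambda q: like_ratio(mcqs[q]), reverse=True)
for q in ranked:
    print(q, f"{like_ratio(mcqs[q]):.2f}")   # Q3 ranks first (0.75)
```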
APA, Harvard, Vancouver, ISO, and other styles
44

Kulkarni, Anagha, Mike Wong, Tejasvi Belsare, Risha Shah, Diana Yu Yu, Bera Coskun, Carrie Holschuh, Venoo Kakar, Sepideh Modrek, and Anastasia Smirnova. "Quantifying the Quality of Web-Based Health Information on Student Health Center Websites Using a Software Tool: Design and Development Study." JMIR Formative Research 6, no. 2 (February 2, 2022): e32360. http://dx.doi.org/10.2196/32360.

Full text
Abstract:
Background: The internet has become a major source of health information, especially for adolescents and young adults. Unfortunately, inaccurate, incomplete, or outdated health information is widespread on the web. Often adolescents and young adults turn to authoritative websites such as the student health center (SHC) website of the university they attend to obtain reliable health information. Although most on-campus SHC clinics comply with the American College Health Association standards, their websites are not subject to any standards or code of conduct. In the absence of quality standards or guidelines, monitoring and compliance processes do not exist for SHC websites. Thus, there is no oversight of the health information published on SHC websites by any central governing body. Objective: The aim of this study is to develop, describe, and validate an open-source software that can effectively and efficiently assess the quality of health information on SHC websites in the United States. Methods: Our cross-functional team designed and developed an open-source software, QMOHI (Quantitative Measures of Online Health Information), that assesses information quality for a specified health topic from all SHC websites belonging to a predetermined list of universities. The tool was designed to compute 8 different quality metrics that quantify various aspects of information quality based on the retrieved text. We conducted and reported results from 3 experiments that assessed the QMOHI tool in terms of its scalability, generalizability in health topics, and robustness to changes in universities' website structure. Results: Empirical evaluation has shown the QMOHI tool to be highly scalable and substantially more efficient than manually assessing web-based information quality. The tool's runtime was dominated by network-related tasks (98%), whereas the metric computations take <2 seconds. QMOHI demonstrated topical versatility, evaluating SHC website information quality for four disparate and broad health topics (COVID, cancer, long-acting reversible contraceptives, and condoms) and two narrowly focused topics (hormonal intrauterine device and copper intrauterine device). The tool exhibited robustness, correctly measuring information quality despite changes in SHC website structure. QMOHI can support longitudinal studies by being robust to such website changes. Conclusions: QMOHI allows public health researchers and practitioners to conduct large-scale studies of SHC websites that were previously too time- and cost-intensive. The capability to generalize broadly or focus narrowly allows a wide range of applications of QMOHI, allowing researchers to study both mainstream and underexplored health topics. QMOHI's ability to robustly analyze SHC websites periodically promotes longitudinal investigations and allows QMOHI to be used as a monitoring tool. QMOHI serves as a launching pad for our future work that aims to develop a broadly applicable public health tool for web-based health information studies with potential applications far beyond SHC websites.
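The paper defines eight specific quality metrics; as a hedged example of the general flavor (a metric computed from retrieved page text), the sketch below scores coverage of topic-relevant terms in a text snippet. The term list, the text, and the metric definition are hypothetical, not QMOHI's.

```python
# Hypothetical text-based quality metric: fraction of topic terms present.
import re

topic_terms = {"contraceptive", "iud", "hormonal", "copper", "insertion",
               "effectiveness", "side", "effects"}

page_text = """The student health center offers IUD insertion. Both hormonal
and copper IUD options are available; effectiveness exceeds 99%."""

tokens = set(re.findall(r"[a-z]+", page_text.lower()))
coverage = len(topic_terms & tokens) / len(topic_terms)
print(f"term coverage: {coverage:.2f}")   # here 5 of 8 terms are present
```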
APA, Harvard, Vancouver, ISO, and other styles
45

Thompson, Tommy, and Michele Vinciguerra. "Integrated Pathfinding and Player Analysis for Touch-Driven Games." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 10, no. 3 (June 29, 2021): 67–68. http://dx.doi.org/10.1609/aiide.v10i3.12743.

Full text
Abstract:
This paper introduces a software framework, currently at the prototype stage, designed to permit the automatic integration of pathfinding navigation in isometric camera-driven games. The framework is built with an emphasis on touchscreen-driven gameplay on tablets and smartphones. This enables pathfinding to be adopted not only by characters in-game, but also permits a range of traditional and experimental control schemes to utilise this feature. In addition, the framework provides for a range of metrics to be established based on user activity, enabling quality assurance to be conducted on raw user input and pre-processed data.
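The core mechanic, translating a touch target into a navigable route, can be illustrated with a minimal breadth-first search over a walkability grid. This standalone sketch mirrors the concept only; the paper's framework integrates with isometric scenes and touch input handling.

```python
# Toy tap-to-move: BFS from the character's cell to the tapped cell.
from collections import deque

GRID = ["....#...",
        ".##.#.#.",
        "....#.#.",
        ".##...#.",
        "........"]   # '#' marks blocked cells

def bfs_path(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back through predecessors
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and GRID[nr][nc] != "#" and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                           # tapped cell unreachable

print(bfs_path((0, 0), (4, 7)))          # route from spawn to the tap target
```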
APA, Harvard, Vancouver, ISO, and other styles
46

Rudahunga, Christian, Henry Kiragu, and Mary Ahuna. "Fusion Based MR Images Denoising Technique Using Frequency Domain and Non-Local Means Filters." WSEAS TRANSACTIONS ON SIGNAL PROCESSING 18 (October 4, 2022): 153–63. http://dx.doi.org/10.37394/232014.2022.18.22.

Full text
Abstract:
The non-invasive and non-ionizing properties of Magnetic Resonance Imaging (MRI), in addition to its good image quality and high resolution, make MRI more attractive than many other medical imaging techniques. However, during the acquisition, transmission, compression, and storage processes, Magnetic Resonance (MR) images are corrupted by various types of noise and artifacts that degrade their visual quality. Most existing MR image denoising techniques give good-quality images only when the noise density is low, with their performance deteriorating as the noise power increases. The few methods that yield high-quality images for all noise densities involve multiple complex and time-consuming processes. This paper proposes a computationally simple MR image denoising technique that consistently gives good denoising results for low as well as high noise densities. The proposed procedure fuses an MR image denoised by a Modified Discrete Fast Fourier Transform (MDFFT) filter with one denoised using a non-local means filter in the frequency domain, to yield a high-quality output image. The main contribution of the proposed method is a novel image fusion approach that greatly improves the quality of the denoised image. The performance of the proposed technique is compared with those of the Wiener, median, adaptive median, and MDFFT filters. Objective metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index were used in the performance assessments. The outcomes of these assessments showed that the proposed algorithm yielded images of higher quality, in terms of the PSNR measure, than the existing denoising techniques by at least 7.11 dB for noise densities of up to 0.5.
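Of the two objective metrics used, PSNR is the simpler: PSNR = 10 log10(MAX^2 / MSE), with MAX = 255 for 8-bit images. The sketch below computes it on synthetic stand-in images.

```python
# PSNR for an 8-bit image pair; images here are random stand-ins.
import numpy as np

rng = np.random.default_rng(3)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
denoised = np.clip(clean + rng.normal(0, 5, clean.shape), 0, 255)

mse = np.mean((clean - denoised) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"PSNR = {psnr:.2f} dB")   # higher means closer to the clean image
```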
APA, Harvard, Vancouver, ISO, and other styles
47

Murtiyoso, A., P. Grussenmeyer, and N. Börlin. "REPROCESSING CLOSE RANGE TERRESTRIAL AND UAV PHOTOGRAMMETRIC PROJECTS WITH THE DBAT TOOLBOX FOR INDEPENDENT VERIFICATION AND QUALITY CONTROL." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W8 (November 13, 2017): 171–77. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w8-171-2017.

Full text
Abstract:
Photogrammetry has recently seen a rapid increase in many applications, thanks to developments in computing power and algorithms. Furthermore, with the democratisation of UAVs (Unmanned Aerial Vehicles), close range photogrammetry has seen more and more use due to the easier acquisition of aerial close range images. In terms of photogrammetric processing, many commercial software solutions on the market offer results from user-friendly environments. However, most commercial solutions take a black-box approach to photogrammetric calculations. This is understandable in light of the proprietary nature of the algorithms, but it may pose a problem if the results need to be validated independently. In this paper, the Damped Bundle Adjustment Toolbox (DBAT), developed for Matlab, was used to reprocess photogrammetric projects originally processed with the commercial software Agisoft Photoscan. Several scenarios were tested in order to assess the performance of DBAT in reprocessing terrestrial and UAV close range photogrammetric projects under several self-calibration configurations. Results show that DBAT managed to reprocess the Photoscan projects and generate metrics useful for project verification.
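DBAT itself is a Matlab toolbox; as a language-neutral illustration of the kind of verification metric a bundle adjustment reports, the Python sketch below computes an RMS reprojection error from synthetic observed and reprojected image points.

```python
# RMS reprojection error: observed image points vs. points reprojected from
# the adjusted network. All coordinates below are synthetic.
import numpy as np

rng = np.random.default_rng(11)
observed = rng.uniform(0, 4000, size=(500, 2))               # measured (px)
reprojected = observed + rng.normal(0, 0.4, observed.shape)  # after adjustment

residuals = np.linalg.norm(observed - reprojected, axis=1)
rms = np.sqrt(np.mean(residuals ** 2))
print(f"RMS reprojection error: {rms:.3f} px")
```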
APA, Harvard, Vancouver, ISO, and other styles
48

Павленко, М. А., С. В. Осієвський, and Ю. В. Данюк. "Methodological foundation for improving the quality of intelligent decision-making system software." Системи обробки інформації, no. 1(164) (March 17, 2021): 55–64. http://dx.doi.org/10.30748/soi.2021.164.06.

Full text
Abstract:
On the basis of a detailed analysis, existing terminological interpretations of the concept of "software quality" are generalized, and conclusions are drawn about how well the terms used to assess the quality of general-purpose software fit the assessment of software for intelligent decision-making systems (IDMS). It is shown that the quality of IDMS software is a complex multi-criteria indicator that takes into account not only the performance of an individual software module as a subsystem, but also the causal relationships between the elements of the software system itself. The main differences between functional and formal approaches to software quality assessment are presented. The structure of the guarantee capability criterion for decision-making system software is investigated, and conclusions are drawn on the influence of its main components on the evaluation of IDMS software and on ensuring a reliable computing process. Based on an analysis of the attributes and quality metrics of IDMS software, it is established that this guarantee is determined by the reliability of the software structure itself and is characterised by the restoration of the functional state after faults or failures. The interrelationships between IDMS software design quality indicators and the characteristics and sub-characteristics of IDMS software are established; an example of the relationship between characteristics (factors) and quality indicators is given, together with methods of measuring quality indicators and design processes. On the basis of the research conducted, IDMS software failure modes are defined and their impact on the decision-making process is shown. Detailed classes of failures and their influence on the compliance of IDMS software with its development objectives are presented. It is shown that the reliability of IDMS is a dynamic concept, manifested over time, and strongly dependent on the presence or absence of interaction defects. A detailed analysis of methods of software quality assurance and control is carried out, and conclusions are drawn on the possibility of applying them to IDMS software. The maturity model of IDMS software is improved and validated, and the maturity structure of the software is introduced as an indicator of IDMS quality.
APA, Harvard, Vancouver, ISO, and other styles
49

Ramírez-Montañez, Julio Alberto, Marco Antonio Aceves-Fernández, Jesús Carlos Pedraza-Ortega, Efrén Gorrostieta-Hurtado, and Artemio Sotomayor-Olmedo. "Airborne Particulate Matter Modeling: A Comparison of Three Methods Using a Topology Performance Approach." Applied Sciences 12, no. 1 (December 28, 2021): 256. http://dx.doi.org/10.3390/app12010256.

Full text
Abstract:
Understanding the behavior of suspended pollutants in the atmosphere has become of paramount importance for determining air quality. For this purpose, a variety of simulation software packages and a large number of algorithms have been used. Among these techniques, recurrent neural networks (RNNs) have been used lately, as they are capable of learning to imitate the chaotic behavior of a set of continuous data over time. In the present work, the results obtained from implementing three different RNNs with the same structure are compared: a long short-term memory network (LSTM), a gated recurrent unit (GRU), and an Elman network. As a case study, we use the records of particulate matter PM10 and PM2.5 from 2005 to 2019 for Mexico City, obtained from the Red Automatica de Monitoreo Ambiental (RAMA) database. The three topologies were compared in terms of execution time, root mean square error (RMSE), and correlation coefficient (CC).
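A hedged Keras sketch of the comparison's setup: the same single-recurrent-layer structure instantiated with LSTM, GRU, and SimpleRNN (the Elman-style cell), evaluated with RMSE and CC. It trains on a synthetic series rather than the RAMA records, and the hyperparameters are illustrative, not the study's.

```python
# Same topology, three recurrent cells, compared on RMSE and CC (toy data).
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, GRU, SimpleRNN, Dense, Input

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 2000)) + rng.normal(0, 0.1, 2000)
win = 24  # sliding-window length (e.g. 24 hourly readings)
X = np.array([series[i:i + win] for i in range(len(series) - win)])[..., None]
y = series[win:]

for cell in (LSTM, GRU, SimpleRNN):
    model = Sequential([Input(shape=(win, 1)), cell(32), Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    pred = model.predict(X, verbose=0).ravel()
    rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
    cc = float(np.corrcoef(pred, y)[0, 1])
    print(cell.__name__, f"RMSE={rmse:.3f}", f"CC={cc:.3f}")
```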
APA, Harvard, Vancouver, ISO, and other styles
50

Bergès, Cécilia, Edern Cahoreau, Pierre Millard, Brice Enjalbert, Mickael Dinclaux, Maud Heuillet, Hanna Kulyk, et al. "Exploring the Glucose Fluxotype of the E. coli y-ome Using High-Resolution Fluxomics." Metabolites 11, no. 5 (April 26, 2021): 271. http://dx.doi.org/10.3390/metabo11050271.

Full text
Abstract:
We have developed a robust workflow to measure high-resolution fluxotypes (metabolic flux phenotypes) for large strain libraries under fully controlled growth conditions. This was achieved by optimizing and automating the whole high-throughput fluxomics process and integrating all relevant software tools. This workflow allowed us to obtain highly detailed maps of carbon fluxes in the central carbon metabolism in a fully automated manner. It was applied to investigate the glucose fluxotypes of 180 Escherichia coli strains deleted for y-genes. Since the products of these y-genes potentially play a role in a variety of metabolic processes, the experiments were designed to be agnostic as to their potential metabolic impact. The obtained data highlight the robustness of E. coli’s central metabolism to y-gene deletion. For two y-genes, deletion resulted in significant changes in carbon and energy fluxes, demonstrating the involvement of the corresponding y-gene products in metabolic function or regulation. This work also introduces novel metrics to measure the actual scope and quality of high-throughput fluxomics investigations.
APA, Harvard, Vancouver, ISO, and other styles