
Journal articles on the topic 'SUT (SOFTWARE UNDER TESTING)'


Consult the top 50 journal articles for your research on the topic 'SUT (SOFTWARE UNDER TESTING).'


1

Rahmani, Ani, Joe Lian Min, and S. Suprihanto. "SOFTWARE UNDER TEST DALAM PENELITIAN SOFTWARE TESTING: SEBUAH REVIEW" [Software under test in software testing research: a review]. JTT (Jurnal Teknologi Terapan) 7, no. 2 (October 22, 2021): 181. http://dx.doi.org/10.31884/jtt.v7i2.362.

Abstract:
The Software under Test (SUT) is an essential element of software testing research. Preparing an SUT is not simple: it requires accuracy and completeness, and it affects the quality of the research conducted. Currently, there are several ways to obtain an SUT for software testing research: building one's own SUT, building an SUT from open-source code, and taking an SUT from a repository. This article discusses the results of SUT identification across many software testing studies. The research was conducted as a systematic literature review (SLR) using the Kitchenham protocol. The review covered 86 articles published in 2017-2020, selected in two stages: inclusion and exclusion criteria, followed by quality assessment. The results show that the use of open source is dominant: some researchers use open source as the basis for developing an SUT, while others take SUTs from repositories that provide them ready to use. In this context, the Software-artifact Infrastructure Repository (SIR) and Defects4J are the most frequent choices.
2

Mishra, Deepti Bala, Biswaranjan Acharya, Dharashree Rath, Vassilis C. Gerogiannis, and Andreas Kanavos. "A Novel Real Coded Genetic Algorithm for Software Mutation Testing." Symmetry 14, no. 8 (July 26, 2022): 1525. http://dx.doi.org/10.3390/sym14081525.

Abstract:
Information technology has rapidly developed in recent years, and software systems can play a critical role in the symmetry of the technology. In the field of software testing, white-box unit-level testing constitutes the backbone of all other testing techniques, as testing can be entirely implemented by considering the source code of each System Under Test (SUT). In unit-level white-box testing, mutants can be used; these mutants are artificially generated faults seeded in each SUT that behave similarly to realistic ones. Executing test cases against mutants yields the adequacy (mutation) score of each test case. Efficient Genetic Algorithm (GA)-based methods have been proposed to address different problems in white-box unit testing and, in particular, issues of mutation testing techniques. This paper proposes a new approach that integrates path coverage-based testing with the novel idea of tracing a Fault Detection Matrix (FDM) to achieve maximum mutation coverage. The proposed real-coded GA for mutation testing is designed to achieve the highest mutation score, and it is thus named RGA-MS. The approach is implemented in two phases: path coverage-based test data are initially generated and stored in an optimized test suite; in the next phase, the test suite is executed to kill the mutants present in the SUT. The method aims for the minimum test dataset with the highest mutation score, removing duplicate test data that cover the same mutants. The approach is implemented on the same SUTs as used for path testing. We show that RGA-MS can cover the maximum number of mutants with a minimum number of test cases, and that it can generate a maximum path coverage-based test suite with less test data generation than other algorithms. In addition, all mutants in the SUT can be covered by fewer test data with no duplicates. Ultimately, the generated optimal test suite is trained to achieve the highest mutation score. The GA is used to find the maximum mutation coverage as well as to delete redundant test cases.
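For intuition, the Fault Detection Matrix idea admits a compact sketch. The snippet below is not the authors' RGA-MS implementation: it assumes a toy FDM (each test case mapped to the mutants it kills) and shows the greedy suite reduction that removing duplicate test data covering the same mutants amounts to.

    # Hedged sketch: greedy test-suite minimisation over an assumed Fault
    # Detection Matrix (FDM); illustrative data, not RGA-MS or its SUTs.
    fdm = {                      # test case -> set of mutants it kills
        "t1": {"m1", "m2"},
        "t2": {"m2"},            # duplicate coverage, subsumed by t1
        "t3": {"m3", "m4"},
        "t4": {"m1", "m4"},
    }
    all_mutants = set().union(*fdm.values())

    selected, killed = [], set()
    while killed != all_mutants:
        # pick the test that kills the most not-yet-killed mutants
        best = max(fdm, key=lambda t: len(fdm[t] - killed))
        gain = fdm[best] - killed
        if not gain:             # remaining mutants unkillable (equivalent)
            break
        selected.append(best)
        killed |= gain

    print(selected)                        # e.g. ['t1', 't3']
    print(len(killed) / len(all_mutants))  # mutation score of reduced suite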
3

Kusharki, Muhammad Bello, Sanjay Misra, Bilkisu Muhammad-Bello, Ibrahim Anka Salihu, and Bharti Suri. "Automatic Classification of Equivalent Mutants in Mutation Testing of Android Applications." Symmetry 14, no. 4 (April 14, 2022): 820. http://dx.doi.org/10.3390/sym14040820.

Abstract:
Software and symmetric testing methodologies are primarily used in detecting software defects, but these testing methodologies need to be optimized to mitigate the wasting of resources. As mobile applications become more prevalent, the need for mobile applications that satisfy software quality through testing cannot be overemphasized. Testing suites and software quality assurance techniques have also become prevalent, which underscores the need to evaluate the efficacy of these tools in testing the applications. Mutation testing is one such technique: the process of injecting small changes into the software under test (SUT), thereby creating mutants. These mutants are then tested using mutation testing techniques alongside the SUT to determine the effectiveness of test suites through mutation scoring. Although mutation testing is effective, the cost of implementing it, due to the problem of equivalent mutants, is very high. Many research works have proposed varying solutions to this problem, but none used a standardized dataset. In this work, we employed a standard mutant dataset tool called MutantBench to generate our data. Subsequently, an Abstract Syntax Tree (AST) was used in conjunction with a tree-based convolutional neural network (TBCNN) as our deep learning model to automate the classification of equivalent mutants and thus reduce the cost of mutation testing of Android applications. The results show that the proposed model produces a good accuracy rate of 94%, as well as strong values for other performance metrics such as recall (96%), precision (89%), F1-score (92%), and the Matthews correlation coefficient (88%), with fewer False Negatives and False Positives during testing, which is significant as it implies a decrease in the risk of misclassification.
4

Chandra Prakash, V., Subhash Tatale, Vrushali Kondhalkar, and Laxmi Bewoor. "A Critical Review on Automated Test Case Generation for Conducting Combinatorial Testing Using Particle Swarm Optimization." International Journal of Engineering & Technology 7, no. 3.8 (July 7, 2018): 22. http://dx.doi.org/10.14419/ijet.v7i3.8.15212.

Abstract:
In the software development life cycle, testing plays a significant role in verifying requirement specification, analysis, design, and coding, and in estimating the reliability of a software system. A test manager can write a set of test cases manually for smaller software systems. However, for an extensive software system, the test suite is normally large and prone to errors such as omission of important test cases, duplication of some test cases, and contradicting test cases. When test cases are generated automatically by a tool in an intelligent way, such errors can be eliminated; in addition, it is even possible to reduce the size of the test suite and thereby decrease the cost and time of software testing. Reducing test suite size is a challenging job. When there are interacting inputs to the Software under Test (SUT), combinatorial testing is essential to ensure higher reliability, from 72% to 91% or even more. A meta-heuristic algorithm like Particle Swarm Optimization (PSO) solves the optimization problem of automated combinatorial test case generation, and many authors have contributed to the field of combinatorial test case generation using PSO algorithms. We have reviewed important research papers on automated test case generation for combinatorial testing using PSO. This paper provides a critical review of the use of PSO and its variants for solving the classical optimization problem of automatic test case generation for combinatorial testing.
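The PSO idea surveyed here can be made concrete with a small sketch. The following is a minimal, PSO-flavoured illustration, not any of the reviewed algorithms: a pairwise-covering test suite for a hypothetical SUT with four parameters is built one row at a time, with a tiny discrete swarm search choosing each row; all constants (swarm size, drift probabilities, domain sizes) are assumptions made for the example.

    import itertools
    import random

    # Hedged sketch of PSO-style pairwise test generation: build a covering
    # array row by row, letting a small PSO-flavoured search pick each row.
    domains = [3, 3, 2, 2]                       # four SUT parameters (assumed)
    idx_pairs = list(itertools.combinations(range(len(domains)), 2))

    def covered_by(row):
        return {(i, j, row[i], row[j]) for i, j in idx_pairs}

    def gain(row, uncovered):
        return len(covered_by(row) & uncovered)

    random.seed(1)
    uncovered = {(i, j, vi, vj) for i, j in idx_pairs
                 for vi in range(domains[i]) for vj in range(domains[j])}
    suite = []
    while uncovered:
        swarm = [[random.randrange(d) for d in domains] for _ in range(8)]
        gbest = max(swarm, key=lambda r: gain(r, uncovered))
        for _ in range(15):                      # PSO-style iterations
            for p in swarm:
                for k, d in enumerate(domains):  # drift toward the global best
                    if random.random() < 0.5:
                        p[k] = gbest[k]
                    elif random.random() < 0.2:
                        p[k] = random.randrange(d)
                if gain(p, uncovered) > gain(gbest, uncovered):
                    gbest = list(p)
        if gain(gbest, uncovered) == 0:          # guarantee progress
            i, j, vi, vj = next(iter(uncovered))
            gbest = [random.randrange(d) for d in domains]
            gbest[i], gbest[j] = vi, vj
        suite.append(gbest)
        uncovered -= covered_by(gbest)
    print(len(suite), "tests cover all parameter-value pairs")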
5

Safdar, Safdar Aqeel, Tao Yue, and Shaukat Ali. "Recommending Faulty Configurations for Interacting Systems Under Test Using Multi-objective Search." ACM Transactions on Software Engineering and Methodology 30, no. 4 (July 2021): 1–36. http://dx.doi.org/10.1145/3464939.

Abstract:
Modern systems, such as cyber-physical systems, often consist of multiple products within/across product lines communicating with each other through information networks. Consequently, their runtime behaviors are influenced by product configurations and networks. Such systems play a vital role in our daily life; thus, ensuring their correctness by thorough testing becomes essential. However, testing these systems is particularly challenging due to a large number of possible configurations and limited available resources. Therefore, it is important and practically useful to test these systems with specific configurations under which products will most likely fail to communicate with each other. Motivated by this, we present a search-based configuration recommendation (SBCR) approach to recommend faulty configurations for the system under test (SUT) based on cross-product line (CPL) rules. CPL rules are soft constraints, constraining product configurations while indicating the most probable system states with a certain degree of confidence. In SBCR, we defined four search objectives based on CPL rules and combined them with six commonly applied search algorithms. To evaluate SBCR (i.e., SBCR-NSGA-II, SBCR-IBEA, SBCR-MoCell, SBCR-SPEA2, SBCR-PAES, and SBCR-SMPSO), we performed two case studies (Cisco and Jitsi) and conducted difference analyses. Results show that for both case studies, SBCR significantly outperformed random search-based configuration recommendation (RBCR) for 86% of the total comparisons based on six quality indicators, and 100% of the total comparisons based on the percentage of faulty configurations (PFC). Among the six variants of SBCR, SBCR-SPEA2 outperformed the others in 85% of the total comparisons based on six quality indicators and 100% of the total comparisons based on PFC.
6

Novella, Luigi, Manuela Tufo, and Giovanni Fiengo. "Automatic Test Set Generation for Event-Driven Systems in the Absence of Specifications Combining Testing with Model Inference." Information Technology And Control 48, no. 2 (June 25, 2019): 316–34. http://dx.doi.org/10.5755/j01.itc.48.2.21725.

Abstract:
The growing dependency of human activities on software technologies is leading to the need for more and more accurate testing techniques to ensure the quality and reliability of software components. A recent literature review of software testing methodologies reveals several new approaches that differ in the way test inputs are generated to efficiently explore system behaviour. This paper is concerned with the challenge of automatically generating test input sets for Event-Driven Systems (EDS) for which neither source code nor specifications are available. We therefore propose an innovative, fully automatic technique that combines testing with model learning. It involves active learning to infer a behavioural model of the System Under Test (SUT) using tests as queries, generates further tests based on the learned model to systematically explore unseen parts of the subject system, and uses passive learning to refine the current model hypothesis as soon as an inconsistency is found with the observed behaviour. Our passive learning algorithm uses the basic steps of Evidence-Driven State Merging (EDSM) and introduces an effective heuristic for choosing the pair of states to merge to obtain the target machine. Finally, the effectiveness of the proposed testing technique is demonstrated in the context of event-based functional testing of Android Graphical User Interface (GUI) applications and compared with existing baseline approaches.
7

Ahmad, A., and D. Al-Abri. "Design of a Realistic Test Simulator For a Built-In Self Test Environment." Journal of Engineering Research [TJER] 7, no. 2 (December 1, 2010): 69. http://dx.doi.org/10.24200/tjer.vol7iss2pp69-79.

Abstract:
This paper presents a realistic test approach suitable for Design for Testability (DFT) and Built-In Self Test (BIST) environments. The approach culminates in a test simulator capable of providing a required test goal for the System Under Test (SUT). The simulator uses fault diagnostics with a fault grading procedure to provide the tests. The tool is developed on a common PC platform, requires no special software, and is therefore low cost and economical. It is well suited for determining realistic test sequences for a targeted testing goal for any SUT. The tool incorporates a flexible Graphical User Interface (GUI) and can be operated without any special programming skill. It has been debugged and tested against the results of many benchmark circuits. Furthermore, the tool can be used for educational purposes in courses such as fault-tolerant computing, fault diagnosis, digital electronics, and safe, reliable, testable digital logic design.
8

Rosero, Raúl H., Omar S. Gómez, and Glen Rodríguez. "15 Years of Software Regression Testing Techniques — A Survey." International Journal of Software Engineering and Knowledge Engineering 26, no. 05 (June 2016): 675–89. http://dx.doi.org/10.1142/s0218194016300013.

Abstract:
Software regression testing techniques verify previously working functionality each time software is modified or new features are added. With the aim of gaining a better understanding of this subject, we present a survey of software regression testing techniques applied in the last 15 years, taking into account their application domain, the kinds of metrics they use, their application strategies, and the phase of the software development process where they are applied. From an initial pool of 460 papers, a set of 25 papers describing the use of 31 software regression testing techniques was identified. The results of this survey suggest that when applying a regression testing technique, metrics like cost and fault detection efficiency are the most relevant. Most of the techniques were assessed with instrumented programs (experimental cases) under academic settings. Conversely, we observe a minimal set of software regression techniques applied in industrial settings, mainly under corrective and maintenance approaches. Finally, we observe a trend of using some regression techniques under agile approaches.
9

Maspupah, Asri, and Akhmad Bakhrun. "PERBANDINGAN KEMAMPUAN REGRESSION TESTING TOOL PADA REGRESSION TEST SELECTION: STARTS DAN EKSTAZI" [A comparison of the capabilities of regression testing tools for regression test selection: STARTS and Ekstazi]. JTT (Jurnal Teknologi Terapan) 7, no. 1 (July 7, 2021): 59. http://dx.doi.org/10.31884/jtt.v7i1.319.

Abstract:
Regression testing is an essential activity in software development when requirements change. In practice, regression testing requires a lot of time, so an optimal strategy is needed. One approach that can speed up execution time is Regression Test Selection (RTS). Practitioners and academics have started developing tools to optimize the implementation of regression testing; among them, STARTS and Ekstazi are the most popular regression testing tools among academics for running test case selection algorithms. This article compares the capabilities of the STARTS and Ekstazi features using feature parameter evaluation. Both tools were tested with the same input data in the form of a System Under Test (SUT) and test cases. The parameters used in the comparison are platform technology, test case selection, functionality, usability and performance efficiency, and the advantages and disadvantages of each tool. The results of the trial show the differences and similarities between the features of STARTS and Ekstazi, so practitioners can choose the tool that suits their regression testing needs. In addition, the experimental results show that Ekstazi is more precise in selecting important test cases and more efficient than STARTS and than regression testing with retest-all.
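Both tools implement variants of change-based selection: record each test's dependencies, then rerun only tests whose dependencies changed. A minimal sketch of checksum-based selection in that spirit follows; the file names, contents and dependency map are invented for illustration and do not reflect either tool's actual metadata format.

    import hashlib

    # Hedged sketch of checksum-based regression test selection in the spirit
    # of Ekstazi/STARTS: rerun a test only if a recorded dependency changed.
    def digest(content: str) -> str:
        return hashlib.sha256(content.encode()).hexdigest()

    deps = {                      # test -> source files it depended on last run
        "test_login":  ["auth.py", "db.py"],
        "test_report": ["report.py"],
    }
    old_sums = {"auth.py":   digest("def login(): ..."),
                "db.py":     digest("def query(): ..."),
                "report.py": digest("def render(): ...")}
    new_files = {"auth.py":   "def login(): ...  # patched",   # changed
                 "db.py":     "def query(): ...",
                 "report.py": "def render(): ..."}

    changed = {f for f, c in new_files.items() if digest(c) != old_sums[f]}
    selected = [t for t, files in deps.items() if changed & set(files)]
    print("changed:", changed)    # {'auth.py'}
    print("rerun:", selected)     # ['test_login'] -- test_report is skipped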
10

HU, HAI, CHANG-HAI JIANG, and KAI-YUAN CAI. "AN IMPROVED APPROACH TO ADAPTIVE TESTING." International Journal of Software Engineering and Knowledge Engineering 19, no. 05 (August 2009): 679–705. http://dx.doi.org/10.1142/s0218194009004349.

Abstract:
Adaptive testing is the counterpart of adaptive control in software testing. It means that the software testing strategy is adjusted on-line using the testing data collected during testing, as our understanding of the software under test improves. Previous studies on adaptive testing used a simplified Controlled Markov Chain (CMC) model of software testing that employs several unrealistic assumptions. In this paper we propose a new adaptive software testing approach in the context of an improved, general CMC model that aims to eliminate such threats to validity. A set of more realistic basic assumptions on the software testing process is proposed, and several unrealistic assumptions are replaced by less unrealistic ones. A new adaptive testing strategy based on the general CMC model is developed and implemented. Mathematical simulations and experiments on real-life software demonstrate the effectiveness of the new strategy.
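The core idea of adaptive testing, adjusting the strategy online from collected data, can be pictured with a toy example. The sketch below is a deliberate simplification (a greedy estimate-and-allocate loop over input partitions), not the paper's Controlled Markov Chain model; the failure rates are assumed.

    import random

    # Hedged toy illustration of adaptive testing: send the next test to the
    # partition with the highest estimated failure rate, updating estimates
    # online. A simplification for intuition, not the paper's CMC model.
    true_rates = {"A": 0.01, "B": 0.08, "C": 0.03}   # unknown to the tester
    runs = {d: 1 for d in true_rates}                # smoothed counters
    fails = {d: 1 for d in true_rates}

    random.seed(0)
    for _ in range(500):
        # pick the partition whose estimated failure probability is highest
        d = max(runs, key=lambda k: fails[k] / runs[k])
        runs[d] += 1
        fails[d] += random.random() < true_rates[d]  # execute one test

    print({d: round(fails[d] / runs[d], 3) for d in runs})  # focuses on B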
11

Neri, Giulia R. "The Use of Exploratory Software Testing in SCRUM." ACM SIGSOFT Software Engineering Notes 48, no. 1 (January 10, 2023): 59–62. http://dx.doi.org/10.1145/3573074.3573089.

Abstract:
Exploratory testing is a very common, yet under-researched, software testing technique. Research has shown that this technique can provide better insight about the system under test than other techniques, that it can find defects more efficiently than other testing approaches, and that it can even aid the design of other techniques. This research aims at increasing the understanding of exploratory testing and the way it is used within industries utilising SCRUM. Another aim is to identify and understand the factors that enable the tester to use this technique successfully. The decision to set the study in SCRUM comes from the fact that this Agile management framework is the most popular in industry and from the suggestion to focus on the relationship between Agile and exploratory testing. Also, the choice of a specific context adds significance to the findings. This research will be conducted in a Sheffield-based company which produces data analytics software. The methodology will consist of three phases. During Phase 1 (Identification), SCRUM practitioners will be interviewed about the use of exploratory testing in SCRUM and the success factors of this technique. The aim of Phase 2 (Confirmation) will be to confirm the findings from Phase 1; this will be accomplished with focus groups and a widely distributed online survey. Finally, during Phase 3 (Verification), practitioners will take part in experiments to verify that the success factors identified during the first two phases enable efficient and effective exploratory testing. The purpose of this research is to enrich the academic field of software verification and validation, but also to provide industries utilising SCRUM with useful guidance.
12

Biswas, Ranadeep, Diptanshu Kakwani, Jyothi Vedurada, Constantin Enea, and Akash Lal. "MonkeyDB: effectively testing correctness under weak isolation levels." Proceedings of the ACM on Programming Languages 5, OOPSLA (October 20, 2021): 1–27. http://dx.doi.org/10.1145/3485546.

Abstract:
Modern applications, such as social networking systems and e-commerce platforms are centered around using large-scale storage systems for storing and retrieving data. In the presence of concurrent accesses, these storage systems trade off isolation for performance. The weaker the isolation level, the more behaviors a storage system is allowed to exhibit and it is up to the developer to ensure that their application can tolerate those behaviors. However, these weak behaviors only occur rarely in practice and outside the control of the application, making it difficult for developers to test the robustness of their code against weak isolation levels. This paper presents MonkeyDB, a mock storage system for testing storage-backed applications. MonkeyDB supports a key-value interface as well as SQL queries under multiple isolation levels. It uses a logical specification of the isolation level to compute, on a read operation, the set of all possible return values. MonkeyDB then returns a value randomly from this set. We show that MonkeyDB provides good coverage of weak behaviors, which is complete in the limit. We test a variety of applications for assertions that fail only under weak isolation. MonkeyDB is able to break each of those assertions in a small number of attempts.
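The heart of the approach is easy to sketch: on a read, compute the set of values the isolation-level specification permits and return one at random. The toy mock below uses a deliberately crude "any committed write" rule, far weaker than the logical isolation-level specifications MonkeyDB implements, just to show the shape of the idea.

    import random
    from collections import defaultdict

    # Hedged sketch of the core mechanism: a mock key-value store that, on
    # each read, picks a random value from the set its (crude) weak-isolation
    # rule allows. Not MonkeyDB's actual specification machinery.
    class WeakKV:
        def __init__(self):
            self.history = defaultdict(list)    # key -> all committed writes

        def write(self, key, value):
            self.history[key].append(value)

        def read(self, key):
            writes = self.history[key]
            return random.choice(writes) if writes else None

    db = WeakKV()
    db.write("balance", 100)
    db.write("balance", 70)
    # Repeated reads may legally observe stale state under weak isolation,
    # which is exactly what assertion-based tests are run against:
    print({db.read("balance") for _ in range(20)})   # likely {100, 70}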
13

Rehan, Muhammad, Norhalina Senan, Muhammad Aamir, Ali Samad, Mujtaba Husnain, Noraini Ibrahim, Sikandar Ali, and Hizbullah Khatak. "A Systematic Analysis of Regression Test Case Selection: A Multi-Criteria-Based Approach." Security and Communication Networks 2021 (September 21, 2021): 1–11. http://dx.doi.org/10.1155/2021/5834807.

Abstract:
In applied software engineering, algorithms for selecting appropriate test cases are used to perform regression testing. The key objective of this activity is to make sure that modifications in the system under test (SUT) have no impact on the overall functioning of the updated software. It is concluded from the literature that the efficacy of test case selection depends on the following metrics: the execution cost of the test case, the lines of code covered in unit time (code coverage), the ability to capture potential faults, and the code modifications. Furthermore, it is also observed that the approaches for regression testing developed so far generated results by focusing on only one or two of these parameters. In this paper, our objectives are twofold: first, to explore the role of each metric in detail; second, to study the combined effect of these metrics on test case selection tasks that aim to achieve more than one objective. A detailed and comprehensive review of work related to regression testing is provided in a distinct and principled way, which will be useful for researchers contributing to the field. Our systematic literature review (SLR) includes noteworthy work published from 2007 to 2020. The study observed that about 52 relevant studies focused on all four metrics to perform their respective tasks. The results also revealed that about 30% of the studies reported results using metaheuristic regression test selection (RTS), and about 31% reported results using generic regression test case selection techniques. Most researchers focus on the datasets Software-artifact Infrastructure Repository (SIR), JodaTime, TreeDataStructure, and Apache Software Foundation projects. For validation purposes, the following parameters were considered: inclusiveness, precision, recall, and retest-all.
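The combined effect of the four metrics can be illustrated with a simple weighted scoring of candidate test cases. The weights and per-test data below are assumptions for illustration, not values taken from any surveyed technique.

    # Hedged sketch: ranking regression test cases by a weighted blend of the
    # four metrics highlighted in the survey.
    tests = {
        "t1": dict(cost=5.0, cov=0.40, faults=2, mod=1),
        "t2": dict(cost=1.0, cov=0.20, faults=0, mod=0),
        "t3": dict(cost=3.0, cov=0.70, faults=1, mod=1),
    }
    w = dict(cov=0.4, faults=0.3, mod=0.2, cost=0.1)

    def score(m):
        # reward coverage, fault-detection history and relevance to the
        # changed code; penalise execution cost
        return (w["cov"] * m["cov"] + w["faults"] * m["faults"]
                + w["mod"] * m["mod"] - w["cost"] * m["cost"])

    for name in sorted(tests, key=lambda t: score(tests[t]), reverse=True):
        print(name, round(score(tests[name]), 2))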
14

Parlato, Aldo, Elio Tomarchio, Cristiano Calligaro, and Calogero Pace. "The methodology for active testing of electronic devices under the radiations." Nuclear Technology and Radiation Protection 33, no. 1 (2018): 53–60. http://dx.doi.org/10.2298/ntrp1801053p.

Abstract:
A methodology developed for active testing of electronic devices under radiation is presented. The test set-up includes a gamma-ray facility, hardware boards/fixtures, and purpose-designed software tools. The methodology is wide-ranging enough to allow verification of different classes of electronic devices, although only application examples for static random access memory modules are reported.
15

Verma, Vibha, Sameer Anand, and Anu Gupta Aggarwal. "Software warranty cost optimization under imperfect debugging." International Journal of Quality & Reliability Management 37, no. 9/10 (October 31, 2019): 1233–57. http://dx.doi.org/10.1108/ijqrm-03-2019-0088.

Abstract:
Purpose: The purpose of this paper is to identify and quantify the key components of the overall cost of software development when warranty coverage is given by a developer. The authors also study the impact of imperfect debugging on the optimal release time, warranty policy and development cost, which signifies that it is important for developers to control the parameters that cause a sharp increase in cost.
Design/methodology/approach: An optimization problem is formulated to minimize software development cost by considering an imperfect fault removal process, fault generation at a constant rate, and an environmental factor to differentiate the operational phase from the testing phase. Another optimization problem under perfect debugging conditions, i.e. without error generation, is constructed for comparison. These optimization models are solved in MATLAB, and their solutions provide insight into the degree of impact of imperfect debugging on the optimal policies with respect to software release time and warranty time.
Findings: A real-life fault data set of a radar system is used to study the impact of various cost factors via sensitivity analysis on the release and warranty policy. If firms provide a warranty for a longer period, they may bear losses due to increased debugging cost from the larger number of failures occurring during the warranted time; but if the warranty is not provided for sufficient time, it may not act as a sufficient hedge during field failures.
Originality/value: Every firm is fighting to remain in the competition and expand market share by offering the latest technology-based products, using innovative marketing strategies. Warranty is one such strategic tool to promote the product among the masses and develop a sense of quality in the user's mind. In this paper, the failures encountered during development and after software release are considered to model the failure process.
16

Betts, Kevin M., and Mikel D. Petty. "Automated Search-Based Robustness Testing for Autonomous Vehicle Software." Modelling and Simulation in Engineering 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/5309348.

Abstract:
Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.
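The search-based setup described here is straightforward to sketch: individuals encode a test case (initial conditions plus a fault occurrence), and fitness is the degree of challenge reported by closed-loop simulation. In the sketch below the simulator is a stand-in stub and all ranges are assumptions; the real fitness would come from the vehicle-and-controller simulation.

    import random

    # Hedged sketch of search-based robustness test generation: a GA over
    # (initial wind, fault time) pairs maximising a "challenge" score.
    def simulate(wind, fault_time):
        # stub: pretend strong wind plus an early fault is hardest to survive
        return wind * 2.0 + max(0.0, 10.0 - fault_time)

    def mutate(ind):
        wind, ft = ind
        return (min(20.0, max(0.0, wind + random.gauss(0, 1))),
                min(30.0, max(0.0, ft + random.gauss(0, 2))))

    random.seed(3)
    pop = [(random.uniform(0, 20), random.uniform(0, 30)) for _ in range(20)]
    for gen in range(30):
        pop.sort(key=lambda ind: simulate(*ind), reverse=True)
        parents = pop[:10]                   # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in range(10)]

    best = max(pop, key=lambda ind: simulate(*ind))
    print("most challenging test case:", best)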
17

Chassidim, Hadas, Dani Almog, and Shlomo Mark. "Continuous Software Engineering and Unit Testing: From Theory to Practice." WSEAS TRANSACTIONS ON COMPUTER RESEARCH 9 (August 10, 2021): 113–24. http://dx.doi.org/10.37394/232018.2021.9.14.

Abstract:
With the Agile development approach, the software industry has moved to a more flexible and continuous Software Development Life Cycle (SDLC), which integrates the stages of development, delivery and deployment. This trend has exposed an increasing reliance on both unit testing and test automation for the fundamental quality activities during code development. To implement Continuous Software Engineering (CSE), it is vital to ensure that unit-testing activities are an integral and well-defined part of a continuous process. This paper focuses on the role of actual testing, viewing unit testing as a quality indicator during the development life cycle. We review the definition of unit testing from the CSE world and describe a qualitative study in which we examined the implementation of unit testing in three software companies that recently migrated to CSE methodology. The results corroborate our argument that under the continuous approach, quality-based development practices such as unit testing are of increasing importance, yet lack a common set of measurements and KPIs. A possible explanation may be the role of continuous practices, as well as unit testing, in the software engineering curriculum.
18

Gonçalves, Eder Mateus Nunes, Ricardo Arend Machado, Bruno Coelho Rodrigues, and Diana Adamatti. "CPN4M: Testing Multi-Agent Systems under Organizational Model Moise+ Using Colored Petri Nets." Applied Sciences 12, no. 12 (June 9, 2022): 5857. http://dx.doi.org/10.3390/app12125857.

Abstract:
Multi-agent systems (MASs) are distributed, complex software systems that demand specific software engineering features. Testing is a critical phase in validating software, and it is also difficult to conceive and execute. Designing systems under a multi-agent paradigm is even more difficult because of properties such as autonomy, reactivity, pro-activity, and social skills. Any multi-agent system has at least three main dimensions: the individual level, the social level, and the communication interfaces. Considering an approach for testing one dimension specifically, we deal with the social level as an organizational model in this paper. The organizational model imposes restrictions on agents' behavior through a set of behavioral constraints; an error in the organization can occur when the allocated resources are not enough for executing plans and reaching goals. This work presents a framework for analyzing and testing the MAS social level under the organizational model Moise+. The framework uses a Moise+ specification set as an information artifact mapped into a colored Petri net (CPN) model, named CPN4M, as a test case generation mechanism. CPN4M uses two different test adequacy criteria: all-paths and state-transition path. In this paper, we formalize the transition from Moise+ to CPN, describe the procedures for test case generation, and execute tests in a case study. The results indicate that this methodology can increase the degree of correctness of the social level in a multi-agent system specified by a Moise+ model, indicating the system contexts that can lead the MAS to failure.
19

Fellner, Andreas, Mitra Tabaei Befrouei, and Georg Weissenbacher. "Mutation testing with hyperproperties." Software and Systems Modeling 20, no. 2 (April 2021): 405–27. http://dx.doi.org/10.1007/s10270-020-00850-1.

Abstract:
We present a new method for model-based mutation-driven test case generation. Mutants are generated by making small syntactical modifications to the model or source code of the system under test. A test case kills a mutant if the behavior of the mutant deviates from the original system when running the test. In this work, we use hyperproperties, which allow expressing relations between multiple executions, to formalize different notions of killing for both deterministic and non-deterministic models. The resulting hyperproperties are universal in the sense that they apply to arbitrary reactive models and mutants. Moreover, an off-the-shelf model checking tool for hyperproperties can be used to generate test cases. Furthermore, we propose solutions to overcome the limitations of current model checking tools via a model transformation and a bounded SMT encoding. We evaluate our approach on a number of models expressed in two different modeling languages by generating tests using a state-of-the-art mutation testing tool.
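The notion of killing formalized in this abstract can be pictured with a schematic, HyperLTL-style formula. The rendering below is our paraphrase for the deterministic case, not the paper's exact encoding: a mutant is killable if there exist an execution \pi of the original model and an execution \pi' of the mutant that agree on inputs at every step yet eventually differ in outputs; the witness trace then yields the killing test.

    \exists \pi\, \exists \pi'.\;
        \mathit{orig}(\pi) \wedge \mathit{mut}(\pi')
        \wedge \Box\, (\mathit{in}_{\pi} = \mathit{in}_{\pi'})
        \wedge \Diamond\, (\mathit{out}_{\pi} \neq \mathit{out}_{\pi'})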
20

Colomo-Palacios, Ricardo, Luis López-Cuadrado, Israel González-Carrasco, and José García-Peñalvo. "SABUMO-dTest: Design and evaluation of an intelligent collaborative distributed testing framework." Computer Science and Information Systems 11, no. 1 (2014): 29–45. http://dx.doi.org/10.2298/csis130129019c.

Abstract:
Software development must increasingly adapt to teams whose members work together but are geographically separated, leading to distributed development projects: teams working together while sited in different geographic locations. Under these conditions, Global Software Engineering is having a profound impact on the way products are conceived, designed, constructed and tested. One of the problems in this area is the lack of tools that support the distributed process. Focusing on the testing process, this paper presents SABUMO-dTest, a framework based on Semantic technologies that allows software organizations to represent testing processes with the final aim of trading their services or modeling their testing needs in a social and competitive environment. The proposed framework benefits from a set of shared and controlled vocabularies that permit knowledge and process sharing with potential partners, experts and testing service providers. The evaluation of the system included two kinds of projects: those in which testing was not driven by SABUMO-dTest and those developed under its influence. Results show remarkable outcomes in SABUMO-dTest-driven projects.
21

Nazim, Mohd, Mohd Arif, Chaudhary Wali Mohammad, and Mohd Sadiq. "Generating large dataset for software requirements prioritization and selection under fuzzy environment." Journal of Information & Optimization Sciences 44, no. 2 (2023): 285–99. http://dx.doi.org/10.47974/jios-1290.

Abstract:
Requirements elicitation is a key sub-process of requirements engineering in which various types of software requirements, such as functional requirements, non-functional requirements, and testing requirements, are identified according to the needs of the stakeholders. After the requirements elicitation process, there may be a large set of requirements, and it is practically impossible to implement all of them due to budget, time, and other constraints of an organization. In this situation, software developers select the highest-ranked software requirements from the elicited list for different releases of the software so that a successful software product can be developed. The selection of software requirements for different releases is difficult because it requires the participation of various stakeholders. In science and engineering, datasets play an important role in experimental work. Based on our review, we found that the software engineering literature pays little attention to the generation of datasets, particularly in the domain of software requirements prioritization and selection (SRPS). To address this issue, this paper presents a four-step method for generating a dataset for SRPS research: (a) identification of stakeholders, (b) elicitation of software requirements using the goal-oriented concept, (c) formation of a decision-maker committee, and (d) evaluation of the functional and non-functional requirements by decision makers under a fuzzy environment. The proposed method has been applied to generate the dataset for the requirements of an institute examination system.
22

SHIN, KYULEE, and JIN SEO CHO. "TESTING FOR NEGLECTED NONLINEARITY USING EXTREME LEARNING MACHINES." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, supp02 (October 31, 2013): 117–29. http://dx.doi.org/10.1142/s0218488513400205.

Abstract:
We introduce a statistic for testing neglected nonlinearity using extreme learning machines, called the ELMNN test. The ELMNN test is very convenient and can be widely applied because it is obtained as a by-product of estimating linear models. For the proposed test statistic, we provide a set of regularity conditions under which it asymptotically follows a chi-squared distribution under the null. We conduct Monte Carlo experiments and examine how the test behaves when the sample size is finite. Our experiments show that the test exhibits the properties desired by our theory.
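The mechanics follow the classic LM-statistic recipe for neglected nonlinearity: fit the linear model, regress its residuals on randomly weighted hidden-unit activations (the extreme-learning-machine ingredient), and compare n*R^2 against a chi-squared critical value. The sketch below is our reading of that recipe under assumed choices (logistic activations, q = 3 hidden units), not the authors' code.

    import numpy as np

    # Hedged sketch of an LM-type neglected-nonlinearity test in the spirit
    # of the ELMNN test; all modelling choices here are assumptions.
    rng = np.random.default_rng(0)
    n, q = 500, 3
    x = rng.normal(size=(n, 2))
    y = 1.0 + x @ np.array([0.5, -0.3]) + 0.1 * np.tanh(2 * x[:, 0]) \
        + rng.normal(scale=0.5, size=n)             # mildly nonlinear DGP

    X = np.column_stack([np.ones(n), x])            # linear model
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    # random (untrained) hidden layer: the ELM ingredient
    H = 1 / (1 + np.exp(-(x @ rng.normal(size=(2, q)) + rng.normal(size=q))))
    Z = np.column_stack([X, H])                     # augmented regressors
    gamma, *_ = np.linalg.lstsq(Z, resid, rcond=None)
    r2 = (Z @ gamma).var() / resid.var()
    stat = n * r2          # ~ chi2(q) under the null; chi2(3) 5% cutoff ~ 7.81
    print("ELMNN-style statistic:", round(stat, 2), "df:", q)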
23

Sorensen, Tyler, Lucas F. Salvador, Harmit Raval, Hugues Evrard, John Wickerson, Margaret Martonosi, and Alastair F. Donaldson. "Specifying and testing GPU workgroup progress models." Proceedings of the ACM on Programming Languages 5, OOPSLA (October 20, 2021): 1–30. http://dx.doi.org/10.1145/3485508.

Abstract:
As GPU availability has increased and programming support has matured, a wider variety of applications are being ported to these platforms. Many parallel applications contain fine-grained synchronization idioms; as such, their correct execution depends on a degree of relative forward progress between threads (or thread groups). Unfortunately, many GPU programming specifications (e.g. Vulkan and Metal) say almost nothing about relative forward progress guarantees between workgroups. Although prior work has proposed a spectrum of plausible progress models for GPUs, cross-vendor specifications have yet to commit to any model. This work is a collection of tools and experimental data to aid specification designers when considering forward progress guarantees in programming frameworks. As a foundation, we formalize a small parallel programming language that captures the essence of fine-grained synchronization. We then provide a means of formally specifying a progress model, and develop a termination oracle that decides whether a given program is guaranteed to eventually terminate with respect to a given progress model. Next, we formalize a set of constraints that describe concurrent programs that require forward progress to terminate. This allows us to synthesize a large set of 483 progress litmus tests. Combined with the termination oracle, we can determine the expected status of each litmus test -- i.e. whether it is guaranteed to eventually terminate -- under various progress models. We present a large experimental campaign running the litmus tests across 8 GPUs from 5 different vendors. Our results highlight that GPUs have significantly different termination behaviors under our test suite. Most notably, we find that Apple and ARM GPUs do not support the linear occupancy-bound model, as was hypothesized by prior work.
24

Khatri, Sunil Kumar, Kamaldeep Kaur, and Rattan K. Datta. "Using Statistical Usage Testing in Conjunction with Other Black Box Testing Techniques." International Journal of Reliability, Quality and Safety Engineering 22, no. 01 (February 2015): 1550004. http://dx.doi.org/10.1142/s0218539315500047.

Abstract:
Cleanroom methodology is a scrupulous incremental software development approach for the development of zero-defect, high-reliability software using box structure specification, statistical quality control and certification. Statistical Usage Testing (SUT) is the testing technique defined in Cleanroom software engineering. It is based on developing usage models and then performing statistical tests on them. This paper shows the use of SUT in conjunction with other black-box testing techniques. Other types of testing can be performed along with SUT depending on the requirement; using other testing techniques can be essential to cover specific scenarios of use or to attain full usage model coverage with fewer test cases. The paper also presents the effectiveness of applying SUT in conjunction with other black-box testing techniques by using various test cases. The other black-box testing techniques used include Equivalence Class Partitioning (ECP), Boundary Value Analysis (BVA), Cause Effect Graphing (CEG), Use Case Testing (UCT) and Orthogonal Array Testing (OATS).
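Statistical Usage Testing draws test cases from a Markov-chain usage model of the software. A minimal sketch follows; the states and transition probabilities are assumptions about a hypothetical application, not a model from the paper.

    import random

    # Hedged sketch of Statistical Usage Testing: test cases are random walks
    # over an assumed Markov-chain usage model.
    usage_model = {
        "Start":  [("Login", 1.0)],
        "Login":  [("Browse", 0.7), ("Logout", 0.3)],
        "Browse": [("Buy", 0.2), ("Browse", 0.5), ("Logout", 0.3)],
        "Buy":    [("Logout", 1.0)],
        "Logout": [],                      # terminal state
    }

    def draw_test_case(model, start="Start"):
        state, path = start, [start]
        while model[state]:
            nxt, r = None, random.random()
            for s, p in model[state]:      # roulette-wheel transition choice
                r -= p
                if r <= 0:
                    nxt = s
                    break
            state = nxt or model[state][-1][0]
            path.append(state)
        return path

    random.seed(7)
    for _ in range(3):
        print(" -> ".join(draw_test_case(usage_model)))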
25

Liu, Chengkun, Tchamie Kadja, and Vamsy P. Chodavarapu. "Experimental Evaluation of Sensor Fusion of Low-Cost UWB and IMU for Localization under Indoor Dynamic Testing Conditions." Sensors 22, no. 21 (October 25, 2022): 8156. http://dx.doi.org/10.3390/s22218156.

Abstract:
Autonomous systems usually require accurate localization methods to navigate safely in indoor environments. Most localization methods are expensive and difficult to set up. In this work, we built a low-cost, portable indoor location tracking system using a Raspberry Pi 4 computer, ultra-wideband (UWB) sensors, and an inertial measurement unit (IMU). We also developed the data logging software and the Kalman filter (KF) sensor fusion algorithm to process the data from a low-power UWB transceiver module (Decawave, model DWM1001) and IMU device (Bosch, model BNO055). Autonomous systems move with different velocities and accelerations, which requires localization performance to be evaluated under diverse motion conditions. We built a dynamic testing platform to generate not only the ground truth trajectory but also the ground truth acceleration and velocity, so that the tracking system's localization performance can be evaluated under dynamic testing conditions. The novel contributions of this work are a low-cost, low-power tracking system hardware-software design and an experimental setup to observe the tracking system's localization performance under different dynamic testing conditions. The testing platform has a 1 m translation length and 80 μm bidirectional repeatability. The tracking system's localization performance was evaluated under dynamic conditions with eight different combinations of acceleration and velocity; the ground truth accelerations varied from 0.6 to 1.6 m/s2 and the ground truth velocities from 0.6 to 0.8 m/s. Our experimental results show that the location error can reach up to 50 cm under dynamic testing conditions when relying only on the UWB sensor; with the KF sensor fusion of UWB and IMU, the location error decreases to 13.7 cm.
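The UWB+IMU fusion can be illustrated with a one-dimensional Kalman filter: IMU acceleration drives the prediction step, and the noisier UWB position fix drives the update step. All noise levels and the trajectory below are assumptions for illustration, not the paper's DWM1001/BNO055 tuning.

    import numpy as np

    # Hedged 1-D sketch of the UWB+IMU Kalman fusion idea.
    dt = 0.1
    F = np.array([[1, dt], [0, 1]])          # state: [position, velocity]
    B = np.array([0.5 * dt**2, dt])          # how acceleration enters
    H = np.array([[1.0, 0.0]])               # UWB measures position only
    Q = np.eye(2) * 1e-3                     # process noise (assumed)
    R = np.array([[0.25]])                   # UWB noise, ~0.5 m std (assumed)

    x, P = np.zeros(2), np.eye(2)
    rng = np.random.default_rng(1)
    for k in range(50):
        true_acc = 0.8                                  # constant push
        acc_meas = true_acc + rng.normal(scale=0.05)    # IMU sample
        x = F @ x + B * acc_meas                        # predict
        P = F @ P @ F.T + Q
        true_pos = 0.5 * true_acc * ((k + 1) * dt) ** 2
        z = true_pos + rng.normal(scale=0.5)            # UWB sample
        y = z - H @ x                                   # update (innovation)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    print("fused position estimate:", round(float(x[0]), 2),
          "true:", round(true_pos, 2))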
26

Touati, A., A. Bosio, P. Girard, A. Virazel, P. Bernardi, and M. Sonza Reorda. "Microprocessor Testing: Functional Meets Structural Test." Journal of Circuits, Systems and Computers 26, no. 08 (April 11, 2017): 1740007. http://dx.doi.org/10.1142/s0218126617400072.

Abstract:
Structural test is widely adopted to ensure high quality for a given product; the availability of many commercial tools and the use of fault models make it easy to generate and to evaluate. Despite its efficiency, structural test is also known for the risk of over-testing, which may lead to yield loss. This problem is mainly due to the fact that structural test does not take into account the functionality of the circuit under test. On the other hand, functional test guarantees that the circuit is tested under normal conditions, thus avoiding both over- and under-testing issues. In particular, for microprocessor testing, functional test is usually applied by exploiting the Software-Based Self-Test (SBST) technique. SBST applies a set of functional test programs that are executed by the processor to achieve a given fault coverage, and it fits particularly well for online testing of processor-based systems. In this work, we describe a technique able to execute functional test programs as if they were structural tests. In this way, they can be applied during the end-of-production test to achieve good fault coverage while avoiding over-test problems. We show that it is possible to map functional test programs into classical structural test schemes, so that their application simply requires the presence of a scan chain. Finally, we present a compaction algorithm able to significantly reduce the test length. Results on two different microprocessors show the advantages of this approach.
27

Chitalov, D. I. "On the Development of a Module for Working with the buoyantSimpleFoam Solver and the postProcess Utility of the OpenFOAM Platform." Programmnaya Ingeneria 13, no. 2 (February 17, 2022): 81–87. http://dx.doi.org/10.17587/prin.13.81-87.

Abstract:
The paper summarizes the results of research on the development of a software module that extends the source code of the OpenFOAM platform, providing a specialist with access to new possibilities for numerical experiments on problems of continuum mechanics. The module provides the user with graphical and software tools for working with the buoyantSimpleFoam solver and the postProcess utility. The work describes the shortcomings of existing analogous software solutions and formulates the urgency of the problem under study. The author sets goals and defines the tasks necessary to achieve them. A description of the operation of the postProcess utility and the buoyantSimpleFoam solver is given, as well as the structure and parameters of the corresponding dictionary files of the design case. The author presents the set of technologies necessary to implement the software module and to write, debug and test its program code. The performance of the developed software solution has been tested on one of the fundamental problems of continuum mechanics, and the testing results are presented. Based on the results of the study, final conclusions are presented, along with information on the scientific novelty and potential practical significance of the study.
28

KAPUR, P. K., SAMEER ANAND, SHINJI INOUE, and SHIGERU YAMADA. "A UNIFIED APPROACH FOR DEVELOPING SOFTWARE RELIABILITY GROWTH MODEL USING INFINITE SERVER QUEUING MODEL." International Journal of Reliability, Quality and Safety Engineering 17, no. 05 (October 2010): 401–24. http://dx.doi.org/10.1142/s0218539310003871.

Abstract:
In the past 35 years, numerous software reliability growth models (SRGMs) have been proposed under diverse testing and debugging (T&D) environments and applied successfully in many real-life software projects, but no SRGM can claim to be the best in general, as the physical interpretation of T&D is not general. A unified modeling approach proves very successful in this regard and provides an excellent platform for obtaining several existing SRGMs following a single methodology. This forms the main focus of this paper: we propose a unification modeling approach applying infinite-server queuing theory, based on the basic assumptions of an SRGM defining three levels of fault complexity. Our unification methodology can be used to obtain several existing and new SRGMs that consider testing either as a one-stage process with no fault categorization or as a two/three-stage process with random delay functions, hence categorizing faults into two/three levels of complexity. We also provide data analysis based on two actual T&D data sets for some of the models discussed and proposed in the paper.
29

A.M, Abirami. "Active Learning Strategies and Blended Learning Approach for Teaching Under Graduate Software Engineering Course." Journal of Engineering Education Transformations 35, no. 1 (July 1, 2021): 42–51. http://dx.doi.org/10.16920/jeet/2021/v35i1/22055.

Abstract:
Software Engineering is a core theory course offered in undergraduate engineering programmes, and it is one of the challenging courses for teaching faculty. The course includes the various systematic approaches and methods that can be employed for designing, developing, testing and maintaining quality software applications, and it also covers the latest tools and technologies practiced in industry. Effective content delivery methods leveraging active learning strategies for the classical topics have been well researched and established. However, improvement in content delivery planning and execution is required for relatively new and evolving topics like DevOps and version controlling. This paper presents an experimental study of the application of various active learning techniques, such as discussion forums and tech talks followed by quizzes, for such topics. The impact of the improved content delivery plan on the learners' attainment of course outcomes has also been observed and presented. The enhanced approach has proved to improve student learning outcomes, which are measured and presented using a standard set of tools and metrics. The paper uses a logistic regression model to study the impact of the blended learning approach on student learning outcomes, which produces 60% accuracy. Keywords: DevOps, Software Maintenance, Version Controlling, Software Engineering, Course Outcome Attainment, Active Learning Strategies.
30

Jain, Amita, Devendra Kumar Tayal, Manju Khari, and Sonakshi Vij. "A Novel Method for Test Path Prioritization using Centrality Measures." International Journal of Open Source Software and Processes 7, no. 4 (October 2016): 19–38. http://dx.doi.org/10.4018/ijossp.2016100102.

Abstract:
Software testing is an essential stage of the software development life cycle which helps in producing bug-free software. This paper introduces a strategy to generate test data automatically and to evaluate it against the program under test (PUT) for adequacy criteria. Initially, the process produces a test data set randomly; a unique approach to test path prioritization based on centrality measures is then presented. The proposed algorithm computes the importance of each node in the test paths by using various centrality measures and thus identifies the significance of each path. Furthermore, the proposed methodology covers the maximum number of potential nodes using a smaller number of prioritized paths. The paper tests the generated test data against the software to check its adequacy.
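The prioritization idea can be sketched directly: compute a centrality measure over the control-flow graph, score each candidate test path by the summed centrality of its nodes, and exercise high-scoring paths first. The graph, the paths, and the choice of degree centrality below are assumptions for illustration, not the paper's PUT or its exact measures.

    from collections import defaultdict

    # Hedged sketch of centrality-based test path prioritisation over an
    # assumed control-flow graph.
    edges = [("entry", "a"), ("a", "b"), ("a", "c"),
             ("b", "d"), ("c", "d"), ("d", "exit"), ("c", "exit")]
    degree = defaultdict(int)                 # degree centrality per node
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    paths = [
        ["entry", "a", "b", "d", "exit"],
        ["entry", "a", "c", "exit"],
        ["entry", "a", "c", "d", "exit"],
    ]
    ranked = sorted(paths, key=lambda p: sum(degree[n] for n in p),
                    reverse=True)
    for p in ranked:                          # test the top paths first
        print(sum(degree[n] for n in p), " -> ".join(p))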
31

Pan, Bo, Xuguang Wang, Zhenyang Xu, Lianjun Guo, and Xuesong Wang. "Experimental and Numerical Study of Fracture Behavior of Rock-Like Material Specimens with Single Pre-Set Joint under Dynamic Loading." Materials 14, no. 10 (May 20, 2021): 2690. http://dx.doi.org/10.3390/ma14102690.

Abstract:
The Split Hopkinson Pressure Bar (SHPB) is used to test the dynamic stress-strain response of cement mortar specimens with pre-set joints at different angles, to explore the influence of joint attitudes in underground rock engineering on the failure characteristics of the rock mass structure. Nuclear magnetic resonance (NMR) has also been used to measure the pore distribution and internal cracks of the specimens before and after testing. In combination with numerical analysis, the paper systematically discusses the influence of joint angles on the failure mode of rock-like materials from the three aspects of energy dissipation, microscopic damage, and stress field characteristics. The results indicate that the impact energy structure of the SHPB is greatly affected by the pre-set joint angle of the specimen: as the joint angle increases, the proportion of reflected energy fluctuates, while the ratio of transmitted energy to dissipated energy varies from one to the other. NMR analysis reveals the structural variation of the pores in the cement specimens before and after impact. Crack propagation direction is correlated with the pre-set joint angle: as the angle increases, the crack initiation angle decreases gradually, and at joint angles of around 30°–75° the specimens develop obvious cracks. The crushing process of the specimens is simulated with LS-DYNA software. The stresses at crack initiation are concentrated between 20 and 40 MPa. The instantaneous stress curve first increases and then decreases with crack propagation, peaking at different times under different joint angles, mostly when the crack penetration ratio reaches 80–90%. In the simulations, the changing trend of peak stress with joint angle is consistent with the test results.
32

Bures, Miroslav, Bestoun S. Ahmed, and Kamal Z. Zamli. "Prioritized Process Test: An Alternative to Current Process Testing Strategies." International Journal of Software Engineering and Knowledge Engineering 29, no. 07 (July 2019): 997–1028. http://dx.doi.org/10.1142/s0218194019500335.

Abstract:
Testing processes and workflows in information and Internet of Things systems is a major part of the typical software testing effort. Consistent and efficient path-based test cases are desired to support these tests. Because certain parts of software system workflows have a higher business priority than others, this fact has to be involved in the generation of test cases. In this paper, we propose a Prioritized Process Test (PPT), which is a model-based test case generation algorithm that represents an alternative to currently established algorithms that use directed graphs and test requirements to model the system under test. The PPT accepts a directed multigraph as a model to express priorities, and edge weights are used instead of test requirements. To determine the test-coverage level of test cases, a test-depth-level concept is used. We compared the presented PPT with five alternatives (i.e. the Process Cycle Test (PCT), a naive reduction of test set created by the PCT, Brute Force algorithm, Set-covering-Based Solution and Matching-based Prefix Graph Solution) for edge coverage and edge-pair coverage. To assess the optimality of the path-based test cases produced by these strategies, we used 14 metrics based on the properties of these test cases and 59 models that were created for three real-world systems. For all edge coverage, the PPT produced more optimal test cases than the alternatives in terms of the majority of the metrics. For edge-pair coverage, the PPT strategy yielded similar results to those of the alternatives. Thus, the PPT strategy is an applicable alternative as it reflects both the required test coverage level and the business priority in parallel.
APA, Harvard, Vancouver, ISO, and other styles
33

Ehsan, Umer, Muhammad Jawad, Umar Javed, Khurram Shabih Zaidi, Ateeq Ur Rehman, Anton Rassõlkin, Maha M. Althobaiti, Habib Hamam, and Muhammad Shafiq. "A Detailed Testing Procedure of Numerical Differential Protection Relay for EHV Auto Transformer." Energies 14, no. 24 (December 14, 2021): 8447. http://dx.doi.org/10.3390/en14248447.

Full text
Abstract:
In power systems, programmable numerical differential relays are widely used for the protection of generators, bus bars, transformers, shunt reactors, and transmission lines. Retrofitting of relays is the need of the hour, because a lack of proper testing techniques and a misunderstanding of vital procedures may result in underperformance of the overall protection system. A lack of proper relay testing provokes unpredictability in its behavior, which may prompt tripping of a healthy power system. Therefore, the main contribution of the paper is to prepare a step-by-step comprehensive procedural guideline for the practical implementation of relay testing procedures, with a detailed, in-depth analysis of relay settings for the protection of an Extra High Voltage (EHV) auto transformer. The experimental results are scrutinized to document a detailed theoretical and technical analysis. Moreover, the paper covers shortcomings of the existing literature by documenting specialized literature that covers all aspects of protection relays, i.e., from the basics of the electromechanical domain to the technicalities of the numerical differential relay, including its detailed testing across different reputed manufacturers. A secondary injection relay test set is used for detailed testing of the differential relay under test, and the S1 Agile software is used for protection relay settings, configuration modification, and detailed analysis.
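As background to the differential testing described above, the sketch below shows a generic dual-slope biased (percentage) differential characteristic of the kind implemented in numerical transformer relays, written in Python. The setting names and default values are illustrative assumptions, not the S1 Agile parameter names used in the paper.

def relay_trips(i_diff, i_bias, is1=0.2, k1=0.3, is2=1.0, k2=0.8):
    """Dual-slope percentage differential characteristic (per-unit).

    is1 : basic differential pickup
    k1  : lower slope, applied up to the knee point is2
    k2  : higher slope above is2, stabilising the relay against CT
          saturation during heavy through-faults.
    Returns True if the relay would issue a trip for this operating point.
    """
    if i_bias <= is2:
        threshold = is1 + k1 * i_bias
    else:
        threshold = is1 + k1 * is2 + k2 * (i_bias - is2)
    return i_diff > threshold

A secondary injection test essentially sweeps (i_bias, i_diff) points around this boundary and checks that the relay trips on one side and restrains on the other.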
APA, Harvard, Vancouver, ISO, and other styles
34

Ueno, Ken, and Michiaki Tatsubori. "Early Capacity Testing of an Enterprise Service Bus." International Journal of Web Services Research 6, no. 4 (October 2009): 30–47. http://dx.doi.org/10.4018/jwsr.2009071302.

Full text
Abstract:
An enterprise service-oriented architecture is typically realized with a messaging infrastructure called an Enterprise Service Bus (ESB). An ESB is a bus that delivers messages from service requesters to service providers. Since it sits between the service requesters and providers, none of the existing capacity planning methodologies for servers, such as modeling, is appropriate for estimating the capacity of an ESB. Programs that run on an ESB are called mediation modules; their functionalities vary and depend on how people use the ESB, which creates difficulties for capacity planning and performance evaluation. This article proposes a capacity planning methodology and performance evaluation techniques for ESBs, to be used in the early stages of the system development life cycle. The authors run the ESB on a real machine while providing a pseudo-environment around it. To simplify setting up the environment, they provide ultra-light service requesters and service providers for the ESB under test, and show that the proposed mock environment can be set up with the practical hardware resources available at the time of hardware resource assessment. Experimental results show that the testing results obtained with the mock environment correspond well with results in the real environment.
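The "ultra-light" providers mentioned above are essentially stubs with no business logic. A minimal Python sketch of such a stub, assuming an HTTP/XML-style requester and an artificial service delay, might look as follows; the endpoint, payload, and latency are illustrative assumptions, not details from the paper.

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = b'<response>ok</response>'  # fixed payload; no business logic
DELAY_S = 0.005                      # simulated provider service time

class UltraLightProvider(BaseHTTPRequestHandler):
    def do_POST(self):
        # drain the request body so the connection can be reused
        self.rfile.read(int(self.headers.get('Content-Length', 0)))
        time.sleep(DELAY_S)          # emulate provider latency only
        self.send_response(200)
        self.send_header('Content-Type', 'text/xml')
        self.end_headers()
        self.wfile.write(CANNED)

    def log_message(self, *args):    # keep stub overhead minimal
        pass

if __name__ == '__main__':
    HTTPServer(('0.0.0.0', 9080), UltraLightProvider).serve_forever()

Because the stub does almost no work, measured saturation reflects the ESB's mediation modules rather than the surrounding endpoints.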
APA, Harvard, Vancouver, ISO, and other styles
35

Choi, Byoungju. "TEST ADEQUACY MEASUREMENT USING A COMBINATION OF CRITERIA." International Journal of Reliability, Quality and Safety Engineering 07, no. 03 (September 2000): 191–203. http://dx.doi.org/10.1142/s0218539300000171.

Full text
Abstract:
The traditional method of assessing software quality is through software testing. In the past two decades, numerous test criteria have been proposed. Although the generation and evaluation of a test set is important, the ultimate goal is to ensure the quality of the software under test. It is risky to validate software using test sets with respect to an arbitrarily selected test adequacy criterion, as this can lead to incorrect conclusions about the quality of the software. In this paper we examine how different test criteria can be combined into one measurement to assess the adequacy of test sets. Based on the subsumption relation between these criteria, a multi-criteria decision-making method, the Analytic Hierarchy Process (AHP), is used to determine the weight of each test adequacy criterion. A case study reported here suggests that the combined criterion so generated provides a more objective and precise measurement of the fault detection capability of test sets than does a single-member test criterion.
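To make the AHP step concrete, the following Python sketch derives criterion weights from a pairwise comparison matrix via its principal eigenvector and combines per-criterion adequacy scores into one measurement. The comparison values, criteria, and scores are invented for illustration and are not taken from the case study.

import numpy as np

# Pairwise comparisons (Saaty scale) for three illustrative criteria,
# e.g. statement, branch and data-flow coverage; values are assumptions.
A = np.array([[1.0, 1/3, 1/5],
              [3.0, 1.0, 1/2],
              [5.0, 2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()  # AHP weights from the principal eigenvector

adequacy = np.array([0.95, 0.80, 0.60])  # per-criterion scores of a test set
combined = float(w @ adequacy)           # single combined adequacy measure
print(w, combined)

The subsumption relation mentioned in the abstract would inform how the pairwise comparison values are chosen: a criterion that subsumes another is judged more important in the matrix.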
APA, Harvard, Vancouver, ISO, and other styles
36

Korotun, M., L. Weizman, A. Drori, J. Zaccaria, T. Goldstein, I. Litman, S. Hahn, and H. Greenberg. "0584 Detecting Sleep Disordered Breathing Using Sub-Terahertz Radio-Frequency Micro-Radar." Sleep 43, Supplement_1 (April 2020): A224. http://dx.doi.org/10.1093/sleep/zsaa056.581.

Full text
Abstract:
Introduction: New sensor technologies are entering sleep testing at a rapid pace. Neteera™ developed a novel sensor and algorithm for sleep apnea detection utilizing a contact-free, radar-based sensor system. The system utilizes a high-frequency, low-power, directional micro-radar which operates at ~120 GHz with a sampling rate of 2500 Hz, together with algorithms able to detect both the pulse and respiratory activity of subjects during sleep. Methods: Adult subjects undergoing diagnostic PSG for clinical purposes were simultaneously assessed with the novel micro-radar system, with sensors placed under the mattress. Disordered breathing events (DBEs) were scored from the PSG using AASM scoring guidelines and compared with those detected by the micro-radar sensor. Test data were grouped into three sets: 1. a single under-mattress sensor; 2. two under-mattress sensors, one on each side of the bed (to improve signal capture); 3. after software optimization. The micro-radar sensor detected DBEs, but software to classify the type of DBE (obstructive apnea/central apnea/hypopnea) is still under development. The detection rate of DBEs was compared between the two methodologies and across the development sets. Results: n=22 (12 F, 10 M), age 50.8±12.4 years, BMI 35.32±7.37 kg/m². Diagnostic PSG AHI: 19.7±29.4/hr, T90=15.8±25.7%. Percent DBEs missed by the micro-radar sensor: 1st set 14.6±10.6%; 2nd set 9.4±8.3%; 3rd set 1.2±2.6%. The numbers of DBEs assessed for each set were 646, 1144, and 125 events, respectively. With each successive set, the detection rate improved. Conclusion: A novel micro-radar, non-contact sensor technology can be used to detect DBEs during sleep. The detection rate improved with the use of two sensors per bed and software optimization. Future software development is expected to further improve the detection rate and facilitate classification of breathing events into obstructive apneas, central apneas, and hypopneas. Support: None.
APA, Harvard, Vancouver, ISO, and other styles
37

Kornilov, Gleb. "INFRARED THERMOGRAPHY TECHNIQUE FOR DAMAGE DETECTION ON COMPOSITE MATERIALS UNDER LOW-ENERGY IMPACT." Perm National Research Polytechnic University Aerospace Engineering Bulletin, no. 72 (2023): 21–32. http://dx.doi.org/10.15593/2224-9982/2023.72.02.

Full text
Abstract:
Thermal non-destructive testing of products made of polymer composite materials for impact damage is one of the promising directions in the field. The paper presents a mobile monitoring approach that provides damage detection at low impact energies. A feature of the developed impact damage detection method is its comprehensive consideration of the presence of a coating on the surface of the composite material, the type of thermal loading source, the technical capability of the thermal imager, and the capabilities of the temperature imaging software. In post-processing of the thermal imaging data, the method uses monoframe-processing technology, which includes a set of operations: visualization of imaging data in a monotone palette, contrast enhancement of the thermal image, and sub-framing, i.e., narrowing the area of analysis of the original thermal image with a scanning search.
APA, Harvard, Vancouver, ISO, and other styles
38

Avdeenko, Tatiana, and Konstantin Serdyukov. "Automated Test Data Generation Based on a Genetic Algorithm with Maximum Code Coverage and Population Diversity." Applied Sciences 11, no. 10 (May 20, 2021): 4673. http://dx.doi.org/10.3390/app11104673.

Full text
Abstract:
In the present paper, we investigate an approach to intelligent support of the white-box software testing process based on an evolutionary paradigm. As part of this approach, we solve the urgent problem of automated generation of an optimal set of test data that provides maximum statement coverage of the code when used in the testing process. We propose a fitness function containing two terms and, accordingly, two versions of the implemented genetic algorithm (GA). The first term of the fitness function is responsible for the complexity of the code statements executed on the path generated by the current individual test case (the current set of statements). The second term expresses the maximum possible difference between the current set of statements and the set of statements covered by the remaining test cases in the population. Using only the first term does not make it possible to obtain 100 percent statement coverage with the test cases generated in one population, and therefore implies repeated launches of the GA with changed weights of the code statements, which requires recompiling the code under test. By using both terms of the proposed fitness function, we obtain maximum statement coverage and population diversity in a single launch of the GA. An optimal relation between the two terms of the fitness function was obtained for two very different programs under test.
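A minimal Python sketch of the two-term fitness described above might look as follows, assuming statement coverage is recorded as sets of statement ids and complexity weights are precomputed; the term weights alpha and beta, and all names, are illustrative assumptions rather than the authors' exact formulation.

def fitness(covered, others_covered, stmt_weight, alpha=1.0, beta=1.0):
    """Two-term fitness for one test case in the population.

    covered        : set of statement ids executed by this test case
    others_covered : union of statement ids covered by the other
                     individuals in the population
    stmt_weight    : dict mapping statement id -> complexity weight
    The first term rewards complex statements on the executed path;
    the second rewards statements nobody else covers (diversity).
    """
    complexity = sum(stmt_weight[s] for s in covered)
    diversity = len(covered - others_covered)
    return alpha * complexity + beta * diversity

Maximizing the diversity term across the population is what lets a single GA launch approach full statement coverage without re-weighting and recompiling.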
APA, Harvard, Vancouver, ISO, and other styles
39

Mou, Yunhan, Scott Hummel, Tassos Kyriakides, and Yuan Huang. "47 Shrinking Coarsened Win Ratio and Testing of Composite Endpoint." Journal of Clinical and Translational Science 7, s1 (April 2023): 12–13. http://dx.doi.org/10.1017/cts.2023.138.

Full text
Abstract:
OBJECTIVES/GOALS: The win ratio (WR) is an increasingly popular composite endpoint in clinical trials. A typical setup in cardiovascular trials is to use death as the first layer and hospitalization as the second. However, the power of the WR may be reduced by its strict hierarchical structure. Our study aims to relax the strict hierarchical structure of the standard WR. METHODS/STUDY POPULATION: Addressing the power reduction of the WR when treatment effects lie in the subsequent layers, we propose an improved method, the Shrinking Coarsened Win Ratio (SCWR), that relaxes the strict hierarchical structure of the standard WR approach by adding layers with coarsened thresholds shrinking to zero. A weighted adaptive approach is developed to determine the thresholds in SCWR. We conducted simulations to compare the performance of our improved method and the standard WR under different scenarios of follow-up time, association between events, and treatment effect levels. We also illustrate our method by re-analyzing real-world cardiovascular trials. RESULTS/ANTICIPATED RESULTS: First, the developed SCWR method preserves the good statistical properties of the standard WR and has a greater capacity to detect treatment effects on subsequent-layer outcomes. Second, the SCWR method outperforms the standard approach under the scenarios in our simulations in terms of gaining higher power. In practice, we expect that SCWR can better detect treatment effects. Finally, we will offer convenient software tools and clear tutorials for implementing the SCWR method in future studies, covering both unstratified and stratified designs. DISCUSSION/SIGNIFICANCE: The developed SCWR provides a more flexible way of combining the top layer and subsequent layers (e.g., the fatal and non-fatal endpoints) under the hierarchical structure and achieves higher power in simulation. This nonparametric approach can accommodate different types of outcomes, including time-to-event, continuous, and categorical ones.
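To illustrate the mechanics (not the authors' exact estimator), the following Python sketch compares treatment-control patient pairs layer by layer with coarsened margins that shrink to zero, then forms the win ratio; the layer names, margins, and the "larger is better" convention are all assumptions made for the example.

def compare(t, c, layers, thresholds):
    """Return +1 if treatment patient t wins, -1 if loses, 0 if tie.

    layers     : ordered outcome keys, most important first
    thresholds : per-layer sequences of coarsening margins shrinking
                 to 0, e.g. {'death_time': (90, 30, 0),
                             'hosp_time': (30, 0)}.
    Larger outcome values are assumed better (e.g. event-free days).
    """
    for key in layers:
        for margin in thresholds[key]:
            diff = t[key] - c[key]
            if diff > margin:
                return 1
            if diff < -margin:
                return -1
    return 0

def win_ratio(treat, ctrl, layers, thresholds):
    wins = losses = 0
    for t in treat:
        for c in ctrl:
            r = compare(t, c, layers, thresholds)
            wins += r == 1
            losses += r == -1
    return wins / losses if losses else float('inf')

Setting every threshold sequence to (0,) recovers the standard hierarchical WR; the intermediate coarsened margins are what let treatment effects in lower layers contribute to wins.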
APA, Harvard, Vancouver, ISO, and other styles
40

D’Accardi, Ester, Davide Palumbo, and Umberto Galietti. "A Comparison among Different Ways to Investigate Composite Materials with Lock-In Thermography: The Multi-Frequency Approach." Materials 14, no. 10 (May 12, 2021): 2525. http://dx.doi.org/10.3390/ma14102525.

Full text
Abstract:
The main goal of non-destructive testing is the detection of defects early enough to avoid catastrophic failure, with particular interest in the inspection of aerospace structures; in this respect, all methods for fast and reliable inspection deserve special attention. Active thermography for non-destructive testing enables contactless, fast, remote, and inexpensive control of materials and structures. Furthermore, different works have confirmed the potential of lock-in thermography as a flexible technique, owing to its peculiarity of being performable with a low-cost setup. In this work, a new approach called the multi-frequency via software (MFS) approach, based on the software superimposition of two square waves with two different main excitation frequencies, has been used to inspect a carbon fiber reinforced polymer (CFRP) sample with imposed defects of different materials, sizes, and depths by means of lock-in thermography. The advantages and disadvantages of the multi-frequency approach have been highlighted by quantitatively comparing the MFS with the traditional excitation methods (sine and square waves).
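A minimal Python sketch of the software superimposition underlying the MFS approach is given below, assuming two illustrative lock-in frequencies and unipolar lamp drive signals; it is a sketch of the idea, not the authors' excitation code.

import numpy as np
from scipy.signal import square

fs = 100.0                     # sampling rate of the excitation signal [Hz]
t = np.arange(0, 120, 1 / fs)  # 120 s acquisition window (illustrative)
f1, f2 = 0.05, 0.5             # two main excitation frequencies (assumed)

# Superimpose two unipolar square waves driving the lamps via software:
# the low frequency probes deeper defects, the high frequency shallower
# ones, so one acquisition serves two lock-in analyses.
excitation = 0.5 * (square(2 * np.pi * f1 * t) + 1) \
           + 0.5 * (square(2 * np.pi * f2 * t) + 1)

Lock-in demodulation of the recorded thermal sequence at f1 and f2 separately then yields amplitude and phase images for both depths from the single test.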
APA, Harvard, Vancouver, ISO, and other styles
41

Kosobudzki, Mariusz. "Preliminary Selection of Road Test Sections for High-Mobility Wheeled Vehicle Testing under Proving Ground Conditions." Applied Sciences 12, no. 7 (March 30, 2022): 3513. http://dx.doi.org/10.3390/app12073513.

Full text
Abstract:
Before being introduced into the maintenance system, each vehicle must undergo a series of tests, including durability tests. Computer models and whole vehicles on test stands can be used to identify vehicle performance parameters, which requires access to special software and test stands. However, due to the size of the vehicle or limited access to test stands, experimental tests must often be carried out on proving grounds; while this generates the most complete set of loads, it is much more time-consuming. Therefore, methods are sought to indicate whether testing at the proving ground can be accelerated, for example, by increasing the driving speed or using test road sections that generate more severe loads. This paper presents a method for evaluating the suitability of selected test road sections for durability tests of a special high-mobility wheeled vehicle. The method is based on the Basquin model, which considers the fatigue strength of materials, and the rainflow load cycle counting method combined with the Palmgren-Miner (P-M) damage summation method. Evaluation of test road selection was supplemented with an analysis of travel speed distributions determined using the Beta statistical distribution. The presented model was used to evaluate the possibility of accelerating a mileage test of an 8 × 8 vehicle. While certain data mentioned in the article remain classified and cannot be presented in full, the author has attempted to provide a comprehensive background of the analyses conducted and the data used to illustrate them.
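For orientation, the sketch below combines a Basquin S-N relation with the Palmgren-Miner damage summation in Python, assuming rainflow-counted stress amplitudes and cycle counts are already available; the fatigue constants are illustrative, not the classified vehicle data.

import numpy as np

def miner_damage(amplitudes, counts, sigma_f=900.0, b=-0.09):
    """Palmgren-Miner damage sum with a Basquin S-N curve.

    amplitudes : rainflow-counted stress amplitudes [MPa]
    counts     : cycle counts per amplitude (0.5 for half cycles)
    sigma_f, b : illustrative fatigue strength coefficient [MPa]
                 and fatigue strength exponent.
    Basquin: sigma_a = sigma_f * (2N)**b, so the allowable cycle count
    is N = ((sigma_a / sigma_f) ** (1 / b)) / 2.
    A damage sum >= 1 indicates predicted fatigue failure.
    """
    amplitudes = np.asarray(amplitudes, dtype=float)
    N_allow = ((amplitudes / sigma_f) ** (1.0 / b)) / 2.0
    return float(np.sum(np.asarray(counts) / N_allow))

Comparing the damage accumulated per kilometre on a candidate road section against that of the reference route gives the acceleration factor the paper's method is after.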
APA, Harvard, Vancouver, ISO, and other styles
42

Van Vooren, Steven, James Grayson, Marc Van Ranst, Elisabeth Dequeker, Lies Laenen, Reile Janssen, Laurent Gillet, et al. "Reliable and Scalable SARS-CoV-2 qPCR Testing at a High Sample Throughput: Lessons Learned from the Belgian Initiative." Life 12, no. 2 (January 21, 2022): 159. http://dx.doi.org/10.3390/life12020159.

Full text
Abstract:
We present our approach to rapidly establishing a standardized, multi-site, nation-wide COVID-19 screening program in Belgium. Under the auspices of a federal government Task Force responsible for upscaling the country's testing capacity, we were able to set up a national testing initiative with readily available resources, putting in place a robust, validated, high-throughput, and decentralized qPCR molecular testing platform with embedded proficiency testing. We demonstrate how, during an acute scarcity of equipment, kits, reagents, personnel, protective equipment, and sterile plastic supplies, we introduced an approach to rapidly build a reliable, validated, high-volume, high-confidence workflow based on heterogeneous instrumentation and diverse assays, assay components, and protocols. The workflow was set up with continuous quality control monitoring, tied together through a clinical-grade information management platform for automated data analysis, real-time result reporting across the different participating sites, QC monitoring, and making result data available to the requesting physician and the patient. In this overview, we address the challenges of optimizing high-throughput cross-laboratory workflows with minimal manual intervention through software, instrument, and assay validation and standardization, and a process for harmonized result reporting and nation-level infection statistics monitoring across the disparate testing methodologies and workflows necessitated by a rapid scale-up in response to the pandemic.
APA, Harvard, Vancouver, ISO, and other styles
43

Angarita, Leonardo Bermon, Alvaro Fernández Del Carpio, and Andrés Alberto Osorio Londoño. "A Bibliometric Analysis of DevOps Metrics." DESIDOC Journal of Library & Information Technology 42, no. 6 (January 2, 2023): 387–96. http://dx.doi.org/10.14429/djlit.42.6.18365.

Full text
Abstract:
DevOps has become an important set of organisational and technical practices in software development under the agile umbrella. Many efforts are still being made in this field, mainly focused on the inclusion of metrics for monitoring progress. Gathering metrics is a difficult task, but it can provide insight into the performance of the software delivery process. The current status of the definition, application, and implementation of metrics in DevOps projects and processes is of interest to software practitioners. Thus, the objective of this article is to analyze documents regarding the impact of metrics in DevOps projects. A total of 103 documents were obtained from the Scopus database and analyzed through the bibliometric method, considering several aspects. The bibliometric analysis performed included author analysis, author affiliation, authors' countries, keyword analysis, citation analysis, and network analysis. The results indicate that DevOps research is not centralized in a specific group of researchers. Moreover, the most significant contributions of DevOps are related to continuous integration, software design, and software testing. The bibliometric analysis presented in this article helps to identify the current state of the DevOps literature and provides an insightful discussion of future research trends.
APA, Harvard, Vancouver, ISO, and other styles
44

Ustun, Taha Selim, Shuichi Sugahara, Masaichi Suzuki, Jun Hashimoto, and Kenji Otani. "Power Hardware in-the-Loop Testing to Analyze Fault Behavior of Smart Inverters in Distribution Networks." Sustainability 12, no. 22 (November 11, 2020): 9365. http://dx.doi.org/10.3390/su12229365.

Full text
Abstract:
Deep penetration of distributed generators has created several stability and operation issues for power systems. In order to address these, inverters with advanced capabilities, such as frequency and reactive power support, are deployed in the grid. Known as Smart Inverters (SIs), these devices are highly dynamic and contribute to the power flow in the system. Notwithstanding their benefits, such dynamic devices are new to distribution networks, and power system operators are very reluctant toward such changes as they may cause unknown issues. In order to alleviate these concerns and facilitate the integration of SIs into the grid, behavior studies are required. To that end, this paper presents a power hardware-in-the-loop (PHIL) test setup and the tests performed to study the fault behavior of SIs connected to distribution networks. The details of the software model, SI integration with the real-time simulator, the test results, and their analyses are presented. This experience shows that it is not trivial to connect such novel devices with simulation environments; adjustments are required on both the software and hardware fronts on a case-by-case basis. The encountered integration issues and their solutions are presented herein. The fault behavior of the SI with respect to the fault location is documented. It is observed that for faults close to SIs, momentary cessation of generation occurs. This needs to be tackled by device manufacturers, as this phenomenon is very detrimental to the health of a power system under fault conditions. Extensive PHIL test results show that several factors affect the fault behavior of an SI: the fault location and its duration, the SI mode of operation, as well as extra devices housed in the casing. These results and their in-depth analyses are presented for a thorough understanding of SI behavior under fault conditions.
APA, Harvard, Vancouver, ISO, and other styles
45

Makarova, Taisiya, Nataliya Meleshko, and Sergey Zharinov. "Ultrasonic Testing of Railway Transport Units with Phased Array Flaw Detectors." NDT World 18, no. 3 (September 1, 2015): 72–76. http://dx.doi.org/10.12737/12576.

Full text
Abstract:
The article describes the possibilities of applying phased array flaw detectors to the testing of railway transport units, such as wheel set axles, all-rolled wheels, and solebars of freight cars. The task was to reproduce the standard testing procedures using phased array flaw detectors and demonstrate their advantages in visibility, efficiency, repeatability, and validity of results. Unfortunately, one of the main advantages of phased array flaw detectors, namely the possibility to control the focusing depth, is lost when testing large-scale objects. The sector scanning technique with the phased array flaw detectors OmniScan and Isonic 2010 in the minimum configuration was used for the research. In all cases, acoustical images of the following reflectors were obtained within the range of selected angles: saw-cuts in axles, spot-drillings and saw-cuts in wheels, and side-drilled holes and natural defects in solebars. When testing wheel set axles, the Multi Group software (Isonic 2000) enabled the testing schemes to be realized with one prism and one phased array instead of several classical piezoelectric transducers. Circumferential testing of all-rolled wheels from the internal lateral surface under the roll surface level allowed transverse cracks and flange embedded defects to be detected. Solebar testing was more complicated because of the form and irregularity of the scanning surface, the necessity of cleaning it, and the complex profile of the back surface. Nevertheless, the use of phased array flaw detectors made it possible to identify the back surface profile. The application of phased arrays substantially increases testing efficiency and improves the visibility of the obtained results.
APA, Harvard, Vancouver, ISO, and other styles
46

Currie, Robert, Rosen Mataev, and Marco Clemencic. "Evolution of the LHCb Continuous Integration system." EPJ Web of Conferences 245 (2020): 05039. http://dx.doi.org/10.1051/epjconf/202024505039.

Full text
Abstract:
The physics software stack of LHCb is based on Gaudi and comprises about 20 interdependent projects, managed across multiple GitLab repositories. At present, the continuous integration (CI) system used for regular building and testing of this software is implemented using Jenkins and runs on a cluster of about 300 cores. LHCb CI pipelines are python-based and relatively modern, with some degree of modularity, i.e., the separation of test jobs from build jobs. However, they still suffer from obsolete design choices that prevent improvements to scalability and reporting. In particular, resource use and speed have not been thoroughly optimized due to the predominant use of the system for nightly builds, where a feedback time of 8 hours is acceptable. We describe recent work on speeding up pipelines by aggressively splitting and parallelizing checkout, build, and test jobs and caching their artifacts. The current state of automatic code quality integration, such as coverage reports, is shown. This paper presents how the feedback time from change (merge request) submission to build and test reports is reduced from "next day" to a few hours by dedicated on-demand pipelines. Custom GitLab integration allows easy triggering of pipelines, including linked changes to multiple projects, and provides immediate feedback as soon as results are ready. Reporting includes a comparison to tests on a unique stable reference build, dynamically chosen for every set of changes under testing. This work enables isolated testing of changes that integrates well into the development workflow, leaving nightly testing primarily for integration tests.
APA, Harvard, Vancouver, ISO, and other styles
47

Sukhorukov, Vasily, Dmitry Slesarev, Ivan Shpakov, Vasily Yu Volokhovsky, Alexander Vorontsov, and Alexander Shalashilin. "Automated Condition Monitoring with Remaining Lifetime Assessment for Wire Ropes in Ladle Cranes." Materials Evaluation 79, no. 11 (November 1, 2021): 1050–60. http://dx.doi.org/10.32548/2021.me-04181.

Full text
Abstract:
The hazards and deterioration of operating wire ropes on overhead cranes, which articulate the ladle in the basic oxygen steelmaking process and are subjected to intensive periodic loads and exposure to high temperatures, are discussed. An automated condition monitoring system (ACMS) based on a magnetic flux leakage (MFL) flaw detector permanently installed on the rope under test is used. An algorithm for assessing the rope's residual tensile strength is provided, and specially developed software that submits a decision on the rope's condition to the crane operator is described. The practice of combining magnetic rope testing (MRT) and tensile strength analysis for the quantitative assessment of rope condition is reviewed. Practical issues are also discussed, such as how to establish the condition monitoring process, set loss thresholds for the rope's metallic cross-sectional area, and safely prolong the service life of the rope.
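As a toy illustration of the decision logic (the paper's actual residual-strength algorithm is more elaborate), the following Python sketch maps two typical MFL measurements, loss of metallic cross-sectional area and localized fault count, to an estimated residual strength and a discard verdict; all thresholds and penalty factors are assumptions invented for the example.

def rope_condition(lma_percent, lf_count, nominal_strength_kn,
                   discard_loss=15.0, lf_penalty=0.5):
    """Toy residual-strength assessment from MFL measurements.

    lma_percent : worst measured loss of metallic cross-sectional
                  area along the rope [%]
    lf_count    : number of localized faults (e.g. broken wires) in
                  the worst section
    lf_penalty  : assumed strength loss per localized fault [%]
    Thresholds and penalties are illustrative, not the authors'.
    """
    total_loss = lma_percent + lf_penalty * lf_count
    residual = nominal_strength_kn * (1.0 - total_loss / 100.0)
    verdict = "discard" if total_loss >= discard_loss else "fit for service"
    return residual, verdict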
APA, Harvard, Vancouver, ISO, and other styles
48

Qin, Hui Bin, Shu Fang Wu, Z. L. Hou, and Zong Yan Wang. "Research of Automated Process Planning Based on 3D Feature Component Model." Key Engineering Materials 392-394 (October 2008): 234–39. http://dx.doi.org/10.4028/www.scientific.net/kem.392-394.234.

Full text
Abstract:
This paper analyzes current process planning based on 3D feature models and points out existing disadvantages. The framework and key technologies that play a vital role in process planning are presented. A uniform manufacturing model was built on the component model by offsetting surface features and volumetric features. It implements the recognition and extraction of typical machining features, including visible entities and concealed technological attributes, and also sets up a hierarchical planning model. Supported by these techniques, a feature-oriented process planning generation system based on the Solidworks CAD platform was developed. Testing on related examples has validated the feasibility and practicability of the method, which enriches the way stock models are made in existing CAM software systems. It has significant meaning for promoting the integration of CAD/CAPP.
APA, Harvard, Vancouver, ISO, and other styles
49

Min, J. L., N. Rajabi, and A. Rahmani. "Comprehensive study of SIR: Leading SUT repository for software testing." Journal of Physics: Conference Series 1869, no. 1 (April 1, 2021): 012072. http://dx.doi.org/10.1088/1742-6596/1869/1/012072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Giannopoulos, Ioannis K., Mehdi Yasaee, and Nikolaos Maropakis. "Ballistic Impact and Virtual Testing of Woven FRP Laminates." Journal of Composites Science 5, no. 5 (April 22, 2021): 115. http://dx.doi.org/10.3390/jcs5050115.

Full text
Abstract:
The aim of the work was to investigate how well numerical simulations correlate with the experimental behaviour of a steel ball impacting at high velocity onto a 2 × 2 twill woven carbon composite laminate. The experimental setup consisted of a pressurised gas-gun able to shoot steel ball projectiles onto two different layup configurations of plates made of the same composite fabric. Subsequently, the experiments were replicated using the LSDYNA explicit finite element analysis software package. Progressive failure numerical models of two different fidelity levels were constructed. The higher fidelity model simulated each of the plies of the composite panels separately, tied together using cohesive zone modelling properties. The lower fidelity model consisted of a single-layer plate with artificial integration points for each ply. The simulation results were in satisfactory agreement with the experimental ones. While the extent of delamination was moderately underpredicted by the higher fidelity model, the general behaviour complied with the experimental results. The lower fidelity model was consistent in representing the damage of the panel during the impact and better predicted the impactor residual velocities due to better matching of the panel stiffness. Despite the competency of the higher fidelity model in capturing the damage of the laminate at a more detailed level, its computational cost was 80% higher than that of the lower fidelity case, which rendered it impractical, compared with the lower fidelity model, for use in larger models representing more substantial or more complex structures.
APA, Harvard, Vancouver, ISO, and other styles