Journal articles on the topic 'Test case selection and execution'




Consult the top 50 journal articles for your research on the topic 'Test case selection and execution.'


For each entry, the full text can be downloaded as a PDF and the abstract read online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Gladston, Angelin, H. Khanna Nehemiah, P. Narayanasamy, and A. Kannan. "Test Case Selection Using Feature Extraction and Clustering." International Journal of Knowledge-Based Organizations 8, no. 2 (April 2018): 18–31. http://dx.doi.org/10.4018/ijkbo.2018040102.

Abstract:
This article explains the selection of important parameters from execution patterns that capture how test cases exercise the application. Execution profiles are captured, and a new execution profile-based clustering approach is proposed for test case selection, using three new features: Function frequency, Branches taken, and Block percentage. The test cases are clustered using the extracted features. The experiments show that the proposed FBB approach selects a smaller set of more relevant, more fault-revealing test cases compared to the existing Function Call Profile approach.
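As an editorial illustration of the execution-profile clustering idea in this abstract, here is a minimal Python sketch; the test names, feature values, and choice of k are hypothetical, and the paper's actual feature extraction is not reproduced.

```python
# Minimal sketch: cluster tests by (function frequency, branches taken,
# block percentage) profiles, then pick one representative per cluster.
import math
import random

profiles = {  # hypothetical per-test feature vectors
    "t1": (12.0, 5.0, 0.40), "t2": (11.0, 6.0, 0.42),
    "t3": (80.0, 30.0, 0.90), "t4": (78.0, 29.0, 0.88),
    "t5": (40.0, 15.0, 0.60),
}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(list(points.values()), k)
    for _ in range(iters):
        clusters = {i: [] for i in range(k)}
        for name, p in points.items():
            i = min(range(k), key=lambda c: dist(p, centroids[c]))
            clusters[i].append(name)
        for i, members in clusters.items():
            if members:  # recompute centroid as the member mean
                pts = [points[m] for m in members]
                centroids[i] = tuple(sum(v) / len(pts) for v in zip(*pts))
    return clusters

clusters = kmeans(profiles, k=2)
# One representative per cluster: a small suite spanning the observed
# execution behaviours.
selected = [members[0] for members in clusters.values() if members]
print(clusters, selected)
```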
2

de Souza, Luciano S., Ricardo B. C. Prudêncio, Flavia de A. Barros, and Eduardo H. da S. Aranha. "Search based constrained test case selection using execution effort." Expert Systems with Applications 40, no. 12 (September 2013): 4887–96. http://dx.doi.org/10.1016/j.eswa.2013.02.018.

3

Böhmer, Kristof, and Stefanie Rinderle-Ma. "Automatic Business Process Test Case Selection: Coverage Metrics, Algorithms, and Performance Optimizations." International Journal of Cooperative Information Systems 25, no. 04 (December 2016): 1740002. http://dx.doi.org/10.1142/s0218843017400020.

Abstract:
Business processes describe and implement the business logic of companies, control human interaction, and invoke heterogeneous services during runtime. Ensuring the correct execution of processes is therefore crucial. Existing work addresses this challenge through process verification, but the highly dynamic nature of current processes and the deep integration and frequent invocation of third-party services limit the use of static verification approaches. One frequently utilized approach to address this limitation is to apply process tests. However, the complexity of process models is steadily increasing, so more and more test cases are required to assure process model correctness and stability during design and maintenance. Executing hundreds or even thousands of process model test cases leads to excessive test suite execution times and, therefore, high costs. Hence, this paper presents novel coverage metrics along with a genetic test case selection algorithm. Both enable the incorporation of user-driven test case selection requirements and the integration of different knowledge sources. In addition, techniques for optimizing the computation of test case selection are provided and evaluated. The effectiveness of the presented genetic test case selection algorithm is evaluated against five alternative test case selection algorithms.
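The genetic selection idea summarized above can be sketched in a few lines of Python; the coverage sets, costs, fitness weights, and GA parameters below are hypothetical placeholders, not the paper's actual coverage metrics.

```python
# Toy genetic test case selection: chromosomes are bitmasks over the
# test suite; fitness rewards branch coverage and penalizes cost.
import random

rng = random.Random(42)
coverage = [{1, 2}, {2, 3}, {4}, {1, 4, 5}, {3, 5}, {2, 5}]  # per test
cost = [3.0, 2.0, 1.0, 4.0, 2.5, 1.5]  # execution time per test

def fitness(bits):
    covered = set().union(*[coverage[i] for i, b in enumerate(bits) if b])
    total_cost = sum(c for c, b in zip(cost, bits) if b)
    return 10 * len(covered) - total_cost

def tournament(pop, k=2):
    return max(rng.sample(pop, k), key=fitness)

def crossover(a, b):  # one-point crossover
    point = rng.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(bits, rate=0.1):  # bit-flip mutation
    return [1 - b if rng.random() < rate else b for b in bits]

pop = [[rng.randint(0, 1) for _ in coverage] for _ in range(20)]
for _ in range(50):  # evolve for a fixed number of generations
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in pop]

best = max(pop, key=fitness)
print("selected tests:", [i for i, b in enumerate(best) if b])
```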
4

Xiao, Lei, Huaikou Miao, and Ying Zhong. "Test case prioritization and selection technique in continuous integration development environments: a case study." International Journal of Engineering & Technology 7, no. 2.28 (May 16, 2018): 332. http://dx.doi.org/10.14419/ijet.v7i2.28.13207.

Abstract:
Regression testing is a very important activity in continuous integration development environments, where software engineers frequently integrate new or changed code that then requires regression testing. Furthermore, regression testing in such environments operates under tight time constraints, and it is impossible to re-run all the test cases. Test case prioritization and selection techniques are therefore often used to make continuous integration processes more cost-effective. Based on multi-objective optimization, we present a test case prioritization and selection technique, TCPSCI, to satisfy time constraints and achieve testing goals in continuous integration development environments. Test cases are ordered and selected using historical failure data, test coverage code size, and test execution time: within the same change request, test cases that maximize code coverage, have shorter execution times, and reveal the latest faults receive higher priority. The case study results show that using TCPSCI is more cost-effective than manual prioritization.
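A minimal sketch of history/coverage/time-weighted prioritization under a CI time budget, in the spirit of the abstract above; the field names, weights, and sample data are hypothetical, not the paper's scoring.

```python
# Score tests by recent failures, coverage, and execution time, then
# greedily select the top-ranked tests that fit a CI time budget.
from dataclasses import dataclass

@dataclass
class Test:
    name: str
    recent_failures: int   # failures observed in recent CI cycles
    coverage: int          # covered code size, e.g. statements
    exec_time: float       # seconds

tests = [
    Test("t1", recent_failures=2, coverage=120, exec_time=30.0),
    Test("t2", recent_failures=0, coverage=300, exec_time=90.0),
    Test("t3", recent_failures=1, coverage=80, exec_time=5.0),
]

def score(t: Test) -> float:
    # Higher for recent failures and coverage, lower for long execution.
    return 5.0 * t.recent_failures + 0.01 * t.coverage - 0.05 * t.exec_time

budget = 60.0  # seconds available in this CI cycle
selected, used = [], 0.0
for t in sorted(tests, key=score, reverse=True):
    if used + t.exec_time <= budget:
        selected.append(t.name)
        used += t.exec_time
print(selected)  # ['t1', 't3']
```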
5

Mahapatra, R. P., Aparna Ranjith, and A. Kulothungan. "Framework for Optimizing Test Cases in Regression Testing." International Journal of Engineering & Technology 7, no. 3.12 (July 20, 2018): 444. http://dx.doi.org/10.14419/ijet.v7i3.12.16126.

Abstract:
Software, once developed, is subject to continuous change. Software regression testing can reduce testing effort by selecting only the required test cases and ordering them to test the software after it has been modified. In order to improve the fault detection rate, the selection of efficient test cases and their order of execution are important. This is where test case selection comes into the picture: the selection process should find the most efficient test cases that can fully and functionally test the modified software. This contributes to an improved fault detection rate, which provides faster feedback on the system under test and lets software engineers begin correcting faults as early as possible. In this paper, an approach for test case selection is proposed that considers the combined effect of three parameters, History, Coverage, and Requirement, in order to improve the selection process. This also reduces the rejection of efficient test cases compared with conventional methods, most of which use a single parameter for test case selection. The selected test cases are further optimized using a genetic algorithm. Results indicate that the proposed technique is much more efficient at selecting test cases than conventional techniques, thereby improving the fault detection rate.
6

Liu, Yu Lin, Yan Wang, and Jian Tao Zhou. "Design and Implementation of Test Case Selection Based on Map Reduce." Applied Mechanics and Materials 654 (October 2014): 378–81. http://dx.doi.org/10.4028/www.scientific.net/amm.654.378.

Abstract:
In traditional software testing, a large collection of test cases for the system under test is generated automatically, but executing all of them is not possible in practice. Normally, a specific function of the system under test is tested, so choosing the test cases relevant to that function is very important. This paper focuses on selecting test cases for a specific function of the system under test based on a CPN model, using a purpose-based method. Test case selection involves a great deal of repeated computation, a characteristic that fits well with the parallelism offered by cloud computing. Using MapReduce programming on the Hadoop platform, a test case selection tool is designed to improve the efficiency and service capability of test selection; the experimental results are consistent with the expected results.
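The map/reduce shape of this selection task can be mimicked in plain Python; the paper runs on Hadoop, so this sketch only illustrates the programming model, with hypothetical test traces.

```python
# Map: emit (function, test) pairs. Reduce: group tests per function,
# yielding an index that answers "which tests exercise this function?".
from itertools import groupby
from operator import itemgetter

traces = {  # test case -> functions its execution touches
    "t1": ["login", "audit"],
    "t2": ["checkout", "audit"],
    "t3": ["login", "profile"],
}

def mapper(test, funcs):
    for f in funcs:
        yield (f, test)

def reducer(func, tests):
    return func, sorted(tests)

pairs = sorted(
    (kv for test, funcs in traces.items() for kv in mapper(test, funcs)),
    key=itemgetter(0),
)
index = dict(
    reducer(func, [t for _, t in group])
    for func, group in groupby(pairs, key=itemgetter(0))
)
# Selecting the tests for one function under test:
print(index["login"])  # ['t1', 't3']
```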
7

Shingadiya, Chetan J., et al. "Genetic Algorithm for Test Suite Optimization: An Experimental Investigation of Different Selection Methods." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 10, 2021): 3778–87. http://dx.doi.org/10.17762/turcomat.v12i3.1661.

Abstract:
Software testing is an important aspect of the real-time software development process and assures the quality of the software product. A few important issues in software testing need attention during development: the generation of effective test cases and test suites, their optimization, and the testing time of test cases and test suites. To address these optimization issues, we propose a new approach to test suite optimization using a genetic algorithm (GA). Genetic algorithms are evolutionary in nature, so researchers often use them for optimization problems. In this paper, we study various selection methods, namely tournament selection, rank selection, and roulette wheel selection, and apply the genetic algorithm to various programs to generate optimized test suites with respect to parameters such as the fitness values of test cases and test suites and minimum execution time after a preset number of generations. Our experimental investigation shows that tournament selection performs very well compared to the other methods with respect to fitness-based selection of test cases and test suites, testing time, and number of requirements.
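Minimal implementations of the three GA selection methods the study compares, with hypothetical fitness values; the paper's scaling and tie-breaking details are not reproduced.

```python
# Tournament, roulette wheel, and rank selection over a toy population.
import random

rng = random.Random(7)
population = ["s1", "s2", "s3", "s4"]
fitness = {"s1": 4.0, "s2": 9.0, "s3": 1.0, "s4": 6.0}

def tournament_selection(pop, k=2):
    # Pick k individuals at random, keep the fittest.
    return max(rng.sample(pop, k), key=fitness.get)

def roulette_wheel_selection(pop):
    # Probability proportional to raw fitness.
    return rng.choices(pop, weights=[fitness[p] for p in pop], k=1)[0]

def rank_selection(pop):
    # Probability proportional to rank (1 = least fit), not raw fitness.
    ranked = sorted(pop, key=fitness.get)
    return rng.choices(ranked, weights=range(1, len(ranked) + 1), k=1)[0]

for select in (tournament_selection, roulette_wheel_selection, rank_selection):
    print(select.__name__, select(population))
```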
8

Peng, She Qiang, and Ze Yi Tian. "Design and Realization of IE Vulnerabilities Mining Based on Fuzz Testing." Applied Mechanics and Materials 651-653 (September 2014): 2032–35. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.2032.

Abstract:
To detect Internet Explorer browser vulnerabilities, a distributed miner using a test method based on the DOM tree is designed and implemented, together with dynamic selection of test case execution; experiments found 50 IE vulnerabilities.
9

Chen, Zhenyu, Yongwei Duan, Zhihong Zhao, Baowen Xu, and Ju Qian. "Using Program Slicing to Improve the Efficiency and Effectiveness of Cluster Test Selection." International Journal of Software Engineering and Knowledge Engineering 21, no. 06 (September 2011): 759–77. http://dx.doi.org/10.1142/s0218194011005487.

Abstract:
Cluster test selection is a new successful approach to select a subset of the existing test suite in regression testing. In this paper, program slicing is introduced to improve the efficiency and effectiveness of cluster test selection techniques. A static slice is computed on the modified code. The execution profile of each test case is filtered by the program slice to highlight the parts of software affected by modification, called slice filtering. The slice filtering reduces the data dimensions for cluster analysis, such that the cost of cluster test selection is saved dramatically. The experiment results show that the slice filtering techniques could reduce the cost of cluster test selection significantly and could also improve the effectiveness of cluster test selection modestly. Therefore, cluster test selection by filtering has more potential scalability to deal with large software.
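A minimal sketch of the slice-filtering step described above: execution profiles are projected onto the statements of a static slice before clustering, shrinking the data dimensions. The statement ids, slice, and profiles are hypothetical.

```python
# Keep only the profile dimensions the static slice says can be affected
# by the modification; the reduced vectors then feed cluster analysis.

slice_stmts = {3, 5, 8}  # statements in the static slice of the change

profiles = {  # test -> full execution profile (statement -> hit count)
    "t1": {1: 4, 3: 2, 5: 0, 8: 1, 9: 7},
    "t2": {1: 4, 3: 0, 5: 0, 8: 0, 9: 7},
    "t3": {1: 1, 3: 2, 5: 3, 8: 1, 9: 0},
}

def slice_filter(profile, stmts):
    return tuple(profile.get(s, 0) for s in sorted(stmts))

filtered = {t: slice_filter(p, slice_stmts) for t, p in profiles.items()}
print(filtered)
# {'t1': (2, 0, 1), 't2': (0, 0, 0), 't3': (2, 3, 1)}
# t2 never reaches the modified code, so it stands out after filtering.
```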
10

Ping, Z., W. Yueyong, D. Fangjing, and G. Xin. "Research on design method of human resource system test case in construction enterprise." IOP Conference Series: Materials Science and Engineering 1242, no. 1 (April 1, 2022): 012029. http://dx.doi.org/10.1088/1757-899x/1242/1/012029.

Abstract:
In the test design process for human resource systems in construction enterprises, various test design methods are used, such as equivalence class partitioning, boundary value analysis, cause-effect graphing, and error guessing. Choosing the correct test design method determines the rationality of test case design and the efficiency of test execution, so selecting an appropriate method plays a vital role in test case design. This paper briefly describes the basic use of equivalence class partitioning and boundary value analysis and the relationship between them, in the hope that it can serve as a reference for the study of human resource system test cases in construction enterprises.
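A small worked example of the two methods the paper relates, equivalence class partitioning and boundary value analysis, assuming a hypothetical input field valid between 1 and 100 (the concrete field and range are not taken from the paper).

```python
# Derive test inputs by equivalence classes and by boundary values.
VALID_MIN, VALID_MAX = 1, 100

equivalence_classes = {  # one representative per class
    "below range (invalid)": 0,
    "within range (valid)": 50,
    "above range (invalid)": 101,
}

boundary_values = [VALID_MIN - 1, VALID_MIN, VALID_MIN + 1,
                   VALID_MAX - 1, VALID_MAX, VALID_MAX + 1]

def accepts(days: int) -> bool:
    # Stub for the system under test: accepts values in the valid range.
    return VALID_MIN <= days <= VALID_MAX

for label, value in equivalence_classes.items():
    print(f"EP {label}: input={value} accepted={accepts(value)}")
for value in boundary_values:
    print(f"BVA input={value} accepted={accepts(value)}")
```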
11

Rehan, Muhammad, Norhalina Senan, Muhammad Aamir, Ali Samad, Mujtaba Husnain, Noraini Ibrahim, Sikandar Ali, and Hizbullah Khatak. "A Systematic Analysis of Regression Test Case Selection: A Multi-Criteria-Based Approach." Security and Communication Networks 2021 (September 21, 2021): 1–11. http://dx.doi.org/10.1155/2021/5834807.

Abstract:
In applied software engineering, algorithms for selecting appropriate test cases are used to perform regression testing. The key objective of this activity is to make sure that modifications to the system under test (SUT) have no adverse impact on the overall functioning of the updated software. It is concluded from the literature that the efficacy of test case selection depends on the following metrics: the execution cost of the test case, the lines of code covered in unit time (code coverage), the ability to capture potential faults, and the code modifications. Furthermore, the approaches to regression testing developed so far have generated results by focusing on only one or two of these parameters. In this paper, our objectives are twofold: the first is to explore the role of each metric in detail; the second is to study the combined effect of these metrics on test case selection tasks that pursue more than one objective. A detailed and comprehensive review of the work related to regression testing is provided in a distinct and principled way; this survey will be useful for researchers contributing to the field of regression testing. Our systematic literature review (SLR) includes noteworthy work published from 2007 to 2020. We observed that about 52 relevant studies focused on all four metrics to perform their respective tasks. The results also revealed that about 30% of the different categories of regression test case selection reported results using metaheuristic regression test selection (RTS), and about 31% of the literature reported results using generic regression test case selection techniques. Most researchers focus on the following datasets: the Software-artifact Infrastructure Repository (SIR), JodaTime, TreeDataStructure, and Apache Software Foundation projects. For validation purposes, the following parameters were considered: inclusiveness, precision, recall, and retest-all.
12

Gupta, Atulya, and Rajendra Prasad Mahapatra. "Multifactor Algorithm for Test Case Selection and Ordering." Baghdad Science Journal 18, no. 2(Suppl.) (June 20, 2021): 1056. http://dx.doi.org/10.21123/bsj.2021.18.2(suppl.).1056.

Abstract:
Regression testing, being expensive, requires optimization. Typically, the optimization of test cases results in selecting a reduced set or subset of test cases, or in prioritizing test cases to detect potential faults at an earlier phase. Many earlier studies used heuristic-dependent mechanisms to attain optimality while reducing or prioritizing test cases. Nevertheless, those studies lacked systematic procedures to manage the issue of tied test cases. Moreover, evolutionary algorithms such as genetic algorithms often help in reducing test cases together with a concurrent decrease in computational runtime; however, when the fault detection capacity must be examined along with other parameters, such methods fall short. The current research is motivated by this observation and proposes a multifactor algorithm incorporating genetic operators and powerful features. A factor-based prioritizer is introduced for proper handling of the tied test cases that emerge during re-ordering. Besides this, a Cost-based Fine Tuner (CFT) is embedded in the study to reveal stable test cases for processing. The effectiveness of the outcome procured through the proposed minimization approach is analyzed and compared with a specific rule-based heuristic method and a standard genetic methodology. Intra-validation of the result achieved by the reduction procedure is performed graphically. For the proposed prioritization scheme, this study contrasts randomly generated sequences with the procured re-ordered test sequences over 10 benchmark programs. Experimental analysis showed that the proposed system achieved a reduction of 35-40% in testing effort by identifying and executing stable and coverage-effective test cases at an earlier phase.
13

Yu, Y. T., S. F. Tang, P. L. Poon, and T. Y. Chen. "A Study on a Path-Based Strategy for Selecting Black-Box Generated Test Cases." International Journal of Software Engineering and Knowledge Engineering 11, no. 02 (April 2001): 113–38. http://dx.doi.org/10.1142/s0218194001000475.

Abstract:
Various black-box methods for the generation of test cases have been proposed in the literature. Many of these methods, including the category-partition method and the classification-tree method, follow the approach of partition testing, in which the input domain is partitioned into subdomains according to important aspects of the specification, and test cases are then derived from the subdomains. Though comprehensive in terms of these important aspects, execution of all the test cases so generated may not be feasible under the constraint of tight testing resources. In such circumstances, there is a need to select a smaller subset of test cases from the original test suite for execution. In this paper, we propose the use of white-box information to guide the selection of test cases from the original test suite generated by a black-box testing method. Furthermore, we have developed some techniques and algorithms to facilitate the implementation of our approach, and demonstrated its viability and benefits by means of a case study.
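One simple way to use white-box information for subset selection, in the spirit of the abstract above, is a greedy cover over executed paths; the path ids per test are hypothetical, and the paper's own techniques are not reproduced.

```python
# Greedily keep the test covering the most not-yet-covered paths until
# the original suite's path coverage is preserved.
paths = {  # test case -> execution paths it exercises
    "t1": {"p1", "p2"},
    "t2": {"p2"},
    "t3": {"p3", "p4"},
    "t4": {"p1", "p4"},
}

remaining = set().union(*paths.values())
selected = []
candidates = dict(paths)
while remaining:
    best = max(candidates, key=lambda t: len(paths[t] & remaining))
    if not paths[best] & remaining:
        break  # no candidate adds coverage (unreachable paths left)
    selected.append(best)
    remaining -= paths[best]
    del candidates[best]

print(selected)  # ['t1', 't3'] covers all four paths
```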
14

Maspupah, Asri, and Akhmad Bakhrun. "Perbandingan Kemampuan Regression Testing Tool pada Regression Test Selection: STARTS dan Ekstazi" [Comparison of Regression Testing Tool Capabilities for Regression Test Selection: STARTS and Ekstazi]. JTT (Jurnal Teknologi Terapan) 7, no. 1 (July 7, 2021): 59. http://dx.doi.org/10.31884/jtt.v7i1.319.

Abstract:
Regression testing is an essential activity in software development when requirements change. In practice, regression testing requires a lot of time, so an optimal strategy is needed. One approach that can be used to speed up execution time is Regression Test Selection (RTS). Practitioners and academics have started developing tools to optimize the regression testing process; among them, STARTS and Ekstazi are the most popular regression testing tools among academics for running test case selection algorithms. This article compares the capabilities of the STARTS and Ekstazi features using feature parameter evaluation. Both tools were tested with the same input data in the form of a System Under Test (SUT) and test cases. The parameters used in the comparison are platform technology, test case selection, functionality, usability and performance efficiency, and the advantages and disadvantages of each tool. The results show the differences and similarities between the features of STARTS and Ekstazi, so practitioners can choose the tool that suits their regression testing needs. In addition, the experimental results show that Ekstazi is more precise in selecting important test cases and more efficient than STARTS and than regression testing with retest-all.
15

Singh, Rajvir, Anita Singhrova, and Rajesh Bhatia. "Optimized Test Case Generation for Object Oriented Systems Using Weka Open Source Software." International Journal of Open Source Software and Processes 9, no. 3 (July 2018): 15–35. http://dx.doi.org/10.4018/ijossp.2018070102.

Abstract:
Detection of fault-prone classes helps software testers generate effective class-level test cases. In this article, a novel technique is presented for optimized test case generation for the ant-1.7 open source software. Class-level object-oriented (OO) metrics are considered an effective means to find fault-prone classes. The open source software ant-1.7 is used as a case study to evaluate the proposed technique. The proposed mathematical model is the first of its kind generated using the Weka open source software to select effective OO metrics. Effective and ineffective OO metrics are identified using feature selection techniques for generating test cases that cover fault-prone classes. In this methodology, only effective metrics are considered when assigning weights to test paths. The results indicate that the proposed methodology is effective and efficient, as the average fault exposition potential of the generated test cases is 90.16% and the saving in test case execution time is 45.11%.
16

Anwar, Zeeshan, Ali Ahsan, and Cagatay Catal. "Neuro-Fuzzy Modeling for Multi-Objective Test Suite Optimization." Journal of Intelligent Systems 25, no. 2 (April 1, 2016): 123–46. http://dx.doi.org/10.1515/jisys-2014-0152.

Abstract:
Regression testing is a testing activity that ensures that source code changes do not adversely affect the unmodified portions of the software. This activity may be very expensive in some cases due to the time required to execute the test suite. In order to execute regression tests in a cost-effective manner, optimization of the regression test suite is crucial. This optimization can be achieved by applying test suite reduction (TSR), regression test selection (RTS), or test case prioritization (TCP) techniques. In this paper, we designed and implemented an expert system for the TSR problem using neuro-fuzzy modeling approaches known as "adaptive neuro-fuzzy inference system with grid partitioning" (ANFIS-GP) and "adaptive neuro-fuzzy inference system with subtractive clustering" (ANFIS-SC). Two case studies were performed to validate the model, and fuzzy logic, multi-objective genetic algorithms (MOGAs), the non-dominated sorting genetic algorithm (NSGA-II), and multi-objective particle swarm optimization (MOPSO) were used for benchmarking. The performance of the models was evaluated in terms of reduction of test suite size, reduction in fault detection rate, reduction in test suite execution time, and reduction in requirement coverage. The experimental results showed that our ANFIS-based optimization system is very effective at optimizing the regression test suite and provides better performance than the other approaches evaluated in this study. The size and execution time of the test suite are reduced by up to 50%, whereas the loss in fault detection rate is between 0% and 25%.
17

Jung, Pilsu, Sungwon Kang, and Jihyun Lee. "Efficient Regression Testing of Software Product Lines by Reducing Redundant Test Executions." Applied Sciences 10, no. 23 (December 4, 2020): 8686. http://dx.doi.org/10.3390/app10238686.

Abstract:
Regression testing for software product lines (SPLs) is challenging because it must ensure that all the products of a product family work correctly whenever changes are made. One approach to reducing the cost of regression testing is regression test selection (RTS), which selects a subset of regression test cases. However, even when RTS is applied, SPL regression testing can still be expensive because, in the product line context, each test case can be executed on more than one product that reuses the test case, which typically results in a large number of test executions. A promising direction is to eliminate redundant test executions of test cases. We propose a method that, given a test case, identifies a set of products on which the test case will cover the same sequence of source code statements and produce the same testing results, and then excludes these products from the products to which the test case is applied. The evaluation showed that, compared with the full selection approach and with repeatedly applying an RTS method for a single software system, our method reduced the number of test executions by 59.3% and 40.0%, respectively.
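A minimal sketch of the redundancy check described above: if a test case would cover the same statement sequence on two products, it is executed on only one of them. The products, features, and trace stub below are hypothetical stand-ins for real SPL artifacts.

```python
# Deduplicate test executions across products by the statement trace the
# test would produce on each product.

def trace(product_features, test):
    # Stub for "predict the statement sequence of `test` on a product";
    # here the trace depends only on the features the test touches.
    relevant = ("logging", "payment")
    return tuple(f for f in relevant if f in product_features)

products = {
    "P1": {"logging", "payment", "export"},
    "P2": {"logging", "payment"},  # same relevant features as P1
    "P3": {"logging"},
}

seen, to_run = set(), []
for product, features in products.items():
    t = trace(features, "test_checkout")
    if t not in seen:  # first product with this statement sequence
        seen.add(t)
        to_run.append(product)
# P2 is skipped: its trace equals P1's, so its execution is redundant.
print(to_run)  # ['P1', 'P3']
```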
18

Wang, Rongcun, Rubing Huang, Yansheng Lu, and Binbin Qu. "Clustering Analysis of Function Call Sequence for Regression Test Case Reduction." International Journal of Software Engineering and Knowledge Engineering 24, no. 08 (October 2014): 1197–223. http://dx.doi.org/10.1142/s0218194014500387.

Abstract:
Regression test case reduction aims at selecting a representative subset from the original test pool while retaining the largest possible fault detection capability. Cluster analysis has been proposed and applied for selecting an effective test case subset in regression testing: it groups test cases into clusters based on the similarity of historical execution profiles. In previous studies, historical execution profiles are represented as binary or numeric function coverage vectors. Vector-based similarity approaches consider only which functions or statements are covered and the number of times they are executed; they do not take the relations and sequential information between function calls into account. In this paper, we propose cluster analysis of function call sequences to improve the fault detection effectiveness of regression testing even further. A test is represented as a function call sequence that includes the relations and sequential information between function calls. The distance between function call sequences is measured by both the Levenshtein distance and the Euclidean distance. To assess the effectiveness of our approaches, we designed and conducted experimental studies on five subject programs. The experimental results indicate that our approaches are statistically superior, in terms of fault detection effectiveness, to approaches based on the similarity of vectors (i.e., binary and numeric vectors) and to random and greedy function-coverage-based maximization test case reduction techniques. With respect to cost-effectiveness, cluster analysis of sequences measured using the Euclidean distance is more effective than using the Levenshtein distance.
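Minimal versions of the two distances the paper applies to function call sequences; the example sequences are hypothetical.

```python
# Levenshtein distance sees call order; Euclidean distance over call
# frequencies does not, which is the contrast the paper exploits.
import math
from collections import Counter

def levenshtein(a, b):
    # Classic dynamic-programming edit distance over call sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def euclidean(a, b):
    # Distance between call-frequency vectors; call order is ignored.
    ca, cb = Counter(a), Counter(b)
    return math.sqrt(sum((ca[f] - cb[f]) ** 2 for f in set(ca) | set(cb)))

s1 = ["init", "parse", "eval", "emit"]
s2 = ["init", "eval", "parse", "emit"]
print(levenshtein(s1, s2))  # 2: same calls, different order
print(euclidean(s1, s2))    # 0.0: identical frequencies
```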
19

Wu, Shih-Da, and Jung-Hua Lo. "The MADAG Strategy for Fault Location Techniques." Applied Sciences 13, no. 2 (January 6, 2023): 819. http://dx.doi.org/10.3390/app13020819.

Abstract:
Spectrum-based fault localization (SBFL), which utilizes spectrum information from test cases to calculate the suspiciousness of each statement in a program, can reduce developers' effort. However, applying redundant test cases from a test suite to fault localization incurs a heavy burden, especially in resource-restricted environments, and it is expensive and infeasible to inspect the results of each test input. Prioritizing and selecting appropriate test cases is important for the practical application of SBFL, and it must be ensured that applying the selected tests to SBFL achieves approximately the fault localization effectiveness of the whole test suite. This paper presents a test case prioritization/selection strategy, the Minimal Aggregate of the Diversity of All Groups (MADAG), which prioritizes and selects test cases using information on the diversity of each test case's execution trace. We implemented and applied the MADAG strategy to 233 faulty versions of the Siemens and UNIX programs from the Software-artifact Infrastructure Repository. The experiments show that (1) the MADAG strategy uses only 8.99 and 14.27 test cases, with an average of 18, from the Siemens and UNIX test suites, respectively, while SBFL achieves approximately the fault localization effectiveness obtained with all test cases and outperforms the previous best test case prioritization method; and (2) applying the whole test suite may not achieve better fault localization effectiveness than the tests selected by the MADAG strategy.
20

Song, Yang, Xuzhou Zhang, and Yun-Zhan Gong. "Infeasible Path Detection Based on Code Pattern and Backward Symbolic Execution." Mathematical Problems in Engineering 2020 (May 25, 2020): 1–12. http://dx.doi.org/10.1155/2020/4258291.

Abstract:
This paper sets out to reveal the relationship between code patterns and infeasible paths and gives advice on the selection of infeasible path detection techniques. Many program paths prove to be infeasible, which leads to imprecision and low efficiency in program analysis. Detection of infeasible paths is required in many areas of software engineering, including test coverage analysis, test case generation, and security vulnerability analysis. The immediate cause of path infeasibility is the contradiction of path constraints, whose distribution affects the performance of different program analysis techniques; however, there is currently a lack of research on the distribution of contradictory constraints. We propose a code pattern based on an empirical study of infeasible paths; the statistical results prove the correlation of the pattern with contradictory constraints. We then develop a path feasibility detection method based on backward symbolic execution. The performance of the proposed technique is evaluated from two aspects: the efficiency of detecting infeasible paths for specific program elements, and the improvement obtained by applying the technique to code coverage testing.
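A tiny illustration of why paths become infeasible: conjoined branch constraints can contradict each other. Real detection (the paper uses backward symbolic execution) handles far richer constraints; this sketch only checks interval contradictions on a single integer variable.

```python
# Intersect interval constraints on one integer variable; an empty
# interval means the path constraints contradict and the path is
# infeasible.

def feasible(constraints):
    """constraints: list of (op, bound) over a single integer variable."""
    lo, hi = float("-inf"), float("inf")
    for op, b in constraints:
        if op == ">":
            lo = max(lo, b + 1)
        elif op == ">=":
            lo = max(lo, b)
        elif op == "<":
            hi = min(hi, b - 1)
        elif op == "<=":
            hi = min(hi, b)
    return lo <= hi

# Path 1: if (x > 10) ... if (x < 5)  -> contradiction, path infeasible
print(feasible([(">", 10), ("<", 5)]))   # False
# Path 2: if (x > 10) ... if (x <= 20) -> satisfiable, path feasible
print(feasible([(">", 10), ("<=", 20)])) # True
```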
21

Fabris, D. M., P. Brambilla, C. Conese, M. M. Maspes, R. Sala, and M. Tarabini. "METROLOGICAL CHARACTERIZATION OF OPTICAL 3D COORDINATE MEASUREMENT SYSTEMS – COMPARISON OF ALTERNATIVE HARDWARE DESIGNS AS PER ISO 10360." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2/W2-2022 (December 8, 2022): 39–43. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-w2-2022-39-2022.

Abstract:
This research focuses on the metrological characterization of Optical 3D Coordinate Measurement Systems (O3DCMS). The focus is on identifying and executing the procedure indicated by the currently active technical standards for industrial O3DCMS, for their metrological assessment, objective comparison, and performance tracking. This work leads to the implementation of ad hoc software for executing the standard tests defined by the ISO 10360-13 standard. The implemented software application is employed in a real-case scenario for evaluating the performance of an industrial 3D scanner based on structured light. The specific hardware components assessed are two light sources of the active stereoscopic vision system, namely Digital Light Projectors (DLP). The case study applies the procedures and metrics indicated by the active standards to objectively compare two alternative hardware designs of the system under test, resulting in the identification of the most performant hardware configuration and allowing the selection of the best system design based on objective metrological parameters.
22

Gupta, Varun, Durg Singh Chauhan, and Kamlesh Dutta. "Hybrid Regression Testing Based on Path Pruning." International Journal of Systems and Service-Oriented Engineering 5, no. 1 (January 2015): 35–55. http://dx.doi.org/10.4018/ijssoe.2015010103.

Abstract:
Regression testing has been studied by various researchers for developing and testing the quality of software. Regression testing aims at re-execution of evolved software code to ensure that no new errors have been introduced during modification. Since re-execution of all test cases is not feasible, selecting a manageable number of test cases that execute the modified code with a good fault detection rate is a problem. In the past few years, various hybrid regression testing approaches have been proposed and successfully employed for software testing, aiming at reducing the number of test cases and at higher fault detection capability. These techniques are based on sequences of selection, prioritization, and minimization of the test suite. However, they suffer from major drawbacks such as improper consideration of control dependencies and neglect of unaffected fragments of code, and they have been employed on hypothetical or simple programs with small test suites. The present paper proposes hybrid regression testing, a combination of test case selection, test case prioritization, and test suite minimization. The technique works at the statement level and is based on finding the paths containing statements that affect, or are affected by, the addition, deletion, or modification (both control and data dependency) of variables in statements. Modifications to the code may cause ripple effects, resulting in faulty execution; the hybrid approach aims to detect such faults with fewer test cases, a reduction made possible by the decreased number of paths to be tested. A web-based framework to automate and parallelize this testing technique to the maximum extent, making it well suited for globally distributed environments, is also proposed. When implemented as a tool, the framework can handle a large pool of test cases and make use of parallel MIMD architectures such as multicore systems. The technique is applied to a prototype live system, and the results are compared with a recently proposed hybrid regression testing approach against the parameters of interest. The optimized results indicate the effectiveness of the approach in reducing effort, cost, and testing time in general, and increment delivery time in particular.
23

Geetha Devasena, M. S., G. Gopu, and M. L. Valarmathi. "Automated and Optimized Software Test Suite Generation Technique for Structural Testing." International Journal of Software Engineering and Knowledge Engineering 26, no. 01 (February 2016): 1–13. http://dx.doi.org/10.1142/s0218194016500017.

Abstract:
Software testing consumes 50% of the total software development cost, and test case design is of central importance for quality. Manual test suite generation is a time-consuming and tedious task that needs automation. Unit testing is normally done under stringent time schedules by developers, or rarely by testers. In structural testing, it is not possible to check all possible test data exhaustively, and the quality of testing depends heavily on the performance of a single developer or tester. Thus automation and optimization are required in generating test data to assist the developer or tester with the selection of appropriate test data. A novel hybrid technique is developed to automate the test suite generation process for the branch coverage criterion using evolutionary testing. The hybrid technique applies both a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) to automatically generate test data, improving the efficiency and effectiveness of test case generation compared to applying GA or PSO alone. Evaluation shows that the hybrid technique reduces the number of iterations by 47% compared to GA and PSO applied separately, and reduces execution time by 52% compared to GA and 48% compared to PSO.
24

Nguyen, Kien Trung, and Minh Dung Ha. "Application of gas-assisted gravity drainage (GAGD) to improve oil recovery of Rang Dong basement reservoir." Petrovietnam Journal 10 (November 1, 2022): 35–45. http://dx.doi.org/10.47800/pvj.2022.10-05.

Abstract:
As an option to improve the ultimate oil recovery factor of the Rang Dong field, a feasibility study of gas-assisted gravity drainage (GAGD) application was carried out for the fractured basement reservoir, with a single-well Huff 'n' Puff pilot test applied to a high water-cut producer. This paper aims to provide an in-depth understanding of this case study, including details on candidate selection, the gas injection scheme, on-site execution, flow-back results, post-job review, and lessons learned. The GAGD pilot test recorded a good oil rate and low water-cut while flowing back after gas injection and shut-in for gas segregation, which suggests the positive effectiveness of GAGD to some degree. Expanding the GAGD application to other wells and areas in the field would be encouraged in any similar situation. In addition, the results of this pilot test shed light on further optimisation of candidate selection and the gas injection scheme through material balance analysis and reservoir simulation, respectively.
25

Cao, Heling, Zhenghaohe He, Yangxia Meng, and Yonghe Chu. "Automatic Repair of Java Programs Weighted Fusion Similarity via Genetic Programming." Information Technology and Control 51, no. 4 (December 12, 2022): 738–56. http://dx.doi.org/10.5755/j01.itc.51.4.30515.

Abstract:
Recently, automated program repair techniques have been proven useful in the software development process. However, reducing the large search space and the randomness of ingredient selection remains a challenging problem. In this paper, we propose a repair approach for buggy programs based on weighted fusion similarity and genetic programming. Firstly, the list of modification points is generated by selecting modification points from the suspicious statements. Secondly, the bug repair ingredient is selected according to the value of the weighted fusion similarity, and the repair ingredient is applied to the corresponding modification points according to the selected operator. Finally, we use test case execution information to prioritize the test cases and improve individual verification efficiency. We have implemented our approach as a tool called WSGRepair. We evaluate WSGRepair on Defects4J and compare it with other program repair techniques. Experimental results show that our approach improves the success rate of buggy program repair by 28.6%, 64%, 29%, 64% and 112% compared with GenProg, CapGen, SimFix, jKali and jMutRepair, respectively.
26

Yarmolik, Svetlana. "Address Sequences and Backgrounds with Different Hamming Distances for Multiple Run March Tests." International Journal of Applied Mathematics and Computer Science 18, no. 3 (September 1, 2008): 329–39. http://dx.doi.org/10.2478/v10006-008-0030-y.

Abstract:
It is widely known that pattern-sensitive faults are the most difficult faults to detect during RAM testing. One technique that can be used for effective detection of this kind of fault is the multi-background test technique, in which the memory test is executed multiple times over different backgrounds. In this case, to achieve high fault coverage, the structure of the consecutive memory backgrounds and the address sequence are very important. This paper defines requirements that have to be taken into account in the background and address sequence selection process. A set of backgrounds satisfying those requirements guarantees very high fault coverage for multi-background memory testing.
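A small sketch of the background-selection idea: successive memory backgrounds differ in a controlled number of bit positions, i.e. at a chosen Hamming distance. The word width and the chosen distances are hypothetical.

```python
# Generate a chain of memory backgrounds with prescribed Hamming
# distances between consecutive backgrounds.
import random

rng = random.Random(1)
WIDTH = 8  # memory background width in bits

def hamming(a, b):
    return bin(a ^ b).count("1")

def next_background(bg, distance):
    # Flip exactly `distance` randomly chosen, distinct bit positions.
    for p in rng.sample(range(WIDTH), distance):
        bg ^= 1 << p
    return bg

backgrounds = [0b00000000]
for d in (4, 4, 8):  # desired distances between consecutive backgrounds
    backgrounds.append(next_background(backgrounds[-1], d))

for a, b in zip(backgrounds, backgrounds[1:]):
    print(f"{a:08b} -> {b:08b} (Hamming distance {hamming(a, b)})")
```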
27

Willar, Debby, Estrellita V. Y. Waney, and Novatus Senduk. "The execution of infrastructure project life-cycle." MATEC Web of Conferences 258 (2019): 02017. http://dx.doi.org/10.1051/matecconf/201925802017.

Abstract:
The significant increase in Indonesian construction sector activity nowadays is also influenced by government financing of infrastructure projects. The government therefore needs to ensure that infrastructure projects are consistently constructed along the project life-cycle. The phases of the infrastructure project life-cycle implemented in the Ministry of Public Works and Housing consist of 1) planning, 2) selection of service providers, 3) construction processes, and 4) construction product hand-over. Data collection using three rounds of Delphi study was undertaken to empirically test the level of implementation of the project life-cycle indicators, which are used as standards for constructing infrastructure projects. The respondents came from sectors executing infrastructure projects in the areas of Cipta Karya, Bina Marga, Sumber Daya Air, and Penyediaan Perumahan. The studies concluded that the sectors have understood and implemented most of the indicators; however, different levels of implementation exist, along with barriers to implementation. From the studies, profiles of the execution of the infrastructure project life-cycle were provided as references for the government to evaluate the performance of the sectors and to take corrective actions to improve their performance.
28

Spieker, Helge, Arnaud Gotlieb, and Morten Mossige. "Rotational Diversity in Multi-Cycle Assignment Problems." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7724–31. http://dx.doi.org/10.1609/aaai.v33i01.33017724.

Abstract:
In multi-cycle assignment problems with rotational diversity, a set of tasks has to be repeatedly assigned to a set of agents. Over multiple cycles, the goal is to achieve a high diversity of assignments from tasks to agents. At the same time, the assignments’ profit has to be maximized in each cycle. Due to changing availability of tasks and agents, planning ahead is infeasible and each cycle is an independent assignment problem but influenced by previous choices. We approach the multi-cycle assignment problem as a two-part problem: Profit maximization and rotation are combined into one objective value, and then solved as a General Assignment Problem. Rotational diversity is maintained with a single execution of the costly assignment model. Our simple, yet effective method is applicable to different domains and applications. Experiments show the applicability on a multi-cycle variant of the multiple knapsack problem and a real-world case study on the test case selection and assignment problem, an example from the software engineering domain, where test cases have to be distributed over compatible test machines.
29

Brodskiy, Victor. "Organization of safe execution of works with the use of trapping nets." E3S Web of Conferences 258 (2021): 09005. http://dx.doi.org/10.1051/e3sconf/202125809005.

Abstract:
The organization of safe execution of building and assembly works when erecting and reconstructing buildings and structures of various purposes is presented, based on the application of trapping nets to prevent industrial injuries in case of people or items falling from height. The structural layout of a safety (catching) device with pivotally mounted brackets and a freely hanging net is considered. The dynamic loads that arise when items fall onto trapping nets were identified theoretically as a function of impact deceleration. It was found that a trapping net with pivotally positioned brackets further reduces deceleration loads compared with devices with rigidly fixed brackets, and that its use is more effective in cases of people falling with low forward velocity. Bench and shop tests of trapping devices were carried out to check the validity of the selected theoretical models, to select among the developed design options and schematic diagrams, and to differentiate the reaction of capron and lavsan net materials to impulse loads. The key test results confirm that the experimental data match the presented theoretical models. It was established that dynamic overloads depend both on the bracket position angle and on the place where an item falls into the net; the deflection of net cloth made of lavsan and capron materials is almost identical, as are their displacements under dynamic loads from a falling item.
30

Hendri, Hendri, Jimmi Walter Hasiholan Manurung, Rifqi Audi Ferian, Wahyu Faharrudin Hanaatmoko, and Yulianti Yulianti. "Pengujian Black Box pada Aplikasi Sistem Informasi Pengelolaan Masjid Menggunakan Teknik Equivalence Partitions" [Black-Box Testing of a Mosque Management Information System Application Using the Equivalence Partitions Technique]. Jurnal Teknologi Sistem Informasi dan Aplikasi 3, no. 2 (April 30, 2020): 107. http://dx.doi.org/10.32493/jtsi.v3i2.4694.

Abstract:
Black-box testing is very important because it can identify errors in functions, interfaces, data models, and access to external data sources. In practice, problems often arise because testers are never sure whether the software under test has actually passed the test: there may be execution paths that have never been exercised, and testers must construct every possible combination of input data. The selection of input data to find errors is a problem for testers because of the high number of possibilities, so automatic test case design can be a solution. In this work, the application tested using black-box testing is a Mosque Management Information System. The application is tested using black-box testing, where the test only checks whether the program matches the desired functionality, without knowledge of the program code. Test case designs are produced automatically using the Equivalence Partitions technique, a test based on entering data into each form of the mosque management information system; each menu is tested and inputs are grouped by function, whether valid or invalid.
31

Mendes, Emilia, Vitor Freitas, Mirko Perkusich, João Nunes, Felipe Ramos, Alexandre Costa, Renata Saraiva, and Arthur Freire. "Using Bayesian Network to Estimate the Value of Decisions within the Context of Value-Based Software Engineering: A Multiple Case Study." International Journal of Software Engineering and Knowledge Engineering 29, no. 11n12 (November 2019): 1629–71. http://dx.doi.org/10.1142/s0218194019400151.

Abstract:
Context: Companies must make a paradigm shift in which both short- and long-term value aspects are employed to guide their decision-making. Such need is pressing in innovative industries, such as ICT, and is the core of Value-based Software Engineering (VBSE). Objective: This paper details three case studies where value estimation models using Bayesian Network (BN) were built and validated. These estimation models were based upon value-based decisions made by key stakeholders in the contexts of feature selection, test cases execution prioritization, and user interfaces design selection. Methods: All three case studies were carried out according to a Framework called VALUE — improVing decision-mAking reLating to software-intensive prodUcts and sErvices development. This framework includes a mixed-methods approach, comprising several steps to build and validate company-specific value estimation models. Such a building process uses as input data key stakeholders’ decisions (gathered using the Value tool), plus additional input from key stakeholders. Results: Three value estimation BN models were built and validated, and the feedback received from the participating stakeholders was very positive. Conclusions: We detail the building and validation of three value estimation BN models, using a combination of data from past decision-making meetings and also input from key stakeholders.
32

Akbari, Zahra, Sedigheh Khoshnevis, and Mehran Mohsenzadeh. "A Method for Prioritizing Integration Testing in Software Product Lines Based on Feature Model." International Journal of Software Engineering and Knowledge Engineering 27, no. 04 (May 2017): 575–600. http://dx.doi.org/10.1142/s0218194017500218.

Abstract:
Testing activities for software product lines should differ from those for single software systems, due to significant differences between software product line engineering and single-system development. The cost of testing in software product lines is generally higher than for single software systems; therefore, there should be a balance between cost, the quality of final products, and the time spent on testing activities. As decreasing testing cost is an important challenge in software product line integration testing, the contribution of this paper is a method for early integration testing in software product lines based on the feature model (FM), which prioritizes test cases in order to decrease integration testing costs in SPLs. The method focuses on reusing domain engineering artifacts and on prioritized selection and execution of integration test cases. It also uses separation of concerns and pruning techniques on FMs to help prioritize the test cases. The method proves promising when applied to several case studies: it decreases the cost of performing integration testing by about 82% and detects about 44% of integration faults in domain engineering.
33

Kashiwagi, Isaac, Dean Kashiwagi, and Len Gambla. "Application of the Best Value Approach in Procuring ERP Services in a Traditional ICT Environment." Journal for the Advancement of Performance Information and Value 10, no. 1 (March 13, 2020): 51–65. http://dx.doi.org/10.37265/japiv.v10i1.22.

Abstract:
The ICT industry has struggled with performance for years. Tools, processes, and techniques have been developed in attempts to improve performance; however, the level of performance has not significantly improved. The Best Value Approach has been proposed to improve both the procurement and the execution of ICT projects. This research's focus is to further test, explore, and confirm the claims associated with the Best Value Approach and its applicability in the ICT industry. Using case study research, the Best Value Approach was used in the selection of an ERP vendor for a client organization. The findings confirm the claims of the Best Value Approach in terms of being simpler, quicker, and lower cost, requiring little expertise from the client, and delivering an understandable, non-technical plan including a detailed schedule, a milestone schedule, and a schedule that identifies all stakeholder activity.
34

Rani, Shweta, Bharti Suri, and Rinkaj Goyal. "On the Effectiveness of Using Elitist Genetic Algorithm in Mutation Testing." Symmetry 11, no. 9 (September 9, 2019): 1145. http://dx.doi.org/10.3390/sym11091145.

Abstract:
Manual test case generation is an exhaustive and time-consuming process. However, automated test data generation may reduce the efforts and assist in creating an adequate test suite embracing predefined goals. The quality of a test suite depends on its fault-finding behavior. Mutants have been widely accepted for simulating the artificial faults that behave similarly to realistic ones for test data generation. In prior studies, the use of search-based techniques has been extensively reported to enhance the quality of test suites. Symmetry, however, can have a detrimental impact on the dynamics of a search-based algorithm, whose performance strongly depends on breaking the “symmetry” of search space by the evolving population. This study implements an elitist Genetic Algorithm (GA) with an improved fitness function to expose maximum faults while also minimizing the cost of testing by generating less complex and asymmetric test cases. It uses the selective mutation strategy to create low-cost artificial faults that result in a lesser number of redundant and equivalent mutants. For evolution, reproduction operator selection is repeatedly guided by the traces of test execution and mutant detection that decides whether to diversify or intensify the previous population of test cases. An iterative elimination of redundant test cases further minimizes the size of the test suite. This study uses 14 Java programs of significant sizes to validate the efficacy of the proposed approach in comparison to Initial Random tests and a widely used evolutionary framework in academia, namely Evosuite. Empirically, our approach is found to be more stable with significant improvement in the test case efficiency of the optimized test suite.
35

Gotlieb, Arnaud, and Dusica Marijan. "Using Global Constraints to Automate Regression Testing." AI Magazine 38, no. 1 (March 31, 2017): 73–87. http://dx.doi.org/10.1609/aimag.v38i1.2714.

Abstract:
Nowadays, communicating and autonomous systems rely on high-quality software-based components. To ensure a sufficient level of quality, these components must be thoroughly verified before being released and deployed in operational settings. Regression testing is a crucial verification process that executes any new release of a software-based component against previous versions of the component with existing test cases. However, the selection of test cases in regression testing is challenging, as the time available for testing is limited and certain selection criteria must be respected. This problem, coined Test Suite Reduction (TSR), is usually addressed by validation engineers through manual analysis or by using approximation techniques. Even if the underlying optimization problem is intractable in theory, solving it in practice is crucial when there are pressing needs to release high-quality components while reducing the time-to-market of new software releases. In this paper, we address the TSR problem with sound Artificial Intelligence techniques such as Constraint Programming (CP) and global constraints. By associating with each test case a cost value aggregating distinct criteria, such as execution time, priority, or importance due to the error-proneness of the test case, we propose several constraint optimization models to find a subset of test cases covering all the test requirements and optimizing the overall cost of the selected test cases. Our models are based on a combination of NVALUE, GLOBALCARDINALITY, and SCALAR_PRODUCT, three well-known global constraints that can faithfully encode the coverage relation between test cases and test requirements. Our contribution includes the reuse of existing preprocessing rules to simplify the problem before solving it and the design of structure-aware heuristics that take into account the costs associated with test cases. The work presented in this paper was motivated by an industrial application in the communication domain; our overall goal is to develop a constraint-based approach to test suite reduction that can be deployed to test a complete product line of conferencing systems in continuous delivery mode. By implementing this approach in a software prototype tool and experimentally evaluating it on both randomly generated and industrial instances, we hope to foster quick adoption of the technology.
APA, Harvard, Vancouver, ISO, and other styles
36

SHUKLA, RAKESH, PAUL STROOPER, and DAVID CARRINGTON. "A FRAMEWORK FOR STATISTICAL TESTING OF SOFTWARE COMPONENTS." International Journal of Software Engineering and Knowledge Engineering 17, no. 03 (June 2007): 379–405. http://dx.doi.org/10.1142/s021819400700329x.

Full text
Abstract:
Statistical testing involves the testing of software by selecting test cases from a probability distribution that is intended to represent the software's operational usage. In this paper, we describe and evaluate a framework for statistical testing of software components that incorporates test case execution and output evaluation. An operational profile and a test oracle are essential for the statistical testing of software components because they are used for test case generation and output evaluation respectively. An operational profile is a set of input events and their associated probabilities of occurrence expected in actual operation. A test oracle is a mechanism that is used to check the results of test cases. We present four types of operational profiles and three types of test oracles, and empirically evaluate them using the framework by applying them to two software components. The results show that while simple operational profiles may be effective for some components, more sophisticated profiles are needed for others. For the components that we tested, the fault-detecting effectiveness of the test oracles was similar.
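As a sketch of the generation side of such a framework, the fragment below samples test cases from a hypothetical operational profile and checks each outcome against a stub oracle; the event names, probabilities, and expected outputs are invented for illustration.

```python
import random

# Hypothetical operational profile: input events and their usage probabilities.
PROFILE = {"deposit": 0.50, "withdraw": 0.30, "balance": 0.15, "transfer": 0.05}

def sample_test_cases(profile, n, seed=42):
    rng = random.Random(seed)
    events, weights = zip(*profile.items())
    return rng.choices(events, weights=weights, k=n)   # statistical selection

def oracle(event, observed):
    # Stub test oracle: compares the observed result with the expected one.
    expected = {"deposit": "ok", "withdraw": "ok", "balance": "ok", "transfer": "ok"}
    return expected[event] == observed

for case in sample_test_cases(PROFILE, 5):
    print(case, oracle(case, "ok"))
```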
APA, Harvard, Vancouver, ISO, and other styles
37

Sharma, Vidisha, and Satish Kumar Alaria. "Improving the Performance of Heterogeneous Hadoop Clusters Using Map Reduce." International Journal on Recent and Innovation Trends in Computing and Communication 7, no. 2 (February 28, 2019): 11–17. http://dx.doi.org/10.17762/ijritcc.v7i2.5225.

Full text
Abstract:
The tremendous growth of connectivity among devices and systems is producing data at an exponential rate, and devising a feasible solution for processing it is becoming more difficult day by day. Building a platform for such an advanced level of data processing therefore requires both hardware and software improvements to keep pace with this volume of data. To improve the efficiency of Hadoop clusters in storing and analyzing big data, we propose an algorithmic approach that caters to heterogeneous data stored over Hadoop clusters and improves execution performance as well as efficiency. The paper aims to establish the effectiveness of the new algorithm through comparison, recommendations, and a competitive approach to finding the best solution for improving the big data situation. Hadoop's MapReduce technique helps keep a close watch over unstructured, heterogeneous Hadoop clusters, with insights drawn directly from the algorithm's results. We propose a new algorithm to address these issues, for commercial as well as non-commercial uses, which can improve on the existing MapReduce data-indexing approach in heterogeneous Hadoop clusters. The experiments conducted in this work yielded strong results, among them the selection of schedulers for scheduling jobs, the placement of data in a similarity matrix, clustering before scheduling queries, and iterative mapping and reducing with bounded internal dependencies to avoid query stalling and long execution times. The experiments also establish that when a procedure is defined to handle the different use-case scenarios, the cost of computation can be substantially reduced and distributed systems can be relied on for fast execution.
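For readers unfamiliar with the programming model, the sketch below reproduces MapReduce's two phases in plain Python on an in-memory list; real Hadoop distributes the same map, shuffle, and reduce steps across cluster nodes.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Map: emit a (key, 1) pair for every word in a record.
    return [(word, 1) for word in record.split()]

def reduce_phase(pairs):
    # Shuffle + reduce: group values by key and aggregate per key.
    groups = defaultdict(int)
    for key, value in pairs:
        groups[key] += value
    return dict(groups)

records = ["error warn info", "error info", "info"]
counts = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
print(counts)  # {'error': 2, 'warn': 1, 'info': 3}
```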
APA, Harvard, Vancouver, ISO, and other styles
38

Younis, Mohammed I., and Kamal Z. Zamli. "A Strategy for Automatic Quality Signing and Verification Processes for Hardware and Software Testing." Advances in Software Engineering 2010 (February 2, 2010): 1–7. http://dx.doi.org/10.1155/2010/323429.

Full text
Abstract:
We propose a novel strategy to optimize the test suite required for testing both hardware and software in a production line. The strategy is based on two processes: a Quality Signing Process and a Quality Verification Process. Unlike earlier work, the proposed strategy integrates black-box and white-box techniques to derive an optimal test suite during the Quality Signing Process; the generated optimal test suite then significantly improves the Quality Verification Process. Considering both processes, the novelty of the proposed strategy lies in the fact that the test suite is optimized and reduced by selecting only mutant-killing test cases from the cumulated t-way test cases. As such, the proposed strategy can potentially enhance product quality with minimal cost in terms of overall resource usage and execution time. As a case study, this paper describes the step-by-step application of the strategy to testing a 4-bit magnitude comparator integrated circuit in a production line. Our results demonstrate that the proposed strategy outperforms the traditional block partitioning strategy, with mutation scores of 100% versus 90%, respectively, for the same number of test cases.
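The mutant-killing selection at the heart of the Quality Signing Process can be sketched greedily as follows, with a hypothetical `kills(test, mutant)` predicate standing in for actual mutant execution.

```python
def select_mutant_killing(t_way_tests, mutants, kills):
    """Keep only those t-way test cases that kill at least one new mutant."""
    selected, alive = [], set(mutants)
    for test in t_way_tests:              # cumulated t-way suite, in order
        newly_killed = {m for m in alive if kills(test, m)}
        if newly_killed:
            selected.append(test)
            alive -= newly_killed
        if not alive:
            break                         # mutation score of 100% reached
    return selected

# Toy run: test "b" kills nothing new, so the reduced suite is ["a", "c"].
print(select_mutant_killing(["a", "b", "c"], {1, 2, 3},
                            lambda t, m: (t, m) in {("a", 1), ("b", 1),
                                                    ("c", 2), ("c", 3)}))
```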
APA, Harvard, Vancouver, ISO, and other styles
39

Ledford, Aubrey, Ashley Smith, Tessa DesRochers, and Cecile Rose Vibat. "CLRM-19 USING FUNCTIONAL PRECISION MEDICINE TO GUIDE CLINICAL TRIAL ENROLLMENT IN GBM." Neuro-Oncology Advances 4, Supplement_1 (August 1, 2022): i10. http://dx.doi.org/10.1093/noajnl/vdac078.039.

Full text
Abstract:
Abstract Interventional clinical trials in glioblastoma (GBM) have been consistently disappointing, attributable to various factors such as ineffective therapies, inadequate trial designs including lack of control arms, or enrollment criteria that do not represent real-world practice. Novel paradigms for clinical trial design(s) in GBM are desperately needed to produce clinically useful patient outcomes. KIYATEC has developed a patient- and tumor-specific technology platform to evaluate cellular response(s) to therapeutics using 3D cell culture methods that provide functional, patient-specific response predictions. Employing KIYATEC’s technology to screen compounds against both primary patient-, and PDX-derived specimens, enables clinical prioritization of early-stage assets most likely to have therapeutic response in vivo. In addition, KIYATEC’s 3D Predict™ Glioma test has shown clinical correlation of test-predicted response(s) and clinical outcomes in GBM patients. Incorporating KIYATEC’s 3D ex vivo technology into GBM therapeutic development is positioned to accelerate more successful trial results by 1) identifying early-stage compounds likely to possess clinical effects in vivo, and 2) prospectively identifying patients expected to have a clinical response to therapeutics in development. 3D Predict Glioma provides patient-specific responses within 7-10 days of tissue acquisition, providing an avenue for test integration into adaptive clinical trials, whereby functional characterization could provide gating information relating to trial execution. Specifically, functional response prediction may play a pivotal role in identifying newly diagnosed patients who might derive greater benefit from clinical trials compared to standard of care and by optimizing effective therapeutic selection in the recurrent setting. Therefore, a priori knowledge of an early-stage assets’ potential, combined with therapeutic sensitivity of individual patient tissue, may facilitate a new era for adaptive clinical trial design by assimilating KIYATEC’s analytically and clinically validated test into various steps of clinical trial execution such as randomization, stratification, therapy-switching, or compound addition/discontinuation.
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Man, and Andrea Arcuri. "Adaptive Hypermutation for Search-Based System Test Generation: A Study on REST APIs with EvoMaster." ACM Transactions on Software Engineering and Methodology 31, no. 1 (January 31, 2022): 1–52. http://dx.doi.org/10.1145/3464940.

Full text
Abstract:
REST web services are widely popular in industry, and search techniques have been successfully used to automatically generate system-level test cases for those systems. In this article, we propose a novel mutation operator designed specifically for test generation at the system level, with a particular focus on REST APIs. In REST API testing, and often in system testing in general, an individual can have a long and complex chromosome. Furthermore, there are two specific issues: (1) fitness evaluation in system testing is highly costly compared with the number of objectives (e.g., testing targets) to optimize for; and (2) a large part of the genotype might have no impact on the phenotype of the individuals (e.g., input data that has no impact on the execution flow in the tested program). Due to these issues, it might not be suitable to apply a typical low mutation rate like 1/n (where n is the number of genes in an individual), which would lead to mutating only one gene on average. Therefore, in this article, we propose an adaptive weight-based hypermutation, which is aware of the different characteristics of the mutated genes. We developed adaptive strategies that enable the selection and mutation of genes adaptively based on their fitness impact and mutation history throughout the search. To assess our novel mutation operator, we implemented it in the EvoMaster tool, integrated it in the MIO algorithm, and conducted an empirical study with three artificial REST APIs and four real-world REST APIs. Results show that our novel mutation operator demonstrates noticeable improvements over the default MIO. It provides a significant improvement in performance for six out of the seven case studies, where the relative improvement is up to +12.09% for target coverage, +12.69% for line coverage, and +32.51% for branch coverage.
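A minimal sketch of the weight-based idea, assuming numeric genes and externally supplied impact weights (EvoMaster's real genes are typed HTTP inputs, and its weights are derived from fitness feedback and mutation history): each gene's mutation probability scales with its relative weight instead of staying at a flat 1/n.

```python
import random

rng = random.Random(0)

def perturb(gene):
    # Hypothetical gene-level mutation for plain numeric genes.
    return gene + rng.choice([-1, 1])

def adaptive_hypermutate(genes, impact_weights):
    """Mutate each gene with a probability scaled by its impact weight,
    never dropping below the classic 1/n floor."""
    total = sum(impact_weights) or 1.0
    n = len(genes)
    return [perturb(g) if rng.random() < max(1.0 / n, w / total) else g
            for g, w in zip(genes, impact_weights)]

# The high-impact fourth gene is mutated far more often than the others.
print(adaptive_hypermutate([5, 5, 5, 5], [0.1, 0.1, 0.1, 3.0]))
```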
APA, Harvard, Vancouver, ISO, and other styles
41

Waqar, Muhammad, Imran, Muhammad Atif Zaman, Muhammad Muzammal, and Jungsuk Kim. "Test Suite Prioritization Based on Optimization Approach Using Reinforcement Learning." Applied Sciences 12, no. 13 (July 4, 2022): 6772. http://dx.doi.org/10.3390/app12136772.

Full text
Abstract:
Regression testing ensures that modifications to software code have not adversely affected existing code modules. The test suite grows as the software is modified to meet end-user requirements, and regression testing executes the complete test suite after each update. Re-executing new test cases along with existing ones is costly, so the scientific community has proposed test suite prioritization techniques that select and minimize the test suite to reduce the cost of regression testing. The goal of test suite prioritization is to maximize fault detection with a minimum number of test cases, while test suite minimization reduces the suite's size by deleting less critical test cases. In this study, we present a four-fold test suite prioritization methodology based on reinforcement learning. First, testers' and users' log datasets are prepared using the proposed interaction recording systems for an Android application. Second, the proposed reinforcement learning model predicts the sequence list with the highest future reward from the data collected in the first step. Third, the proposed prioritization algorithm produces the prioritized test suite. Lastly, a fault seeding approach is used to validate the results with software engineering experts. The proposed reinforcement learning-based test suite optimization model is evaluated on five case study applications. The performance evaluation results show that the proposed mechanism performs better than baselines based on random and t-SANT approaches, proving its importance for regression testing.
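The reward-driven core of such a prioritizer can be sketched as a simple value-update rule, loosely in the spirit of the paper's reinforcement learning model; the class and method names here are invented for illustration.

```python
from collections import defaultdict

class TestPrioritizer:
    """Sketch of a reward-driven prioritizer: tests that revealed faults
    recently accumulate value and move to the front of the suite."""
    def __init__(self, alpha=0.5):
        self.value = defaultdict(float)
        self.alpha = alpha                       # learning rate

    def prioritize(self, tests):
        return sorted(tests, key=lambda t: self.value[t], reverse=True)

    def feedback(self, test, failed):
        reward = 1.0 if failed else 0.0          # reward fault detection
        self.value[test] += self.alpha * (reward - self.value[test])

p = TestPrioritizer()
p.feedback("t3", failed=True)
print(p.prioritize(["t1", "t2", "t3"]))  # ['t3', 't1', 't2']
```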
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Rong, Hao Gong, Mohamed K. Al-Shamsi, Peizhen Feng, Mohamed Lotfy Elbaramawi, Ahmed Al-Neaimi, Helal Al-Menhali, et al. "A Novel Approach by Needles in the Payzone of Heterogeneous Tight Carbonate: A Case Study for Offshore Marginal Field." International Journal of Petroleum Technology 9 (August 31, 2022): 14–25. http://dx.doi.org/10.54653/2409-787x.2022.09.3.

Full text
Abstract:
The new fishbones completion and stimulation approach places needles in the pay stack to address heterogeneous tight carbonate by increasing the flow area in the lower-permeability streaks: hundreds of drainage tunnels connect the wellbore to the body of the reservoir, efficiently increasing well productivity and oil recovery. The initial plan involved selecting the best option among a stair-step horizontal well, a dual-lateral well, five-lateral fishbone drilling, and horizontal drilling with hydraulic fracturing. Drawing on lessons from earlier failures to clean internal tubes, the modified 4-1/2” liner is installed in the lower two sub-layers with jetting subs combined with production subs for matrix acidizing in the upper sub-layer, as per the modified fishbones completion design. In addition, special acid-releasing float shoes and new fishing baskets are applied to avoid the problems previously encountered with this well technique. The candidate well shows good oil test and production results, improving threefold to 2,000 bbl/d relative to the initial plan in the B field. This paper describes the technology background and characteristics, design factors, the modified design, execution, well testing, and lessons learned during implementation.
APA, Harvard, Vancouver, ISO, and other styles
43

Shadiq, Jafar, Ahmad Safei, and Rayhan Wahyudin Ratu Loly. "Pengujian Aplikasi Peminjaman Kendaraan Operasional Kantor Menggunakan BlackBox Testing." INFORMATION MANAGEMENT FOR EDUCATORS AND PROFESSIONALS : Journal of Information Management 5, no. 2 (July 28, 2021): 97. http://dx.doi.org/10.51211/imbi.v5i2.1561.

Full text
Abstract:
Black-box testing is important because it can identify errors in functions, interfaces, data models, and access to external data sources. In practice, a problem often arises: the tester is never sure whether the software under test has genuinely passed, because some execution paths may never have been exercised. Ideally, the tester would construct every possible combination of input data, but selecting inputs likely to expose errors is difficult given the large number of possibilities, so automatically designed test cases offer a solution. In this implementation, the application tested is an information system for borrowing office operational vehicles. The application is tested using black-box testing, which aims only to check whether the program behaves according to its intended functions, without knowledge of the program code used. Test case designs are generated automatically for black-box testing using the equivalence partitioning technique: testing is based on the data entered into each form of the system, and each menu is tested, with inputs grouped by function as either valid or invalid. Keywords: equivalence partitions, black-box, application, operational vehicle loan, test case
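A minimal sketch of equivalence partitioning, assuming a hypothetical loan-duration field with a valid range of 1–30 days (the concrete forms and ranges belong to the application under test): one representative input is drawn from each equivalence class, valid and invalid alike.

```python
# Hypothetical form field: loan duration in days, valid range 1..30.
PARTITIONS = {
    "below_range": (0, False),     # invalid class: too small
    "lower_bound": (1, True),
    "typical":     (15, True),
    "upper_bound": (30, True),
    "above_range": (31, False),    # invalid class: too large
}

def derive_test_cases(partitions):
    """One representative input per equivalence class, with expected verdict."""
    return [{"name": name, "input": value, "expect_valid": ok}
            for name, (value, ok) in partitions.items()]

for case in derive_test_cases(PARTITIONS):
    print(case)
```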
APA, Harvard, Vancouver, ISO, and other styles
44

Nagy, Stefan. "The Fun in Fuzzing." Queue 20, no. 6 (December 31, 2022): 80–87. http://dx.doi.org/10.1145/3580504.

Full text
Abstract:
Stefan Nagy, an assistant professor in the Kahlert School of Computing at the University of Utah, takes us on a tour of recent research in software fuzzing, or the systematic testing of programs via the generation of novel or unexpected inputs. The first paper he discusses extends the state of the art in coverage-guided fuzzing with the semantic notion of "likely invariants," inferred via techniques from property-based testing. The second explores encoding domain-specific knowledge about certain bug classes into test-case generation. His last selection takes us through the looking glass, randomly generating entire C programs and using differential analysis to compare traces of optimized and unoptimized executions, in order to find bugs in the compilers themselves.
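The differential strategy of the last paper can be sketched as follows, assuming a local gcc is available; the real work generates entire random C programs and compares execution traces of optimized and unoptimized builds, not just their stdout.

```python
import os
import subprocess
import tempfile

def differential_check(c_source, stdin_data=""):
    """Compile the same C program at -O0 and -O2 and compare outputs;
    a mismatch suggests a compiler (or undefined-behavior) bug."""
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "prog.c")
        with open(src, "w") as f:
            f.write(c_source)
        outputs = []
        for opt in ("-O0", "-O2"):
            exe = os.path.join(d, "prog" + opt)
            subprocess.run(["gcc", opt, src, "-o", exe], check=True)
            run = subprocess.run([exe], input=stdin_data,
                                 capture_output=True, text=True)
            outputs.append(run.stdout)
        return outputs[0] == outputs[1]   # False -> investigate further
```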
APA, Harvard, Vancouver, ISO, and other styles
45

Tsidylo, Ivan M., Serhiy O. Semerikov, Tetiana I. Gargula, Hanna V. Solonetska, Yaroslav P. Zamora, and Andrey V. Pikilnyak. "Simulation of intellectual system for evaluation of multilevel test tasks on the basis of fuzzy logic." CTE Workshop Proceedings 8 (March 19, 2021): 507–20. http://dx.doi.org/10.55056/cte.304.

Full text
Abstract:
The article describes the stages of modeling an intelligent system for evaluating multilevel test tasks based on fuzzy logic in the MATLAB application package, namely the Fuzzy Logic Toolbox. An analysis of existing approaches to fuzzy assessment of test methods, with their advantages and disadvantages, is given. The considered methods for assessing students fall, in the general case, into two approaches: using fuzzy sets and corresponding membership functions, and using the fuzzy estimation method and the generalized fuzzy estimation method. In the present work, the Sugeno production model is used as the one closest to natural language; this closeness allows closer interaction with a subject-area expert and the construction of well-understood, easily interpreted inference systems. The structure of the fuzzy system and the functions and mechanisms of model building are described. The system is presented as a block diagram of fuzzy logical nodes and consists of four input variables, corresponding to the levels of knowledge assimilation, and one output variable. The response surface of the fuzzy system reflects the dependence of the final grade on the level of difficulty of the task and the degree of correctness of its completion. The intelligent system modeled in this way for assessing multilevel test tasks based on fuzzy logic makes it possible to take into account the fuzzy characteristics of the test: the level of difficulty of the task, which can be assessed as "easy", "average", "above average", or "difficult"; the degree of correctness of the task, which can be assessed as "correct", "partially correct", "rather correct", or "incorrect"; the time allotted for the execution of a test task or test, which can be assessed as "short", "medium", "long", or "very long"; the percentage of correctly completed tasks, which can be assessed as "small", "medium", "large", or "very large"; and the final mark for the test, which can be assessed as "poor", "satisfactory", "good", or "excellent". This approach ensures maximum consideration of answers to questions of all levels of complexity by formulating a base of inference rules and selecting weighting coefficients when deriving the final grade. The robustness of the system is achieved by using Gaussian membership functions. Testing the controller on the test sample confirms the functional suitability of the developed model.
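A minimal sketch of the two building blocks named here, Gaussian membership functions and Sugeno-style inference, in plain Python with a zero-order model of only two invented rules; the real system uses four inputs and a full rule base built in MATLAB's Fuzzy Logic Toolbox.

```python
import math

def gaussmf(x, mean, sigma):
    # Gaussian membership function, as used for robustness in the article.
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def sugeno_grade(difficulty, correctness):
    """Zero-order Sugeno sketch with two illustrative rules (numbers invented)."""
    rules = [
        # (firing strength, crisp consequent)
        (min(gaussmf(difficulty, 0.8, 0.2),
             gaussmf(correctness, 0.9, 0.15)), 95.0),  # hard & correct -> excellent
        (min(gaussmf(difficulty, 0.2, 0.2),
             gaussmf(correctness, 0.4, 0.15)), 50.0),  # easy & partial -> satisfactory
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den                      # weighted-average defuzzification

print(round(sugeno_grade(0.8, 0.9), 1))  # ~94.8: the "excellent" rule dominates
```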
APA, Harvard, Vancouver, ISO, and other styles
46

Freire, Mariana Lourenço, Felipe Dutra Rêgo, Gláucia Cota, Marcelo Antônio Pascoal-Xavier, and Edward Oliveira. "Potential antigenic targets used in immunological tests for diagnosis of tegumentary leishmaniasis: A systematic review." PLOS ONE 16, no. 5 (May 27, 2021): e0251956. http://dx.doi.org/10.1371/journal.pone.0251956.

Full text
Abstract:
Immunological tests may represent valuable tools for the diagnosis of human tegumentary leishmaniasis (TL) due to their simple execution, less invasive nature and potential use as a point-of-care test. Indeed, several antigenic targets have been used with the aim of improving the restricted scenario for TL-diagnosis. We performed a worldwide systematic review to identify antigenic targets that have been evaluated for the main clinical forms of TL, such as cutaneous (CL) and mucosal (ML) leishmaniasis. Included were original studies evaluating the sensitivity and specificity of immunological tests for human-TL, CL and/or ML diagnosis using purified or recombinant proteins, synthetic peptides or polyclonal or monoclonal antibodies to detect Leishmania-specific antibodies or antigens. The review methodology followed PRISMA guidelines and all selected studies were evaluated in accordance with QUADAS-2. Thirty-eight original studies from four databases fulfilled the selection criteria. A total of 79 antigens were evaluated for the detection of antibodies as a diagnostic for TL, CL and/or ML by ELISA. Furthermore, three antibodies were evaluated for the detection of antigen by immunochromatographic test (ICT) and immunohistochemistry (IHC) for CL-diagnosis. Several antigenic targets showed 100% of sensitivity and specificity, suggesting potential use for TL-diagnosis in its different clinical manifestations. However, a high number of proof-of-concept studies reinforce the need for further analysis aimed at verifying true diagnostic accuracy in clinical practice.
APA, Harvard, Vancouver, ISO, and other styles
47

Dinata, Rozzi Kesuma, Bustami Bustami, Ar Razi, and Muhammad Arasyi. "Algoritma Dijkstra dan Bellman-Ford dalam Sistem Pemetaan Barbershop di Kota Lhokseumawe." INFORMAL: Informatics Journal 7, no. 2 (August 31, 2022): 128. http://dx.doi.org/10.19184/isj.v7i2.33303.

Full text
Abstract:
A barbershop is a business that provides hair care services to the community. Many people currently do business in this field, and operators are opening barbershops in a variety of locations, from campuses and office districts to densely populated neighborhoods; in Lhokseumawe City there are 12 barbershops. The application's benefit is that it can identify the shortest path from the user's location to the selected barbershop, along with the barbershop's location and a brief description of the barbershops in Lhokseumawe City. The fastest route to a barbershop can be found only through the nodes defined in the system. Dijkstra's algorithm was chosen because it evaluates all currently available alternatives and yields the shortest path from every node, ensuring that the shortest path produced is optimal. The Bellman-Ford algorithm was chosen because it is a shortest-path variant related to best-first search that still finds the closest distance when edge weights are negative. Based on the route selection test, both techniques picked the same routes; however, when the two are compared in terms of program execution time, Dijkstra's algorithm is faster than the Bellman-Ford algorithm.
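For reference, a compact Python version of Dijkstra's algorithm over the kind of weighted node graph such a system defines; the toy graph below is invented, with nodes standing in for user and barbershop locations and weights for road distances.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph[u] = [(v, weight), ...] with
    non-negative weights (Bellman-Ford would also handle negative ones)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"user": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(graph, "user"))  # {'user': 0, 'a': 2, 'b': 3}
```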
APA, Harvard, Vancouver, ISO, and other styles
48

Choi, Su Jin, So Won Choi, Jong Hyun Kim, and Eul-Bum Lee. "AI and Text-Mining Applications for Analyzing Contractor’s Risk in Invitation to Bid (ITB) and Contracts for Engineering Procurement and Construction (EPC) Projects." Energies 14, no. 15 (July 30, 2021): 4632. http://dx.doi.org/10.3390/en14154632.

Full text
Abstract:
Contractors responsible for the whole execution of engineering, procurement, and construction (EPC) projects are exposed to multiple risks due to various unbalanced contracting methods such as lump-sum turn-key and low-bid selection. Although systematic risk management approaches are required to prevent unexpected damage to the EPC contractors in practice, there were no comprehensive digital toolboxes for identifying and managing risk provisions for ITB and contract documents. This study describes two core modules, Critical Risk Check (CRC) and Term Frequency Analysis (TFA), developed as a digital EPC contract risk analysis tool for contractors, using artificial intelligence and text-mining techniques. The CRC module automatically extracts risk-involved clauses in the EPC ITB and contracts by the phrase-matcher technique. A machine learning model was built in the TFA module for contractual risk extraction by using the named-entity recognition (NER) method. The risk-involved clauses collected for model development were converted into a database in JavaScript Object Notation (JSON) format, and the final results were saved in pickle format through the digital modules. In addition, optimization and reliability validation of these modules were performed through Proof of Concept (PoC) as a case study, and the modules were further developed to a cloud-service platform for application. The pilot test results showed that risk clause extraction accuracy rates with the CRC module and the TFA module were about 92% and 88%, respectively, whereas the risk clause extraction accuracy rates manually by the engineers were about 70% and 86%, respectively. The time required for ITB analysis was significantly shorter with the digital modules than by the engineers.
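The phrase-matcher step of the CRC module can be approximated with spaCy's PhraseMatcher as below; the risk lexicon here is a hypothetical stand-in for the study's curated list of EPC contract risk terms.

```python
import spacy
from spacy.matcher import PhraseMatcher

# Hypothetical risk lexicon; the CRC module uses a curated EPC-contract list.
RISK_TERMS = ["liquidated damages", "consequential loss", "lump-sum", "indemnify"]

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")   # case-insensitive matching
matcher.add("RISK", [nlp.make_doc(term) for term in RISK_TERMS])

def extract_risk_phrases(clause_text):
    doc = nlp(clause_text)
    return [doc[start:end].text for _, start, end in matcher(doc)]

print(extract_risk_phrases(
    "The Contractor shall indemnify the Owner against consequential loss."))
```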
APA, Harvard, Vancouver, ISO, and other styles
49

Santoso, Hari, and Lukman Fakih Lidimilah. "OPTIMASI ALGORITMA ALGA UNTUK MENINGKATKAN LAJU KONVERGENSI." Jurnal Ilmiah Informatika 2, no. 1 (June 9, 2017): 68–82. http://dx.doi.org/10.35316/jimi.v2i1.446.

Full text
Abstract:
The Artificial Algae Algorithm (AAA) is an optimization algorithm that combines the advantages of the swarm model and the evolutionary model. AAA consists of three phases: helical movement, reproduction, and adaptation. Helical movement is a three-dimensional movement along the x, y, and z directions that strongly influences the convergence rate and the diversity of solutions. The helical movement optimization aims to increase the convergence rate by moving the algae toward the best colony in the population. The optimized Algae Algorithm (AAA') was tested on the 25 objective functions of CEC'05 and implemented in a pressure vessel design optimization case. The results of the CEC'05 function tests show that AAA' increases the convergence rate, but in the worst case AAA' becomes less stable and gets trapped in local optima. The complexity analysis shows that AAA has a complexity of O(M³N²O) and AAA' a complexity of O(M²N²O), where M is the number of colonies, N is the number of algae individuals, and O is the maximum number of evaluation function calls. The results of the pressure vessel design optimization show that AAA' executes 1.103 times faster than AAA. The speed increase is due to the tournament selection process: in AAA it is performed before the helical movement, whereas in AAA' it is performed only if the solution after the movement is no better than before. At its best, AAA' found a solution 4.5921 times faster than AAA. At its worst, AAA' got stuck in local optima because the helical movement focuses too strongly on the global best, which is not necessarily the global optimum.
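A loose sketch of the modified helical step in AAA', in which movement is steered toward the global-best colony; the trigonometric form only follows the general shape of AAA's three-dimensional move, and the parameter names and step size are invented.

```python
import math
import random

rng = random.Random(1)

def helical_move(alga, best, step=0.5):
    """Move one algal cell (x, y, z) toward the global best along a helical
    path: random angles alpha, beta and direction p give the 3-D spiral."""
    alpha = rng.uniform(0, 2 * math.pi)
    beta = rng.uniform(0, 2 * math.pi)
    p = rng.uniform(-1, 1)
    dx = (best[0] - alga[0]) * p * math.cos(alpha)   # x component
    dy = (best[1] - alga[1]) * p * math.sin(alpha)   # y component
    dz = (best[2] - alga[2]) * p * math.cos(beta)    # z (helix axis) component
    return (alga[0] + step * dx, alga[1] + step * dy, alga[2] + step * dz)

print(helical_move((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))
```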
APA, Harvard, Vancouver, ISO, and other styles
50

Solomatina, D. I., and M. V. Filippov. "Multi-trajectory Fitness Function-based Method of Automated Test Coverage for Code Using Evolutionary Algorithms." Mechanical Engineering and Computer Science, no. 10 (December 6, 2018): 30–40. http://dx.doi.org/10.24108/1018.0001434.

Full text
Abstract:
Testing is a time-consuming and resource-intensive task, often taking about half of the total project development time. Hence, many recent studies have considered automating the process of creating and running tests. Many papers have been published in this field, but most of them use unit testing, in which different program fragments are covered independently of each other, so the internal structure of the program is not taken into account. This article presents a method for building a test suite based on evolutionary algorithms that takes into account the features of the entire program as a whole. Evolutionary algorithms are a class of optimization algorithms, widely developed in recent years, that draw on ideas from artificial intelligence and are intended for directed search. They have been successfully applied to software testing problems such as testing embedded systems and automatically constructing unit tests. In implementing the evolutionary approach, the article solves the following tasks. It proposes a method for determining the fitness function that considers an approximation along all possible trajectories, which solves the coverage problem in almost any case. However, the number of such trajectories can be quite large, which significantly increases test execution time, so the article also proposes a method of approximating along all possible trajectories that allows testing to complete in reasonable time: the time cost of reaching the goal is reduced by selecting, among all possible trajectories, the one with the minimal distance to the instruction. To study the proposed method, four different tasks were used, each involving about 1000 runs. The presented results show that the method is more reliable than commonly used approaches in which a single reference trajectory is selected. In addition, an estimate of the time costs shows that the discussed method has higher efficiency.
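The multi-trajectory fitness idea can be sketched as taking the minimum divergence between the executed path and every trajectory that reaches the target, instead of one fixed reference trajectory; the distance measure and node names below are invented simplifications of the paper's instruction-distance metric.

```python
def trajectory_distance(executed_path, trajectory):
    """How far an executed path diverges from one candidate trajectory:
    trajectory nodes left unreached after the longest common prefix."""
    prefix = 0
    for a, b in zip(executed_path, trajectory):
        if a != b:
            break
        prefix += 1
    return len(trajectory) - prefix

def multi_trajectory_fitness(executed_path, trajectories):
    # Minimize over all trajectories reaching the target, so the search is
    # never locked onto a single (possibly infeasible) reference path.
    return min(trajectory_distance(executed_path, t) for t in trajectories)

paths_to_target = [["a", "b", "d"], ["a", "c", "d"]]
print(multi_trajectory_fitness(["a", "c", "x"], paths_to_target))  # 1
```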
APA, Harvard, Vancouver, ISO, and other styles