Journal articles on the topic 'Adaptive random testing'

Consult the top 50 journal articles for your research on the topic 'Adaptive random testing.'

1

CHAN, KWOK PING, TSONG YUEH CHEN, and DAVE TOWEY. "RESTRICTED RANDOM TESTING: ADAPTIVE RANDOM TESTING BY EXCLUSION." International Journal of Software Engineering and Knowledge Engineering 16, no. 04 (August 2006): 553–84. http://dx.doi.org/10.1142/s0218194006002926.

Abstract:
Restricted Random Testing (RRT) is a new method of testing software that improves upon traditional Random Testing (RT) techniques. Research has indicated that failure patterns (portions of an input domain which, when executed, cause the program to fail or reveal an error) can influence the effectiveness of testing strategies. For certain types of failure patterns, it has been found that a widespread and even distribution of test cases in the input domain can be significantly more effective at detecting failure compared with ordinary RT. Testing methods based on RT, but which aim to achieve even and widespread distributions, have been called Adaptive Random Testing (ART) strategies. One implementation of ART is RRT. RRT uses exclusion zones around executed, but non-failure-causing, test cases to restrict the regions of the input domain from which subsequent test cases may be drawn. In this paper, we introduce the motivation behind RRT, explain the algorithm and detail some empirical analyses carried out to examine the effectiveness of the method. Two versions of RRT are presented: Ordinary RRT (ORRT) and Normalized RRT (NRRT). The two versions share the same fundamental algorithm, but differ in their treatment of non-homogeneous input domains. Investigations into the use of alternative exclusion shapes are outlined, and a simple technique for reducing the computational overheads of RRT, prompted by the alternative exclusion shape investigations, is also explained. The performance of RRT is compared with RT and another ART method based on maximized minimum test case separation (DART), showing excellent improvement over RT and a very favorable comparison with DART.
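As a rough illustration of the exclusion idea summarized in this abstract (a minimal sketch, not the ORRT or NRRT algorithms from the paper), the following Python fragment draws test cases from the unit square and regenerates any candidate that falls inside a circular exclusion zone around a previously executed, non-failure-causing test case; the circular zone shape, the 150% target exclusion ratio, and the two-dimensional unit-square domain are illustrative assumptions.

import math
import random

def restricted_random_tests(num_tests, exclusion_ratio=1.5, seed=0):
    # Exclusion-based ART sketch: every executed, non-failure-causing test case is
    # surrounded by a circular exclusion zone; candidates landing inside any zone
    # are discarded and regenerated.  The zones jointly cover a fixed fraction
    # (exclusion_ratio) of the unit square, split evenly among executed tests.
    rng = random.Random(seed)
    executed = []
    while len(executed) < num_tests:
        n = len(executed)
        radius = math.sqrt(exclusion_ratio / (n * math.pi)) if n else 0.0
        while True:
            candidate = (rng.random(), rng.random())
            if all(math.dist(candidate, t) > radius for t in executed):
                break
        executed.append(candidate)  # assume this test case revealed no failure
    return executed

print(restricted_random_tests(10)[:3])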
2

Parsa, Saeed, and Esmaeel Nikravan. "Hybrid adaptive random testing." International Journal of Computing Science and Mathematics 11, no. 3 (2020): 209. http://dx.doi.org/10.1504/ijcsm.2020.10028215.

3

Nikravan, Esmaeel, and Saeed Parsa. "Hybrid adaptive random testing." International Journal of Computing Science and Mathematics 11, no. 3 (2020): 209. http://dx.doi.org/10.1504/ijcsm.2020.106694.

4

Chen, T. Y., F. C. Kuo, R. G. Merkel, and S. P. Ng. "Mirror adaptive random testing." Information and Software Technology 46, no. 15 (December 2004): 1001–10. http://dx.doi.org/10.1016/j.infsof.2004.07.004.

5

Wu, Huayao, Changhai Nie, Justyna Petke, Yue Jia, and Mark Harman. "An Empirical Comparison of Combinatorial Testing, Random Testing and Adaptive Random Testing." IEEE Transactions on Software Engineering 46, no. 3 (March 1, 2020): 302–20. http://dx.doi.org/10.1109/tse.2018.2852744.

6

Nie, Changhai, Huayao Wu, Xintao Niu, Fei-Ching Kuo, Hareton Leung, and Charles J. Colbourn. "Combinatorial testing, random testing, and adaptive random testing for detecting interaction triggered failures." Information and Software Technology 62 (June 2015): 198–213. http://dx.doi.org/10.1016/j.infsof.2015.02.008.

7

Chen, Tsong Yueh, Fei-Ching Kuo, Huai Liu, and W. Eric Wong. "Code Coverage of Adaptive Random Testing." IEEE Transactions on Reliability 62, no. 1 (March 2013): 226–37. http://dx.doi.org/10.1109/tr.2013.2240898.

8

Lv, Junpeng, Hai Hu, Kai-Yuan Cai, and Tsong Yueh Chen. "Adaptive and Random Partition Software Testing." IEEE Transactions on Systems, Man, and Cybernetics: Systems 44, no. 12 (December 2014): 1649–64. http://dx.doi.org/10.1109/tsmc.2014.2318019.

9

Liu, Huai, Fei-Ching Kuo, and Tsong Yueh Chen. "Comparison of adaptive random testing and random testing under various testing and debugging scenarios." Software: Practice and Experience 42, no. 8 (September 2, 2011): 1055–74. http://dx.doi.org/10.1002/spe.1113.

10

Chen, Tsong Yueh, Fei-Ching Kuo, and Huai Liu. "Adaptive random testing based on distribution metrics." Journal of Systems and Software 82, no. 9 (September 2009): 1419–33. http://dx.doi.org/10.1016/j.jss.2009.05.017.

11

Mao, Chengying, Xuzheng Zhan, Jinfu Chen, Jifu Chen, and Rubing Huang. "Adaptive random testing based on flexible partitioning." IET Software 14, no. 5 (October 1, 2020): 493–505. http://dx.doi.org/10.1049/iet-sen.2019.0325.

12

Huang, Rubing, Jinfu Chen, and Yansheng Lu. "Adaptive Random Testing with Combinatorial Input Domain." Scientific World Journal 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/843248.

Abstract:
Random testing (RT) is a fundamental testing technique to assess software reliability, by simply selecting test cases in a random manner from the whole input domain. As an enhancement of RT, adaptive random testing (ART) has better failure‐detection capability and has been widely applied in different scenarios, such as numerical programs, some object‐oriented programs, and mobile applications. However, not much work has been done on the effectiveness of ART for the programs with combinatorial input domain (i.e., the set of categorical data). To extend the ideas to the testing for combinatorial input domain, we have adopted different similarity measures that are widely used for categorical data in data mining and have proposed two similarity measures based on interaction coverage. Then, we propose a new version named ART‐CID as an extension of ART in combinatorial input domain, which selects an element from categorical data as the next test case such that it has the lowest similarity against already generated test cases. Experimental results show that ART‐CID generally performs better than RT, with respect to different evaluation metrics.
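The abstract does not spell out the similarity measures behind ART-CID, but the general "pick the least similar candidate" loop it describes can be sketched as follows; the simple mismatch-count dissimilarity, the candidate-set size of 10, and the toy parameter domain are placeholders for illustration rather than the measures proposed in the paper.

import random

def dissimilarity(a, b):
    # Illustrative categorical dissimilarity: fraction of parameters whose values differ.
    return sum(x != y for x, y in zip(a, b)) / len(a)

def art_categorical(domain, num_tests, k=10, seed=0):
    # domain: one list of allowed values per categorical parameter.
    rng = random.Random(seed)

    def random_input():
        return tuple(rng.choice(values) for values in domain)

    executed = [random_input()]
    while len(executed) < num_tests:
        candidates = [random_input() for _ in range(k)]
        # Choose the candidate least similar (most dissimilar) to the tests so far.
        best = max(candidates, key=lambda c: min(dissimilarity(c, t) for t in executed))
        executed.append(best)
    return executed

print(art_categorical([["GET", "POST"], ["http", "https"], [200, 404, 500]], 5))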
13

CHEN, TSONG YUEH, FEI-CHING KUO, and ZHI QUAN ZHOU. "ON FAVOURABLE CONDITIONS FOR ADAPTIVE RANDOM TESTING." International Journal of Software Engineering and Knowledge Engineering 17, no. 06 (December 2007): 805–25. http://dx.doi.org/10.1142/s0218194007003501.

Abstract:
Recently, adaptive random testing (ART) has been developed to enhance the fault-detection effectiveness of random testing (RT). It has been known in general that the fault-detection effectiveness of ART depends on the distribution of failure-causing inputs, yet this understanding is in coarse terms without precise details. In this paper, we conduct an in-depth investigation into the factors related to the distribution of failure-causing inputs that have an impact on the fault-detection effectiveness of ART. This paper gives a comprehensive analysis of the favourable conditions for ART. Our study contributes to the knowledge of ART and provides useful information for testers to decide when it is more cost-effective to use ART.
14

Yan, Min, Li Wang, and Aiguo Fei. "ARTDL: Adaptive Random Testing for Deep Learning Systems." IEEE Access 8 (2020): 3055–64. http://dx.doi.org/10.1109/access.2019.2962695.

15

Tappenden, Andrew F., and James Miller. "A Novel Evolutionary Approach for Adaptive Random Testing." IEEE Transactions on Reliability 58, no. 4 (December 2009): 619–33. http://dx.doi.org/10.1109/tr.2009.2034288.

16

Huang, Rubing, Huai Liu, Xiaodong Xie, and Jinfu Chen. "Enhancing mirror adaptive random testing through dynamic partitioning." Information and Software Technology 67 (November 2015): 13–29. http://dx.doi.org/10.1016/j.infsof.2015.06.003.

17

Indhumathi, D., and S. Sarala. "Fragment Analysis and Test Case Generation using F-Measure for Adaptive Random Testing and Partitioned Block based Adaptive Random Testing." International Journal of Computer Applications 93, no. 6 (May 16, 2014): 11–15. http://dx.doi.org/10.5120/16218-5662.

18

Chen, Tsong Yueh, Fei-Ching Kuo, and Huai Liu. "Distributing test cases more evenly in adaptive random testing." Journal of Systems and Software 81, no. 12 (December 2008): 2146–62. http://dx.doi.org/10.1016/j.jss.2008.03.062.

19

Chen, Tsong Yueh, Fei-Ching Kuo, Robert G. Merkel, and T. H. Tse. "Adaptive Random Testing: The ART of test case diversity." Journal of Systems and Software 83, no. 1 (January 2010): 60–66. http://dx.doi.org/10.1016/j.jss.2009.02.022.

20

Li, Zhibo, Qingbao Li, and Lei Yu. "An Enhanced Adaptive Random Testing by Dividing Dimensions Independently." Mathematical Problems in Engineering 2019 (October 13, 2019): 1–15. http://dx.doi.org/10.1155/2019/9381728.

Abstract:
Random testing (RT) is widely applied in the area of software testing due to its advantages such as simplicity, unbiasedness, and easy implementation. Adaptive random testing (ART) enhances RT by distributing test cases as evenly as possible. Fixed Size Candidate Set (FSCS) is one of the most well-known ART algorithms. Its high failure-detection effectiveness shows only at low failure rates in low-dimensional spaces. To address this problem, the boundary effect of the test case distribution is analyzed, and an FSCS algorithm with a limited candidate set (LCS-FSCS) is proposed. By utilizing the information gathered from successful (non-failure-causing) test cases, a tabu generation domain for candidate test cases is produced. This tabu generation domain is eliminated from the current candidate test case generation domain. Finally, the number of test cases at the boundary is reduced by constraining the candidate test case generation domain. The boundary effect is effectively relieved, and the distribution of test cases is more even. The results of a simulation experiment show that the failure-detection effectiveness of LCS-FSCS is significantly improved in high-dimensional spaces. Meanwhile, failure-detection effectiveness is also improved for high failure rates, and the gap in failure-detection effectiveness between different failure rates is narrowed. The results of an experiment conducted on some real-life programs show that LCS-FSCS is less effective than FSCS only when the failure distribution is concentrated on the boundary. In general, the effectiveness of LCS-FSCS is higher than that of FSCS.
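For readers unfamiliar with it, the baseline FSCS algorithm referred to above is commonly described as follows; this is a minimal sketch on the unit hypercube with Euclidean distance and a candidate-set size of 10 (the usual textbook settings, not necessarily those of the study), and the tabu generation domain introduced by LCS-FSCS is not reproduced here.

import math
import random

def fscs_art(num_tests, dimensions=2, k=10, seed=0):
    # Fixed Size Candidate Set ART: at each step, generate k random candidates and
    # run the one whose minimum Euclidean distance to all previously executed test
    # cases is largest, which spreads the tests across the input domain.
    rng = random.Random(seed)

    def random_point():
        return tuple(rng.random() for _ in range(dimensions))

    executed = [random_point()]
    while len(executed) < num_tests:
        candidates = [random_point() for _ in range(k)]
        best = max(candidates, key=lambda c: min(math.dist(c, t) for t in executed))
        executed.append(best)
    return executed

print(fscs_art(5))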
21

Selay, Elmin, Zhi Quan Zhou, Tsong Yueh Chen, and Fei-Ching Kuo. "Adaptive Random Testing in Detecting Layout Faults of Web Applications." International Journal of Software Engineering and Knowledge Engineering 28, no. 10 (September 25, 2018): 1399–428. http://dx.doi.org/10.1142/s0218194018500407.

Abstract:
As part of a software testing process, output verification poses a challenge when the output is not numeric or textual, such as graphical. The industry practice of using human oracles (testers) to observe and verify the correctness of the actual results is both expensive and error-prone. In particular, this practice is usually unsustainable when developing web applications — the most popular software of our era. This is because web applications change frequently due to the fast-evolving requirements amid popular demand. To improve the cost effectiveness of browser output verification, in this study we design failure-based testing techniques and evaluate the effectiveness and efficiency thereof in the context of web testing. With a novel application of the concept of adaptive random sequence (ARS), our approach leverages peculiar characteristics of failure patterns found in browser layout rendering. An empirical study shows that the use of failure patterns and inclination to guide the testing flow leads to more cost-effective results than other classic methods. This study extends the application of ARSs from the input space of programs to their output space, and also shows that adaptive random testing (ART) can outperform random testing (RT) in both failure detection effectiveness (in terms of F-measure) and failure detection efficiency (in terms of execution time).
22

Shin, Seung-Hun, and Seung-Kyu Park. "Adaptive Random Testing through Iterative Partitioning with Enlarged Input Domain." KIPS Transactions:PartD 15D, no. 4 (August 29, 2008): 531–40. http://dx.doi.org/10.3745/kipstd.2008.15-d.4.531.

23

Sinaga, Arnaldo Marulitua. "Adaptive Random Testing with Coverage Information for Object Oriented Program." Advanced Science Letters 23, no. 5 (May 1, 2017): 4359–62. http://dx.doi.org/10.1166/asl.2017.8338.

24

Ackah-Arthur, Hilary, Jinfu Chen, Jiaxiang Xi, Michael Omari, Heping Song, and Rubing Huang. "A cost‐effective adaptive random testing approach by dynamic restriction." IET Software 12, no. 6 (December 2018): 489–97. http://dx.doi.org/10.1049/iet-sen.2017.0208.

25

Shin, Seung-Hun, Seung-Kyu Park, Kyung-Hee Choi, and Ki-Hyun Jung. "Adaptive Random Testing for Integrated System based on Output Distribution Estimation." Journal of the Korea Society for Simulation 20, no. 3 (September 30, 2011): 19–28. http://dx.doi.org/10.9709/jkss.2011.20.3.019.

26

Crabbe, Marjolein, and Martina Vandebroek. "Computerized Adaptive Testing for the Random Weights Linear Logistic Test Model." Applied Psychological Measurement 38, no. 6 (May 27, 2014): 415–31. http://dx.doi.org/10.1177/0146621614533987.

27

Ziyuan, Wang, Zhang Yanliang, Gao Peng, and Shuang Shiyong. "Comparing Fault Detection Efficiencies of Adaptive Random Testing and Greedy Combinatorial Testing for Boolean-Specifications." International Journal of Performability Engineering 17, no. 1 (2021): 114. http://dx.doi.org/10.23940/ijpe.21.01.p11.114122.

28

Cetin-Berber, Dee Duygu, Halil Ibrahim Sari, and Anne Corinne Huggins-Manley. "Imputation Methods to Deal With Missing Responses in Computerized Adaptive Multistage Testing." Educational and Psychological Measurement 79, no. 3 (October 29, 2018): 495–511. http://dx.doi.org/10.1177/0013164418805532.

Abstract:
Routing examinees to modules based on their ability level is a very important aspect in computerized adaptive multistage testing. However, the presence of missing responses may complicate estimation of examinee ability, which may result in misrouting of individuals. Therefore, missing responses should be handled carefully. This study investigated multiple missing data methods in computerized adaptive multistage testing, including two imputation techniques, the use of full information maximum likelihood and the use of scoring missing data as incorrect. These methods were examined under the missing completely at random, missing at random, and missing not at random frameworks, as well as other testing conditions. Comparisons were made to baseline conditions where no missing data were present. The results showed that imputation and the full information maximum likelihood methods outperformed incorrect scoring methods in terms of average bias, average root mean square error, and correlation between estimated and true thetas.
29

Monemi Bidgoli, Atieh, and Hassan Haghighi. "Augmenting ant colony optimization with adaptive random testing to cover prime paths." Journal of Systems and Software 161 (March 2020): 110495. http://dx.doi.org/10.1016/j.jss.2019.110495.

30

Chen, Jinfu, Patrick Kwaku Kudjo, Zufa Zhang, Chenfei Su, Yuchi Guo, Rubing Huang, and Heping Song. "A Modified Similarity Metric for Unit Testing of Object-Oriented Software Based on Adaptive Random Testing." International Journal of Software Engineering and Knowledge Engineering 29, no. 04 (April 2019): 577–606. http://dx.doi.org/10.1142/s0218194019500244.

Abstract:
Finding an effective method for testing object-oriented software (OOS) has proven elusive in the software community due to the rapid development of object-oriented programming (OOP) technology. Although significant progress has been made by previous studies, challenges still exist in relation to the object distance measurement of OOS using Adaptive Random Testing (ART). This is partly due to the unique features of OOS such as encapsulation, inheritance and polymorphism. In a previous work, we proposed a new similarity metric called the Object and Method Invocation Sequence Similarity (OMISS) metric to facilitate multi-class level testing using ART. In this paper, we broaden the set of models in the metric (OMISS) by considering the method parameter and adding the weight in the metric to develop a new distance metric to improve unit testing of OOS. We used the new distance metric to calculate the distance between the set of objects and the distance between the method sequences of the test cases. Additionally, we integrate the new metric in unit testing with ART and applied it to six open source subject programs. The experimental result shows that the proposed method with method parameter considered in this study is better than previous methods without the method parameter in the case of the single method. Our finding further shows that the proposed unit testing approach is a promising direction for assisting software engineers who seek to improve the failure-detection effectiveness of OOS testing.
31

Alamgir, Arbab, Abu Khari A’ain, Norlina Paraman, and Usman Ullah Sheikh. "Adaptive random testing with total cartesian distance for black box circuit under test." Indonesian Journal of Electrical Engineering and Computer Science 20, no. 2 (November 1, 2020): 720. http://dx.doi.org/10.11591/ijeecs.v20.i2.pp720-726.

Abstract:
Testing and verification of digital circuits is of vital importance in the electronics industry. Moreover, key designs require preservation of their intellectual property, which might restrict access to the internal structure of the circuit under test. Random testing is a classical solution to black-box testing, as it generates test patterns without using the structural implementation of the circuit under test. However, random testing ignores the importance of previously applied test patterns while generating subsequent test patterns. An improvement to random testing is Antirandom, which diversifies every subsequent test pattern in the test sequence; however, its computationally intensive distance calculation restricts its scalability to circuits with large inputs. Fixed-size candidate set adaptive random testing uses a predetermined number of patterns for distance calculations to avoid this computational complexity: a max-min distance computation against previously executed patterns is carried out for each candidate test pattern. However, the reduction in computational complexity reduces the effectiveness of the test set in terms of fault coverage. This paper uses a total Cartesian distance based approach on a fixed-size candidate set to enhance diversity in the test sequence. The proposed approach has a two-way effect on test pattern generation, as it lowers the computational intensity while enhancing fault coverage. Fault simulation results on ISCAS’85 and ISCAS’89 benchmark circuits show that the fault coverage of the proposed method increases by up to 20.22% compared to the previous method.
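As a hedged sketch of the general approach described above (not the authors' implementation), the fragment below generates candidate binary test patterns and keeps the one with the largest total Cartesian (Euclidean) distance to the patterns already applied; the 8-bit pattern width and the candidate-set size of 10 are illustrative choices.

import math
import random

def total_cartesian_distance(candidate, executed):
    # Sum of Euclidean (Cartesian) distances from a candidate bit pattern to every
    # previously applied test pattern.
    return sum(math.dist(candidate, t) for t in executed)

def diverse_test_patterns(num_patterns, width=8, k=10, seed=0):
    rng = random.Random(seed)

    def random_pattern():
        return tuple(rng.randint(0, 1) for _ in range(width))

    executed = [random_pattern()]
    while len(executed) < num_patterns:
        candidates = [random_pattern() for _ in range(k)]
        executed.append(max(candidates, key=lambda c: total_cartesian_distance(c, executed)))
    return executed

for pattern in diverse_test_patterns(4):
    print("".join(map(str, pattern)))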
32

Olea, Julio, Juan Ramón Barrada, Francisco J. Abad, Vicente Ponsoda, and Lara Cuevas. "Computerized Adaptive Testing: The Capitalization on Chance Problem." Spanish journal of psychology 15, no. 1 (March 2012): 424–41. http://dx.doi.org/10.5209/rev_sjop.2012.v15.n1.37348.

Abstract:
This paper describes several simulation studies that examine the effects of capitalization on chance in the selection of items and the ability estimation in CAT, employing the 3-parameter logistic model. In order to generate different estimation errors for the item parameters, the calibration sample size was manipulated (N = 500, 1000 and 2000 subjects) as was the ratio of item bank size to test length (banks of 197 and 788 items, test lengths of 20 and 40 items), both in a CAT and in a random test. Results show that capitalization on chance is particularly serious in CAT, as revealed by the large positive bias found in the small sample calibration conditions. For broad ranges of θ, the overestimation of the precision (asymptotic Se) reaches levels of 40%, something that does not occur with the RMSE (θ). The problem is greater as the item bank size to test length ratio increases. Potential solutions were tested in a second study, where two exposure control methods were incorporated into the item selection algorithm. Some alternative solutions are discussed.
33

Ackah-Arthur, Hilary, Jinfu Chen, Dave Towey, Michael Omari, Jiaxiang Xi, and Rubing Huang. "One-Domain-One-Input: Adaptive Random Testing by Orthogonal Recursive Bisection With Restriction." IEEE Transactions on Reliability 68, no. 4 (December 2019): 1404–28. http://dx.doi.org/10.1109/tr.2019.2907577.

34

Xu, Zhiyuan, Gongjun Xu, and Wei Pan. "Adaptive testing for association between two random vectors in moderate to high dimensions." Genetic Epidemiology 41, no. 7 (July 17, 2017): 599–609. http://dx.doi.org/10.1002/gepi.22059.

35

Wang, Wen-Chung, Chen-Wei Liu, and Shiu-Lien Wu. "The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing." Applied Psychological Measurement 37, no. 3 (January 11, 2013): 179–200. http://dx.doi.org/10.1177/0146621612469720.

36

Omari, Michael, Jinfu Chen, Hilary Ackah-Arthur, and Patrick Kwaku Kudjo. "Elimination by Linear Association: An Effective and Efficient Static Mirror Adaptive Random Testing." IEEE Access 7 (2019): 71038–60. http://dx.doi.org/10.1109/access.2019.2919160.

37

Rao, K. Koteswara, Y. Saroja, N. Ramesh Babu, G. Lalitha Kumari, and Y. Surekha. "Adaptive Genetic Algorithm (AGA) Based Optimal Directed Random Testing for Reducing Interactive Faults." Indian Journal of Computer Science and Engineering 12, no. 2 (April 20, 2021): 485–98. http://dx.doi.org/10.21817/indjcse/2021/v12i2/211202170.

38

Al Mohamad, Diaa, Erik W. Van Zwet, Eric Cator, and Jelle J. Goeman. "Adaptive critical value for constrained likelihood ratio testing." Biometrika 107, no. 3 (May 4, 2020): 677–88. http://dx.doi.org/10.1093/biomet/asaa013.

Abstract:
We present a new general method for constrained likelihood ratio testing which, when few constraints are violated, improves upon the existing approach in the literature that compares the likelihood ratio with the quantile of a mixture of chi-squared distributions; the improvement is in terms of both simplicity and power. The proposed method compares the constrained likelihood ratio statistic against the quantile of only one chi-squared random variable with data-dependent degrees of freedom. The new test is shown to have a valid exact significance level $\alpha$. It also has more power than the classical approach against alternatives for which the number of violations is not large. We provide more details for testing a simple order $\mu_1\leqslant\cdots\leqslant\mu_p$ against all alternatives using the proposed approach and give clear guidelines as to when the new method would be advantageous. A simulation study suggests that for testing a simple order, the new approach is more powerful in many scenarios than the existing method that uses a mixture of chi-squared variables. We illustrate the results of our adaptive procedure using real data on the liquidity preference hypothesis.
39

Thu Nguyet, Phan Thi, and Muslem Daud. "Computer Adaptive Test Development To Assess Students’ Psychology." JURNAL SERAMBI ILMU 22, no. 1 (March 22, 2021): 139–49. http://dx.doi.org/10.32672/si.v22i1.2760.

Abstract:
Stress has become a serious issue among university students, and efficient tools are needed to understand it better. The aim of the present study is to develop a Computerized Adaptive Testing (CAT) instrument to measure this stress, as a pioneering project in Vietnam. To this end, an item bank of 68 items was constructed, based on Likert polytomous scales across five subdomains: behavior, academic performance, family, lecturer and finance. The survey sample is large, comprising 2,085 students (704 males and 1,381 females). The Multidimensional Random Coefficients Multinomial Logit (MRCML) model is applied to develop the multidimensional stress scales and the computerized adaptive testing procedure. The findings indicate that the MRCML model can be used to develop a new scale with adequate psychometric properties, as indicated by various fit criteria (MNSQ, standard errors, and Z (t-test) statistics) implemented in the ConQuest software. The subdomains have good reliability (from .798 to .857). With respect to CAT, a simulated experiment based on the empirical data is used to evaluate the performance of the proposed computerized adaptive testing, and the standard errors of the estimated stress proficiencies are reported. The 68-item stress data fit the multidimensional model well.
40

CAI, KAI-YUAN, TSONG YUEH CHEN, YONG-CHAO LI, YUEN TAK YU, and LEI ZHAO. "ON THE ONLINE PARAMETER ESTIMATION PROBLEM IN ADAPTIVE SOFTWARE TESTING." International Journal of Software Engineering and Knowledge Engineering 18, no. 03 (May 2008): 357–81. http://dx.doi.org/10.1142/s0218194008003696.

Abstract:
Software cybernetics is an emerging area that explores the interplay between software and control. The controlled Markov chain (CMC) approach to software testing supports the idea of software cybernetics by treating software testing as a control problem, where the software under test serves as a controlled object modeled by a controlled Markov chain and the software testing strategy serves as the corresponding controller. The software under test and the corresponding software testing strategy form a closed-loop feedback control system. The theory of controlled Markov chains is used to design and optimize the testing strategy in accordance with the testing/reliability goal given explicitly and a priori. Adaptive software testing adjusts and improves software testing strategy online by using the testing data collected in the course of software testing. In doing so, the online parameter estimations play a key role. In this paper, we study the effects of genetic algorithm and the gradient method for doing online parameter estimation in adaptive software testing. We find that genetic algorithm is effective and does not require prior knowledge of the software parameters of concern. Although genetic algorithm is computationally intensive, it leads the adaptive software testing strategy to an optimal software testing strategy that is determined by optimizing a given testing goal, such as minimizing the total cost incurred for removing a given number of defects. On the other hand, the gradient method is computationally favorable, but requires appropriate initial values of the software parameters of concern. It may lead, or fail to lead, the adaptive software testing strategy to an optimal software testing strategy, depending on whether the given initial parameter values are appropriate or not. In general, the genetic algorithm should be used instead of the gradient method in adaptive software testing. Simulation results show that adaptive software testing does work and outperforms random testing.
41

Chen, Jinfu, Qihao Bao, T. H. Tse, Tsong Yueh Chen, Jiaxiang Xi, Chengying Mao, Minjie Yu, and Rubing Huang. "Exploiting the Largest Available Zone: A Proactive Approach to Adaptive Random Testing by Exclusion." IEEE Access 8 (2020): 52475–88. http://dx.doi.org/10.1109/access.2020.2977777.

42

CHEN, Tsong-Yueh. "Impact of the Compactness of Failure Regions on the Performance of Adaptive Random Testing." Journal of Software 17, no. 12 (2006): 2438. http://dx.doi.org/10.1360/jos172438.

43

Omari, Michael, Jinfu Chen, Robert French-Baidoo, and Yunting Sun. "A Proactive Approach to Test Case Selection — An Efficient Implementation of Adaptive Random Testing." International Journal of Software Engineering and Knowledge Engineering 30, no. 08 (August 2020): 1169–98. http://dx.doi.org/10.1142/s0218194020500308.

Abstract:
Fixed Sized Candidate Set (FSCS) is the first of a series of methods proposed to enhance the effectiveness of random testing (RT), referred to as Adaptive Random Testing (ART) methods. Since its inception, test case generation overheads have been a major drawback to the success of ART. In FSCS, the bulk of this cost is embedded in distance computations between a set of randomly generated candidate test cases and previously executed but unsuccessful test cases. Consequently, FSCS is caught in a logical trap of probing the distances between every candidate and all executed test cases before the best candidate is determined. Using data mining, however, we discovered that about 50% of all valid test cases are encountered much earlier in the distance computation process, but without the benefit of hindsight, FSCS is unable to validate them; a wild goose chase. This paper then uses this information to propose a new strategy that predictively and proactively selects valid candidates anywhere during the distance computation process without vetting every candidate. Theoretical analysis, simulations and experimental studies conducted led to a similar conclusion: 25% of the distance computations are wasteful and can be discarded without any repercussion on effectiveness.
44

Gupta, Sheifali, Pooja Jain, Deepali Gupta, and Harsha Chauhan. "Boosted Random Forest Learning Based Convolution Neural Network Model for Face Recognition System." Journal of Computational and Theoretical Nanoscience 16, no. 10 (October 1, 2019): 4153–59. http://dx.doi.org/10.1166/jctn.2019.8495.

Abstract:
Convolution Neural Network (CNN) accommodates high dimension features and large amount of data with high computation. In this paper, a CNN model for face recognition system that is based upon random forest learning approach is presented. It extracts the convolution neural network based linear and non-linear features of images. Random forest learns the linear and nonlinear features with different number of trees. The random forest learning is used with adaptive boosting algorithm for enhancing the recognition accuracy. It selects effective tree by boosting approach using adaptive threshold at testing time. For performance evaluation, the proposed boosted random forest based CNN model is compared with the existing model of soft-max learner based CNN model. The YALE dataset is used that contains the images of 38 persons, having 64 images for each person. The proposed approach achieves significant accuracy of 99.7%.
45

Mao, Chengying, Xuzheng Zhan, T. H. Tse, and Tsong Yueh Chen. "KDFC-ART: a KD-tree approach to enhancing Fixed-size-Candidate-set Adaptive Random Testing." IEEE Transactions on Reliability 68, no. 4 (December 2019): 1444–69. http://dx.doi.org/10.1109/tr.2019.2892230.

46

Kuo, Fei-Ching, Tsong Yueh Chen, Huai Liu, and Wing Kwong Chan. "Enhancing adaptive random testing for programs with high dimensional input domains or failure-unrelated parameters." Software Quality Journal 16, no. 3 (March 8, 2008): 303–27. http://dx.doi.org/10.1007/s11219-008-9047-6.

47

Abdulameer, Mohammed Hasan, Siti Norul Huda Sheikh Abdullah, and Zulaiha Ali Othman. "Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization." Scientific World Journal 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/835607.

Abstract:
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented.
48

LOONEY, CARL G. "ACCELERATING TRAINING OF FEEDFORWARD NEURAL NETWORKS." International Journal on Artificial Intelligence Tools 03, no. 03 (September 1994): 339–48. http://dx.doi.org/10.1142/s0218213094000170.

Abstract:
We review methods and techniques for training feedforward neural networks that avoid problematic behavior, accelerate the convergence, and verify the training. Adaptive step gain, bipolar activation functions, and conjugate gradients are powerful stabilizers. Random search techniques circumvent the local minimum trap and avoid specialization due to overtraining. Testing assures quality learning.
49

Chen, Jinfu, Fei-Ching Kuo, Tsong Yueh Chen, Dave Towey, Chenfei Su, and Rubing Huang. "A Similarity Metric for the Inputs of OO Programs and Its Application in Adaptive Random Testing." IEEE Transactions on Reliability 66, no. 2 (June 2017): 373–402. http://dx.doi.org/10.1109/tr.2016.2628759.

50

Wang, Rongcun, Zhengmin Li, Shujuan Jiang, and Chuanqi Tao. "Regression Test Case Prioritization Based on Fixed Size Candidate Set ART Algorithm." International Journal of Software Engineering and Knowledge Engineering 30, no. 03 (March 2020): 291–320. http://dx.doi.org/10.1142/s0218194020500138.

Abstract:
Regression testing is a very time-consuming and expensive testing activity. Many test case prioritization techniques have been proposed to speed up regression testing. Previous studies show that no one technique is always best. Random strategy, as the simplest strategy, is not always so bad. Particularly, when a test suite has higher fault detection capability, the strategy can generate a better result. Nevertheless, due to the randomness, the strategy is not always as satisfactory as expected. In this context, we present a test case prioritization approach using fixed size candidate set adaptive random testing algorithm to reduce the effect of randomness and improve fault detection effectiveness. The distance between pair-wise test cases is assessed by exclusive OR. We designed and conducted empirical studies on eight C programs to validate the effectiveness of the proposed approach. The experimental results, confirmed by a statistical analysis, indicate that the approach we proposed is more effective than random and the total greedy prioritization techniques in terms of fault detection effectiveness. Although the presented approach has comparable fault detection effectiveness to ART-based and the additional greedy techniques, the time cost is much lower. Consequently, the proposed approach is much more cost-effective.
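The abstract specifies an exclusive-OR distance between pairs of test cases but not the exact test-case representation; assuming, purely for illustration, that each test case is encoded as a binary coverage vector, an FSCS-style prioritization by maximum-minimum XOR distance might look like the following sketch.

import random

def xor_distance(a, b):
    # Exclusive-OR based distance: number of bit positions in which two coverage
    # vectors (encoded as integers) differ.
    return bin(a ^ b).count("1")

def prioritize(coverage_vectors, k=10, seed=0):
    # FSCS-style prioritization sketch: repeatedly pick, from a random candidate
    # subset of the remaining tests, the one with the largest minimum XOR distance
    # to the tests already placed in the prioritized order.
    rng = random.Random(seed)
    remaining = list(coverage_vectors)
    ordered = [remaining.pop(rng.randrange(len(remaining)))]
    while remaining:
        candidates = rng.sample(remaining, min(k, len(remaining)))
        best = max(candidates, key=lambda c: min(xor_distance(c, t) for t in ordered))
        remaining.remove(best)
        ordered.append(best)
    return ordered

# Toy example: each integer records which of eight program elements a test covers.
print(prioritize([0b10110010, 0b10110011, 0b01001100, 0b11110000, 0b00001111]))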