Journal articles on the topic 'PRIORITIZING TEST CASES'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'PRIORITIZING TEST CASES.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Rothermel, G., R. H. Untch, Chengyun Chu, and M. J. Harrold. "Prioritizing test cases for regression testing." IEEE Transactions on Software Engineering 27, no. 10 (2001): 929–48. http://dx.doi.org/10.1109/32.962562.

2

Elbaum, Sebastian, Alexey G. Malishevsky, and Gregg Rothermel. "Prioritizing test cases for regression testing." ACM SIGSOFT Software Engineering Notes 25, no. 5 (September 2000): 102–12. http://dx.doi.org/10.1145/347636.348910.

3

Ledru, Yves, Alexandre Petrenko, Sergiy Boroday, and Nadine Mandran. "Prioritizing test cases with string distances." Automated Software Engineering 19, no. 1 (September 7, 2011): 65–95. http://dx.doi.org/10.1007/s10515-011-0093-0.

4

Mei, Hong, Dan Hao, Lingming Zhang, Lu Zhang, Ji Zhou, and Gregg Rothermel. "A Static Approach to Prioritizing JUnit Test Cases." IEEE Transactions on Software Engineering 38, no. 6 (November 2012): 1258–75. http://dx.doi.org/10.1109/tse.2011.106.

5

Öztürk, Muhammed Maruf. "A bat-inspired algorithm for prioritizing test cases." Vietnam Journal of Computer Science 5, no. 1 (September 7, 2017): 45–57. http://dx.doi.org/10.1007/s40595-017-0100-x.

6

Hemmati, Hadi, Zhihan Fang, Mika V. Mäntylä, and Bram Adams. "Prioritizing manual test cases in rapid release environments." Software Testing, Verification and Reliability 27, no. 6 (July 13, 2016): e1609. http://dx.doi.org/10.1002/stvr.1609.

7

Mittal, Shweta, and Om Prakash Sangwan. "Prioritizing test cases for regression techniques using metaheuristic techniques." Journal of Information and Optimization Sciences 39, no. 1 (November 15, 2017): 39–51. http://dx.doi.org/10.1080/02522667.2017.1372150.

8

Qian, Ju, and Di Zhou. "Prioritizing Test Cases for Memory Leaks in Android Applications." Journal of Computer Science and Technology 31, no. 5 (September 2016): 869–82. http://dx.doi.org/10.1007/s11390-016-1670-2.

9

Alves, Everton L. G., Patrícia D. L. Machado, Tiago Massoni, and Miryung Kim. "Prioritizing test cases for early detection of refactoring faults." Software Testing, Verification and Reliability 26, no. 5 (March 21, 2016): 402–26. http://dx.doi.org/10.1002/stvr.1603.

10

Ayav, Tolga. "Prioritizing MCDC test cases by spectral analysis of Boolean functions." Software Testing, Verification and Reliability 27, no. 7 (August 1, 2017): e1641. http://dx.doi.org/10.1002/stvr.1641.

11

Abdur, Md, Md Abu, and Md Saeed. "Prioritizing Dissimilar Test Cases in Regression Testing using Historical Failure Data." International Journal of Computer Applications 180, no. 14 (January 17, 2018): 1–8. http://dx.doi.org/10.5120/ijca2018916258.

12

Aggrawal, K. K., Yogesh Singh, and A. Kaur. "Code coverage based technique for prioritizing test cases for regression testing." ACM SIGSOFT Software Engineering Notes 29, no. 5 (September 2004): 1–4. http://dx.doi.org/10.1145/1022494.1022511.

13

Sakib, Kazi, Md Abdur Rahman, and Saeed Siddik. "Prioritizing Test Cases by Collaborating Artifacts of Software Development Life Cycle." International Journal of Forensic Software Engineering 1, no. 1 (2019): 1. http://dx.doi.org/10.1504/ijfse.2019.10024847.

14

Do, Hyunsook, Gregg Rothermel, and Alex Kinneer. "Prioritizing JUnit Test Cases: An Empirical Assessment and Cost-Benefits Analysis." Empirical Software Engineering 11, no. 1 (February 21, 2006): 33–70. http://dx.doi.org/10.1007/s10664-006-5965-8.

15

Huang, Rubing, Xiaodong Xie, Dave Towey, Tsong Yueh Chen, Yansheng Lu, and Jinfu Chen. "Prioritization of Combinatorial Test Cases by Incremental Interaction Coverage." International Journal of Software Engineering and Knowledge Engineering 23, no. 10 (December 2013): 1427–57. http://dx.doi.org/10.1142/s0218194013500459.

Abstract:
Combinatorial interaction testing is a well-recognized testing method, and has been widely applied in practice, often with the assumption that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, an alternative assumption may be that some test cases are more likely to reveal failure, thus making the order of executing the test cases critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage, which prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter value combinations of a given strength (level of interaction among parameters). However, this approach suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered with parameter interactions of small strengths, we propose a new strategy of prioritizing combinatorial test cases by incrementally adjusting the strength values. Experimental results show that our method performs better than the random prioritization technique and the technique of prioritizing combinatorial test suites according to test case generation order, and has better performance than the interaction-coverage-based test prioritization technique in most cases.
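
For orientation, here is a minimal Python sketch of the baseline greedy interaction-coverage prioritization the abstract describes (repeatedly picking the unexecuted test case that covers the most uncovered value combinations of a fixed strength t). It is illustrative only, not the authors' implementation; the function names and the toy suite are assumptions.

```python
# Illustrative sketch only (not the authors' implementation; names and the toy suite are
# assumptions). Baseline greedy interaction-coverage prioritization: repeatedly pick the
# unexecuted test case covering the most uncovered t-way parameter-value combinations.
from itertools import combinations

def t_way_combinations(test, t):
    """All t-way (parameter index, value) combinations covered by one test case."""
    indexed = list(enumerate(test))
    return {frozenset(c) for c in combinations(indexed, t)}

def prioritize_by_interaction_coverage(tests, t):
    remaining, covered, ordered = list(tests), set(), []
    while remaining:
        best = max(remaining, key=lambda tc: len(t_way_combinations(tc, t) - covered))
        covered |= t_way_combinations(best, t)
        ordered.append(best)
        remaining.remove(best)
    return ordered

# Toy example: 3 parameters, strength t = 2
suite = [("a", "x", "1"), ("a", "y", "2"), ("b", "x", "2"), ("b", "y", "1")]
print(prioritize_by_interaction_coverage(suite, 2))
```

The paper's contribution, incrementally adjusting the strength rather than fixing it, is not reproduced in this sketch.
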
16

Ahmad, Johanna. "Measuring the Efficiency of MFWA Technique for Prioritizing Event Sequences Test Cases." International Journal of Advanced Trends in Computer Science and Engineering 8, no. 1.4 (September 15, 2019): 23–28. http://dx.doi.org/10.30534/ijatcse/2019/0481.42019.

17

Barbosa, Gerson, Érica Ferreira de Souza, Luciana Brasil Rebelo dos Santos, Marlon da Silva, Juliana Marino Balera, and Nandamudi Lankalapalli Vijaykumar. "A Systematic Literature Review on prioritizing software test cases using Markov chains." Information and Software Technology 147 (July 2022): 106902. http://dx.doi.org/10.1016/j.infsof.2022.106902.

18

Mukherjee, R., and K. S. Patnaik. "Prioritizing JUnit Test Cases Without Coverage Information: An Optimization Heuristics Based Approach." IEEE Access 7 (2019): 78092–107. http://dx.doi.org/10.1109/access.2019.2922387.

19

Gupta, Atulya, and Rajendra Prasad Mahapatra. "Multifactor Algorithm for Test Case Selection and Ordering." Baghdad Science Journal 18, no. 2(Suppl.) (June 20, 2021): 1056. http://dx.doi.org/10.21123/bsj.2021.18.2(suppl.).1056.

Abstract:
Regression testing is expensive and therefore calls for optimization. Typically, test case optimization means selecting a reduced subset of test cases or prioritizing them so that potential faults are detected at an earlier phase. Many earlier studies relied on heuristic mechanisms to attain optimality while reducing or prioritizing test cases, but they lacked systematic procedures for handling tied test cases. Moreover, evolutionary algorithms such as genetic algorithms often help reduce the number of test cases and, concurrently, the computational runtime; however, they fall short when fault detection capacity must be examined alongside other parameters. Motivated by this, the current research proposes a multifactor algorithm that incorporates genetic operators and additional features. A factor-based prioritizer is introduced to properly handle tied test cases that emerge during re-ordering. In addition, a Cost-based Fine Tuner (CFT) is embedded in the study to reveal stable test cases for processing. The effectiveness of the proposed minimization approach is analyzed and compared with a specific rule-based heuristic method and a standard genetic methodology, and the result of the reduction procedure is validated graphically. For the proposed prioritization scheme, randomly generated sequences are contrasted with the re-ordered test sequences over 10 benchmark programs. Experimental analysis showed that the proposed system achieved a 35-40% reduction in testing effort by identifying and executing stable and coverage-effective test cases at an earlier phase.
20

Nayak, Gayatri, and Mitrabinda Ray. "Survey on Prioritizing Test Cases in Various Levels of the Software Development Life Cycle." International Journal of Information Technology Project Management 12, no. 1 (January 2021): 1–28. http://dx.doi.org/10.4018/ijitpm.2021010101.

Abstract:
Test case prioritization is a technique for reordering the execution of test cases to reduce regression testing costs. This paper examines various widely used techniques and suggests improvements to the test case prioritization process after identifying many research gaps. These gaps were collected through a thorough study of 206 papers, selected after screening 310 papers on test case generation and prioritization techniques drawn from electronic databases such as IEEE Xplore, Science Direct, the ACM Digital Library, Springer, Wiley, and Elsevier. The authors aim to provide a statistical record of research contributions on test case prioritization at three levels of the software development life cycle. The survey shows that 20.87% of the papers contribute to TCP at the requirement phase, 38.83% at the design phase, and 40.29% at the coding phase. The conclusion section offers many recommendations for future research.
21

Hooda, Aman, and Ankit Kumar. "Hybrid approach of prioritizing Design based and Risk based test cases by Partition Clustering." Oxford Journal of Intelligent Decision and Data Science 2018 (2018): 34–47. http://dx.doi.org/10.5899/2018/ojids-00016.

22

Verma, Amit, and Simranjeet Kaur. "Design and Development of an Algorithm for Prioritizing the Test Cases Using Neural Network as Classifier." IAES International Journal of Artificial Intelligence (IJ-AI) 4, no. 1 (March 1, 2015): 14. http://dx.doi.org/10.11591/ijai.v4.i1.pp14-19.

Abstract:
Test Case Prioritization (TCP) has gained widespread acceptance because it often results in good-quality software with fewer defects. Due to the increasing rate of faults in software, traditional prioritization techniques result in increased cost and time. The main challenge in TCP is the difficulty of manually validating the priorities of different test cases, given the large size of test suites and the limited emphasis on automating the TCP process. The objective of this paper is to determine the priorities of different test cases using an artificial neural network, which helps predict the correct priorities with the help of the backpropagation algorithm. In the proposed work, priorities are assigned to test cases based on their frequency; the ANN then predicts whether the correct priority has been assigned to each test case and raises an interrupt when a wrong priority is assigned. Classifiers are used to classify test cases of different priorities. The proposed algorithm is effective in that it reduces complexity with robust efficiency and automates the test case prioritization process.
23

Haraty, Ramzi A., Nashat Mansour, Lama Moukahal, and Iman Khalil. "Regression Test Cases Prioritization Using Clustering and Code Change Relevance." International Journal of Software Engineering and Knowledge Engineering 26, no. 05 (June 2016): 733–68. http://dx.doi.org/10.1142/s0218194016500248.

Abstract:
Regression testing is important for maintaining software quality. However, the cost of regression testing is relatively high. Test case prioritization is one way to reduce this cost. Test case prioritization techniques sort test cases for regression testing based on their importance. In this paper, we design and implement a test case prioritization method based on the location of a change. The method consists of three steps: (1) clustering test cases, (2) prioritizing the clusters with respect to the relevance of the clusters to a code change, and (3) test case prioritization within each cluster based on metrics. We propose a metric for measuring test case importance based on Requirement Complexity, Code Complexity, and Code Coverage. To evaluate our method, we apply it on a launch interceptor problem program, and measure the inclusiveness and precision for clusters of test cases with respect to code change in specific test cases. Our results show that our proposed change-based prioritization method increases the likelihood of executing more relevant test cases earlier.
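
As a rough illustration of the three-step method summarized above (clustering, cluster ordering by relevance to the code change, within-cluster ranking by an importance metric over requirement complexity, code complexity, and code coverage), here is a minimal Python sketch; the weights, field names, and data layout are assumptions, not the authors' implementation.

```python
# Illustrative sketch only (weights, field names, and data layout are assumed, not the
# authors' implementation). Step 2 of the abstract: order clusters by how much changed
# code they touch; step 3: rank test cases inside each cluster by a weighted importance
# score over requirement complexity, code complexity, and code coverage.
def importance(tc, w_req=0.3, w_code=0.3, w_cov=0.4):
    """Weighted importance score for one test case (a dict of normalized metric values)."""
    return (w_req * tc["requirement_complexity"]
            + w_code * tc["code_complexity"]
            + w_cov * tc["code_coverage"])

def prioritize(clusters, changed_units):
    """clusters: [{"covered_units": set, "tests": [test dicts]}]; changed_units: set."""
    ranked_clusters = sorted(clusters,
                             key=lambda c: len(c["covered_units"] & changed_units),
                             reverse=True)
    ordered = []
    for cluster in ranked_clusters:
        ordered.extend(sorted(cluster["tests"], key=importance, reverse=True))
    return ordered
```
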
24

Zhai, Ke, Bo Jiang, and W. K. Chan. "Prioritizing Test Cases for Regression Testing of Location-Based Services: Metrics, Techniques, and Case Study." IEEE Transactions on Services Computing 7, no. 1 (January 2014): 54–67. http://dx.doi.org/10.1109/tsc.2012.40.

25

Samad, Ali, Hairulnizam Bin Mahdin, Rafaqat Kazmi, Rosziati Ibrahim, and Zirawani Baharum. "Multiobjective Test Case Prioritization Using Test Case Effectiveness: Multicriteria Scoring Method." Scientific Programming 2021 (June 24, 2021): 1–13. http://dx.doi.org/10.1155/2021/9988987.

Abstract:
Validation of modified source code is done by regression testing. In regression testing, time and resources are limited, so a minimal set of test cases must be selected from the test suite to reduce execution time. The test case minimization process optimizes regression testing by removing redundant test cases or prioritizing the test cases. This study proposes a test case prioritization approach based on multiobjective particle swarm optimization (MOPSO) that considers minimum execution time, maximum fault detection ability, and maximum code coverage. The MOPSO algorithm prioritizes test cases using parameters including execution time, fault detection ability, and code coverage. Three datasets are selected to evaluate the proposed MOPSO technique: TreeDataStructure, JodaTime, and Triangle. The proposed MOPSO is compared with no-ordering, reverse-ordering, and random-ordering techniques to evaluate its effectiveness; higher result values indicate greater effectiveness and efficiency of the proposed MOPSO compared with the other approaches on the TreeDataStructure, JodaTime, and Triangle datasets. The results are presented on a 100-index scale from low to high values, after which the test cases are prioritized. The experiment is conducted on three open-source Java applications and evaluated using the metrics inclusiveness, precision, and size reduction of the test suite. The results reveal that all scenarios performed acceptably, and the technique is 17% to 86% more effective in terms of inclusiveness, 33% to 85% more effective in terms of precision, and achieves between 17% and 86% size reduction.
26

Choi and Lim. "Model-Based Test Suite Generation Using Mutation Analysis for Fault Localization." Applied Sciences 9, no. 17 (August 23, 2019): 3492. http://dx.doi.org/10.3390/app9173492.

Abstract:
Fault localization techniques reduce the effort required when debugging software, as revealed by previous test cases. However, many test cases are required to reduce the number of candidate fault locations. To overcome this disadvantage, various methods were proposed to reduce fault-localization costs by prioritizing test cases. However, because a sufficient number of test cases is required for prioritization, the test-case generation cost remains high. This paper proposes a test-case generation method using a state chart to reduce the number of test suites required for fault localization, minimizing the test-case generation and execution times. The test-suite generation process features two phases: fault-detection test-case generation and fault localization in the test cases. Each phase uses mutation analysis to evaluate test cases; the results are employed to improve the test cases according to the objectives of each phase, using genetic algorithms. We provide useful guidelines for application of a search-based mutational method to a state chart; we show that the proposed method improves fault-localization performance in the test-suite generation phase.
27

Mansour, Nashat, and Wael Statieh. "Regression Test Selection for C# Programs." Advances in Software Engineering 2009 (July 2, 2009): 1–10. http://dx.doi.org/10.1155/2009/535708.

Abstract:
We present a regression test selection technique for C# programs. C# is fairly new and is often used within the Microsoft .Net framework to give programmers a solid base to develop a variety of applications. Regression testing is done after modifying a program. Regression test selection refers to selecting a suitable subset of test cases from the original test suite in order to be rerun. It aims to provide confidence that the modifications are correct and did not affect other unmodified parts of the program. The regression test selection technique presented in this paper accounts for C#.Net specific features. Our technique is based on three phases; the first phase builds an Affected Class Diagram consisting of classes that are affected by the change in the source code. The second phase builds a C# Interclass Graph (CIG) from the affected class diagram based on C# specific features. In this phase, we reduce the number of selected test cases. The third phase involves further reduction and a new metric for assigning weights to test cases for prioritizing the selected test cases. We have empirically validated the proposed technique by using case studies. The empirical results show the usefulness of the proposed regression testing technique for C#.Net programs.
28

Akbari, Zahra, Sedigheh Khoshnevis, and Mehran Mohsenzadeh. "A Method for Prioritizing Integration Testing in Software Product Lines Based on Feature Model." International Journal of Software Engineering and Knowledge Engineering 27, no. 04 (May 2017): 575–600. http://dx.doi.org/10.1142/s0218194017500218.

Abstract:
Testing activities for software product lines should be different from that of single software systems, due to significant differences between software product line engineering and single software system development. The cost of testing in software product line is generally higher compared with single software systems; therefore, there should exist a certain balance between cost, quality of final products, and the time of performing testing activities. As decreasing testing cost is an important challenge in software product line integration testing, the contribution of this paper is in introducing a method for early integration testing in software product lines based on feature model (FM) by prioritizing test cases in order to decrease integration testing costs in SPLs. In this method, we focus on reusing domain engineering artifacts and prioritized selection and execution of integration test cases. It also uses separation of concerns and pruning techniques on FMs to help prioritize the test cases. The method shows to be promising when applied to some case studies in the sense that it decreases the costs of performing integration test by about 82% and also detects about 44% of integration faults in domain engineering.
29

Bhattacharjee, Gargi, and Sudipta Dash. "Test Path Prioritization from UML Activity Diagram Using a Hybridized Approach." International Journal of Knowledge-Based Organizations 8, no. 1 (January 2018): 83–96. http://dx.doi.org/10.4018/ijkbo.2018010106.

Abstract:
Software testing is regarded as a pivotal approach to realizing a highly reliable product. To check the correctness of results, appropriate test cases are required. UML models are widely used to depict specifications for software development. Test cases are created independently and, based on the sequence of occurrence in the diagrams, lead to corresponding test paths in the program. In this paper, we analyze an activity diagram consisting of concurrent activities to generate test paths; the obtained test paths then need to be ranked. We demonstrate that Genetic Algorithm procedures can be applied alongside the Ant Colony Optimization technique not only to find the most critical path but also to prioritize the other paths, enhancing the effectiveness of software testing.
30

Cheng, Jing, Tao Zhang, Jingbo Zhang, Haipeng Wang, and Qin Xu. "A Deviate-Based Prioritizing Technique for Regression Testing of Mobile Navigation Service." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 36, no. 1 (February 2018): 189–94. http://dx.doi.org/10.1051/jnwpu/20183610189.

Abstract:
Mobile navigation is an important and popular location-based service that recommends routes for mobile users to their destinations. Modern mobile navigation services provide various navigation strategies and consider many complex situations, which makes their validation and verification very difficult. In this paper, we present an approach to prioritizing test cases for regression testing of a mobile navigation service. The approach is based on the assumption that there may be a failure if the user's actual route deviates from the recommended route. We analyze mass mobile navigation logs, compare the recommended routes with the users' travel routes, identify deviations at intersection points, and then prioritize regression test data by deviation. To evaluate the approach, we conduct a case study on a popular navigation application; compared with a random testing approach, the proposed prioritization approach helps improve test efficiency.
31

Panda, S., D. Munjal, and D. P. Mohapatra. "A Slice-Based Change Impact Analysis for Regression Test Case Prioritization of Object-Oriented Programs." Advances in Software Engineering 2016 (May 8, 2016): 1–20. http://dx.doi.org/10.1155/2016/7132404.

Abstract:
Test case prioritization focuses on finding a suitable order of execution of the test cases in a test suite to meet some performance goals like detecting faults early. It is likely that some test cases execute the program parts that are more prone to errors and will detect more errors if executed early during the testing process. Finding an optimal order of execution for the selected regression test cases saves time and cost of retesting. This paper presents a static approach to prioritizing the test cases by computing the affected component coupling (ACC) of the affected parts of object-oriented programs. We construct a graph named affected slice graph (ASG) to represent these affected program parts. We determine the fault-proneness of the nodes of ASG by computing their respective ACC values. We assign higher priority to those test cases that cover the nodes with higher ACC values. Our analysis with mutation faults shows that the test cases executing the fault-prone program parts have a higher chance to reveal faults earlier than other test cases in the test suite. The result obtained from seven case studies justifies that our approach is feasible and gives acceptable performance in comparison to some existing techniques.
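
A minimal sketch of the core ordering rule described in the abstract, namely that test cases covering affected-slice-graph nodes with higher ACC values run earlier, follows; the data structures are assumed for illustration, and this is not the authors' tool.

```python
# Illustrative sketch only (not the authors' tool). Tests whose covered affected-slice-graph
# (ASG) nodes carry larger affected-component-coupling (ACC) values are run earlier.
def prioritize_by_acc(test_coverage, acc_of_node):
    """test_coverage: {test_id: set of covered ASG nodes}; acc_of_node: {node: ACC value}."""
    def score(test_id):
        return sum(acc_of_node.get(node, 0.0) for node in test_coverage[test_id])
    return sorted(test_coverage, key=score, reverse=True)
```
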
32

Wu, Shih-Da, and Jung-Hua Lo. "The MADAG Strategy for Fault Location Techniques." Applied Sciences 13, no. 2 (January 6, 2023): 819. http://dx.doi.org/10.3390/app13020819.

Abstract:
Spectrum-based fault localization (SBFL), which utilizes spectrum information of test cases to calculate the suspiciousness of each statement in a program, can reduce developers’ effort. However, applying redundant test cases from a test suite to fault localization incurs a heavy burden, especially in a restricted resource environment, and it is expensive and infeasible to inspect the results of each test input. Prioritizing/selecting appropriate test cases is important to enable the practical application of the SBFL technique. In addition, we must ensure that applying the selected tests to SBFL can achieve approximately the effectiveness of fault localization with whole tests. This paper presents a test case prioritization/selection strategy, namely the Minimal Aggregate of the Diversity of All Groups (MADAG). The MADAG strategy prioritizes/selects test cases using information on the diversity of the execution trace of each test case. We implemented and applied the MADAG strategy to 233 faulty versions of the Siemens and UNIX programs from the Software-artifact Infrastructure Repository. The experiments show that (1) the MADAG strategy uses only 8.99 and 14.27 test cases, with an average of 18, from the Siemens and UNIX test suites, respectively, and the SBFL technique has approximate effectiveness for fault localization on all test cases and outperforms the previous best test case prioritization method; (2) we verify that applying whole tests from the test suite may not achieve the better effectiveness in fault localization compared with the tests selected by MADAG strategy.
33

Kiran, Ayesha, Wasi Haider Butt, Arslan Shaukat, Muhammad Umar Farooq, Urooj Fatima, Farooque Azam, and Zeeshan Anwar. "Multi-objective regression test suite optimization using three variants of adaptive neuro fuzzy inference system." PLOS ONE 15, no. 12 (December 3, 2020): e0242708. http://dx.doi.org/10.1371/journal.pone.0242708.

Abstract:
In software development, regression testing is one of the major activities performed after making modifications to the current system or whenever a software system evolves. However, the test suite grows as new test cases are added and becomes inefficient because of redundant, broken, and obsolete test cases; running all of them requires additional time and budget. Many researchers have proposed computational intelligence and conventional approaches for dealing with this problem, obtaining an optimized test suite by selecting, minimizing or reducing, and prioritizing test cases. Currently, most of these optimization approaches are single-objective and static in nature, but advances in information technology and the associated market challenges make multi-objective, dynamic approaches necessary. Therefore, we propose three variants of a self-tunable Adaptive Neuro-Fuzzy Inference System, i.e., TLBO-ANFIS, FA-ANFIS, and HS-ANFIS, for multi-objective regression test suite optimization. Two benchmark test suites are used to evaluate the proposed ANFIS variants, and their performance is measured using Standard Deviation and Root Mean Square Error. The experimental results are also compared with six existing methods, i.e., GA-ANFIS, PSO-ANFIS, MOGA, NSGA-II, MOPSO, and TOPSIS, and it is concluded that the proposed method effectively reduces the size of the regression test suite without a reduction in the fault detection rate.
34

Bajaj, Anu, Ajith Abraham, Saroj Ratnoo, and Lubna Abdelkareim Gabralla. "Test Case Prioritization, Selection, and Reduction Using Improved Quantum-Behaved Particle Swarm Optimization." Sensors 22, no. 12 (June 9, 2022): 4374. http://dx.doi.org/10.3390/s22124374.

Abstract:
The emerging areas of IoT and sensor networks bring lots of software applications on a daily basis. To keep up with the ever-changing expectations of clients and the competitive market, the software must be updated. The changes may cause unintended consequences, necessitating retesting, i.e., regression testing, before being released. The efficiency and efficacy of regression testing techniques can be improved with the use of optimization approaches. This paper proposes an improved quantum-behaved particle swarm optimization approach for regression testing. The algorithm is improved by employing a fix-up mechanism to perform perturbation for the combinatorial TCP problem. Second, the dynamic contraction-expansion coefficient is used to accelerate the convergence. It is followed by an adaptive test case selection strategy to choose the modification-revealing test cases. Finally, the superfluous test cases are removed. Furthermore, the algorithm’s robustness is analyzed for fault as well as statement coverage. The empirical results reveal that the proposed algorithm performs better than the Genetic Algorithm, Bat Algorithm, Grey Wolf Optimization, Particle Swarm Optimization and its variants for prioritizing test cases. The findings show that inclusivity, test selection percentage and cost reduction percentages are higher in the case of fault coverage compared to statement coverage but at the cost of high fault detection loss (approx. 7%) at the test case reduction stage.
35

Köber, John, Matthias Behrendt, and Albert Albers. "Case study on prioritizing test cases and selecting the most qualified validation environment using an OEM’s transmission application as an example." Procedia CIRP 100 (2021): 834–39. http://dx.doi.org/10.1016/j.procir.2021.05.035.

36

Gupta, P. K. "K-Step Crossover Method based on Genetic Algorithm for Test Suite Prioritization in Regression Testing." JUCS - Journal of Universal Computer Science 27, no. 2 (February 28, 2021): 170–89. http://dx.doi.org/10.3897/jucs.65241.

Abstract:
Software is an integration of numerous programming modules (e.g., functions, procedures, legacy systems, reusable components) that are tested and combined to build the entire system. However, undesired faults may occur due to changes in modules discovered during validation and verification, and retesting the entire software is costly in terms of money and time. Therefore, to avoid retesting the entire software, regression testing is performed, in which a previously created test suite is used to retest the modified modules of the software system. Regression testing works in three ways: minimizing test cases, selecting test cases, and prioritizing test cases. In this paper, a two-phase algorithm is proposed that combines test case selection and test case prioritization for performing regression testing on several procedural-language modules ranging from small to very large line counts. A textual differencing algorithm is implemented for test case selection: program statements modified between two versions of a module are used for textual differencing and to identify the test cases that exercise the modified statements. In the next step, test case prioritization is implemented by applying a Genetic Algorithm for code/condition coverage. The genetic operators crossover and mutation are applied over the initial population (i.e., test cases), taking code/condition coverage as the fitness criterion, to produce a prioritized test suite. The prioritization algorithm can be applied over either the original or the reduced test suite, depending on the test suite's size and the need for accuracy. In the obtained results, the efficiency of the prioritization algorithms is analyzed using the Average Percentage of Code Coverage (APCC) and Average Percentage of Code Coverage with cost (APCCc). A comparison with previously proposed methods shows that APCC and APCCc values rise faster for the prioritized test suite than for the non-prioritized test suite.
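
For readers unfamiliar with the effectiveness metric mentioned above, here is an illustrative Python sketch of an APFD-style coverage metric; it assumes APCC mirrors the standard APFD formula with code elements in place of faults, which may differ in detail from the formulation used in the paper.

```python
# Illustrative sketch of an APFD-style coverage metric. Assumption: APCC mirrors APFD with
# code elements in place of faults, i.e. APCC = 1 - (sum of first-coverage positions)/(n*m)
# + 1/(2n), where n = number of test cases and m = number of code elements. The paper's
# exact APCC/APCCc definitions may differ.
def apcc(ordered_tests, coverage):
    """ordered_tests: prioritized list of test ids; coverage: {test_id: set of code elements}."""
    elements = set().union(*coverage.values())
    n, m = len(ordered_tests), len(elements)
    first_pos = {}
    for pos, tid in enumerate(ordered_tests, start=1):
        for element in coverage[tid]:
            first_pos.setdefault(element, pos)   # position of the first test covering it
    # Elements never reached by an ordered test are counted at position n.
    return 1 - sum(first_pos.get(e, n) for e in elements) / (n * m) + 1 / (2 * n)
```

APCCc, the cost-aware variant named in the abstract, is not sketched here.
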
37

Zhang, Tianning, Xingqi Wang, Dan Wei, and Jinglong Fang. "Test Case Prioritization Technique Based on Error Probability and Severity of UML Models." International Journal of Software Engineering and Knowledge Engineering 28, no. 06 (June 2018): 831–44. http://dx.doi.org/10.1142/s0218194018500249.

Abstract:
Test case prioritization is one of the most useful activities in testing. Most existing test case prioritization techniques are based on code coverage, which requires access to source code. However, code-based testing comes late in the software development life cycle, and when errors are detected at that stage, the cost of fixing them is very high. Therefore, in this paper, we provide a test case prioritization technique based on Unified Modeling Language (UML) models, which are built before coding, to detect errors as early as possible and reduce the cost of modification. The technique consists of the following main parts: (1) using C&K metrics to estimate the error probability of each class; (2) using dependences obtained from model slicing to estimate error severity; (3) deriving test case priority from error probability and severity, then prioritizing the test cases. With our technique, test engineers need only the UML model, and the test cases can be prioritized automatically. To evaluate the technique, we applied it to an unmanned aerial vehicle (UAV) flight control system and performed test case prioritization. The results show that errors can be detected effectively and stability increased significantly compared with current code-based techniques.
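
A minimal sketch of the final prioritization step outlined in the abstract (combining the estimated error probability and error severity of the classes a test case exercises) follows; the data model and the multiplicative combination are assumptions for illustration, not the paper's exact scheme.

```python
# Illustrative sketch only (data model and the multiplicative combination are assumptions,
# not the paper's exact scheme). Each test case exercises a set of classes; a class's error
# probability is estimated from C&K metrics and its error severity from model slicing.
def test_priority(covered_classes, error_prob, error_severity):
    """Combined score of one test case over the classes it exercises."""
    return sum(error_prob[c] * error_severity[c] for c in covered_classes)

def prioritize(tests_to_classes, error_prob, error_severity):
    """tests_to_classes: {test_id: set of classes}; returns test ids, highest score first."""
    return sorted(tests_to_classes,
                  key=lambda t: test_priority(tests_to_classes[t], error_prob, error_severity),
                  reverse=True)
```
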
38

Wei, Dan, Qingying Sun, Xingqi Wang, Tianning Zhang, and Bin Chen. "A Model-Based Test Case Prioritization Approach Based on Fault Urgency and Severity." International Journal of Software Engineering and Knowledge Engineering 30, no. 02 (February 2020): 263–90. http://dx.doi.org/10.1142/s0218194020500126.

Abstract:
As software systems have grown in scale, the number of test cases has grown explosively. Test case prioritization (TCP) has been widely used in software testing to effectively improve testing efficiency. However, traditional TCP methods are mostly based on software code and are difficult to apply to model-based testing. Moreover, existing model-based TCP techniques often do not take the likely distribution of faults into consideration, yet software faults are rarely distributed equally across a system, and test cases that cover more fault-prone modules are more likely to reveal faults, so they should be run with a higher priority. Therefore, in this paper, we provide a TCP approach based on a Hidden Markov Model (HMM) to detect faults as early as possible and reduce the cost of modification. The approach consists of the following main parts: (1) transforming the Unified Modeling Language (UML) sequence diagram into an HMM; (2) estimating fault urgency according to fault priority and probability; (3) estimating fault severity by analyzing the weight of each state in the HMM; (4) deriving test case priority from fault urgency and fault severity, then prioritizing the test cases. The proposed approach is applied to an unmanned aerial vehicle (UAV) flight control system to perform TCP. The experimental results show that the proposed TCP approach can effectively increase the probability of earlier fault detection and improve efficiency and stability compared with other prioritization techniques, such as original prioritization, random prioritization, additional prioritization, and EPS-UML.
39

Wright, Adam, Francine L. Maloney, Matthew Wien, Lipika Samal, Srinivas Emani, and Gianna Zuccotti. "Assessing information system readiness for mitigating malpractice risk through simulation: results of a multi-site study." Journal of the American Medical Informatics Association 22, no. 5 (May 26, 2015): 1020–28. http://dx.doi.org/10.1093/jamia/ocv041.

Abstract:
Objective: To develop and test an instrument for assessing a healthcare organization’s ability to mitigate malpractice risk through clinical decision support (CDS). Materials and Methods: Based on a previously collected malpractice data set, we identified common types of CDS and the number and cost of malpractice cases that might have been prevented through this CDS. We then designed clinical vignettes and questions that test an organization’s CDS capabilities through simulation. Seven healthcare organizations completed the simulation. Results: All seven organizations successfully completed the self-assessment. The proportion of potentially preventable indemnity loss for which CDS was available ranged from 16.5% to 73.2%. Discussion: There is a wide range in organizational ability to mitigate malpractice risk through CDS, with many organizations’ electronic health records only being able to prevent a small portion of malpractice events seen in a real-world dataset. Conclusion: The simulation approach to assessing malpractice risk mitigation through CDS was effective. Organizations should consider using malpractice claims experience to facilitate prioritizing CDS development.
40

Bai, Xiaoying, Ron S. Kenett, and Wei Yu. "Risk Assessment and Adaptive Group Testing of Semantic Web Services." International Journal of Software Engineering and Knowledge Engineering 22, no. 05 (August 2012): 595–620. http://dx.doi.org/10.1142/s0218194012500167.

Abstract:
Testing is necessary to ensure the quality of web services that are loosely coupled, dynamic bound and integrated through standard protocols. Exhaustive testing of web services is usually impossible due to unavailable source code, diversified user requirements and large number of possible service combinations delivered by the open platform. This paper proposes a risk-based approach for selecting and prioritizing test cases for testing service-based systems. We specially address the problem in the context of semantic web services. Semantic web services introduce semantics to service integration and interoperation using ontology models and specifications. Semantic errors are considered more difficult to detect than syntactic errors. Due to the complexity of conceptual uniformity, it is hard to ensure the completeness, consistency and unified quality of ontology model. A failure of the semantic service-based software may result from many factors such as misused data, unsuccessful service binding, and unexpected usage scenarios. This work analyzes the two factors of risk estimation: failure probability and importance, from three aspects: ontology data, service and composite service. With this approach, test cases are associated to semantic features, and are scheduled based on the risks of their target features. Risk assessment is used to control the process of Web Services progressive group testing, including test case ranking, test case selection and service ruling out. This paper discusses the control architecture and adaptive measurement mechanism for adaptive group testing. As a statistical testing technique, the proposed approach aims to detect, as early as possible, the problems with highest impact on the users.
41

Bhowmick, Rupsa, Jill C. Trepanier, and Alex M. Haberlie. "Classification Analysis of Southwest Pacific Tropical Cyclone Intensity Changes Prior to Landfall." Atmosphere 14, no. 2 (January 28, 2023): 253. http://dx.doi.org/10.3390/atmos14020253.

Abstract:
This study evaluates the ability of a random forest classifier to identify tropical cyclone (TC) intensification or weakening prior to landfall over the western region of the Southwest Pacific Ocean (SWPO) basin. For both Australia mainland and SWPO island cases, when a TC first crosses land after spending ≥24 h over the ocean, the closest hour prior to the intersection is considered as the landfall hour. If the maximum wind speed (Vmax) at the landfall hour increased or remained the same from the 24-h mark prior to landfall, the TC is labeled as intensifying and if the Vmax at the landfall hour decreases, the TC is labeled as weakening. Geophysical and aerosol variables closest to the 24 h before landfall hour were collected for each sample. The random forest model with leave-one-out cross validation and the random oversampling example technique was identified as the best-performing classifier for both mainland and island cases. The model identified longitude, initial intensity, and sea skin temperature as the most important variables for the mainland and island landfall classification decisions. Incorrectly classified cases from the test data were analyzed by sorting the cases by their initial intensity hour, landfall hour, monthly distribution, and 24-h intensity changes. TC intensity changes near land strongly impact coastal preparations such as wind damage and flood damage mitigations; hence, this study will contribute to improve identifying and prioritizing prediction of important variables contributing to TC intensity change before landfall.
42

Rizwan, Uppal, Uppal Muhammad Saad, Khan Aftab Ahmad, and Saeed Umar. "SARS-CoV-2 Omicron and centaurus variants induced lymphocytopenia: A multicenter clinical investigation on 118,561 cases across Pakistan during 2021-2022." International Journal of Clinical Virology 6, no. 2 (September 16, 2022): 034–37. http://dx.doi.org/10.29328/journal.ijcv.1001047.

Abstract:
The SARS-CoV-2 pandemic is still ongoing. Previously, several studies have been conducted to investigate laboratory markers as a tool for severity assessment during COVID-19 infections. Biological markers such as Platelet count, D-dimer and IL-6, Lymphocytopenia and others have been used for assessment of severity in COVID-19 disease patients (infected by SARS-CoV-2 Alpha, Beta, Gamma, Delta, Epsilon, and other variants). We observed a significant drop in lymphocyte count among suspected SARS-CoV-2 clinical patients with symptoms of fever, running nose, breathing discomfort, cough, and others during Omicron and Centaurus variants spread in Pakistan. A multicenter, cross-sectional study was conducted from Jan 2021 to Aug 2022, on 118,561 subjects to evaluate hematological abnormalities among suspected patients. Of note, significantly decreased lymphocyte levels (lymphocytopenia) were observed among 43.05% of infected patients. Also, the levels of NA (39.03%), HGB (28.27%), MCV (22.62%), PLT (8.17%), and ALB (4.30%) were also reduced among infected patients. This suggests that lymphopenia can be used as an alternative, cost-effective, early diagnostic biomarker for clinical COVID-19 patients, even before the diagnosis via real-time PCR. In resource-limited countries, the current study is critical for policy-making strategic organizations for prioritizing lymphocytopenia-based screening (as an alternative, cost-effective diagnostic test) in clinical COVID-19 patients, before real-time PCR-based diagnosis.
43

Garg, Kunal, Sara Campolonghi, Armin Schwarzbach, Maria Luisa Garcia Alonso, Fausto M. Villavicencio-Aguilar, Liria M. Fajardo-Yamamoto, and Leona Gilbert. "SARSPLEX: Multiplex Serological ELISA with a Holistic Approach." Viruses 14, no. 12 (November 22, 2022): 2593. http://dx.doi.org/10.3390/v14122593.

Abstract:
Currently, there are over 602 million severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) cases and 6.4 million COVID-19 disease-related deaths worldwide. With ambitious vaccine strategies, reliable and accurate serological testing is needed to monitor the dynamics of the novel coronavirus pandemic and community immunity. We set out to improve serological testing of the immune response against SARS-CoV-2. We hypothesize that by multiplexing the serological diagnostic test kit (SARSPLEX) and screening for three antibodies, an even more robust diagnostic can be developed. A total of 293 sera were analyzed for IgM, IgG, or IgA immune reactions to the subunit 1 spike glycoprotein and the nucleocapsid protein in a standardized ELISA platform. Testing IgM, IgG, and IgA demonstrated high positive and negative agreements compared to RT-PCR and serology reference tests. Comparison with the pre-2019-CoV (n = 102) samples highlighted the specificity of this test kit and indicated that no unspecific binding, even with the summer flu patients (n = 44), was detected. In addition, SARSPLEX demonstrated to be a valuable occupational surveillance tool used in a functional medicine facility. With increased and broader testing, SARSPLEX will be a valuable tool in monitoring immunity and aid in prioritizing access to the SARS-CoV-2 vaccine for high-risk patients.
44

Er, Zafer Cengiz, Ferit Çiçekçioğlu, and Kivanc Atilgan. "Evaluation of Neurocognitive Abilities in Patients Undergoing Carotid Endarterectomy Surgery." Heart Surgery Forum 24, no. 1 (February 16, 2021): E158–E164. http://dx.doi.org/10.1532/hsf.3371.

Abstract:
Objective: To evaluate the differences in neurocognitive abilities between the preoperative and postoperative periods following carotid endarterectomy (CEA), due to carotid artery stenosis, and to evaluate the effectiveness of CEA on neurocognitive abilities in the future. Material and methods: Thirty-eight cases of CEA surgery at Bozok University Faculty of Medicine Research Hospital between January 2015 and June 2020 were examined. Neurocognitive tests were performed on carotid endarterectomy patients one day before the operation and on the 2nd, 4th, and 30th postoperative days. The effect of CEA on cognitive results has been investigated. Results: Of the patients, eight were female (21.1%), 30 were male (78.9%), and the mean age was 66 ± 4.09. Thirty-two (84.21%) of the patients were operated on under general anesthesia and six (15.78%) under regional anesthesia. A shunt was used in 19 patients. Right carotid endarterectomy was performed in 20 cases and left carotid endarterectomy in 18 cases. We used the primary closure technique in two of 38 cases and patches on 36 of them. We used Dacron in 21 cases, PTFE in 12 cases, and saphenous vein as a patch in three cases. In the WMS digit span and recall scores, the postoperative period fell on the 2nd day, and then on the 4th and 30th day after the operation, there was a low level of increase over time. Compared with the preoperative period, the learning score was found to be the lowest on the 2nd day, lower on the 4th day compared with the preoperative period and improved compared with the preoperative period on the 30th day. There was no decrease in the verbal fluency test score results after the operation, on the contrary, it was observed minimally. The test score results cumulatively decreased in the early postoperative periods compared with the preoperative period and increased on the 30th day compared with the preoperative period. Conclusion: The purpose of CEA in the past was the prevention of ischemic stroke and cerebrovascular disease (CVD) rather than neurocognitive recovery. Factors affecting neurocognition in CEA are multifactorial. Preservation and improvement of neurocognition are more important now than at any other period in history. By prioritizing cognitive abilities in the treatment of carotid stenosis, individualization of the treatment will help maximize the increase in cognitive abilities by providing optimum benefit to the patient of each factor.
45

Ning, Xia, Ziwei Fan, Evan Burgun, Zhiyun Ren, and Titus Schleyer. "Improving information retrieval from electronic health records using dynamic and multi-collaborative filtering." PLOS ONE 16, no. 8 (August 5, 2021): e0255467. http://dx.doi.org/10.1371/journal.pone.0255467.

Abstract:
Due to the rapid growth of information available about individual patients, most physicians suffer from information overload and inefficiencies when they review patient information in health information technology systems. In this paper, we present a novel hybrid dynamic and multi-collaborative filtering method to improve information retrieval from electronic health records. This method recommends relevant information from electronic health records to physicians during patient visits. It models information search dynamics using a Markov model. It also leverages the key idea of collaborative filtering, originating from Recommender Systems, for prioritizing information based on various similarities among physicians, patients and information items. We tested this new method using electronic health record data from the Indiana Network for Patient Care, a large, inter-organizational clinical data repository maintained by the Indiana Health Information Exchange. Our experimental results demonstrated that, for top-5 recommendations, our method was able to correctly predict the information in which physicians were interested in 46.7% of all test cases. For top-1 recommendations, the corresponding figure was 24.7%. In addition, the new method was 22.3% better than the conventional Markov model for top-1 recommendations.
46

Tricarico, Davide, Marco Calandri, Matteo Barba, Clara Piatti, Carlotta Geninatti, Domenico Basile, Marco Gatti, Massimiliano Melis, and Andrea Veltri. "Convolutional Neural Network-Based Automatic Analysis of Chest Radiographs for the Detection of COVID-19 Pneumonia: A Prioritizing Tool in the Emergency Department, Phase I Study and Preliminary “Real Life” Results." Diagnostics 12, no. 3 (February 23, 2022): 570. http://dx.doi.org/10.3390/diagnostics12030570.

Abstract:
The aim of our study is the development of an automatic tool for the prioritization of COVID-19 diagnostic workflow in the emergency department by analyzing chest X-rays (CXRs). The Convolutional Neural Network (CNN)-based method we propose has been tested retrospectively on a single-center set of 542 CXRs evaluated by experienced radiologists. The SARS-CoV-2 positive dataset (n = 234) consists of CXRs collected between March and April 2020, with the COVID-19 infection being confirmed by an RT-PCR test within 24 h. The SARS-CoV-2 negative dataset (n = 308) includes CXRs from 2019, therefore prior to the pandemic. For each image, the CNN computes COVID-19 risk indicators, identifying COVID-19 cases and prioritizing the urgent ones. After installing the software into the hospital RIS, a preliminary comparison between local daily COVID-19 cases and predicted risk indicators for 2918 CXRs in the same period was performed. Significant improvements were obtained for both prioritization and identification using the proposed method. Mean Average Precision (MAP) increased (p < 1.21 × 10^−21) from 43.79% with random sorting to 71.75% with our method. CNN sensitivity was 78.23%, higher than radiologists’ 61.1%; specificity was 64.20%. In the real-life setting, this method had a correlation of 0.873. The proposed CNN-based system effectively prioritizes CXRs according to COVID-19 risk in an experimental setting; preliminary real-life results revealed high concordance with local pandemic incidence.
47

Won, Kimberly Y., Katherine Gass, Marco Biamonte, Daniel Argaw Dagne, Camilla Ducker, Christopher Hanna, Achim Hoerauf, et al. "Diagnostics to support elimination of lymphatic filariasis—Development of two target product profiles." PLOS Neglected Tropical Diseases 15, no. 11 (November 15, 2021): e0009968. http://dx.doi.org/10.1371/journal.pntd.0009968.

Abstract:
As lymphatic filariasis (LF) programs move closer to established targets for validation elimination of LF as a public health problem, diagnostic tools capable of supporting the needs of the programs are critical for success. Known limitations of existing diagnostic tools make it challenging to have confidence that program endpoints have been achieved. In 2019, the World Health Organization (WHO) established a Diagnostic Technical Advisory Group (DTAG) for Neglected Tropical Diseases tasked with prioritizing diagnostic needs including defining use-cases and target product profiles (TPPs) for needed tools. Subsequently, disease-specific DTAG subgroups, including one focused on LF, were established to develop TPPs and use-case analyses to be used by product developers. Here, we describe the development of two priority TPPs for LF diagnostics needed for making decisions for stopping mass drug administration (MDA) of a triple drug regimen and surveillance. Utilizing the WHO core TPP development process as the framework, the LF subgroup convened to discuss and determine attributes required for each use case. TPPs considered the following parameters: product use, design, performance, product configuration and cost, and access and equity. Version 1.0 TPPs for two use cases were published by WHO on 12 March 2021 within the WHO Global Observatory on Health Research and Development. A common TPP characteristic that emerged in both use cases was the need to identify new biomarkers that would allow for greater precision in program delivery. As LF diagnostic tests are rarely used for individual clinical diagnosis, it became apparent that reliance on population-based surveys for decision making requires consideration of test performance in the context of such surveys. In low prevalence settings, the number of false positive test results may lead to unnecessary continuation or resumption of MDA, thus wasting valuable resources and time. Therefore, highly specific diagnostic tools are paramount when used to measure low thresholds. The TPP process brought to the forefront the importance of linking use case, program platform and diagnostic performance characteristics when defining required criteria for diagnostic tools.
48

Yang, He S., Yu Hou, Hao Zhang, Amy Chadburn, Lars F. Westblade, Richard Fedeli, Peter A. D. Steel, et al. "Machine Learning Highlights Downtrending of COVID-19 Patients with a Distinct Laboratory Profile." Health Data Science 2021 (June 16, 2021): 1–9. http://dx.doi.org/10.34133/2021/7574903.

Abstract:
Background. New York City (NYC) experienced an initial surge and gradual decline in the number of SARS-CoV-2-confirmed cases in 2020. A change in the pattern of laboratory test results in COVID-19 patients over this time has not been reported or correlated with patient outcome. Methods. We performed a retrospective study of routine laboratory and SARS-CoV-2 RT-PCR test results from 5,785 patients evaluated in a NYC hospital emergency department from March to June employing machine learning analysis. Results. A COVID-19 high-risk laboratory test result profile (COVID19-HRP), consisting of 21 routine blood tests, was identified to characterize the SARS-CoV-2 patients. Approximately half of the SARS-CoV-2 positive patients had the distinct COVID19-HRP that separated them from SARS-CoV-2 negative patients. SARS-CoV-2 patients with the COVID19-HRP had higher SARS-CoV-2 viral loads, determined by cycle threshold values from the RT-PCR, and poorer clinical outcome compared to other positive patients without the COVID19-HRP. Furthermore, the percentage of SARS-CoV-2 patients with the COVID19-HRP has significantly decreased from March/April to May/June. Notably, viral load in the SARS-CoV-2 patients declined, and their laboratory profile became less distinguishable from SARS-CoV-2 negative patients in the later phase. Conclusions. Our longitudinal analysis illustrates the temporal change of laboratory test result profile in SARS-CoV-2 patients and the COVID-19 evolvement in a US epicenter. This analysis could become an important tool in COVID-19 population disease severity tracking and prediction. In addition, this analysis may play an important role in prioritizing high-risk patients, assisting in patient triaging and optimizing the usage of resources.
49

Sellmeier, Anna Catharina, Andreas Elsner, Tim Niedergassel, Johannes Schmitz, Sebastian Rehberg, Claudia Hornberg, Thomas Vordemvenne, and Dirk Wähnert. "COVID-19 after the first wave of the pandemic among employees from a German university hospital: prevalence and questionnaire data." Journal of Medicine and Life 15, no. 9 (September 2022): 1119–28. http://dx.doi.org/10.25122/jml-2022-0126.

Abstract:
The SARS-CoV-2 pandemic has changed lives around the world. In particular, healthcare workers faced significant challenges as a result of the pandemic. This study investigates the seroprevalence of SARS-CoV-2 in March–April 2020 in Germany among healthcare workers and relates it to questionnaire data. In June 2020, all employees of the reporting hospital were offered a free SARS-CoV-2 antibody test. The first 2,550 test results were sent along with study documents. The response rate was 15.1%. The COVID-19 PCR test prevalence amongst health care workers in this study was 1.04% (95% CI 0.41–2.65%), higher by a factor of 5 than in the general population (p=0.01). The ratio of seroprevalence to PCR prevalence was 1.5. COVID-19-associated symptoms were also prevalent in the non-COVID-19-positive population. Only two symptoms showed statistically significant odds ratios, loss of smell and loss of taste. Health care workers largely supported non-pharmaceutical interventions during the initial lockdown (93%). Individual behavior correlated significantly with attitudes toward policy interventions and perceived individual risk factors. Our data suggest that healthcare workers may be at higher risk of infection. Therefore, a discussion about prioritizing vaccination makes sense. They also support offering increased SARS-CoV-2 testing to hospital workers. It is concluded that easier access to SARS-CoV-2 testing reduces the number of unreported cases. Furthermore, individual attitudes toward rules and regulations on COVID-19 critically influence compliance. Thus, one goal of public policy should be to maintain high levels of support for non-pharmaceutical interventions to keep actual compliance high.
50

Jang, Kyung Mi. "Monogenic diabetes: recent updates on diagnosis and precision treatment: A narrative review." Precision and Future Medicine 6, no. 4 (December 31, 2022): 209–17. http://dx.doi.org/10.23838/pfm.2022.00121.

Abstract:
Monogenic diabetes is commonly caused by single-gene mutations. This disease ranges from 1% to 5% in all cases of diabetes and is less affected by behavior and environment. Neonatal diabetes mellitus (NDM) and maturity-onset diabetes of the young (MODY) account for a major proportion of monogenic diabetes, while syndromic diabetes constitutes a smaller proportion. Diagnosis of monogenic diabetes has improved from being based on clinical phenotypes to molecular genetics, with significant advancement of genome sequencing skills. Precise medication for monogenic diabetes is based on genetic testing; therefore, an accurate diagnosis is essential. Due to the basic clinical criteria (diagnosed < 6 months of age), genetic testing and precision treatment for NDM are fast and uncomplicated. The MODY probability calculator was developed; however, it remains challenging to distinguish MODY from type 1 and 2 diabetes due to the lack of a single diagnostic criteria and genetic testing. Additionally, the high cost and complicated interpretation of these genetic test results add to these challenges. This review will discuss the distinct etiology and subgroups that contribute to predicting and treating clinical phenotypes associated with monogenic diabetes. Furthermore, we will review the recent Korean studies and suggest methods of prioritizing patient screening for genetic testing.