A selection of scholarly literature on the topic "SUT (SOFTWARE UNDER TESTING)"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "SUT (SOFTWARE UNDER TESTING)".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these details are available in the work's metadata.

Journal articles on the topic "SUT (SOFTWARE UNDER TESTING)"

1

Rahmani, Ani, Joe Lian Min, and S. Suprihanto. "SOFTWARE UNDER TEST DALAM PENELITIAN SOFTWARE TESTING: SEBUAH REVIEW." JTT (Jurnal Teknologi Terapan) 7, no. 2 (October 22, 2021): 181. http://dx.doi.org/10.31884/jtt.v7i2.362.

Abstract:
The Software under Test (SUT) is an essential aspect of software testing research. Preparing an SUT is not simple: it requires accuracy and completeness, and it affects the quality of the research conducted. Currently, there are several ways to obtain an SUT for software testing research: building one's own SUT, building an SUT from open-source software, and reusing an SUT from a repository. This article discusses the results of SUT identification across many software testing studies. The research is conducted as a systematic literature review (SLR) using the Kitchenham protocol. The review covers 86 articles published in 2017-2020, selected in two stages: inclusion and exclusion criteria followed by a quality assessment. The results show that the use of open source is clearly dominant. Some researchers use open source as the basis for developing an SUT, while others use an SUT from a repository that provides ready-to-use SUTs. In this context, SUTs from the Software-artifact Infrastructure Repository (SIR) and Defects4J are the most common choices among researchers.
2

Mishra, Deepti Bala, Biswaranjan Acharya, Dharashree Rath, Vassilis C. Gerogiannis, and Andreas Kanavos. "A Novel Real Coded Genetic Algorithm for Software Mutation Testing." Symmetry 14, no. 8 (July 26, 2022): 1525. http://dx.doi.org/10.3390/sym14081525.

Abstract:
Information Technology has rapidly developed in recent years and software systems can play a critical role in the symmetry of the technology. Regarding the field of software testing, white-box unit-level testing constitutes the backbone of all other testing techniques, as testing can be entirely implemented by considering the source code of each System Under Test (SUT). In unit-level white-box testing, mutants can be used; these mutants are artificially generated faults seeded in each SUT that behave similarly to realistic ones. Executing test cases against mutants results in the adequacy (mutation) score of each test case. Efficient Genetic Algorithm (GA)-based methods have been proposed to address different problems in white-box unit testing and, in particular, issues of mutation testing techniques. In this research paper, a new approach is proposed that integrates the path coverage-based testing method with the novel idea of tracing a Fault Detection Matrix (FDM) to achieve maximum mutation coverage. The proposed real-coded GA for mutation testing is designed to achieve the highest Mutation Score, and it is thus named RGA-MS. The approach is implemented in two phases: path coverage-based test data are initially generated and stored in an optimized test suite. In the next phase, the test suite is executed to kill the mutants present in the SUT. The proposed method aims to achieve the minimum test dataset while having the highest Mutation Score, by removing duplicate test data covering the same mutants. The proposed approach is implemented on the same SUTs as those used for path testing. We proved that the RGA-MS approach can cover the maximum number of mutants with a minimum number of test cases. Furthermore, the proposed method can generate a maximum path coverage-based test suite with minimum test data generation compared to other algorithms. In addition, all mutants in the SUT can be covered by fewer test data with no duplicates. Ultimately, the generated optimal test suite is trained to achieve the highest Mutation Score. The GA is used to find the maximum mutation coverage as well as to delete the redundant test cases.
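As a reading aid, the fault-detection (kill) matrix and mutation score mentioned in this abstract can be illustrated with a small sketch. The Python fragment below is not the authors' RGA-MS implementation; the function names, the greedy reduction rule, and the example data are assumptions introduced purely for illustration.

```python
from typing import Dict, Set

def mutation_score(kill_matrix: Dict[str, Set[str]], all_mutants: Set[str]) -> float:
    """kill_matrix maps a test-case id to the set of mutant ids it kills."""
    killed = set().union(*kill_matrix.values()) if kill_matrix else set()
    return len(killed & all_mutants) / len(all_mutants) if all_mutants else 0.0

def reduce_suite(kill_matrix: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
    """Greedily drop test data whose killed mutants are already covered."""
    reduced, covered = {}, set()
    for test, mutants in sorted(kill_matrix.items(), key=lambda item: -len(item[1])):
        newly_killed = mutants - covered
        if newly_killed:
            reduced[test] = mutants
            covered |= newly_killed
    return reduced

if __name__ == "__main__":
    fdm = {"t1": {"m1", "m2"}, "t2": {"m2"}, "t3": {"m3"}}  # a tiny fault-detection matrix
    print(mutation_score(fdm, {"m1", "m2", "m3", "m4"}))    # 0.75
    print(sorted(reduce_suite(fdm)))                        # ['t1', 't3']
```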
3

Kusharki, Muhammad Bello, Sanjay Misra, Bilkisu Muhammad-Bello, Ibrahim Anka Salihu, and Bharti Suri. "Automatic Classification of Equivalent Mutants in Mutation Testing of Android Applications." Symmetry 14, no. 4 (April 14, 2022): 820. http://dx.doi.org/10.3390/sym14040820.

Abstract:
Software and symmetric testing methodologies are primarily used in detecting software defects, but these testing methodologies need to be optimized to mitigate the wasting of resources. As mobile applications are becoming more prevalent in recent times, the need to have mobile applications that satisfy software quality through testing cannot be overemphasized. Testing suites and software quality assurance techniques have also become prevalent, which underscores the need to evaluate the efficacy of these tools in the testing of the applications. Mutation testing is one such technique, which is the process of injecting small changes into the software under test (SUT), thereby creating mutants. These mutants are then tested using mutation testing techniques alongside the SUT to determine the effectiveness of test suites through mutation scoring. Although mutation testing is effective, the cost of implementing it, due to the problem of equivalent mutants, is very high. Many research works have proposed varying solutions to this problem, but none used a standardized dataset. In this research work, we employed a standard mutant dataset tool called MutantBench to generate our data. Subsequently, an Abstract Syntax Tree (AST) was used in conjunction with a tree-based convolutional neural network (TBCNN) as our deep learning model to automate the classification of the equivalent mutants and reduce the cost of mutation testing in software testing of Android applications. The results show that the proposed model produces a good accuracy rate of 94%, as well as other performance metrics such as recall (96%), precision (89%), F1-score (92%), and Matthews correlation coefficient (88%), with fewer False Negatives and False Positives during testing, which is significant as it implies a decrease in the risk of misclassification.
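The evaluation metrics quoted above (accuracy, precision, recall, F1-score, MCC) all follow from a binary confusion matrix. The sketch below only shows how such figures are computed in general; the counts in the example are invented and are not the paper's data.

```python
import math

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from a confusion matrix."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    mcc_denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / mcc_denominator if mcc_denominator else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "mcc": mcc}

if __name__ == "__main__":
    # Invented counts, chosen only to exercise the formulas.
    print(classification_metrics(tp=48, fp=6, tn=44, fn=2))
```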
4

V. Chandra Prakash, Dr, Subhash Tatale, Vrushali Kondhalkar, and Laxmi Bewoor. "A Critical Review on Automated Test Case Generation for Conducting Combinatorial Testing Using Particle Swarm Optimization." International Journal of Engineering & Technology 7, no. 3.8 (July 7, 2018): 22. http://dx.doi.org/10.14419/ijet.v7i3.8.15212.

Abstract:
In the software development life cycle, testing plays a significant role in verifying the requirement specification, analysis, design, and coding, and in estimating the reliability of a software system. A test manager can write a set of test cases manually for smaller software systems. However, for an extensive software system the test suite is normally large and prone to errors such as omission of important test cases, duplication of some test cases, and contradicting test cases. When test cases are generated automatically by a tool in an intelligent way, such errors can be eliminated. In addition, it is even possible to reduce the size of the test suite and thereby decrease the cost and time of software testing. Reducing test suite size is a challenging job. When the inputs of the Software under Test (SUT) interact, combinatorial testing is highly essential to raise reliability from 72% to 91% or even higher. A meta-heuristic algorithm such as Particle Swarm Optimization (PSO) can solve the optimization problem of automated combinatorial test case generation, and many authors have contributed to this field using PSO algorithms. We have reviewed some important research papers on automated test case generation for combinatorial testing using PSO. This paper provides a critical review of the use of PSO and its variants for solving the classical optimization problem of automatic test case generation for combinatorial testing.
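To make the PSO idea concrete, here is a minimal, illustrative sketch of PSO-driven pairwise test case generation. It does not reproduce any specific algorithm from the reviewed papers; the parameter model, fitness function, and PSO constants are assumptions chosen for demonstration.

```python
import itertools
import random

DOMAINS = [3, 3, 2, 2]  # assumed sizes of four SUT input parameters

def pairs(case):
    """Parameter-value pairs covered by one test case."""
    return {((i, case[i]), (j, case[j]))
            for i, j in itertools.combinations(range(len(case)), 2)}

def decode(position):
    """Map a continuous particle position to a discrete test case."""
    return tuple(min(DOMAINS[k] - 1, max(0, round(x))) for k, x in enumerate(position))

def pso_next_case(uncovered, swarm=20, iters=40, w=0.7, c1=1.4, c2=1.4):
    """One PSO run: find a test case covering as many uncovered pairs as possible."""
    dim = len(DOMAINS)
    fitness = lambda p: len(pairs(decode(p)) & uncovered)
    pos = [[random.uniform(0, d - 1) for d in DOMAINS] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [list(p) for p in pos]
    gbest = list(max(pos, key=fitness))
    for _ in range(iters):
        for i in range(swarm):
            for k in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][k] = (w * vel[i][k] + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = list(pos[i])
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = list(pbest[i])
    return decode(gbest)

if __name__ == "__main__":
    all_cases = itertools.product(*[range(d) for d in DOMAINS])
    uncovered = set().union(*(pairs(c) for c in all_cases))
    suite = []
    for _ in range(50):                        # attempt budget for this toy example
        if not uncovered:
            break
        case = pso_next_case(uncovered)
        newly = pairs(case) & uncovered
        if newly:
            suite.append(case)
            uncovered -= newly
    print(f"{len(suite)} test cases, {len(uncovered)} pairs left uncovered")
```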
5

Safdar, Safdar Aqeel, Tao Yue, and Shaukat Ali. "Recommending Faulty Configurations for Interacting Systems Under Test Using Multi-objective Search." ACM Transactions on Software Engineering and Methodology 30, no. 4 (July 2021): 1–36. http://dx.doi.org/10.1145/3464939.

Abstract:
Modern systems, such as cyber-physical systems, often consist of multiple products within/across product lines communicating with each other through information networks. Consequently, their runtime behaviors are influenced by product configurations and networks. Such systems play a vital role in our daily life; thus, ensuring their correctness by thorough testing becomes essential. However, testing these systems is particularly challenging due to a large number of possible configurations and limited available resources. Therefore, it is important and practically useful to test these systems with specific configurations under which products will most likely fail to communicate with each other. Motivated by this, we present a search-based configuration recommendation (SBCR) approach to recommend faulty configurations for the system under test (SUT) based on cross-product line (CPL) rules. CPL rules are soft constraints, constraining product configurations while indicating the most probable system states with a certain degree of confidence. In SBCR, we defined four search objectives based on CPL rules and combined them with six commonly applied search algorithms. To evaluate SBCR (i.e., SBCR-NSGA-II, SBCR-IBEA, SBCR-MoCell, SBCR-SPEA2, SBCR-PAES, and SBCR-SMPSO), we performed two case studies (Cisco and Jitsi) and conducted difference analyses. Results show that for both of the case studies, SBCR significantly outperformed random search-based configuration recommendation (RBCR) for 86% of the total comparisons based on six quality indicators, and 100% of the total comparisons based on the percentage of faulty configurations (PFC). Among the six variants of SBCR, SBCR-SPEA2 outperformed the others in 85% of the total comparisons based on six quality indicators and 100% of the total comparisons based on PFC.
6

Novella, Luigi, Manuela Tufo, and Giovanni Fiengo. "Automatic Test Set Generation for Event-Driven Systems in the Absence of Specifications Combining Testing with Model Inference." Information Technology And Control 48, no. 2 (June 25, 2019): 316–34. http://dx.doi.org/10.5755/j01.itc.48.2.21725.

Abstract:
The growing dependency of human activities on software technologies is leading to the need for designing more and more accurate testing techniques to ensure the quality and reliability of software components. A recent literature review of software testing methodologies reveals that several new approaches, which differ in the way test inputs are generated to efficiently explore systems behaviour, have been proposed. This paper is concerned with the challenge of automatically generating test input sets for Event-Driven Systems (EDS) for which neither source code nor specifications are available, therefore we propose an innovative fully automatic testing with model learning technique. It basically involves active learning to automatically infer a behavioural model of the System Under Test (SUT) using tests as queries, generates further tests based on the learned model to systematically explore unseen parts of the subject system, and makes use of passive learning to refine the current model hypothesis as soon as an inconsistency is found with the observed behaviour. Our passive learning algorithm uses the basic steps of Evidence-Driven State Merging (EDSM) and introduces an effective heuristic for choosing the pair of states to merge to obtain the target machine. Finally, the effectiveness of the proposed testing technique is demonstrated within the context of event-based functional testing of Android Graphical User Interface (GUI) applications and compared with that of existing baseline approaches.
7

Ahmad, A., and D. Al-Abri. "Design of a Realistic Test Simulator For a Built-In Self Test Environment." Journal of Engineering Research [TJER] 7, no. 2 (December 1, 2010): 69. http://dx.doi.org/10.24200/tjer.vol7iss2pp69-79.

Abstract:
This paper presents a realistic test approach suitable for Design for Testability (DFT) and Built-In Self Test (BIST) environments. The approach culminates in a test simulator capable of meeting a required testing goal for the System Under Test (SUT). The simulator combines fault diagnostics with a fault grading procedure to derive the tests. The tool is developed on a common PC platform, requires no special software, and is therefore low-cost and economical. It is well suited to determining realistic test sequences for a targeted testing goal for any SUT. The tool incorporates a flexible Graphical User Interface (GUI) and can be operated without any special programming skill. It has been debugged and validated against the results of many benchmark circuits. Furthermore, the tool can be used for educational purposes in courses such as fault-tolerant computing, fault diagnosis, digital electronics, and safe, reliable, and testable digital logic design.
8

Rosero, Raúl H., Omar S. Gómez, and Glen Rodríguez. "15 Years of Software Regression Testing Techniques — A Survey." International Journal of Software Engineering and Knowledge Engineering 26, no. 05 (June 2016): 675–89. http://dx.doi.org/10.1142/s0218194016300013.

Abstract:
Software regression testing techniques verify previous functionality each time software modifications occur or new features are added. With the aim of gaining a better understanding of this subject, in this work we present a survey of software regression testing techniques applied in the last 15 years, taking into account their application domain, the kinds of metrics they use, their application strategies, and the phase of the software development process in which they are applied. From an initial pool of 460 papers, a set of 25 papers describing the use of 31 regression testing techniques was identified. The results of this survey suggest that, when applying a regression testing technique, metrics such as cost and fault detection efficiency are the most relevant. Most of the techniques were assessed with instrumented programs (experimental cases) in academic settings. Conversely, we observe only a small set of regression testing techniques applied in industrial settings, mainly under corrective and maintenance approaches. Finally, we observe a trend toward using some regression techniques under agile approaches.
9

Maspupah, Asri, and Akhmad Bakhrun. "PERBANDINGAN KEMAMPUAN REGRESSION TESTING TOOL PADA REGRESSION TEST SELECTION: STARTS DAN EKSTAZI." JTT (Jurnal Teknologi Terapan) 7, no. 1 (July 7, 2021): 59. http://dx.doi.org/10.31884/jtt.v7i1.319.

Abstract:
Regression testing is an essential activity in software development whenever requirements change. In practice, regression testing requires a lot of time, so an optimal strategy is needed. One approach that can be used to speed up execution time is Regression Test Selection (RTS). Practitioners and academics have started to develop tools to optimize the regression testing process. Among them, STARTS and Ekstazi are the most popular regression testing tools among academics for running test case selection algorithms. This article compares the capabilities of STARTS and Ekstazi using a feature parameter evaluation. Both tools were tested with the same input data in the form of a System Under Test (SUT) and test cases. The parameters used in the comparison are platform technology, test case selection, functionality, usability and performance efficiency, and the advantages and disadvantages of each tool. The results of the trial show the differences and similarities between the features of STARTS and Ekstazi, so that practitioners can choose the tool for implementing regression testing that best suits their needs. In addition, the experimental results show that Ekstazi is more precise in selecting important test cases and is more efficient than STARTS and than regression testing with retest-all.
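For readers unfamiliar with regression test selection, the core idea behind class- or file-level RTS tools such as STARTS and Ekstazi can be sketched in a few lines; the snippet below is a toy illustration over assumed data structures, not the tools' actual implementation.

```python
def select_tests(dependencies: dict, changed_files: set) -> list:
    """dependencies maps a test name to the set of files it used in the last run."""
    return [test for test, deps in dependencies.items() if deps & changed_files]

if __name__ == "__main__":
    deps = {
        "CartTest": {"Cart.java", "Item.java"},
        "UserTest": {"User.java"},
        "OrderTest": {"Order.java", "Cart.java"},
    }
    print(select_tests(deps, {"Cart.java"}))  # ['CartTest', 'OrderTest']
```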
10

HU, HAI, CHANG-HAI JIANG, and KAI-YUAN CAI. "AN IMPROVED APPROACH TO ADAPTIVE TESTING." International Journal of Software Engineering and Knowledge Engineering 19, no. 05 (August 2009): 679–705. http://dx.doi.org/10.1142/s0218194009004349.

Abstract:
Adaptive testing is the counterpart of adaptive control in software testing: the testing strategy is adjusted on-line using the data collected during testing, as our understanding of the software under test improves. Previous studies on adaptive testing used a simplified Controlled Markov Chain (CMC) model of software testing that relies on several unrealistic assumptions. In this paper we propose a new adaptive software testing approach in the context of an improved, more general CMC model that aims to eliminate such threats to validity. A set of more realistic basic assumptions about the software testing process is proposed, and several unrealistic assumptions are replaced by less unrealistic ones. A new adaptive testing strategy based on the general CMC model is developed and implemented. Mathematical simulations and experiments on real-life software demonstrate the effectiveness of the new strategy.
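The abstract does not reproduce the CMC formulation itself, but the general idea of adjusting the testing strategy on-line from collected data can be illustrated with a toy sketch. The selection rule, pseudo-counts, and failure rates below are assumptions for demonstration only and are not the paper's controlled-Markov-chain strategy.

```python
import random

def adaptive_testing(actions, run_test, budget=100, prior=1.0):
    """Pick each next test action from the failure data observed so far."""
    failures = {a: prior for a in actions}      # pseudo-counts of observed failures
    trials = {a: 2.0 * prior for a in actions}
    observed_failures = []
    for _ in range(budget):
        if random.random() < 0.1:               # occasional exploration
            action = random.choice(actions)
        else:                                   # otherwise exploit the current estimate
            action = max(actions, key=lambda a: failures[a] / trials[a])
        failed = run_test(action)               # execute one test against the SUT
        trials[action] += 1
        if failed:
            failures[action] += 1
            observed_failures.append(action)
    return observed_failures

if __name__ == "__main__":
    # Hypothetical per-action failure probabilities of the software under test.
    true_rates = {"parse": 0.02, "search": 0.10, "export": 0.01}
    run = lambda a: random.random() < true_rates[a]
    found = adaptive_testing(list(true_rates), run, budget=500)
    print(len(found), "failing executions observed")
```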

Dissertations on the topic "SUT (SOFTWARE UNDER TESTING)"

1

YADAV, TRILOCAAN. "MINIMIZING AND OPTIMIZING THE SOLUTION SPACE OF TEST DATA." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2020. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18828.

Abstract:
In the software development lifecycle (SDL), testing is the most stressful and exhausting activity and consumes a great deal of time; every aspect of the software is hard to test. Consequently, automatic test data generation methods have been proposed in recent years to reduce the time spent on software testing. However, the solution space of the automatically generated test data is very large, and it is not practical to check all of the generated test data because doing so is time consuming. In this paper we present the design of a framework, its implementation, and an exploration of the tool's capabilities to minimize the generated test data. Our notion of an optimal set of test cases is based on a mutation function specified by the user. The system was implemented in C++. We introduce a mutation function that calculates a mutant score from the value and execution path of each generated test case in order to minimize the solution space for the tester.
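A minimal sketch of the idea described above, under assumed names and a made-up scoring rule: a user-specified mutation function scores each generated test datum from its input value and covered path, and only the best-scoring datum per path is kept. The thesis system itself was written in C++; this Python fragment is only illustrative.

```python
from typing import Callable, Iterable, List, Tuple

TestDatum = Tuple[int, str]   # (input value, identifier of the execution path it covers)

def minimize(data: Iterable[TestDatum],
             mutant_score: Callable[[int, str], float]) -> List[TestDatum]:
    """Keep only the best-scoring test datum for each covered path."""
    best = {}                                  # path -> (score, value)
    for value, path in data:
        score = mutant_score(value, path)
        if path not in best or score > best[path][0]:
            best[path] = (score, value)
    return [(value, path) for path, (score, value) in best.items()]

if __name__ == "__main__":
    # Toy user-specified scoring rule: assume larger-magnitude inputs kill more mutants.
    score = lambda value, path: abs(value)
    generated = [(1, "p1"), (7, "p1"), (-3, "p2"), (2, "p2"), (5, "p3")]
    print(minimize(generated, score))          # one representative test datum per path
```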
2

Csallner, Christoph. "Combining over- and under-approximating program analyses for automatic software testing." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24764.

Abstract:
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Smaragdakis, Yannis; Committee Member: Dwyer, Matthew; Committee Member: Orso, Alessandro; Committee Member: Pande, Santosh; Committee Member: Rugaber, Spencer.
3

Bard, Robin, and Simon Banasik. "En prestanda- och funktionsanalys av Hypervisors för molnbaserade datacenter." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20491.

Abstract:
A growing trend of cloud-based services can be witnessed in todays information society. To implement cloud-based services a method called virtualization is used. This method reduces the need of physical computer systems in a datacenter and facilitates a sustainable environmental and economical development. Cloud-based services create societal benefits by allowing new operators to quickly launch business-dependent services. Virtualization is applied by a so-called Hypervisor whose task is to distribute cloud-based services. After evaluation of existing scientific studies, we have found that there exists a discernible difference in performance and functionality between different varieties of Hypervisors. We have chosen to perform a functional and performance analysis of Hypervisors from the manufacturers with the largest market share. These are Microsoft Hyper-V Core Server 2012, Vmware ESXi 5.1.0 and Citrix XenServer 6.1.0 Free edition. Our client, the Swedish armed forces, have expressed a great need of the research which we have conducted. The thesis consists of a theoretical base which describes techniques behind virtualization and its applicable fields. Implementation comprises of two main methods, a qualitative and a quantitative research. The basis of the quantitative investigation consists of a standard test system which has been defined by the limitations of each Hypervisor. The system was used for a series of performance tests, where data transfers were initiated and sampled by automated testing tools. The purpose of the testing tools was to simulate workloads which deliberately affected CPU and I/O to determine the performance differences between Hypervisors. The qualitative method comprised of an assessment of functionalities and limitations for each Hypervisor. By using empirical analysis of the quantitative measurements we were able to determine the cause of each Hypervisors performance. The results revealed that there was a correlation between Hypervisor performance and the specific data transfer it was exposed to. The Hypervisor which exhibited good performance results in all data transfers was ESXi. The findings in the qualitative research revealed that the Hypervisor which offered the most functionality and least amount of constraints was Hyper-V. The conclusion of the overall results uncovered that ESXi is most suitable for smaller datacenters which do not intend to expand their operations. However a larger datacenter which is in need of cloud service oriented functionalities and requires greater hardware resources should choose Hyper-V at implementation of cloud-based services.

Books on the topic "SUT (SOFTWARE UNDER TESTING)"

1

Satdarova, Faina. DIFFRACTION ANALYSIS OF DEFORMED METALS: Theory, Methods, Programs. xxu: Academus Publishing, 2019. http://dx.doi.org/10.31519/monography_1598.

Abstract:
A general analysis of the distribution of crystal orientations and dislocation density in a polycrystalline system is presented. The information recovered by adopting X-ray diffraction is new to the structural states of a polycrystal. Shear phase transformations in metals, at the macroscopic and microscopic levels, become a clear process. Visualization of the results is produced by a program included in the delivered package. Mathematical modelling, experimental design, optimal statistical estimation, and simulation of the system under study and of its evolution under loading serve as the instrumentation. Problem-oriented software, once installed, will help bring these advanced methods into research and study. The automation programs were tested at the National University of Science and Technology "MISIS" (Moscow, Russian Federation). This gives an advantage in theoretical and experimental research in the physics of metals.

Book chapters on the topic "SUT (SOFTWARE UNDER TESTING)"

1

Doležel, Michal. "Defining TestOps: Collaborative Behaviors and Technology-Driven Workflows Seen as Enablers of Effective Software Testing in DevOps." In Agile Processes in Software Engineering and Extreme Programming – Workshops, 253–61. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58858-8_26.

Abstract:
Abstract Context: DevOps is an increasingly popular approach to software development and software operations. Being understood as mutually integrated, both activities have been re-united under one single label. In contrast to traditional software development activities, DevOps promotes numerous fundamental changes, and the area of software testing is not an exception. Yet, the exact appearance of software testing within DevOps is poorly understood, so is the notion of TestOps. Objective: This paper explores TestOps as a concept rooted in industrial practice. Method: To provide a pluralist outline of practitioners’ views on What is TestOps, the YouTube platform was searched for digital content containing either “TestOps” or “DevTestOps” in the content title. Through a qualitative lens, the resulting set was systematically annotated and thematically analyzed in an inductive manner. Results: Referring to DevOps, practitioners use the notion of TestOps when characterizing a conceptual shift that occurs within the area of software testing. As a matter of fact, two dominant categories were found in the data: (i) TestOps as a new organizational philosophy; (ii) TestOps as an innovative software technique (i.e. process supported by technology). A set of high-level themes within each of these categories was identified and described. Conclusion: The study outlines an inconsistency in practitioner perspectives on the nature of TestOps. To decrease the identified conceptual ambiguity, the proposed model posits two complementary meanings of TestOps.
2

Beyer, Dirk, Po-Chun Chien, and Nian-Ze Lee. "Bridging Hardware and Software Analysis with Btor2C: A Word-Level-Circuit-to-C Translator." In Tools and Algorithms for the Construction and Analysis of Systems, 152–72. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30820-8_12.

Abstract:
Across the broad research field concerned with the analysis of computational systems, research endeavors are often categorized by the respective models under investigation. Algorithms and tools are usually developed for a specific model, hindering their applications to similar problems originating from other computational systems. A prominent example of such a situation is the area of formal verification and testing for hardware and software systems. The two research communities share common theoretical foundations and solving methods, including satisfiability, interpolation, and abstraction refinement. Nevertheless, it is often demanding for one community to benefit from the advancements of the other, as analyzers typically assume a particular input format. To bridge the gap between the hardware and software analysis, we propose Btor2C, a translator from word-level sequential circuits to C programs. We choose the Btor2 language as the input format for its simplicity and bit-precise semantics. It can be deemed as an intermediate representation tailored for analysis. Given a Btor2 circuit, Btor2C generates a behaviorally equivalent program in the language C, supported by many static program analyzers. We demonstrate the use cases of Btor2C by translating the benchmark set from the Hardware Model Checking Competitions into C programs and analyze them by tools from the Intl. Competitions on Software Verification and Testing. Our results show that software analyzers can complement hardware verifiers for enhanced quality assurance: For example, the software verifier VeriAbs with Btor2C as preprocessor found more bugs than the best hardware verifiers ABC and AVR in our experiment.
3

Schumi, Richard, Priska Lang, Bernhard K. Aichernig, Willibald Krenn, and Rupert Schlick. "Checking Response-Time Properties of Web-Service Applications Under Stochastic User Profiles." In Testing Software and Systems, 293–310. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67549-7_18.

4

Travkin, Oleg, Annika Mütze, and Heike Wehrheim. "SPIN as a Linearizability Checker under Weak Memory Models." In Hardware and Software: Verification and Testing, 311–26. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03077-7_21.

5

Camilli, Matteo, and Barbara Russo. "Model-Based Testing Under Parametric Variability of Uncertain Beliefs." In Software Engineering and Formal Methods, 175–92. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58768-0_10.

6

Chen, Yinong, and Jean Arlat. "Modeling Software Dependability Growth under Input Partition Testing." In Safe Comp 96, 136–45. London: Springer London, 1997. http://dx.doi.org/10.1007/978-1-4471-0937-2_12.

7

Rudy, Jarosław. "Algorithm-Aware Makespan Minimisation for Software Testing Under Uncertainty." In Advances in Intelligent Systems and Computing, 435–45. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-19501-4_43.

8

Li, Zhe, and Fei Xie. "Concolic Testing of Front-end JavaScript." In Fundamental Approaches to Software Engineering, 67–87. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_4.

Abstract:
JavaScript has become the most popular programming language for web front-end development. With such popularity, there is a great demand for thorough testing of client-side JavaScript web applications. In this paper, we present a novel approach to concolic testing of front-end JavaScript web applications. This approach leverages widely used JavaScript testing frameworks such as Jest and Puppeteer and conducts concolic execution on JavaScript functions in web applications for unit testing. The seamless integration of concolic testing with these testing frameworks allows injection of symbolic variables within the native execution context of a JavaScript web function and precise capture of concrete execution traces of the function under test. Such concise execution traces greatly improve the effectiveness and efficiency of the subsequent symbolic analysis for test generation. We have implemented our approach on Jest and Puppeteer. The application of our Jest implementation on Metamask, one of the most popular Crypto wallets, has uncovered 3 bugs and 1 test suite improvement, whose bug reports have all been accepted by Metamask developers on Github. We also applied our Puppeteer implementation to 21 Github projects and detected 4 bugs.
9

Stouky, Ali, Btissam Jaoujane, Rachid Daoudi, and Habiba Chaoui. "Improving Software Automation Testing Using Jenkins, and Machine Learning Under Big Data." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 87–96. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-98752-1_10.

10

Zhong, Ziyuan, Yuchi Tian, and Baishakhi Ray. "Understanding Local Robustness of Deep Neural Networks under Natural Variations." In Fundamental Approaches to Software Engineering, 313–37. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71500-7_16.

Abstract:
Deep Neural Networks (DNNs) are being deployed in a wide range of settings today, from safety-critical applications like autonomous driving to commercial applications involving image classification. However, recent research has shown that DNNs can be brittle to even slight variations of the input data. Therefore, rigorous testing of DNNs has gained widespread attention. While DNN robustness under norm-bound perturbations has received significant attention over the past few years, our knowledge is still limited when it comes to natural variants of the input images. These natural variants, e.g., a rotated or a rainy version of the original input, are especially concerning as they can occur naturally in the field without any active adversary and may lead to undesirable consequences. Thus, it is important to identify the inputs whose small variations may lead to erroneous DNN behaviors. The very few studies that looked at DNN robustness under natural variants, however, focus on estimating the overall robustness of DNNs across all the test data rather than localizing such error-producing points. This work aims to bridge this gap. To this end, we study the local per-input robustness properties of DNNs and leverage those properties to build a white-box (DeepRobust-W) and a black-box (DeepRobust-B) tool to automatically identify the non-robust points. Our evaluation of these methods on three DNN models spanning three widely used image classification datasets shows that they are effective in flagging points of poor robustness. In particular, DeepRobust-W and DeepRobust-B are able to achieve an F1 score of up to 91.4% and 99.1%, respectively. We further show that DeepRobust-W can be applied to a regression problem in a domain beyond image classification. Our evaluation on three self-driving car models demonstrates that DeepRobust-W is effective in identifying points of poor robustness, with an F1 score of up to 78.9%.

Conference papers on the topic "SUT (SOFTWARE UNDER TESTING)"

1

Lima, Yury Alencar, Elder de Macedo Rodrigues, Fabio Paulo Basso, and Rafael A. P. Oliveira. "Teasy: A domain-specific language to reduce time and facilitate the creation of tests in web applications." In Workshop em Modelagem e Simulação de Sistemas Intensivos em Software. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/mssis.2021.17258.

Abstract:
Software testing automation is one of the most challenging activities in Software Engineering. Model-Based Testing (MBT) is a feasible strategy to reduce the effort of automating testing activities. Through a model that specifies the behavior of the Software Under Testing (SUT), MBT approaches generate test cases and run them. However, some domains, such as web applications, require extra effort to apply MBT approaches. For this reason, in this study we propose and validate Teasy, a Domain-Specific Language (DSL) that makes MBT feasible for web applications. Through a proof of concept on testing a real-world web application, we observed that Teasy has the potential to evolve to effectively support software development environments. Using a real-world application and projects with manually seeded faults, Teasy testing scenarios detected 78.57% of the functional inconsistencies.
2

Abad, Pablo, Nazareno Aguirre, Valeria Bengolea, Daniel Ciolek, Marcelo F. Frias, Juan Galeotti, Tom Maibaum, Mariano Moscato, Nicolas Rosner, and Ignacio Vissani. "Improving Test Generation under Rich Contracts by Tight Bounds and Incremental SAT Solving." In 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation (ICST). IEEE, 2013. http://dx.doi.org/10.1109/icst.2013.46.

3

Simão, Adenilso da Silva, Auri Marcelo Rizzo Vincenzi, and José Carlos Maldonado. "mudelgen: A Tool for Processing Mutant Operator Descriptions." In Simpósio Brasileiro de Engenharia de Software. Sociedade Brasileira de Computação, 2002. http://dx.doi.org/10.5753/sbes.2002.23970.

Abstract:
Mutation Testing is a testing approach for assessing the adequacy of a set of test cases by analyzing their ability to distinguish the product under test from a set of alternative products, the so-called mutants. The mutants are generated from the product under test by applying a set of mutant operators, which systematically yield products with slight syntactical differences. Aiming at automating the generation of mutants, we have designed a language — named MuDeL — for describing mutant operators. In this paper, we describe the mudelgen system, which was developed to support the language MuDeL. mudelgen was developed using concepts that come from transformational and logical programming paradigms, as well as from context-free grammar and denotational semantics theories.
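MuDeL itself is a description language and is not reproduced here, but the effect of a mutant operator can be illustrated generically: the sketch below applies the classic arithmetic-operator-replacement operator to Python source, producing one mutant per replaced '+'. It assumes Python 3.9+ (for ast.unparse) and is unrelated to the mudelgen implementation.

```python
import ast

class PlusToMinus(ast.NodeTransformer):
    """Replace the i-th '+' (in visit order) with '-'."""
    def __init__(self, target_index):
        self.target_index = target_index
        self.seen = -1

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            self.seen += 1
            if self.seen == self.target_index:
                node.op = ast.Sub()
        return node

def mutants(source):
    """Yield one mutated version of the source per '+' occurrence."""
    tree = ast.parse(source)
    n_add = sum(isinstance(node.op, ast.Add)
                for node in ast.walk(tree) if isinstance(node, ast.BinOp))
    for i in range(n_add):
        mutated = PlusToMinus(i).visit(ast.parse(source))
        yield ast.unparse(ast.fix_missing_locations(mutated))

if __name__ == "__main__":
    for mutant in mutants("def f(a, b, c):\n    return a + b + c\n"):
        print(mutant)
        print("---")
```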
4

Emer, Maria Claudia F. P., and Silvia Regina Vergilio. "Selection and Evaluation of Test Data Sets Based on Genetic Programming." In Simpósio Brasileiro de Engenharia de Software. Sociedade Brasileira de Computação, 2002. http://dx.doi.org/10.5753/sbes.2002.23940.

Abstract:
A testing criterion is a predicate to be satisfied and generally addresses two important questions: 1) the selection of test cases capable of revealing as many faults as possible; and 2) the evaluation of a test set to decide when testing can end. Studies show that fault-based criteria, such as mutation testing, are very efficacious but very expensive in terms of the number of test cases. Mutation testing uses mutation operators to generate alternatives for the program P under test. The goal is to derive test cases that produce different behaviours in P and its alternatives. This approach usually does not allow testing the interaction between faults, since each alternative differs from P by a single modification. This work explores the use of Genetic Programming (GP) to derive alternatives for testing P and describes two GP-based test procedures for the selection and evaluation of test data. Experimental results show the applicability of the GP approach and allow comparison with mutation testing.
5

Simão, Adenilso da Silva, and José Carlos Maldonado. "MuDeL: A Language and a System for Describing and Generating Mutants." In Simpósio Brasileiro de Engenharia de Software. Sociedade Brasileira de Computação, 2001. http://dx.doi.org/10.5753/sbes.2001.23992.

Abstract:
Mutation Testing is an approach for assessing the quality of a test case suite by analyzing its ability to distinguish the product under test from a set of alternative products, the so-called mutants. The mutants are generated from the product under test by applying a set of mutant operators, which produce products with slight syntactical differences. The mutant operators are usually based on typical errors that occur during software development and can be related to a fault model. In this paper, we propose a language — named MuDeL — for describing mutant operators, aiming not only at automating the mutant generation, but also at providing precision and formality to the operator descriptions. The language was designed using concepts that come from transformational and logical programming paradigms, as well as from context-free grammar theory. The language is illustrated with some simple examples. We also describe the mudelgen system, developed to support this language.
6

Aiken, Bradford, and Keith W. Wait. "Improving Fidelity of Energy Management Software Testing Through Hierarchical Clustering of Train Consist Data." In 2020 Joint Rail Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/jrc2020-8113.

Abstract:
Abstract Energy management systems, such as New York Air Brake’s LEADER [1], are real-time control technologies that optimize train performance as Level 2 Autonomy systems under the SAE’s “Levels of Driving Automation” classification system [2], and are now commonly used by many railroads. Such systems require extensive testing due to varying requirements of speed and fuel efficiency, compatibility with the wide variation in consists actually marshalled in the field, as well as the potential for the systems to cause break-in-twos or other undesirable situations. Devising accurate test cases that translate well to real-world usage is a common obstacle in the software development process. Using empirical data gathered from sampling field observations and an unsupervised machine learning model, we have created a simple but effective software system capable of performing automated statistical analysis on train consists and recommending a small number of consists which best capture the variation observed on-track. The data produced by such a system is demonstrably useful in developing truly representative test cases for train control systems/energy management software. In this investigation, we first applied such an algorithm to a population of train consists from some arbitrary segment of North American track to identify the most representative sample. We then evaluated the performance of the LEADER driving strategy for the sample set of consists with one of two consists that had previously been used for ad-hoc development testing of the software. Our findings from these simulations indicate that the consists identified by the clustering algorithm display greater variation in LEADER-controlled performance across several features than the ad-hoc testing consists do. Such metrics are transit time, fuel consumption, speed limit adherence, and air brake usage. Application of the algorithm is therefore beneficial in that it allows for more efficient and more thorough testing and characterization of energy management software.
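The clustering step described above can be sketched generically: hierarchically cluster consist feature vectors and keep one representative (the medoid) per cluster. The features, cluster count, and random data below are assumptions for demonstration (numpy and scipy are assumed available); this is not the authors' system.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import cdist

def representative_consists(features: np.ndarray, n_clusters: int = 3) -> list:
    """features: one row per consist (e.g. length, trailing tonnage, loco count)."""
    labels = fcluster(linkage(features, method="ward"), t=n_clusters, criterion="maxclust")
    representatives = []
    for cluster in np.unique(labels):
        members = np.where(labels == cluster)[0]
        centroid = features[members].mean(axis=0, keepdims=True)
        # medoid: the member closest to its cluster centroid
        representatives.append(int(members[cdist(features[members], centroid).argmin()]))
    return representatives

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 30 made-up consists described by three features
    consists = rng.normal(size=(30, 3)) * [20.0, 500.0, 1.0] + [100.0, 8000.0, 3.0]
    print("representative consist indices:", representative_consists(consists))
```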
7

Milton, Dave C. "Considerations in Planning and Managing a Large Software Development Project." In 2014 10th International Pipeline Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/ipc2014-33252.

Abstract:
While the pipeline industry is no stranger to complex and expensive projects, the unique characteristics of a truly large software development project require a special set of considerations. Many companies are finding themselves undertaking such a project in order to manage growth, achieve efficiencies, adjust to a new business driver, or simply replace aging legacy systems. Without proper attention to topics such as vendor selection, off-shore resourcing, the role of the internal IT shop, testing, and training, these projects are almost sure to cost more, take longer, and underdeliver benefits compared to the original plan and justification.
8

Tahhan, Antonio, Cody Muchmore, Larinda Nichols, Alison Wells, Gregory Roberts, Emerald Ryan, Sneha Suresh, Bishwo Bhandari, and Chad Pope. "Development of Experimental and Computational Procedures for Nuclear Power Plant Component Testing Under Flooding Conditions." In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-67985.

Abstract:
Idaho State University (ISU), with support from Idaho National Laboratory, is actively engaged in enhancing nuclear power plant risk modeling. The ISU team is significantly increasing the understanding of non-containment, nuclear power plant component performance under flooding conditions. The work involves experimentation activities and development of mathematical models, using data from component flooding experiments. The research consists in developing experimentation procedures that comprised small scale component testing, followed by simple and then complex full scale component testing. The research is taking place in the Component Flooding Evaluation Laboratory (CFEL). Tests in CFEL will include water rise, spray, and wave impact experiments on passive and active components. Initial development work focused on small scale components, radios and simulated doors, that served as a low-risk and low-cost proof-of-concept options. Following these tests, full-scale component tests were performed in the Portal Evaluation Tank (PET). The PET is a semi-cylindrical 7500-1 capacity steel tank, with an opening to the environment of 2.4 m. × 2.4 m. The opening allows installation of doors, feedthroughs, pipes, or other components. The first set of experiments with the PET were conducted in 2016 using hollow doors subjected to a water rise scenario. Data collected during the door tests is being analyzed using Bayesian regression methods to determine the parameters of influence and inform future experiments. A practical method of simulating full scale wave impacts on components and structures is also being researched to further enhance CFEL capabilities. Early on, the team determined full scale wave impacts could not be simulated using traditional wave flumes or pools; therefore, closed conduit flow is under consideration. Computational fluid dynamics software is being used to simulate fluid velocities associated with tsunami waves of heights up to 6-m, and to design a wave impact simulation device capable of accurately recreating a near vertical wave section with variable height and fluid velocity. The component flooding simulation activities associated with this project involve use of smoothed particle dynamics codes. These particle-based simulation methods do not require a mesh to be applied to the fluid, which allows for more natural flows to be simulated. Finally, CFEL can be described as a pioneering element, comprised of several ongoing research and experimental projects, that are vital to the development of risk analysis methods for the nuclear industry.
9

Compere, Marc, Garrett Holden, Otto Legon, and Roberto Martinez Cruz. "MoVE: A Mobility Virtual Environment for Autonomous Vehicle Testing." In ASME 2019 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/imece2019-10936.

Abstract:
Autonomous vehicle researchers need a common framework in which to test autonomous vehicles and algorithms along a realism spectrum from simulation-only to real vehicles and real people. The community needs an open-source, publicly available framework, with source code, in which to develop, simulate, execute, and post-process multi-vehicle tests. This paper presents a Mobility Virtual Environment (MoVE) for testing autonomous system algorithms, vehicles, and their interactions with real and simulated vehicles and pedestrians. The result is a network-centric framework designed to represent multiple real and multiple virtual vehicles interacting and possibly communicating with each other in a common coordinate frame with a common timestamp. This paper presents a literature review of comparable autonomous vehicle software, presents MoVE concepts and architecture, and presents three experimental tests with multiple virtual and real vehicles, with real pedestrians. The first scenario is a traffic wave simulation using a real lead vehicle and 3 real follower vehicles. The second scenario is a medical evacuation scenario with 2 real pedestrians and 1 real vehicle. Real pedestrians are represented using live-GPS-followers streaming GPS position from mobile phones over the cellular network. Time-history and spatial plots of real and virtual vehicles are presented with vehicle-to-vehicle distance calculations indicating where and when potential collisions were detected and avoided. The third scenario highlights the avoid() behavior successfully avoiding other virtual vehicles and 1 real pedestrian in a small outdoor area. The MoVE set of concepts and interfaces is implemented as open-source software available for use and customization within the autonomous vehicle community. MoVE is freely available under the GPLv3 open-source license at gitlab.com/comperem/move.
10

Grelle, Austin L., Young S. Park, and Richard B. Vilim. "Development and Testing of Fault-Diagnosis Algorithms for Reactor Plant Systems." In 2016 24th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/icone24-61024.

Abstract:
Argonne National Laboratory is further developing fault diagnosis algorithms for use by the operator of a nuclear plant to aid in improved monitoring of overall plant condition and performance. The objective is better management of plant upsets through more timely, informed decisions on control actions with the ultimate goal of improved plant safety, production, and cost management. Integration of these algorithms with visual aids for operators is taking place through a collaboration under the concept of an operator advisory system. This is a software entity whose purpose is to manage and distill the enormous amount of information an operator must process to understand the plant state, particularly in off-normal situations, and how the state trajectory will unfold in time. The fault diagnosis algorithms were exhaustively tested using computer simulations of twenty different faults introduced into the chemical and volume control system (CVCS) of a pressurized water reactor (PWR). The algorithms are unique in that each new application to a facility requires providing only the piping and instrumentation diagram (PID) and no other plant-specific information; a subject-matter expert is not needed to install and maintain each instance of an application. The testing approach followed accepted procedures for verifying and validating software. It was shown that the code satisfies its functional requirement which is to accept sensor information, identify process variable trends based on this sensor information, and then to return an accurate diagnosis based on chains of rules related to these trends. The validation and verification exercise made use of GPASS, a one-dimensional systems code, for simulating CVCS operation. Plant components were failed and the code generated the resulting plant response. Parametric studies with respect to the severity of the fault, the richness of the plant sensor set, and the accuracy of sensors were performed as part of the validation exercise. The background and overview of the software will be presented to give an overview of the approach. Following, the verification and validation effort using the GPASS code for simulation of plant transients including a sensitivity study on important parameters will be presented. Finally, the planned future path will be highlighted.

Reports of organizations on the topic "SUT (SOFTWARE UNDER TESTING)"

1

Chen, Weixing. PR378-173601-Z01 Effect of Pressure Fluctuations on the Growth Rate of Near-Neutral pH SCC. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), July 2021. http://dx.doi.org/10.55274/r0012112.

Abstract:
This report summarizes the work completed in PRCI SCC-2-12A project: The Effect of Pressure Fluctuations on the Growth Rate of Near-Neutral pH SCC, which is Phase 3 of the work on the same subject of investigation. The following insights from the current phase of the PRCI SCC-2-12A project are thought to be the most important: - Near neutral pH crack initiation is pressure-fluctuation dependent. Severe pressure fluctuations accelerate the fracture and spallation of mill scale on the pipeline steel surfaces, making it harder to initiate SCC cracks from the bottom of pits that are developed at flawed mill scale sites. On the other hand, the presence of a primer layer before application of the protective coating preserves the mill scale on the pipe steel surface and promotes crack initiation. - The early-stage crack growth primarily features crack length extension on the pipe surface but limited crack growth in the depth direction. Three different mechanisms of crack length extension have been identified, including that determined by the geometry of coating disbondment, a chaotic process of crack coalescence, and the ability of existing cracks to induce further crack initiation and growth. This latter process is pressure-fluctuation sensitive. - A complete set of equations governing crack growth in Stage 2 has been established based on experimental specimens with surface cracks under mechanical loading conditions realistic to pressure fluctuations during the operation of oil and gas pipelines. - The contribution to crack growth by direct dissolution of the steel at the crack tip has been determined, which has been found to be crack depth-dependent and pressure-fluctuation-sensitive. Gas pipelines operated under high mean pressure show higher rates of dissolution. - The severity of crack growth and the accuracy of the predictive model can be significantly affected by crack tip morphology, either sharp or blunt, and this would yield different threshold values for Stage 2 crack growth and therefore different lengths of remaining life. - Full scale testing was performed and has validated the crack growth models contained herein. - The PipeOnline software has been revised to incorporate the new experimental results obtained from the current PRCI SCC 2-12A project. This PipeOnline software was previously developed from the two earlier phases of the PRCI project.
2

Juden, Matthew, Tichaona Mapuwei, Till Tietz, Rachel Sarguta, Lily Medina, Audrey Prost, Macartan Humphreys, et al. Process Outcome Integration with Theory (POInT): academic report. Centre for Excellence and Development Impact and Learning (CEDIL), March 2023. http://dx.doi.org/10.51744/crpp5.

Abstract:
This paper describes the development and testing of a novel approach to evaluating development interventions – the POInT approach. The authors used Bayesian causal modelling to integrate process and outcome data to generate insights about all aspects of the theory of change, including outcomes, mechanisms, mediators and moderators. They partnered with two teams who had evaluated or were evaluating complex development interventions: The UPAVAN team had evaluated a nutrition-sensitive agriculture intervention in Odisha, India, and the DIG team was in the process of evaluating a disability-inclusive poverty graduation intervention in Uganda. The partner teams’ theory of change were adapted into a formal causal model, depicted as a directed acyclic graph (DAG). The DAG was specified in the statistical software R, using the CausalQueries package, having extended the package to handle large models. Using a novel prior elicitation strategy to elicit beliefs over many more parameters than has previously been possible, the partner teams’ beliefs about the nature and strength of causal links in the causal model (priors) were elicited and combined into a single set of shared prior beliefs. The model was updated on data alone as well as on data plus priors to generate posterior models under different assumptions. Finally, the prior and posterior models were queried to learn about estimates of interest, and the relative role of prior beliefs and data in the combined analysis.
3

Olsen. PR-179-07200-R01 Evaluation of NOx Sensors for Control of Aftertreatment Devices. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), June 2008. http://dx.doi.org/10.55274/r0010985.

Abstract:
Emissions reduction through exhaust aftertreatment is becoming more common. It is likely to play an important role in meeting new emissions regulations in the future. Currently, the predominate aftertreatment technology for NOX reduction in lean burn natural gas engines appears to be selective catalytic reduction (SCR). In SCR, a reducing agent is injected into the exhaust upstream of a catalyst. Supplying the optimal quantity of reagent is critical to effective application of SCR. If too little reagent is supplied then the NOx reduction efficiency may be too low. If too much reagent is provided then the ammonia slip may be too high. Control of reagent injection is an area where improvements could be made. In many current SCR systems, the rate of reagent injection is determined by engine loading. The relationship between engine loading and engine out NOX emission is determined during SCR system commissioning, and assumed to remain constant. Ideally, NOX emissions would be measured and used as feedback to the SCR system. It may also be advantageous to employ transient reagent injection based on time dependent variations in NOX mass flow in the exhaust. This would be possible with a fast response NOx sensor. Close loop engine control is an area of increasing importance. As regulatory emissions levels are reduced, compliance margins generally decrease. Precise control of air/fuel ratio and ignition timing become more critical. Cylinder-to-cylinder control of air/fuel ratio, ignition timing, and IMEP are also important. Advanced sensors are an enabling technology for more precise engine control. Ion sensing is an example of a technology that potentially can improve cylinder balancing and ignition timing. Cylinder-to-cylinder air/fuel ratio can be accomplished in several different ways. One approach would be to install individual sensors in the exhaust manifold, one for each cylinder. Ceramic based sensors (O2 and NOx) may be reliable enough at exhaust port temperatures. They are typically used in the exhaust of 4-stroke cycle engines, which have higher exhaust temperatures than 2-stroke cycle engines. Ceramic based NOx sensors have been under development for use, primarily, in Lean NOx Traps (LNTs). This technology is expected to be used on over-the-road Diesel truck engines in 2010. Therefore, the research effort has momentum. This provides an opportunity to capitalize on the efforts of another industry. In this project a NOx sensor will be evaluated using the SCR slipstream system on the GMV-4TF. The basic tasks are: 1. Identify commercial NOx sensors and procure most promising sensor 2. Design and modification of SCR slipstream system to accept sensors 3. Installation of sensors, sensor electronics, and data logging hardware and software 4. Sensor evaluation during SCR slipstream testing.