Journal articles on the topic 'Software testing, verification and validation'

Consult the top 50 journal articles for your research on the topic 'Software testing, verification and validation.'


1

Karamanlidis, D. "Software validation: inspection - testing - verification - alternatives." Advances in Engineering Software (1978) 7, no. 4 (October 1985): 216. http://dx.doi.org/10.1016/0141-1195(85)90080-4.

2

Yamada, Shigeru. "Software validation: Inspection-testing-verification-alternatives." European Journal of Operational Research 27, no. 3 (December 1986): 385. http://dx.doi.org/10.1016/0377-2217(86)90337-1.

3

Venugopal, Manokar, Manju Nanda, G. Anand, and Hari Chandana Voora. "An integrated Hardware/Software Verification and Validation methodology for Signal Processing Systems." ITM Web of Conferences 50 (2022): 02001. http://dx.doi.org/10.1051/itmconf/20225002001.

Abstract:
The testing and validation services team assesses project deliverables at various stages of development using innovative and effective verification and validation, to ensure that the deliverables comply with the customer's specifications and requirements. Whenever new products and devices are released, fully integrated verification and validation services are delivered, with accurate and complete records of usability, performance, and quality assurance. Throughout the product development and testing process, the testing and validation services team employs verification and validation techniques; code reviews, walkthroughs, inspections, desk-checking, and code execution are all examples. Verification and validation services are used to assess whether or not the software or application provided complies with the requirements and serves its intended purpose. Independent testing and validation services are a procedure used to ensure that the software created is of good quality and consistently operates as expected. Unit testing (also known as "white box testing"), hardware-software integration testing (HSIT), and system testing ("black box testing") are the three primary independent verification and validation approaches. The teams responsible for the verification and validation services actively participate in each stage of the project and design the services according to the project's needs (e.g., prototype, spiral, iterative, V-model, and Agile). Our expertise in the embedded domain, tried-and-true verification and validation techniques, and a thorough methodology provide a quick turnaround and excellent results for the targeted solution.
Independent verification and validation services cover:
- source code, design, and requirements;
- white box testing (unit testing);
- hardware-software integration testing;
- black box testing (system testing);
- test automation solutions to significantly reduce test cycle time;
- stress and performance tests, carried out effectively and efficiently with verification and validation techniques to detect defects early in the life cycle;
- documentation of the test process;
- liaison and certification.
4

Petrenko, A. K. "Verification, Validation, and Testing of Software: Special Issue of theProgrammirovanieJournal." Programming and Computer Software 29, no. 6 (November 2003): 296–97. http://dx.doi.org/10.1023/b:pacs.0000004129.28766.3d.

5

Li, Bing, Li Hong Li, and Fan Ming Liu. "Verification, Validation and Accreditation of Ship Electric Propulsion Simulation System." Applied Mechanics and Materials 433-435 (October 2013): 1915–20. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.1915.

Abstract:
In this paper, the concepts of modelling, simulation, verification and validation are described in order to analyse their necessity in electric ship propulsion system simulation. Traditional verification and validation methods are outlined before a new method, based on expert theory and on verification and validation methods from software testing, is proposed to verify whether the simulation program meets customers' needs and the relevant software protocols, according to the characteristics of the electric ship propulsion system.
6

Zhou, Jiantao, Jing Liu, Jinzhao Wu, and Guodong Zhong. "A Latent Implementation Error Detection Method for Software Validation." Journal of Applied Mathematics 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/138287.

Abstract:
Model checking and conformance testing play an important role in software system design and implementation. With a view to integrating model checking and conformance testing into a tightly coupled validation approach, this paper presents a novel approach to detecting latent errors in software implementations. The latent errors fall into two kinds: one is called an Unnecessary Implementation Trace, the other a Neglected Implementation Trace. The method complements the incompleteness of security properties in software model checking. More accurate models are characterized to leverage the effectiveness of the combined model-based software verification and testing method.
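The two kinds of latent error named in this abstract can be illustrated with a small sketch (not the paper's actual method): if a model and an implementation are each summarized as a set of event traces, the discrepancies split exactly into the two classes. The trace representation and event names below are invented for illustration.

```python
# Illustrative sketch: classify latent implementation errors by
# comparing trace sets from a model and an implementation.
# Traces are represented as tuples of event names (an assumption).

def classify_latent_errors(model_traces, impl_traces):
    """Split trace discrepancies into the two kinds named in the
    abstract: traces the implementation exhibits but the model does
    not (Unnecessary Implementation Traces), and traces the model
    allows but the implementation never produces (Neglected
    Implementation Traces)."""
    model = set(model_traces)
    impl = set(impl_traces)
    return {
        "unnecessary": impl - model,  # present in implementation only
        "neglected": model - impl,    # present in model only
    }

# hypothetical protocol traces
model_traces = [("init", "send", "ack"), ("init", "timeout", "retry")]
impl_traces = [("init", "send", "ack"), ("init", "send", "drop")]

errors = classify_latent_errors(model_traces, impl_traces)
```

Any element of the first set is behaviour the implementation added beyond the model; any element of the second is modelled behaviour the implementation never exercises.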
7

Sziray, József. "A Test Model for Hardware and Software Systems." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 5 (September 20, 2004): 523–29. http://dx.doi.org/10.20965/jaciii.2004.p0523.

Abstract:
The paper is concerned with the general aspects of testing complex hardware and software systems. First, a mapping scheme is presented as a test model for an arbitrary given system. This scheme serves to describe the one-to-one correspondence between the input and output domains of the system, where the test inputs and fault classes are also involved. The presented test model incorporates both the verification and the validation schemes for hardware and software. The significance of the model is that it facilitates a clear differentiation between verification and validation tests, which is important and useful in the process of test design and evaluation. On the other hand, the model provides a clear overview of the test sets serving various purposes, which helps in organizing and applying these sets. The second part of the paper examines the case when the hardware and software are designed using formal specification. Here the consequences and problems of formal methods, and their impact on verification and validation, are discussed.
8

Reddy, Jogannagari Malla, and Kothuri Parashu Ramulu. "The Complexity of Verification and Validation Testing in Component Based Software Engineering." International Journal of Computer Sciences and Engineering 5, no. 12 (December 31, 2017): 296–300. http://dx.doi.org/10.26438/ijcse/v5i12.296300.

9

Voronova, Anna, Elena Zhilenkova, Anton Zhilenkov, and Vladislav Borisenko. "Practical approaches to verification and validation of engineering software when solving problems of nonlinear and chaotic dynamics." E3S Web of Conferences 258 (2021): 01012. http://dx.doi.org/10.1051/e3sconf/202125801012.

Abstract:
The article discusses the problem of verification and validation of engineering software when solving problems of nonlinear dynamics. The problems of validation and verification are illustrated using dynamical chaos systems as an example, and the results of testing these systems are presented. It is shown that, in general, when solving problems of nonlinear dynamics, the characteristics of the developed engineering software are of critical importance. It is also indicated that neglecting this fact leads to irreversible negative consequences, ultimately resulting in the decay of the dynamics of a nonlinear system as well as of its orbits. The influence of the hardware, and of a number of aspects of the system and software implementation of the verification and validation systems under study, is also shown. The article demonstrates modern approaches to the development of the studied software systems. It is shown that a high-quality software product presupposes division into subsystems and stages.
10

Murray-Smith, David. "Some Issues in the Testing of Computer Simulation Models." International Journal of Business & Technology 5, no. 1 (November 1, 2016): 1–10. http://dx.doi.org/10.33107/ijbte.2016.5.1.01.

Abstract:
The testing of simulation models has much in common with testing processes in other types of application involving software development. However, there are also important differences associated with the fact that simulation model testing involves two distinct aspects, known as verification and validation. Model validation is concerned with the investigation of modelling errors and model limitations, while verification involves checking that the simulation program is an accurate representation of the mathematical and logical structure of the underlying model. Success in model validation depends upon the availability of detailed information about all aspects of the system being modelled. It may also depend on the availability of high-quality data from the system, which can be used to compare its behaviour with that of the corresponding simulation model. Transparency, high standards of documentation and good management of simulation models and data sets are basic requirements in simulation model testing. Unlike most other areas of software testing, model validation often has subjective elements, with potentially important contributions from face-validation procedures in which experts give a subjective assessment of the fidelity of the model. Verification and validation processes are not simply applied once but must be used repeatedly throughout the model development process, with regression testing principles being applied. Decisions about when a model is acceptable for the intended application inevitably involve some form of risk assessment. A case study concerned with the development and application of a simulation model of a hydro-turbine and electrical generator system is used to illustrate some of the issues arising in a typical control engineering application. Results from the case study suggest that it is important to bring together the objective aspects of simulation model testing and the more subjective face-validation aspects in a coherent fashion.
Suggestions are also made about the need for changes of approach in the teaching of simulation techniques to engineering students, to give more emphasis to issues of model quality, testing and validation.
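The objective side of model validation described above, comparing measured system behaviour with simulation output, is often quantified with a fit metric. Below is a minimal sketch using root-mean-square error; the data series and acceptance threshold are hypothetical, not taken from the case study.

```python
# Minimal sketch of quantitative model validation: compare measured
# data with simulation output via root-mean-square error (RMSE).
import math

def rmse(measured, simulated):
    """Root-mean-square error between two equal-length series."""
    if len(measured) != len(simulated):
        raise ValueError("series must have equal length")
    return math.sqrt(
        sum((m - s) ** 2 for m, s in zip(measured, simulated)) / len(measured)
    )

# hypothetical turbine shaft-speed samples (rad/s) at successive instants
measured = [10.0, 10.5, 11.2, 11.8]
simulated = [10.1, 10.4, 11.0, 12.0]

error = rmse(measured, simulated)
# whether the model is acceptable depends on a tolerance chosen for the
# intended application (the risk assessment the abstract mentions)
acceptable = error < 0.5
```

Face-validation by domain experts would then complement this objective check rather than replace it.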
11

Zeng, Le-tian, Chun-hui Yang, Mao-sheng Huang, and Yue-long Zhao. "Verification of Imaging Algorithm for Signal Processing Software within Synthetic Aperture Radar (SAR) System." Scientific Programming 2019 (February 21, 2019): 1–12. http://dx.doi.org/10.1155/2019/7105281.

Abstract:
In signal processing software testing for synthetic aperture radar (SAR), the verification of algorithms is a specialist task that accounts for a very high proportion of the effort. However, existing methods can only partially validate algorithms, which adversely affects the effectiveness of the software testing. This paper proposes a procedure-based approach to algorithm validation. First, it describes the processing procedures of the polar format algorithm (PFA) under motion-error conditions and, on that basis, analyses the problems that may arise in practice. Through data simulation, SAR echoes are generated flexibly and efficiently. Algorithm simulation is then used to focus on demonstrating the approximations adopted in the algorithm. Combined with real data processing, concealed bugs are uncovered, implementing a comprehensive validation of the PFA. Simulated experiments and real data processing validate the correctness and effectiveness of the proposed approach.
12

Popov, Dmitry. "Testing and verification of the LHCb Simulation." EPJ Web of Conferences 214 (2019): 02043. http://dx.doi.org/10.1051/epjconf/201921402043.

Abstract:
Monte-Carlo simulation is a fundamental tool for high-energy physics experiments, from the design phase to data analysis. In recent years its relevance has increased due to ever-growing measurement precision. Accuracy and reliability are essential features in simulation and particularly important in the current phase of the LHCb experiment, where physics analysis and preparation for data taking with the upgraded detector need to be performed at the same time. In this paper we give an overview of the full chain of tests and procedures implemented for the LHCb Simulation software stack to ensure the quality of its results. The tests comprise simple checks to validate new software contributions in a nightlies system as well as more elaborate checks to probe simple physics and software quantities for performance and regression verification. Commissioning of a new major version of the simulation software for production also implies validating its impact using a few physics analyses. A new system for Simulation Data Quality (SimDQ), which is being put in place to help in the first phase of commissioning and for fast verification of all samples produced, is also discussed.
13

Kumar, Mohit, Geetika Gandhi, and Sushil Garg. "Survey on Various Testing Techniques and Strategies for Bug Finding." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 11, no. 1 (October 25, 2013): 2150–55. http://dx.doi.org/10.24297/ijct.v11i1.1184.

Abstract:
Software testing is a verification and validation process aimed at evaluating a program and ensuring that it meets the required results. The main goal of software testing is to uncover errors in software, so the main aim of test cases is to derive a set of tests that have the highest probability of finding bugs. There are many approaches to software testing, but effective testing of any software product is essentially a tough process; it is nearly impossible to find all the errors in a program. The major problem in testing is which strategy to adopt. Thus, the selection of the right strategy at the right time makes software testing efficient and effective. In this paper I describe software testing techniques classified by purpose.
14

M. Altaie, Atica, Rasha Gh. Alsarraj, and Asmaa H. Al-Bayati. "VERIFICATION AND VALIDATION OF A SOFTWARE: A REVIEW OF THE LITERATURE." Iraqi Journal for Computers and Informatics 46, no. 1 (June 30, 2020): 40–47. http://dx.doi.org/10.25195/ijci.v46i1.249.

Abstract:
With the development of the Internet, software development is often essential, and succeeding in a project's development is complicated. There is a need to deliver software of top quality. This can be accomplished by applying the procedures of Verification and Validation (V&V) throughout development processes. The main aim of V&V is checking whether the created software meets the needs and specifications of clients. V&V is considered a collection of testing and analysis activities across the software's full life cycle. Quick developments in software V&V have been of high importance in developing approaches and tools for identifying possible concurrent bugs and thereby verifying the correctness of software, reflecting the efficiency of modern software V&V. The main aim of this study is a retrospective review of various studies in software V&V and a comparison between them. In the modern competitive world of software, developers must deliver on-time quality products, verify that the software functions properly, and validate the product against each of the client's requirements. The significance of V&V in the development of software is that it maintains software quality. V&V approaches are utilized in all stages of the System Development Life Cycle. Furthermore, the presented study also provides the objectives of V&V and describes V&V tools that can be used in the process of software development, as well as ways of improving the software's quality.
15

Cardoso, Rafael C., Georgios Kourtis, Louise A. Dennis, Clare Dixon, Marie Farrell, Michael Fisher, and Matt Webster. "A Review of Verification and Validation for Space Autonomous Systems." Current Robotics Reports 2, no. 3 (June 18, 2021): 273–83. http://dx.doi.org/10.1007/s43154-021-00058-1.

Abstract:
Purpose of Review: The deployment of hardware (e.g., robots, satellites, etc.) to space is a costly and complex endeavor. It is of extreme importance that on-board systems are verified and validated through a variety of verification and validation techniques, especially in the case of autonomous systems. In this paper, we discuss a number of approaches from the literature that are relevant or directly applied to the verification and validation of systems in space, with an emphasis on autonomy. Recent Findings: Despite advances in individual verification and validation techniques, there is still a lack of approaches that aim to combine different forms of verification in order to obtain system-wide verification of modular autonomous systems. Summary: This systematic review of the literature includes the current advances in the latest approaches using formal methods for static verification (model checking and theorem proving) and runtime verification, the progress achieved so far in the verification of machine learning, an overview of the landscape in software testing, and the importance of performing compositional verification in modular systems. In particular, we focus on reporting the use of these techniques for the verification and validation of systems in space with an emphasis on autonomy, as well as more general techniques (such as in the aeronautical domain) that have been shown to have potential value in the verification and validation of autonomous systems in space.
16

Neri, Giulia R. "The Use of Exploratory Software Testing in SCRUM." ACM SIGSOFT Software Engineering Notes 48, no. 1 (January 10, 2023): 59–62. http://dx.doi.org/10.1145/3573074.3573089.

Abstract:
Exploratory testing is a very common, yet under-researched, software testing technique. Research has shown that this technique can provide better insight into the system under test than other techniques, that it can find defects more efficiently than other testing approaches, and that it can even aid the design of other techniques. This research aims at increasing the understanding of exploratory testing and the way it is used within industries utilizing SCRUM. Another aim is to identify and understand the factors that enable the tester to use this technique successfully. The decision to set the study in SCRUM comes from the fact that this Agile management framework is the most popular in industry and from the suggestion to focus on the relationship between Agile and exploratory testing. Also, the choice of a specific context adds significance to the findings. This research will be conducted in a Sheffield-based company, which produces data analytics software. The methodology will consist of three phases. During Phase 1 (Identification), SCRUM practitioners will be interviewed about the use of exploratory testing in SCRUM and the success factors of this technique. The aim of Phase 2 (Confirmation) will be to confirm the findings from Phase 1. This will be accomplished with focus groups and a widely distributed online survey. Finally, during Phase 3 (Verification), practitioners will take part in experiments to verify that the success factors identified during the first two phases enable efficient and effective exploratory testing. The purpose of this research is to enrich the academic field of software verification and validation, but also to provide industries utilising SCRUM with useful guidance.
17

López-Fernández, Jesús J., Esther Guerra, and Juan de Lara. "Combining unit and specification-based testing for meta-model validation and verification." Information Systems 62 (December 2016): 104–35. http://dx.doi.org/10.1016/j.is.2016.06.008.

18

Rexhepi, Burim, and Ali Rexhepi. "SOFTWARE TESTING TECHNIQUES AND PRINCIPLES." Knowledge International Journal 28, no. 4 (December 10, 2018): 1383–87. http://dx.doi.org/10.35120/kij28041383b.

Abstract:
This paper describes software testing, the need for software testing, and software testing goals and principles. It further describes different software testing techniques and different software testing strategies, and finally the difference between software testing and debugging. To perform testing effectively and efficiently, everyone involved with testing should be familiar with basic software testing goals, principles, limitations and concepts. We further explain different software testing techniques such as correctness testing, performance testing, reliability testing and security testing. We discuss the basic principles of black box testing, white box testing and gray box testing, survey some of the strategies supporting these paradigms, and discuss their pros and cons. We also describe different software testing strategies such as unit testing, integration testing, acceptance testing and system testing. Finally there is a comparison between debugging and testing. Testing is more than just debugging: it is used not only to locate defects and correct them but also in validation, verification and measurement. A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software. Software testing strategies give the road map for testing. A software testing strategy should be flexible enough to promote a customized testing approach and at the same time rigid enough. Strategies are generally developed by project managers, software engineers and testing specialists. Software testing is an extremely creative and intellectually challenging task.
When testing follows the principles given below, the creative element of test design and execution rivals any of the preceding software development steps. Because testing requires high creativity and responsibility, only the best personnel should be assigned to design, implement, and analyze test cases, test data and test results.
19

Wang, Rui Li, Xiao Liang, Wen Zhou Lin, Xue Zhe Liu, and Yun Long Yu. "Verification and Validation of a Detonation Computational Fluid Dynamics Model." Defect and Diffusion Forum 366 (April 2016): 40–46. http://dx.doi.org/10.4028/www.scientific.net/ddf.366.40.

Abstract:
Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational fluid dynamics (CFD) simulation. V&V of the multi-medium detonation CFD model is conducted using our independently developed software, the Lagrangian adaptive hydrodynamics code in 2D space (LAD2D), as well as a large number of benchmark testing models. Specifically, verification of the computational model is based on the basic theory of the computational scheme and the mathematical physics equations, while validation of the physical model is accomplished by comparing the numerical solution with experimental data. Finally, some suggestions are given about V&V of the detonation CFD model.
20

Tebes, Guido, Luis Olsina, Denis Peppino, and Pablo Becker. "Specifying and Analyzing a Software Testing Ontology at the Top-Domain Ontological Level." Journal of Computer Science and Technology 21, no. 2 (October 21, 2021): e12. http://dx.doi.org/10.24215/16666038.21.e12.

Abstract:
One of the Software Engineering areas that supports quality assurance is testing. Given that specific processes, artefacts, methods and ultimately strategies for software testing involve a large number of domain concepts, it is valuable to have a robust conceptual base, that is, a software testing ontology that defines the terms, properties, relationships and axioms explicitly and unambiguously. Ontologies, for instance, foster a clearer terminological understanding of process and method specifications for strategies, among many other benefits. After analyzing both the results of a Systematic Literature Review of primary studies on conceptualized software testing ontologies and the state of the art of testing-related standards, we decided to develop a software testing top-domain ontology named TestTDO that fits our goals. Therefore, this article specifies development, verification and validation aspects of TestTDO, which was built following the Design Science Research approach.
21

Pasaribu, Johni Setiady. "Perbandingan Pengujian Boundary Value Analysis, Equivalence Partitioning dan Error Guessy (Studi Kasus Indeks Nilai)." Jurnal ICT : Information Communication & Technology 20, no. 2 (December 15, 2021): 210–17. http://dx.doi.org/10.36054/jict-ikmi.v20i2.388.

Abstract:
Software testing is a critical and time-consuming element of the software development life cycle (SDLC). Software testing determines the quality of software before it is used by end users. The main goal is to find software errors or defects so that they can be corrected in the testing phase. There are two general techniques for detecting software errors, namely black box and white box testing. The black box testing technique is used for validation (whether you have built the right software) and white box testing is used for verification (whether the software is built correctly). This study aims to test value-index software using three black box testing methods: Equivalence Partitioning (EP), Boundary Value Analysis (BVA) and Error Guessing. The testing process is carried out to find out how many errors occur in the value-index software. The research method is qualitative, descriptive and analytical. The results of this study indicate that the value-index application has no shortcomings (an error rate of 0%) with the BVA, EP and Error Guessing techniques.
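The two black box techniques this abstract compares can be sketched as follows for a hypothetical grade-index function (a 0-100 score mapped to a letter); the function, partitions and boundaries below are assumptions for illustration, not taken from the paper.

```python
# Sketch of equivalence partitioning (EP) and boundary value
# analysis (BVA) applied to a hypothetical grade-index function.

def grade_index(score):
    """Map a 0-100 score to a letter grade (invented rules)."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    return "C"

# EP: one representative value per equivalence partition
ep_cases = {45: "C", 70: "B", 90: "A"}

# BVA: values at and just around each partition boundary
bva_cases = {0: "C", 59: "C", 60: "B", 79: "B", 80: "A", 100: "A"}

for cases in (ep_cases, bva_cases):
    for score, expected in cases.items():
        assert grade_index(score) == expected

# invalid partitions: out-of-range inputs must be rejected
for bad in (-1, 101):
    try:
        grade_index(bad)
    except ValueError:
        pass
    else:
        raise AssertionError("out-of-range score accepted")
```

EP keeps the test set small (one case per partition), while BVA concentrates cases where off-by-one defects typically hide.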
22

ROUSSET, MARIE-CHRISTINE, and SUSAN CRAW. "ECAI-96 workshop on validation, verification and refinement of KBS: A short report." Knowledge Engineering Review 12, no. 01 (January 1997): 95–98. http://dx.doi.org/10.1017/s0269888997000052.

Abstract:
Ensuring reliability and enhancing quality of Knowledge Based Systems (KBS) are critical factors for their successful deployment in real-world applications. This is a broad task involving both methodological and formal approaches for designing rigorous Validation, Verification and Testing (VVT) methods and tools. Some of these can be adapted from conventional software engineering, while others rely on specific aspects of KBS.
23

Zhukov, Victor P. "Verification, Validation, and Testing of Kinetic Mechanisms of Hydrogen Combustion in Fluid-Dynamic Computations." ISRN Mechanical Engineering 2012 (August 13, 2012): 1–11. http://dx.doi.org/10.5402/2012/475607.

Abstract:
A one-step, a two-step, an abridged, a skeletal, and four detailed kinetic schemes of hydrogen oxidation have been tested. A new skeletal kinetic scheme of hydrogen oxidation has been developed. The CFD calculations were carried out using ANSYS CFX software. Ignition delay times and speeds of flames were derived from the computational results. The computational data obtained using ANSYS CFX and CHEMKIN, and experimental data were compared. The precision, reliability, and range of validity of the kinetic schemes in CFD simulations were estimated. The impact of kinetic scheme on the results of computations was discussed. The relationship between grid spacing, time step, accuracy, and computational cost was analyzed.
24

Joosten, Joosten. "THE BLACK BOX TESTING AND LOC METHOD APPROACH IN TESTING AND STREAMLINING THE PATIENT REGISTRATION PROGRAM." Jurnal Riset Informatika 3, no. 2 (March 2, 2021): 137–44. http://dx.doi.org/10.34288/jri.v3i2.188.

Abstract:
Good software can be used only if there is proper testing. The testing phase is quite important because the software needs to be tested before it is used by end users. The animal hospital software was built without validation and verification, so testing is needed. This study used information from the patient registration section of a veterinary hospital, tested with three black box testing methods, namely Equivalence Class Partitioning (ECP), Boundary Value Analysis (BVA), and Decision Table, plus the LOC approach. The test results of the three methods are as follows: the percentage of invalid ECP cases is greater than that of valid ones, so the input value limits need to be changed. For the BVA testing, the percentage of valid cases is higher than invalid. In the decision table, a shortened rule is made between operating services and other services so that inpatient status and down payment are produced automatically, without choosing them again, and it is tested again with the decision table by matching the estimation results of the two services.
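The decision-table idea described above, deriving inpatient status and down payment automatically from the service conditions, can be sketched as a simple lookup; the rules and amounts here are invented for illustration and do not come from the paper.

```python
# Illustrative decision table: each combination of the two service
# conditions maps to an (inpatient, down_payment) action pair.

DECISION_TABLE = {
    # (operating_service, other_service): (inpatient, down_payment)
    (True, True): (True, 500_000),
    (True, False): (True, 300_000),
    (False, True): (False, 100_000),
    (False, False): (False, 0),
}

def register(operating_service, other_service):
    """Look up inpatient status and down payment from the table,
    so registration staff never choose them manually."""
    return DECISION_TABLE[(operating_service, other_service)]

status, payment = register(True, False)
```

Testing the table means exercising every rule (every key) once, which is exactly what the decision-table technique prescribes.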
25

Putri, Dwi Ismiyana. "Teknik Equivalence Partitions untuk Pengujian Aplikasi Manajemen Kas dan Inventaris Berbasis Web." INFORMATION MANAGEMENT FOR EDUCATORS AND PROFESSIONALS : Journal of Information Management 6, no. 2 (October 24, 2022): 193. http://dx.doi.org/10.51211/imbi.v6i2.1922.

Abstract:
Checking software against its specifications requires testing and validation. A discrepancy in the data-entry process was caused by a validation process that had not been optimized; one consequence is failure when inputting data. As a result, the application can be dangerous for future users of the system. Therefore, quality improvement and accurate verification are necessary. A test is called a good test when the testing process finds previously unknown errors. One testing method is black box testing with the Equivalence Partitions technique. Equivalence Partitioning is a technique used to reduce the number of test cases during testing: data are entered into each form of the application in each menu, then tested and grouped according to their validity under the system's functionality.
APA, Harvard, Vancouver, ISO, and other styles
26

Nidagundi, Padmaraj, and Leonids Novickis. "Introduction to Lean Canvas Transformation Models and Metrics in Software Testing." Applied Computer Systems 19, no. 1 (May 1, 2016): 30–36. http://dx.doi.org/10.1515/acss-2016-0004.

Full text
Abstract:
Software plays a key role nowadays in all fields, from simple applications up to cutting-edge technologies, and most technological devices now run on software. Verification and validation in software development have become very important for producing high-quality software that meets business stakeholders' requirements. Different software development methodologies have given a new dimension to software testing. In traditional waterfall development, software testing is approached at the end: resource planning begins, a test plan is designed, and test criteria are defined for acceptance testing. In this process most of the test plan is well documented, which leads to a time-consuming process. For modern software development methodologies such as agile, where long test processes and extensive documentation are not strictly followed due to the short iterations of development and testing, lean canvas transformation models can be a solution. This paper provides a new dimension for exploring the possibilities of adopting lean transformation models and metrics in the software test plan to simplify the test process, for further use of these test metrics on the canvas.
APA, Harvard, Vancouver, ISO, and other styles
27

Acharya, Sushil, Priyadarshan Anant Manohar, Peter Wu, Bruce Maxim, and Mary Hansen. "Design, Development and Delivery of Active Learning Tools in Software Verification & Validation Education." Journal of Education and Learning 7, no. 1 (August 23, 2017): 13. http://dx.doi.org/10.5539/jel.v7n1p13.

Full text
Abstract:
Active learning tools are critical in imparting real-world experiences to students within a classroom environment. This is important because graduates are expected to develop software that meets rigorous quality standards in functional and application domains with little to no training. However, there is a well-recognized need for effective active learning tools. The authors have addressed this need by designing, developing, and delivering twenty delivery hours of Case Studies, sixteen delivery hours of Class Exercises, and six delivery hours of Video Case Studies for use in V&V courses. The active learning tools focus on specific SV&V topics such as requirements engineering, software reviews, configuration management, and software testing. Four key skill areas sought after by employers, namely communication skills, applied knowledge of methods, applied knowledge of tools, and research exposure, have been used to drive the development, funded by a National Science Foundation grant and perfected through an industry-academia partnership. These tools have been successfully disseminated to over 25 universities, with many CS, IS, and SE programs incorporating the tools in their existing courses and others designing new courses based on these tools. In this paper we present data on the student feedback and the pedagogical effectiveness of the strategies used to effectively incorporate and deliver the developed active learning tools by instructors at two universities. Traditional and flipped classroom delivery strategies are discussed, and topics such as pre-requisite knowledge preparation prior to class, course module presentation sequence, homework, team/individual work, collaborative discussions, and assessment tools are deliberated.
The student questionnaire data from the two University Partners who used the V&V instructional activities were quite positive and showed that students were interested in the activities, saw the real-world applications, and communicated with their classmates as they solved the problems. Educational outcomes assessment demonstrated more effective learning in all key learning areas.
APA, Harvard, Vancouver, ISO, and other styles
28

Yin, Peili, Jianhua Wang, and Chunxia Lu. "Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument." Measurement Science Review 17, no. 4 (August 1, 2017): 197–207. http://dx.doi.org/10.1515/msr-2017-0023.

Full text
Abstract:
Validity and correctness test verification of measuring software has been a thorny issue hindering the development of the Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. A triangular patch model with accurately controlled precision was taken as the virtual workpiece, and a universal collision detection model was established. The whole workpiece measurement process is simulated with the VGMI replacing the GMI, and the measuring software is tested in the proposed virtual environment. Taking the involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on an involute master in a GMI. The experimental results indicate consistency between the tooth profile deviation and the calibration results, thus verifying the accuracy of the gear measuring system, which includes the measurement procedures. It is shown that the VGMI presented can be applied to the validation of measuring software, providing a new, ideal platform for testing complex-workpiece-measuring software without calibrated artifacts.
APA, Harvard, Vancouver, ISO, and other styles
29

Neicu, Marian Ştefan, and George Gustav Savii. "Evaluation of Algorithms and Methods for Developing Business Information Systems Using Virtual Factory." Applied Mechanics and Materials 841 (June 2016): 367–72. http://dx.doi.org/10.4028/www.scientific.net/amm.841.367.

Full text
Abstract:
The paper presents a method for assessing information systems designed for the modern business environment. Unlike systems audit or established methods of software testing, the evaluation of systems for business can be made by implementing and using them in a virtual, digital factory. This, in addition to effective verification and validation of the software, can be used to evaluate how computer applications adapt to the complete information processing flows corresponding to the core processes of a real enterprise. In this way full results of good quality can be achieved at a low cost of the evaluation process.
APA, Harvard, Vancouver, ISO, and other styles
30

Rowe, Alexander D., Stephanie D. Stoway, Henrik Åhlman, Vaneet Arora, Michele Caggana, Anna Fornari, Arthur Hagar, et al. "A Novel Approach to Improve Newborn Screening for Congenital Hypothyroidism by Integrating Covariate-Adjusted Results of Different Tests into CLIR Customized Interpretive Tools." International Journal of Neonatal Screening 7, no. 2 (April 23, 2021): 23. http://dx.doi.org/10.3390/ijns7020023.

Full text
Abstract:
Newborn screening for congenital hypothyroidism remains challenging decades after broad implementation worldwide. Testing protocols are not uniform in terms of targets (TSH and/or T4) and protocols (parallel vs. sequential testing; one or two specimen collection times), and specificity (with or without collection of a second specimen) is overall poor. The purpose of this retrospective study is to investigate the potential impact of multivariate pattern recognition software (CLIR) to improve the post-analytical interpretation of screening results. Seven programs contributed reference data (N = 1,970,536) and two sets of true (TP, N = 1369 combined) and false (FP, N = 15,201) positive cases for validation and verification purposes, respectively. Data were adjusted for age at collection, birth weight, and location using polynomial regression models of the fifth degree to create three-dimensional regression surfaces. Customized Single Condition Tools and Dual Scatter Plots were created using CLIR to optimize the differential diagnosis between TP and FP cases in the validation set. Verification testing correctly identified 446/454 (98%) of the TP cases, and could have prevented 1931/5447 (35%) of the FP cases, with variable impact among locations (range 4% to 50%). CLIR tools either as made here or preferably standardized to the recommended uniform screening panel could improve performance of newborn screening for congenital hypothyroidism.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhou, Deyuan, Changtuan Guo, Xiaohan Wu, and Bo Zhang. "Seismic Evaluation of a Multitower Connected Building by Using Three Software Programs with Experimental Verification." Shock and Vibration 2016 (2016): 1–18. http://dx.doi.org/10.1155/2016/8215696.

Full text
Abstract:
Shanghai International Design Center (SHIDC) is a hybrid structure of steel frame and reinforced concrete core tube (SF-RCC). It is a building of unequal height two-tower system and the story lateral stiffness of two towers is different, which may result in the torsion effect. To fully evaluate structural behaviors of SHIDC under earthquakes, NosaCAD, ABAQUS, and Perform-3D, which are widely applied for nonlinear structure analysis, were used to perform elastoplastic time history analyses. Numerical results were compared with those of shake table testing. NosaCAD has function modules for transforming the nonlinear analysis model to Perform-3D and ABAQUS. These models were used in ABAQUS or Perform-3D directly. With the model transformation, seismic performances of SHIDC were fully investigated. Analyses have shown that the maximum interstory drift can satisfy the limits specified in Chinese code and the failure sequence of structural members was reasonable. It meant that the earthquake input energy can be well dissipated. The structure keeps in an undamaged state under frequent earthquakes and it does not collapse under rare earthquakes; therefore, the seismic design target is satisfied. The integrated use of multisoftware with the validation of shake table testing provides confidence for a safe design of such a complex structure.
APA, Harvard, Vancouver, ISO, and other styles
32

Acharya, Sushil, Priyadarshan Manohar, Peter Wu, and Walter Schilling. "Using Academia-Industry Partnerships to Enhance Software Verification & Validation Education via Active Learning Tools." Journal of Education and Learning 6, no. 2 (January 5, 2017): 69. http://dx.doi.org/10.5539/jel.v6n2p69.

Full text
Abstract:
Imparting real-world experiences in a software verification and validation (SV&V) course is often a challenge due to the lack of effective active learning tools. This pedagogical requirement is important because graduates are expected to develop software that meets rigorous quality standards in functional and application domains. Realizing this necessity, the authors designed and developed 42 delivery hours of active learning tools consisting of Case Studies, Class Exercises, and Case Study Videos for use in courses that impart knowledge on SV&V topics, viz. requirements engineering, software reviews, configuration management, and software testing. Four key skill areas sought after by employers, namely communication skills, applied knowledge of methods, applied knowledge of tools, and research exposure, are used to drive the development, funded by a National Science Foundation grant and perfected through an industry-academia partnership. In this paper, we discuss in detail the four project plans the researchers and their industry counterparts followed over the past two years in the development and eventual dissemination of the active learning tools. A course enhancement plan was used to drive activities related to reviewing, enhancing, and modularizing modules identified by a gap analysis performed by focus groups comprised of industry and academic partners. The course delivery plan was used to drive activities related to developing content delivery strategies. An evaluation and assessment plan was used to drive activities related to periodically evaluating student learning and assessing the project. Finally, a course dissemination plan is being used to drive activities related to distributing course modules and assessment reports. The tools have been shared through two workshops and other means with instructors in universities and with industry partners.
APA, Harvard, Vancouver, ISO, and other styles
33

Rusu, Vlad. "Combining formal verification and conformance testing for validating reactive systems." Software Testing, Verification and Reliability 13, no. 3 (2003): 157–80. http://dx.doi.org/10.1002/stvr.274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Verma, Shubham, and Dipti Dipti Ranjan. "EB: Eye Biometrics Based a Novel Human Recognition System for Cardless Online Payment Security Improvement in ATMs." International Journal of Engineering Research in Computer Science and Engineering 9, no. 9 (September 21, 2022): 69–76. http://dx.doi.org/10.36647/ijercse/09.09.art017.

Full text
Abstract:
Here we introduce a novel approach for enhancing the security of traditional ATM transactions. Personal verification is possibly the most important way to improve security, but conventional personal authentication techniques have limitations. Biometrics, which automatically uses the physiological or behavioural traits of people to recognize their identities, is one of the effective techniques for overcoming these issues. Biometrics is a field of automatic personal identification based on the physiological and behavioural characteristics of humans; a behavioural characteristic is increasingly a reflection of an individual's physiological makeup. We used an Eye Biometric Recognition System built with OpenCV for cardless transactions and identity verification after ATM transactions, and we propose the EB verification model in this research for the final completion of a transaction. Though it may require more time for verification, security takes priority over time and cyber theft. The measured efficiency of the proposed software was 98.52%, 95.75%, and 98.86% in extensive testing of our algorithm on three datasets named UBIRIS.V1, UBIRIS.V2, and IITD; the algorithms used in OpenCV for feature detection and extraction are ORB and brute-force matching. Sparking the interest of youth, and embedding an additional layer of security in online transactions, is the motivation of this research.
APA, Harvard, Vancouver, ISO, and other styles
35

FAN, Yuyang, Zhi DENG, and Zihang LI. "Verification and reliability analysis of synchronizers in clock domain crossing." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 40, no. 2 (April 2022): 369–76. http://dx.doi.org/10.1051/jnwpu/20224020369.

Full text
Abstract:
There are a large number of multi-clock-domain circuits in the airborne equipment of aircraft. When data are transmitted across clock domains, metastability may occur, resulting in data transmission errors and reduced circuit reliability. However, faults caused by metastability are intermittent and non-reproducible, existing software dedicated to cross-clock-domain verification is costly, and cross-clock-domain circuit verification in triple-mode redundancy scenarios is not supported. To solve this problem, a method is presented that combines register transfer level (RTL) validation, board-level accelerated testing, and computational evaluation based on traditional tools. This method can detect cross-clock-domain transmission problems in triple-mode or normal application scenarios and assess potential cross-clock-domain transmission risks using generic simulation tools at an early stage of design. It reduces the economic and time cost of verifying high-safety-level airborne complex electronics and improves the reliability of the circuit.
APA, Harvard, Vancouver, ISO, and other styles
36

Oleksandr, Shepeliev, and Mariia Bilova. "SOFTWARE TESTING RESULTS ANALYSIS FOR THE REQUIREMENTS CONFORMITY USING NEURAL NETWORKS." Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 2 (6) (December 28, 2021): 8–14. http://dx.doi.org/10.20998/2079-0023.2021.02.02.

Full text
Abstract:
The relevance of this scientific work lies in the need to improve existing software designed to analyze the compliance of software testing results with the stated requirements. To this end, neural networks can be used by quality-control specialists to make decisions about software quality, or by project managers as an expert system providing a quality indicator for the customer. The article deals with software testing, a process of validation and verification that checks whether a software application or business program complies with the technical requirements that guided its design and development, works as expected, and identifies important errors or deficiencies, classified by severity, to be fixed. Existing systems do not provide, or only partially integrate, requirements analysis, which should support the formation of an expert assessment and justify the quality of the software product. Thus, a data processing model based on a fuzzy neural network was proposed. An approach was proposed to determine the compliance of the developed software with functional and non-functional requirements, taking into account how successfully or unsuccessfully each requirement was implemented. The ultimate goal of the work is the development of algorithmic software for analyzing the compliance of software testing results with the stated requirements, to support decision making. The following tasks are solved: analysis of the advantages and disadvantages of existing systems for working with requirements; definition of the general structure and classification of testing and requirements; characterization of the main features of the use of neural networks; design of the architecture of a module for checking the conformity of software testing results to the stated requirements.
APA, Harvard, Vancouver, ISO, and other styles
37

Mishra, Ashutosh, and Meenu Singla. "A Software Fault Prediction on Inter- and Intra-Release Prediction Scenarios." International Journal of Open Source Software and Processes 12, no. 4 (October 2021): 1–18. http://dx.doi.org/10.4018/ijossp.287611.

Full text
Abstract:
Software quality engineering applies numerous techniques for assuring the quality of software, namely testing, verification, validation, fault tolerance, and fault prediction. Machine learning techniques facilitate the identification of software modules as faulty or non-faulty. In most research, these approaches predict fault-prone modules within the same release of the software, although a model is more efficiently validated when training and testing data are taken from previous and subsequent releases of the software, respectively. The contribution of this paper is to predict faults in two scenarios, i.e., inter- and intra-release prediction. A comparison of intra- and inter-release fault prediction, computing various performance metrics using machine learning methods, shows that intra-release prediction has better accuracy than inter-release prediction across all releases, though both scenarios achieve good results in comparison to existing research work.
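The two evaluation scenarios compared in this abstract can be sketched with a toy classifier. The module metrics, data, and nearest-centroid model below are illustrative assumptions, not the paper's method or datasets:

```python
# Toy nearest-centroid fault predictor on synthetic module metrics.
# Intra-release: train and test within one release; inter-release:
# train on release N and test on release N+1.

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def predict(x, faulty_c, clean_c):
    d = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return 1 if d(faulty_c) < d(clean_c) else 0

# (loc, cyclomatic_complexity, label) — label 1 marks a faulty module.
release_1 = [(120, 4, 0), (900, 22, 1), (200, 6, 0), (1100, 30, 1)]
release_2 = [(150, 5, 0), (950, 25, 1), (180, 7, 0), (1000, 28, 1)]

def evaluate(train, test):
    faulty_c = centroid([r[:2] for r in train if r[2] == 1])
    clean_c = centroid([r[:2] for r in train if r[2] == 0])
    hits = sum(predict(r[:2], faulty_c, clean_c) == r[2] for r in test)
    return hits / len(test)

print("intra:", evaluate(release_1, release_1))  # same-release scenario
print("inter:", evaluate(release_1, release_2))  # next-release scenario
```

On real data the two accuracies typically differ, which is the comparison the paper quantifies across releases.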
APA, Harvard, Vancouver, ISO, and other styles
38

Harman, Mark, and Bogdan Korel. "Editorial for special issue of STVR on software testing, verification, and validation - volume 2 (extended selected papers from ICST 2011)." Software Testing, Verification and Reliability 23, no. 7 (September 16, 2013): 529. http://dx.doi.org/10.1002/stvr.1506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Harman, Mark, and Bogdan Korel. "Editorial for special issue of STVR on software testing, verification, and validation - volume 1 (extended selected papers from ICST 2011)." Software Testing, Verification and Reliability 23, no. 6 (August 27, 2013): 437. http://dx.doi.org/10.1002/stvr.1507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

KUMAR, VIJAY, SUNIL KUMAR KHATRI, HITESH DUA, MANISHA SHARMA, and PARIDHI MATHUR. "AN ASSESSMENT OF TESTING COST WITH EFFORT-DEPENDENT FDP AND FCP UNDER LEARNING EFFECT: A GENETIC ALGORITHM APPROACH." International Journal of Reliability, Quality and Safety Engineering 21, no. 06 (December 2014): 1450027. http://dx.doi.org/10.1142/s0218539314500272.

Full text
Abstract:
Software testing involves verification and validation of the software to meet the requirements elicited from customers in the earlier phases and to subsequently increase software reliability. Around half of the resources, such as manpower and CPU time, are consumed, and a major portion of the total cost of developing the software is incurred, in the testing phase, making it the most crucial and time-consuming phase of the software development lifecycle (SDLC). The fault detection process (FDP) and fault correction process (FCP) are also important processes in the SDLC. A number of software reliability growth models (SRGMs) have been proposed in the last four decades to capture the time lag between detected and corrected faults, but most of these models are discussed under a static environment. The purpose of this paper is to allocate resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. An elaborate optimization policy based on optimal control theory for resource allocation, with the objective of minimizing the cost, is proposed. Further, a genetic algorithm is applied to obtain the optimum values of detection and correction effort that minimize the cost. A numerical example is given in support of the above theoretical result. The experimental results help the project manager identify the contribution of model parameters and their weights.
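The genetic-algorithm step described in this abstract can be illustrated with a toy sketch. The cost function and GA parameters below are invented stand-ins, not the paper's SRGM-based model:

```python
# Toy GA that searches for detection (d) and correction (c) effort levels
# minimizing a hypothetical testing cost: effort spent plus a penalty for
# residual faults that decays as effort grows.
import math
import random

def cost(d, c):
    residual = 100 * math.exp(-0.05 * d - 0.03 * c)  # assumed fault model
    return d + c + 5 * residual

def ga_minimize(generations=200, pop_size=30, seed=7):
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 200), rng.uniform(0, 200)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: cost(*p))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)       # crossover + mutation
            children.append((max(0.0, (a[0] + b[0]) / 2 + rng.gauss(0, 2)),
                             max(0.0, (a[1] + b[1]) / 2 + rng.gauss(0, 2))))
        pop = survivors + children
    return min(pop, key=lambda p: cost(*p))

d, c = ga_minimize()
print(f"best effort split d={d:.1f}, c={c:.1f}, cost={cost(d, c):.1f}")
```

Elitism guarantees the best-known allocation is never lost between generations, which is why the search converges toward the low-cost region of the effort space.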
APA, Harvard, Vancouver, ISO, and other styles
41

Al Zaabi, Abdulla, Chan Yeob Yeun, and Ernesto Damiani. "Trusting Testcases Using Blockchain-Based Repository Approach." Symmetry 13, no. 11 (October 26, 2021): 2024. http://dx.doi.org/10.3390/sym13112024.

Full text
Abstract:
Modern vehicles have evolved to support connected and self-driving capabilities. The concepts such as connected driving, cooperative driving, and intelligent transportation systems have resulted in an increase in the connectivity of vehicles and subsequently created new information security risks. The original vehicular ad-hoc network term is now emerged to a new term, Internet of Vehicles (IoV), which is a typical application of symmetry of Internet of Things (IoT). Vehicle manufacturers address some critical issues such as software bugs or security issues through remote updates, and this gives rise to concerns regarding the security of updated components. Moreover, aftermarket units such as those imposed by transportation authorities or insurance companies expose vehicles to high risk. Software testing aims to ensure that software products are reliable and behave as expected. Many commercial and open-source software products undergo formal certifications to increase users’ confidence in their accuracy, reliability, and security. There are different techniques for software certification, including test-based certification. Testcase repositories are available to support software testing and certification, such as the Linux Test Project for Linux kernel testing. Previous studies performed various testing and experimental evaluation of different parts of modern vehicles to assess the security risks. Due to the lack of trusted testcase repositories and a common approach for testing, testing efforts are performed individually. In this paper, we propose a blockchain-based approach for a testcase repository to support test-based software and security testing and overcome the lack of trusted testcase repositories. The novel concept Proof-of-Validation to manage global state is proposed to manage updates to the repository. The initial work in this study considers the LTP test suite as a use case for the testcase repository. 
This research work is expected to contribute to further developments, including evidence generation for test verification.
APA, Harvard, Vancouver, ISO, and other styles
42

Cronqvist, Mattias Lantz, Carl-Oscar Jonson, and Erik Prytz. "Development and Initial Validation of a Stochastic Discrete Event Simulation to Assess Disaster Preparedness." Prehospital and Disaster Medicine 34, s1 (May 2019): s118. http://dx.doi.org/10.1017/s1049023x19002528.

Full text
Abstract:
Introduction: Assessing disaster preparedness in a given region is a complex problem. Current methods are often resource-intensive and may lack generalizability beyond a specific scenario. Computer-based stochastic simulations may be an additional method but would require systems that are valid, flexible, and easy to use. Emergo Train System (ETS) is an analog simulation system used for disaster preparedness assessments. Aim: To digitalize the ETS model and develop stochastic simulation software for improved disaster preparedness assessments. Methods: Simulation software was developed in C#. The simulation model was based on ETS. Preliminary verification and validation (V&V) tests were performed, including unit and integration testing, trace validation, and a comparison to a prior analog ETS disaster preparedness assessment exercise. Results: The software contains medically validated patients from ETS and is capable of automatically running disaster scenarios with stochastic variations in the injury panorama, available resources, geographical location, and other variables. It consists of two main programs: an editor where scenarios can be constructed and a simulation system to evaluate the outcome. Initial V&V testing showed that the software is reliable and internally consistent. The comparison to the analog exercise showed generally high agreement in terms of patient outcome. The analog exercise featured a train derailment with 397 injured, of which 45 patients suffered preventable death. In comparison, the computer simulation ran 100 iterations of the same scenario and indicated that a median of 41 patients (IQR 31 to 44) would suffer a preventable death. Discussion: Stochastic simulation methods can be a powerful complement to traditional capability assessment methods. The developed simulation software can be used both for assessing emergency preparedness with some validity and as a complement to analog capability assessment exercises, both as input and to validate results. Future work includes comparing the simulation to real disaster outcomes.
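The stochastic-replication idea in this abstract (many randomized iterations of one scenario, summarized by a median and IQR) can be sketched as follows. The outcome model is a made-up stand-in, not the ETS patient model:

```python
# Run many iterations of a disaster scenario with random variation and
# report the median and interquartile range of an outcome measure.
import random
import statistics

def run_scenario(rng, casualties=397):
    # Hypothetical outcome model: each casualty's probability of
    # preventable death varies with randomly drawn resource availability.
    resource_factor = rng.uniform(0.8, 1.2)
    p_death = 0.10 / resource_factor
    return sum(1 for _ in range(casualties) if rng.random() < p_death)

rng = random.Random(42)  # fixed seed so the experiment is reproducible
outcomes = sorted(run_scenario(rng) for _ in range(100))
median = statistics.median(outcomes)
q1, q3 = outcomes[24], outcomes[74]  # rough quartiles over 100 runs
print(f"preventable deaths: median {median}, IQR {q1} to {q3}")
```

Reporting a median with an IQR, as the study does (41, IQR 31 to 44), conveys both the central tendency and the spread induced by the stochastic variation.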
APA, Harvard, Vancouver, ISO, and other styles
43

Webster, Matt, David Western, Dejanira Araiza-Illan, Clare Dixon, Kerstin Eder, Michael Fisher, and Anthony G. Pipe. "A corroborative approach to verification and validation of human–robot teams." International Journal of Robotics Research 39, no. 1 (November 25, 2019): 73–99. http://dx.doi.org/10.1177/0278364919883338.

Full text
Abstract:
We present an approach for the verification and validation (V&V) of robot assistants in the context of human–robot interactions, to demonstrate their trustworthiness through corroborative evidence of their safety and functional correctness. Key challenges include the complex and unpredictable nature of the real world in which assistant and service robots operate, the limitations on available V&V techniques when used individually, and the consequent lack of confidence in the V&V results. Our approach, called corroborative V&V, addresses these challenges by combining several different V&V techniques; in this paper we use formal verification (model checking), simulation-based testing, and user validation in experiments with a real robot. This combination of approaches allows V&V of the human–robot interaction task at different levels of modeling detail and thoroughness of exploration, thus overcoming the individual limitations of each technique. We demonstrate our approach through a handover task, the most critical part of a complex cooperative manufacturing scenario, for which we propose safety and liveness requirements to verify and validate. Should the resulting V&V evidence present discrepancies, an iterative process between the different V&V techniques takes place until corroboration between the V&V techniques is gained from refining and improving the assets (i.e., system and requirement models) to represent the human–robot interaction task in a more truthful manner. Therefore, corroborative V&V affords a systematic approach to “meta-V&V,” in which different V&V techniques can be used to corroborate and check one another, increasing the level of certainty in the results of V&V.
APA, Harvard, Vancouver, ISO, and other styles
44

Stankaitis, Paulius, Alexei Iliasov, Tsutomu Kobayashi, Yamine Aït-Ameur, Fuyuki Ishikawa, and Alexander Romanovsky. "A refinement-based development of a distributed signalling system." Formal Aspects of Computing 33, no. 6 (November 24, 2021): 1009–36. http://dx.doi.org/10.1007/s00165-021-00567-y.

Full text
Abstract:
Decentralised railway signalling systems have the potential to increase the capacity and availability of railway networks and to reduce their maintenance costs. However, given the safety-critical nature of railway signalling and the complexity of novel distributed signalling solutions, their safety should be guaranteed by thorough system validation methods. To achieve such a high level of safety assurance for these complex signalling systems, scenario-based testing methods are far from sufficient, even though they are still widely used in the industry. Formal verification is an alternative approach which provides a rigorous way of verifying complex systems and has been successfully used in the railway domain. Despite these successes, little work has been done on applying formal methods to distributed railway systems. In our research we are working towards a multifaceted formal development methodology for complex railway signalling systems. The methodology is based on the Event-B modelling language, which provides an expressive modelling language, stepwise development, and proof-based model verification. In this paper, we present the application of the methodology to the development and verification of a distributed protocol for the reservation of railway sections. The main challenge of this work is developing a distributed protocol which ensures the safety and liveness of the distributed railway system when message delays are allowed in the model.
APA, Harvard, Vancouver, ISO, and other styles
45

ILLARRAMENDI REZABAL, MIREN, ASIER IRIARTE, AITOR ARRIETA AGUERRI,, GOIURIA SAGARDUI MENDIETA, and FELIX LARRINAGA BARRENECHEA. "DIGITAL SAFETY MANAGER: IOT SERVICE TO ASSURE THE SAFE BEHAVIOUR OF MACHINES AND CONTROLS IN THE DIGITAL INDUSTRY." DYNA 97, no. 1 (January 1, 2022): 18–22. http://dx.doi.org/10.6036/10243.

Full text
Abstract:
The digital industry requires increasingly complex and reliable software systems that must control and make critical decisions at runtime. As a consequence, the verification and validation of these systems has become a major research challenge. At design and development time, model testing techniques are used, while run-time verification aims to check that a system satisfies a given property; the latter technique complements the former. The solution presented in this paper targets embedded systems whose software components are designed as state machines defined in the Unified Modelling Language (UML). The CRESCO (C++ REflective State-Machines based observable software COmponents) platform generates software components that provide internal information at runtime, and the verifier uses this information to check system-level reliability/safety contracts. The verifier detects when a system contract is violated and initiates a safeState process to prevent dangerous scenarios. These contracts are defined over internal information from the software components that make up the system. Thus, as demonstrated in the reported experiment, the robustness of the system is increased. All software components (controllers), as well as the verifier, have been deployed as services (producers/consumers) of the Arrowhead IoT platform: the controllers are deployed on local Arrowhead platforms (Edge) and the verifier (Safety Manager) is deployed on an Arrowhead platform (Cloud) that consumes the controllers on the Edge and ensures the proper functioning of the plant controllers. Keywords: run-time monitoring, robustness, software components, contracts, software models, state machines
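The run-time contract checking described in this abstract can be sketched as a small monitor. The class, contract, and state names below are illustrative assumptions, not the CRESCO or Arrowhead API:

```python
# A monitor observes state reported by components and triggers a
# safe-state action when a system-level contract is violated.

class SafetyMonitor:
    def __init__(self, contract, on_violation):
        self.contract = contract          # predicate over component states
        self.on_violation = on_violation  # safe-state handler
        self.states = {}

    def report(self, component, state):
        """Called by instrumented components whenever their state changes."""
        self.states[component] = state
        if not self.contract(self.states):
            self.on_violation(dict(self.states))

violations = []
# Hypothetical contract: the press may only run while the guard is closed.
contract = lambda s: not (s.get("press") == "RUNNING"
                          and s.get("guard") == "OPEN")
monitor = SafetyMonitor(contract, violations.append)

monitor.report("guard", "CLOSED")
monitor.report("press", "RUNNING")  # contract holds
monitor.report("guard", "OPEN")     # violation: safe-state handler fires
print(len(violations))              # 1
```

The handler plays the role of the abstract's safeState process: it receives a snapshot of the offending system state and can drive the plant to a safe configuration.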
APA, Harvard, Vancouver, ISO, and other styles
46

Elahi, Mohammad M., and Seyed M. Hashemi. "A Framework for Extension of Dynamic Finite Element Formulation to Flexural Vibration Analysis of Thin Plates." Shock and Vibration 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/5905417.

Full text
Abstract:
Dynamic Finite Element formulation is a powerful technique that combines the accuracy of the exact analysis with wide applicability of the finite element method. The infinite dimensionality of the exact solution space of plate equation has been a major challenge for development of such elements for the dynamic analysis of flexible two-dimensional structures. In this research, a framework for such extension based on subset solutions is proposed. An example element is then developed and implemented in MATLAB® software for numerical testing, verification, and validation purposes. Although the presented formulation is not exact, the element exhibits good convergence characteristics and can be further enriched using the proposed framework.
APA, Harvard, Vancouver, ISO, and other styles
47

Benthem, P., R. Wayth, E. de Lera Acedo, K. Zarb Adami, M. Alderighi, C. Belli, P. Bolli, et al. "The Aperture Array Verification System 1: System overview and early commissioning results." Astronomy & Astrophysics 655 (October 28, 2021): A5. http://dx.doi.org/10.1051/0004-6361/202040086.

Full text
Abstract:
The design and development process for the Square Kilometre Array (SKA) radio telescope’s Low Frequency Aperture Array component was progressed during the SKA pre-construction phase by an international consortium, with the goal of meeting requirements for a critical design review. As part of the development process a full-sized prototype SKA Low ‘station’ was deployed – the Aperture Array Verification System 1 (AAVS1). We provide a system overview and describe the commissioning results of AAVS1, which is a low frequency radio telescope with 256 dual-polarisation log-periodic dipole antennas working as a phased array. A detailed system description is provided, including an in-depth overview of relevant sub-systems, ranging from hardware, firmware, software, calibration, and control sub-systems. Early commissioning results cover initial bootstrapping, array calibration, stability testing, beam-forming, and on-sky sensitivity validation. Lessons learned are presented, along with future developments.
APA, Harvard, Vancouver, ISO, and other styles
49

Bertolino, Antonia, and Yvan Labiche. "Editorial for the special issue of STVR on the 5th IEEE International Conference on Software Testing, Verification, and Validation (ICST 2012)." Software Testing, Verification and Reliability 24, no. 5 (July 2, 2014): 339–40. http://dx.doi.org/10.1002/stvr.1541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Fraser, Gordon, and Darko Marinov. "Editorial for the special issue of STVR on the 8th IEEE International Conference on Software Testing, Verification, and Validation (ICST 2015)." Software Testing, Verification and Reliability 27, no. 6 (August 30, 2017): e1644. http://dx.doi.org/10.1002/stvr.1644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
