Dissertations / Theses on the topic 'Load Based Testing Validation'

Consult the top 36 dissertations / theses for your research on the topic 'Load Based Testing Validation.'

1

Cordova, Lucas Pascual. "Development and Validation of Feedback-Based Testing Tutor Tool to Support Software Testing Pedagogy." Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/31749.

Abstract:
Current testing education tools provide coverage deficiency feedback that either mimics industry code coverage tools or enumerates through the associated instructor tests that were absent from the student’s test suite. While useful, these types of feedback mechanisms are akin to revealing the solution and can inadvertently lead a student down a trial-and-error path, rather than using a systematic approach. In addition to an inferior learning experience, a student may become dependent on the presence of this feedback in the future. Considering these drawbacks, there exists an opportunity to develop and investigate alternative feedback mechanisms that promote positive reinforcement of testing concepts. We believe that using an inquiry-based learning approach is a better alternative (to simply providing the answers) where students can construct and reconstruct their knowledge through discovery and guided learning techniques. To facilitate this, we present Testing Tutor, a web-based assignment submission platform to support different levels of testing pedagogy via a customizable feedback engine. This dissertation is based on the experiences of using Testing Tutor at different levels of the curriculum. The results indicate that the groups using conceptual feedback produced higher-quality test suites (achieved higher average code coverage, fewer redundant tests, and higher rates of improvement) than the groups that received traditional code coverage feedback. Furthermore, students also produced higher quality test suites when the conceptual feedback was tailored to task-level for lower division student groups and self-regulating-level for upper division student groups. We plan to perform additional studies with the following objectives: 1) improve the feedback mechanisms; 2) understand the effectiveness of Testing Tutor’s feedback mechanisms at different levels of the curriculum; and 3) understand how Testing Tutor can be used as a tool for instructors to gauge learning and determine whether intervention is necessary to improve students’ learning.
2

Granda Juca, María Fernanda. "Testing-Based Conceptual Schema Validation in a Model-Driven Environment." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/89091.

Abstract:
Despite much scepticism and many obstacles to its adoption, Model-Driven Development (MDD) is being used and improved to provide many inherent benefits for industry. One of its greatest benefits is the ability to handle the complexity of software development by raising the abstraction level. Models are expressed using concepts that are not related to a specific implementation technology (e.g. Unified Modelling Language (UML), Object Constraint Language (OCL), Action Language for Foundational UML (ALF)), which means that the models can be easier to specify, maintain and document. Since in Model-Driven Engineering (MDE) the primary artefacts are the conceptual models, efforts are focused on their creation, testing and evolution at different levels of abstraction through transformations, because if a conceptual schema has defects, these are passed on to the following stages, including coding. Thus, one of the challenges for researchers and developers in Model-Driven Development is being able to identify defects early on, at the conceptual schema level, as this helps reduce development costs and improve software quality. Over the last decade, little research work has been performed in this area. Some of the causes of this are the high theoretical complexity of testing conceptual schemas and the lack of adequate software support. This research area thus admits new methods and techniques, facing challenges such as the generation of test cases using information external to the conceptual schemas (i.e. requirements), the measurement of possible automation, the selection and prioritization of test cases, the need for an efficient support tool using standard semantics, and timely feedback to support the software quality assurance process and facilitate decision-making based on the analysis and interpretation of the results. The aim of this thesis is to mitigate some of the problems that affect conceptual schema validation by providing a novel testing-based validation framework based on Model-Driven Development. The use of MDD improves abstraction, automation and reuse, which allows us to alleviate the complexity of our validation framework. Furthermore, by leveraging MDD techniques (such as metamodeling, model transformations, and models at runtime), our framework supports four phases of the testing process: test design, test case generation, test case execution and the evaluation of the results. In order to provide software support for our proposal, we developed the CoSTest ALF-based testing environment. To ensure that CoSTest offers the necessary functionality, we first identified a set of functional requirements. Once these requirements were identified, we defined the architecture and testing environment of the validation framework, and finally we implemented the architecture in the Eclipse context. CoSTest has been developed to test several properties of the executable model, such as syntactic correctness (i.e. all the elements in the model conform to the syntax of the language in which it is described), consistency between the structural and behavioural parts (its integrity constraints) and completeness (i.e. all possible changes on the system state can be performed through the execution of the operations defined in the executable model). For defective models, the CoSTest report returns meaningful feedback that helps locate and repair any defects detected.
Granda Juca, MF. (2017). Testing-Based Conceptual Schema Validation in a Model-Driven Environment [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/89091
3

Kara, Ismihan Refika. "Automated Navigation Model Extraction For Web Load Testing." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613992/index.pdf.

Abstract:
Web pages serve a huge number of internet users in nearly every area. Adequate testing is needed to address the problems of web domains for more efficient and accurate services. We present an automated tool to test web applications against execution errors and the errors that occur when many users connect to the same server concurrently. Our tool, called NaMoX, obtains the clickable elements of the web pages and creates a model using a depth-first search algorithm. NaMoX simulates a number of users, parses the developed model, and tests the model by branch coverage analysis. We have performed experiments on five web sites. We report the response times when a click operation is performed. We have found 188 errors in total. Quality metrics are extracted and applied to the case studies.
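
The abstract above gives only a high-level description of NaMoX. Purely as an illustrative sketch of the general idea (depth-first extraction of a navigation model from clickable elements, followed by a timed replay), and not as code from the thesis, a toy version might look like the following Python, where the page model and the fetch function are hypothetical stand-ins:

    # Illustrative sketch only: depth-first extraction of a navigation model from
    # "clickable" page elements, then a simple timed replay. Not NaMoX itself.
    import time

    def get_clickables(page):
        # Hypothetical: identifiers of clickable targets found on a page.
        return page.get("clickables", [])

    def build_navigation_model(pages, start):
        """Depth-first search over clickables; each edge is one 'click' transition."""
        model, stack, visited = [], [start], set()
        while stack:
            url = stack.pop()
            if url in visited:
                continue
            visited.add(url)
            for target in get_clickables(pages[url]):
                model.append((url, target))
                if target in pages:
                    stack.append(target)
        return model

    def replay(model, fetch):
        """Replay every transition once and record the response time."""
        timings = []
        for source, target in model:
            t0 = time.perf_counter()
            fetch(target)                      # hypothetical HTTP request
            timings.append((source, target, time.perf_counter() - t0))
        return timings

    # Toy site description and a dummy fetch function.
    site = {"/": {"clickables": ["/login", "/about"]},
            "/login": {"clickables": ["/"]},
            "/about": {"clickables": []}}
    nav = build_navigation_model(site, "/")
    print(replay(nav, fetch=lambda url: None))
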
4

Likki, Srinivas Reddy. "TESTING AND VALIDATION OF A CORRELATION BASED TRANSITION MODEL USING LOCAL VARIABLES." UKnowledge, 2004. http://uknowledge.uky.edu/gradschool_theses/319.

Abstract:
A systematic approach to testing and validating transition models is developed and employed in the testing of a recently developed transition model. The testing methodology uses efficient computational tools and a wide range of test cases. The computational tools include a boundary layer code, a single-zone Navier-Stokes solver, and a multi-block Navier-Stokes solver that uses MPI and is capable of handling complex geometries and moving grids. Test cases include simple flat plate experiments, cascade experiments, and unsteady wake/blade interaction experiments. The test cases are used to assess the predictive capabilities of the transition model under various effects such as free stream turbulence intensity, Reynolds number variations, pressure gradient, flow separation, and unsteady wake/blade interaction. Using the above test cases and computational tools, a method is developed to validate transition models. The transition model is first implemented in the boundary layer code and tested for simple flat plate cases. Then the transition model is implemented in the single-zone Navier-Stokes solver and tested for hysteresis effects for flat plate cases. Finally, the transition model is implemented in the multi-block Navier-Stokes solver and tested for compressor and turbine cascade cases, followed by unsteady wake/blade interaction experiments. Using the method developed, a new correlation-based transition model (Menter et al., 2004) that uses local variables is tested and validated. The new model predicted good results for high free stream turbulence and high Reynolds number cases. For low free stream turbulence and low Reynolds number cases, the results were satisfactory.
5

Gilbert, Andy Michael. "Validation of a laboratory method for accelerated fatigue testing of bridge deck panels with a rolling wheel load." Thesis, Montana State University, 2012. http://etd.lib.montana.edu/etd/2012/gilbert/GilbertA0512.pdf.

Abstract:
The Western Transportation Institute (WTI) was engaged by the California Department of Transportation (Caltrans) to investigate the performance of various bridge deck rehabilitation surface treatments. This study requires that full-scale reinforced bridge deck slabs be tested in a laboratory environment. The deck slabs are to be tested by applying repeated passes of a rolling wheel load to damage the slabs to certain levels of deterioration. The slabs will be mounted in a frame for testing to impose specific support constraints necessary to generate realistic box girder bridge behavior. The intent of the present study was to design the panel support frame and validate that it provides the required restraint conditions needed for testing as well as to determine if it will be possible to generate the damage required in future deck slabs in a realistic time frame. This validation was accomplished by performing an experimental study in which a sample test slab was loaded to failure in one of the bays of the support frame. The slab was loaded with a stationary hydraulic jack over a contact area resembling that of a standard dual tire footprint. In addition, the finite element modeling software, ANSYS, was used to model the laboratory test to aid in interpreting the experimental results. The results from the laboratory test and the related findings from the finite element model were presented in terms of cracking behavior, deflection histories, strain measurements in the steel reinforcement, ultimate capacity, and mode of failure. The results were used in conjunction with the finite element model to validate the performance of the support frame. It was determined that the support frame provides the restraint conditions needed to create the in-service stress conditions of interest in the bridge deck slabs. A fatigue life model that was developed by past researchers was used to assess the expected performance of the deck specimens under the proposed rolling wheel loads.
6

Jayaram, Vinay B. "Experimental Study of Scan Based Transition Fault Testing Techniques." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/31146.

Abstract:
The presence of delay-inducing defects is causing increasing concern in the semiconductor industry today. To test for such delay-inducing defects, scan-based transition fault testing techniques are being implemented. There exist organized techniques to generate test patterns for the transition fault model and the two popular methods being used are Broad-side delay test (Launch-from-capture) and Skewed load delay test (Launch-from-shift). Each method has its own drawbacks and many practical issues are associated with pattern generation and application. Our work focuses on the implementation and comparison of these transition fault testing techniques on multiple industrial ASIC designs. In this thesis, we present results from multiple designs and compare the two techniques with respect to test coverage, pattern volume and pattern generation time. For both methods, we discuss the effects of multiple clock domains, tester hardware considerations, false and multi-cycle paths and the implications of using a low cost tester. We then consider the implications of pattern volume on testing both stuck-at and transition faults and the effects of using transition fault patterns to test stuck-at faults. Finally, we present results from our analysis on switching activity of nets in the design, while executing transition fault patterns.
Master of Science
7

Zientarski, Lauren Ann. "Wind Tunnel Testing of a Variable Camber Compliant Wing with a Unique Dual Load Cell Test Fixture." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1448893315.

8

Bierman, Anandi. "Refinement and validation of a microsatellite based identification and parentage testing panel in horses." Diss., University of Pretoria, 2010. http://hdl.handle.net/2263/25557.

Abstract:
The power of microsatellite markers lies in their ability to identify. Whether it is identifying genes and associating them with known phenotypes, or identifying and discerning individuals from one another, the role they play in the genetic field has been immense. Parentage testing of horses today is done via molecular means as opposed to serology. Microsatellite marker panels are decided upon by bodies such as the International Society for Animal Genetics (ISAG) in order to uphold international genotyping standards. The current horse microsatellite marker panel is not fully characterized, and many markers are amplified by primers that were originally designed for linkage studies and were never intended for multiplex PCR analysis. The aim of this study was to refine and validate the current marker panel used for horses through sequencing of the repeat elements and flanking regions, as well as the design of new primers for the setup of a marker panel incorporating more microsatellites and better primers. Sequencing of microsatellite flanking regions revealed that much variation lies within the regions flanking a microsatellite repeat element. Sequencing of the repeat element showed that not all markers are simple repeats, as was previously thought. The primers used to amplify microsatellite markers for horses were re-designed in the course of this study, utilizing knowledge gained from flanking region variation and repeat element length. New primers and known allele sizes allowed for the implementation of a nomenclature system in horses based on repeat element length as opposed to alphabet letters. By incorporating more markers into the panel, it was hoped that greater discriminatory power would be achieved. Measures of genetic diversity such as Observed Heterozygosity and Polymorphism Information Content showed negligible differences between the two panels; however, genotyping data from the old ISAG panel of nine markers showed that the probability of excluding an individual in a parentage test improved when more markers were used.
Dissertation (MSc)--University of Pretoria, 2010.
Production Animal Studies
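
As background on the diversity measures named in the abstract above, the sketch below shows how expected heterozygosity and Polymorphism Information Content (PIC) are conventionally computed from allele frequencies. These are the standard population-genetics formulas (Botstein et al., 1980), not code or data from the dissertation:

    # Standard population-genetics formulas, shown only to illustrate the
    # diversity measures named in the abstract; not code from the dissertation.
    def expected_heterozygosity(freqs):
        # He = 1 - sum(p_i^2) over the allele frequencies p_i of one marker
        return 1.0 - sum(p * p for p in freqs)

    def pic(freqs):
        # PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2
        hom = sum(p * p for p in freqs)
        cross = sum(2 * (freqs[i] ** 2) * (freqs[j] ** 2)
                    for i in range(len(freqs))
                    for j in range(i + 1, len(freqs)))
        return 1.0 - hom - cross

    alleles = [0.4, 0.3, 0.2, 0.1]   # example allele frequencies for one marker
    print(round(expected_heterozygosity(alleles), 3), round(pic(alleles), 3))
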
9

Fatolitis, Philip. "Initial Validation of Novel Performance-Based Measures: Mental Rotation and Psychomotor Ability." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6223.

Abstract:
Given the high-risk nature of military flight operations and the significant resources required to train U.S. Naval Aviation personnel, continual improvement is required in the selection process. In addition to general commissioning requirements and aeromedical standards, the U.S. Navy utilizes the Aviation Selection Test Battery (ASTB) to select commissioned aviation students. Although the ASTB has been a good predictor of aviation student performance in training, it was proposed that incremental improvement could be gained with the introduction of novel, computer administered performance-based measures: Block Rotation (BRT) and a Navy-developed Compensatory Tracking task. This work constituted an initial validation of the BRT, an interactive virtual analog of Shepard-Metzler's (1971) Mental Rotation task that was developed with the intention of quantifying mental rotation and psychomotor ability. For Compensatory Tracking, this work sought to determine if data gathered concord with results in extant literature, confirming the validity of the task. Data from the BRT were examined to determine task reliability and to formulate relevant quantitative/predictive performance human models. Results showed that the BRT performance is a valid spatial ability predictor whose output can be modeled, and that Compensatory Tracking task data concord with the psychometric properties of tracking tasks that have been previously presented in the literature.
Ph.D.
Doctorate
Psychology
Sciences
10

Gunther, Matthew. "Design and Validation of an LED-Based Solar Simulator for Solar Cell and Thermal Testing." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2302.

Abstract:
An LED-based solar simulator has been designed, constructed, and qualified under ASTM standards for use in the Cal Poly Space Environments Laboratory. The availability of this simulator will enhance the capability of undergraduate students to evaluate solar cell and thermal coating performance, and offers further research opportunities. The requirements of ASTM E927-19 for solar simulators intended for photovoltaic cell testing were used primarily, supplemented by information from ASTM E491-73 for solar simulators intended for spacecraft thermal vacuum testing. Three main criteria were identified as design goals - spectral match ratio, spatial non-uniformity, and temporal instability. An electrical design for an LED-based simulator to satisfy these criteria was developed and implemented, making use of existing lab equipment where possible to minimize cost. The resulting simulator meets the desired spatial non-uniformity and temporal instability requirements of ASTM E927-19, but falls short of the spectral match ratio needed. This is shown to be due to a calibration issue that is easily amended via software. The simulator is overall Class UCB under ASTM E927, and Class CCC under ASTM E491. The simulator was used to conduct the same laboratory procedure for solar cell I-V curve testing as performed by undergraduate students, showing excellent promise as a course enhancement.
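
As a side note to the abstract above, the spatial non-uniformity and temporal instability figures used in solar-simulator classification are commonly computed as a (max - min)/(max + min) percentage. The sketch below only illustrates that calculation on hypothetical readings; the ASTM class thresholds and the spectral match ratio are omitted:

    # Sketch of the two uniformity-style metrics named in the abstract, using the
    # usual (max - min) / (max + min) form; ASTM class thresholds are omitted.
    def nonuniformity(values):
        """Percent non-uniformity/instability over a set of irradiance readings."""
        hi, lo = max(values), min(values)
        return 100.0 * (hi - lo) / (hi + lo)

    grid_readings = [998, 1004, 1010, 995, 1002]   # hypothetical W/m^2 over the test plane
    time_readings = [1001, 1000, 1003, 999]        # hypothetical readings over time
    print(nonuniformity(grid_readings))            # spatial non-uniformity
    print(nonuniformity(time_readings))            # temporal instability
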
11

Marculescu, Bogdan. "Interactive Search-Based Software Testing : Development, Evaluation, and Deployment." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15083.

12

Yilmaz, Levent. "Specifying and Verifying Collaborative Behavior in Component-Based Systems." Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/26494.

Abstract:
In a parameterized collaboration design, one views software as a collection of components that play specific roles in interacting, giving rise to collaborative behavior. From this perspective, collaboration designs revolve around reusing collaborations that typify certain design patterns. Unfortunately, verifying that active, concurrently executing components obey the synchronization and communication requirements needed for the collaboration to work is a serious problem. At least two major complications arise in concurrent settings: (1) it may not be possible to analytically identify components that violate the synchronization constraints required by a collaboration, and (2) evolving participants in a collaboration independently often gives rise to unanticipated synchronization conflicts. This work presents a solution technique that addresses both of these problems. Local (that is, role-to-role) synchronization consistency conditions are formalized and associated decidable inference mechanisms are developed to determine mutual compatibility and safe refinement of synchronization behavior. More specifically, given generic parameterized collaborations and components with specific roles, mutual compatibility analysis verifies that the provided and required synchronization models are consistent and integrate correctly. Safe refinement, on the other hand, guarantees that the local synchronization behavior is maintained consistently as the roles and the collaboration are refined during development. This form of local consistency is necessary, but insufficient to guarantee a consistent collaboration overall. As a result, a new notion of global consistency (that is, among multiple components playing multiple roles) is introduced: causal process constraint analysis. A method for capturing, constraining, and analyzing global causal processes, which arise due to causal interference and interaction of components, is presented. Principally, the method allows one to: (1) represent the intended causal processes in terms of interactions depicted in UML collaboration graphs; (2) formulate constraints on such interactions and their evolution; and (3) check that the causal process constraints are satisfied by the observed behavior of the component(s) at run-time.
Ph. D.
13

Mahajan, Rajneesh. "A Multi-Language Goal-Tree Based Functional Test Planning System." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/34487.

Abstract:
Test plans are used to guide, organize and document the testing activities during the hardware design process. Manual test planning and configuration are known to be labor-intensive, time-consuming and error-prone. It is desirable to develop efficient approaches to model testing and to develop test tools to automate test-planning activities. With the emergence of new hardware design paradigms, there is a need to develop more specialized description languages. However, adopting a new language for hardware-based designs involves adapting the existing design and verification tool suite for the new language. This is a very time-consuming and capital-intensive process. To ease the adoption of new description languages, it is desirable to develop multi-language support methodologies for design and test tools. This thesis addresses a subset of these problems. It presents a goal-tree-based test methodology that is very effective for functional testing of hardware models in multiple application domains. Then it describes an approach for achieving a high degree of language independence using ideas of data abstraction. It also presents an automated test-planning tool called the "Goal Tree System" (GTS), which provides an implementation of the goal-tree methodology and the multi-language support ideas. We demonstrate the use of this tool by testing models developed in VHDL and SystemC. We also present the design aspects of the Goal Tree System, which enable it to work across multiple platforms and with multiple simulators.
Master of Science
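
Purely as a hypothetical illustration of the goal-tree idea described above, and not the GTS implementation, a test goal can be modelled as a node that is satisfied only when its own test results and all of its sub-goals pass:

    # Hypothetical illustration of a goal tree: a goal passes only if all of its
    # sub-goals and its own leaf-level test outcomes pass. Not the GTS tool.
    class Goal:
        def __init__(self, name, children=None, results=None):
            self.name = name
            self.children = children or []   # sub-goals
            self.results = results or []     # leaf-level test outcomes (True/False)

        def satisfied(self):
            leaves_ok = all(self.results) if self.results else True
            return leaves_ok and all(c.satisfied() for c in self.children)

    alu = Goal("ALU ops", results=[True, True])
    ctrl = Goal("control path", results=[True, False])
    root = Goal("functional test plan", children=[alu, ctrl])
    print(root.satisfied())   # False: one control-path test failed
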
14

Robison, Aaron. "Modeling and Validation of Tension-Element Based Mechanisms for Golf Ball-Club Impact." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1523.pdf.

15

Schwandt, Michael Joseph. "Risk-Based Framework for Focused Assessment of System Dynamics Models." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/27575.

Abstract:
The lack of a consistent, rigorous testing methodology has contributed to system dynamics often not being well received in traditional modeling application areas or within the broader research community. With a foundation in taxonomy classification techniques, this research developed a modeling process risk-based framework focused on the objectives of the system dynamics methodology. This approach assists modelers in prioritizing the modeling process risk management requirements and resources for a project by employing a modeling process risk dictionary, a modeling process risk management methods database, and an algorithm for selecting methods based on a modeling process risk assessment. System dynamics benefits from the modeling process risk management approach include more efficient use of risk management resources and more effective management of modeling process risks. In addition, the approach includes qualities that support the achievement of verification, validation, and accreditation (VV&A) principles. A system dynamics model was developed as the apparatus for assessing the impacts of various modeling process risk management policies, including those found in the traditional system dynamics method, the more commonly practiced method, and the method as modified by the integration of the modeling risk management framework. These policies are defined by common parameters within the model, allowing comparison of system behavior as affected by the policy parameters. The system dynamics model enabled the testing of the potential value of the system dynamics modeling process framework. Results from a fractional factorial designed experiment identified the sensitive parameters that affect the key result measures established to assess model behavior, focusing on timeliness, effectiveness, and quality. The experimental process highlighted the capabilities of system dynamics modeling to provide insight from the model structure, in addition to the system results. These insights supported assessment of the policies that were tested. The proposed modeling process risk management policy delivered results that were substantially better than those of the baseline policy. The simulated project was delivered 26% faster, with 49% fewer rework discovery resources, and 1% higher actual work content in the project. The proposed policy also delivered superior results when compared to other common approaches to system dynamics modeling process risk management.
Ph. D.
16

Imanian, James A. "Automated test case generation for reactive software systems based on environment models." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FImanian.pdf.

Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 2005.
Thesis Advisor(s): Mikhail Auguston, James B. Michael. Includes bibliographical references (p. 55-56). Also available online.
17

Farjoud, Alireza. "Physics-based Modeling Techniques for Analysis and Design of Advanced Suspension Systems with Experimental Validation." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77354.

Abstract:
This research undertakes the problem of vibration control of vehicular and structural systems using intelligent materials and controllable devices. Advanced modeling tools validated with experimental test data are developed to help with understanding the fundamentals as well as advanced and novel applications of smart and conventional suspension systems. The project can be divided into two major parts. The first part is focused on development of novel smart suspensions using Magneto-Rheological (MR) fluids in unique configurations in order to improve efficiency, controllability, and safety of today's vehicles. In this part of the research, attention is paid to fundamentals as well as advanced applications of MR technology. Extensive rheological studies, both theoretical and experimental, are performed to understand the basic behaviors of MR fluids as complex non-Newtonian fluids in novel applications. Using the knowledge obtained from fundamental studies of MR fluids, unique application concepts are investigated that lead to design, development, and experimental testing of two new classes of smart devices: MR Hybrid Dampers and MR Squeeze Mounts. Multiple generations of these devices are built and tested as proof of concept prototypes. Advanced physics-based mathematical models are developed for these devices. Experimental test data are used to validate the models and great agreement is obtained. The models are used as design tools at preliminary as well as detailed design stages of device development. The significant finding in this part of the research is that MR fluids can deliver a much larger window of controllable force in squeeze mode compared to shear and valve modes which can be used in various applications. The second part of the research is devoted to the development of innovative design tools for suspension design and tuning. Various components of suspension systems are studied and modeled using a new physics-based modeling approach. The component of main interest is the shim stack assembly in hydraulic dampers which is modeled using energy and variational methods. A major finding is that the shims should be modeled individually in order to represent the sliding effects properly when the shim stack is deflected. Next, the individual component models are integrated into a full suspension model. This model is then used as a tool for suspension design, synthesis, and tuning. Using this design tool, suspension engineers in manufacturing companies and other industrial sections can easily perform parametric studies without the need to carry out time consuming and expensive field and laboratory tests.
Ph. D.
18

Brown, Reagan. "An examination of the structure and predictability of Myers-Briggs Type Indicator preferences using a job component validity strategy based on the Common-Metric Questionnaire." Thesis, This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-07102009-040356/.

19

De, Sousa Barroca José Duarte. "Verification and validation of knowledge-based clinical decision support systems - a practical approach : A descriptive case study at Cambio CDS." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104935.

Abstract:
The use of clinical decision support (CDS) systems has grown progressively during the past decades. CDS systems are associated with improved patient safety and outcomes, better prescription and diagnosing practices by clinicians and lower healthcare costs. Quality assurance of these systems is critical, given the potentially severe consequences of any errors. Yet, after several decades of research, there is still no consensual or standardized approach to their verification and validation (V&V). This project is a descriptive and exploratory case study aiming to provide a practical description of how Cambio CDS, a market-leading developer of CDS services, conducts its V&V process. Qualitative methods including semi-structured interviews and coding-based textual data analysis were used to elicit the description of the V&V approaches used by the company. The results showed that the company’s V&V methodology is strongly influenced by the company’s model-driven development approach, a strong focus and leveraging of domain knowledge and good testing practices with a focus on automation and test-driven development. A few suggestions for future directions were discussed.
20

Hays, Mark A. "A Fault-Based Model of Fault Localization Techniques." UKnowledge, 2014. http://uknowledge.uky.edu/cs_etds/21.

Abstract:
Every day, ordinary people depend on software working properly. We take it for granted; from banking software, to railroad switching software, to flight control software, to software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main technique/activity used to ensure the quality of software is testing. Often it is the only quality assurance activity undertaken, making it that much more important. In a typical experiment studying these techniques, a researcher will intentionally seed a fault (intentionally breaking the functionality of some source code) with the hopes that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily; there is potential for bias in the selection of the faults. Previous researchers have established an ontology for understanding or expressing this bias called fault size. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future. While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification. This is a very worrisome situation because it can lead to incorrect conclusions about the significance of research. This research introduces an algorithm, MeansTest, which helps automate some aspects of the selection of appropriate statistical techniques. The results of an evaluation of MeansTest suggest that MeansTest performs well relative to its peers. This research then surveys recent work in software testing using MeansTest to evaluate the significance of researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
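
The abstract does not describe the internals of MeansTest. As a hedged sketch of the general problem it addresses, i.e. choosing a location test suited to the data, one common heuristic is to check normality first and then pick a parametric or non-parametric test. This is a generic approach (assuming SciPy is available), not the MeansTest algorithm itself:

    # Generic normality-based test selection, in the spirit of the problem the
    # abstract describes; this is NOT the MeansTest algorithm.
    from scipy import stats

    def compare_means(a, b, alpha=0.05):
        normal = (stats.shapiro(a).pvalue > alpha and
                  stats.shapiro(b).pvalue > alpha)
        if normal:
            name, res = "t-test", stats.ttest_ind(a, b, equal_var=False)
        else:
            name, res = "Mann-Whitney U", stats.mannwhitneyu(a, b)
        return name, res.pvalue

    # Hypothetical suspiciousness scores from two fault localization techniques.
    scores_a = [0.62, 0.55, 0.71, 0.64, 0.58, 0.69]
    scores_b = [0.44, 0.51, 0.47, 0.39, 0.53, 0.42]
    print(compare_means(scores_a, scores_b))
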
21

Vögele, Christian [Verfasser], Helmut [Akademischer Betreuer] Krcmar, Helmut [Gutachter] Krcmar, and Alexander [Gutachter] Pretschner. "Automatic Extraction and Selection of Workload Specifications for Load Testing and Model-Based Performance Prediction / Christian Vögele ; Gutachter: Helmut Krcmar, Alexander Pretschner ; Betreuer: Helmut Krcmar." München : Universitätsbibliothek der TU München, 2018. http://d-nb.info/116262115X/34.

22

Bhuyan, Md Delwar Hossain. "Statistical transfer matrix-based damage localization and quantification for civil structures." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S082/document.

Abstract:
Vibration-based damage localization has become an important issue for Structural Health Monitoring (SHM). In particular, the Stochastic Dynamic Damage Locating Vector (SDDLV) method is an output-only damage localization method based on both a Finite Element (FE) model of the structure and modal parameters estimated from output-only measurements in the reference and damaged states of the system, interrogating changes in the transfer matrix. Firstly, the SDDLV method has been extended with a joint statistical approach for multiple mode sets, overcoming the theoretical limitation on the number of modes in previous works. Another problem is that the performance of the method can change considerably depending on the Laplace variable where the transfer function is evaluated. Particular attention is given to this choice and how to optimize it. Secondly, the Influence Line Damage Location (ILDL) approach, which is complementary to the SDDLV approach, has been extended with a statistical framework. Thirdly, a sensitivity approach for small damages has been developed based on the transfer matrix difference, allowing damage localization through statistical tests in a Gaussian framework and, in addition, the quantification of the damage in a second step. Finally, the proposed methods are validated on numerical simulations and their performance is tested extensively in numerous case studies on lab experiments.
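
As background on the damage-locating-vector idea that SDDLV builds on, stated here only as a generic sketch with generic notation (the thesis itself works with output-only estimates and adds the statistical framework), the core relation is:

    \[
        \delta G(s) \;=\; G_{\mathrm{damaged}}(s) \;-\; G_{\mathrm{reference}}(s),
        \qquad
        \delta G(s)\, v \;=\; 0 .
    \]

A vector v in the null space of the transfer matrix change, applied as a load to the reference finite element model, yields a stress field in which elements belonging to the damaged region carry approximately zero stress, so they can be flagged as damage candidates by statistical tests on the computed stresses.
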
23

Fadel, Aline Cristine 1984. "Técnicas de testes aplicadas a software embarcado em redes ópticas." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/267792.

Abstract:
Advisors: Regina Lúcia de Oliveira Moraes, Eliane Martins
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Tecnologia
This work presents the details and the results of automated and manual tests that used the fault injection technique and were applied to a GPON optical network. In the first experiment the test was automated, and it performed the emulation of physical faults based on the state machine of the network's embedded software. In this test, an optical switch controlled by a test robot is used. The second experiment was a manual test, which injected faults into protocol communication messages exchanged through the optical network in order to validate the fault tolerance mechanisms of the network's central software. This experiment used the Conformance and Fault injection methodology to prepare, execute and report the results of the test cases. In both experiments, a standard test documentation format was used to facilitate the reproduction of the tests, so that they can be applied in other environments. By applying both kinds of tests, the optical network reaches greater reliability, availability and robustness. These attributes are essential for systems that require high dependability.
Master's
Technology and Innovation
Master in Technology
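
The abstract above describes injecting faults into protocol messages to exercise fault-tolerance mechanisms. A very small, hypothetical illustration of that style of message corruption follows; the message layout, the send function and the expected reaction are stand-ins, not the actual test harness or the GPON protocol:

    # Hypothetical message-level fault injection: corrupt one byte of an outgoing
    # protocol message and check that the system under test reacts gracefully.
    import random

    def inject_fault(message, position=None):
        pos = random.randrange(len(message)) if position is None else position
        corrupted = bytearray(message)
        corrupted[pos] ^= 0xFF                 # flip every bit of one byte
        return bytes(corrupted)

    def run_case(send, message):
        # Expectation: a handled error reply rather than a crash or hang.
        reply = send(inject_fault(message))
        return reply is not None

    print(inject_fault(b"\x01\x02LINKUP\x00", position=3))
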
24

Ridene, Youssef. "Ingéniérie dirigée par les modèles pour la gestion de la variabilité dans le test d'applications mobiles." Thesis, Pau, 2011. http://www.theses.fr/2011PAUU3010/document.

Abstract:
Mobile applications have increased substantially in volume with the emergence of smartphones. Ensuring high quality and a successful user experience is crucial to the success of such applications. Only an efficient test procedure allows developers to meet these requirements. In the context of embedded mobile applications, testing is costly and repetitive. This is mainly due to the large number of different mobile devices. In this thesis, we describe MATeL, a Domain-Specific Modeling Language (DSML) for designing test scenarios for mobile applications. Its abstract syntax, i.e. a metamodel and OCL constraints, enables the test designer to manipulate mobile application testing concepts such as the tester, the mobile device, or expected and obtained results. It also enables him/her to enrich these scenarios with variability points in the spirit of Software Product-Line engineering, which can specify variations in the test according to the characteristics of one mobile device or a set of mobile devices. The concrete syntax of MATeL, which is inspired by UML sequence diagrams, and its environment based on Eclipse allow the user to easily develop scenarios. MATeL is built upon an industrial platform (a test bed) in order to be able to run scenarios on several different phones. The approach is illustrated in this thesis through use cases and experiments that allowed us to verify and validate our contribution.
25

Enderlin, Ivan. "Génération automatique de tests unitaires avec Praspel, un langage de spécification pour PHP." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2067/document.

Abstract:
The work presented in this thesis is about the validation of PHP programs through a new specification language, along with its tools. This work follows three axes: specification language, automatic test data generation and automatic unit test generation. The first contribution is Praspel, a new specification language for PHP, based on Design by Contract. Praspel specifies data with realistic domains, which are new structures that allow data to be validated and generated. Based on a contract, we are able to perform Contract-based Testing, i.e. using contracts to automatically generate unit tests. The second contribution is about test data generation. For booleans, integers and floating point numbers, a uniform random generation is used. For arrays, a dedicated constraint solver has been implemented and used. For strings, a grammar description language along with an LL(⋆) compiler compiler and several algorithms for data generation are used. Finally, object generation is supported. The third contribution defines contract coverage criteria. These criteria provide test objectives. All these contributions have been implemented and evaluated in tools distributed to the PHP community.
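
Praspel itself targets PHP. Purely as a language-agnostic sketch of the data-generation strategies listed above (uniform random draws for scalars, simple recursive generation for structured values), and not as part of Praspel, one might write:

    # Language-agnostic sketch of contract-driven test data generation in the
    # spirit described above; names and domains are illustrative only.
    import random, string

    def gen_boolean():
        return random.choice([False, True])

    def gen_integer(lo=-2**31, hi=2**31 - 1):
        return random.randint(lo, hi)          # uniform draw over the domain

    def gen_float(lo=-1e6, hi=1e6):
        return random.uniform(lo, hi)

    def gen_string(alphabet=string.ascii_letters, max_len=16):
        return "".join(random.choice(alphabet)
                       for _ in range(random.randint(0, max_len)))

    def gen_array(element_gen, max_len=8):
        return [element_gen() for _ in range(random.randint(0, max_len))]

    # A "realistic domain" for one routine's arguments, then one generated input.
    contract = {"age": lambda: gen_integer(0, 130),
                "name": gen_string,
                "scores": lambda: gen_array(gen_float)}
    print({k: g() for k, g in contract.items()})
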
26

Müller, Christoph. "Untersuchung von Holzwerkstoffen unter Schlagbelastung zur Beurteilung der Werkstoffeignung für den Maschinenbau." Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-184057.

Abstract:
In the present work wood-based materials are compared under static bending load and impact bending load. Several thermal stress conditions are applied to selected materials, furthermore one relevant notch geometry is tested. The objective of these tests is to investigate the suitability of distinct wood materials for security relevant applications with the occurrence of impact loads. For this purpose the basics of instrumented impact testing and wood-based materials are acquired. The state of the technology and a comprehensive analysis of original studies are subsequently presented. On this basis an own impact pendulum was developed to allow force-acceleration measurement with high sample rates. The apparatus is validated by several methods and the achieved signals are tested for plausibility. A general approach of testing for adequate sample size is implemented and applied to the tested samples. Based on the characteristic values of the static bending and impact bending tests a classification model for material selection and comparison is proposed. The classification model is an integral approach for mechanical performance assessment of wood-based materials. In conclusion a method for impact testing of components (in future studies) is introduced
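The statistical check for adequate sample size is only named in the abstract; as one illustration of what such a check can look like (an assumption made here, not the procedure developed in the thesis), the following sketch accepts a sample when the approximate 95% confidence interval of the mean is narrower than a chosen relative precision, using a normal approximation and a nonzero mean.

```python
import math
import statistics

def is_sample_size_adequate(values, rel_precision=0.10, z=1.96):
    """Return (adequate, estimated required n) for the mean of `values`."""
    n = len(values)
    mean = statistics.mean(values)
    sd = statistics.stdev(values)               # sample standard deviation
    half_width = z * sd / math.sqrt(n)          # CI half-width of the mean
    adequate = half_width <= rel_precision * abs(mean)
    n_required = math.ceil((z * sd / (rel_precision * abs(mean))) ** 2)
    return adequate, n_required

# Example with hypothetical impact-bending energies (J) of one material group.
energies = [5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9]
print(is_sample_size_adequate(energies))
```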
APA, Harvard, Vancouver, ISO, and other styles
27

Günther, Thomas [Verfasser], and A. [Akademischer Betreuer] Albers. "Methode zur Optimierung von Motor-Dauerlauf-programmen als Teil des Validierungsprozesses auf der Basis thermomechanisch schädigungsgleicher Ersatzkollektive am Beispiel eines Al-Si-Zylinderkopfes = A method for optimization of engine durability-test-programs as a part of the validation process based on a load-collective with thermomechanical equivalent damage by the example of an Al-Si cylinder head / Thomas Günther ; Betreuer: A. Albers." Karlsruhe : KIT-Bibliothek, 2021. http://d-nb.info/123507255X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Aleahmad, Turadg. "Improving Students’ Study Practices Through the Principled Design of Research Probes." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/129.

Full text
Abstract:
A key challenge of the learning sciences is moving research results into practice. Educators on the front lines perceive little value in the outputs of education research and demand more “usable knowledge”. This work instead explores the potential of usable artifacts to translate knowledge into practice, adding scientists as stakeholders in an interaction design process. The contributions are two effective systems, the scientific and contextual principles behind their design, and a research model for conducting scientific research through interaction design. College students' study practices are the domain chosen for the development of these methods. Iterative ethnographic fieldwork identified two systems that would be likely to advance both learning in practice and knowledge for applying the employed theories in general. Nudge was designed to improve students' study time management by regularly emailing students explicit recommended study activities. It reconceptualizes the syllabus as an interactive guide that fits into modern students' attention streams. Examplify was designed to improve how students learn from worked example problems by modularizing them into steps and scaffolding students' metacognitive behaviors through problem-solving and self-explanation prompts. It combines these techniques in a way that is exceedingly easy to author, using existing answer keys and students' self-evaluations. Nudge and Examplify were evaluated experimentally over a full semester of a lecture-based introductory chemistry course. Nudge messages increased students' sense of achievement and interacted with students' existing time management skills to improve exam grades for poorer students. Among students who could choose whether to receive them, 80% did. Students with access to Examplify had higher exam scores (d=0.26), especially on delayed measures of learning (d=0.40). A key design decision in Examplify was not clearly resolvable by existing theory and so was tested experimentally by comparing two variants, one without prompts to solve the steps. The variant without problem solving was less effective (d=0.77) and less used, while usage rates of the variant with problem solving increased over time. These results support the use of the design methods employed and provide specific empirical recommendations for future designs of these and similar systems for implementing theory in practice.
APA, Harvard, Vancouver, ISO, and other styles
29

"Trace-based post-silicon validation for VLSI circuits." 2012. http://library.cuhk.edu.hk/record=b5549661.

Full text
Abstract:
The ever-increasing design complexity of modern circuits challenges our ability to verify their correctness. Therefore, various errors are more likely to escape the pre-silicon verification process and to manifest themselves after design tape-out. To address this problem, effective post-silicon validation is essential for eliminating design bugs before integrated circuit (IC) products are shipped to customers. In the debug process, it has become increasingly popular to insert design-for-debug (DfD) structures into the original design to facilitate real-time debug without interfering with the circuits' normal operation. For this so-called trace-based post-silicon validation technique, the key question is how to design such DfD circuits to achieve sufficient observability and controllability during the debug process with limited hardware overhead. However, in today's VLSI design flow, this is unfortunately conducted manually, based on designers' own experience, which cannot guarantee debug quality. To tackle this problem, we propose a set of automatic tracing solutions as well as innovative DfD designs in this thesis. First, we develop a novel trace signal selection technique to maximize visibility for debugging functional design errors. To strengthen the capability for tackling these errors, we then introduce a multiplexed signal tracing strategy with a trace signal grouping algorithm for maximizing the probability of catching the evidence propagated from functional design errors. Then, to effectively localize speedpath-related electrical errors, we propose an innovative trace signal selection solution as well as a trace qualification technique. On the other hand, we introduce several low-cost interconnection fabrics to effectively transfer trace data in post-silicon validation. We first propose to reuse the existing test channel for real-time trace data transfer, so that the routing cost of debug hardware is dramatically reduced. The method is further improved to avoid data corruption in multi-core debug. We then develop a novel interconnection fabric design and optimization technique, combining a multiplexer network and a non-blocking network, to achieve high debug flexibility with minimized hardware cost. Moreover, we introduce a hybrid trace interconnection fabric that is able to tolerate unknown values in “golden vectors”, at the cost of little extra DfD overhead. With this fabric, we develop a systematic signal tracing procedure to automatically localize erroneous signals with just a few debug runs. Our empirical evaluation shows that the solutions presented in this thesis can greatly improve the validation quality of VLSI circuits, and ultimately enable the design and fabrication of reliable electronic devices.
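The abstract's notion of selecting trace signals to maximize visibility can be illustrated with a small greedy sketch. This is a simplification under assumed restorability estimates, not the selection algorithm developed in the thesis: restorability[i][j] is taken here as the estimated probability that tracing signal i allows signal j to be reconstructed offline.

```python
def select_trace_signals(restorability, trace_width):
    """Greedily pick trace signals that add the most estimated visibility."""
    n = len(restorability)
    selected = []
    visibility = [0.0] * n          # best restoration probability per signal so far
    for _ in range(trace_width):
        best_gain, best_sig = 0.0, None
        for i in range(n):
            if i in selected:
                continue
            gain = sum(max(restorability[i][j] - visibility[j], 0.0) for j in range(n))
            if gain > best_gain:
                best_gain, best_sig = gain, i
        if best_sig is None:        # no remaining signal improves visibility
            break
        selected.append(best_sig)
        visibility = [max(visibility[j], restorability[best_sig][j]) for j in range(n)]
    return selected, sum(visibility)

# Example: 4 candidate signals, a trace buffer 2 signals wide.
R = [
    [1.0, 0.6, 0.1, 0.0],
    [0.5, 1.0, 0.2, 0.1],
    [0.0, 0.1, 1.0, 0.7],
    [0.0, 0.0, 0.6, 1.0],
]
print(select_trace_signals(R, trace_width=2))   # e.g. ([1, 2], 3.2)
```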
Liu, Xiao.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2012.
Includes bibliographical references (leaves 143-152).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Table of contents:
Abstract
Acknowledgement
Preface
Chapter 1: Introduction
1.1 VLSI Design Trends and Validation Challenges
1.2 Key Contributions and Thesis Outline
Chapter 2: State of the Art on Post-Silicon Validation
2.1 Trace Signal Selection
2.2 Interconnection Fabric Design for Trace Data Transfer
2.3 Trace Data Compression
2.4 Trace-Based Debug Control
Chapter 3: Signal Selection for Visibility Enhancement
3.1 Preliminaries and Summary of Contributions
3.2 Restorability Formulation
3.2.1 Terminologies
3.2.2 Gate-Level Restorabilities
3.3 Trace Signal Selection
3.3.1 Circuit Level Visibility Calculation
3.3.2 Trace Signal Selection Methodology
3.3.3 Trace Signal Selection Enhancements
3.4 Experimental Results
3.4.1 Experiment Setup
3.4.2 Experimental Results
3.5 Conclusion
Chapter 4: Multiplexed Tracing for Design Error
4.1 Preliminaries and Summary of Contributions
4.2 Design Error Visibility Metric
4.3 Proposed Methodology
4.3.1 Supporting DfD Hardware for Multiplexed Signal Tracing
4.3.2 Signal Grouping Algorithm
4.4 Experimental Results
4.4.1 Experiment Setup
4.4.2 Experimental Results
4.5 Conclusion
Chapter 5: Tracing for Electrical Error
5.1 Preliminaries and Summary of Contributions
5.2 Observing Speedpath-Related Electrical Errors
5.2.1 Speedpath-Related Electrical Error Model
5.2.2 Speedpath-Related Electrical Error Detection Quality
5.3 Trace Signal Selection
5.3.1 Relation Cube Extraction
5.3.2 Signal Selection for Non-Zero-Probability Error Detection
5.3.3 Trace Signal Selection for Error Detection Quality Enhancement
5.4 Trace Data Qualification
5.5 Experimental Results
5.6 Conclusion
Chapter 6: Reusing Test Access Mechanisms
6.1 Preliminaries and Summary of Contributions
6.1.1 SoC Test Architectures
6.1.2 SoC Post-Silicon Validation Architectures
6.1.3 Summary of Contributions
6.2 Overview of the Proposed Debug Data Transfer Framework
6.3 Proposed DfD Structures
6.3.1 Modified Wrapper Design
6.3.2 Trace Buffer Interface Design
6.4 Sharing TAM for Multi-Core Debug Data Transfer
6.4.1 Core Masking for TestRail Architecture
6.4.2 Channel Split
6.5 Experimental Results
6.6 Conclusion
Chapter 7: Interconnection Fabric for Flexible Tracing
7.1 Preliminaries and Summary of Contributions
7.2 Proposed Interconnection Fabric Design
7.2.1 Multiplexer Network for Mutually-Exclusive Signals
7.2.2 Non-Blocking Concentration Network for Concurrently-Accessible Signals
7.3 Experimental Results
7.4 Conclusion
Chapter 8: Interconnection Fabric for Systematic Tracing
8.1 Preliminaries and Summary of Contributions
8.2 Proposed Trace Interconnection Fabric
8.3 Proposed Error Evidence Localization Methodology
8.4 Experimental Results
8.4.1 Experimental Setup
8.4.2 Results and Discussion
8.5 Conclusion
Chapter 9: Conclusion
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
30

(9593063), Li Cheng. "Laboratory Load-Based Testing, Performance Mapping and Rating of Residential Cooling Equipment." Thesis, 2020.

Find full text
Abstract:
In the U.S., unitary residential air conditioners are rated using AHRI Standard 210/240, which is inadequate for crediting equipment with advanced controls and variable-speed components because the ratings are based on the results of steady-state laboratory tests. In contrast, this work presents a load-based testing and rating approach that captures the performance of the equipment together with its integrated controls and thermostat responses, and is therefore more representative of the field. In this approach, representative building sensible and latent loads are emulated in a psychrometric test facility at different indoor and outdoor test conditions using a virtual building model. The indoor test room conditions are continuously adjusted to emulate the dynamic response of the virtual building to the test equipment's sensible and latent cooling rates, and the equipment's dynamic response is measured. Meanwhile, the inlet temperatures to the test equipment's thermostat are independently controlled to track the same virtual building response using a thermostat environment emulator that encloses the test thermostat and provides typical flow conditions; its design and control are presented in this work. Climate-specific cooling seasonal performance ratings can be determined by propagating load-based test results through a temperature-bin method to estimate a seasonal coefficient of performance (SCOP). In addition, a next-generation rating approach is developed that extends load-based testing to performance mapping, such that the SCOP can be obtained from building simulations that incorporate specific building types, climates and an equipment-specific performance map.
In this work, the proposed approaches were applied to test and rate a variable-speed residential heat pump operating in cooling mode. Trained with results from only 12 load-based test intervals carried out on the test equipment, a quasi-steady-state mapping model was able to map the equipment's performance across almost the entire operating envelope within ±10% error, with R² values very close to 1. Using the identified performance map, the next-generation SCOP was obtained from an annual simulation in EnergyPlus, in which the map was coupled to a typical single-family building in Albuquerque, NM. Compared to the temperature-bin-based rating, this simulation-based rating reflects the equipment's annual field performance for a specific building type and climate more comprehensively and appropriately, as the rating is extended from automated laboratory load-based testing and performance mapping.
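As a rough illustration of the temperature-bin idea mentioned in the abstract, the sketch below accumulates delivered cooling and electrical energy bin by bin; the bin hours, building loads and COP values are made-up numbers for illustration, not AHRI or thesis data.

```python
def seasonal_cop(bins):
    """Seasonal COP = total delivered cooling / total electrical energy over all bins."""
    total_cooling = 0.0   # kWh of cooling delivered over the season
    total_power = 0.0     # kWh of electricity consumed over the season
    for outdoor_temp_c, hours, load_kw, cop in bins:
        cooling = load_kw * hours
        total_cooling += cooling
        total_power += cooling / cop
    return total_cooling / total_power

# (outdoor temperature in degC, hours in bin, average building load in kW, measured COP)
bins = [
    (27.5, 400, 2.0, 4.8),
    (32.5, 250, 3.5, 3.9),
    (37.5, 100, 5.0, 3.1),
    (42.5,  20, 6.0, 2.6),
]
print(f"SCOP = {seasonal_cop(bins):.2f}")
```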
APA, Harvard, Vancouver, ISO, and other styles
31

Xiao, Dongyi. "Inter-model, analytical, and experimental validation of a heat balance based residential cooling load calculation procedure." 2006. http://digital.library.okstate.edu/etd/umi-okstate-2013.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kong, Xiaoxiao. "Validating the retelling task of Test for English Majors Band 4 Oral in China: Evidence from a corpus-based exploration." Thesis, 2017. http://hdl.handle.net/1885/168698.

Full text
Abstract:
The retelling task is the first task in the Test for English Majors Band 4 Oral (TEM4-Oral), a nationwide English speaking test for undergraduate English major students in China. Despite its wide use, little work has been done on ensuring its validity. This is reflected in the absence of a systematic validation program or review since its launch in 1999 (Duan, 2011). In the case of the retelling task, the provision of a source story which candidates ‘retell’ raises concerns about the use of source material and how it is linked to language proficiency (e.g. Plakans & Gebril, 2012). This study examines the validity of the TEM4-Oral retelling task through analysing features of test-taker performance across four test administrations. Combining corpus analysis and qualitative explorations, it addresses how candidates’ proficiency levels are reflected in the discourse characteristics of their retellings. It also explores the consistency of scoring across task versions. Findings suggest significant differences in the discourse features of higher- and lower-ranked retellings, which partly supports the score interpretation from test-taker performance in the retelling task. On the other hand, some inconsistencies in candidates’ discourse features have been observed across administrations. This indicates that the generalisability of the task is threatened. This study constitutes an initial step in the validation process of TEM4-Oral. As well as shedding light on the design of the retelling task, it highlights the importance of test validation in the Chinese context, which would benefit thousands of people.
APA, Harvard, Vancouver, ISO, and other styles
33

Wilson, Irene Rose. "The validation of a performance-based assessment battery." Diss., 2001. http://hdl.handle.net/10500/16493.

Full text
Abstract:
Legislative pressures are being brought to bear on South African employers to demonstrate that occupational assessment is scientifically valid and culture-fair. The development of valid and reliable performance-based assessment tools will enable employers to meet these requirements. The general aim of this research was to validate a performance-based assessment battery for the placement of sales representatives. A literature survey examined alternative assessment measures and methods of performance measurement, leading to the conclusion that the combination of the work sample as a predictor measure and the managerial rating of performance as a criterion measure offers a practical and cost-effective assessment process to the sales manager. The empirical study involved 54 salespeople working for the Commercial division of an oil marketing company, selling products and services to the commercial and industrial market. The empirical study found a significant correlation between the performance of sales representatives on the performance-based assessment battery for the entry level of the career ladder and their behaviour in the field as measured by the managerial performance rating instrument. The limitations of the sample, however, prevent the results from being generalised to other organisations.
Industrial & Organisational Psychology
M.A. (Industrial Psychology)
APA, Harvard, Vancouver, ISO, and other styles
34

Oliveira, Bruno Moisés Teixeira. "A pattern-based approach for ETL systems modelling and validation." Doctoral thesis, 2018. http://hdl.handle.net/1822/56801.

Full text
Abstract:
Doctoral thesis in Informatics
A data warehousing system stores data in an integrated and consistent way, making it an ideal repository to support decision-making processes. However, to keep this repository properly updated it is necessary to access a variety of information sources, transform the gathered data according to the established decision-making requirements, and load the data into the data warehousing system's repository, the data warehouse. All these tasks are performed by highly sophisticated programs that together make up what is known as the ETL system. The ETL (Extract, Transform, Load) system is responsible for performing all those tasks and is considered a very time-consuming, error-prone and complex process, involving several participants from different knowledge domains. ETL systems are among the most important components of a data warehousing system and are strongly influenced by the complexity of business requirements and by their change and evolution. These aspects influence not only the structure of the data warehouse itself but also the schemas of the information sources involved, since they must handle data with complex requirements and transformation routines. Moreover, ETL systems are data-oriented processes composed of dozens of granular tasks arranged according to specific languages and architectures, which results in technical and complex artefacts that are difficult to understand and maintain. Despite the efforts of several researchers on modelling and implementing such systems, a solid and simpler approach is still lacking that provides the necessary bridges to create the conceptual and logical models and to validate them before their final implementation. However, a specific subset of these tasks, together with their relationships, can be grouped into abstract constructs. Thus, to facilitate ETL planning and implementation, this work presents a set of constructs representing meta-models (patterns) specially designed to map standard ETL procedures, providing the necessary bridges to represent them at the conceptual level and to map them to execution primitives. These ETL patterns comprise a set of abstract components that can be configured for specific application scenarios. With them, generic models can be built, simplifying process views and providing methods for carrying the acquired expertise over to new applications based on well-proven practices: general solutions described as skeletons that are configured and instantiated according to a set of specific integration requirements. The ETL pattern-based approach presented here uses BPMN (Business Process Model and Notation) for modelling conceptual ETL workflows and maps them to real execution primitives using a domain-specific language, which allows the generation of instances that can be executed in a commercial ETL tool. The work demonstrates the feasibility and effectiveness of the ETL pattern-based approach by analysing a data integration test scenario based on the proposed pattern framework.
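As a rough illustration of the configurable-pattern idea, the sketch below (written in Python rather than BPMN or the thesis's domain-specific language; the pattern name and configuration fields are assumptions made here for illustration) shows a generic surrogate-key pattern configured for a concrete source and composed into a simple linear pipeline.

```python
class SurrogateKeyPattern:
    """Generic pattern: replace a business key with a surrogate key, creating one if missing."""
    def __init__(self, business_key, key_map=None):
        self.business_key = business_key
        self.key_map = key_map or {}            # business key -> surrogate key
        self._next_key = len(self.key_map) + 1

    def apply(self, record):
        bk = record[self.business_key]
        if bk not in self.key_map:              # unseen member: assign a new key
            self.key_map[bk] = self._next_key
            self._next_key += 1
        out = dict(record)
        out['sk'] = self.key_map[bk]
        return out

def run_pipeline(rows, patterns):
    """Compose configured patterns into a linear ETL flow."""
    for row in rows:
        for pattern in patterns:
            row = pattern.apply(row)
        yield row

source = [{'customer_id': 'C042', 'amount': 10.0},
          {'customer_id': 'C007', 'amount': 25.5},
          {'customer_id': 'C042', 'amount': 3.2}]
pipeline = [SurrogateKeyPattern(business_key='customer_id')]
for loaded in run_pipeline(source, pipeline):
    print(loaded)
```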
APA, Harvard, Vancouver, ISO, and other styles
35

Ward-Cox, Maxine Welland. "Validation of a rating scale for distance education university student essays in a literature-based module." Thesis, 2020. http://hdl.handle.net/10500/26935.

Full text
Abstract:
This thesis reports on the findings of a study to validate an assessment scale for the writing of first-year students, in an Open Distance Learning (ODL) context, in their responses to English literary texts. The study involved the interrogation of an existing scale, adapted from Jacobs et al. (1981), which was being used for the Foundations in English Literary Studies (ENG1501) module at the University of South Africa. Despite the credibility of the original scale, the modified version had been used in language- and literature-based courses in the English Studies Department since 1998 and had not been updated or empirically tested in the context of the target group. Thus, the gap that this study addressed was the need for a valid rating scale that takes into account the complexities of literature teaching and ODL in the current South African university environment. The thesis includes a review of the debate on validity and the validation of rating scales both internationally and in South Africa, the ODL environment, and the assessment of assignments based on literary texts, particularly in the multicultural South African context. The methodology included research of both a quantitative and a qualitative nature. The outcome was an empirically validated scale that should contribute to the quest for accuracy in assessing academic writing and meet the formative and summative assessment needs of the target group.
English Studies
D. Litt. et Phil. (English)
APA, Harvard, Vancouver, ISO, and other styles
36

Werner, Edith Benedicta Maria. "Learning Finite State Machine Specifications from Test Cases." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B3D7-E.

Full text
APA, Harvard, Vancouver, ISO, and other styles