
Dissertations / Theses on the topic 'Computer controlled testing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 24 dissertations / theses for your research on the topic 'Computer controlled testing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bhatia, Sanjay. "Software tools for computer-controlled fatigue testing." Thesis, Virginia Tech, 1986. http://hdl.handle.net/10919/45749.

Full text
Abstract:
Past efforts at implementing Load Spectrum Generation and Neuber Control have centered around minicomputers and analog circuits. The use of a personal computer to implement these tasks is presented. On implementation of the Load Spectrum Generation software, the response of the Materials Testing System was investigated for distortion and attenuation. In particular, the effect of the resolution of the waveform on the test system response was noted. There was negligible attenuation for full-scale frequencies of up to 20 Hz. Greater waveform resolution was required at lower frequencies than at higher frequencies. On implementation of the Neuber Control program, the accuracy obtained at the Neuber hyperbolas was noted. Better accuracy was obtained at ramp frequencies below 0.1 Hz. Based on the results obtained after implementing the Load Spectrum Generator program and the Neuber Control program, the performance of the personal computer in controlling fatigue tests is evaluated. Cost effectiveness and versatility favor the use of a personal computer for the control of fatigue tests.
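For readers unfamiliar with Neuber control, the sketch below is a minimal illustration (not code from the thesis) of the core calculation such a controller performs: solving Neuber's rule, sigma * epsilon = (Kt * S)^2 / E, for the local stress and strain at a notch, here under an assumed Ramberg-Osgood cyclic stress-strain curve. The parameters Kt, E, K_prime and n_prime are hypothetical.

```python
# Minimal sketch of a Neuber-rule solver for strain-controlled fatigue testing.
# Assumes a Ramberg-Osgood cyclic stress-strain curve; all parameters are illustrative.

def ramberg_osgood_strain(sigma, E, K_prime, n_prime):
    """Total local strain for a given local stress (elastic + plastic parts)."""
    return sigma / E + (sigma / K_prime) ** (1.0 / n_prime)

def neuber_local_stress(S_nominal, Kt, E, K_prime, n_prime, tol=1e-6):
    """Solve sigma * eps = (Kt * S)^2 / E for the local stress by bisection."""
    target = (Kt * S_nominal) ** 2 / E
    lo, hi = 0.0, Kt * S_nominal          # local stress cannot exceed the elastic estimate
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * ramberg_osgood_strain(mid, E, K_prime, n_prime) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Hypothetical values: 200 MPa nominal stress, Kt = 3, steel-like constants.
    sigma_local = neuber_local_stress(S_nominal=200.0, Kt=3.0,
                                      E=200e3, K_prime=1200.0, n_prime=0.2)
    eps_local = ramberg_osgood_strain(sigma_local, 200e3, 1200.0, 0.2)
    print(f"local stress {sigma_local:.1f} MPa, local strain {eps_local:.5f}")
```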
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
2

Nesmith, Willie Morgan Jr. "Development of a computer controlled multiaxial cubical testing apparatus." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/24144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Maksym, Geoffrey N. "Computer controlled oscillator for dynamic testing of biological soft tissue strips." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=69742.

Full text
Abstract:
A computer controlled tissue strip oscillator has been constructed for the advanced study of lung parenchyma mechanics. The data acquisition and control are handled by a 486 personal computer. The tissue is maintained in a continuously circulating bath of Krebs-Ringer solution at 37 °C bubbled with a 95% O₂ and 5% CO₂ gas mixture. The oscillator has a useful bandwidth up to 20 Hz at 0.5 cm amplitude and a step response with no overshoot at all amplitudes. The movement range of the motor is 5 cm with a resolution of 13.6 µm. The force resolution is 66 µN with a range of 0.25 N. A tissue preconditioning protocol was developed as a standard maneuver to be conducted prior to applying length perturbations about specific operating stresses. The tissue strip oscillator has been successfully tested on dog lung tissue strips.
APA, Harvard, Vancouver, ISO, and other styles
4

Wengenack, Nancy L. "Design and testing of a computer-controlled square wave voltammetry instrument /." Online version of thesis, 1987. http://hdl.handle.net/1850/8853.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ahmed, Tanveer, and Madhu Sudhana Raju. "Integrating Exploratory Testing In Software Testing Life Cycle, A Controlled Experiment." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3414.

Full text
Abstract:
Context. Software testing is one of the crucial phases in the software development life cycle (SDLC). Among the different manual testing methods in software testing, Exploratory testing (ET) uses no predefined test cases to detect defects. Objectives. The main objective of this study is to test the effectiveness of ET in detecting defects at different software test levels. The objective is achieved by formulating hypotheses, which are later tested for acceptance or rejection. Methods. The methods used in this thesis are a literature review and an experiment. The literature review is conducted to gain in-depth knowledge of the topic of ET and to collect data relevant to ET. The experiment was performed to test hypotheses specific to the three different testing levels: unit, integration and system. Results. The experimental results showed that using ET did not find all the seeded defects at the three levels of unit, integration and system testing. The results were analyzed using statistical tests and interpreted with the help of bar graphs. Conclusions. We conclude that more research is required to generalize the benefits of ET at different test levels. In particular, a qualitative study to highlight the factors responsible for the success and failure of ET is desirable. We also encourage a replication of this experiment with subjects who have sound technical and domain knowledge.
APA, Harvard, Vancouver, ISO, and other styles
6

Yellowhair, Julius Eldon. "Advanced Technologies for Fabrication and Testing of Large Flat Mirrors." Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/195245.

Full text
Abstract:
Classical fabrication methods alone do not enable manufacturing of large flat mirrors that are much larger than 1 meter. This dissertation presents the development of enabling technologies for manufacturing large high performance flat mirrors and lays the foundation for manufacturing very large flat mirrors. The enabling fabrication and testing methods were developed during the manufacture of a 1.6 meter flat. The key advantage over classical methods is that our method is scalable to larger flat mirrors up to 8 m in diameter. Large tools were used during surface grinding and coarse polishing of the 1.6 m flat. During this stage, electronic levels provided efficient measurements on global surface changes in the mirror. The electronic levels measure surface inclination or slope very accurately. They measured slope changes across the mirror surface. From the slope information, we can obtain surface information. Over 2 m, the electronic levels can measure to 50 nm rms of low order aberrations that include power and astigmatism. The use of electronic levels for flatness measurements is analyzed in detail. Surface figuring was performed with smaller tools (size ranging from 15 cm to 40 cm in diameter). A radial stroker was developed and used to drive the smaller tools; the radial stroker provided variable tool stroke and rotation (up to 8 revolutions per minute). Polishing software, initially developed for stressed laps, enabled computer controlled polishing and was used to generate simulated removal profiles by optimizing tool stroke and dwell to reduce the high zones on the mirror surface. The resulting simulations from the polishing software were then applied to the real mirror. The scanning pentaprism and the 1 meter vibration insensitive Fizeau interferometer provided accurate and efficient surface testing to guide the remaining fabrication. The scanning pentaprism, another slope test, measured power to 9 nm rms over 2 meters. The Fizeau interferometer measured 1 meter subapertures and measured the 1.6 meter flat to 3 nm rms; the 1 meter reference flat was also calibrated to 3 nm rms. Both test systems are analyzed in detail. During surface figuring, the fabrication and testing were operated in a closed loop. The closed loop operation resulted in a rapid convergence of the mirror surface (11 nm rms power, and 6 nm rms surface irregularity). At present, the surface figure for the finished 1.6 m flat is state of the art for 2 meter class flat mirrors.
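As a side note (a generic sketch, not the reconstruction code used in the dissertation), the snippet below shows the slope-to-height step the abstract alludes to: integrating a line of equally spaced slope readings, such as electronic levels provide, into relative surface heights. The station spacing and slope values are invented.

```python
# Generic sketch: integrate equally spaced slope readings (e.g. from electronic levels)
# into relative surface heights along one scan line. Values are invented, not measured data.
import numpy as np

def slopes_to_heights(slopes_urad: np.ndarray, spacing_m: float) -> np.ndarray:
    """Cumulative trapezoidal integration of slope (microradians) into height (nanometres)."""
    slopes_rad = slopes_urad * 1e-6
    segments = 0.5 * (slopes_rad[1:] + slopes_rad[:-1]) * spacing_m   # height change per interval
    heights_m = np.concatenate(([0.0], np.cumsum(segments)))
    return (heights_m - heights_m.mean()) * 1e9                        # report in nm, zero-mean

if __name__ == "__main__":
    x = np.linspace(0.0, 1.6, 9)                                       # 9 stations across a 1.6 m line
    slopes = np.array([0.2, 0.4, 0.5, 0.3, 0.0, -0.3, -0.5, -0.4, -0.2])  # µrad, illustrative
    print(np.round(slopes_to_heights(slopes, spacing_m=x[1] - x[0]), 1))  # heights in nm
```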
APA, Harvard, Vancouver, ISO, and other styles
7

Bertolini, Cristiano. "Evaluation of GUI testing techniques for system crashing: from real to model-based controlled experiments." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2076.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Mobile applications are becoming increasingly complex, and so is testing them. Graphical user interface (GUI) testing is a current trend and is generally done by simulating user interactions. Several techniques have been proposed, in which efficiency (execution cost) and effectiveness (the ability to find bugs) are the aspects most desired by industry. However, more systematic evaluations are needed to identify which techniques improve the efficiency and effectiveness of such testing. This thesis presents an experimental evaluation of two GUI testing techniques, called DH and BxT, which are used to test mobile applications with a history of real errors. These techniques run for a long period of time (a 40 h timeout, for example), trying to identify the critical situations that lead the system to an unexpected state in which it may not continue its normal execution. This situation is called a crash state. The DH technique already existed and is used by the software industry; we propose another, called BxT. In a preliminary evaluation, we compared the effectiveness and efficiency of DH and BxT through a descriptive analysis. We showed that a systematic exploration, as performed by BxT, is a more interesting approach for detecting failures in mobile applications. Based on the preliminary results, we planned and executed a controlled experiment to obtain statistical evidence about their efficiency and effectiveness. Since both techniques are limited by a 40 h timeout, the controlled experiment provides partial results, and we therefore carried out a deeper investigation through survival analysis. This analysis makes it possible to find the crash probability of an application using both DH and BxT. Since controlled experiments are costly, we propose a strategy based on computational experiments using the PRISM language and its model checker in order to compare GUI testing techniques in general, and DH and BxT in particular. However, the results for DH and BxT have a limitation: the accuracy of the model is not statistically proven. Thus, we propose a strategy that consists of using the previous survival analysis results to calibrate our models. Finally, we use this strategy, with the calibrated models, to evaluate a new GUI testing technique called Hybrid-BxT (or simply H-BxT), which is a combination of DH and BxT.
APA, Harvard, Vancouver, ISO, and other styles
8

Gazes, Seth Brian. "Computer controlled device to independently control flow waveform parameters during organ culture and biomechanical testing of mouse carotid arteries." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31812.

Full text
Abstract:
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Rudy Gleason; Committee Member: Raymond Vito; Committee Member: W. Robert Taylor. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
9

Böhmer, Bianca. "Testing Numeric: Evidence from a randomized controlled trial of a computer based mathematics intervention in Cape Town high schools." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/18599.

Full text
Abstract:
This thesis presents the results of a randomized controlled trial conducted to evaluate a Grade 8 after-school mathematics intervention. The programme employed student coaches to facilitate classes in which Khan Academy resources were used to teach basic numeracy. Large gains of 0.321 standard deviations were observed on basic numeracy outcomes for learners who were selected to be on the programme. Similarly, learners in the treatment group also scored 0.246 standard deviations higher on core Grade 8 curriculum questions at endline. The improvements in mathematics outcomes were evident for learners throughout the distribution, and treatment learners outperformed control group learners on every subsection of the mathematics test. There was also no significant differential treatment effect by gender, race, home language, baseline typing speed or cognitive development. However, treatment learners with better English literacy at baseline scored significantly higher than learners in the bottom third on core Grade 8 curriculum questions. Additionally, despite close contact between control and treatment learners, no statistically significant evidence of spillover was detected.
APA, Harvard, Vancouver, ISO, and other styles
10

Söderlund, Sverker. "Performance of REST applications : Performance of REST applications in four different frameworks." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-64841.

Full text
Abstract:
More and more companies use a REST architecture to implement applications with an easy-to-use API. One important quality attribute of an application is performance. To understand how an application will perform, it is important to know how the selected framework performs. By testing the performance of different frameworks, it becomes easier for software developers to choose the right framework to meet their requirements and goals. At the time this paper was written, research in this area was limited. This paper answered the question of which framework among Express, .NET Core, Spring and Flask had the best performance. To know how the frameworks performed, the author needed to measure them. One way of measuring performance is the response time from the server. The author used a controlled experiment to collect the raw data from which the results were drawn. The author found that Spring had the best overall performance across the different categories. By analysing the results, the author also found that performance differed considerably between the frameworks in some categories.
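For context (a minimal sketch, not the experimental harness used in the thesis), the snippet below measures server response times for a REST endpoint the way such a controlled experiment typically does: issue repeated requests, time each one, and summarise the distribution. The endpoint URL and sample counts are placeholders.

```python
# Minimal sketch of a response-time benchmark for a REST endpoint.
# The endpoint URL and sample size are placeholders, not taken from the thesis.
import statistics
import time
import urllib.request

def measure_response_times(url: str, requests: int = 100) -> list[float]:
    """Issue sequential GET requests and record each response time in milliseconds."""
    times = []
    for _ in range(requests):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()                      # consume the body so transfer time is included
        times.append((time.perf_counter() - start) * 1000.0)
    return times

if __name__ == "__main__":
    samples = measure_response_times("http://localhost:8080/api/items", requests=50)
    print(f"mean {statistics.mean(samples):.2f} ms, "
          f"median {statistics.median(samples):.2f} ms, "
          f"p95 {sorted(samples)[int(0.95 * len(samples)) - 1]:.2f} ms")
```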
APA, Harvard, Vancouver, ISO, and other styles
11

Kuřímský, Lukáš. "Zařízení pro automatizovaná testování řídicích jednotek plynových kotlů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442519.

Full text
Abstract:
This diploma thesis deals with the design and implementation of a computer-controlled device for testing gas boiler control units, especially in the development phase. The reasons for creating the test facility are the inadequacy of older test systems and the need to automate existing testing. The test device under development consists of individual cards. Each card inserted into the motherboard performs its own function in the system and provides a special functionality that simulates the real conditions of the developed product. The basis of most cards is a microcontroller with a Cortex-M core, which communicates with the connected computer using the MODBUS protocol over the RS-485 communication interface. All cards on the bus are connected in parallel and behave as SLAVE devices, while the computer acts as the MASTER and requests data from, or sends commands to, the cards. The cards represent status switches (switching sensors), resistance and analog temperature sensors, and PWM inputs and outputs (for simulating feedback pumps or flow meters with pulse output). The cards also include a flame simulator, which reliably reproduces the electrical properties of a flame and at the same time acts as a fan simulator. The inputs of the control unit are handled by the input card, which is intended for digital detection of voltage presence in the range of 5 to 230 V DC and AC. In addition, a card that connects the power supply at zero voltage and disconnects it at zero current was created to supply the tested device with alternating voltage. A schematic diagram was designed or simulated for each card, its function was then verified, and on this basis the whole card was created, including the microcontroller firmware. The most suitable solution and the function of each card are carefully described and evaluated. All the requirements of the assignment were met, and the whole test device was manufactured and verified in four versions. In the future, the device is ready for the implementation of an automatic flame simulator and other improvements of the individual module cards.
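To make the MASTER/SLAVE exchange described above concrete (an illustration, not code from the thesis), the sketch below assembles a MODBUS RTU 'read holding registers' request with the standard CRC-16, the kind of frame a test PC could send to a module card over RS-485. The slave address and register numbers are hypothetical.

```python
# Minimal sketch of building a MODBUS RTU request frame such as the test PC (MASTER)
# could send to a module card (SLAVE) over RS-485. Addresses and register numbers
# are illustrative only.
import struct

def modbus_crc16(frame: bytes) -> int:
    """Standard MODBUS CRC-16 (polynomial 0xA001, initial value 0xFFFF)."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def read_holding_registers_request(slave: int, start_register: int, count: int) -> bytes:
    """Function code 0x03: read `count` holding registers starting at `start_register`."""
    pdu = struct.pack(">BBHH", slave, 0x03, start_register, count)
    return pdu + struct.pack("<H", modbus_crc16(pdu))   # CRC is transmitted low byte first

if __name__ == "__main__":
    frame = read_holding_registers_request(slave=1, start_register=0x0000, count=2)
    print(frame.hex(" "))   # prints: 01 03 00 00 00 02 c4 0b
```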
APA, Harvard, Vancouver, ISO, and other styles
12

Menozzi, Jerald Paul. "Microcomputer-based controller of coupled fluid pressures in triaxial stress testing." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/104016.

Full text
Abstract:
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1985.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING
Bibliography: leaf 42.
by Jerald Paul Menozzi Jr.
B.S.
APA, Harvard, Vancouver, ISO, and other styles
13

Trimmel, Stefan. "Evaluation of Model-Based Testing on a Base Station Controller." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12059.

Full text
Abstract:

This master thesis investigates how well suited the model-based testing process is for testing a new feature of a Base Station Controller. In model-based testing, the tester designs a behavioral model of the system under test, or some part of the system. This model is then given to a test generation tool that analyses the model and produces interesting test cases. These test cases can be run on the system either automatically or manually, depending on the type of setup.

In this report it is suggested that the behavioral model should be produced at as early a stage as possible and that it should be a collaboration between the test team and the design team.

The advantages of the model-based testing process are a better overview of the test cases, test cases that are always up to date, help in finding errors or contradictions in the requirements, and closer collaboration between the test team and the design team. The disadvantage of the model-based testing process is that it introduces more sources where an error can occur: the behavioral model can have errors, the layer between the model and the generated test cases can have errors, and the layer between the test cases and the system under test can have errors. This report also indicates that the time needed for testing will be longer compared with manual testing.

During the pilot of this master thesis, in which part of a new feature was tested, a test generation tool called Qtronic was used. This tool solves a very challenging task, generating test cases from a general behavioral model, and does so with good results. The tool provides many good things, but it also has its shortcomings. One of the biggest shortcomings is debugging the model to find errors. This step is very time consuming because it requires that test case generation be performed on the whole model. When there is a fault in the model, the test generation can take a very long time before the tool decides that it is impossible to cover the model.

Provided that the Qtronic tool is improved on the various issues suggested in the thesis, the most important of which is reducing the long debugging time, the next step can be to use model-based testing in a larger evaluation project at BSC Design, Ericsson.

APA, Harvard, Vancouver, ISO, and other styles
14

Tobin, Stephen M. "Construction and testing of an 80C86 based communications controller for the Petite Amateur Navy Satellite (PANSAT)." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA243722.

Full text
Abstract:
Thesis (M.S. in Engineering Science (Computer Systems))--Naval Postgraduate School, December 1990.
Thesis Advisor(s): Cotton, Mitchell L. Second Reader: Lee, Chin-Hwa. "December 1990." Description based on title screen as viewed on March 30, 2010. DTIC Descriptor(s): Artificial Satellites, Computer Communications, Students, Theses, Paper, Vehicles, Learning, Circuit Testers, Naval Equipment, Requirements, Control, Reliability Author(s) subject terms: Satellite Communications, Satellite Microprocessors. Includes bibliographical references (p. 91-93). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
15

Nowocin, John Kendall. "Microgrid risk reduction for design and validation testing using controller hardware in the loop." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111906.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-84).
As electric power customers look for reductions in the cost of energy, increases in the level of service reliability, and reductions in greenhouse gas emissions, a common solution is a microgrid. These microgrids are smaller power systems in which distributed energy resources are used to power local electric loads. This work demonstrates an improved approach to planning microgrids via satellite imagery, with a case study applied to India; the contribution of an anonymized real-world test feeder to the power systems community; the transition of geospatial information to a digital twin for an analysis of microgrid availability; and the process of developing a controller hardware-in-the-loop platform to integrate physical equipment controllers from manufacturers and to develop, test, and validate models by applying a general framework. The controller hardware-in-the-loop (CHIL) platform can provide the testing capabilities required as more functions are demanded of microgrid controllers. CHIL is one method to validate microgrid controller performance before equipment is installed. Microgrids promise to improve the reliability, resiliency, and efficiency of the nation's aging but critical power distribution systems. Models of common power systems equipment were developed to achieve realistic interactions with the microgrid controller under test. The CHIL testbed that was built at MIT Lincoln Laboratory is described, and the equipment models developed are openly available. This testbed was able to test microgrid controllers under a variety of scenarios, including islanding, short-circuit analysis, and cyber attack. The effort resulted in the successful demonstration of HIL simulation technology at two Technical Symposiums organized by the Mass Clean Energy Center (CEC) for utility distribution system engineers, project developers, systems integrators, equipment vendors, academia, regulators, City of Boston officials, and Commonwealth officials. Actual microgrid controller hardware was integrated along with actual commercial generator and inverter controller hardware in the microgrid feeder that is becoming the IEEE reference standard.
by John Kendall Nowocin.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
16

Agbogo, Adakole Michael. "Effect of computer based training and testing on structured on-the-job training programs / M.A. Agbogo." Thesis, North-West University, 2010. http://hdl.handle.net/10394/4566.

Full text
Abstract:
Human capital is the only resource within an organisation that can learn. Developing high levels of competence in employees is one of the most challenging issues in organisations. Off-the-Job training programs either miss the mark or are too far away from the performance setting to have the desired impact on employee competence. Studies have shown that unstructured On-the-Job Training (OJT) leads to increased error rates, lower productivity and decreased training efficiency, compared to structured On-the-Job Training (S-OJT). The proven efficiency and effectiveness of S-OJT make it especially suitable to meet this challenge. Though S-OJT has been around for a while, there has not been a proper integration of technology into the process. Every training approach, including S-OJT, is merely a means to an end, not an end in itself. The use of S-OJT helps to develop consistent, appropriate levels of employee competence. When employees have these competencies, e.g. better knowledge of the production processes, they can increase productivity, complete projects on time, lower defect rates, or achieve other outcomes of importance. These are the outcomes that matter to the organisation, and the effectiveness of S-OJT should be judged from this perspective. Researchers have consistently found that one way to improve learners' success is to increase the frequency of exams. Classes meet for a set number of times, and an instructor's decision to give more exams typically means that students have less time for learning activities during class meetings. How then can one have the best of both worlds, increasing the number of assessments and at the same time having enough time for learning activities? This can only be accomplished by integrating computer-based assessment into S-OJT programs. Computer-based testing and training can provide flexibility, instant feedback, individualised assessment and eventually lower costs than traditional written examinations. Computerised results create opportunities for teaching and assessment to be integrated more than ever before and allow for retesting students, measuring growth and linking assessment to instruction. This research aims to evaluate the effectiveness of integrating computer-based testing and training into S-OJT programs, using the Air Separation unit of Sasol Synfuels as a case study. The null hypothesis is used to investigate the drawbacks of OJT and S-OJT programs. A framework is also developed for the effective integration of CBT into S-OJT programs.
Thesis (M.Ing. (Development and Management))--North-West University, Potchefstroom Campus, 2011.
APA, Harvard, Vancouver, ISO, and other styles
17

Gu, Yu. "Design and flight testing actuator failure accommodation controllers on WVU YF-22 research UAVS." Morgantown, W. Va. : [West Virginia University Libraries], 2004. https://etd.wvu.edu/etd/controller.jsp?moduleName=documentdata&jsp%5FetdId=3702.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 2004.
Title from document title page. Document formatted into pages; contains xiv, 145 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 138-145).
APA, Harvard, Vancouver, ISO, and other styles
18

Garrad, Mark. "Computer Aided Text Analysis in Personnel Selection." Griffith University. School of Applied Psychology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20040408.093133.

Full text
Abstract:
This program of research was aimed at investigating a novel application of computer aided text analysis (CATA). To date, CATA has been used in a wide variety of disciplines, including Psychology, but never in the area of personnel selection. Traditional personnel selection techniques have met with limited success in the prediction of costly training failures for some occupational groups such as pilot and air traffic controller. Accordingly, the overall purpose of this thesis was to assess the validity of linguistic style to select personnel. Several studies were used to examine the structure of language in a personnel selection setting; the relationship between linguistic style and the individual differences dimensions of ability, personality and vocational interests; the validity of linguistic style as a personnel selection tool and the differences in linguistic style across occupational groups. The participants for the studies contained in this thesis consisted of a group of 810 Royal Australian Air Force Pilot, Air Traffic Control and Air Defence Officer trainees. The results partially supported two of the eight hypotheses; the other six hypotheses were supported. The structure of the linguistic style measure was found to be different in this study compared with the structure found in previous research. Linguistic style was found to be unrelated to ability or vocational interests, although some overlap was found between linguistic style and the measure of personality. In terms of personnel selection validity, linguistic style was found to relate to the outcome of training for the occupations of Pilot, Air Traffic Control and Air Defence Officer. Linguistic style also demonstrated incremental validity beyond traditional ability and selection interview measures. The findings are discussed in light of the Five Factor Theory of Personality, and motivational theory and a modified spreading activation network model of semantic memory and knowledge. A general conclusion is drawn that the analysis of linguistic style is a promising new tool in the area of personnel selection.
APA, Harvard, Vancouver, ISO, and other styles
19

Andersson, Sebastian, and Gustav Carlstedt. "Automated Testing of Robotic Systems in Simulated Environments." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44572.

Full text
Abstract:
With the simulation tools available today, simulation can be utilised as a platform for more advanced software testing. By introducing simulations to the software testing of robot controllers, the motion performance testing phase can begin at an earlier stage of development. This would benefit all parties involved with the robot controller. Testers at ABB would be able to include more motion performance tests in the regression tests. Also, ABB could save money by adapting to simulated robot tests, and customers would be provided with more reliable software updates. In this thesis, a method is developed that utilises simulations to create a test set for detecting motion anomalies in new robot controller versions, with auto-generated test cases and a similarity analysis that calculates the Hausdorff distance for a test case executed on controller versions with an induced artificial bug. A test set has been created with the ability to detect anomalies in a robot controller with a bug.
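As an illustration of the similarity measure mentioned above (not code from the thesis), the sketch below computes the symmetric Hausdorff distance between two sampled trajectories; a large distance between a path recorded on a baseline controller and the same path on a new controller version would flag a motion anomaly. The sample paths are made up.

```python
# Minimal sketch of the Hausdorff distance between two sampled trajectories,
# the similarity measure mentioned in the abstract. The sample paths are illustrative.
import math

Point = tuple[float, float]

def directed_hausdorff(a: list[Point], b: list[Point]) -> float:
    """Largest distance from any point of `a` to its nearest neighbour in `b`."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a: list[Point], b: list[Point]) -> float:
    """Symmetric Hausdorff distance: the worse of the two directed distances."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

if __name__ == "__main__":
    reference = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1)]   # path recorded on the baseline controller
    candidate = [(0.0, 0.0), (1.0, 0.3), (2.0, 0.1)]   # path recorded on the version under test
    deviation = hausdorff(reference, candidate)
    print(f"Hausdorff distance: {deviation:.3f}")       # flag an anomaly if above a tolerance
```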
APA, Harvard, Vancouver, ISO, and other styles
20

Moura, Joao A. "Desenvolvimento e construção de sistema automatizados para controle de qualidade na produção de sementes de iodo-125." reponame:Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26454.

Full text
Abstract:
Thesis (Doctorate in Nuclear Technology)
IPEN/T
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
21

Bissi, Wilson. "WS-TDD: uma abordagem ágil para o desenvolvimento de serviços WEB." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/1829.

Full text
Abstract:
Test Driven Development (TDD) is an agile practice that gained popularity when it was defined as a fundamental part of eXtreme Programming (XP). This practice determines that tests should be written before the code is implemented. TDD and its effects have been widely studied and compared with Test Last Development (TLD) in several studies. However, few studies address TDD in the development of Web Services (WS), due to the complexity of testing the dependencies among distributed components and the specific characteristics of Service Oriented Architecture (SOA). This study aims to define and validate an approach to developing WS based on the practice of TDD, called WS-TDD. This approach guides developers in using TDD to develop WS, suggesting tools and techniques to deal with SOA particularities and dependencies, focusing on the creation of automated unit and integration tests in Java. In order to define and validate the proposed approach, four research methods were carried out: (i) a questionnaire; (ii) a practical experiment; (iii) a personal interview with each participant in the experiment; and (iv) triangulation of the results with the people who participated in the three previous methods. According to the obtained results, WS-TDD was more efficient than TLD, increasing internal software quality and developer productivity. However, external software quality decreased, with a greater number of defects compared to the TLD approach. Finally, it is important to highlight that the proposed approach is a simple and practical alternative for the adoption of TDD in the development of WS, bringing benefits to internal quality and contributing to increased developer productivity, although external software quality decreased when using WS-TDD.
APA, Harvard, Vancouver, ISO, and other styles
22

Antelo, Junior Ernesto Willams Molina. "Estimação conjunta de atraso de tempo subamostral e eco de referência para sinais de ultrassom." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2616.

Full text
Abstract:
CAPES
In non-destructive testing (NDT) with ultrasound, the signal obtained from a real data acquisition system may be contaminated by noise, and the echoes may have sub-sample time delays. In some cases, these aspects may compromise the information obtained from a signal by an acquisition system. To deal with these situations, time delay estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and to obtain more information about the data set. TDE techniques can be used for a number of purposes in defectoscopy, for example, for the accurate location of defects in parts, monitoring the corrosion rate in pieces, measuring the thickness of a given material, and so on. Data reconstruction methods have a wide range of applications, such as NDT, medical imaging, telecommunications and so on. In general, most time delay estimation techniques require a high-precision signal model; otherwise, the quality of the estimated location may be reduced. In this work, an alternating scheme is proposed that jointly estimates an echo reference and the time delays of several echoes from noisy measurements. In addition, by reinterpreting the techniques used from a probabilistic perspective, their functionality is extended through the joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, through simulations, results are presented to demonstrate the superiority of the proposed method over conventional methods.
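For background (a generic illustration, not the joint MLE/MAP estimator proposed in the thesis), the sketch below shows a common baseline for sub-sample time-delay estimation: cross-correlate a reference echo with the received signal and refine the integer-sample peak with parabolic interpolation. The synthetic echo, noise level and sampling rate are made up.

```python
# Generic sub-sample time-delay estimate via cross-correlation and parabolic
# peak interpolation. This is an illustration, not the joint MLE/MAP estimator
# proposed in the thesis; the synthetic echo and sampling rate are made up.
import numpy as np

def subsample_delay(reference: np.ndarray, received: np.ndarray, fs: float) -> float:
    """Return the delay (in seconds) of `received` relative to `reference`."""
    corr = np.correlate(received, reference, mode="full")
    k = int(np.argmax(corr))                           # integer-sample peak
    if 0 < k < len(corr) - 1:                          # refine with a parabola through 3 points
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        k += 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return (k - (len(reference) - 1)) / fs

if __name__ == "__main__":
    fs = 50e6                                          # 50 MHz sampling, illustrative
    t = np.arange(0, 2e-6, 1 / fs)
    echo = np.exp(-((t - 0.5e-6) ** 2) / (0.1e-6) ** 2) * np.cos(2 * np.pi * 5e6 * t)
    true_delay = 7.3 / fs                              # 7.3 samples
    delayed = np.interp(t - true_delay, t, echo, left=0.0, right=0.0)
    delayed += 0.01 * np.random.default_rng(0).normal(size=t.size)
    print(f"estimated delay: {subsample_delay(echo, delayed, fs) * 1e9:.1f} ns "
          f"(true {true_delay * 1e9:.1f} ns)")
```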
APA, Harvard, Vancouver, ISO, and other styles
23

Mitchell, Eric John. "F/A-18A-D Flight Control computer OFP versions 10.6.1 and 10.7 developmental flight testing out-of-controlled flight test program yields reduced Falling Leaf departure susceptibility and enhanced aircraft maneuverability /." 2004. http://etd.utk.edu/2004/MitchellEric.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Tennessee, Knoxville, 2004.
Title from title page screen (viewed May 13, 2004). Thesis advisor: Robert Richards. Document formatted into pages (xvi, 98 p. : ill. (some col.)). Vita. Includes bibliographical references (p. 51-53).
APA, Harvard, Vancouver, ISO, and other styles
24

Kim, Taegyu. "Cyber-Physical Analysis and Hardening of Robotic Aerial Vehicle Controllers." Thesis, 2021.

Find full text
Abstract:
Robotic aerial vehicles (RAVs) have been increasingly deployed in various areas (e.g., commercial, military, scientific, and entertainment). However, RAVs’ security and safety issues can arise not only from the “cyber” domain (e.g., control software) or the “physical” domain (e.g., vehicle control model) individually, but also from their interplay. Unfortunately, existing work has focused mainly on either “cyber-centric” or “control-centric” approaches. Such a single-domain focus can overlook the security threats caused by the interplay between the cyber and physical domains.
In this thesis, we present cyber-physical analysis and hardening to secure RAV controllers. Through a combination of program analysis and vehicle control modeling, we first developed novel techniques to (1) connect the cyber and physical domains and then (2) analyze the individual domains and their interplay. Specifically, we describe how to detect bugs after RAV accidents using provenance (Mayday), how to proactively find bugs using fuzzing (RVFuzzer), and how to patch vulnerable firmware using binary patching (DisPatch). As a result, we have found 91 new bugs in modern RAV control programs; their developers confirmed 32 cases and patched 11 cases.
APA, Harvard, Vancouver, ISO, and other styles
