Dissertations / Theses on the topic 'Experiment automation'

Consult the top 50 dissertations / theses for your research on the topic 'Experiment automation.'

1

Eada, Priyanudeep. "Experiment to evaluate an Innovative Test Framework : Automation of non-functional testing." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10940.

Abstract:
Context. Performance testing, among other types of non-functional testing, is necessary to assess software quality. Most often, a manual approach is employed to test a system's performance, which has several drawbacks. The existing body of knowledge is largely focused on functional testing and lacks empirical evidence on the automation of non-functional testing. Objectives. The objective of the present study is to evaluate a test framework that automates performance testing. A large-scale distributed project was selected as the context for this objective, because the proposed test framework was designed to be adapted and tailored to any project's characteristics. Methods. An experiment was conducted with 15 participants at the Ericsson R&D department, India, to evaluate the automated test framework. A repeated-measures design with counterbalancing was used to measure the accuracy and time taken while using the test framework. To assess the ease of use of the proposed framework, a questionnaire was distributed among the experiment participants. Statistical techniques were used to accept or reject the hypotheses, and the data analysis was performed in Microsoft Excel. Results. The automated test framework proved superior to the traditional manual approach: the average time taken to run a test case was significantly reduced, the number of errors arising in a typical testing process was minimized, and the time a tester spends during the actual test dropped substantially. Finally, software testers perceived the automated approach as easier to use than the manual approach. Conclusions. Automating non-functional testing reduces overall project costs and improves the quality of the software tested, addressing important performance aspects such as system availability, durability and uptime. It is not sufficient for software to meet its functional requirements; it must also conform to its non-functional requirements.
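A minimal sketch of the kind of repeated-measures comparison described above: each tester runs the same task manually and with the framework, and a paired t-test decides whether the mean times differ. The numbers are invented for illustration, not the thesis data, and the thesis does not necessarily use this exact test.

```python
# Paired comparison of per-test-case execution times, one pair per tester.
# The numbers are invented for illustration only.
from scipy import stats

manual_minutes    = [42.1, 38.5, 45.0, 40.2, 39.8, 44.3, 41.0, 37.9]
automated_minutes = [12.3, 11.8, 14.1, 12.9, 11.5, 13.7, 12.2, 11.9]

t_stat, p_value = stats.ttest_rel(manual_minutes, automated_minutes)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: mean execution time differs between approaches.")
```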
2

Nilsson, Axel. "Zero-Downtime Deployment in a High Availability Architecture : Controlled experiment of deployment automation in a high availability architecture." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-74971.

Abstract:
Computer applications are no longer just local installations on our computers. Many modern web applications and services rely on an internet connection to a centralized server to access the full functionality of the application. High availability architectures can be used to provide redundancy in case of failure, ensuring customers always have access to the server. Due to the complexity of such systems and the need for stability, deployments are often avoided, and new features and bug fixes cannot be delivered to the end user quickly. In this project, an automation system is proposed to allow deployments to a high availability architecture while preserving high availability. The proposed automation system is then tested in a controlled experiment to see whether it delivers what it promises. During low amounts of traffic, the deployment system showed it could make a deployment with a statistically insignificant change in error rate compared to normal operations. Similar results were found during medium to high levels of traffic for successful deployments, but if the system had to recover from a failed deployment there was an increase in errors. However, the deployment system had a significant effect on the response time of the web application, compromising availability in certain situations.
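As an illustration of how such an error-rate comparison can be tested statistically, the hedged sketch below applies a two-proportion z-test to request counts from a deployment window and a normal-operation window; the counts are invented, and the thesis does not necessarily use this exact test.

```python
# Two-proportion z-test on invented request counts: errors observed
# during a deployment window vs. a normal-operation window.
from statsmodels.stats.proportion import proportions_ztest

errors = [18, 14]            # errors: deployment window, normal window
totals = [10_000, 10_000]    # requests observed in each window

z, p = proportions_ztest(count=errors, nobs=totals)
print(f"z = {z:.2f}, p = {p:.3f}")
# A p-value above 0.05 would correspond to the thesis's finding of a
# statistically insignificant change in error rate.
```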
3

Fältros, Jesper, Isak Alinger, and Axel von Bergen. "Safety risks with ZigBee smart devices : Identifying risks and countermeasures in ZigBee devices with an eavesdropping experiment." Thesis, Tekniska Högskolan, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-49630.

Abstract:
With ZigBee being the world's leading IoT protocol, users are vulnerable to attacks on the wireless communication between ZigBee devices and to the information that can be gained from them. For users to protect themselves from potential attacks, they need to be aware of what information can be extracted and how it can be countered. Through an eavesdropping experiment, conducted with three individual sensors from different vendors, various packets with potential for misuse have been identified within the area of building security. With the potential areas of misuse identified, there is also a need for countermeasures against these threats. Countermeasures were identified through a collection of literature that was summarized to provide a wide range of alternatives suited to different scenarios. The experiment was limited to the functions of the sensors used, as well as to traffic using the ZigBee protocol. This study pinpoints a potential for misuse of the ZigBee traffic sent between devices and shows that the ZigBee protocol is fundamentally flawed from a security standpoint. While countermeasures exist, they are not applicable to every situation, which is why the ZigBee protocol itself needs further development to be considered secure.
4

Ebadat, Afrooz. "Experiment Design for Closed-loop System Identification with Applications in Model Predictive Control and Occupancy Estimation." Doctoral thesis, KTH, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209021.

Abstract:
The objective of this thesis is to develop algorithms for application-oriented input design, a procedure that takes the model application into account when designing experiments for system identification. The thesis is divided into two parts. The first part considers the theory of application-oriented input design, with special attention to Model Predictive Control (MPC). We start by studying how to find a convex approximation of the set of models that result in acceptable control performance using analytical methods when controllers with no closed-form control law, e.g., MPC, are employed. The application-oriented input design is formulated in the time domain to enable the handling of signal constraints. The framework is extended to closed-loop systems, where two cases are considered: when the plant is controlled by a general but known controller, and the case of MPC. To this end, an external stationary signal is designed via graph theory. Different sources of uncertainty in application-oriented input design are investigated, and a robust application-oriented input design framework is proposed. The second part of this thesis is devoted to the problem of estimating the number of occupants based on the information available to HVAC systems in buildings. The occupancy estimation is first formulated as a two-tier problem: in the first tier, the room dynamics are identified using temporary measurements of occupancy; in the second tier, the identified model is employed to formulate the problem as a fused-lasso problem. The proposed method is further developed into a multi-room estimator using a physics-based model. However, since it is not always possible to collect measurements of occupancy, we proceed by proposing a blind identification algorithm that estimates the room dynamics and occupancy simultaneously. Finally, the application-oriented input design framework is employed to collect data that is informative enough for occupancy estimation purposes.
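The second-tier estimator lends itself to a compact convex formulation. Below is a minimal fused-lasso sketch using cvxpy, with a placeholder identity model standing in for the identified room dynamics; it illustrates the problem class, not the thesis's implementation.

```python
# Fused-lasso occupancy estimate: least-squares fit plus sparsity and
# piecewise-constancy penalties. The identity matrix is a stand-in for
# the identified room dynamics; all numbers are synthetic.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
T = 100                                   # time steps
A = np.eye(T)                             # placeholder room model
true_occ = np.repeat([0, 4, 4, 1, 0], 20)
y = A @ true_occ + 0.3 * rng.standard_normal(T)

x = cp.Variable(T)
lam1, lam2 = 0.1, 2.0
objective = cp.Minimize(
    cp.sum_squares(A @ x - y)             # fit the sensor data
    + lam1 * cp.norm1(x)                  # rooms are often empty
    + lam2 * cp.norm1(cp.diff(x))         # occupancy changes rarely
)
cp.Problem(objective, [x >= 0]).solve()
print(np.round(x.value, 1))
```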

5

Rozanski, Robert. "Automating the development of Metabolic Network Models using Abductive Logic Programming." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/automating-the-development-of-metabolic-network-models-using-abductive-logic-programming(281edadb-4b80-4485-abe8-6966f3fc4ecc).html.

Abstract:
The complexity of biological systems constitutes a significant problem for the development of biological models. This inspired the creation of a few Computational Scientific Discovery systems that attempt to address this problem in the context of metabolomics through the use of computers and automation. These systems have important limitations, however, such as limited revision and experiment-design abilities and the inability to revise refuted models. The goal of this project was to address some of these limitations. The system developed for this project, "Huginn", was based on the use of Abductive Logic Programming to automate crucial development tasks, such as experiment design, testing the consistency of models with experimental results, and revising refuted models. The main questions of this project were (1) whether the proposed system can successfully develop Metabolic Network Models and (2) whether it can do so better than its predecessors. To answer these questions we tested Huginn in a simulated environment, where its task was to relearn the structures of disrupted fragments of a state-of-the-art model of yeast metabolism. The results of the simulations show that Huginn can relearn the structure of metabolic models, and that it can do so better than previous systems thanks to the specific features introduced in it. Furthermore, we show, for the first time, how the design of extended crucial experiments can be automated using Answer Set Programming.
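To make the abductive step concrete, here is a toy, pure-Python illustration of the idea (not Huginn itself): forward-chain producibility over a fragmentary reaction network and abduce the smallest set of candidate reactions that explains an observed metabolite. All reaction and metabolite names are invented.

```python
# Toy abduction over an invented reaction network: find the smallest set
# of candidate reactions whose addition makes the observed product
# producible from the nutrients.
from itertools import combinations

def producible(reactions, nutrients, target):
    """Forward-chain: a metabolite is producible once all substrates
    of some reaction that yields it are available."""
    available = set(nutrients)
    changed = True
    while changed:
        changed = False
        for substrates, product in reactions:
            if product not in available and substrates <= available:
                available.add(product)
                changed = True
    return target in available

known = [({"glc"}, "g6p"), ({"g6p"}, "pyr")]                    # model fragment
candidates = [({"pyr"}, "accoa"), ({"accoa"}, "cit"), ({"glc"}, "cit")]
nutrients, target = {"glc"}, "cit"                              # observation

for k in range(1, len(candidates) + 1):
    revisions = [h for h in combinations(candidates, k)
                 if producible(known + list(h), nutrients, target)]
    if revisions:
        print("minimal revisions:", revisions)
        break
```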
6

Urbiš, Jakub. "Optimalizace a měření transportních experimentů na grafenových polem řízených tranzistorech." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-402598.

Abstract:
This thesis deals with the automation of transport experiments on graphene using the graphical programming language LabVIEW. Specifically, the experiments concern graphene relative humidity sensors based on: a two-point graphene structure, a two-point SiO₂ structure, and a four-point graphene structure in the form of a Hall bar. In all of these experiments, relative humidity, input electrical parameters, SPM measurements, and macroscopic transport properties are measured simultaneously. The program DeviceManager, developed within the framework of this thesis, simplifies the implementation of these experiments.
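The thesis implements the measurement automation in LabVIEW; purely as an illustration of the same simultaneous-logging pattern in Python, a sketch with pyvisa might look like this. The resource addresses and SCPI commands are placeholders for whatever hardware is actually connected.

```python
# Polling two hypothetical instruments in one loop so that all channels
# are logged on a common timebase. Addresses and SCPI strings are
# placeholders, not the instruments used in the thesis.
import time
import pyvisa

rm = pyvisa.ResourceManager()
meter  = rm.open_resource("GPIB0::22::INSTR")   # hypothetical ohmmeter
source = rm.open_resource("GPIB0::5::INSTR")    # hypothetical gate source

source.write("VOLT 1.0")                        # apply the gate voltage
for _ in range(10):
    resistance = float(meter.query("MEAS:RES?"))
    print(f"{time.time():.1f} s  R = {resistance:.1f} ohm")
    time.sleep(1.0)
```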
7

Brusamento, Donato. "Improving pattern recognition based myocontrol of prosthetic hands via user-in-the-loop." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Abstract:
Electromyography (EMG)-based control of prosthetic hands has the potential to restore motor function to amputees, significantly improving their quality of life. However, open problems remain in obtaining rich and stable movement control, among them the limb position effect. This thesis focuses on reducing this source of instability by proposing a modified version of the Ridge Regression with Random Fourier Features algorithm, made incremental and enriched with feedback to the user. The approach is validated through an experiment on 12 intact subjects, to verify the performance gain, and through a further pilot study on an amputee subject, after adapting the software to a prosthetic hand under development.
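As context for the base algorithm named above, here is a hedged scikit-learn sketch of ridge regression on random Fourier features; the thesis's incremental update and user-feedback loop are not reproduced, and the data is synthetic.

```python
# Ridge regression on random Fourier features with synthetic 8-channel
# EMG-like data; the incremental and feedback extensions of the thesis
# are not reproduced here.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
emg = rng.standard_normal((200, 8))            # invented feature vectors
force = emg[:, 0] * 0.5 + np.sin(emg[:, 1]) + 0.1 * rng.standard_normal(200)

rff = RBFSampler(gamma=0.5, n_components=300, random_state=0)
z = rff.fit_transform(emg)                     # random Fourier feature map
model = Ridge(alpha=1.0).fit(z, force)
print("training R^2:", round(model.score(z, force), 3))
```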
8

Oliveira, Ricardo Ramos de. "Avaliação da portabilidade entre fornecedores de teste como serviço na computação em nuvem." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-16072018-170853/.

Abstract:
The automation of software testing involves high costs in large-scale systems, since it requires complex test scenarios and extremely long execution times. Moreover, each of its steps demands computational resources and considerable time for running many test cases, which makes it a bottleneck for Information Technology (IT) companies. The benefits and opportunities offered by the combination of cloud computing and Testing as a Service (TaaS), considered a new business and service model, can reduce the execution time of tests in a cost-effective way and improve Return on Investment (ROI). However, the vendor lock-in problem, i.e., the imprisonment of the user in the platform of a specific vendor or test service caused by the difficulty of migrating from one TaaS provider to another, limits the effective use of such new technologies and prevents the widespread adoption of TaaS. As existing studies are neither rigorous nor conclusive, and mainly due to the lack of empirical evidence, many issues must be investigated from the perspective of migration among TaaS providers. This research aims at reducing the impact of the vendor lock-in problem on the automation of testing in cloud computing: the writing, configuration, execution and management of automated test results. A prototype of the Multi-TaaS approach was developed as a Java library as a proof of concept. The Multi-TaaS approach is an abstraction layer whose architecture enables the abstraction and flexibilization of the exchange of TaaS providers in a portable way, encapsulating the complexity of the software engineer's implementation. Its two main advantages are the decoupling of the automated test from the TaaS platform on which it will be executed and the abstraction of the communication and integration aspects among the proprietary REST APIs of the different TaaS providers. The approach also enables the summarization of automated test results independently of the underlying TaaS platform technologies. A comparative evaluation between Multi-TaaS and conventional migration approaches regarding the difficulty, efficiency, effectiveness and effort of migration among TaaS providers was conducted through controlled experiments. The results show that the approach facilitates the exchange of test services, improves efficiency and reduces the effort and maintenance costs of migration among TaaS providers. The studies conducted in the controlled experiment are promising and can assist software engineers in decision-making regarding the risks associated with vendor lock-in in TaaS. The Multi-TaaS approach contributes mainly to the portability of automated tests in the cloud and the summarization of their results, enabling the TaaS service model to be widely and consciously adopted in the future.
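The core idea, an abstraction layer that decouples tests from any one provider's REST API, is essentially the adapter pattern. The thesis implements it as a Java library; the sketch below merely restates the idea in Python with invented provider names and endpoints.

```python
# Language-agnostic sketch of a TaaS abstraction layer (adapter pattern):
# test code depends only on the interface, so vendors can be swapped.
from abc import ABC, abstractmethod

class TaaSProvider(ABC):
    @abstractmethod
    def run_suite(self, suite_id: str) -> dict: ...

class AcmeTaaS(TaaSProvider):
    def run_suite(self, suite_id):
        # would call Acme's proprietary REST API and normalise the reply
        return {"suite": suite_id, "passed": 41, "failed": 1}

class OtherTaaS(TaaSProvider):
    def run_suite(self, suite_id):
        # a different wire format, mapped to the same summary schema
        return {"suite": suite_id, "passed": 42, "failed": 0}

def nightly_run(provider: TaaSProvider):
    """Depends only on the abstraction, not on a vendor."""
    summary = provider.run_suite("smoke-tests")
    print(f"{summary['passed']} passed / {summary['failed']} failed")

nightly_run(AcmeTaaS())   # switching vendors is a one-line change
nightly_run(OtherTaaS())
```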
9

Mladenovic, Milos. "Development of Sustainable Traffic Control Principles for Self-Driving Vehicles: A Paradigm Shift Within the Framework of Social Justice." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64806.

Abstract:
Development of commercial self-driving vehicle (SDV) technology has the potential to produce a paradigm shift in traffic control technology. Contrary to some previous research approaches, this research argues that, like any other technology, traffic control technology for SDVs should be developed with improved quality of life in mind, through a sustainable development approach. Consequently, this research emphasizes the social perspective of sustainability, considering its neglect in conventional control principles and the importance of behavioral considerations for accurately predicting impacts upon economic or environmental factors. The premise is that traffic control technology can affect the distribution of advantages and disadvantages in a society, and thus it requires a framework of social justice. The framework of social justice is inspired by John Rawls' theory of justice as fairness, and tries to protect the inviolability of each user in the system. Consequently, the control objective is the distribution of delay per individual, considering, for example, that the effect of delay is not the same if a person is traveling to a grocery store as opposed to traveling to a hospital. The notion of social justice is developed as a priority system with end-user responsibility, where the user is able to assign a specific Priority Level to each individual trip with an SDV. The selected Priority Level is used to determine the right-of-way for each self-driving vehicle at an intersection. As a supporting mechanism to the priority system, there is a structure of non-monetary Priority Credits. Rules for using Priority Credits are determined using knowledge from social science research and through empirical evaluation using surveys, interviews, and a web-based experiment. In the physical space, the intersection control principle is developed as hierarchical self-organization, utilizing communication, sensing, and in-vehicle technological capabilities. This distributed control approach should enable robustness against failure and scalability for future expansion. The control mechanism has been modeled as an agent-based system, allowing evaluation of its effects upon safety and user delay. In conclusion, by reaching across multiple disciplines, this development provides both the promise and the challenge of evolving SDV control technology. Future efforts for SDV technology development should continue to rely upon transparent public involvement and an understanding of human decision-making.
10

Forsslund, Patrik, and Simon Monié. "MULTI-DRONE COLLABORATION FOR SEARCH AND RESCUE MISSIONS." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54439.

Abstract:
Unmanned Aerial Vehicles (UAVs), also called drones, are used for Search And Rescue (SAR) missions, mainly in the form of a pilot manoeuvring a single drone. However, increasing the workforce to cover larger areas quickly would result in a very high cost and time spent per rescue operation. There is therefore a need for an easy-to-use, low-cost, and highly autonomous swarm of drones for SAR missions, where detection and rescue times are kept to a minimum. In this thesis, a Subsumption-based architecture is proposed, which combines multiple behaviours to create more complex behaviours. The thesis investigates (1) the critical aspects of controlling a swarm of drones, (2) how a combination of different behavioural algorithms can increase the performance of a swarm of drones, and (3) which benchmarks are necessary when evaluating the fitness of the behavioural algorithms. The proposed architecture was simulated in AirSim using the SimpleFlight flight controller, through experiments that evaluated the individual layers and missions simulating real-life scenarios. The results validate the modularity and reliability of the architecture, which has the potential for improvement in future iterations. For a search area of 400×400 metres, the swarm consistently produced an average area coverage of at least 99.917% and found all the missing people in all missions, with the slowest average being 563 seconds. Compared to related work, the result produced similar or better times when scaled to the same proportions, and higher area coverage. As comparisons of results in SAR missions can be difficult, the introduction of active time can serve as a benchmark for future swarm performance measurements.
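For readers unfamiliar with subsumption architectures, the following minimal sketch shows the prioritised-layer idea: the highest-priority behaviour that fires suppresses the layers below it. Behaviour names and sensor fields are illustrative assumptions, not the thesis's implementation.

```python
# Minimal subsumption-style arbiter: behaviours are ordered by priority,
# and the first one that fires decides the action.
from dataclasses import dataclass

@dataclass
class Sensors:
    obstacle_distance_m: float
    person_detected: bool
    cell_searched: bool

def avoid(s):    # highest priority: never collide
    return "climb" if s.obstacle_distance_m < 2.0 else None

def report(s):   # found a missing person: hover and signal
    return "hover_and_signal" if s.person_detected else None

def search(s):   # default layer: continue the sweep pattern
    return "next_waypoint" if not s.cell_searched else "next_cell"

LAYERS = [avoid, report, search]   # upper layers subsume the lower ones

def decide(s: Sensors) -> str:
    for behaviour in LAYERS:
        action = behaviour(s)
        if action is not None:
            return action
    return "hold"

print(decide(Sensors(obstacle_distance_m=1.2, person_detected=True,
                     cell_searched=False)))   # -> "climb"
```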
11

Billa, Cleo Zanella. "Um experimento formal para avaliar novas formas de visualização de prontuários clínicos eletrônicos." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275816.

Abstract:
Advisor: Jacques Wainer
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Collaboration between computer science and medicine is growing day by day, and one of the most controversial topics is the electronic patient record (EPR). Although scientists agree that the EPR can improve health care quality, how it should behave and what tools it should provide are still open questions. This work suggests two ways to visualize the EPR. The first is a summary with the most important information about the patient. The second is a diagram in which the physician is able to express his or her design rationale. Summarization is a complex problem, and although very simple procedures were used, the experimental evaluation shows that the summary contains as much information as the traditional EPR. The idea of diagrams to visualize the EPR originated in a technique called design rationale (DR), used mostly in software engineering, whose major goal is to record the rationale behind a design. Some studies have pointed out that one of the major limitations of the EPR is the lack of information about diagnostic processes and treatment planning. To evaluate these new forms of visualization, an experimental evaluation was performed to test the summary and the diagram in real practice. The experiment was conducted in an outpatient clinic at Unifesp, where medical students used the summary, the diagram, or the traditional EPR to answer questions about specific patients. The results show that the summary was equivalent to the traditional EPR, and that the diagram showed no advantage over the traditional EPR.
12

Engelmann, James E. "An Information Management and Decision Support tool for Predictive Alerting of Energy for Aircraft." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1595779161412401.

13

Wen, Hui Ying. "Human-automation task allocation in lunar landing: simulation and experiments." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/85813.

Abstract:
Task allocation, or how tasks are assigned to the human operator(s) versus to automation, is an important aspect of designing a complex vehicle or system for use in human space exploration. The performance implications of alternative task allocations between human and automation can be simulated, allowing high-level analysis of a large task allocation design space. Human subject experiments can then be conducted to uncover human behaviors that are not modeled in simulation but need to be considered in making the task allocation decision. These methods were applied here to the case scenario of lunar landing with a single human pilot. A task analysis was performed on a hypothetical generic lunar landing mission, focusing on decisions and actions that could be assigned to the pilot or to automation during the braking, approach, and touchdown phases. Models of human and automation task completion behavior were implemented within a closed-loop pilot-vehicle simulation for three subtasks within the landing point designation (LPD) and final approach tasks, creating a simulation framework tailored for the analysis of a task allocation design space. Results from 160 simulation runs showed that system performance, measured by fuel usage and landing accuracy, was predicted to be optimized if the human performs decision-making tasks while manual tasks, such as flying the vehicle, are automated. Variability in fuel use can be attributed to human performance of the flying task; variability in landing accuracy appears to result from human performance of the LPD and flying tasks. Next, a human subject experiment (11 subjects, 68 trials per subject) was conducted to study subjects' risk-taking strategy in designating the landing point. Results showed that subjects designated landing points that compensated for estimated touchdown dispersions and for system-level knowledge of the probabilities of manual versus automated flight. Also, subjects made more complete LPD compensations when estimating touchdown dispersion from graphical plots rather than from memories of previous simulated landings. The way in which dispersion information is presented affected the consistency with which subjects adhered to a risk level in making landing point selections. These effects could be incorporated in future human performance models and task allocation simulations.
14

FUNKE, GREGORY J. "THE EFFECTS OF STRESS AND AUTOMATION ON PERFORMANCE IN A SIMULATED WINTER DRIVE." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1099610041.

15

Ziccardi, Jason Brian. "A comparison of probe techniques for assessing situation awareness across levels of automation." Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1528075.

Abstract:

Techniques to accurately measure situation awareness (SA) are important when designing systems that optimize operator performance. The two most prominent SA probe techniques vary based on screen visibility and situation pause during question presentation. The current study used four probe techniques based on all possible configurations of these factors. Air traffic control students controlled traffic in 10 scenarios that included all four probe techniques and a baseline no-probe condition across two degrees of automation. Probe questions varied on two levels of priority and specificity, creating four question types. Based on operator performance variations and subjective ratings, results support administration of probes with a visible screen and while the situation is paused. No method showed superior sensitivity to SA differences. Finally, the current study replicated findings that low priority information is offloaded to the environment and accessed as needed, supporting the situated approach towards SA.

16

Andersen, Bruce Jacob. "An experimental study of the automation of thermoplastic composite processing." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/19668.

17

Johnson, Kip E. (Kip Edward) 1978. "Experimental study of automation to support time-critical replanning decisions." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/82225.

18

Guo, Jianglong. "Numerical and experimental study of electroadhesion to enable manufacturing automation." Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/21718.

Abstract:
Robotics and autonomous systems (RAS) have great potential to propel future growth. Electroadhesion is a promising and potentially revolutionising material handling technology for manufacturing automation applications. There is, however, a lack of in-depth understanding of this electrostatic adhesion phenomenon grounded in a reliable platform and procedure for electroadhesive pad design, manufacture, and testing. This Ph.D. research endeavours to obtain a more comprehensive understanding of electroadhesion through an extensive literature review, theoretical modelling, electrostatic simulation, and experimental validation built on a repeatable pad design, manufacture, and testing platform and procedure.
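As background for the modelling mentioned above, electroadhesive pads are often first approximated as a parallel-plate capacitor; the sketch below computes the holding force under that textbook approximation only. Real pads (fringing fields, interdigitated electrodes, rough substrates) deviate substantially, which is precisely why the thesis emphasises experimental validation.

```python
# Parallel-plate approximation: holding pressure P = eps0 * eps_r * E^2 / 2
# with E ~ V/d. Fringing fields and surface roughness are ignored.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electroadhesive_force(voltage_v, gap_m, area_m2, eps_r=3.0):
    """Normal holding force (N) under the parallel-plate approximation."""
    e_field = voltage_v / gap_m
    pressure = 0.5 * EPS0 * eps_r * e_field ** 2
    return pressure * area_m2

# 1 kV across a 100 um effective gap on a 0.01 m^2 pad (invented values):
print(f"{electroadhesive_force(1e3, 100e-6, 0.01):.1f} N")
```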
19

Lan, Dapeng. "Experimental Study of Thread Mesh Network for Wireless Building Automation Systems." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-194440.

Abstract:
Wireless sensor network technologies have gained significant popularity in home automation due to their scalability, system mobility, wireless connectivity, and inexpensive, easy commissioning. Thread, a new wireless protocol aimed at home automation, was proposed by Google Nest and standardized by the Thread Group. This thesis presents a thorough experimental evaluation of the Thread wireless protocol with a hardware platform from NXP. The test plan, implementation, and analysis of the experiments are discussed in detail, covering signal coverage, unicast and multicast latency, reliability, and availability. Furthermore, a system-level model of Thread mesh network latency, accounting for the delays in the different layers, is presented and validated against the experimental results. Finally, a user-friendly tool was developed for installers to estimate the latency of a Thread mesh network.
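A system-level latency model of the kind described can be as simple as a per-hop sum of delay terms; the sketch below is a generic illustration with placeholder constants, not the validated model from the thesis.

```python
# End-to-end latency approximated as a per-hop sum of processing,
# CSMA/CA backoff and transmission delays; constants are placeholders.
def mesh_latency_ms(hops, t_proc_ms=2.0, t_backoff_ms=3.0, t_tx_ms=4.1):
    """Expected one-way latency over `hops` radio hops."""
    return hops * (t_proc_ms + t_backoff_ms + t_tx_ms)

for hops in (1, 2, 4):
    print(f"{hops} hop(s): ~{mesh_latency_ms(hops):.1f} ms")
```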
20

Björklund, Patrik, and Anna Rydin. "Automation Pipelines for Efficient and Robust Experimental Research Within Cognitive Neuroscience." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-453741.

Abstract:
The current trend towards large-scale research projects with big quantities of data from multiple sources requires robust and efficient data handling. This thesis explores techniques for automating research data pipelines. Specifically, two tasks related to automation within a long-term research project in cognitive neuroscience are addressed. The first task is to develop a tool for automatic transcription of paper-based questionnaires using computer vision. Questionnaires containing continuous scales, so-called visual analog scales (VASs), are used extensively in, e.g., psychology; despite this, there is currently no tool for automatically decoding these types of questionnaires. The resulting computer vision system for automatic questionnaire transcription, called "VASReader", reliably detects VAS marks with an accuracy of 98% and predicts their position with a mean absolute error of 0.3 mm when compared to manual measurements. The second task addressed in this thesis is to investigate whether machine learning can be used to detect anomalies in Magnetic Resonance Imaging (MRI) data. An implementation of the unsupervised anomaly detection technique Isolation Forest shows promising results for the detection of anomalous data points. The model is trained on image quality metric (IQM) data extracted from MRI. However, it is concluded that the scanning site and MRI machine model affect the IQMs, and that the model is more prone to classify as anomalous data points originating from machines and institutions that are underrepresented in the database. An important conclusion from both tasks is that automation is possible and can be a great asset to researchers, if an appropriate level and type of automation are selected.
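For the second task, scikit-learn's implementation of Isolation Forest gives a feel for the approach; the IQM table and contamination level below are placeholders, not the thesis's data or settings.

```python
# Unsupervised anomaly detection on a synthetic stand-in for a table of
# MRI image quality metrics (IQMs), one row per scan.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
iqms = rng.normal(size=(500, 4))          # stand-in for SNR, CNR, etc.
iqms[:5] += 6                             # a few corrupted scans

model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(iqms)          # -1 = anomalous, 1 = normal
print("flagged scans:", np.where(labels == -1)[0])
```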
21

Borngrund, Carl. "Machine vision for automation of earth-moving machines : Transfer learning experiments with YOLOv3." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75169.

Abstract:
This master's thesis investigates the possibility of creating a machine vision solution for the automation of earth-moving machines; without some type of vision system, it is not possible to create a fully autonomous earth-moving machine that can safely be used around humans or other machines. Cameras were used as the primary sensors as they are cheap, provide high resolution, and are the type of sensor that most closely mimics the human visual system. The purpose was to use existing real-time object detectors together with transfer learning and examine whether they can successfully extract information in environments such as construction, forestry and mining. The amount of data needed to successfully train a real-time object detector was also investigated. Furthermore, the thesis examines whether there are situations that are especially difficult for the defined object detector, how reliable the object detector is, and how service-oriented architecture principles can be used to create deep learning systems. To investigate these questions, three data sets were created in which different properties were varied: light conditions, ground material, and dump truck orientation. The data sets were created using a toy dump truck together with a similarly sized wheel loader with a camera mounted on the roof of its cab. The first data set contained only indoor images where the dump truck was placed in different orientations but neither the light nor the ground material changed. The second kept the light source constant but varied the dump truck orientation and ground material. The last varied all three properties. The real-time object detector YOLOv3 was used to examine how performance depends on which of the three data sets the model was trained on. Regardless of the data set, it was possible to train a model to perform real-time object detection. Using an Nvidia 980 Ti, the inference time of the model was around 22 ms, more than enough to classify video running at 30 fps. All three data sets converged to a training loss of around 0.10. The data set containing the most varied data, in which all properties were changed, performed considerably better, reaching a validation loss of 0.164, whereas the indoor data set, containing the least varied data, only reached a validation loss of 0.257. The size of the data set was also a factor in performance, though less important than having varied data. The results also showed that all three data sets could reach a mAP score of around 0.98 using transfer learning.
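The thesis trained a Darknet-based YOLOv3; as a rough modern stand-in for the same transfer-learning recipe (a pretrained backbone fine-tuned on a small custom dataset), a sketch with the ultralytics package looks like this. The dataset file `dump_truck.yaml` is hypothetical.

```python
# Transfer learning on a small custom object-detection dataset, using
# the ultralytics package as a stand-in for the thesis's YOLOv3 setup.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                # weights pretrained on COCO
model.train(data="dump_truck.yaml",       # hypothetical dataset config
            epochs=50, imgsz=640)
metrics = model.val()                     # evaluates on the validation set
print(metrics.box.map)                    # mAP50-95
```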
22

Vaidya, Rohit Subhash. "Experimental testing of a computer aided heat treatment planning system." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0827103-111212.

23

Anozie, Chidi H. "Event-Triggered Design of Networked Embedded Automation Systems." University of Akron / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=akron1291754351.

24

Kolla, Maheedhar. "Automatic text summarization using lexical chains : algorithms and experiments." Thesis, University of Lethbridge, Faculty of Arts and Science, 2004. http://hdl.handle.net/10133/226.

Abstract:
Summarization is a complex task that requires understanding of the document content to determine the importance of the text. Lexical cohesion is a method to identify connected portions of a text based on the relations between its words; lexical cohesive relations can be represented using lexical chains. Lexical chains are sequences of semantically related words spread over the entire text, and they are used in a variety of Natural Language Processing (NLP) and Information Retrieval (IR) applications. In this thesis, we propose a lexical chaining method that includes glossary relations in the chaining process. These relations enable us to identify topically related concepts, for instance dormitory and student, and thereby enhance the identification of cohesive ties in the text. We then present methods that use the lexical chains to generate summaries by extracting sentences from the document(s). Headlines are generated by filtering out the portions of the extracted sentences that do not contribute to their meaning; generated headlines can be used in real-world applications to skim through document collections in a digital library. Multi-document summarization is gaining demand with the explosive growth of online news sources. It requires identification of the several themes present in the collection to attain good compression and avoid redundancy. In this thesis, we propose methods to group portions of the texts of a document collection into meaningful clusters; clustering enables us to extract the various themes of the collection. Sentences from clusters can then be extracted to generate a summary for the multi-document collection, and clusters can also be used to generate summaries with respect to a given query. We designed a system to compute lexical chains for a given text and use them to extract its salient portions. The specific tasks considered are headline generation, multi-document summarization, and query-based summarization. Our experimental evaluation shows that efficient summaries can be extracted for the above tasks.
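A naive flavour of lexical chaining can be sketched with WordNet alone (the glossary relations that distinguish the thesis's method are not shown): words are greedily merged into a chain when some pair of their senses is sufficiently close in the WordNet hierarchy. The threshold is an illustrative assumption.

```python
# Greedy lexical chaining with a WordNet path-similarity test as a crude
# proxy for cohesive relations. Requires nltk with the 'wordnet' corpus
# downloaded (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def related(word_a, word_b, threshold=0.2):
    """True if some noun senses of the two words are close in WordNet."""
    for sa in wn.synsets(word_a, pos=wn.NOUN):
        for sb in wn.synsets(word_b, pos=wn.NOUN):
            sim = sa.path_similarity(sb)
            if sim is not None and sim >= threshold:
                return True
    return False

def build_chains(words):
    chains = []
    for word in words:
        for chain in chains:
            if any(related(word, member) for member in chain):
                chain.append(word)   # extend the first cohesive chain
                break
        else:
            chains.append([word])    # start a new chain
    return chains

print(build_chains(["dog", "cat", "kennel", "car", "truck", "engine"]))
```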
25

Attali, Dean. "Automatic analysis of dual-channel Droplet Digital PCR experiments." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/57928.

Abstract:
The ability to quantify the amount of DNA in a sample is an essential technique in biology and is used in many fields of research. Droplet digital polymerase chain reaction (ddPCR) is an advanced technology developed for this purpose that enables more accurate and sensitive quantification than traditional real-time PCR. In ddPCR, nucleic acid (e.g., genomic DNA) within a sample is partitioned into thousands of droplets, along with the reagents needed to amplify and detect one or more DNA target sequences. After amplification takes place in all droplets, each droplet is individually read by a two-colour fluorescence detection system to determine whether or not it contains the target sequence. ddPCR experiments utilizing both fluorescence wavelengths are termed dual-channel, while simpler experiments make use of only one fluorescence wavelength and are classified as single-channel. Droplets containing amplified product exhibit high fluorescence and are said to be positive, while those without product show little or no fluorescence and are considered negative. Using this binary, or digital, classification of droplets, the numbers of positive and negative droplets can be counted to allow an absolute quantification of template abundance in the starting sample. ddPCR instruments are now available commercially and their use is growing, but very few tools are available for downstream data analysis. The key step in ddPCR data analysis is droplet gating: using the end-point fluorescence data to gate, or classify, droplets as either positive or negative for a given template. The proprietary software provided by BioRad Inc., a ddPCR instrument manufacturer, is currently the only program available to automatically analyze dual-channel ddPCR data. However, because this analysis tool often produces poor results, many ddPCR users resort to time-consuming and subjective manual data analyses, emphasizing the clear need for new ddPCR analysis tools. In this thesis, I devise an algorithm for automatic analysis of dual-channel ddPCR data that can objectively and reproducibly perform droplet gating. The proposed analysis method has been implemented in an R package and is also available as a web application for easy and open access by any ddPCR user.
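Droplet gating can be illustrated with a generic clustering approach; the sketch below (simulated fluorescence data, k-means, a crude midpoint gate) is not the algorithm of the thesis's R package, just a picture of the problem.

```python
# Gate simulated dual-channel droplet fluorescence into four clusters
# and call each cluster positive/negative per channel by comparing its
# centre to a midpoint threshold. Entirely illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
neg  = rng.normal([1000, 1200], 120, size=(700, 2))   # empty droplets
ch1  = rng.normal([6000, 1400], 250, size=(150, 2))   # channel-1 positive
ch2  = rng.normal([1100, 7000], 250, size=(100, 2))   # channel-2 positive
both = rng.normal([6100, 7100], 300, size=(50, 2))
droplets = np.vstack([neg, ch1, ch2, both])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(droplets)
threshold = km.cluster_centers_.mean(axis=0)          # crude midpoint gate
for k, c in enumerate(km.cluster_centers_):
    call = tuple("+" if c[d] > threshold[d] else "-" for d in range(2))
    print(f"cluster {k}: ch1{call[0]} ch2{call[1]}, "
          f"{np.sum(km.labels_ == k)} droplets")
```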
26

Chinchilla, Rigoberto. "Design and evaluation of undergraduate experiments using the BYTRONIC laboratory set-up." Ohio : Ohio University, 1993. http://www.ohiolink.edu/etd/view.cgi?ohiou1175193035.

27

Young, William Martin. "Implementation of an active haptic display and associated psychophysics experiments." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/17597.

28

Jansson, Henrik. "Experiment design with applications in identification for control." Doctoral thesis, Stockholm : Automatic Control, Dept. of Signals, Sensors and Systems, Royal Institute of Technology (KTH), 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-62.

29

Guerreiro, Luís Filipe Costa. "Automatic drilling improvement and standardization by design-of-experiments (DOE)." Master's thesis, Universidade de Évora, 2019. http://hdl.handle.net/10174/25737.

Abstract:
In aerospace manufacturing, quality requirements must be met within an imposed time interval, and operating costs have a significant impact, especially on repeated operations such as the drilling of composites. A composite drilling study is carried out, in particular on the leading edges of the stabilizers of the Embraer Legacy 450/500 and Praetor 500/600 executive jets. The study analyses the performance of pneumatic machines and introduces semi-automatic machines with programmable parameters. The data collected in drilling operations with different parameters, together with the application of Design of Experiments (specifically Latin hypercubes) for an efficient analysis, aim to increase knowledge about drilling and to determine the most appropriate parameters for each tool geometry. In this context, the operational life of the drill was increased.
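The design-generation step can be illustrated with SciPy's quasi-Monte Carlo module; the three drilling factors and their bounds below are invented, not the study's actual process window.

```python
# Latin-hypercube design for three hypothetical drilling factors
# (spindle speed, feed rate, peck depth), scaled to invented bounds.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=7)
unit = sampler.random(n=10)                    # 10 runs in [0, 1)^3
lower = [2000.0, 0.02, 0.5]                    # rpm, mm/rev, mm
upper = [6000.0, 0.10, 2.0]
runs = qmc.scale(unit, lower, upper)

for i, (rpm, feed, peck) in enumerate(runs, 1):
    print(f"run {i:2d}: {rpm:6.0f} rpm, {feed:.3f} mm/rev, {peck:.2f} mm")
```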
30

Diaz, Acosta Beatriz. "Experiments in Image Segmentation for Automatic US License Plate Recognition." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/9988.

Abstract:
License plate recognition/identification (LPR/I) applies image processing and character recognition technology to identify vehicles by automatically reading their license plates. In the United States, however, each state has its own standard-issue plates, plus several optional styles referred to as special license plates or varieties. There is a clear absence of standardization, and multi-colored, complex backgrounds are becoming more frequent on license plates. Commercially available optical character recognition (OCR) systems generally fail when confronted with textured or poorly contrasted backgrounds, creating the need for proper image segmentation prior to classification. The image segmentation problem in LPR is examined in two stages: license plate region detection and license plate character extraction from the background. Three different approaches for license plate detection in a scene are presented: region distance from eigenspace, border location by edge detection and the Hough transform, and text detection by spectral analysis. The experiments for character segmentation involve the RGB, HSV/HSI and 1976 CIE L*a*b* color spaces as well as their Karhunen-Loève transforms. The segmentation techniques applied include multivariate hierarchical agglomerative clustering and minimum-variance color quantization. The trade-off between accuracy and computational expense is used to select a final reliable algorithm for license plate detection and character segmentation. The spectral analysis approach, together with color quantization in the K-L-transformed L*a*b* space, are found experimentally to be the best alternatives for the two image segmentation stages of US license plate recognition.
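One of the stages above, color quantization in L*a*b* space, can be sketched as follows; the input image path, cluster count, and the darkest-cluster heuristic are illustrative assumptions, not the thesis's tuned pipeline.

```python
# K-means color quantization in CIE L*a*b* to separate plate characters
# from a textured background. "plate.png" is a hypothetical input image.
import numpy as np
from skimage import io, color
from sklearn.cluster import KMeans

rgb = io.imread("plate.png")[..., :3] / 255.0
lab = color.rgb2lab(rgb)
pixels = lab.reshape(-1, 3)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
quantized = km.labels_.reshape(lab.shape[:2])

# Heuristic: characters are usually the darkest cluster (lowest mean L*).
char_cluster = int(np.argmin(km.cluster_centers_[:, 0]))
mask = quantized == char_cluster
io.imsave("characters.png", (mask * 255).astype(np.uint8))
```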
31

Hosseini, Behzad. "Neutrino Interaction Analysis With An Automatic Scanning System In The Opera Experiment." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614370/index.pdf.

Abstract:
The OPERA experiment was designed to search for nu-mu to nu-tau oscillations through the observation of nu-tau charged-current interactions in the OPERA target. This search requires a massive detector and very high spatial accuracy; both requirements are fulfilled.
32

Altinok, Ozgur. "High-speed Automatic Scanning System For Emulsion Analysis In The Opera Experiment." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613364/index.pdf.

Abstract:
The aim of the OPERA experiment is to verify neutrino oscillation by directly measuring the appearance of nu-tau in an initially pure nu-mu beam produced at CERN. For this purpose, the OPERA detector is located in the underground Gran Sasso Laboratory (LNGS), 730 km away from CERN. The detector is designed as a hybrid system consisting of emulsion targets and electronic detectors. The total area of the emulsion targets in the OPERA detector is around 110000 m2, which requires fast and reliable automatic scanning systems. For this purpose, two different automatic scanning systems were developed, in Japan and in Europe. There are currently 12 scanning laboratories dedicated to the OPERA experiment. The Emulsion Scanning Laboratory in the Physics Department of METU is one of them; it is a European-type system that uses commercial hardware for easy construction and maintenance. The laboratory also has a unique feature among experimental high-energy physics laboratories.
33

Pears, Nicholas Edwin. "The low-level guidance of an experimental autonomous vehicle." Thesis, Durham University, 1989. http://etheses.dur.ac.uk/6731/.

Abstract:
This thesis describes the data processing and the control that constitute a method of guidance for an autonomous guided vehicle (AGV) operating in a predefined and structured environment such as a warehouse or factory. A simple battery-driven vehicle has been constructed which houses an MC68000-based microcomputer and a number of electronic interface cards. In order to provide a user interface, and in order to integrate the various aspects of the proposed guidance method, a modular software package has been developed. This, along with the research vehicle, has been used to support an experimental approach to the research. The vehicle's guidance method requires a series of concatenated curved and straight imaginary lines to be passed to the vehicle as a representation of a planned path within its environment. Global position specifications for each line and the associated AGV direction and demand speed for each line constitute commands which are queued and executed in sequence. In order to execute commands, the AGV is equipped with low-level sensors (ultrasonic transducers and optical shaft encoders) which allow it to estimate and correct its global position continually. In addition to a queue of commands, the AGV also has a pre-programmed knowledge of the position of a number of correction boards within its environment. These are simply wooden boards approximately 25 cm high and between 2 and 5 metres long, with small protrusions ("notches") 4 cm deep and 10 cm long at regular (1 m) intervals along their length. When the AGV passes such a correction board, it can measure its perpendicular distance and orientation relative to that board using two sets of its ultrasonic sensors, one set at the rear of the vehicle near the drive wheels and one set at the front. Data collected as the vehicle moves parallel to a correction board is digitally filtered, and a least-squares line-fitting procedure is subsequently adopted. As well as improving the reliability and accuracy of orientation and distance measurements relative to the board, this provides the basis for an algorithm to detect and measure the position of the protrusions on the correction board. Since measurements in three planar, local coordinates can be made (x, the distance travelled parallel to a correction board; y, the perpendicular distance relative to the board; and θ, the clockwise planar orientation relative to the board), the global position estimate can be corrected. When position corrections are made, they appear as step disturbances to the control system. The control system has been designed to allow the vehicle to move back onto its imaginary line after a position correction in a critically damped fashion and, in the steady state, to track both linear and curved command segments with minimum error.
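The correction-board geometry can be reconstructed in a few lines: fitting a least-squares line to (distance-travelled, sonar-range) samples yields the vehicle's relative orientation from the slope and its perpendicular offset from the intercept. The sketch below uses simulated readings and omits the thesis's digital filtering and notch-detection steps.

```python
# Estimate relative orientation and perpendicular offset from sonar
# ranges recorded while driving past a correction board. Simulated data.
import numpy as np

true_theta_deg, true_offset_m = 2.5, 0.40
x = np.linspace(0.0, 2.0, 40)                        # metres along the board
rng = np.random.default_rng(3)
y = (true_offset_m + np.tan(np.radians(true_theta_deg)) * x
     + rng.normal(0, 0.004, x.size))                 # noisy sonar ranges

slope, intercept = np.polyfit(x, y, 1)               # least-squares line
theta = np.degrees(np.arctan(slope))
print(f"orientation ~ {theta:.2f} deg, offset ~ {intercept:.3f} m")
# These two values correct the global (y, theta) estimate; detecting the
# board's notches corrects the x coordinate in the same pass.
```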
34

Costa, Luiz Cláudio. "Parâmetros de controle do processo de coqueificação das baterias de fornos de coque da COSIPA." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3140/tde-02062008-161728/.

Full text
Abstract:
The process control of a coke plant depends on many variables particular to each plant, which makes plant-specific control models necessary. This work presents a design of experiments, in a pilot oven, to investigate the influence of the main control parameters of coke making on production and heat consumption, for future use in a process control automation model for this plant. The experiment showed statistical significance for the factors temperature and moisture of the charged coal blend, and for the interactions between moisture and temperature and between moisture and coal size, with respect to heat consumption, as well as for the factor temperature with respect to net coking time. In addition to the design of experiments in the pilot oven, an experiment was also carried out in an industrial battery oven, whose methodology proved suitable for a design of experiments at industrial scale. From the experimental data, mathematical equations were also obtained for predicting heat consumption and net coking time.
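The predictive equations mentioned at the end of the abstract are, in essence, regression models fitted to the factorial data. The sketch below shows how such a model with main effects and the reported interactions could be fitted by least squares; the coded factor levels and response values are illustrative placeholders, not the thesis's data.

    import numpy as np

    # Coded two-level factors: temperature (T), blend moisture (M), coal size (G).
    T = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
    M = np.array([-1, -1, 1, 1, -1, -1, 1, 1])
    G = np.array([-1, 1, -1, 1, -1, 1, -1, 1])
    heat = np.array([560., 556., 584., 571., 537., 533., 549., 541.])  # placeholder

    # Columns: intercept, T, M, and the significant interactions T*M and M*G.
    X = np.column_stack([np.ones_like(T), T, M, T * M, M * G]).astype(float)
    coef, *_ = np.linalg.lstsq(X, heat, rcond=None)

    # Prediction equation for heat consumption at coded levels (t, m, g).
    predict = lambda t, m, g: coef @ [1.0, t, m, t * m, m * g]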
APA, Harvard, Vancouver, ISO, and other styles
35

Ponce, Aline Szabo. "Modelagem experimental e controle do processo de hidroconformação de tubos." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3151/tde-15092006-153334/.

Full text
Abstract:
This work addresses the experimental modeling and control of the tube hydroforming (THF) process, in particular the forming of a "T" branch. It comprises the design and construction of a servo-controlled hydroforming device to be mounted on a hydraulic press, the instrumentation of the equipment used, and the implementation of an automatic digital process control system based on a PC with A/D and D/A interface boards. The control applications were developed in a high-level language for the Windows operating system. The application design began with routines for open-loop testing of the system; the remaining routines implement the mathematical functions of the phenomenological model of the hydroforming system and the closed-loop controller. The control strategy was defined during the course of the project and is based on a nonlinear process model linearized around each operating point. To obtain the nominal model for the controller, the dynamics of the actuators and sensors were neglected relative to the process dynamics, and their behavior curves were obtained experimentally.
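The control strategy described, a nonlinear model linearized around each operating point, is commonly realized as a gain-scheduled controller. The sketch below, a simplified assumption rather than the thesis's controller, interpolates PI gains between operating points.

    import bisect

    class GainScheduledPI:
        # schedule: list of (operating_point, kp, ki), sorted by operating
        # point. In practice the gains would come from the locally
        # linearized models; everything here is illustrative.
        def __init__(self, schedule, dt):
            self.points = [s[0] for s in schedule]
            self.gains = [(s[1], s[2]) for s in schedule]
            self.dt = dt
            self.integral = 0.0

        def _gains_at(self, op):
            # Linear interpolation of (kp, ki) between neighboring points.
            i = min(max(bisect.bisect_left(self.points, op), 1),
                    len(self.points) - 1)
            p0, p1 = self.points[i - 1], self.points[i]
            w = (op - p0) / (p1 - p0) if p1 != p0 else 0.0
            kp = (1 - w) * self.gains[i - 1][0] + w * self.gains[i][0]
            ki = (1 - w) * self.gains[i - 1][1] + w * self.gains[i][1]
            return kp, ki

        def update(self, setpoint, measured):
            kp, ki = self._gains_at(setpoint)
            error = setpoint - measured
            self.integral += error * self.dt
            return kp * error + ki * self.integral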
APA, Harvard, Vancouver, ISO, and other styles
36

Thygs, Fabian B. [Verfasser]. "Automation Techniques to Support Experimental Investigation During Systematic Downstream Process Development / Fabian B. Thygs." München : Verlag Dr. Hut, 2017. http://d-nb.info/1149580240/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Cowden, Hindash Alexandra H. "An Experimental Examination of Automatic Interpretation Biases in Major Depression." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7681.

Full text
Abstract:
Cognitive theories of depression have long posited automatic interpretation biases (AIB) as a central contributor to depressed mood. The current study was the first to examine AIB in a clinically defined depressed sample. While assessing AIB using a semantic association paradigm, pupillary reactivity was simultaneously recorded to provide insight into the AIB process. A total of 53 individuals (25 depressed and 28 healthy controls) completed the Word Sentence Association Paradigm for Depression (WSAP-D) while pupillary reactivity was recorded. Results revealed that the depressed group was significantly more likely to endorse negative AIB and less likely to endorse benign AIB compared to healthy controls. The depressed group demonstrated a modest effect size difference indicating they were faster to endorse negative AIB compared to the healthy controls, but did not differ in endorsing benign AIB or in rejecting either valence. Pupillary reactivity was found to differentiate behaviorally defined AIB type from a natural processing condition when counter to theorized, group-relevant AIB. The depressed group demonstrated greater initial pupillary constriction during initial presentation of ambiguous information and comparatively less pupillary dilation during and after endorsing a benign AIB. Taken together, the results suggest that theorized negative AIB and a lack of benign AIB are characteristic of depression, that greater cognitive effort is required to reject interpretations consistent with theorized biases, consistent with reinterpretation processes, and that depressed individuals are less engaged with benign AIB compared to healthy controls, possibly in association with hedonic deficits. Theoretical implications and future directions are discussed.
APA, Harvard, Vancouver, ISO, and other styles
38

Rensfelt, Agnes. "Viscoelastic Materials : Identification and Experiment Design." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-111283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Faragó, Tomáš [Verfasser], and R. [Akademischer Betreuer] Dillmann. "Image-based Control and Automation of High-speed X-ray Imaging Experiments / Tomáš Faragó ; Betreuer: R. Dillmann." Karlsruhe : KIT-Bibliothek, 2017. http://d-nb.info/113936037X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hayajneh, Mohammad Radi Mohammad <1987>. "Nonlinear State Estimation and Control of Autonomous Aerial Robots: Design and Experimental Validation of Smartphone Based Quadrotor." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7297/.

Full text
Abstract:
This work presents developments of Guidance, Navigation and Control (GNC) systems with application to autonomous Unmanned Aerial Vehicles (UAV). Specifically, it presents a navigation system based on nonlinear complementary filters for position, velocity and attitude estimation using low-cost sensors. The proposed filtering method provides attitude estimates in quaternion representation, and position and velocity estimates, by fusing measurements from an Inertial Measurement Unit (IMU), GPS, and a barometer. The Least Squares Method (LSM) was used in gain tuning to find the best fit of the estimated states to precise measurements obtained by a vision-based motion capture system. A complete navigation system was produced by integrating both the attitude and the position filters. The integration of the filtering approach was based primarily on ease of design and computational load. Furthermore, the structure of the filtering design allows for straightforward implementation without the need for high-performance signal processing, and the filters can be tuned completely independently of each other. This work also introduces a nonlinear flight controller for stability and trajectory tracking that is practical for real-time implementation, and demonstrates the ability of a supervisory controller to provide effective waypoint navigation capabilities in an autonomous UAV. The guidance, navigation, and control algorithms were adopted in the design of a novel smartphone-based autopilot for quadrotor aerial platforms. The performance of the proposed work is then evaluated by means of several flight tests. The work also includes a design of advanced navigation and guidance systems based on the Robot Operating System (ROS) for Search And Rescue (SAR) missions. The performance of these navigation and guidance systems was first tested in the laboratory by simulating GPS measurements on a Linux computer mounted on top of a quadrotor; this facilitated moving the experiments from indoor to outdoor settings.
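As an illustration of the complementary-filter idea, the sketch below fuses high-rate accelerometer dead reckoning with low-rate GPS fixes along a single axis. The one-dimensional simplification and the gain values are assumptions for clarity; the thesis's filters operate on full position, velocity and quaternion attitude states.

    class ComplementaryFilter1D:
        def __init__(self, k_p=1.0, k_v=0.5):
            self.pos = 0.0   # estimated position
            self.vel = 0.0   # estimated velocity
            self.k_p = k_p   # position correction gain
            self.k_v = k_v   # velocity correction gain

        def predict(self, accel, dt):
            # High-frequency path: integrate IMU acceleration.
            self.vel += accel * dt
            self.pos += self.vel * dt

        def correct(self, gps_pos, dt):
            # Low-frequency path: pull the estimate toward the GPS fix,
            # so GPS dominates at low frequencies and the IMU at high ones.
            err = gps_pos - self.pos
            self.pos += self.k_p * err * dt
            self.vel += self.k_v * err * dt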
APA, Harvard, Vancouver, ISO, and other styles
41

Keyvani, Alireza. "Robustness in ASR : an experimental study of the interrelationship between discriminant feature-space transformation, speaker normalization and environment compensation." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99772.

Full text
Abstract:
This thesis addresses the general problem of maintaining robust automatic speech recognition (ASR) performance under diverse speaker populations, channel conditions, and acoustic environments. To this end, the thesis analyzes the interactions between environment compensation techniques, frequency warping based speaker normalization, and discriminant feature-space transformation (DFT). These interactions were quantified by performing experiments on the connected digit utterances comprising the Aurora 2 database, using continuous density hidden Markov models (HMM) representing individual digits.
Firstly, given that the performance of speaker normalization techniques degrades in the presence of noise, it is shown that reducing the effects of noise through environment compensation, prior to speaker normalization, leads to substantial improvements in ASR performance. The speaker normalization techniques considered here were vocal tract length normalization (VTLN) and the augmented state-space acoustic decoder (MATE). Secondly, given that discriminant feature-space transformations are known to increase class separation, it is shown that performing speaker normalization using VTLN in a discriminant feature-space improves the performance of this technique. Classes, in our experiments, corresponded to HMM states. Thirdly, an effort was made to achieve higher class discrimination by normalizing the speech data used to estimate the discriminant feature-space transform. Normalization, in our experiments, corresponded to reducing the variability within each class through the use of environment compensation and speaker normalization. Significant ASR performance improvements were obtained when normalization was performed using environment compensation, while our results were inconclusive for the case where normalization consisted of speaker normalization. Finally, a simple modification of MATE, aimed at increasing its noise robustness, is presented. This modification consists of using, during recognition, knowledge of the distribution of warping factors selected by MATE during training.
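The frequency warping underlying VTLN can be illustrated with a common piecewise-linear formulation: frequencies are scaled by a warping factor up to a knee, after which the segment is chosen so that the Nyquist frequency maps onto itself. The knee ratio and exact form below are conventional assumptions, not necessarily those used in the thesis.

    def vtln_warp(freq, alpha, f_nyquist, knee_ratio=0.875):
        # alpha > 1 compresses the spectrum (longer vocal tract), alpha < 1
        # stretches it; typical values lie roughly in 0.88-1.12.
        f_knee = knee_ratio * f_nyquist * min(1.0, 1.0 / alpha)
        if freq <= f_knee:
            return alpha * freq
        # Linear segment that maps f_nyquist onto itself.
        slope = (f_nyquist - alpha * f_knee) / (f_nyquist - f_knee)
        return alpha * f_knee + slope * (freq - f_knee)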
APA, Harvard, Vancouver, ISO, and other styles
42

Burrell, Tiffany. "System Identification in Automatic Database Memory Tuning." Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1583.

Full text
Abstract:
Databases are very complex systems that require database system administrators to perform system tuning in order to achieve optimal performance. Memory tuning is vital to the performance of a database system because when the database workload exceeds its memory capacity, the results of the queries running on the system are delayed, which can cause substantial user dissatisfaction. To address this problem, this thesis presents a platform modeled after a closed feedback control loop to control the level of multi-query processing. This platform provides two key assets. First, the system identification is acquired, which is one of the two crucial steps involved in developing a closed feedback loop. Second, the platform provides a means to experimentally study the database tuning problem and to verify the effectiveness of research ideas related to database performance.
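A closed feedback loop of the kind described can be sketched as a simple integral-style controller acting on the multiprogramming level (MPL). The hook functions, gain and target below are placeholders standing in for the thesis's platform, not its actual interface.

    def tune_mpl(get_response_time, set_mpl, target_rt, mpl=8,
                 gain=0.5, steps=20):
        # Repeatedly measure response time and nudge the MPL so that the
        # measured output converges toward the target response time.
        for _ in range(steps):
            set_mpl(mpl)
            rt = get_response_time()
            error = (target_rt - rt) / target_rt
            mpl = max(1, round(mpl * (1 + gain * error)))
        return mpl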
APA, Harvard, Vancouver, ISO, and other styles
43

Jayasinghe, Indika D. "An automated approach to create, manage and analyze large-scale experiments for elastic n-tier application in clouds." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49098.

Full text
Abstract:
Cloud computing has revolutionized the computing landscape by providing on-demand, pay-as-you-go access to elastically scalable resources. Many applications are now being migrated from on-premises data centers to public clouds; yet, the transition to the cloud is not always straightforward and smooth. An application that performed well in an on-premises data center may not perform identically in public computing clouds, because many variables, such as virtualization, can impact the application's performance. By collecting significant performance data through experimental study, the cloud's complexity, particularly as it relates to performance, can be revealed. However, conducting large-scale system experiments is particularly challenging because of the practical difficulties that arise during experimental deployment, configuration, execution and data processing. In spite of these complexities, we argue that a promising approach for addressing these challenges is to leverage automation to facilitate the exhaustive measurement of large-scale experiments. Automation provides numerous benefits: it removes the error-prone and cumbersome involvement of human testers, reduces the burden of configuring and running large-scale experiments for distributed applications, and accelerates the process of reliable application testing. In our approach, we have automated three key activities associated with the experiment measurement process: create, manage and analyze. In create, we prepare the platform and deploy and configure applications. In manage, we initialize the application components (in a reproducible and verifiable order), execute workloads, collect resource monitoring and other performance data, and parse and upload the results to the data warehouse. In analyze, we process the collected data using various statistical and visualization techniques to understand and explain performance phenomena. The user provides an experiment configuration file and merely receives the results, while the framework does everything else. We enable the automation through code generation. From an architectural viewpoint, our code generator adopts the compiler approach of multiple, serial transformative stages; the hallmarks of this approach are that stages typically operate on an XML document that serves as the intermediate representation, and that XSLT performs the code generation. Our automated approach to large-scale experiments has enabled cloud experiments to scale well beyond the limits of manual experimentation, and it has enabled us to identify non-trivial performance phenomena that would not have been observable otherwise.
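The create/manage/analyze workflow can be pictured with a small config-driven skeleton. The script names, config keys and single latency metric below are hypothetical stand-ins; the actual framework generates its automation code from an XML experiment specification via XSLT.

    import statistics
    import subprocess

    def create(config):
        # Prepare the platform and deploy the application on each host.
        for host in config["hosts"]:
            subprocess.run(["ssh", host, config["deploy_cmd"]], check=True)

    def manage(config):
        # Execute the workload repeatedly and collect one sample per run.
        samples = []
        for _ in range(config["runs"]):
            out = subprocess.run(config["workload_cmd"], shell=True,
                                 capture_output=True, text=True, check=True)
            samples.append(float(out.stdout.strip()))
        return samples

    def analyze(samples):
        # Reduce raw samples to summary statistics for later visualization.
        return {"mean": statistics.mean(samples),
                "stdev": statistics.pstdev(samples)}

    config = {"hosts": ["node1"], "deploy_cmd": "./deploy.sh",
              "workload_cmd": "./run_workload.sh", "runs": 5}
    # On real infrastructure: create(config); print(analyze(manage(config)))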
APA, Harvard, Vancouver, ISO, and other styles
44

Pritisanac, Iva. "Automatic assignment of methyl resonances using experimental NMR data and graph theory." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:632b78fb-e2c1-455f-834a-a9aa31b74b70.

Full text
Abstract:
Selective isotope labeling of methyl groups allows for atomic-resolution insight into the structures and dynamics of high molecular mass proteins by nuclear magnetic resonance (NMR) spectroscopy. Widespread application of the methodology has been limited due to the challenges and costs associated with assigning 1H-13C resonances to particular amino acids within the protein. Here, I present a novel structure-based automatic assignment strategy, Methyl Assignment by Graph MAtching (MAGMA), which relates experimentally measured methyl-methyl connectivity (NOESY/DREAM) to inter-methyl distances extracted from a high-resolution protein structure. MAGMA features exact algorithms for graph-subgraph isomorphism and maximal common edge subgraph that can sample all theoretically possible methyl resonance assignments. MAGMA was applied to a benchmark of eight proteins for which high-resolution structures, NOESY or DREAM data, and methyl NMR assignments were available. On this benchmark, MAGMA provides a minimum of 31% and a maximum of 89% methyl residue assignments, with complete accuracy. On the larger proteins in the benchmark, MAGMA outperforms alternative automated methyl resonance assignment programs in both accuracy and coverage. I assessed the influence of different input structures on the accuracy of MAGMA assignments, and concluded that joining the assignment results from multiple structures guarantees accuracy in cases where the structural form in solution is unknown. Finally, I demonstrate the utility of MAGMA in two novel applications on molecular chaperones. In a ligand binding study involving the N-domain of human HSP90α, MAGMA confidently assigned 38% of the input methyl residues when the results were joined over two protein conformations. MAGMA's assignments were then combined with inter-molecular (ligand-methyl) NOEs and HADDOCK to generate models of the protein-ligand complex. The generated models, while featuring the correct ligand binding site, were of lower quality (0.5-2.5 Å higher ligand RMSD) when compared to models generated with previously published methyl assignments. Incorporating inter-molecular NOE restraints in the calculation enabled MAGMA to discriminate between certain ligand binding modes, and led to an increase in the number of confident assignments, demonstrating the potential for using such restraints in the future. Finally, a generic data collection protocol for MAGMA was established in a de novo assignment application on the 400 kDa archaeal small heat shock protein Hsp16.5. The protocol involves measurement of inter-methyl NOEs, determination of intra-residue NOEs between Leu-δ12 and Val-γ12 groups, and discrimination between Leu and Val resonances by preparation of an exclusively Val labelled protein sample. The high sparsity of inter-methyl NOE data obtained for this system led to predominantly ambiguous methyl resonance assignments, which can be used for guiding further experimental efforts.
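The graph-matching core of the method can be illustrated with an off-the-shelf subgraph-isomorphism search: a structure-derived contact graph is matched against the observed NOE connectivity graph, and each mapping is a candidate assignment. The toy graphs below are invented, and the induced-subgraph matcher used here is a simplification of MAGMA's exact algorithms.

    import networkx as nx
    from networkx.algorithms import isomorphism

    # Contact graph from a structure: methyls joined when their distance is
    # within an NOE-detectable cutoff (node names and edges are invented).
    structure = nx.Graph()
    structure.add_edges_from([("L10", "V23"), ("V23", "I47"),
                              ("L10", "I47"), ("I47", "L80")])

    # Data graph: unassigned resonances joined by observed NOE cross-peaks.
    noe = nx.Graph()
    noe.add_edges_from([("peak1", "peak2"), ("peak2", "peak3"),
                        ("peak1", "peak3")])

    # Each subgraph isomorphism maps structure methyls onto resonances; the
    # assignments shared by all solutions are the confident ones.
    gm = isomorphism.GraphMatcher(structure, noe)
    candidates = list(gm.subgraph_isomorphisms_iter())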
APA, Harvard, Vancouver, ISO, and other styles
45

Hahn, Edward C. (Edward Chun). "An experimental study of the effects of automation on pilot situational awareness in the datalink ATC environment." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/43259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Zhu, Shaoling. "Experimental Study on Low Power Wireless Sensor Network Protocols with Native IP Connectivity for Building Automation." Thesis, KTH, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175792.

Full text
Abstract:
The recent development of wired and wireless communication technologies makes building automation the next battlefield of the Internet of Things. Multiple standards have been drafted to accommodate the complex environment and minimize the resource consumption of wireless sensor networks. This Master Thesis presents a thorough experimental evaluation, with the latest Contiki network stack and the TI CC2650 platform, of network performance indicators, including signal coverage, round trip time, packet delivery ratio and power consumption. The Master Thesis also provides a comparison of the network protocols for low power operation, the existing operating systems for wireless sensor networks, and the chips that operate on various network protocols. The results show that the CC2650 is a promising competitor for future development in the market of building automation.
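The performance indicators named above reduce to simple statistics over ping-style test runs. A minimal sketch, assuming the log of one run is a list of round-trip times for the replies that arrived:

    import statistics

    def summarize_run(rtts_ms, sent):
        # Packet delivery ratio and RTT statistics for one test run.
        received = len(rtts_ms)
        result = {"pdr": received / sent}
        if rtts_ms:
            ordered = sorted(rtts_ms)
            result["rtt_mean_ms"] = statistics.mean(rtts_ms)
            result["rtt_p95_ms"] = ordered[int(0.95 * (received - 1))]
        return result

    print(summarize_run([12.1, 11.8, 13.0, 12.4], sent=5))  # pdr = 0.8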
APA, Harvard, Vancouver, ISO, and other styles
47

Keenaghan, Kevin Michael. "A Novel Non-Acoustic Voiced Speech Sensor: Experimental Results and Characterization." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0114104-144946/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Davis, Jonathan Michael. "Diesel Engine Experimental Design and Advanced Analysis Techniques." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1313526640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Huang, Chao-Min. "Robust Design Framework for Automating Multi-component DNA Origami Structures with Experimental and MD coarse-grained Model Validation." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu159051496861178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Leighton, J. "Automatic and intentional imitation : experiments with typically developing adults and adults with autism spectrum disorders." Thesis, University College London (University of London), 2007. http://discovery.ucl.ac.uk/1445662/.

Full text
Abstract:
There are four main theories addressing the core mechanisms of imitation. Two of these theories suggest that imitation is mediated by a special-purpose mechanism and two suggest that it is mediated by general learning and motor control mechanisms. The main purpose of this thesis is to examine whether the question of how we imitate is best answered by specialist or generalist theories. In order to do this, experiments have been carried out using both intentional and automatic imitation paradigms. These paradigms have been used to examine imitation skills both in typically developing individuals and individuals with autism spectrum disorder (ASD). The first empirical chapter examines the role of goals in imitation. Specialist theories claim that goals play an integral role in explaining how we imitate. Some of the best evidence in support of this view is provided by error patterns generated in the pen-and-cups task. However, the results from variants of the pen-and-cups task, presented in this chapter, are more consistent with the idea that general processes, rather than goals, guide imitative behaviour. Chapters 3 and 4 examine imitative abilities in ASD using intentional and automatic imitation paradigms in order to ascertain whether there is an imitation specific impairment in ASD. Such an impairment would appear to be consistent with specialist theories of imitation. However, the findings from these chapters imply that the basic mechanism mediating imitation is not impaired in ASD, and that poor performance on complex intentional imitation tasks in ASD may be due to more generalised deficits, not specific to imitation. The final empirical chapter addresses effector specificity in imitation and whether this phenomenon can distinguish between specialist and generalist accounts of imitation. Using an automatic imitation paradigm, partial effector specificity is demonstrated, which is consistent with claims made by generalist theories. The first three empirical chapters therefore challenge some of the evidence that has been put forward to support specialist theories, and the final empirical chapter provides some specific support for generalist theories. Thus, the findings reported in this thesis are consistent with the hypothesis that imitation is mediated by general processes rather than by a special-purpose mechanism.
APA, Harvard, Vancouver, ISO, and other styles