
Dissertations / Theses on the topic 'Test input'



Consult the top 50 dissertations / theses for your research on the topic 'Test input.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ballkoci, Rea. "Input Partitioning Impact on Combinatorial Test Coverage." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48500.

Abstract:
Software testing is a crucial activity in the software lifecycle, as it can establish with a certain confidence that the software will behave according to its specified behavior. However, due to the large input space, it is almost impossible to check all the combinations that might possibly lead to failures. Input partitioning and combinatorial testing are two techniques that can partially solve the test creation and selection problem by minimizing the number of test cases to be executed. These techniques work closely together, with input partitioning providing a selection of values that are more likely to expose software faults, and combinatorial testing generating all the possible combinations between two to six parameters. The aim of this thesis is to study how exactly input partitioning impacts combinatorial test coverage, in terms of the measured t-way coverage percentage and the number of test cases missing to achieve full t-way coverage. For this purpose, six manually written test suites were provided by Bombardier Transportation. We performed an experiment in which the combinatorial coverage is measured for four systematic input partitioning strategies, using the Combinatorial Coverage Measurement (CCM) tool. The strategies are based on the interface documentation, where we can partition using information about data types or predefined partitions, and on the specification documentation, where we can partition with or without Boundary Value Analysis (BVA). The results show that input partitioning affects combinatorial test coverage through two factors: the number of partitions or intervals, and the number of representative values per interval. A high number of values leads to a number of combinations that increases exponentially.
The strategy based on specifications without considering BVA always scored the highest coverage per test suite, ranging between 22% and 67%, in comparison to the strategy with predefined partitions, which almost always had the lowest score, ranging from 4% to 41%. The strategy based on data types consistently had the second highest combinatorial coverage, ranging from 8% to 56%, while the strategy that considers BVA varied strongly depending on the number of non-boolean parameters and their respective number of boundary values, ranging from 3% to 41%. In our study, other factors also affected the combinatorial coverage, such as the number of manually created test cases, the data types of the parameters, and their values present in the test suites. In conclusion, an input partitioning strategy must be chosen carefully to exercise parts of the system that can potentially result in the discovery of unintended behavior. At the same time, a test engineer should also consider the number of chosen values. Different strategies can generate different combinations, and thus influence the obtained combinatorial coverage. Tools that automate the generation of the combinations are advised in order to achieve 100% combinatorial coverage.
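The t-way coverage measure described in this abstract can be illustrated with a small sketch. This is not the CCM tool used in the thesis; the function, parameter partitions, and test suite below are hypothetical, and only 2-way (pairwise) coverage is computed:

```python
from itertools import combinations

def pairwise_coverage(test_suite, domains):
    """Fraction of all 2-way value combinations covered by a test suite.

    test_suite: list of tuples, one value per parameter.
    domains: list of lists, the representative values chosen per parameter.
    """
    param_pairs = list(combinations(range(len(domains)), 2))
    # Total 2-way combinations implied by the chosen partitions.
    total = sum(len(domains[i]) * len(domains[j]) for i, j in param_pairs)
    # 2-way combinations actually exercised by the suite.
    covered = {(i, j, t[i], t[j]) for t in test_suite for i, j in param_pairs}
    return len(covered) / total

# Two boolean parameters and one parameter partitioned into three intervals.
domains = [[0, 1], [0, 1], ["low", "mid", "high"]]
suite = [(0, 0, "low"), (1, 1, "high"), (0, 1, "mid")]
print(round(pairwise_coverage(suite, domains), 2))  # → 0.56 (9 of 16 pairs)
```

Note how adding one representative value to any partition grows `total` multiplicatively, which mirrors the abstract's observation that the number of combinations increases exponentially with the number of chosen values.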
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Bo. "A test selection strategy based on input-output relation analysis." Thesis, University of Ottawa (Canada), 1988. http://hdl.handle.net/10393/5484.

3

Menon, Sreekumar Singh Adit D. "Output hazard-free test generation methodology." Auburn, Ala, 2009. http://hdl.handle.net/10415/1616.

4

Vemula, Sudheer Stroud Charles E. "Built-in self-test for input/output cells in field programmable gate arrays." Auburn, Ala., 2006. http://repo.lib.auburn.edu/2006%20Summer/Theses/VEMULA_SUDHEER_17.pdf.

5

Lerner, Lee W. Stroud Charles E. "Built-In Self-Test for input/output tiles in field programmable gate arrays." Auburn, Ala, 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Electrical_and_Computer_Engineering/Thesis/Lerner_Lee_53.pdf.

6

Bozkurt, M. "Automated realistic test input generation and cost reduction in service-centric system testing." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1400300/.

Abstract:
Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations imposed by the SOA environment. One of the most important problems in ScST is realistic test data generation. Realistic test data is often generated manually or using an existing source, and is thus hard to automate and laborious to generate. One of the limitations that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to the aforementioned problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and the dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, our approach achieved between 93% and 100% success rates in generating realistic data, while state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates this problem as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto-optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns such as invocation cost and test case reliability.
In experimental analysis, the approach achieved reductions between 69% and 98.6% in the monetary cost of service invocations during testing.
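The cost-aware minimisation this abstract describes can be approximated, in a much-simplified single-objective form, by a greedy selection that maximises newly covered requirements per unit of invocation cost. The function, test-case names, and costs below are hypothetical illustrations, not the thesis's multi-objective Pareto approach:

```python
def greedy_cost_aware_minimise(test_cases):
    """Greedy sketch: repeatedly pick the test case with the best ratio of
    newly covered requirements to invocation cost until all are covered.

    test_cases: list of (name, set_of_requirements, invocation_cost).
    """
    remaining = set().union(*(reqs for _, reqs, _ in test_cases))
    selected, total_cost = [], 0.0
    while remaining:
        # Best ratio of newly covered requirements to cost.
        name, reqs, cost = max(
            test_cases, key=lambda tc: len(tc[1] & remaining) / tc[2])
        gained = reqs & remaining
        if not gained:
            break  # no remaining test case covers what is left
        selected.append(name)
        total_cost += cost
        remaining -= gained
    return selected, total_cost

cases = [("t1", {"r1", "r2"}, 1.0),
         ("t2", {"r2", "r3"}, 2.0),
         ("t3", {"r3"}, 0.5)]
print(greedy_cost_aware_minimise(cases))  # → (['t1', 't3'], 1.5)
```

A true multi-objective formulation would instead keep the whole Pareto front of (coverage, cost, reliability) trade-offs rather than collapsing them into one ratio.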
7

Busson, Laurent. "Evolution of direct diagnostic techniques in Virology; analytical performances and clinical input." Doctoral thesis, Universite Libre de Bruxelles, 2020. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/313391.

Abstract:
Virological diagnostics is a topical subject, particularly in light of recent epidemics and pandemics such as the influenza A(H1N1) pandemic in 2009 or the spread of the Zika virus in the Americas and the Pacific region between 2014 and 2017, associated with cases of microcephaly and Guillain-Barré syndrome. More recently still, in August 2018, the health minister of the Democratic Republic of the Congo announced the country's 10th Ebola virus epidemic, and in December 2019 the coronavirus SARS-CoV-2 gave rise to a pandemic originating in China. With the growing number of migrants and travellers favouring the spread of viral diseases, diagnostic laboratories must be prepared to identify both common and imported viruses. The oldest virological diagnostic techniques are tending to become obsolete following the rapid development of molecular techniques since the 1990s. Nevertheless, our laboratory still uses a mix of molecular and non-molecular techniques. The objectives of this work are to review the different techniques commonly used for the direct detection of viruses, with their advantages and disadvantages, and to offer a reflection on the place of each technique, in 2020, in a diagnostic laboratory. We first address cell cultures and emphasise their versatility, which sometimes reveals unsuspected microorganisms. We illustrate this point with an article reporting the detection of Chlamydia trachomatis serovar L, responsible for lymphogranuloma venereum, in samples submitted for suspected herpes infection. The work then focuses on the diagnosis of respiratory viral infections.
We review the principles of antigen detection tests and discuss their limits, based on an article on the diagnosis of influenza A and B viruses by three different immunochromatographic tests. That article shows that test sensitivity varies with the viral load in the sample and with the virus subtype. We continue with nucleic acid amplification tests (molecular tests), explaining the PCR (Polymerase Chain Reaction) technique and an isothermal amplification technique (Nicking Enzyme Amplification Reaction, NEAR). We illustrate this with an article evaluating the Alere i influenza A&B test (NEAR technique) against the Sofia influenza A+B test (immunochromatography). That article shows a sensitivity gain for the Alere i over the Sofia for the diagnosis of influenza A but not influenza B. It also constitutes preliminary work on assessing the usefulness of a rapid PCR technique in patient management. The conclusion is that this type of technique could help reduce hospitalisations, the prescription of additional examinations, and antibiotic use, and would also allow more appropriate prescription of oseltamivir for the treatment of influenza. The key point is that the impact of the result is all the greater when it is delivered early in patient management, ideally while patients are still in the emergency department. Following the work on the Alere i, we undertook an evaluation of a multiplex PCR test (FilmArray Respiratory Panel) for virus diagnosis, to see whether detecting a larger number of pathogens could have a greater impact on patient management. This evaluation gave rise to two articles.
The first details the advantages and disadvantages of the various diagnostic tools for detecting respiratory viruses and serves as a survey of the tests currently used in virology laboratories. The second article focuses on the contribution of the FilmArray to patient management. The conclusion is that it is not the test result that affects patient management but rather other factors, notably age and biological inflammatory markers. We end this work with an overview of sequencing techniques, which will undoubtedly be used more and more for diagnosis in virology.
Doctorate in Medical Sciences (Medicine)
8

Chembil, Palat Ramesh. "VT-STAR design and implementation of a test bed for differential space-time block coding and MIMO channel measurements." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/35712.

Abstract:
Next generation wireless communications require transmission of reliable high data rate services. Second generation wireless communication systems use a single-input multiple-output (SIMO) channel in the reverse link, meaning one transmit antenna at the user terminal and multiple receive antennas at the base station. Recently, information theoretic research has shown an enormous potential growth in the capacity of wireless systems by using multiple antenna arrays at both ends of the link. Space-time coding exploits the spatial-temporal diversity provided by multiple-input multiple-output (MIMO) channels, significantly increasing both system capacity and the reliability of the wireless link. The Virginia Tech Space-Time Advanced Radio (VT-STAR) system presents a test bed to demonstrate the capabilities of space-time coding techniques in real time. Core algorithms are implemented on Texas Instruments TMS320C67 Evaluation Modules (EVM). The radio frequency subsystem is composed of multi-channel transmitter and receiver chains implemented in hardware for over-the-air transmission. The capabilities of the MIMO channel are demonstrated in a non-line-of-sight (NLOS) indoor environment. To characterize the capacity gains in an indoor environment, the test bed was also modified to take channel measurements. This thesis reports the system design of VT-STAR and the channel capacity gains observed in an indoor environment for MIMO channels.
Master of Science
9

Hunt, Frances Jane. "A semantic contribution to verbal short-term memory : a test of operational definitions of 'semantic similarity' and input versus output processes." Thesis, University of Greenwich, 2007. http://gala.gre.ac.uk/6192/.

Abstract:
Baddeley and Hitch (1974; Baddeley, 1986, 2000) propose that coding in verbal short-term memory is phonological and that semantic codes are employed in long-term memory. Semantic coding in short-term memory has been investigated to a far lesser degree than phonological coding, and the findings have been inconsistent. Some theorists propose that semantic coding is possible (e.g. Nairne, 1990) while others suggest that semantic factors act during recall (e.g. Saint-Aubin & Poirier, 1999a). The following body of work investigates whether semantic coding is possible in short-term memory and examines what constitutes 'semantic similarity'. Chapter 2 reports two visually presented serial recall experiments comparing semantically similar and dissimilar lists. This revealed that context greatly influences the recall of homophones. Chapter 3 illustrated that category members and synonyms enhanced item recall. However, categories had little impact on order retention, whereas synonyms had a detrimental effect. Chapter 4 employed a matching-span task, which is purported to differentiate between input and output processes. It was found that synonyms had a detrimental effect on recall, indicative of the effect being related to input processes. Chapter 5 employed mixed lists using backward and forward recall. It was found that the important factor was that the semantically similar items should be encountered first in order to maximise their impact. This supported the contention of the importance of input factors. Chapter 6 compared phonologically and semantically similar items using an open and a closed word pool. It was found that semantic and phonological similarity have comparable effects when an open word pool and a free recall scoring method are employed. Overall, the results were consistent with the idea that phonological and semantic codes can be employed in short-term recall.
10

Owens, Madeline. "Emmetropization in Arthropods: A New Vision Test in Several Arthropods Suggests Visual Input may not be Necessary to Establish Correct Focusing." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563527198165493.

11

McNamara, Kevin T. "A theoretical model for education production and an empirical test of the relative importance of school and nonschool inputs." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/53629.

Abstract:
The importance of public education in rural development has received increasing attention from local and state policy makers as competition for new industry has intensified throughout rural America. Uncertainty about the relationships of public and private inputs to education output, however, presents problems to state and local officials and parents interested in improving the quality and quantity of the public education system. This research examines the education process in a production function framework to identify the relationships of education inputs to education output. A theoretical model that combines public and household decision making into an education production process is used as the basis for the empirical model that is developed. The estimated model includes input measures for school, family, volunteer and student inputs to education production and is estimated with cross-sectional data for Virginia counties. The expenditure measure used in the model is specified as a polynomial lag. The model also is specified as a joint-product production process. The results of the analysis provide evidence of the importance of expenditures in education production and indicate that the impact of changes in expenditures occurs over time. The number and educational levels of teachers also are associated with education output, as are household and student inputs. Volunteer input measures are not statistically significant in the estimated equations, a reflection of the difficulty of specifying and measuring specific volunteer inputs to the education production process. The empirical results do not support a joint-production hypothesis between outputs as measured by achievement test scores and the school continuation rate.
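The polynomial-lag expenditure specification mentioned in this abstract can be written in a generic form; the symbols below are illustrative rather than the author's exact model:

```latex
% Education output A for county c as a function of school, household,
% volunteer and student inputs, with a polynomial (Almon) distributed
% lag on per-pupil expenditures E over k prior years:
A_c = \alpha
    + \sum_{i=0}^{k} \beta_i \, E_{c,\,t-i}
    + \gamma' F_c + \delta' V_c + \lambda' S_c + \varepsilon_c,
\qquad
\beta_i = \sum_{j=0}^{p} \theta_j \, i^{j}
```

Constraining the lag weights β_i to lie on a low-degree polynomial reduces the number of free parameters while still letting the effect of spending accumulate over several years, which is the point the abstract makes about expenditure impacts occurring over time.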
Ph. D.
12

Grindal, Mats. "Handling combinatorial explosion in software testing." Doctoral thesis, Linköping : Department of Computer and Information Science, Linköpings universitet, 2007. http://www.bibl.liu.se/liupubl/disp/disp2007/tek1073s.pdf.

13

Kim, Nungsoo. "Extraction of the second-order nonlinear response from model test data in random seas and comparison of the Gaussian and non-Gaussian models." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3183.

Abstract:
This study presents the results of an extraction of the second-order nonlinear responses from model test data. Emphasis is given to the effects of assumptions made for the Gaussian and non-Gaussian input on the estimation of the second-order response, employing the quadratic Volterra model. The effects of sea severity and data length on the estimation of the response are also investigated. The data sets used in this study are surge forces on a fixed barge, the surge motion of a compliant mini TLP (Tension Leg Platform), and surge forces on a fixed and truncated column. Sea states range from rough sea (Hs=3m) to high sea (Hs=9m) for the barge case, very rough sea (Hs=3.9m) for the mini TLP, and phenomenal sea (Hs=15m) for the truncated column. After the estimation of the response functions, the outputs are reconstructed and the second-order nonlinear responses are extracted with all the QTF distributed over the entire bifrequency domain. The reconstituted time series are compared with the experiment in both the time and frequency domains. For the effects of data length on the estimation of the response functions, 3-, 15-, and 40-hour data were investigated for the barge, but 3-hour data were used for the mini TLP and the fixed and truncated column due to the lack of longer records. The effects of sea severity on the estimation of the response functions are found in both methods. The non-Gaussian estimation method is more affected by data length than the Gaussian method.
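The quadratic Volterra model this abstract employs can be sketched in its standard time-domain form; the notation is generic rather than the author's:

```latex
% Second-order (quadratic) Volterra model of the response y(t)
% to a wave input x(t):
y(t) = \int_{0}^{\infty} h_1(\tau)\, x(t-\tau)\, d\tau
     + \int_{0}^{\infty}\!\!\int_{0}^{\infty} h_2(\tau_1, \tau_2)\,
       x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2
```

The quadratic transfer function (QTF) H₂(ω₁, ω₂), defined over the bifrequency domain that the abstract refers to, is the two-dimensional Fourier transform of the second-order kernel h₂.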
14

Pandiya, Nimish. "Design and Validation of a MIMO Nonlinear Vibration Test Rig with Hardening Stiffness Characteristics in Multiple Degrees of Freedom." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1504879018436068.

15

Paiva, Sofia Larissa da Costa. "Aplicação de modelos de defeitos na geração de conjuntos de teste completos a partir de Sistemas de Transição com Entrada/Saída." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11072016-172020/.

Abstract:
Model-Based Testing (MBT) has emerged as a promising strategy for the minimization of problems related to time and resource limitations in software testing, and aims at checking whether the implementation under test is in compliance with its specification. Test cases are automatically generated from behavioral models produced during the software development life cycle. Among the existing modeling techniques, Input/Output Transition Systems (IOTSs) have been widely used in MBT because they are more expressive than Finite State Machines (FSMs). Despite the existence of test generation methods for IOTSs, the problem of selection of test cases is an important and difficult topic. The current methods for IOTSs are non-deterministic, in contrast to the existing theory for FSMs, which provides complete fault coverage guarantees based on a fault model. This manuscript addresses the application of fault models to deterministic test generation methods for IOTSs. A method for test suite generation based on the W method for FSMs is proposed for IOTSs. It generates test suites in a deterministic way and also satisfies sufficient conditions for coverage of the specification and of all faults in a given fault domain. Empirical studies evaluated its applicability and effectiveness: experimental results on the cost of test suite generation using randomly generated IOTSs, and a case study with specifications from industry, show the effectiveness of the generated test suites in relation to the traditional method of Tretmans.
16

Plessner, Von Roderick. "A Study of the Influence Undergraduate Experiences Have onStudent Performance on the Graduate Management Admission Test." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1401294447.

17

CARVALHO, Gustavo Henrique Porto de. "NAT2TEST: generating test cases from natural language requirements based on CSP." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/17929.

Abstract:
High trustworthiness levels are usually required when developing critical systems, and model-based testing (MBT) techniques play an important role in generating test cases from specification models. Concerning critical systems, these models are usually created using formal or semi-formal notations. Moreover, it is also desirable to clearly and formally state the conditions necessary to guarantee that an implementation is correct with respect to its specification by means of a conformance relation, which can be used to prove that the test generation strategy is sound. Despite the benefits of MBT, those who are not familiar with the models' syntax and semantics may be reluctant to adopt these formalisms. Furthermore, most of these models are not available at the very beginning of the project, when usually only natural-language requirements are available; therefore, the use of MBT is postponed. Here, we propose an MBT strategy for generating test cases from controlled natural-language (CNL) requirements: NAT2TEST, which spares the user from knowing the syntax and semantics of the underlying notations, besides allowing early use of MBT via natural-language processing techniques; the formal and semi-formal models internally used by our strategy are automatically generated from the natural-language requirements. Our approach is tailored to data-flow reactive systems: a class of embedded systems whose inputs and outputs are always available as signals. These systems can also have time-based behaviour, which may be discrete or continuous. The NAT2TEST strategy comprises a number of phases. Initially, the requirements are syntactically analysed according to a CNL we proposed to describe data-flow reactive systems. Then, the requirements' informal semantics are characterised based on case grammar theory. Afterwards, we derive a formal representation of the requirements considering a model of data-flow reactive systems we defined.
Finally, this formal model is translated into communicating sequential processes (CSP) to provide means for generating test cases. We prove that our test generation strategy is sound with respect to our timed input-output conformance relation based on CSP: csptio. Besides CSP, we explore the generation of other target notations (SCR and IMR) from which we can generate test cases using commercial tools (T-VEC and RT-Tester, respectively). The whole process is fully automated by the NAT2TEST tool. Our strategy was evaluated considering examples from the literature, the aerospace (Embraer) and the automotive (Mercedes) industry. We analysed performance and the ability to detect defects generated via mutation. In general, our strategy outperformed the considered baseline: random testing. We also compared our strategy with relevant commercial tools.
18

Nguyen, Ngo Minh Thang. "Test case generation for Symbolic Distributed System Models : Application to Trickle based IoT Protocol." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC092.

Abstract:
Les systèmes distribués sont composés de nombreux sous-systèmes distants les uns des autres. Afin de réaliser une même tâche, les sous-systèmes communiquent à la fois avec l’environnement par des messages externes et avec d’autres sous-systèmes par des messages internes, via un réseau de communication. En pratique, les systèmes distribués mettent en jeu plusieurs types d’erreurs, propres aux sous-systèmes les constituant, ou en lien avec les communications internes. Afin de s’assurer de leur bon fonctionnement, savoir tester de tels systèmes est essentiel. Cependant, il est très compliqué de les tester car sans horloge globale, les sous-systèmes ne peuvent pas facilement synchroniser leurs envois de messages, ce qui explique l’existence des situations non déterministes. Le test à base de modèles (MBT) est une approche qui consiste à vérifier si le comportement d’un système sous test (SUT) est conforme à son modèle, qui spécifie les comportements souhaités. MBT comprend deux étapes principales: la génération de cas de test et le calcul de verdict. Dans cette thèse, nous nous intéressons à la génération de cas de test dans les systèmes distribués. Nous utilisons les systèmes de transition symbolique temporisé à entrées et sorties (TIOSTS) et les analysons à l’aide des techniques d’exécution symbolique pour obtenir les comportements symboliques du système distribué. Dans notre approche, l’architecture de test permet d’observer au niveau de chaque soussystème à la fois les messages externes émis vers l’environnement et les messages internes reçus et envoyés. Notre framework de test comprend plusieurs étapes: sélectionner un objectif de test global, défini comme un comportement particulier exhibé par exécution symbolique, projeter l’objectif de test global sur chaque sous-système pour obtenir des objectifs de test locaux, dériver des cas de test unitaires pour chacun des sous-systèmes. 
L’exécution du test consiste à exécuter des cas de test locaux sur les sous-systèmes paramétrés par les objectifs de tests en calculant à la volée les données de test à soumettre au sous-système en fonction des données observées. Enfin, nous mettons en œuvre notre approche sur un cas d’étude décrivant un protocole utilisé dans le contexte de l’IoT.
Distributed systems are composed of many distant subsystems. In order to achieve a common task, subsystems communicate both with the local environment by external messages and with other subsystems by internal messages through a communication network. In practice, distributed systems are likely to reveal many kinds of errors, so we need to test them before reaching a certain level of confidence in them. However, testing distributed systems is complicated due to their intrinsic characteristics. Without global clocks, subsystems cannot synchronize messages, leading to non-deterministic situations. Model-Based Testing (MBT) aims at checking whether the behavior of a system under test (SUT) is consistent with its model, which specifies the expected behaviors. MBT is useful for two main steps: test case generation and verdict computation. In this thesis, we are mainly interested in the generation of test cases for distributed systems. To specify the desired behaviors, we use Timed Input Output Symbolic Transition Systems (TIOSTS), provided with symbolic execution techniques to derive behaviors of the distributed system. Moreover, we assume that in addition to external messages, a local test case observes internal messages received and sent by the co-localized subsystem. Our testing framework includes several steps: selecting a global test purpose using symbolic execution on the global system, projecting the global test purpose to obtain a local test purpose per subsystem, and deriving a unitary test case per subsystem. Then, test execution consists of executing local test cases by submitting data consistent with the local test purpose and computing a test verdict on the fly. Finally, we apply our testing framework to a case study issued from a protocol popular in the context of IoT.
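The projection step described above can be sketched in a few lines: a global trace is projected onto each subsystem by keeping only the events observable at that subsystem's interface, i.e. its external I/O plus the internal messages it exchanges. This is an illustration of the idea only, not the thesis's algorithm; the event names and the trace encoding are invented for the example.

```python
# Sketch (illustrative only): projecting a global trace of a distributed
# system onto per-subsystem local traces.

def project(global_trace, subsystem):
    """Keep only the events observable at one subsystem's interface:
    its external I/O plus internal messages it sends or receives."""
    return [e for e in global_trace if subsystem in e["at"]]

# A global behavior: S1 receives an external input, sends an internal
# message m to S2, which then emits an external output.
trace = [
    {"event": "in?start", "at": {"S1"}},
    {"event": "int!m",    "at": {"S1", "S2"}},  # internal: visible at both ends
    {"event": "out!done", "at": {"S2"}},
]

local_s1 = project(trace, "S1")
local_s2 = project(trace, "S2")
```

Each local tester would then only ever see its own projection, which is why non-determinism across interfaces cannot be resolved locally.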
APA, Harvard, Vancouver, ISO, and other styles
19

Benharrat, Nassim. "Model-Based Testing of Timed Distributed Systems : A Constraint-Based Approach for Solving the Oracle Problem." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC021/document.

Full text
Abstract:
Le test à base de modèles des systèmes réactifs est le processus de vérifier si un système sous test (SUT) est conforme à sa spécification. Il consiste à gérer à la fois la génération des données de test et le calcul de verdicts en utilisant des modèles. Nous spécifions le comportement des systèmes réactifs à l'aide des systèmes de transitions symboliques temporisées à entrée-sortie (TIOSTS). Quand les TIOSTSs sont utilisés pour tester des systèmes avec une interface centralisée, l'utilisateur peut ordonner complètement les événements (i.e., les entrées envoyées au système et les sorties produites). Les interactions entre le testeur et le SUT consistent en des séquences d'entrées et de sortie nommées traces, pouvant être séparées par des durées dans le cadre du test temporisé, pour former ce que l'on appelle des traces temporisées. Les systèmes distribués sont des collections de composants locaux communiquant entre eux et interagissant avec leur environnement via des interfaces physiquement distribuées. Différents événements survenant à ces différentes interfaces ne peuvent plus être ordonnés. Cette thèse concerne le test de conformité des systèmes distribués où un testeur est placé à chaque interface localisée et peut observer ce qui se passe à cette interface. Nous supposons qu'il n'y a pas d’horloge commune mais seulement des horloges locales pour chaque interface. La sémantique de tels systèmes est définie comme des tuples de traces temporisées. Nous considérons une approche du test dans le contexte de la relation de conformité distribuée dtioco. La conformité globale peut être testée dans une architecture de test en utilisant des testeurs locaux sans communication entre eux. Nous proposons un algorithme pour vérifier la communication pour un tuple de traces temporisées en formulant le problème de message-passing en un problème de satisfaction de contraintes (CSP). 
Nous avons mis en œuvre le calcul des verdicts de test en orchestrant à la fois les algorithmes du test centralisé off-line de chacun des composants et la vérification des communications par le biais d'un solveur de contraintes. Nous avons validé notre approche sur un cas d'étude de taille significative.
Model-based testing of reactive systems is the process of checking if a System Under Test (SUT) conforms to its model. It consists of handling both test data generation and verdict computation by using models. We specify the behaviour of reactive systems using Timed Input Output Symbolic Transition Systems (TIOSTS) that are timed automata enriched with symbolic mechanisms to handle data. When TIOSTSs are used to test systems with a centralized interface, the user may completely order events occurring at this interface (i.e., inputs sent to the system and outputs produced from it). Interactions between the tester and the SUT are sequences of inputs and outputs named traces, separated by delays in the timed framework, to form so-called timed traces. Distributed systems are collections of communicating local components which interact with their environment at physically distributed interfaces. Interacting with such a distributed system requires exchanging values with it by means of several interfaces in the same testing process. Different events occurring at different interfaces cannot be ordered any more. This thesis focuses on conformance testing for distributed systems where a separate tester is placed at each localized interface and may only observe what happens at this interface. We assume that there is no global clock but only local clocks for each localized interface. The semantics of such systems can be seen as tuples of timed traces. We consider a framework for distributed testing from TIOSTS along with corresponding test hypotheses and a distributed conformance relation called dtioco. Global conformance can be tested in a distributed testing architecture using only local testers without any communication between them. We propose an algorithm to check communication policy for a tuple of timed traces by formulating the verification of message passing in terms of Constraint Satisfaction Problem (CSP). 
Hence, we were able to implement the computation of test verdicts by orchestrating both localised off-line testing algorithms and the verification of constraints defined by message passing, which can be supported by a constraint solver. Lastly, we validated our approach on a real case study of a telecommunications distributed system.
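The message-passing check described above can be illustrated with a much simpler stand-in for the CSP formulation: a tuple of local traces is consistent only if the union of the local orderings and the send-before-receive constraints admits a global schedule, i.e. the combined happens-before relation is acyclic. The trace and message encodings below are invented for the example.

```python
# Sketch (illustrative only): message-passing consistency as a
# happens-before acyclicity test instead of a full CSP.

def consistent(local_traces, matches):
    """local_traces: one event list per localized interface;
    matches: (send_event, receive_event) pairs for internal messages."""
    edges = {}
    def add(a, b):
        edges.setdefault(a, set()).add(b)
    for tr in local_traces:
        for a, b in zip(tr, tr[1:]):       # local observation order
            add(a, b)
    for snd, rcv in matches:               # a message is sent before received
        add(snd, rcv)
    # cycle detection by depth-first search
    seen, onstack = set(), set()
    def dfs(v):
        seen.add(v)
        onstack.add(v)
        for w in edges.get(v, ()):
            if w in onstack or (w not in seen and dfs(w)):
                return True
        onstack.discard(v)
        return False
    return not any(dfs(v) for v in list(edges) if v not in seen)

# Consistent exchange: S1 sends m, S2 replies n.
ok = consistent([["a!m", "a?n"], ["b?m", "b!n"]],
                [("a!m", "b?m"), ("b!n", "a?n")])
# Inconsistent: S1 claims to receive n before it ever sent m.
bad = consistent([["a?n", "a!m"], ["b?m", "b!n"]],
                 [("a!m", "b?m"), ("b!n", "a?n")])
```

A real CSP encoding would additionally carry timestamp variables per event to account for the local clocks; the acyclicity test is the untimed core of that check.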
APA, Harvard, Vancouver, ISO, and other styles
20

Žemaitis, Tomas. "Loginės funkcijos termų generavimo algoritmas pagrįstas programinio prototipo modeliu." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070816_144026-87100.

Full text
Abstract:
Technologijų plėtojimas leidžia vis labiau vystytis sudėtingų elektroninių sistemų produkcijai. Visos šios sistemos turi būti patikrintos ir ištestuotos tam, kad užtikrinti tikslų jų funkcionavimą. Kai sistemų sudėtingumas didėja, testavimas tampa vienas iš svarbiausių faktorių nustatant galutinę produkto kainą. Žinomų žemo lygio metodų, skirtų techninės įrangos testavimui, nepakanka ir daugiau darbo turi būti atlikta abstrakčiame lygyje pradiniuose projektavimo etapuose negu klasikiniame ventiliniame ir registrų perdavimo lygiuose. Realizuotas algoritmas, kuris atsitiktinai generuoja įėjimo poveikį, pagal programinio prototipo modelį paskaičiuoja poveikio reakciją ir iškraipant po vieną įėjimo poveikio signalo reikšmę apibrėžia galimus išėjimų loginių funkcijų termus. Nagrinėjant kitus įėjimo poveikius apibrėžti išėjimų loginių funkcijų termai patikslinami išmetant dalinius termus. Atsitiktinai sugeneravus ir išnagrinėjus daug įėjimo poveikių gaunami galutiniai išėjimų loginių funkcijų termai. Algoritmas negarantuoja, kad bus gauti visi ir tikslūs išėjimų loginių funkcijų termai, bet gauti termai gali būti naudojami testų generavimui. Gauti išėjimų loginių funkcijų termai užrašomi kartu su įėjimo poveikiu, pagal kurį termas buvo nustatytas, ir patys paskaičiuoti termai jau gali būti naudojami kaip tikrinantys testai. Gauti rezultatai galės būti panaudoti tolimesniems tyrimams: schemų testavimui, defektų šalinimui, funkcijos elementų palyginimui, algoritmo gerinimui... [toliau žr. visą tekstą]
The technological development is enabling production of increasingly complex electronic systems. All those systems must be verified and tested to guarantee correct behavior. As the complexity grows, testing is becoming one of the most significant factors that contribute to the final product cost. The established low-level methods for hardware testing are no longer sufficient, and more work has to be done at abstraction levels higher than the classical gate and register-transfer levels. The implemented algorithm randomly generates input stimuli, computes the reaction based on the software prototype model, and, by distorting the input signal values one at a time, determines possible terms of the outputs' logical functions. By analyzing further input stimuli, the determined terms of the logical functions are refined by eliminating partial terms. After randomly generating and analyzing many inputs, the final terms of the logical functions are derived. The algorithm does not guarantee that all terms of the logical functions are obtained, or that they are exact, but the derived terms can be used when generating test vectors. The derived terms are recorded together with the input stimulus from which each term was determined, and the computed terms themselves can already be used as checking tests. The collected results can be used for further research: circuit testing, defect detection, comparison of logical function elements, and algorithm improvement. The main aspects of the design are introduced. The experimental accuracy of the results and factors (initial number of randomly generated test vectors, improvement coefficient, maximum... [to full text]
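A minimal sketch of the term-derivation idea summarized above, under the simplifying assumption of a purely Boolean, single-output prototype model: generate a random input vector, and if the output is true, flip each input in turn; the inputs whose flip falsifies the output form a candidate term, and subsumed (partial) terms are discarded afterwards. The function and all names are invented for the illustration.

```python
import random

def candidate_term(f, bits):
    """Return the term (as {input index: value}) exhibited by this input
    vector, or None if the vector does not make the output true."""
    if not f(bits):
        return None
    term = {}
    for i in range(len(bits)):
        flipped = bits[:]
        flipped[i] ^= 1
        if not f(flipped):          # this input is essential for the output
            term[i] = bits[i]
    return term

def derive_terms(f, n, trials=200, seed=0):
    rng = random.Random(seed)
    terms = set()
    for _ in range(trials):
        t = candidate_term(f, [rng.randint(0, 1) for _ in range(n)])
        if t:                        # skip vectors with no essential input
            terms.add(frozenset(t.items()))
    # eliminate partial terms subsumed by a more general (smaller) term
    return {t for t in terms if not any(s < t for s in terms)}

# f = x0*x1 + x2 has the prime terms {x0, x1} and {x2}
f = lambda b: (b[0] and b[1]) or b[2]
terms = derive_terms(f, 3)
```

As the abstract notes, such a procedure is not guaranteed to find all terms exactly (here, vectors where no single flip changes the output are simply skipped), but the terms found are directly usable as checking tests.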
APA, Harvard, Vancouver, ISO, and other styles
21

Moon, Min-Yeong. "Confidence-based model validation for reliability assessment and its integration with reliability-based design optimization." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5816.

Full text
Abstract:
Conventional reliability analysis methods assume that a simulation model is able to represent the real physics accurately. However, this assumption may not always hold, as the simulation model could be biased due to simplifications and idealizations. Simulation models are approximate mathematical representations of real-world systems and thus cannot exactly imitate the real-world systems. The accuracy of a simulation model is especially critical when it is used for the reliability calculation. Therefore, a simulation model should be validated using prototype testing results for reliability analysis. However, in practical engineering situations, experimental output data for the purpose of model validation is limited due to the significant cost of a large number of physical tests. Thus, the model validation needs to be carried out to account for the uncertainty induced by insufficient experimental output data as well as the inherent variability existing in the physical system and hence in the experimental test results. Therefore, in this study, a confidence-based model validation method that captures the variability and the uncertainty, and that corrects model bias at a user-specified target confidence level, has been developed. Reliability assessment using the confidence-based model validation can provide conservative estimation of the reliability of a system with confidence when only insufficient experimental output data are available. Without confidence-based model validation, the design obtained using conventional reliability-based design optimization (RBDO) could either fail to satisfy the target reliability or be overly conservative. Therefore, simulation model validation is necessary to obtain a reliable optimum product using the RBDO process. In this study, the developed confidence-based model validation is integrated in the RBDO process to provide a truly confident RBDO optimum design.
The developed confidence-based model validation will provide a conservative RBDO optimum design at the target confidence level. However, it is challenging to obtain steady convergence in the RBDO process with confidence-based model validation because the feasible domain changes as the design moves (i.e., a moving-target problem). To resolve this issue, a practical optimization procedure, which terminates the RBDO process once the target reliability is satisfied, is proposed. In addition, the efficiency is achieved by carrying out deterministic design optimization (DDO) and RBDO without model validation, followed by RBDO with the confidence-based model validation. Numerical examples are presented to demonstrate that the proposed RBDO approach obtains a conservative and practical optimum design that satisfies the target reliability of designed product given a limited number of experimental output data. Thus far, while the simulation model might be biased, it is assumed that we have correct distribution models for input variables and parameters. However, in real practical applications, only limited numbers of test data are available (parameter uncertainty) for modeling input distributions of material properties, manufacturing tolerances, operational loads, etc. Also, as before, only a limited number of output test data is used. Therefore, a reliability needs to be estimated by considering parameter uncertainty as well as biased simulation model. Computational methods and a process are developed to obtain confidence-based reliability assessment. The insufficient input and output test data induce uncertainties in input distribution models and output distributions, respectively. These uncertainties, which arise from lack of knowledge – the insufficient test data, are different from the inherent input distributions and corresponding output variabilities, which are natural randomness of the physical system.
APA, Harvard, Vancouver, ISO, and other styles
22

Noller, Yannic. "Hybrid Differential Software Testing." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21968.

Full text
Abstract:
Differentielles Testen ist ein wichtiger Bestandteil der Qualitätssicherung von Software, mit dem Ziel Testeingaben zu generieren, die Unterschiede im Verhalten der Software deutlich machen. Solche Unterschiede können zwischen zwei Ausführungspfaden (1) in unterschiedlichen Programmversionen, aber auch (2) im selben Programm auftreten. In dem ersten Fall werden unterschiedliche Programmversionen mit der gleichen Eingabe untersucht, während bei dem zweiten Fall das gleiche Programm mit unterschiedlichen Eingaben analysiert wird. Die Regressionsanalyse, die Side-Channel Analyse, das Maximieren der Ausführungskosten eines Programms und die Robustheitsanalyse von Neuralen Netzwerken sind typische Beispiele für differentielle Softwareanalysen. Eine besondere Herausforderung liegt in der effizienten Analyse von mehreren Programmpfaden (auch über mehrere Programmvarianten hinweg). Die existierenden Ansätze sind dabei meist nicht (spezifisch) dafür konstruiert, unterschiedliches Verhalten präzise hervorzurufen oder sind auf einen Teil des Suchraums limitiert. Diese Arbeit führt das Konzept des hybriden differentiellen Software Testens (HyDiff) ein: eine hybride Analysetechnik für die Generierung von Eingaben zur Erkennung von semantischen Unterschieden in Software. HyDiff besteht aus zwei parallel laufenden Komponenten: (1) einem such-basierten Ansatz, der effizient Eingaben generiert und (2) einer systematischen Analyse, die auch komplexes Programmverhalten erreichen kann. Die such-basierte Komponente verwendet Fuzzing geleitet durch differentielle Heuristiken. Die systematische Analyse basiert auf Dynamic Symbolic Execution, das konkrete Eingaben bei der Analyse integrieren kann. HyDiff wird anhand mehrerer Experimente evaluiert, die in spezifischen Anwendungen im Bereich des differentiellen Testens ausgeführt werden. Die Resultate zeigen eine effektive Generierung von Testeingaben durch HyDiff, wobei es sich signifikant besser als die einzelnen Komponenten verhält.
Differential software testing is important for software quality assurance as it aims to automatically generate test inputs that reveal behavioral differences in software. The concrete analysis procedure depends on the targeted result: differential testing can reveal divergences between two execution paths (1) of different program versions or (2) within the same program. The first analysis type would execute different program versions with the same input, while the second type would execute the same program with different inputs. Therefore, detecting regression bugs in software evolution, analyzing side-channels in programs, maximizing the execution cost of a program over multiple executions, and evaluating the robustness of neural networks are instances of differential software analysis with the goal to generate diverging executions of program paths. The key challenge of differential software testing is to simultaneously reason about multiple program paths, often across program variants, in an efficient way. Existing work in differential testing is often not (specifically) directed to reveal a different behavior or is limited to a subset of the search space. This PhD thesis proposes the concept of Hybrid Differential Software Testing (HyDiff) as a hybrid analysis technique to generate difference revealing inputs. HyDiff consists of two components that operate in a parallel setup: (1) a search-based technique that inexpensively generates inputs and (2) a systematic exploration technique to also exercise deeper program behaviors. HyDiff’s search-based component uses differential fuzzing directed by differential heuristics. HyDiff’s systematic exploration component is based on differential dynamic symbolic execution that allows to incorporate concrete inputs in its analysis. HyDiff is evaluated experimentally with applications specific for differential testing. 
The results show that HyDiff is effective in all considered categories and outperforms its components in isolation.
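The search-based half of such an approach can be illustrated by a toy differential fuzzer: mutate inputs at random and keep those on which two program versions disagree. The two versions, the mutation scheme, and all names below are invented stand-ins, not HyDiff's actual components.

```python
import random

def v1(data):
    # "old" version: plain checksum
    return sum(data) % 256

def v2(data):
    # "new" version with an injected regression for bytes >= 128
    return sum(b if b < 128 else b - 1 for b in data) % 256

def diff_fuzz(seed_input, rounds=500, seed=42):
    """Collect inputs on which the two versions produce different outputs."""
    rng = random.Random(seed)
    corpus = [bytes(seed_input)]
    revealing = []
    for _ in range(rounds):
        parent = bytearray(rng.choice(corpus))
        parent[rng.randrange(len(parent))] = rng.randrange(256)  # 1-byte mutation
        child = bytes(parent)
        if v1(child) != v2(child):   # difference-revealing input found
            revealing.append(child)
        corpus.append(child)         # keep exploring from this input
    return revealing

found = diff_fuzz(b"\x00\x00\x00\x00")
```

A differential heuristic in the spirit of the thesis would additionally prioritize parents whose executions already diverge in coverage or cost, rather than picking uniformly from the corpus.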
APA, Harvard, Vancouver, ISO, and other styles
23

Rezagholi, Mahmoud. "The Effects of Technological Change on Productivity and Factor Demand in U.S. Apparel Industry 1958-1996 : An Econometric Analysis." Thesis, Uppsala University, Department of Economics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7659.

Full text
Abstract:

In this dissertation I study the effects of disembodied technical change on total factor productivity and input demand in the U.S. apparel industry during 1958-1996. A time series input-output data set for the sector is employed to estimate an error-corrected model of a four-factor transcendental logarithmic cost function. The empirical results indicate a technical impact on total factor productivity at the rate of 9% on average. Technical progress has, in addition, a biased factor-augmenting effect in the sector.

APA, Harvard, Vancouver, ISO, and other styles
24

Arendt, Christopher D. "Adaptive Pareto Set Estimation for Stochastic Mixed Variable Design Problems." Ft. Belvoir : Defense Technical Information Center, 2009. http://handle.dtic.mil/100.2/ADA499860.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Huldt, Love. "Examining correlations between using video streaming services and English language proficiency : A study of upper secondary school learners in Sweden." Thesis, Stockholms universitet, Engelska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-194040.

Full text
Abstract:
Streaming video services have been ingrained into everyday life among Swedish teens, and the media content is often considered to benefit English language learners. The present study aims to verify that elevated English language proficiency and avid consumption of online streaming media occur together in upper secondary school students. This is done by gauging the online streaming media habits of students enrolled in a Swedish upper secondary school using a questionnaire, and then employing Pearson correlations to investigate the strength of the relationship between this data and student scores on a provided test of receptive vocabulary. Some attention is given to the effect of subtitle language choice on the viewer, as well as a brief summary of extramural English. The results show mostly weak correlations of low significance between test scores and online streaming media use. The discussion links the predominantly weak correlations and significance values to previous studies about frequent multitasking occurring while participants are watching audiovisual media at home. Some space is given to a suggestion on how to adapt the present methodology to upper secondary schools to enable active teachers to explore how their English learners consume audiovisual streaming media and how this may relate to language proficiency. The study concludes that more research is needed to form a more accurate view of the relationship between watching online streaming audiovisual media and improved English language proficiency, and that further investigations should be of greater magnitude and breadth in both sampling and the demographic data gathered.
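The statistic used in the study is Pearson's r; a minimal sketch with invented illustration data (not the study's results):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly streaming hours vs. receptive-vocabulary scores
hours  = [2, 5, 8, 11, 14]
scores = [55, 60, 58, 66, 71]
r = pearson_r(hours, scores)
```

Values of r near 0, together with high p-values, are what the study reports for most of its comparisons; the made-up data above is deliberately chosen to show a strong positive correlation instead.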
APA, Harvard, Vancouver, ISO, and other styles
26

Hedayati, Maryeh. "Mobilization and transport of different types of carbon-based engineered and natural nanoparticles through saturated porous media." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233631.

Full text
Abstract:
Carbon-based engineered nanoparticles have been widely used due to their small size and unique physical and chemical properties. They can dissolve in water, transport through soil and reach drinking water resources. The toxic effect of engineered nanoparticles on human and fish cells has been observed; therefore, their release and distribution into the environment is a subject of concern. In this study, two types of engineered nanoparticles, multi-walled carbon nano-tubes (MWCNT) and C60, with cylindrical and spherical shapes, respectively, were used. The aim of this study was to investigate transport and retention of carbon-based engineered and natural nanoparticles through saturated porous media. Several laboratory experiments were conducted to observe the transport behavior of the nanoparticles through a column packed with sand as a representative porous medium. The column experiments were intended to monitor the effects of ionic strength, input concentration and particle shape on transport. The results were then interpreted using Derjaguin-Landau-Verwey-Overbeek (DLVO) theory, based on the sum of attractive and repulsive forces which exist between nanoparticles and the porous medium. It was observed that as the ionic strength increased from 1.34 mM to 60 mM, the mobility of the nanoparticles was reduced. However, at ionic strengths lower than 10.89 mM, the mobility of C60 was slightly higher than that of MWCNTs. At an ionic strength of 60 mM, MWCNT particles were significantly more mobile. It is rather difficult to relate this difference to the shape of the particles, and further studies are required. The effect of input concentration on the transport of MWCNTs and C60 was observed in both the mobility of the particles and the shape of the breakthrough curves as the input concentration was elevated from 7 mg/l to 100 mg/l. A site-blocking mechanism was suggested to be responsible for the steep and asymmetric shape of the breakthrough curves at the high input concentration. Furthermore, inverse modeling was used to calculate parameters such as the attachment efficiency, the longitudinal dispersivity, and the capacity of the solid phase for the removal of particles. The inversion process was performed in a way that the misfit between the observed and simulated breakthrough curves was minimized. The simulated results were in good agreement with the observed data.
APA, Harvard, Vancouver, ISO, and other styles
27

Obison, Henry, and Chiagozie Ajuorah. "Energy Consumptions of Text Input Methods on Smartphones." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3855.

Full text
Abstract:
Mobile computing devices, in particular smartphones, are powered by Lithium-ion batteries, which are limited in capacity. With the increasing popularity of mobile systems, various text input methods have been developed to improve user experience and performance. Briefly, a text input method is a user interface that can be used to compose an electronic mail, configure a mobile Virtual Private Network, and carry out bank transactions and online purchases. Efficient energy management in these systems requires an extensive knowledge of where and how the energy is being used. This thesis investigates the energy consumption of text input methods on various smartphones. Hence, the authors modeled the energy consumption profile of text input methods on smartphones and analyzed the energy usage benchmarks of the battery. This thesis presents a systematic technique to conduct application-specific measurements. The data analysis showed substantial variations in the energy consumptions of various text input methods on a smartphone.
The main objective of this research is to find a systematic and measurement-based method to evaluate the energy efficiency of the selected text input methods used on smartphones, namely SwiftKey, Swype, and Zimpl. Using Power Monitor equipment and MATLAB, the energy consumption log files of the text input methods were collected for each of the smartphones and analyzed. This research introduces an optimized technique to carry out application-specific tests on smartphones. It is hoped that the thesis will be beneficial to smartphone battery manufacturers and developers of text input techniques on how to make users’ smartphone batteries last longer.
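The core computation behind such measurements can be sketched as numerical integration of the power-monitor log: energy in joules is the integral of power over time, approximated here by the trapezoidal rule. The sample log is invented for illustration.

```python
def energy_joules(samples):
    """samples: list of (t_seconds, watts) pairs, sorted by time.
    Returns the trapezoidal approximation of the consumed energy."""
    e = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        e += 0.5 * (p0 + p1) * (t1 - t0)
    return e

# 2 seconds at a steady 1.5 W should give 3 J
log = [(0.0, 1.5), (1.0, 1.5), (2.0, 1.5)]
e = energy_joules(log)
```

Comparing input methods then reduces to integrating each method's log over the same typing task and comparing the resulting joule counts.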
APA, Harvard, Vancouver, ISO, and other styles
28

Elmgren, Rasmus. "Handwriting in VR as a Text Input Method." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208646.

Full text
Abstract:
This thesis discusses handwriting as a possible text input method for Virtual Reality (VR) with a goal of comparing handwriting with a virtual keyboard input method. VR applications have different approaches to text input and there is no standard for how the user should enter text. Text input methods are important for the user in many cases, e.g when they document, communicate or enter their login information. The goal of the study was to understand how a handwriting input would compare to pointing at a virtual keyboard, which is the most common approach to the problem. A prototype was built using Tesseract for character recognition and Unity to create a basic virtual environment. This prototype was then evaluated with a user study, comparing it to the de facto standard virtual keyboard input method. The user study had a usability and desirability questionnaire approach and also uses Sutcliffe's heuristics for evaluation of virtual environments. Interviews were performed with each test user. The results suggested that the virtual keyboard performs better except for how engaging the input method was. From the interviews a common comment was that the handwriting input method was more fun and engaging. Further applications of the handwriting input method are discussed as well as why the users favored the virtual keyboard method.
Virtual Reality (VR) applikationer har olika tillvägagångssätt för textinmatning och det finns ingen tydlig standard hur användaren matar in text i VR. Textinmatning är viktigt när användaren ska dokumentera, kommunicera eller logga in. Målet med studien var att jämföra en inmatningsmetod baserad på handskrift med det de facto standard virtuella tangentbordet och se vilken inmatningsmetod användarna föredrog. En prototyp som använde handskrift byggdes med hjälp av Tesseract för textinmatning och Unity för att skapa en virtuell miljö. Prototypen jämfördes sedan med det virtuella tangentbordet i en användarstudie. Användarstudien bestod av uppmätt tid samt antal fel, en enkät och en intervju. Enkäten grundades på användarbarhet, önskvärdhet och Sutcliffes utvärderingsheuristik av virtuella miljöer. Resultatet visar att det virtuella tangentbordet presterade bättre, handskriftsmetoden presterade endast bättre på att engagera användaren. Resultatet från intervjuerna styrkte också att handskriftsmetoden var roligare och mer engagerande att använda men inte lika användbar. Framtida studier föreslås i diskussionen samt varför användarna föredrog det virtuella tangentbordet.
APA, Harvard, Vancouver, ISO, and other styles
29

Pacheco, Carlos S. M. Massachusetts Institute of Technology. "Eclat : automatic generation and classification of test inputs." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33855.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 51-54).
This thesis describes a technique that selects, from a large set of test inputs, a small subset likely to reveal faults in the software under test. The technique takes a program or software component, plus a set of correct executions-say, from observations of the software running properly, or from an existing test suite that a user wishes to enhance. The technique first infers an operational model of the software's operation. Then, inputs whose operational pattern of execution differs from the model in specific ways are suggestive of faults. These inputs are further reduced by selecting only one input per operational pattern. The result is a small portion of the original inputs, deemed by the technique as most likely to reveal faults. Thus, the technique can also be seen as an error-detection technique. The thesis describes two additional techniques that complement test input selection. One is a technique for automatically producing an oracle (a set of assertions) for a test input from the operational model, thus transforming the test input into a test case. The other is a classification-guided test input generation technique that also makes use of operational models and patterns. When generating inputs, it filters out code sequences that are unlikely to contribute to legal inputs, improving the efficiency of its search for fault-revealing inputs.
(cont.) We have implemented these techniques in the Eclat tool, which generates unit tests for Java classes. Eclat's input is a set of classes to test and an example program execution- say, a passing test suite. Eclat's output is a set of JUnit test cases, each containing a potentially fault-revealing input and a set of assertions at least one of which fails. In our experiments, Eclat successfully generated inputs that exposed fault-revealing behavior; we have used Eclat to reveal real errors in programs. The inputs it selects as fault-revealing are an order of magnitude as likely to reveal a fault as all generated inputs.
by Carlos Pacheco.
S.M.
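The selection idea described in the abstract (keep one fault-suggestive input per operational pattern) can be sketched as follows; the "operational model" here is a hand-written stand-in for the inferred model, and all names and data are invented, not Eclat's actual code.

```python
def violations(model, x, y):
    """Which inferred properties does the observation (input, output) break?"""
    return frozenset(name for name, holds in model.items() if not holds(x, y))

def select_inputs(model, observations):
    """Keep one representative input per distinct violation pattern."""
    chosen = {}
    for x, y in observations:
        v = violations(model, x, y)
        if v and v not in chosen:
            chosen[v] = x
    return sorted(chosen.values())

# Operational model inferred from correct runs of abs()-like code:
model = {
    "output_nonneg": lambda x, y: y >= 0,
    "magnitude":     lambda x, y: abs(y) == abs(x),
}
# (input, observed output) pairs from a buggy implementation under test
obs = [(3, 3), (-4, 4), (-5, -5), (-7, -7), (2, 9)]
picked = select_inputs(model, obs)
```

Inputs whose executions fit the model are discarded, and repeated violations of the same pattern collapse to a single representative, which is how the technique reduces a large generated set to a few likely fault-revealing test inputs.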
APA, Harvard, Vancouver, ISO, and other styles
30

Sánchez, Clara. "BIST test pattern generator based on partitioning circuit inputs." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36580.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (leaves 33-35).
by Clara Sánchez.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Junyang, and Xueer Xing. "Evaluation of Test Data Generation Techniques for String Inputs." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14798.

Full text
Abstract:
Context. The effective generation of test data is regarded as very important in software testing. However, mature and effective techniques for generating string test data have seldom been explored, owing to the complexity and flexibility of string expressions compared to other data types. Objectives. Based on this problem, this study investigates the strengths and limitations of existing string test data generation techniques, to support future work on an effective technique for generating string test data. This main goal was achieved via two objectives. The first is investigating existing techniques for string test data generation, as well as finding out the criteria and Classes-Under-Test (CUTs) used for evaluating the ability of string test generation. The second is to assess representative techniques by comparing effectiveness and efficiency. Methods. For the first objective, we used a systematic mapping study to collect data about existing techniques, criteria, and CUTs. With respect to the second objective, a comparison study was conducted to compare representative techniques selected from the results of the systematic mapping study. The data from the comparison study were analysed quantitatively using statistical methods. Results. The existing techniques, criteria and CUTs related to string test generation were identified. A multidimensional categorisation was proposed to classify existing string test data generation techniques. We selected representative techniques from the search-based, symbolic execution, and random generation categories of the categorisation. Meanwhile, the corresponding automated test generation tools implementing these techniques, including EvoSuite, Symbolic PathFinder (SPF), and Randoop, were selected for assessment by comparing effectiveness and efficiency when applied to 21 CUTs. Conclusions.
We concluded that the search-based method has the highest effectiveness and efficiency of the three selected methods; the random generation method has low efficiency, but a high fault-detecting ability for some specific CUTs; and the symbolic execution solution as implemented by SPF cannot currently support string test generation well, owing to a possibly incomplete string constraint solver or string generator.
APA, Harvard, Vancouver, ISO, and other styles
32

Lin, Tian Ran. "Vibration of finite coupled structures, with applications to ship structures." University of Western Australia. School of Mechanical Engineering, 2006. http://theses.library.uwa.edu.au/adt-WU2006.0093.

Full text
Abstract:
[Truncated abstract] Shipbuilding is fast becoming a priority industry in Australia. With increasing demands to build fast vessels of lighter weight, shipbuilders are more concerned with noise and vibration problems in ships than ever. The objective of this thesis is to study the vibration response of coupled structures, in the hope that the study may shed some light on the general features of ship vibration. An important feature characterizing the vibration in complex structures is the input mobility, as it describes the capacity of structures to accept vibration energy from sources. The input mobilities of finite ribbed plate and plate/plate coupled structures are investigated analytically and experimentally in this study. It is shown that the input mobility of a finite ribbed plate is bounded by the input mobilities of the uncoupled plate and beam(s) that form the ribbed plate and is dependent upon the distance between the source location and the stiffened beam(s). Off-neutral-axis loading on the beam (a point force applied on the beam but away from the beam's neutral axis) affects the input power, the kinetic energy distribution in the component plates of the ribbed plate, and the energy flow into the plates from the beam under direct excitation ... solutions were then used to examine the validity of statistical energy analysis (SEA) in the prediction of vibration response of an L-shaped plate due to deterministic force excitations. It was found that SEA can be utilized to predict the frequency-averaged vibration response and energy flow of L-shaped plates under deterministic force (moment) excitations, provided that the source location is more than a quarter of a wavelength away from the plate edges. Furthermore, a simple experimental method was developed in this study to evaluate the frequency-dependent stiffness and damping of rubber mounts by impact testing.
Finally, the analytical methods developed in this study were applied to the prediction of the vibration response of a ship structure. It was found that the input mobilities of ship hull structures due to machinery excitations are governed by the stiffness of the supporting structure to which the engine is mounted. Their frequency-averaged values can be estimated from those of the mounting structure of finite or infinite extent. It was also shown that wave propagation in ship hull structures at low frequencies could be attenuated by irregularities imposed on the periodic locations of the ship frames. The vibration at higher frequencies could be controlled by modifications of the supporting structure.
APA, Harvard, Vancouver, ISO, and other styles
33

Williams, Kathleen T. "The use of parental input in prekindergarten screening." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/558346.

Full text
Abstract:
The purpose of this study was to examine the individual and collective relationships between and among sets of predictor variables obtained from an ecological preschool screening model and criterion variables designed to assess performance in kindergarten. A second purpose of this research was to determine the unique contribution of parental input within the ecological preschool screening model. Fall screening included an individually administered standardized test, the Bracken Basic Concept Scale (BBCS), and a structured parent interview, the Minnesota Preschool Inventory (MPI). The BBCS and the Developmental Scale (DEVEL) of the MPI constituted the set of predictor variables. The criterion set of performance measures included a group administered standardized testing procedure, the Metropolitan Readiness Test (MRT), and a teacher rating scale, the Teacher Rating Scale-Spring (TRS-S), completed in the spring of the kindergarten year. Canonical correlation analysis was used to examine the interrelationships between the two sets of variables and to determine the best possible combination of variables for predicting kindergarten achievement. Multiple regression analysis was used to determine the unique contribution of parental input for predicting kindergarten achievement over and above that information supplied by the standardized test. The results of this study supported the use of an ecological model for predicting kindergarten performance. The information gained from parental input and standardized testing contributed significantly and uniquely to the composite of the predictor set.
There was both a statistically significant and a meaningfully significant relationship between the screening procedures completed at the beginning of the school year (the BBCS and the DEVEL) and the assessment procedures done at the end of the school year (the MRT and the TRS-S) when these four variables were considered simultaneously. The use of parental input was supported by the multiple regression analyses. Information gained by structured parent interview had something statistically significant, meaningful, and unique to contribute to the prediction of kindergarten performance over and above that information gained from the individually administered standardized test.
Department of Educational Psychology
APA, Harvard, Vancouver, ISO, and other styles
34

Fard, Hossein Ghodosi, and Bie Chuangjun. "Braille-based Text Input for Multi-touch Screen Mobile Phones." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5555.

Full text
Abstract:
ABSTRACT: “The real problem of blindness is not the loss of eyesight. The real problem is the misunderstanding and lack of information that exist. If a blind person has proper training and opportunity, blindness can be reduced to a physical nuisance.”- National Federation of the Blind (NFB) Multi-touch screen is a relatively new and revolutionary technology in mobile phone industry. Being mostly software driven makes these phones highly customizable for all sorts of users including blind and visually impaired people. In this research, we present new interface layouts for multi-touch screen mobile phones that enable visionless people to enter text in the form of Braille cells. Braille is the only way for these people to directly read and write without getting help from any extra assistive instruments. It will be more convenient and interesting for them to be provided with facilities to interact with new technologies using their language, Braille. We started with a literature review on existing eyes-free text entry methods and also text input devices, to find out their strengths and weaknesses. At this stage we were aiming at identifying the difficulties that unsighted people faced when working with current text entry methods. Then we conducted questionnaire surveys as the quantitative method and interviews as the qualitative method of our user study to get familiar with users’ needs and expectations. At the same time we studied the Braille language in detail and examined currently available multi-touch mobile phone feedbacks. At the designing stage, we first investigated different possible ways of entering a Braille “cell” on a multi-touch screen, regarding available input techniques and also considering the Braille structure. Then, we developed six different alternatives of entering the Braille cells on the device; we laid out a mockup for each and documented them using Gestural Modules Document and Swim Lanes techniques. 
Next, we prototyped our designs and evaluated them using the Pluralistic Walkthrough method with real users. We then refined our models and selected the two best as the main results of this project, based on good gestural interface principles and users' feedback. Finally, we discussed the usability of the selected methods in comparison with the current method visually impaired users employ to enter text on the most popular multi-touch screen mobile phone, the iPhone. Our selected designs reveal possibilities to improve the efficiency and accuracy of the existing text entry methods in multi-touch screen mobile phones for Braille-literate people. They can also be used as guidelines for creating other multi-touch input devices for entering Braille on an apparatus such as a computer.
APA, Harvard, Vancouver, ISO, and other styles
35

Clawson, James. "On-the-go text entry: evaluating and improving mobile text input on mini-qwerty keyboards." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45955.

Full text
Abstract:
To date, hundreds of millions of mini-QWERTY keyboard equipped devices (miniaturized versions of a full desktop keyboard) have been sold. Accordingly, a large percentage of text messages originate from fixed-key, mini-QWERTY keyboard enabled mobile phones. Over a series of three longitudinal studies I quantify how quickly and accurately individuals can input text on mini-QWERTY keyboards. I evaluate performance in ideal laboratory conditions as well as in a variety of mobile contexts. My first study establishes baseline performance measures; my second study investigates the impact of limited visibility on text input performance; and my third study investigates the impact of mobility (sitting, standing, and walking) on text input performance. After approximately five hours of practice, participants achieved expertise typing almost 60 words per minute at almost 95% accuracy. Upon completion of these studies, I examine the types of errors that people make when typing on mini-QWERTY keyboards. Having discovered a common pattern in errors, I develop and refine an algorithm to automatically detect and correct errors in mini-QWERTY keyboard enabled text input. I both validate the algorithm through the analysis of pre-recorded typing data and then empirically evaluate the impacts of automatic error correction on live mini-QWERTY keyboard text input. Validating the algorithm over various datasets, I demonstrate the potential to correct approximately 25% of the total errors and correct up to 3% of the total keystrokes. Evaluating automatic error detection and correction on live typing results in successfully correcting 61% of the targeted errors committed by participants while increasing typing rates by almost two words per minute without introducing noticeable distraction.
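The detect-and-correct idea lends itself to a small illustration. The sketch below assumes, purely for illustration, that the common error pattern is substitution by a physically neighbouring key; the adjacency map and dictionary are hypothetical stand-ins, not the thesis's actual data or algorithm:

```python
NEIGHBOURS = {  # tiny hypothetical fragment of a mini-QWERTY adjacency map
    "q": "wa", "w": "qes", "e": "wrd", "r": "etf",
    "t": "ryg", "y": "tuh", "a": "qsz", "s": "awdz",
}

def correct(word, dictionary):
    """If `word` is not in the dictionary, try substituting each letter
    with one of its physical neighbours; return the first dictionary hit."""
    if word in dictionary:
        return word
    for i, ch in enumerate(word):
        for n in NEIGHBOURS.get(ch, ""):
            candidate = word[:i] + n + word[i + 1:]
            if candidate in dictionary:
                return candidate
    return word  # no single-substitution correction found

print(correct("wesr", {"west", "wear"}))  # → 'wear' (a neighbour of 's' substituted)
```

A real corrector would also need to rank competing candidates, e.g. by word frequency, rather than accepting the first hit.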
APA, Harvard, Vancouver, ISO, and other styles
36

Burrell, James. "Comparison of Text Input and Interaction in a Mobile Learning Environment." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/111.

Full text
Abstract:
Mobile computing devices are increasingly being utilized to support learning activities outside the traditional classroom environment. The text input capabilities of these devices represent a limiting factor for effective support of user-based interaction. The ability to perform continuous character selection and input to complete course exercises is becoming increasingly difficult as these devices become miniaturized to a point where traditional input and output methods are becoming less efficient for continuous text input. This study investigated the design and performance of a prototype mobile text entry keyboard (MobileType) based on the linguistic frequency of character occurrence and on increasing key size to minimize visual search time and distance during character selection. The study was designed to compare the efficiency, effectiveness, and learning effects of the MobileType and QWERTY keyboard layouts while performing fixed-phrase and course-exercise text entry tasks in two separate evaluation sessions. A custom software application was developed for a tablet device to display the two keyboard interfaces and capture text entry interaction and timing information. The results of this study indicated that the QWERTY text entry interface performed faster in terms of efficiency, while the MobileType interface performed better in terms of effectiveness. In addition, there was an observable increase in the efficiency of the MobileType interface between the two task sessions. The results indicated that the MobileType interface was readily learnable with respect to learning effect. Future research is recommended to establish whether the performance of the MobileType interface could be increased with further participant familiarization over multiple sessions, which would validate the design of MobileType as a possible alternative to the QWERTY text entry interface for mobile devices.
APA, Harvard, Vancouver, ISO, and other styles
37

Lyons, Kenton Michael. "Improving Support of Conversations by Enhancing Mobile Computer Input." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7163.

Full text
Abstract:
Mobile computing is becoming one of the most widely adopted technologies. There are 1.3 billion mobile phone subscribers worldwide, and the current generation of phones offers substantial computing ability. Furthermore, mobile devices are increasingly becoming integrated into everyday life. With the huge popularity of mobile computing, it is critical that we examine the human-computer interaction issues for these devices and explicitly explore supporting everyday activities. In particular, one very common and important activity of daily life I am interested in supporting is conversation. Depending on job type, office workers can spend up to 85% of their time in interpersonal communication. In this work, I present two methods that improve a user's ability to enter information into a mobile computer in conversational situations. First I examine the Twiddler, a keyboard that has been adopted by the wearable computing community. The Twiddler is a mobile one-handed chording keyboard with a keypad similar to a mobile phone's. The second input method is dual-purpose speech, a technique designed to leverage a user's conversational speech. A dual-purpose speech interaction is one where speech serves two roles: it is socially appropriate and meaningful in the context of a human-to-human conversation, and it provides useful input to a computer. A dual-purpose speech application listens to one side of a conversation and provides beneficial services to the user. Together these input methods provide a user the ability to enter information while engaged in conversation in a mobile setting.
APA, Harvard, Vancouver, ISO, and other styles
38

Schaal, Peter. "Observer-based engine air charge characterisation : rapid, observer-assisted engine air charge characterisation using a dynamic dual-ramp testing method." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/33247.

Full text
Abstract:
Characterisation of modern complex powertrains is a time-consuming and expensive process. Little effort has been made to improve the efficiency of the testing methodologies used to obtain data for this purpose. Steady-state engine testing is still regarded as the gold standard, where approximately 90% of testing time is wasted waiting for the engine to stabilize. Rapid dynamic engine testing, as a replacement for the conventional steady-state method, has the potential to significantly reduce the time required for characterisation. However, even with state-of-the-art measurement equipment, dynamic engine testing introduces the problem that certain variables are not directly measurable, owing to the excitation of the system dynamics. Consequently, it is necessary to develop methods that allow the observation of quantities that are not directly measurable during transient engine testing. Engine testing for the characterisation of the engine air-path is specifically affected by this problem, since the air mass flow entering the cylinder is not directly measurable by any sensor during transient operation. This dissertation presents a comprehensive methodology for engine air charge characterisation using dynamic test data. An observer is developed, which allows observation of the actual air mass flow into the engine during transient operation. The observer is integrated into a dual-ramp testing procedure, which allows the elimination of unaccounted dynamic effects by averaging over the resulting hysteresis. A simulation study on a 1-D gas dynamic engine model investigates the accuracy of the developed methodology. The simulation results show a trade-off between time saving and accuracy. Experimental test results confirm a time saving of 95% compared to conventional steady-state testing and at least 65% compared to quasi-steady-state testing, while maintaining the accuracy and repeatability of conventional steady-state testing.
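The dual-ramp idea of averaging over the resulting hysteresis can be illustrated with a minimal numerical sketch. The first-order lag model and all numbers here are invented for illustration, not taken from the thesis:

```python
# A slowly responding measurement lags the true value: on an up-ramp it
# reads low, on a matching down-ramp it reads high. Averaging the two
# readings taken at the same operating point cancels the symmetric lag.
def lagged_reading(true_value, ramp_rate, lag_time):
    """First-order lag approximation: reading trails the ramp by lag_time."""
    return true_value - ramp_rate * lag_time

operating_point = 50.0
up = lagged_reading(operating_point, ramp_rate=+2.0, lag_time=1.5)    # 47.0
down = lagged_reading(operating_point, ramp_rate=-2.0, lag_time=1.5)  # 53.0
estimate = (up + down) / 2.0
print(estimate)  # → 50.0, the lag-free value
```

The averaging only cancels effects that are symmetric in ramp direction; asymmetric dynamics would still bias the estimate, which is why the thesis pairs the procedure with an observer.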
APA, Harvard, Vancouver, ISO, and other styles
39

Rosenberg, Robert. "Computing without mice and keyboards : text and graphic input devices for mobile computing." Thesis, University College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hagiya, Toshiyuki. "Tutoring System for Smartphone Text Input for Older Adults using Statistical Stumble Detection." Kyoto University, 2018. http://hdl.handle.net/2433/232408.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Rincon, Guillermo. "Kinetics of the electrocoagulation of oil and grease." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/131.

Full text
Abstract:
Research on the electrocoagulation (EC) of hexane extractable materials (HEM) has been conducted at the University of New Orleans using a proprietary bench-scale EC reactor. The original reactor configuration forced the fluid to follow a vertical upward-downward path. An alternate electrode arrangement was introduced so that the path of flow became horizontal. Both configurations were evaluated by comparing the residence time distribution (RTD) data generated in each case. These data indicated internal recirculation and stagnant water when the fluid followed a vertical path. These anomalies were attenuated when the fluid flowed horizontally and at a velocity higher than 0.032 m s⁻¹. A series of EC experiments was performed using a synthetic emulsion with a HEM concentration of approximately 700 mg l⁻¹. It was confirmed that EC of HEM follows first-order kinetics, and kinetic constants of 0.0441 s⁻¹ and 0.0443 s⁻¹ were obtained from applying the dispersion and tanks-in-series (TIS) models, respectively. In both cases R² was 0.97. Also, the TIS model indicated that each cell of the EC reactor behaves as an independent continuous-stirred-tank reactor.
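For reference, a first-order rate constant implies the textbook decay law C(t) = C0·exp(-k·t). Plugging in the abstract's reported values (this is a standard relation, not a calculation from the thesis itself):

```python
import math

k = 0.0441   # first-order rate constant, 1/s (from the abstract)
c0 = 700.0   # initial HEM concentration, mg/l

def concentration(t):
    """First-order decay: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

half_life = math.log(2) / k
print(round(half_life, 1))            # → 15.7 s to halve the HEM concentration
print(round(concentration(60.0), 1))  # HEM remaining after one minute, mg/l
```

At this rate, the 700 mg l⁻¹ emulsion would drop below 50 mg l⁻¹ in about a minute of reaction time, which is consistent with first-order behaviour being easy to resolve experimentally.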
APA, Harvard, Vancouver, ISO, and other styles
42

Kaštánek, Martin. "Vstupní díl UHF přijímače s velmi nízkou spotřebou." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217183.

Full text
Abstract:
The purpose of this work was to design the input stages of a receiver for the 430 to 440 MHz band. A model of the chosen transistor, the BFP540, was created in simulation software, and ways to decrease its power consumption while preserving gain were investigated through simulation. As a compromise between consumption and amplifier gain, an optimal operating point of UCE = 1.2 V and IC = 2 mA was found for this transistor. It was verified in a test circuit with noise-matched microstrip lines. The findings were used in the construction of a tuner for a UHF receiver. Owing to the amplifier's supply arrangement, the operating point of the receiver's input amplifier was shifted to UCE = 2.65 V and IC = 2.0 mA for greater effectiveness. Because the intermediate frequency is 10.7 MHz, image-frequency rejection is provided by a third-order helical filter. Mixing down to the intermediate frequency is again performed by a BFP540 transistor. Receiver selectivity is provided by a 10.7 MHz crystal IF filter with a bandwidth of 15 kHz. The designed input section enables reception of SSB, FM and digital modulation types. The bandwidth of the IF output is adapted to this requirement; to receive a particular modulation, the IF signal path must be completed with an appropriate IF filter.
APA, Harvard, Vancouver, ISO, and other styles
43

Ziegenmeyer, Jonathan Daniel. "Estimation of Disturbance Inputs to a Tire Coupled Quarter-car Suspension Test Rig." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/32806.

Full text
Abstract:
In this study a real-time open loop estimate of the disturbance displacement input to the tire and an external disturbance force, representing handling and aerodynamic forces, acting on the sprung mass of a quarter-car suspension test rig was generated. This information is intended for use in active control methods applied to vehicle suspensions. This estimate is achieved with two acceleration measurements as inputs to the estimator; one each on the sprung and unsprung masses. This method is differentiated from current disturbance accommodating control, bilinear observers, and preview control methods. A description of the quarter-car model and the experimental test rig is given. The equations of motion for the quarter-car model are derived in state space as well as a transfer function form. Several tests were run in simulation to investigate the performance of three integration techniques used in the estimator. These tests were first completed in continuous time prior to transforming to discrete time. Comparisons are made between the simulated and estimated displacement and velocity of the disturbance input to the tire and disturbance force input to the sprung mass. The simulated and estimated dynamic tire normal forces are also compared. This process was necessary to select preliminary values for the integrator transfer function to be implemented in real-time. Using the acceleration measurements from the quarter-car test rig, a quarter-car parameter optimization for use in the estimator was performed. The measured and estimated tire disturbance input, disturbance input velocity, and dynamic tire normal force signals are compared during experimental tests. The results show that the open loop observer provides estimates of the tire disturbance velocity and dynamic tire normal force with acceptable error. The results also indicate the quarter-car test rig behaves linearly within the frequency range and amplitude of the disturbance involved in this study. 
The resultant access to the disturbance estimate and dynamic tire force estimate in real-time enables pursuit of novel control methods applied to active vibration control of vehicle suspensions.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
44

Krot, Andrii. "New input methods for blind users on wide touch devices." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-54221.

Full text
Abstract:
Blind people cannot enter text on touch devices using common input methods. They use special input methods that have lower performance (i.e. lower entry rate and higher error rate). Most blind people have muscle memory from using classic physical keyboards, but the potential of using this memory is not utilized by existing input methods. The goal of the project is to take advantage of this muscle memory to improve the typing performance of blind people on wide touch panels. To this end, four input methods are designed, and a prototype for each one is developed. These input methods are compared with each other and with a standard input method. The results of the comparison show that using input methods designed in this report improves typing performance. The most promising and the least promising approaches are specified.
APA, Harvard, Vancouver, ISO, and other styles
45

von, Gegerfelt Angelina, and Kashmir Klingestedt. "Evaluating Usability of Text and Speech as Input Methods for Natural Language Interfaces Using Gamification." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186264.

Full text
Abstract:
Today an increasing amount of systems make use of Natural Language Interfaces (NLIs), which make them easy and efficient to use. The purpose of this research was to gain an increased understanding of the usability of different input methods for NLIs. This was done by implementing two versions of a text-based game with an NLI, where one version used speech as input method and the other used text. Tests were then performed with users that all played both versions of the game and then evaluated them individually using the System Usability Scale. It was found that text was better as input method in all aspects. However, speech scored high when the users felt confident in their English proficiency, acknowledging the possibility of using speech as input method for NLIs.
APA, Harvard, Vancouver, ISO, and other styles
46

Pedrosa, Diogo de Carvalho. "Data input and content exploration in scenarios with restrictions." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-13042015-144651/.

Full text
Abstract:
As technology evolves, new devices and interaction techniques are developed. These transformations create several challenges in terms of usability and user experience. Our research faces some challenges for data input or content exploration in scenarios with restrictions. It is not our intention to investigate all possible scenarios, but we deeply explore a broad range of devices and restrictions. We start with a discussion about the use of an interactive coffee table for exploration of personal photos and videos, also considering a TV set as an additional screen. In a second scenario, we present an architecture that offers to interactive digital TV (iDTV) applications the possibility of receiving multimodal data from multiple devices. Our third scenario concentrates on supporting text input for iDTV applications using a remote control, and presents an interface model based on multiple input modes as a solution. In the last two scenarios, we continued investigating better ways to provide text entry; however, our restriction becomes not using the hands, which is the kind of challenge faced by severely motor-disabled individuals. First, we present a text entry method based on two input symbols and an interaction technique based on detecting internal and external heel rotations using an accelerometer, for those who keep at least a partial movement of a leg and a foot. In the following scenario, only the eyes are required. We present an eye-typing technique that recognizes the intended word by weighting length and frequency of all possible words formed by filtering extra letters from the sequence of letters gazed by the user. The exploration of each scenario in depth was important to achieve the relevant results and contributions. On the other hand, the wide scope of this dissertation allowed the student to learn about several technologies and techniques.
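The word-recognition step described above (filtering extra letters from the gazed sequence, then weighting length and frequency) can be sketched as follows. The scoring function and lexicon are illustrative assumptions, not the thesis's actual model:

```python
def is_subsequence(word, gazed):
    """True if `word` can be formed by deleting extra letters from `gazed`."""
    it = iter(gazed)
    return all(ch in it for ch in word)  # `in` consumes the iterator in order

def recognise(gazed, lexicon):
    """Pick the candidate maximising a joint length/frequency weight."""
    candidates = {w: f for w, f in lexicon.items() if is_subsequence(w, gazed)}
    return max(candidates, key=lambda w: len(w) * candidates[w], default=None)

lexicon = {"he": 10, "hell": 3, "hello": 5}  # word -> relative frequency
print(recognise("hueqlklo", lexicon))  # → 'hello'
```

Here "he", "hell" and "hello" are all subsequences of the noisy gaze sequence; weighting by length as well as frequency lets the longer intended word beat the shorter, more frequent one.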
As technology evolves, new devices and interaction techniques are developed. These transformations create challenges in terms of usability and user experience. This research faces some challenges for data input and content exploration in scenarios with restrictions. It was not the intention of the research to investigate all possible scenarios, but rather to explore in depth a broad range of devices and restrictions. In total, five scenarios are investigated. First, a discussion is presented on the use of an interactive coffee table for the exploration of personal photos and videos, which also considers a TV set as an additional screen. Based on the second scenario, an architecture is presented that offers interactive digital TV (iDTV) applications the possibility of receiving multimodal data from multiple devices. The third scenario concentrates on supporting text entry for iDTV applications using the remote control, resulting in the presentation of an interface model based on multiple input modes as a solution. The last two scenarios continue the investigation of better forms of text entry; however, the restriction becomes the inability to use the hands, one of the challenges faced by individuals with severe motor disabilities. In the first of these, a text entry method based on two input symbols and an interaction technique based on detecting rotations of the foot resting on the heel using an accelerometer are presented, for those who retain at least partial movement of a leg and a foot. In the following scenario, only eye movements are required. An eye-typing technique is presented that recognizes the intended word by weighting the length and the frequency of occurrence of all the words that can be formed by filtering out surplus letters from the list of letters gazed at by the user.
Exploring each scenario in depth was important for obtaining relevant results and contributions. On the other hand, the broad scope of the dissertation allowed the student to learn a variety of techniques and technologies.
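The eye-typing technique above filters candidate words from the gazed letter sequence and ranks them by length and frequency. The thesis does not specify the exact weighting, so the combination of word length and log-frequency below is an illustrative assumption, not the published scoring function:

```python
from math import log

def is_subsequence(word, gazed):
    """True if `word` can be formed by deleting extra letters from `gazed`."""
    it = iter(gazed)
    return all(ch in it for ch in word)

def rank_candidates(gazed, lexicon):
    """Rank lexicon words that survive the subsequence filter.

    `lexicon` maps word -> corpus frequency. The score, length plus
    log-frequency, is a hypothetical stand-in for the thesis's weighting.
    """
    scores = {}
    for word, freq in lexicon.items():
        if is_subsequence(word, gazed):
            scores[word] = len(word) + log(1 + freq)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, a noisy gaze sequence such as "cuarqt" admits "cat", "car", and "cart" as subsequences, and the frequency term then decides among them.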
APA, Harvard, Vancouver, ISO, and other styles
47

Kano, Akiyo. "Adding context to automated text input error analysis with reference to understanding how children make typing errors." Thesis, University of Central Lancashire, 2011. http://clok.uclan.ac.uk/5320/.

Full text
Abstract:
Despite the enormous body of literature studying the typing errors of adults, children's typing errors remain an understudied area. It is well known in the field of Child-Computer Interaction that children are not 'little adults'. This means findings regarding how adults make typing mistakes cannot simply be transferred into how children make typing errors, without first understanding the differences. To understand how children differ from adults in the way they make typing mistakes, typing data were gathered from both children and adults. It was important that the data collected from the contrasting participant groups were comparable. Various methods of collecting typing data from adults were reviewed for suitability with children. Several issues were identified that could create a bias towards the adults. To resolve these issues, new tools and methods were designed, such as a new phrase set, a new data collector and new computer experience questionnaires. Additionally, there was a lack of an analysis method of typing data suitable for use with both children and adults. A new categorisation method was defined based on typing errors made by both children and adults. This categorisation method was then adapted into a Java program, which dramatically reduced the time required to carry out typing categorisation. Finally, in a large study, typing data collected from 231 primary school children, aged between 7 and 10 years, and 229 undergraduate computing students were analysed. Grouping the typing errors according to the context in which they occurred allowed for a much more detailed analysis than was possible with error rates. The analysis showed children have a set of errors they made frequently that adults rarely made. These errors that are specific to children suggest that differences exist between the ways the two groups make typing errors. This finding means that children's typing errors should be studied in their own right.
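The categorisation described above groups typing errors by the context in which they occur. The thesis's own Java categoriser and its context-aware categories are not reproduced here; the sketch below shows only the standard underlying idea of aligning the presented and transcribed strings with a Levenshtein traceback and labelling each edit operation:

```python
def classify_errors(target, typed):
    """Label edit operations between the presented and transcribed text.

    Uses a standard Levenshtein alignment; the three categories
    (substitution, insertion, omission) are a simplification of the
    thesis's context-aware scheme.
    """
    m, n = len(target), len(typed)
    # dp[i][j] = edit distance between target[:i] and typed[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == typed[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # omission
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)
    # Trace back through the table to recover labelled operations.
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (target[i - 1] != typed[j - 1]):
            if target[i - 1] != typed[j - 1]:
                ops.append(("substitution", target[i - 1], typed[j - 1]))
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            ops.append(("insertion", None, typed[j - 1]))
            j -= 1
        else:
            ops.append(("omission", target[i - 1], None))
            i -= 1
    return list(reversed(ops))
```

Comparing "cat" against the typed "ct", for instance, yields a single omission of "a"; a real categoriser would then attach context such as the surrounding keys.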
48

Olofsson, Jakob. "Input and Display of Text for Virtual Reality Head-Mounted Displays and Hand-held Positionally Tracked Controllers." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64620.

Full text
Abstract:
The recent increase of affordable virtual reality (VR) head-mounted displays has led to many new video games and applications being developed for virtual reality environments. The improvements to VR technology have introduced many new possibilities, but have also introduced new problems to solve in order to make VR software as comfortable and as effective as possible. In this report, different methods of displaying text and receiving text input in VR environments are investigated and measured. An interactive user study was conducted to evaluate and compare the performance and user opinion of three different text display methods and four separate virtual keyboard solutions. Results revealed that the distance between text and user, with the same relative text size, significantly affected the ease of reading the text, and that designing a good virtual keyboard for VR requires a good balance of multiple factors. An example of such factors is the balance between precise control and the amount of physical exertion required. Additionally, the results suggest that the amount of previous experience with virtual reality equipment, and typing skill with regular physical keyboards, can meaningfully impact which solution is most appropriate.
The recent increase in affordable virtual reality (VR) headsets has led to a rise in games and applications developed for virtual reality environments. The improvements in VR technology have introduced many new possibilities, but also new problems to solve in order to create VR software that is as comfortable and effective as possible. In this report, different methods for displaying and receiving text in VR environments are investigated and measured. This was examined through an interactive user study that evaluated and compared the effectiveness of, and user opinions on, three different methods of displaying text and four different virtual keyboard solutions. The results showed that the distance between the user and the text, at the same relative text size, significantly affected the ease of reading the text, and that the design of a good virtual keyboard for VR requires a good balance between several factors. An example of such factors is the balance between precise control and the physical exertion required. The results also suggest that the amount of previous experience with virtual reality equipment, and skill in typing on ordinary physical keyboards, can considerably affect which solutions are the most suitable for the situation.
49

Hurley, Noel P. "Resource allocation and student achievement: A microlevel impact study of differential resource inputs on student achievement outcomes." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9724.

Full text
Abstract:
This study examined the relationships between resource allocation and student achievement using a modified version of a conceptual model designed by Bulcock (1989) within a general model proposed by Guthrie (1988). Five research questions were developed from a review of literature to investigate the relationship between microlevel student input variables and student output variables--both cognitive and affective. The mediating effects of the student perceptions of the quality of school life on student achievement outcomes were also examined. Multiple regression analyses were utilized and data were analyzed at both the individual and school levels. Models were used to investigate the indirect effects of the quality of school life on student achievement outcomes. Substantively meaningful relationships were identified between linguistic resources, language usage and reading outcomes; socioeconomic level, gender, linguistic resources, language usage, and mathematics achievement; gender, student attitudes, and student well-being. All grade eight Newfoundland students (10,146) were the subjects of the study. Participants in the study completed the Canadian Test of Basic Skills (CTBS) and the Bulcock Attitudinal Inventory (BAI). Females scored higher than males on every test of the CTBS and also had more favourable attitudes towards school as measured using the BAI. Urban students outperformed rural students by the equivalent of nearly one year on the CTBS scores. A variable was constructed to test Bernstein's (1961) theory of language discontinuity. Bernstein contended that the further an individual's language code departed from the standard language code in use in that society, the greater the difficulty that person would have in learning. The language code variable was constructed using the language usage score from the CTBS to create a continuous variable. 
This language code variable proved to be highly explanatory in that it explained a large percentage of the variance in reading achievement outcomes and in mathematics achievement outcomes. The measure for students' perceptions toward their schooling experiences explained a large percentage of the variance of student well-being. Two other noteworthy findings in the present study arose from relationships identified between mathematics achievement and independent variables. A strong relationship was identified between mathematics achievement and socioeconomic level. In general, the higher one's socioeconomic level the greater were the outcome measures in mathematics achievement. Indirect effects analyses produced a significant relationship between gender and mathematics achievement that favoured girls. The construction of the educational production function in the present study proved to be an accurate model. The present study contributed to research in several ways. This is one of the first studies that has employed Quality of School Life indicators as developed in the BAI in an educational production function model. A second contribution was the inclusion of microlevel student linguistic resources as predictors of cognitive achievement outcomes. The third contribution of the present study was the high percentage of variance of cognitive achievement outcomes explained by the modified Bulcock model.
50

Lipecki, Johan, and Viggo Lundén. "The Effect of Data Quantity on Dialog System Input Classification Models." Thesis, KTH, Hälsoinformatik och logistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237282.

Full text
Abstract:
This paper researches how different amounts of data affect different word vector models for classification of dialog system user input. A hypothesis is tested that there is a data threshold at which dense vector models reach the state-of-the-art performance that has been shown in recent research, and that character-level n-gram word vector classifiers are especially suited for Swedish classifiers, because of compounding and the character-level n-gram model's ability to vectorize out-of-vocabulary words. A second hypothesis is also put forward: that models trained with single statements are more suitable for chat user input classification than models trained with full conversations. The results are not able to support either of our hypotheses but show that sparse vector models perform very well on the binary classification tasks used. Further, the results show that 799,544 words of data is insufficient for training dense vector models but that training the models with full conversations is sufficient for single statement classification, as the single-statement-trained models do not show any improvement in classifying single statements.
This work investigates how different amounts of data affect different kinds of word vector models for classifying dialog system input. The hypothesis that there is a training-data threshold at which dense word vector models reach state-of-the-art performance, and that character-level n-gram word vector classifiers are particularly well suited for Swedish classifiers, is tested on the grounds that compounding is especially productive in Swedish and that character-level granularity allows previously unseen words to be classified. In addition, the hypothesis is evaluated that classifiers trained on single statements are better suited for classifying input in chat conversations than classifiers trained on whole chat conversations. The results support neither hypothesis, instead showing that sparse vector models perform very well in the classification tests carried out. Furthermore, the results show that 799,544 words of data is not enough to train dense word vector models well, but that conversations are more than sufficient for training models to classify questions and statements in chat conversations, since the models trained on user input statement by statement, rather than on whole chat conversations, do not result in better classifiers for chat statements.
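The claim that character-level n-gram models can vectorize out-of-vocabulary Swedish compounds rests on subword features shared between a compound and its parts. The thesis's actual model is not reproduced here; the sketch below is a minimal, fastText-style bag of character n-grams with boundary markers, where the n-gram range is an illustrative choice:

```python
from collections import Counter

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams of `word` with boundary markers, fastText-style."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
                       for i in range(len(w) - n + 1)]

def vectorize(text):
    """Bag of character n-grams for a whitespace-tokenized statement."""
    feats = Counter()
    for tok in text.lower().split():
        feats.update(char_ngrams(tok))
    return feats
```

An unseen compound such as "fotbollsmatch" then shares n-grams like "fot" and "oll" with the seen word "fotboll", so it still maps to informative features rather than a single unknown-word token.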
