Dissertations / Theses on the topic 'Computer confidence'

Consult the top 50 dissertations / theses for your research on the topic 'Computer confidence.'


1

Burford, Bryan Christopher. "Contextual effects on computer users' confidence." Thesis, Northumbria University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.410387.

2

Kamra, Varun. "Mining discriminating patterns in data with confidence." Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10196147.

Abstract:

There are many pattern mining algorithms available for classifying data. The main drawback of most of these algorithms is that they focus on mining frequent patterns in data, which may not always be discriminative enough for classification. There can exist patterns that are not frequent but are efficient discriminators, and in such cases these algorithms may not perform well. This project proposes the MDP algorithm, which searches for patterns that are good at discriminating between classes rather than for frequent patterns. The algorithm ensures that there is at least one most discriminative pattern (MDP) per record. The purpose of the project is to investigate how a structural approach to classification compares to a functional approach. The project was developed in the Java programming language.

3

Applebee, Andrelyn C. "Attitudes toward computers in the 1990s: a look at gender, age and previous computer experience on computer anxiety, confidence, liking and indifference." University of Canberra. Education, 1994. http://erl.canberra.edu.au./public/adt-AUC20060206.123119.

Abstract:
The purpose of this study was to investigate the relationship between the computer attitudes held by tertiary students and the selected variables of gender, age and previous computer experience. It was hypothesized that no statistically significant differences would be found within the relationships tested. A questionnaire comprising the Computer Attitude Scale (CAS) together with demographic and other questions was administered to the population enrolled in an introductory computer unit at the University of Canberra, Australian Capital Territory, in Semester 1, 1992. The results were subjected to t-tests and one-way analysis of variance. Statistically significant relationships were found between gender and computer anxiety and between gender and computer confidence, with female students being more anxious and male students more confident. Students with previous computer experience were found to be significantly less anxious and more confident with computers. More research on the possible causes of these relationships and on ways of overcoming computer anxiety is needed before the findings can be fully implemented.
4

Saxon, John Trevor. "Using traceability in model-to-model transformation to quantify confidence based on previous history." Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8047/.

Abstract:
A widely used method when generating code for transitioning systems, security, the automotive industry and other mission-critical scenarios is model-to-model transformation. Traceability is a mechanism for relating source model elements to destination elements: it is used to identify how the latter came from the former, as well as when and in what order. In these application domains, traceability is a very useful tool for debugging, testing and performance tuning of model transformations. Recent advances in big data technologies have made it possible to produce a history of these executions. In this thesis, we present a method that uses such historical data to quantify the confidence a user can have in a newly proposed transformation. For a given trace of execution, considering historical traces that are either well tested or have performed correctly over time, we introduce a measure of confidence for the new trace. This metric is intended to complement traditional testing and verification; for example, it can aid in deciding whether to deploy automatically generated code when there is not enough time or resources for thorough testing and verification. We evaluate our framework with a transformation that transitions a relational database into a NoSQL database, specifically Apache HBase. This transformation changes the nature of the data that is mapped, such that a loss of integrity occurs in the event of its failure.
5

Tsang, Kong Chau. "Confidence measures for disparity estimates from energy neuron populations." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?ECED%202007%20TSANG.

6

Nandeshwar, Ashutosh R. "Models for calculating confidence intervals for neural networks." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4600.

Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains x, 65 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 62-65).
7

Eklöf, Patrik. "Implementing Confidence-based Work Stealing Search in Gecode." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154475.

Abstract:
Constraint programming is a field whose goal is to solve extremely large problems defined by a set of restrictions. One such example is generating CPU instructions from source code: a compiler must choose the instructions that best match the source code and schedule them optimally to minimize execution time, possibly while also minimizing power consumption. The difficulty lies in that there is no single good way to approach the problem, since all parameters depend on each other. For example, if the compiler chooses to minimize the number of instructions, it usually ends up with large, complex instructions that minimize the amount of power used; however, they are also more difficult to schedule efficiently to reduce the runtime. Choosing many smaller instructions gives more flexibility in scheduling, but draws more power. The compiler must also take caches into account in order to minimize misses, which cost power and slow down execution, making the whole problem even more complex.

To find the best solution to such problems, one must typically explore every single possibility and measure which one is fastest. This creates a huge number of possible solutions, which takes a tremendous amount of time to explore in order to find a solution that meets the requirements (often the "optimal" solution). Typically, these problems are abstracted into search trees that are explored using different techniques.

There are two common ways to parallelize the exploration of search trees: coarse-grained parallel search, which splits exploration into several threads as far up in the tree as possible, near the root; and fine-grained parallel search, which splits up the work as far down the search tree as possible so that each thread gets only a small subtree to explore. Coarse-grained search has the advantage that it can achieve super-linear speedup if the solution is not in the leftmost subtree; otherwise, it wastes all work (compared to DFS). Fine-grained search has the advantage that it always achieves linear speedup, but can never achieve super-linear speedup.

An interesting search method known as confidence-based search combines these two approaches. It works with a set of probabilities for each branch provided by the user (called a confidence model); the search method uses these probabilities as a guide for how many resources to spend exploring different subtrees. For example, with 10 threads and a probability of 0.8 that a solution is in a subtree, the search method sends 8 threads to explore that subtree; an alternative way of looking at it is that the search method spends 80% of its resources exploring that subtree and the remaining 20% exploring the rest. As the search method finds failed nodes, it updates the probabilities, taking into account that a solution is less probable in a subtree with more failed nodes. Periodically, the algorithm also restarts, and when it does, it uses the updated probabilities as a guide for where to look for solutions.

This thesis took on the goal of creating such a search engine from scratch for the constraint programming framework Gecode. The resulting engine had a lot of potential and, while not perfect, showed clear signs of super-linear speedup for some of the problems tested with naïve confidence models.
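For readers who want the resource-splitting idea in concrete form, here is a minimal sketch assuming a user-supplied confidence model; the function names, the proportional allocation rule and the decay factor are illustrative, not Gecode's actual API.

```python
# Sketch of confidence-guided thread allocation: split workers over
# subtrees proportionally to their probability of containing a solution,
# and shrink a subtree's confidence as failed nodes accumulate in it.

def allocate_threads(confidences, n_threads):
    """Split n_threads over subtrees proportionally to their confidence."""
    total = sum(confidences)
    shares = [c / total for c in confidences]
    alloc = [int(round(s * n_threads)) for s in shares]
    # Give any leftover threads (from rounding) to the most promising subtree.
    alloc[shares.index(max(shares))] += n_threads - sum(alloc)
    return alloc

def update_confidence(confidence, failed_nodes, decay=0.9):
    """Decay a subtree's confidence for each failed node found in it."""
    return confidence * (decay ** failed_nodes)

# With 10 threads and confidences (0.8, 0.2), 8 threads explore subtree 0.
print(allocate_threads([0.8, 0.2], 10))  # -> [8, 2]
```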
8

Lavallée-Adam, Mathieu. "Protein-protein interaction confidence assessment and network clustering computational analysis." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121237.

Abstract:
Protein-protein interactions represent a crucial source of information for the understanding of the biological mechanisms of the cell. In order to be useful, high quality protein-protein interactions must be computationally extracted from the noisy datasets produced by high-throughput experiments such as affinity purification. Even when filtered protein-protein interaction datasets are obtained, the task of analyzing the network formed by these numerous interactions remains tremendous. Protein-protein interaction networks are large, intricate, and require computational approaches to provide meaningful biological insights. The overall objective of this thesis is to explore algorithms assessing the quality of protein-protein interactions and facilitating the analysis of their networks. This work is divided into four results: 1) a novel Bayesian approach to model contaminants originating from affinity purifications, 2) a new method to identify and evaluate the quality of protein-protein interactions independently in different cell compartments, 3) an algorithm computing the statistical significance of clusterings of proteins sharing the same functional annotation in protein-protein interaction networks, and 4) a computational tool performing sequence motif discovery in 5' untranslated regions as well as evaluating the clustering of such motifs in protein-protein interaction networks.
9

Covington, Valerie A. "Lower confidence interval bounds for coherent systems with cyclic components." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA242713.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, September 1990.
Thesis Advisor(s): Woods, W. Max. Second Reader: Whitaker, Lyn R. "September 1990." Description based on title screen viewed on December 17, 2009. DTIC Descriptor(s): Computer programs, intervals, confidence limits, accuracy, theses, Monte Carlo method, cycles, fortran, reliability, yield, standardization, statistical distributions, equations, confidence level, poisson density functions, failure, coherence, binomials, computerized simulation. Author(s) subject terms: Reliability, lower confidence limit, coherent systems, cyclic components. Includes bibliographical references (p. 121-122). Also available in print.
10

Kevork, Ilias. "Confidence interval methods in discrete event computer simulation : theoretical properties and practical recommendations." Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/1257/.

Abstract:
Most steady-state simulation outputs are characterized by some degree of dependency between successive observations at different lags, measured by the autocorrelation function. In such cases, classical statistical techniques based on independent, identically distributed normal random variables are not recommended for constructing confidence intervals for steady-state means: such confidence intervals would cover the steady-state mean with probability different from the nominal confidence level. Over the last two decades, alternative confidence interval methods have been proposed for stationary simulation output processes. These methods offer different ways to estimate the variance of the sample mean, with the final objective of achieving coverages equal to the nominal confidence level. Each sample mean variance estimator depends on a number of different parameters and on the sample size.

In assessing the performance of the confidence interval methods, emphasis is necessarily placed on studying the actual properties of the methods in an empirical context rather than proving their mathematical properties. The testing process takes place in an environment where certain statistical criteria, which measure the actual properties, are estimated through Monte Carlo methods on output processes from different types of simulation models. Over the past years, however, different testing environments have been used: different methods have been tested on different output processes, under different sample sizes and parameter values for the sample mean variance estimators. The diversity of the testing environments has made it difficult to select the most appropriate confidence interval method for certain types of output processes. Moreover, a catalogue of the properties of the confidence interval methods offers limited direct support to a simulation practitioner seeking to apply the methods to particular processes.

Five confidence interval methods are considered in this thesis. Two of them were proposed in the last decade; the other three appeared in the literature in 1983 and 1984 and constitute recent research objects for statistical experts in simulation output analysis. First, for the case of small samples, theoretical properties are investigated for the bias of the corresponding sample mean variance estimators on AR(1) and AR(2) time series models and on the delay in queue in the M/M/1 queueing system. Then an asymptotic comparison of these five methods is carried out. The special characteristic of the above three processes is that the 5th-lag autocorrelation coefficient is given by known difference equations. Based on the asymptotic results and the properties of the sample mean variance estimators in small samples, several recommendations are given for making the following decisions: I) the selection of the most appropriate confidence interval method for certain types of simulation outputs; II) the determination of the best parameter values for the sample mean variance estimators so that the corresponding confidence interval methods achieve acceptable performances; III) the orientation of future research in confidence interval estimation for steady-state autocorrelated simulation outputs.
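As a hedged illustration of what a sample-mean variance estimator does here, the sketch below uses the classic batch means approach on an AR(1) process; batch means is one standard estimator from this literature, not necessarily one of the five methods the thesis compares.

```python
# Minimal batch-means sketch: a confidence interval for the steady-state
# mean of an autocorrelated simulation output.
import numpy as np
from scipy import stats

def batch_means_ci(output, n_batches=20, alpha=0.05):
    n = len(output) // n_batches                 # observations per batch
    batches = np.asarray(output[:n * n_batches]).reshape(n_batches, n)
    means = batches.mean(axis=1)                 # batch means are ~independent
    grand = means.mean()
    se = means.std(ddof=1) / np.sqrt(n_batches)  # std error of the grand mean
    t = stats.t.ppf(1 - alpha / 2, df=n_batches - 1)
    return grand - t * se, grand + t * se

# Example: an AR(1) process, one of the output models studied in the thesis.
rng = np.random.default_rng(0)
x = [0.0]
for _ in range(100_000):
    x.append(0.8 * x[-1] + rng.normal())
print(batch_means_ci(x[1000:]))                  # discard warm-up first
```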
11

Samuelsson, Elin. "A Confidence Measure for Deep Convolutional Neural Network Regressors." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273967.

Abstract:
Deep convolutional neural networks can be trained to estimate gaze directions from eye images. However, such networks do not provide any information about the reliability of their predictions. As uncertainty estimates could enable more accurate and reliable gaze tracking applications, a method for confidence calculation was examined in this project. This method had to be computationally efficient for the gaze tracker to function in real time, without reducing the quality of the gaze predictions. Thus, several state-of-the-art methods were abandoned in favor of Mean-Variance Estimation, which uses an additional neural network to estimate uncertainties. This confidence network is trained based on the accuracy of the gaze rays generated by the primary network, i.e. the prediction network, for different eye images. Two datasets were used to evaluate the confidence network, including the effect of different design choices. A main conclusion was that the uncertainty associated with a predicted gaze direction depends on more factors than just the visual appearance of the eye image; thus, a confidence network taking only this image as input can never model the regression problem perfectly. Despite this, the results show that the network learns useful information. In fact, its confidence estimates outperform those from an established Monte Carlo method, where the uncertainty is estimated from the spread of gaze directions produced by an ensemble of prediction networks.
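A minimal sketch of the Mean-Variance Estimation setup described above, with a separate confidence network trained on the prediction network's errors; the architectures, feature sizes and loss form are assumptions for illustration, not the thesis's exact design.

```python
# Sketch: a confidence network learns the log-variance of the (frozen)
# prediction network's error via a Gaussian negative log-likelihood.
import torch
import torch.nn as nn

prediction_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
confidence_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def confidence_loss(features, targets):
    """Train the confidence net to predict the prediction net's error."""
    with torch.no_grad():                      # primary net is already trained
        pred = prediction_net(features)
        sq_err = ((pred - targets) ** 2).sum(dim=1, keepdim=True)
    log_var = confidence_net(features)         # predicted log-variance
    # Gaussian negative log-likelihood of the observed error:
    return (log_var + sq_err / log_var.exp()).mean()

features = torch.randn(32, 128)                # stand-in for eye-image features
targets = torch.randn(32, 3)                   # stand-in for gaze directions
loss = confidence_loss(features, targets)
loss.backward()
```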
12

Helldin, Boman Joakim. "Assessing confidence in a continuous delivery pipeline for software reliability measurement." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280692.

Abstract:
Confidence is a set of metrics and constraints used to measure software reliability. It allows for the evaluation of software in agile development phases. The goal of confidence assessment is to make sure software is safe to deploy in a production environment. This study presents which metrics can be relevant and useful when assessing confidence in a continuous delivery pipeline. Continuous delivery pipelines are used to produce software in short cycles and contain a great deal of information. To find potentially relevant data for confidence assessment, a literature review is first performed. Then, the implementation of a traceability solution in a continuous delivery pipeline is investigated. Software traceability is the ability to trace artifacts in the pipeline; in this study, it is enabled by using the Eiffel framework. A case study is conducted with eight quality assurance experts with varying work experience. The results of the case study are analysed to identify important metrics and concepts to consider when assessing confidence in a continuous delivery pipeline. The results indicate that build outcome, confidence level change and test coverage are the most relevant metrics. Build outcome is the rate of various outcomes from building a software project. Confidence level change is the trend in the level of confidence in the continuous delivery pipeline. Test coverage is the degree to which the source code is executed during testing. In the conducted interviews, all the experts ranked build outcome as highly as possible; 87.5% of the experts also gave confidence level change the highest possible rank, while 75% did so for test coverage. The main indication of this study is that some of the metrics identified by collecting data with the implemented traceability solution have not been discussed in the state of the art, specifically build outcome, confidence level change, test suite outcome and issues resolved.
13

Kamppari, Simo O. (Simo Olli) 1976. "Word and phone level acoustic confidence scoring for speech understanding systems." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86458.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 89-91).
by Simo O. Kamppari.
M.Eng.
14

Dusitsin, Krid, and Kurt Kosbar. "Accuracy of Computer Simulations that use Common Pseudo-random Number Generators." International Foundation for Telemetering, 1998. http://hdl.handle.net/10150/609238.

Abstract:
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California
In computer simulations of communication systems, linear congruential generators and shift registers are typically used to model noise and data sources. These generators are often assumed to be close to ideal (i.e. delta correlated) and an insignificant source of error in the simulation results. The samples generated by these algorithms have non-ideal autocorrelation functions, which may cause a non-uniform distribution in the data or noise signals. This error may cause the simulation bit-error rate (BER) to be artificially high or low. In this paper, the problem is described through the use of confidence intervals. Tests are performed on several pseudo-random generators to assess which ones are acceptable for computer simulation.
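A hedged sketch of the kind of interval involved: the standard normal-approximation confidence interval for a simulated BER, whose nominal coverage is exactly what a non-ideal generator can silently violate. The formula is the textbook construction, not necessarily the paper's exact one.

```python
# Normal-approximation confidence interval for a simulated bit-error rate.
# If the pseudo-random generator is not close to delta-correlated, the true
# BER can fall outside this interval more often than the nominal 5%.
import math

def ber_confidence_interval(bit_errors, n_bits, z=1.96):
    p = bit_errors / n_bits                   # estimated BER
    half = z * math.sqrt(p * (1 - p) / n_bits)
    return max(p - half, 0.0), p + half

print(ber_confidence_interval(bit_errors=42, n_bits=1_000_000))
```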
15

Olivi, Matteo. "Evaluation of confidence-driven cost aggregation strategies." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11621/.

Abstract:
In this thesis I describe eight new stereo matching algorithms that perform the cost-aggregation step using a guided filter with a confidence map as guidance image, and share the structure of a linear stereo matching algorithm. The results of the execution of the proposed algorithms on four pictures from the Middlebury dataset are shown as well. Finally, based on these results, a ranking of the proposed algorithms is presented.
16

Bakshi, Arjun. "Methodology For Generating High-Confidence Cost-Sensitive Rules For Classification." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1377868085.

17

Burger, Clayton. "A hybridisation technique for game playing using the upper confidence for trees algorithm with artificial neural networks." Thesis, Nelson Mandela Metropolitan University, 2014. http://hdl.handle.net/10948/3957.

Abstract:
In the domain of strategic game playing, the use of statistical techniques such as the Upper Confidence for Trees (UCT) algorithm has become the norm, as they offer many benefits over classical algorithms. These benefits include requiring no game-specific strategic knowledge and time-scalable performance. UCT does not incorporate any strategic information specific to the game considered, but instead uses repeated sampling to effectively brute-force search through the game tree or search space. The lack of game-specific knowledge in UCT is thus both a benefit and a strategic disadvantage. Pattern recognition techniques, specifically neural networks (NNs), were identified as a means of addressing the lack of game-specific knowledge in UCT. Through a novel hybridisation technique which combines UCT and trained NNs for pruning, the UCT-NN algorithm was derived. The NN component of UCT-NN was trained using a UCT self-play scheme to generate game-specific knowledge without the need to construct and manage game databases for training purposes. The UCT-NN algorithm is outlined for pruning in the game of Go-Moku as a candidate case study for this research. The UCT-NN algorithm contained three major parameters which emerged from the UCT algorithm, the use of NNs and the pruning schemes considered. Suitable methods for finding candidate values for these three parameters were outlined and applied to the game of Go-Moku on a 5 by 5 board. An empirical investigation of the playing performance of UCT-NN was conducted in comparison to UCT through three benchmarks: a common randomly moving opponent, a common UCTmax player which is given a large amount of playing time, and a pair-wise tournament between UCT-NN and UCT. The results of the performance evaluation for 5 by 5 Go-Moku were promising, which prompted an evaluation on a larger 9 by 9 Go-Moku board. The results of both evaluations indicate that the time allocated to the UCT-NN algorithm directly affects its performance when compared to UCT. The UCT-NN algorithm generally performs better than UCT in games with very limited time constraints in all benchmarks considered, except when playing against a randomly moving player in 9 by 9 Go-Moku. In real-time and near-real-time Go-Moku games, UCT-NN provides statistically significant improvements compared to UCT. The findings of this research contribute to the realisation of applying game-specific knowledge to the UCT algorithm.
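To make the UCT side of the hybrid concrete, here is a minimal sketch of the UCB1-style child selection that UCT repeats during sampling; the exploration constant and the pruning hook are illustrative assumptions, not the thesis's implementation.

```python
# Sketch of the UCT child-selection rule (UCB1 applied to tree nodes).
import math

def uct_select(children, total_visits, c=math.sqrt(2)):
    """Pick the child maximizing mean reward plus an exploration bonus."""
    def ucb(child):
        if child["visits"] == 0:
            return float("inf")              # explore unvisited children first
        exploit = child["wins"] / child["visits"]
        explore = c * math.sqrt(math.log(total_visits) / child["visits"])
        return exploit + explore
    # A trained NN could prune `children` here before the UCB comparison,
    # which is the hybridisation this thesis investigates.
    return max(children, key=ucb)

children = [{"wins": 6, "visits": 10}, {"wins": 3, "visits": 4}]
print(uct_select(children, total_visits=14))
```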
18

Radhakrishnan, Ramsundar. "Increasing Accuracy in the Confidence Level during Functional Verification of Combinational Logic Circuits." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308239573.

19

Valeinis, Janis. "Confidence bands for structural relationship models." Doctoral thesis, [S.l.] : [s.n.], 2007. http://webdoc.sub.gwdg.de/diss/2007/valeinis.

20

LeBaron, Dean M. "CVIC: Cluster Validation Using Instance-Based Confidences." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5736.

Abstract:
As unlabeled data becomes increasingly available, the need for robust data mining techniques increases as well. Clustering is a common data mining tool which seeks to find related, independent patterns in data called clusters. The cluster validation problem addresses the question of how well a given clustering fits the data set. We present CVIC (cluster validation using instance-based confidences), which assigns confidence scores to each individual instance, as opposed to more traditional methods which focus on the clusters themselves. CVIC trains supervised learners to recreate the clustering, and instances are scored based on output from the learners corresponding to the confidence that the instance was clustered correctly. One consequence of individually validated instances is the ability to direct users to instances in a cluster that are either potentially misclustered or correctly clustered. Instances with low confidences can either be manually inspected or reclustered, and instances with high confidences can be automatically labeled. We compare CVIC to three competing methods for assigning confidence scores and present results on CVIC's ability to assign scores that yield higher average precision and recall for detecting misclustered and correctly clustered instances, across five clustering algorithms on twenty data sets, including handwritten historical image data provided by Ancestry.com.
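A minimal sketch of the CVIC idea as described, with assumed details (the learner, the cross-validation scheme and the k-means step are illustrative choices): train a supervised learner to recreate the clustering, then read each instance's confidence off the predicted probability of its assigned cluster.

```python
# Instance-level cluster validation: low-confidence instances are the
# candidates for manual inspection or reclustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X = np.random.rand(200, 5)                        # unlabeled data
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)

proba = cross_val_predict(RandomForestClassifier(), X, labels,
                          cv=5, method="predict_proba")
confidence = proba[np.arange(len(X)), labels]     # P(assigned cluster | x)

suspect = np.argsort(confidence)[:10]             # likely misclustered points
print(suspect, confidence[suspect])
```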
21

Green, Nathan Alan. "Establishing Public Confidence in the Viability of Fingerprint Biometric Technology." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd919.pdf.

22

Marson, Luca. "Enforcing low confidence class predictions for out of distribution data in deep convolutional networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281965.

Abstract:
Modern discriminative deep neural networks are known to make highly confident predictions for inputs far away from the training data distribution, commonly referred to as out-of-distribution inputs. This property poses security concerns for the deployment of deep learning models in critical applications like autonomous vehicles, because it hinders the detection of such inputs. The aim of this thesis is to investigate the problem of out-of-distribution input detection and to propose a solution based on a novel method that enforces low confidence far away from the training data in a supervised setting. To do so, samples lying on the classifier's decision boundaries are generated by backpropagating the gradient of an appropriately designed loss function to the input, and these samples are used as out-of-distribution examples. At the end of the proposed iterative training procedure, the network's high-confidence region overlaps with the support of the training data distribution, resulting in low confidence everywhere else and in an improved ability to detect out-of-distribution inputs. We first evaluate the method on a synthetic 2-dimensional dataset to gain insight into its functioning by visualizing the model confidence. To verify its ability to scale up to higher-dimensional settings, we then apply it to more complex datasets. The experimental results on the MNIST dataset show performance comparable to the current state-of-the-art approach. When tested on the CIFAR-10 dataset, the proposed method is not able to reach entirely satisfactory results; some of the considered metrics, however, suggest that further experimentation might improve the method's capabilities.
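A hedged sketch of the boundary-sample generation step described above: gradient descent on the input shrinks the gap between the two largest class probabilities, pushing points toward decision boundaries. The loss form and step size are assumptions, not the thesis's exact design.

```python
# Generate low-confidence enforcement examples by backpropagating a loss
# gradient to the input; on a decision boundary, the top two class
# probabilities are equal, so we minimize their squared gap.
import torch
import torch.nn.functional as F

def boundary_samples(model, x, steps=50, lr=0.1):
    x = x.clone().requires_grad_(True)
    for _ in range(steps):
        probs = F.softmax(model(x), dim=1)
        top2 = probs.topk(2, dim=1).values
        loss = ((top2[:, 0] - top2[:, 1]) ** 2).mean()
        grad, = torch.autograd.grad(loss, x)
        x = (x - lr * grad).detach().requires_grad_(True)
    return x.detach()   # use as out-of-distribution training examples

model = torch.nn.Linear(2, 3)          # toy classifier for illustration
ood = boundary_samples(model, torch.randn(8, 2))
```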
23

Rayakota, Balaji. "Generating high confidence contracts without user input using Daikon and ESC/Java2." Kansas State University, 2013. http://hdl.handle.net/2097/15731.

Abstract:
Master of Science
Department of Computing and Information Science
Torben Amtoft
Invariants are properties which are asserted to be true at certain program points. Invariants are of paramount importance when proving program correctness and program properties. Method, constructor, and class invariants can serve as contracts which specify program behavior and can lead to more accurate reuse of code; more accurate than comments, because contracts are less error-prone and may be proved without testing. Dynamic invariant generation techniques run the program under inspection, observe the values computed at each program point, and report a list of invariants that were observed to be possibly true. Static checkers inspect program code and try to prove the correctness of annotated invariants by generating proofs for them. This project attempts to obtain strong invariants for a subset of classes in Java in two phases: first we use Daikon, a tool that suggests invariants using dynamic invariant generation techniques, and then we have the invariants checked using ESC/Java2, a static checker for Java. In the first phase, an 'Instrumenter' program inspects Java classes and generates code such that sufficient information is supplied to Daikon to generate strong invariants. All of this is achieved without any user input. The aim is to be able to understand the behavior of a program using already existing tools.
24

Monteith, Kristine Perry. "Heuristic Weighted Voting." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1206.

Abstract:
Selecting an effective method for combining the votes of classifiers in an ensemble can have a significant impact on the overall classification accuracy an ensemble is able to achieve. With some methods, the ensemble cannot even achieve as high a classification accuracy as the most accurate individual classifying component. To address this issue, we present the strategy of Heuristic Weighted Voting, a technique that uses heuristics to determine the confidence that a classifier has in its predictions on an instance-by-instance basis. Using these heuristics to weight the votes in an ensemble results in an overall average increase in classification accuracy compared to the most accurate classifier in the ensemble. When considering performance over 18 data sets, Heuristic Weighted Voting compares favorably, both in terms of average classification accuracy and in algorithm-by-algorithm comparisons of accuracy, when evaluated against three baseline ensemble creation strategies as well as the methods of stacking and arbitration.
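A minimal sketch of per-instance weighted voting, using each classifier's own predicted probability as the confidence heuristic; the thesis's heuristics may differ.

```python
# Each classifier's vote is weighted by a heuristic confidence computed
# for the particular instance being classified.
import numpy as np

def weighted_vote(probas):
    """probas: list of (n_classes,) probability vectors, one per classifier."""
    votes = np.zeros_like(probas[0])
    for p in probas:
        winner = int(np.argmax(p))
        votes[winner] += p[winner]      # weight = classifier's own confidence
    return int(np.argmax(votes))

# Two confident classifiers outvote one weakly confident dissenter:
print(weighted_vote([np.array([0.9, 0.1]),
                     np.array([0.8, 0.2]),
                     np.array([0.45, 0.55])]))  # -> 0
```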
25

Rotunda, Cathy J. "A study of teachers' confidence levels toward meeting the computer technology instructional objectives of the WV Dept. of Education." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=717.

Abstract:
Thesis (Ed. D.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains vii, 119 p. : ill. Includes abstract. Includes bibliographical references (p. 83-86).
26

Landmesser, John Andrew. "Improving it portfolio management decision confidence using multi-criteria decision making and hypervariate display techniques." Thesis, Nova Southeastern University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3609737.

Abstract:

Information technology (IT) investment decision makers are required to process large volumes of complex data. An existing body of knowledge relevant to IT portfolio management (PfM), decision analysis, visual comprehension of large volumes of information, and IT investment decision making suggests that Multi-Criteria Decision Making (MCDM) and hypervariate display techniques can reduce cognitive load and improve decision confidence in IT PfM decisions. This dissertation investigates improving decision confidence by reducing the cognitive burden of the decision maker through greater comprehension of relevant decision information. Decision makers from across the federal government were presented with actual federal IT portfolio project lifecycle costs and durations using hypervariate displays, to comprehend IT portfolio information more quickly and make more confident decisions. Other information economics attributes were randomized for IT portfolio projects to generate Balanced Scorecard (BSC) values supporting MCDM decision aids focused on IT investment alignment with specific business objectives and constraints. Both quantitative and qualitative measures of participant comprehension, confidence, and efficiency were collected to assess the effectiveness first of the hypervariate display treatment and then of the MCDM decision aid treatment. Morae Recorder Autopilot guided participants through scenario tasks and collected study data without researcher intervention, for analysis using Morae Manager. Results showed improved comprehension and decision confidence using hypervariate displays of federal IT portfolio information over the standard displays. Both quantitative and qualitative data showed significant differences in the accomplishment of assigned IT portfolio management tasks and increased confidence in decisions. MCDM techniques, incorporating the IT BSC, Monte Carlo simulation, and optimization algorithms to provide cost-, value-, and risk-optimized portfolios, improved decision-making efficiency. Participants did not perceive improved quality or reduced uncertainty from the optimized IT portfolio information; however, on average, participants were satisfied and confident with the portfolio optimizations. Improved and efficient methods of delivering and visualizing IT portfolio information can reduce decision-maker cognitive load, improve comprehension efficiency, and improve decision-making confidence. The study results contribute to knowledge of comprehension and decision-making cognitive processes, and demonstrate important linkages between Human-Computer Interaction (HCI) and Decision Support Systems (DSS) in supporting IT PfM decision making.

27

Antonsson, Roger, and Lena Petterson. "Think big : for small - infusing confidence, security and trustworthiness for mobile services." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3535.

Abstract:
The use of mobile telephony has increased over the past years, and consequently the development of services for the mobile phone has also increased. This semester we took part in a large system development project; our contribution was designing the graphical user interfaces. In doing so, we became interested in the problem of how to mediate trust to a user through a graphical user interface. In this thesis we focus on how to develop graphical user interfaces for a mobile phone service that radiate and infuse confidence, security and trustworthiness. To attain this purpose, we have used a combination of literature studies and, to some extent, user involvement with mock-ups and a think-aloud technique. We also describe the importance of taking into consideration usability, usability goals, and the needs of the end users. We have found that more research is needed on how to radiate and infuse trust through a graphical user interface. This thesis concludes with some aspects of that subject that we think are important to keep in mind. It is of great importance never to leave the user in a state of uncertainty, and therefore clear, sincere and informative feedback is necessary throughout the service. Also central in designing graphical user interfaces is making sure that there is no mismatch between the security of the system and the security it radiates.
28

Robinson, Ashley Renee. "The Attitudes of African American Middle School Girls Toward Computer Science: Influences of Home, School, and Technology Use." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52277.

Abstract:
The number of women in computing is significantly low compared to the number of men in the discipline, with African American women making up an even smaller segment of this population. Related literature attributes this phenomenon to multiple sources, including background, stereotypes, discrimination, self-confidence, and a lack of self-efficacy or belief in one's capabilities. However, a majority of the literature fails to represent African American females in research studies. This research used a mixed-methods approach to understand the attitudes of African American middle school girls toward computer science and investigated the factors that influence these attitudes. Since women who do pursue computing degrees and continue with graduate education often publish in Human-Computer Interaction (HCI) in greater proportions than men, this research used an intervention to introduce African American middle school girls to computational thinking concepts using HCI topics. To expand the scope of the data collected, a separate group of girls was introduced to computational thinking concepts through algorithms. Data were collected through both quantitative and qualitative sources, and analyzed using inferential statistics and content analysis. The results show that African American middle school girls generally have negative attitudes toward computer science. However, after participating in a computer science intervention, their perceptions of computer science become more positive. The results also reveal four factors that influence the attitudes of African American middle school girls toward computer science: participation in an intervention, the intervention content domain, the facilitation of performance accomplishments, and participant characteristics such as socioeconomic status, mother's education, school grades, and the use of smart phones and video game consoles at home.
Ph. D.
29

Maninger, Robert M. "The Effects of Technology Integration Techniques in Elementary Mathematics Methods Courses on Elementary Preservice Teachers' Computer Self-Efficacy, Software Integration Confidence, and Lesson Planning." Thesis, University of North Texas, 2003. https://digital.library.unt.edu/ark:/67531/metadc4307/.

Abstract:
The purpose of this study was to demonstrate the effect of computer technology integration techniques on preservice teachers' feelings of computer self-efficacy and their confidence in software integration, and to interpret these preservice teachers' confidence in using computer technology integration techniques in their own planning and instruction during student teaching. The participants were 27 preservice teachers from two intact, non-randomly-formed classrooms, enrolled in the College of Education at a university in north central Texas in two sections of a course entitled EDEE 4350, Mathematics in the Elementary School. This study was quasi-experimental, with a nonequivalent pretest-posttest control group design. The independent variable was the type of instruction experienced in an elementary mathematics methods course: novel instruction with specialized computer technology integration techniques versus traditional instruction with no specialized technology integration techniques. The dependent variables were measured using the following instruments: the Demographic Data and Previous Context Use of the Computer Survey, which described participants' demographics and previous computer usage; the Self-Efficacy With Computer Technologies Scale; the Preservice Teacher Software Integration Confidence Scale; and the Lesson Plan Infusion/Integration Scale. The analysis of the Self-Efficacy With Computer Technologies Scale pretest and posttest revealed no statistically significant difference between treatment groups (p < .05). The posttest-only Preservice Teacher Software Integration Confidence Scale revealed a statistically significant difference between treatment groups (p < .05). The posttest-only Lesson Plan Technology Infusion/Integration Scale revealed no statistically significant difference between treatment groups at the .05 level. The study provides insight into the benefits of instruction in specific software integration techniques, suggesting that when preservice teachers are given instruction in specific computer software integration techniques, they are more confident in the use of those techniques.
30

Anderson, Paul E. "A computational framework for analyzing chemical modification and limited proteolysis experimental data used for high confidence protein structure prediction." Wright State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=wright1161891355.

31

Allen, Julia Elizabeth. "Transformative Learning Theory as a Basis for Identifying Barriers to Faculty Confidence in Online Instruction." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1011768/.

Abstract:
This study applied the stages of transformative learning to faculty perceptions and application of best practices in online learning. The research questions included: Can transformative learning theory constructs be used to identify potential barriers in faculty development and delivery of online instruction? How does the stage of transformative learning of online faculty relate to their perceptions about online learning and their application of best practices? Is there a correlation between stage of transformative learning and a faculty member's amount of experience with online instruction? Principal component analysis and cluster analysis support a four-component solution. The four constructs equate to Mezirow's four stages of learning: transforming frames of reference through critical reflection on assumptions, validating contested beliefs through discourse, taking action on one's reflective insight, and critically assessing it. Multiple regression analyses were run to predict faculty perceptions on the identified components; three of these were statistically significant based on years of experience teaching online, the number of professional development workshops taken on online teaching, or both. While the instrument appears to be a valid measurement of transformation of frame of reference, examination of previously contested beliefs, and critical assessment of action, further efforts will be needed before it is a fully validated instrument.
32

Alfayez, Abdulaziz Abdullah A. "Exploring the Level of Conceptual Mastery in Computational Thinking Among Male Computer Science Teachers at Public Secondary Schools in Saudi Arabia." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1538656498846648.

33

Hartley, Michael A. "A simulation study of the error induced in one-sided reliability confidence bounds for the Weibull distribution using a small sample size with heavily censored data." Thesis, Monterey, California : Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1260.

Abstract:
Approved for public release; distribution is unlimited.
Budget limitations have reduced the number of military components available for testing, and time constraints have reduced the amount of time available for actual testing, resulting in many items still operating at the end of test cycles. These two factors produce small test populations (small sample sizes) with heavily censored data. The assumption of a normal approximation for estimates based on these small sample sizes reduces the accuracy of the confidence bounds of the probability plots and the associated quantities. This creates a problem in acquisition analysis, because the confidence in the probability estimates influences the number of spare parts required to support a mission or deployment, or determines the length of warranty ensuring proper operation of systems. This thesis develops a method that simulates small samples with censored data and examines the error of the Fisher-Matrix (FM) and Likelihood Ratio Bounds (LRB) confidence methods for two test population sizes (10 and 20) with three, five, seven and nine observed failures, for the Weibull distribution. The thesis includes Monte Carlo simulation code written in S-Plus that can be modified by the user for any sampling and censoring scheme. To illustrate the approach, the thesis includes a catalog of corrected confidence bounds for the Weibull distribution, which acquisition analysts can use to adjust their confidence bounds and obtain a more accurate representation for warranty and reliability work.
Civilian, Department of the Air Force
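A hedged sketch of the simulation ingredient the abstract describes: drawing a small Weibull sample and censoring it after the r-th failure (Type II censoring), the small-sample, heavily censored regime where normal-approximation bounds degrade. Parameters are illustrative, and this is not the thesis's S-Plus code.

```python
# Generate a Type II censored Weibull sample: n items on test, the test
# stops at the r-th failure, and the survivors are censored at that time.
import numpy as np

rng = np.random.default_rng(1)

def type2_censored_sample(n=10, r=5, shape=1.5, scale=100.0):
    times = np.sort(scale * rng.weibull(shape, size=n))
    failures = times[:r]                     # observed failure times
    censored = np.full(n - r, times[r - 1])  # still running at test end
    return failures, censored

failures, censored = type2_censored_sample()
print(failures, censored)
```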
34

Obioha, Chinonye Leuna. "User-centred design to engender trust in e-commerce." Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2414.

Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2016.
Consumer trust is a core element of any e-commerce website. This study aimed to explore attributes of business-to-consumer (B2C) e-commerce websites that can communicate and engender trust from the users' perspective, using user-centred design. E-commerce websites are known to have features such as security certificates and encryption methods to ensure trust, but understanding these requires technical know-how. The technologies used to develop websites have improved considerably, but this has had little effect on improving the trust of e-commerce users, mostly in developing countries (Africa in particular). E-commerce users do not realise that these features have been put in place to make the websites trustworthy, which contributes to their reluctance to conduct business transactions online and thus reduces their buying intentions. There is a need to design e-commerce websites that communicate and convey trust from the users' perspective. The study explored various sources of data to obtain insight into and understanding of the research problem: a user-centred design (UCD) group activity with users, interviews with developers, and prior literature. Using UCD as the main methodology, an intensive UCD workshop was carried out with a group of eight e-commerce users. Furthermore, to obtain the view of experts (developers) on what is currently done to engender trust in B2C e-commerce websites, interviews with four respondents were also carried out; these interviews were intended to reduce any prejudice or bias and to obtain a clearer understanding of the phenomenon being studied. The findings revealed six main attributes that engender trust: aesthetic design, security and information privacy, functionality design, trustworthiness based on content, the development process, and vendor attributes. Proposed guidelines for each of the attributes were outlined. The findings from the users showed that those who were acquainted with e-commerce technologies were those whose backgrounds were computer and technology related. Most users focused on aesthetic design, functionality, and the security of their privacy and private details; less emphasis was placed on the technology behind the e-commerce websites. Users draw on aesthetic and cognitive values in their judgements of trust. The findings were further validated using the Domestication of Technology Theory (DTT), resulting in the development of a user-centred e-commerce trust model.
APA, Harvard, Vancouver, ISO, and other styles
35

Sangi, P. (Pekka). "Object motion estimation using block matching with uncertainty analysis." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526200774.

Full text
Abstract:
Estimation of 2-D motion is one of the fundamental problems in video processing and computer vision. This thesis addresses two general tasks in estimating projected motions of background and foreground objects in a scene: global motion estimation and motion based segmentation. The work concentrates on the study of the block matching method, and especially on those cases where the matching measure is based on the sum of squared or absolute displaced frame differences. Related techniques for performing the confidence analysis of local displacement are considered and used to improve the performance of the higher-level tasks mentioned. In general, local motion estimation techniques suffer from the aperture problem. Therefore, confidence analysis methods are needed which can complement motion estimates with information about their reliability. This work studies a particular form of confidence analysis which uses the evaluation of the match criterion for local displacement candidates. In contrast to the existing approaches, the method takes into account the local image gradient. The second part of the thesis presents a four-step feature based method for global motion estimation. For basic observations, it uses motion features which are combinations of image point coordinates, displacement estimates at those points, and representations of displacement uncertainty. A parametric form of uncertainty representation is computed exploiting the technique described in the first part of the thesis. This confidence information is used as a basis for weighting the features in motion estimation. Aspects of gradient based feature point selection are also studied. In the experimental part, the design choices of the method are compared, using both synthetic and real sequences. In the third part of the thesis, a technique for feature based extraction of background and foreground motions is presented. The new sparse segmentation algorithm performs competitive segmentation using both the spatial and temporal propagation of support information. The weighting of features exploits parametric uncertainty information which is experimentally shown to improve the performance of motion estimation. In the final part of the thesis, a novel framework for motion based object detection, segmentation, and tracking is developed. It uses a block grid based representation for segmentation and a particle filter based approach to motion estimation. Analysis techniques for obtaining the segmentation are described. Finally, the approach is integrated with the sparse motion segmentation, and the combination of the methods is experimentally shown to increase both the efficiency of sampling and the accuracy of segmentation.
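As a rough illustration of the matching measure discussed above, the following Python sketch performs SSD block matching over a window of displacement candidates and converts the match scores into a normalized confidence distribution (a softmax over negative SSD; the thesis's gradient-aware confidence analysis is more elaborate). The image data, block size and softmax temperature are invented for the example.

```python
import numpy as np

def block_match_confidence(prev, curr, top_left, block=8, search=4, beta=0.05):
    """Return the best displacement and a normalized confidence per candidate."""
    y, x = top_left
    ref = prev[y:y + block, x:x + block].astype(float)
    scores = {}
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy:y + dy + block, x + dx:x + dx + block].astype(float)
            scores[(dy, dx)] = np.sum((ref - cand) ** 2)   # SSD match measure
    ssd = np.array(list(scores.values()))
    conf = np.exp(-beta * (ssd - ssd.min()))               # lower SSD -> higher weight
    conf /= conf.sum()
    best = list(scores.keys())[int(np.argmax(conf))]
    return best, dict(zip(scores.keys(), conf))

rng = np.random.default_rng(1)
prev = rng.random((64, 64))
curr = np.roll(prev, shift=(2, -1), axis=(0, 1))           # known global shift
disp, conf = block_match_confidence(prev, curr, top_left=(20, 20))
print("estimated displacement:", disp)
```

A sharply peaked confidence distribution indicates a reliable estimate; in low-texture regions the distribution flattens along the aperture direction, which is the reliability cue the higher-level tasks exploit.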
APA, Harvard, Vancouver, ISO, and other styles
36

Saunders, Thomas. "Image motion analysis using inertial sensors." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.687346.

Full text
Abstract:
Understanding the motion of a camera from only the image(s) it captures is a difficult problem. At best we might hope to estimate the relative motion between camera and scene if we assume a static subject, but once we start considering scenes with dynamic content it becomes difficult to differentiate between motion due to the observer and motion due to scene movement. In this thesis we show how the invaluable cues provided by inertial sensor data can be used to simplify motion analysis and relax requirements for several computer vision problems. This work was funded by the University of Bath.
APA, Harvard, Vancouver, ISO, and other styles
37

Karlsson, Sara. "Data- och tv-spel – en väg till språkkunskap : Fritidsengelskans betydelse för elevernas språkliga självförtroende i en åländsk skolkontext." Thesis, Örebro universitet, Institutionen för humaniora, utbildnings- och samhällsvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-53624.

Full text
Abstract:
The objective of this study is to investigate the connection between pupils’ self-confidence and extramural English. It is a quantitative survey study conducted among 37 fifth-graders in the Åland Islands, in which a group of pupils were asked how they feel about English as a school subject, and particularly about spoken English, as well as about their exposure to English at home. The results of the survey show that there is a connection between confidence and extramural English. In particular, the pupils who play computer games are more self-confident when speaking English than the pupils who do not.
APA, Harvard, Vancouver, ISO, and other styles
38

Bornefalk, Hermansson Anna. "Resampling Evaluation of Signal Detection and Classification : With Special Reference to Breast Cancer, Computer-Aided Detection and the Free-Response Approach." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl [distributör], 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Gustafson, Nathaniel Lee. "A Confidence-Prioritization Approach to Data Processing in Noisy Data Sets and Resulting Estimation Models for Predicting Streamflow Diel Signals in the Pacific Northwest." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3294.

Full text
Abstract:
Streams in small watersheds are often known to exhibit diel fluctuations, in which streamflow oscillates on a 24-hour cycle. Streamflow diel fluctuations, which we investigate in this study, are an informative indicator of environmental processes. However, in environmental data sets, as well as many others, there is a range of noise associated with individual data points. Some points are extracted under relatively clear and well-defined conditions, while others may include a range of known or unknown confounding factors, which may decrease those points' validity. These points may or may not remain useful for training, depending on how much uncertainty they contain. We submit that in situations where some variability exists in the clarity or 'confidence' associated with individual data points (notably environmental data), an approach that takes this confidence into account during the training phase is beneficial. We propose a methodological framework for assigning confidence to individual data records and augmenting training with that information. We then exercise this methodology on two separate data sets: a simulated data set, and a real-world environmental science data set with a focus on streamflow diel signals. The simulated data set provides an integral understanding of the nature of the data involved, and the environmental science data set provides a real-world case study of applying this methodology to noisy data. Both studies' results indicate that applying confidence in training increases performance and assists the data mining process.
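As a hedged illustration of the framework's core idea, the following Python sketch assigns each record a confidence weight and passes it to training as a per-sample weight, so that suspect records contribute less. The synthetic data, the 0.2/1.0 weights and the logistic model are invented stand-ins for the thesis's confidence-prioritization scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
noisy = rng.random(500) < 0.3                 # records with known confounding factors
y[noisy] = rng.integers(0, 2, noisy.sum())    # their labels are less reliable

confidence = np.where(noisy, 0.2, 1.0)        # low weight for suspect records
model = LogisticRegression().fit(X, y, sample_weight=confidence)
print("weighted-training accuracy on clean records:",
      model.score(X[~noisy], y[~noisy]))
```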
APA, Harvard, Vancouver, ISO, and other styles
40

Ericson, Julia. "Modelling Immediate Serial Recall using a Bayesian Attractor Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291553.

Full text
Abstract:
In the last decades, computational models have become useful tools for studying biological neural networks. These models are typically constrained by either behavioural data from neuropsychological studies or by biological data from neuroscience. One model of the latter kind is the Bayesian Confidence Propagating Neural Network (BCPNN) - an attractor network with a Bayesian learning rule which has been proposed as a model for various types of memory. In this thesis, I have further studied the potential of the BCPNN in short-term sequential memory. More specifically, I have investigated if the network can be used to qualitatively replicate behaviours of immediate verbal serial recall, and thereby offer insight into the network-level mechanisms which give rise to these behaviours. The simulations showed that the model was able to reproduce various benchmark effects such as the word length and irrelevant speech effects. It could also simulate the bow-shaped positional accuracy curve as well as some backward recall if the to-be-recalled sequence was short enough. Finally, the model showed some ability to handle sequences with repeated patterns. However, the current model architecture was not sufficient for simulating the effects of rhythm, such as temporally grouping the inputs or stressing a specific element in the sequence. Overall, even though the model is not complete, it showed promising results as a tool for investigating biological memory, as it could explain various benchmark behaviours in immediate serial recall through neuroscientifically inspired learning rules and architecture.
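For readers unfamiliar with the model class, here is a minimal Python sketch of a BCPNN-style Bayesian weight rule for an attractor network: weights are log-ratios of estimated co-activation and marginal activation probabilities. The pattern statistics, sparsity and the crude recall step are illustrative assumptions, not the thesis's incremental implementation.

```python
import numpy as np

def bcpnn_weights(patterns, eps=1e-4):
    """Bayesian weight rule: w_ij = log p_ij / (p_i p_j), bias_i = log p_i."""
    p_i = patterns.mean(axis=0) + eps                     # unit activation probabilities
    p_ij = (patterns.T @ patterns) / len(patterns) + eps  # co-activation probabilities
    return np.log(p_ij / np.outer(p_i, p_i)), np.log(p_i)

rng = np.random.default_rng(3)
pats = (rng.random((5, 20)) < 0.2).astype(float)          # sparse random patterns
w, bias = bcpnn_weights(pats)

# One crude recall step from a stored pattern as cue: support = bias + w @ activity.
support = bias + w @ pats[0]
recalled = (support > support.mean()).astype(float)
print("overlap with stored pattern:", (recalled == pats[0]).mean())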
APA, Harvard, Vancouver, ISO, and other styles
41

Kavurucu, Yusuf. "An Ilp-based Concept Discovery System For Multi-relational Data Mining." Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610688/index.pdf.

Full text
Abstract:
Multi-relational data mining has become popular due to the limitations of propositional problem definition in structured domains and the tendency to store data in relational databases. However, as patterns involve multiple relations, the search space of possible hypotheses becomes intractably complex. In order to cope with this problem, several relational knowledge discovery systems have been developed employing various search strategies, heuristics and language pattern limitations. In this thesis, Inductive Logic Programming (ILP) based concept discovery is studied and two systems based on a hybrid methodology employing ILP and APRIORI, namely Confidence-based Concept Discovery and Concept Rule Induction System, are proposed. In Confidence-based Concept Discovery and Concept Rule Induction System, the main aim is to relax the strong declarative biases and user-defined specifications. Moreover, this new method works directly on relational databases. In addition, the traditional definition of confidence from the relational database perspective is modified to express the Closed World Assumption in first-order logic. A new confidence-based pruning method based on the improved definition is applied in the APRIORI lattice. Moreover, a new hypothesis evaluation criterion is used for expressing the quality of patterns in the search space. In addition, in Concept Rule Induction System, the quality of the constructed rules is further improved by using an improved generalization method. Finally, a set of experiments is conducted on real-world problems to evaluate the performance of the proposed method against similar systems in terms of support and confidence.
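The following Python sketch illustrates the confidence measure at the heart of such pruning, on a propositional stand-in for the relational setting: a rule body -> head is kept only if its confidence (the fraction of body-covering records that also satisfy the head) reaches a threshold. Transactions and the threshold are invented.

```python
from itertools import combinations

# Toy "database": each record is a set of items (a propositional stand-in
# for the relational facts handled by the actual systems).
transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
MIN_CONF = 0.7

def confidence(body, head):
    """Fraction of records covered by the body that also satisfy the head."""
    covered = [t for t in transactions if body <= t]
    return sum(1 for t in covered if head <= t) / len(covered) if covered else 0.0

items = sorted(set().union(*transactions))
for size in (1, 2):
    for body in combinations(items, size):
        for head in items:
            if head in body:
                continue
            c = confidence(set(body), {head})
            if c >= MIN_CONF:                 # candidates below this are pruned
                print(f"{set(body)} -> {head}  (confidence {c:.2f})")
```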
APA, Harvard, Vancouver, ISO, and other styles
42

Arthur, Jacob D. "Enhanced Prediction of Network Attacks Using Incomplete Data." NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/1020.

Full text
Abstract:
For years, intrusion detection has been considered a key component of many organizations’ network defense capabilities. Although a number of approaches to intrusion detection have been tried, few have been capable of providing security personnel responsible for the protection of a network with sufficient information to make adjustments and respond to attacks in real-time. Because intrusion detection systems rarely have complete information, false negatives and false positives are extremely common, and thus valuable resources are wasted responding to irrelevant events. In order to provide better actionable information for security personnel, a mechanism for quantifying the confidence level in predictions is needed. This work presents an approach which combines a primary prediction model with a novel secondary confidence-level model that measures the confidence in a given attack prediction. The ability to accurately identify an attack and quantify the confidence level in the prediction could serve as the basis for a new generation of intrusion detection devices, devices that provide earlier and better alerts for administrators and allow more proactive response to events as they are occurring.
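A hedged Python sketch of the two-model idea follows: a primary classifier predicts attacks, and a secondary model, trained on held-out data to predict when the primary is correct, scores the confidence of each alert. The synthetic features, model choices and data split are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.8, size=2000) > 0).astype(int)
X_tr, X_cal, y_tr, y_cal = X[:1200], X[1200:], y[:1200], y[1200:]

primary = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
# Secondary model: learn when the primary is right, from held-out data.
correct = (primary.predict(X_cal) == y_cal).astype(int)
conf_model = LogisticRegression().fit(X_cal, correct)

x_new = rng.normal(size=(1, 6))
alert = primary.predict(x_new)[0]
conf = conf_model.predict_proba(x_new)[0, 1]
print(f"prediction: {'attack' if alert else 'benign'}, confidence {conf:.2f}")
```

Analysts could then triage alerts by the secondary score, suppressing low-confidence ones rather than responding to every event.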
APA, Harvard, Vancouver, ISO, and other styles
43

Serrano, Martínez-Santos Nicolás. "Interactive Transcription of Old Text Documents." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/37979.

Full text
Abstract:
Nowadays, there are huge collections of handwritten text documents in libraries all over the world. The high demand for these resources has led to the creation of digital libraries in order to facilitate their preservation and provide electronic access to these documents. However, text transcriptions of these document images are not always available to allow users to quickly search information, or computers to process the information, search patterns or draw out statistics. The problem is that manual transcription of these documents is an expensive task from both economical and time viewpoints. This thesis presents a novel approach for efficient Computer Assisted Transcription (CAT) of handwritten text documents using state-of-the-art Handwriting Text Recognition (HTR) systems. The objective of CAT approaches is to efficiently complete a transcription task through human-machine collaboration, as the effort required to generate a manual transcription is high, and automatically generated transcriptions from state-of-the-art systems still do not reach the accuracy required. This thesis is centered on a special application of CAT, that is, the transcription of old text documents when the quantity of user effort available is limited, and thus the entire document cannot be revised. In this approach, the objective is to generate the best possible transcription by means of the user effort available. This thesis provides a comprehensive view of the CAT process from feature extraction to user interaction. First, a statistical approach to generalise interactive transcription is proposed. As its direct application is unfeasible, some assumptions are made to apply it to two different tasks: first, the interactive transcription of handwritten text documents, and next, the interactive detection of the document layout. Next, the digitisation and annotation process of two real old text documents is described. This process was carried out because of the scarcity of similar resources and the need for annotated data to thoroughly test all the tools and techniques developed in this thesis. These two documents were carefully selected to represent the general difficulties that are encountered when dealing with HTR. Baseline results are presented on these two documents to establish a benchmark with a standard HTR system. Finally, these annotated documents were made freely available to the community. It must be noted that all the techniques and methods developed in this thesis have been assessed on these two real old text documents. Then, a CAT approach for HTR when user effort is limited is studied and extensively tested. The ultimate goal of applying CAT is achieved by putting together three processes, given a recognised transcription from an HTR system. The first process consists in locating (possibly) incorrect words and employs the available user effort to supervise them (if necessary). As most words are not expected to be supervised due to the limited user effort available, only a few are selected to be revised. The system presents to the user a small subset of these words according to an estimation of their correctness, or to be more precise, according to their confidence level. The second process starts once these low-confidence words have been supervised. This process updates the recognition of the document taking user corrections into consideration, which improves the quality of those words that were not revised by the user.
Finally, the last process adapts the system from the partially revised (and possibly not perfect) transcription obtained so far. In this adaptation, the system intelligently selects the correct words of the transcription. As a result, the adapted system will better recognise future transcriptions. Transcription experiments using this CAT approach show that it is most effective when user effort is low. The last contribution of this thesis is a method for balancing the final transcription quality and the supervision effort applied using the previously described CAT approach. In other words, this method allows the user to control the amount of error in the transcriptions obtained from a CAT approach. The motivation of this method is to let users decide on the final quality of the desired documents, as partially erroneous transcriptions can be sufficient to convey the meaning, and the user effort required to transcribe them might be significantly lower when compared to obtaining a totally manual transcription. Consequently, the system estimates the minimum user effort required to reach the amount of error defined by the user. Error estimation is performed by computing separately the error produced by each recognised word, and thus asking the user to only revise the ones in which most errors occur. Additionally, an interactive prototype is presented, which integrates most of the interactive techniques presented in this thesis. This prototype has been developed to be used by palaeographic experts, who do not have any background in HTR technologies. After a slight fine-tuning by an HTR expert, the prototype lets the transcribers manually annotate the document or employ the CAT approach presented. All automatic operations, such as recognition, are performed in the background, detaching the transcriber from the details of the system. The prototype was assessed by an expert transcriber and shown to be adequate and efficient for its purpose. The prototype is freely available under a GNU Public Licence (GPL).
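The supervision step described above can be sketched very simply: given per-word confidence scores from the HTR system and a limited user-effort budget, present only the lowest-confidence words for revision. The words and scores below are invented.

```python
def select_for_supervision(words, confidences, budget):
    """Return the indices of the `budget` lowest-confidence words."""
    ranked = sorted(range(len(words)), key=lambda i: confidences[i])
    return sorted(ranked[:budget])

words = ["quarta", "feria", "xxiiij", "dias", "do", "mes"]
conf = [0.91, 0.42, 0.18, 0.77, 0.95, 0.60]
for i in select_for_supervision(words, conf, budget=2):
    print(f"ask user to verify word {i}: '{words[i]}' (confidence {conf[i]:.2f})")
```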
Serrano Martínez-Santos, N. (2014). Interactive Transcription of Old Text Documents [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37979
APA, Harvard, Vancouver, ISO, and other styles
44

Pereira, Patrícia. "Attractor Neural Network modelling of the Lifespan Retrieval Curve." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280732.

Full text
Abstract:
Human capability to recall episodic memories depends on how much time has passed since the memory was encoded. This dependency is described by a memory retrieval curve that reflects an interesting phenomenon referred to as a reminiscence bump - a tendency for older people to recall more memories formed during their young adulthood than in other periods of life. This phenomenon can be modelled with an attractor neural network, for example, the firing-rate Bayesian Confidence Propagation Neural Network (BCPNN) with incremental learning. In this work, the mechanisms underlying the reminiscence bump in the neural network model are systematically studied; the effects of synaptic plasticity, network architecture and other relevant parameters on the characteristics of the bump are investigated. The most influential factors turn out to be the magnitude of dopamine-linked plasticity at birth and the time constant of exponential plasticity decay with age, which set the position of the bump. The other parameters mainly influence the general amplitude of the lifespan retrieval curve. Furthermore, the recency phenomenon, i.e. the tendency to remember the most recent memories, can also be parameterized by adding a constant to the exponentially decaying plasticity function representing the decrease in the level of dopamine neurotransmitters.
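The parameterization described above reduces to a small function; the following Python sketch evaluates encoding strength against age as an exponential dopamine-linked decay plus a constant floor for the recency effect. The parameter values are invented, not those fitted in the thesis.

```python
import numpy as np

def plasticity(age, p0=1.0, tau=15.0, floor=0.1):
    """Encoding strength at a given age: exponential decay plus a recency floor."""
    return p0 * np.exp(-age / tau) + floor

for a in np.arange(0, 90, 10):
    print(f"age {a:2.0f}: plasticity {plasticity(a):.3f}")
```

Memories encoded while plasticity is still high (youth and early adulthood) dominate later recall, producing the bump; the floor keeps recent memories retrievable.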
APA, Harvard, Vancouver, ISO, and other styles
45

Lindblad, Simon. "Labeling Clinical Reports with Active Learning and Topic Modeling." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148463.

Full text
Abstract:
Supervised machine learning models require a labeled data set of high quality in order to perform well. Available text data often exists in abundance, but it is usually not labeled. Labeling text data is a time consuming process, especially in the case where multiple labels can be assigned to a single text document. The purpose of this thesis was to make the labeling process of clinical reports as effective and effortless as possible by evaluating different multi-label active learning strategies. The goal of the strategies was to reduce the number of labeled documents a model needs, and increase the quality of those documents. With the strategies, an accuracy of 89% was achieved with 2500 reports, compared to 85% with random sampling. In addition to this, 85% accuracy could be reached after labeling 975 reports, compared to 1700 reports with random sampling.
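As a concrete example of one family of strategies evaluated here, the following Python sketch performs uncertainty sampling for multi-label data: it queries the unlabeled reports whose predicted label probabilities are, on average, closest to 0.5. The synthetic data, seed-set size and batch size are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 10))
Y = (rng.random((300, 3)) < 0.3).astype(int)       # 3 possible labels per report
labeled = list(range(40))                          # initial seed set
pool = [i for i in range(300) if i not in labeled]

clf = OneVsRestClassifier(LogisticRegression()).fit(X[labeled], Y[labeled])
proba = clf.predict_proba(X[pool])                 # shape (n_pool, n_labels)
uncertainty = -np.abs(proba - 0.5).mean(axis=1)    # higher = less certain
query = [pool[i] for i in np.argsort(uncertainty)[-5:]]
print("reports to label next:", query)
```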
APA, Harvard, Vancouver, ISO, and other styles
46

Santos, Fernando Azenha Bautzer. "Desenvolvimento de programas computacionais visando a estimativa de parâmetros de interesse genético-populacional e o teste de hipóteses genéticas." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/41/41131/tde-02122006-121406/.

Full text
Abstract:
The dissertation presents the results obtained with the development of a comprehensive computer program (software), running on the Windows (MS) graphical interface, with the aim of: (a) estimating parameters of population-genetic interest (such as allelic frequencies and their corresponding standard errors and 95% confidence intervals); and (b) testing genetic hypotheses (Hardy-Weinberg population ratios and analysis of population hierarchical structure) by means of traditional methods as well as through exact tests obtained with computer simulation procedures (bootstrap and jackknife methods).
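A minimal Python sketch of the bootstrap procedure mentioned above: resample genotypes with replacement to obtain a 95% confidence interval for an allele frequency. The genotype data are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
# Genotypes coded as the number of 'A' alleles carried by each individual (0, 1 or 2).
genotypes = rng.choice([0, 1, 2], size=120, p=[0.36, 0.48, 0.16])

def allele_freq(g):
    return g.sum() / (2 * len(g))

boot = np.array([allele_freq(rng.choice(genotypes, size=len(genotypes)))
                 for _ in range(10_000)])          # resample with replacement
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"p(A) = {allele_freq(genotypes):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```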
APA, Harvard, Vancouver, ISO, and other styles
47

Ren, Zhen. "Towards Confident Body Sensor Networking." W&M ScholarWorks, 2012. https://scholarworks.wm.edu/etd/1539623606.

Full text
Abstract:
With the recent technology advances of wireless communication and lightweight low-power sensors, the Body Sensor Network (BSN) is made possible. More and more researchers are interested in developing numerous novel BSN applications, such as remote health/fitness monitoring, military and sport training, interactive gaming, personal information sharing, and secure authentication. Despite the unstable wireless communication, various confidence requirements are placed on the BSN networking service. This thesis aims to provide Quality of Service (QoS) solutions for BSN communication, in order to achieve the required confidence goals. We develop communication quality solutions to satisfy confidence requirements at both the communication and application levels, in single and multiple BSNs. First, we build communication QoS, targeting service quality guarantees in terms of throughput and time delay at the communication level. More specifically, considering the heterogeneous BSN platform in a real deployment, we develop a radio-agnostic solution for wireless resource scheduling in the BSN. Second, we provide a QoS solution for both inter- and intra-BSN communications when more than one BSN is involved. Third, we define application fidelity for two neurometric applications as examples, and bridge a connection between the communication QoS and the application QoS.
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Yinan Information Technology &amp Electrical Engineering Australian Defence Force Academy UNSW. "W3 Trust Model (W3TM): a trust-profiling framework to assess trust and transitivity of trust of web-based services in a heterogeneous web environment." Awarded by:University of New South Wales - Australian Defence Force Academy. School of Information Technology and Electrical Engineering, 2005. http://handle.unsw.edu.au/1959.4/38655.

Full text
Abstract:
The growth of eCommerce is being hampered by a lack of trust between providers and consumers of Web-based services. While Web trust issues have been addressed by researchers in many disciplines, a comprehensive approach has yet to be established. This thesis proposes a conceptual trust-profiling framework, W3TF, which addresses issues of trust and user confidence through a range of new user-centred trust measures: trust categories, trust domains, transitivity of trust, fading factor analysis, standalone assessment, hyperlinked assessment and relevance assessment. While others now use the concept of transitivity of trust, it was first introduced by this research in 1998. The thesis also illustrates how W3TF can narrow the gap/disconnection between the hierarchical PKI trust environment and the horizontal Web referral environment. The framework incorporates existing measures of trust (such as Public Key Infrastructure), takes account of consumer perceptions by identifying trust attributes, and utilises Web technology (in the form of metadata) to create a practical, flexible and comprehensive approach to trust assessment. The versatility of the W3TF is demonstrated by applying it to a variety of cases from the trust literature and to the hypothetical case study that provided the initial stimulus for this research. It is shown that the framework can be expanded to accommodate new trust attributes, categories and domains, and that trust can be 'weighed' (and therefore evaluated) by using various mathematical formulae based on different theories and policies. The W3TF addresses identified needs, narrows the gaps in existing approaches and provides a mechanism to embrace current and future efforts in trust management. The framework is a generic form of trust assessment that can help build user confidence in an eCommerce environment. For service providers, it offers an incentive to create websites with a high number of desired trust attributes. For consumers, it enables more reliable judgments to be made. Hence, Web trust can be enhanced.
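One way to read the transitivity-of-trust and fading-factor measures is sketched below in Python: trust along a referral chain is the product of the pairwise trust values, attenuated at each hop by a fading constant. Both the fading constant and the chain values are illustrative assumptions, not the formulae of the thesis.

```python
def chain_trust(pairwise, fade=0.9):
    """Derived trust in the end of a referral chain from its pairwise trust values."""
    trust = 1.0
    for hop, t in enumerate(pairwise, start=1):
        trust *= t * (fade ** hop)   # each extra hop fades the assessment
    return trust

# Site A trusts B at 0.95, B trusts C at 0.9, C vouches for service D at 0.8.
print(f"derived trust in D: {chain_trust([0.95, 0.9, 0.8]):.3f}")
```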
APA, Harvard, Vancouver, ISO, and other styles
49

Potet, Marion. "Vers l'intégration de post-éditions d'utilisateurs pour améliorer les systèmes de traduction automatiques probabilistes." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00995104.

Full text
Abstract:
Existing machine translation technologies are now seen as a promising approach to help produce translations efficiently and at reduced cost. However, the current state of the art does not yet allow full automation of the process, and human/machine cooperation remains indispensable for producing quality results. A common practice is to post-edit the results provided by the system, that is, to manually check and, if necessary, correct the system's erroneous outputs. This post-editing work performed by users on machine translation results is a valuable source of data for analysing and adapting systems. The problem addressed in our work is to develop an approach capable of taking advantage of this user feedback (post-editions) to improve, in turn, the machine translation systems themselves. The experiments conducted exploit a corpus of about 10,000 translation hypotheses from a reference probabilistic system, post-edited by volunteers through an online platform. The results of the first experiments integrating the post-editions, on the one hand into the translation model and on the other hand through statistical automatic post-editing, allowed us to assess the complexity of the task. A more in-depth study of statistical post-editing systems allowed us to evaluate their usability as well as the contributions and limits of the approach. We also show that the collected post-editions can be used successfully to estimate the confidence to be placed in a machine translation result. The results of our work show the difficulty, but also the potential, of using post-editions of machine translation hypotheses as a source of information for improving the quality of current probabilistic systems.
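One concrete way to turn collected post-editions into confidence labels, sketched here in Python under our own simplifying assumptions, is to score each system hypothesis by its word-level edit distance to the user's post-edited version (an HTER-like rate) and treat the result as a quality label for training a confidence estimator. The tokenized example sentences are invented.

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance via dynamic programming."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a)][len(b)]

hyp = "the cat sat in the mat".split()
post_edit = "the cat sat on the mat".split()
hter = edit_distance(hyp, post_edit) / len(post_edit)   # TER-like error rate
print(f"quality label from post-edition: {1 - hter:.2f}")
```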
APA, Harvard, Vancouver, ISO, and other styles
50

Yang, Qing. "Segmentation d'images ultrasonores basée sur des statistiques locales avec une sélection adaptative d'échelles." Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-00869975.

Full text
Abstract:
Image segmentation is an important field in image processing, and a large number of different approaches have been developed over recent decades. The active contour approach is one of the most popular. Within this framework, this thesis aims to develop robust algorithms that can segment images with intensity inhomogeneities. We focus on the study of region-based external energies within the level-set framework. Specifically, we address the difficulty of choosing the scale of the spatial window that defines locality. Our main contribution is the proposal of an adaptive scale for segmentation methods based on local statistics. We use the Intersection of Confidence Intervals approach to define a position-dependent scale for the estimation of image statistics. The scale is optimal in the sense that it gives the best trade-off between the bias and the variance of the local polynomial approximation of the observed image, conditioned on the current segmentation. Moreover, for the segmentation model based on a Bayesian interpretation with two local kernels, we suggest considering their values separately. Our proposal yields smoother segmentations with fewer delocalisations than the original method. Comparative experiments of our proposal against other segmentation methods based on local statistics are carried out. Quantitative results on simulated ultrasound images show that the proposed method is more robust to the attenuation phenomenon. Experiments on real images also demonstrate the usefulness of our approach.
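The Intersection of Confidence Intervals (ICI) rule itself can be sketched compactly: grow the local window around a pixel and keep intersecting the confidence intervals of the successive local estimates; the selected scale is the largest one before the intersection becomes empty. The following Python sketch applies the rule to a 1-D signal with a step edge; the signal, noise level and threshold gamma are invented.

```python
import numpy as np

def ici_scale(samples, sigma, gamma=2.0):
    """Pick the largest window half-width whose CI intersects all smaller ones."""
    centre = len(samples) // 2
    lo, hi, best = -np.inf, np.inf, 1
    for h in range(1, centre):
        window = samples[centre - h:centre + h + 1]
        est = window.mean()                            # local estimate at scale h
        half = gamma * sigma / np.sqrt(len(window))    # CI half-width shrinks with h
        lo, hi = max(lo, est - half), min(hi, est + half)
        if lo > hi:                                    # intervals no longer intersect
            break
        best = h
    return best

rng = np.random.default_rng(7)
signal = np.where(np.arange(101) < 70, 1.0, 3.0)       # step edge at position 70
noisy = signal + rng.normal(scale=0.3, size=101)
print("selected half-width at the centre pixel:", ici_scale(noisy, sigma=0.3))
```

The window stops growing roughly when it starts to straddle the edge, which is the bias-variance trade-off the position-dependent scale exploits.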
APA, Harvard, Vancouver, ISO, and other styles