
Dissertations / Theses on the topic 'AUTOMATIC EVALUATING'

Consult the top 50 dissertations / theses for your research on the topic 'AUTOMATIC EVALUATING.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Peng, Sisi. "Evaluating Automatic Model Selection." Thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-154449.

Full text
Abstract:
In this paper, we briefly describe the automatic model selection provided by Autometrics in the PcGive program. The modeler only needs to specify the initial model and the significance level at which to reduce it; the algorithm does the rest. The properties of Autometrics are discussed. We also explain its background concepts and examine whether the model selected by Autometrics performs well. For a given data set, we use Autometrics to find a "new" model, and then compare the "new" model with one previously selected by another modeler. It is an interesting question whether Autometrics can find models that fit the given data better. As an illustration, we choose three examples. Autometrics is labor saving and always gives us a parsimonious model, and it is an invaluable instrument for social science. But we still need more examples to strongly support the claim that Autometrics can find a model that fits the data better; the few examples in this paper are far from enough.
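The general-to-specific reduction that Autometrics automates can be illustrated with a far simpler procedure: plain backward elimination at a chosen significance level. The sketch below is a toy (the real Autometrics algorithm searches multiple reduction paths and runs diagnostic tests at each step); the variable names and data are invented.

```python
import numpy as np

def backward_eliminate(X, y, names, t_crit=1.96):
    """Repeatedly drop the least significant regressor until every
    remaining |t|-statistic exceeds t_crit. A toy general-to-specific
    reduction, not the actual Autometrics search."""
    X, names = X.copy(), list(names)
    while X.shape[1] > 0:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = len(y) - X.shape[1]
        s2 = resid @ resid / dof
        se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
        t = np.abs(beta / se)
        worst = int(np.argmin(t))
        if t[worst] >= t_crit:           # every regressor significant: stop
            break
        X = np.delete(X, worst, axis=1)  # drop the least significant one
        names.pop(worst)
    return names

# Synthetic data: only x1 actually enters the data-generating process.
rng = np.random.default_rng(0)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)
selected = backward_eliminate(np.column_stack([x1, x2, x3]), y, ["x1", "x2", "x3"])
```

With a strong signal on x1, the procedure keeps it and tends to discard the irrelevant regressors, which is the parsimony the abstract refers to.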
APA, Harvard, Vancouver, ISO, and other styles
2

Doe, Hope L. "Evaluating the Effects of Automatic Speech Recognition Word Accuracy." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36956.

Full text
Abstract:
Automatic Speech Recognition (ASR) research has been primarily focused on large-scale systems and industry, while other areas that require attention are often overlooked by researchers. For this reason, this research looked at automatic speech recognition at the consumer level. Many individual consumers will purchase and use speech recognition software for a different purpose than that of the military or commercial industries, such as telecommunications. Consumers who purchase the software for personal use will mainly use ASR for dictation of correspondence and documents. Two ASR dictation software packages were used to conduct the study. The research examined the relationships between (1) speech recognition software training and word accuracy, (2) error-correction time by the user and word accuracy, and (3) correspondence type and word accuracy. The correspondences evaluated were those that resemble Personal, Business, and Technical Correspondences. Word accuracy was assessed after initial system training, five minutes of error-correction time, and ten minutes of error-correction time.

Results indicated that word recognition accuracy achieved does affect user satisfaction. It was also found that with increased error-correction time, word accuracy results improved. Additionally, the results found that Personal Correspondence achieved the highest mean word accuracy rate for both systems and that Dragon Systems achieved the highest mean word accuracy recognition for the Correspondences explored in this research. Results were discussed in terms of subjective and objective measures, advantages and disadvantages of speech input, and design recommendations were provided.
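Word accuracy in studies like this is conventionally derived from the word-level edit distance between the reference transcript and the recognizer output. The thesis does not specify its exact scoring tool, so the following is a minimal sketch of the standard computation:

```python
def word_errors(reference, hypothesis):
    """Word-level Levenshtein distance: the minimum number of
    substitutions, insertions and deletions that turn the reference
    transcript into the recognizer output."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[-1][-1], len(ref)

def word_accuracy(reference, hypothesis):
    """Percent word accuracy = 100 * (1 - errors / reference length)."""
    errors, n = word_errors(reference, hypothesis)
    return 100.0 * (1 - errors / n)
```

For example, `word_accuracy("please send the attached report", "please send attached report")` scores one deletion over five reference words, i.e. 80% word accuracy.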
Master of Science

APA, Harvard, Vancouver, ISO, and other styles
3

Nguyen, Christofer. "Priority automation engineering : Evaluating a tool for automatic code generation and configuration of PLC-Applications." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-85797.

Full text
Abstract:
This research explores the Automation Interface created by Beckhoff through introducing a compiler solution. Today machine builders have to be able to build machines or plants in different sizes and provide many variations of the machine or plant types. Automatic code generation can be used in this respect to reuse code that has been tested and is configurable to match the desired functionality. Additionally, the use of a pre-existing API could potentially result in fewer engineering resources wasted in developing automatic code generation. This thesis aims to evaluate the Automation Interface (AI) tool created by Beckhoff. This is accomplished by means of incorporating the API functions into a compiler solution. The solution is designed to export the information required through an XML file to generate PLC applications. The generated PLC code will be in Structured Text. In order to create a functional PLC application, software requirements and test cases are established. The solution is then validated by means of generating a data logger to illustrate the usage. The exploratory research revealed both the benefits and drawbacks of using the AI in a compiler solution. The evaluation indicated that the Automation Interface can reduce the engineering effort needed to produce a compiler solution, but understanding the underlying components that work with the API required a great deal of effort.
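The export step described above, generating Structured Text from an XML machine description, can be sketched as follows. The XML schema, tag names and program structure here are invented for illustration; the thesis's actual schema and the Beckhoff Automation Interface API calls are not reproduced.

```python
import xml.etree.ElementTree as ET

# Hypothetical machine description (not the thesis's actual schema).
config = """
<machine name="Conveyor">
  <variable name="bStart" type="BOOL"/>
  <variable name="rSpeed" type="REAL"/>
</machine>
"""

def generate_st(xml_text):
    """Emit a skeleton Structured Text program from an XML machine
    description: one VAR block plus an empty body."""
    root = ET.fromstring(xml_text)
    lines = [f"PROGRAM {root.get('name')}", "VAR"]
    for var in root.findall("variable"):
        lines.append(f"    {var.get('name')} : {var.get('type')};")
    lines += ["END_VAR", "", "(* generated logic goes here *)", "END_PROGRAM"]
    return "\n".join(lines)

st_code = generate_st(config)
```

The point of the approach is that the tested ST templates stay fixed while the XML configuration varies per machine or plant variant.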
APA, Harvard, Vancouver, ISO, and other styles
4

Vatn, Niklas, and Julia Byström. "Evaluating automatic colour equalization to preprocess dermoscopic images for classification using a CNN." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302502.

Full text
Abstract:
Skin cancer is one of the most prevalent types of cancer, and diagnosis of skin lesions is mostly done by visual inspection by a doctor. Lately, computer-aided diagnosis (CAD) has gained popularity, and previous studies have used convolutional neural networks (CNNs) with great results to classify dermoscopic images of different benign and malignant skin lesions. Other studies using CAD tools have investigated the effects of preprocessing image data before using it in diagnosis classification. Our thesis therefore aims to investigate whether preprocessing dermoscopic images of skin lesions before training a CNN to classify them improves classification accuracy. The investigation was conducted by training a CNN on the multi-class problem of classifying dermoscopic images of four different skin lesions: melanoma and basal cell carcinoma, which are malignant, and benign keratosis-like lesions and melanocytic nevi, which are benign. The dermoscopic images were preprocessed using automatic colour equalization (ACE). The ACE preprocessing was applied to the entire dataset five times, each time with a different level of its slope parameter, the contrast tuner of the algorithm. These five datasets, together with a dataset not preprocessed with ACE, were used to train a CNN model. After 50 epochs of training, the CNN was evaluated on prediction accuracy as well as precision, recall and specificity for the four classes. The results indicate that preprocessing images using ACE did not improve the classification accuracy for skin lesions. Additionally, the results suggest that no class is affected more by ACE preprocessing than the others. To further investigate whether preprocessing improves classification accuracy, the effects of ACE on a different CNN should be evaluated.
Additionally, if the effects of image preprocessing for skin lesion classification are investigated further, hair removal could be interesting to look into.
Skin cancer is one of the most common types of cancer, and diagnosis of skin lesions is primarily performed through visual inspection by a doctor. Recently, computer-aided diagnosis (CAD) has become more common, and previous studies have used convolutional neural networks (CNNs) with good results to classify dermoscopic images of various benign and malignant skin lesions. Other studies using CAD tools have investigated the effects of preprocessing image data before it is used in diagnosis. However, little research has focused on the effect of preprocessing in experiments that use CNNs. The aim of our study is therefore to investigate whether preprocessing dermoscopic images of skin lesions before training a CNN to classify them can improve classification accuracy. The investigation was carried out by training a CNN to classify dermoscopic images of four different skin lesions: malignant melanoma and basal cell carcinoma, which are malignant, and seborrheic keratoses and melanocytic nevi, which are benign. The dermoscopic images were preprocessed with the automatic colour equalization (ACE) algorithm. The ACE preprocessing was applied to the entire dataset five times, each time with a different level of the algorithm's contrast tuner. These five datasets, together with a dataset not preprocessed with ACE, were used to train different CNN models. After 50 epochs, the models were evaluated with respect to accuracy as well as precision, sensitivity and specificity for the four classes. The results indicate that preprocessing images with ACE does not improve the classification accuracy for skin lesions. Furthermore, the results suggest that no class is affected more by ACE preprocessing than the others. To further investigate whether preprocessing can improve classification accuracy, the effects of ACE on other CNN models should be examined.
If further investigations of the effects of image preprocessing for skin lesion classification are to be carried out, hair removal could be interesting to examine.
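For reference, the core of ACE can be sketched for a greyscale image as below. This is a simplified brute-force formulation with an invented min-max normalisation step, not the exact implementation used in the thesis; the slope parameter is the contrast tuner that was varied across the five datasets.

```python
import numpy as np

def ace(image, slope=5.0):
    """Simplified automatic colour equalization for a greyscale image:
    each pixel is re-scored against every other pixel through a
    saturated slope function weighted by inverse distance, then the
    result is stretched to [0, 1]."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
    vals = image.ravel().astype(float)
    diff = vals[:, None] - vals[None, :]                 # pairwise intensity differences
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)                       # ignore self-comparison
    r = (np.clip(slope * diff, -1.0, 1.0) / dist).sum(axis=1)
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)      # stretch to [0, 1]
    return r.reshape(h, w)

out = ace(np.random.default_rng(1).random((6, 6)), slope=5.0)
```

The pairwise comparison makes this O(n²) in the number of pixels, which is why practical ACE implementations use approximations; raising the slope saturates the comparison function earlier and increases local contrast.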
APA, Harvard, Vancouver, ISO, and other styles
5

Gilbert, Michael Stephen. "A Small-Perturbation Automatic-Differentiation (SPAD) Method for Evaluating Uncertainty in Computational Electromagnetics." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354742230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Skoglund, Martin. "Evaluating SLAM algorithms for Autonomous Helicopters." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12282.

Full text
Abstract:

Navigation with unmanned aerial vehicles (UAVs) requires good knowledge of the current position and other states. A UAV navigation system often uses GPS and inertial sensors in a state estimation solution. If the GPS signal is lost or corrupted state estimation must still be possible and this is where simultaneous localization and mapping (SLAM) provides a solution. SLAM considers the problem of incrementally building a consistent map of a previously unknown environment and simultaneously localize itself within this map, thus a solution does not require position from the GPS receiver.

This thesis presents a visual feature based SLAM solution using a low resolution video camera, a low-cost inertial measurement unit (IMU) and a barometric pressure sensor. State estimation is made with an extended information filter (EIF), where sparseness in the information matrix is enforced with an approximation.

An implementation is evaluated on real flight data and compared to an EKF-SLAM solution. Results show that both solutions provide similar estimates, but the EIF is over-confident. The sparse structure is exploited, though possibly not fully, making the solution nearly linear in time; storage requirements are linear in the number of features, which enables evaluation over a longer period of time.
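The attraction of the information form used above is that a measurement update is purely additive in the information matrix and vector, rather than requiring the covariance update of the EKF; that additivity is what makes enforcing sparsity attractive. A minimal linear-measurement sketch, with toy dimensions and noise values assumed:

```python
import numpy as np

def eif_update(Y, y, H, R, z):
    """Measurement update in information form: with Y = P^-1 and
    y = Y @ x, a linear measurement z = H @ x + v (noise cov R)
    is folded in by simple addition."""
    Rinv = np.linalg.inv(R)
    return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

# Toy example: estimate a 2-D position from two noisy direct observations.
Y = np.eye(2) * 1e-6                  # near-zero prior information
y = np.zeros(2)
H = np.eye(2)                         # the state is observed directly
R = 0.01 * np.eye(2)
for z in ([1.0, 2.0], [1.2, 1.8]):
    Y, y = eif_update(Y, y, H, R, np.array(z))
x_hat = np.linalg.solve(Y, y)         # recover the state estimate on demand
```

With a negligible prior, the recovered estimate is essentially the mean of the two observations; note that the state itself is only reconstructed when needed, by solving Y x = y.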

APA, Harvard, Vancouver, ISO, and other styles
7

Breakiron, Daniel Aubrey. "Evaluating the Integration of Online, Interactive Tutorials into a Data Structures and Algorithms Course." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23107.

Full text
Abstract:
OpenDSA is a collection of open source tutorials for teaching data structures and algorithms. It was created with the goals of visualizing complex, abstract topics; increasing the amount of practice material available to students; and providing immediate feedback and incremental assessment. In this thesis, I first describe aspects of the OpenDSA architecture relevant to collecting user interaction data. I then present an analysis of the interaction log data gathered from three classes during Spring 2013. The analysis focuses on determining the time distribution of student activity, determining the time required for assignment completion, and exploring "credit-seeking" behaviors and behavior related to non-required exercises. We identified clusters of students based on when they completed exercises, verified the reliability of estimated time requirements for exercises, provided evidence that a majority of students do not read the text, discovered a measurement that could be used to identify exercises that require additional development, and found evidence that students complete exercises after obtaining credit. Furthermore, we determined that slideshow usage was fairly high (even when credit was not offered), and skipping to the end of slideshows was more common when credit was offered but also occurred when it was not.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
8

Ivarsson, Anton, and Jacob Stachowicz. "Evaluating machine learning methods for detecting sleep arousal." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259996.

Full text
Abstract:
Sleep arousal is a phenomenon that affects the sleep of a large number of people. The process of predicting and classifying arousal events is done manually with the aid of certified technologists, although some research has been done on automation using Artificial Neural Networks (ANN). This study explored how a Support Vector Machine (SVM) performed compared to an ANN on this task. Polysomnography (PSG) is a type of sleep study which produces the data that is used in classifying sleep disorders. The PSG data used in this thesis consists of 13 waveforms sampled at or resampled to 200 Hz. There were samples from 994 patients, totalling approximately 6.98 × 10^10 data points; processing this amount of data is time consuming and presents a challenge. 2000 points of each signal were used in the construction of the dataset used for the models. Extracted features included: median, max, min, skewness, kurtosis, power of EEG band frequencies and more. Recursive feature elimination was used in order to select the best number of extracted features. The extracted dataset was used to train two "out of the box" classifiers, and due to memory issues the testing had to be split into four batches. Taking the mean of the four tests, the SVM scored an ROC AUC of 0.575 and the ANN 0.569. As the difference between the two results was very modest, it was not possible to conclude that either model was better suited for the task at hand. It could however be concluded that an SVM can perform as well as an ANN on PSG data. More work has to be done on feature extraction, feature selection and the tuning of the models for PSG data to conclude anything else. Future thesis work could include research questions such as "Which features perform best for an SVM in the prediction of sleep arousals on PSG data?" or "What feature selection technique performs best for an SVM in the prediction of sleep arousals on PSG data?", etc.
Sleep disorders are a group of health conditions that affect the sleep quality of a large number of people. One example of a sleep disorder is sleep apnea. Detecting these events is today a manual task performed by certified technologists, but recent studies have shown that artificial neural networks (ANNs) can detect the events with high accuracy. This study investigates how well a support vector machine (SVM) can detect these events compared to an ANN. The data used to classify sleep disorders comes from a type of sleep study called polysomnography (PSG). The PSG data used in this thesis consists of 13 waveforms, of which 12 were recorded at 200 Hz and one was resampled to 200 Hz. The data contains recordings from 994 patients, giving a total of approximately 6.98 × 10^10 data points. Processing such a large amount of data was a challenge. 2000 points from each waveform were used in constructing the dataset used for the models. The extracted features included, among others: median, max, min, skewness, kurtosis, and the amplitude of EEG band frequencies. Recursive feature elimination was used to select the optimal number of the best features. The extracted dataset was then used to train two models with default configurations, an SVM and an ANN. Due to memory limitations, training and testing had to be split into four segments. The mean of the four tests was an ROC AUC of 0.575 for the SVM and 0.569 for the ANN. Since the difference between the two results was very marginal, we could not conclude that either model was better suited for the task at hand. We can, however, conclude that an SVM can perform as well as an ANN on PSG data without tuning. More work is needed on feature extraction, feature elimination and tuning of the models.
Future theses could address questions such as "Which features perform best for an SVM in the detection of sleep disorders on PSG data?" or "Which feature elimination technique performs best for an SVM in the detection of sleep disorders on PSG data?", and so on.
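The statistical features named above (median, max, min, skewness, kurtosis) are straightforward to compute per signal window. A small sketch for one 2000-sample window, with the EEG band powers omitted for brevity; the window data here is synthetic:

```python
import numpy as np

def window_features(x):
    """Hand-crafted features for one PSG signal window, in the spirit
    of the feature set described above."""
    x = np.asarray(x, dtype=float)
    zscores = (x - x.mean()) / x.std()
    return {
        "median": float(np.median(x)),
        "max": float(x.max()),
        "min": float(x.min()),
        "skewness": float((zscores ** 3).mean()),
        "kurtosis": float((zscores ** 4).mean() - 3.0),  # excess kurtosis
    }

feats = window_features(np.random.default_rng(7).normal(size=2000))
```

Each window then contributes one feature vector, and a selector such as recursive feature elimination ranks and prunes these columns before the classifier is trained.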
APA, Harvard, Vancouver, ISO, and other styles
9

Lundin Forssén, William. "Automatic Grading System in Microsoft .NET Framework: Evaluating the performance of different programming languages on the Microsoft .NET platform." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155892.

Full text
Abstract:
A common challenge for software consulting companies is recruiting the right people. In the software industry, the recruitment process usually involves several steps before a contract is signed; a single job interview is rarely enough. Thus, the interview tends to involve some test to make sure that the job applicant is qualified. The testing procedure is usually part of the interview or conducted on the same occasion. Interviewing applicants who are not qualified for the position is a waste of time. This waste can be minimized by only interviewing qualified applicants. In the software industry, qualifications are commonly asserted by letting an applicant solve programming problems. This process can be automated using an automatic grader. Such systems already exist at some universities today and are used extensively in various programming courses and in programming contests. This thesis explains how such a system can be built using only the Microsoft .NET Framework while still supporting multiple languages. The thesis also evaluates this system with regard to execution speed and memory consumption in an attempt to find a scaling factor between the different programming languages, since the same implementation of a specific algorithm in different languages should be graded equally. The supported languages are C#, Java and Python. Java support is enabled through the use of IKVM, a compiler from Java bytecode to Common Intermediate Language (.NET bytecode). Python is supported through the use of IronPython. The results showed that C# and Java performed almost equally in terms of execution speed and memory usage, with Java slightly behind. As compensation for Java's slower execution speed, a scaling factor was calculated; its average was 1.29. Python had greater performance and memory issues than the other two, and no scaling factor could be obtained for this language using the data presented in this thesis.
Future work involves implementing additional language support and improving the system with usability in mind.
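The core loop of such an automatic grader, running a submission against test cases, comparing stdout and timing each run, can be sketched as below. This is a generic sketch, not the thesis's .NET implementation; it omits sandboxing and memory limits, and a language scaling factor like the 1.29 reported for Java would be applied to the time limit.

```python
import os
import subprocess
import sys
import tempfile
import time

def grade(source_path, test_cases, timeout=5.0):
    """Run a Python submission once per (stdin, expected stdout) test
    case and return (verdict, mean runtime in seconds)."""
    timings = []
    for stdin_data, expected in test_cases:
        start = time.perf_counter()
        try:
            result = subprocess.run(
                [sys.executable, source_path],
                input=stdin_data, capture_output=True,
                text=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded", None
        timings.append(time.perf_counter() - start)
        if result.stdout.strip() != expected.strip():
            return "Wrong Answer", None
    return "Accepted", sum(timings) / len(timings)

# Demo: a correct doubling submission, written to a temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print(int(input()) * 2)\n")
verdict, mean_time = grade(f.name, [("3", "6"), ("10", "20")])
os.unlink(f.name)
```

A multi-language grader would dispatch on the submission's language to pick the interpreter or compiler and multiply the time limit by that language's scaling factor.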
APA, Harvard, Vancouver, ISO, and other styles
10

Lu, Ming. "System Dynamics Model for Testing and Evaluating Automatic Headway Control Models for Trucks Operating on Rural Highways." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-01292008-113749/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Strachota, Tomáš. "Automatické navrhování klíčových slov." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237243.

Full text
Abstract:
This thesis surveys the theoretical background for an automatic keyword suggestion system. It contains an overview of current statistical term recognition methods and of methods for evaluating automatic term recognition systems. Based on the known approaches, the thesis specifies possible enhancements. It explores unifying keywords using thesauri, input text filtering and correction of word forms.
APA, Harvard, Vancouver, ISO, and other styles
12

Zhang, Yuhua. "Evaluating the Scalability of SDF Single-chip Multiprocessor Architecture Using Automatically Parallelizing Code." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc4673/.

Full text
Abstract:
Advances in integrated circuit technology continue to provide more and more transistors on a chip. Computer architects are faced with the challenge of finding the best way to translate these resources into high performance. The challenge in the design of the next generation CPU (central processing unit) lies not in trying to use up the silicon area, but in finding smart ways to make use of the wealth of transistors now available. In addition, the next generation architecture should offer high throughput performance, scalability, modularity, and low energy consumption, instead of being suitable for only one class of applications or users, or only emphasizing a faster clock rate. A program exhibits different types of parallelism: instruction level parallelism (ILP), thread level parallelism (TLP), or data level parallelism (DLP). Likewise, architectures can be designed to exploit one or more of these types of parallelism. It is generally not possible to design architectures that can take advantage of all three types of parallelism without using very complex hardware structures and complex compiler optimizations. We present the state-of-the-art SDF (scheduled data flow) architecture, which exploits as much TLP as the application supplies. We implement an SDF single-chip multiprocessor constructed from simpler processors and execute the automatically parallelized application on the single-chip multiprocessor. SDF has many desirable features such as high throughput, scalability, and low power consumption, which meet the requirements of the next generation of CPU design. Compared with superscalar, VLIW (very long instruction word), and SMT (simultaneous multithreading) architectures, the experimental results show that for applications with very little parallelism SDF is comparable to the other architectures, while for applications with large amounts of parallelism SDF outperforms them.
APA, Harvard, Vancouver, ISO, and other styles
13

Lee, Jennifer Ann. "Evaluating ITS Investments in Public Transportation: A Proposed Framework and Plan for the OmniLink Route Deviation Service." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/34416.

Full text
Abstract:
When implementing an intelligent transportation system (ITS), stakeholders often overlook the importance of evaluating the system once it is in place. Determining the extent to which the objectives of an investment have been met is important not only to the agency involved, but also to other agencies, so that lessons are learned and mistakes are not repeated in future projects. An effective evaluation allows a transit provider to identify and address areas that could use improvement. Agencies implementing ITS investments often have different goals, needs, and concerns that they hope their project will address, and consequently a generic evaluation plan is difficult to develop. While it is recognized that the U.S. Department of Transportation has developed guidelines to aid agencies in evaluating such investments, this research is intended to complement these guidelines by assisting in the evaluation of a site-specific ITS investment. It presents an evaluation framework and plan that provides a systematic method for assessing the potential impacts associated with the project by defining objectives, measures, analysis recommendations, and data requirements. The framework developed specifically addresses the ITS investment on the OmniLink local route deviation bus service in Prince William County, Virginia, but could be used as a basis for the evaluation of similar ITS investments. The OmniLink ITS investment includes an automatic vehicle location (AVL) system, mobile data terminals (MDTs), and computer-aided dispatch (CAD) technology.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
14

Runhem, Lovisa, and Filip Schulze. "Evaluating a fractal features method for automatic detection of Alzheimer’s Disease in brain MRI scans : A quantitative study based on the method developed by Lahmiri and Boukadoum in 2013." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166408.

Full text
Abstract:
The field of computer-aided diagnosis has recently made progress in the diagnosing of Alzheimer's disease (AD) from magnetic resonance images (MRI) of the brain. Lahmiri and Boukadoum (2013) have researched this topic since 2011, and in 2013 they presented a system for automatic detection of AD based on machine learning classification. Their proposed system achieved a classification accuracy of 100% (2013, p. 1507) using support vector machines with quadratic kernel classifiers. The MRI scans were first translated to 1-dimensional signals, from which three features were extracted to measure the signals' self-affinity. These three features were Hurst's exponent, the total fluctuation energy of a detrended fluctuation analysis, and the same analysis' scaling exponent. The results of their study were validated using a dataset of 23 MRI scans from brains with AD and normal brains. This report makes an attempt at implementing the method proposed by Lahmiri and Boukadoum in 2013 and evaluating its accuracy on a dataset of 120 cases, of which 60 are cases of AD and 60 are normal cases. The results were validated using both leave-one-out cross-validation and 3-fold cross-validation. Both a dataset of 23 cases, consistent in size with Lahmiri and Boukadoum's, and the larger dataset of 120 cases were considered. The best classification accuracies, obtained with 3-fold cross-validation, were 78.26% for the small dataset and 65.00% for the large one. The results of this study are to some extent similar to those of Lahmiri and Boukadoum; however, this study fails to verify how their method performs on a larger dataset, as their results for a small dataset could not be reproduced in this implementation. Thus the results of this report are inconclusive in verifying the accuracy of the implemented method for a larger dataset. However, this implementation of the method shows promise, as the accuracy for the large dataset was fairly good compared to other research done in the field.
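Of the three features, the detrended fluctuation analysis (DFA) scaling exponent is the easiest to illustrate. A compact numpy sketch follows; the window sizes and linear detrending order are chosen arbitrarily here and need not match Lahmiri and Boukadoum's settings.

```python
import numpy as np

def dfa_alpha(signal, window_sizes=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis: the scaling exponent is the
    slope of log F(n) versus log n. Uncorrelated noise gives a value
    near 0.5; persistent signals give larger values."""
    profile = np.cumsum(signal - np.mean(signal))   # integrated, mean-removed signal
    fluctuations = []
    for n in window_sizes:
        f2 = []
        for k in range(len(profile) // n):
            seg = profile[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend per window
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

alpha = dfa_alpha(np.random.default_rng(3).normal(size=4096))
```

On white noise the estimated exponent lands near 0.5; applied to the 1-dimensional signals derived from MRI scans, this exponent becomes one entry of the three-element feature vector fed to the classifier.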
APA, Harvard, Vancouver, ISO, and other styles
15

Jaykumar, Nishita. "ResQu: A Framework for Automatic Evaluation of Knowledge-Driven Automatic Summarization." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1464628801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Haderlein, Tino. "Automatic evaluation of tracheoesophageal substitute voices /." Berlin : Logos-Verl, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3049421&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Sewell, Christopher. "Automatic performance evaluation in surgical simulation /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Coppejans, Hugo Herman Godelieve. "RGB-D SLAM : an implementation framework based on the joint evaluation of spatial velocities." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/64524.

Full text
Abstract:
In pursuit of creating a fully automated navigation system that is capable of operating in dynamic environments, a large amount of research is being devoted to systems that use visual odometry assisted methods to estimate the position of a platform with regards to the environment surrounding it. This includes systems that do and do not know the environment a priori, as both rely on the same methods for localisation. For the combined problem of localisation and mapping, Simultaneous Localisation and Mapping (SLAM) is the de facto choice, and in recent years with the advent of color and depth (RGB-D) sensors, RGB-D SLAM has become a hot topic for research. Most research being performed is on improving the overall system accuracy or more specifically the performance with regards to the overall trajectory error. While this approach quantifies the performance of the system as a whole, the individual frame-to-frame performance is often not mentioned or explored properly. While this will directly tie in to the overall performance, the level of scene cohesion experienced between two successive observations can vary greatly over a single dataset of observations. The focus of this dissertation will be the relevant levels of translational and rotational velocities experienced by the sensor between two successive observations and the effect on the final accuracy of the SLAM implementation. The frame rate will specifically be used to alter and evaluate the different spatial velocities experienced over multiple datasets of RGB-D data. Two systems were developed to illustrate and evaluate the potential of various approaches to RGB-D SLAM. The first system is a real-world implementation where SLAM is used to localise and map the environment surrounding a quadcopter platform. A Microsoft Kinect is directly mounted to the quadcopter and is used to provide a RGB-D datastream to a remote processing terminal. 
This terminal runs a SLAM implementation that can alternate between different visual odometry methods. The remote terminal acts as the position controller for the quadcopter, replacing the need for a direct human operator. A semi-automated system is implemented that allows a human operator to designate waypoints within the environment that the quadcopter moves to. The second system uses a series of publicly available RGB-D datasets with their accompanying ground-truth readings to simulate a real RGB-D datastream. This is used to evaluate the performance of the various RGB-D SLAM approaches to visual odometry. For each of the datasets, the accompanying translational and angular velocity on a frame-to-frame basis can be calculated. This can, in turn, be used to evaluate the frame-to-frame accuracy of the SLAM implementation, where the spatial velocity can be manually altered by occluding frames within the sequence. Thus, an accurate relationship can be calculated between the frame rate, the spatial velocity and the performance of the SLAM implementation. Three image processing techniques were used to implement the visual odometry for RGB-D SLAM. SIFT, SURF and ORB were compared across eight of the TUM database datasets. SIFT had the best performance, with a 30% increase over SURF and double the performance of ORB. By implementing SIFT using CUDA, the feature detection and description process only takes 18 ms, negating the disadvantage that SIFT has compared to SURF and ORB. The RGB-D SLAM implementation was compared to four prominent research papers and showed comparable results. The effect of rotation and translation was evaluated, based on the effect of each rotation and translation axis. It was found that the z-axis (scale) and the roll-axis (scene orientation) have a lower effect on the average RPE error on a frame-to-frame basis. It was found that rotation has a much greater impact on the performance when evaluating rotation and translation separately.
On average, a rotation of 1 deg resulted in a 4 mm translation error and a 20% rotation error, while a translation of 10 mm resulted in a rotation error of 0.2 deg and a translation error of 45%. The combined effect of rotation and translation had a multiplicative effect on the error metric. The quadcopter platform designed to work with the SLAM implementation did not function ideally, but it was sufficient for the purpose. The quadcopter is able to self-stabilise within the environment, given a spacious area. For smaller, enclosed areas the backdraft generated by the quadcopter motors led to some instability in the system. A frame-to-frame error of 40.34 mm and 1.93 deg was estimated for the quadcopter system.
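The quantities this analysis turns on — the translational and rotational velocity experienced between two successive observations, as a function of frame rate — can be computed directly from ground-truth poses. This is a minimal illustrative sketch (not the thesis's implementation), assuming 4x4 homogeneous camera poses and a known frame rate:

```python
import numpy as np

def relative_motion(T_a, T_b):
    """Relative transform taking frame a to frame b (4x4 homogeneous poses)."""
    return np.linalg.inv(T_a) @ T_b

def frame_to_frame_velocity(T_a, T_b, fps):
    """Translational (m/s) and rotational (deg/s) speed between two poses."""
    rel = relative_motion(T_a, T_b)
    trans = np.linalg.norm(rel[:3, 3])  # metres moved between the two frames
    # rotation angle recovered from the trace of the 3x3 rotation block
    cos_theta = np.clip((np.trace(rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_theta))
    return trans * fps, angle * fps

# Occluding every second frame halves the effective frame rate and therefore
# doubles the apparent spatial velocity between successive observations.
```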
Dissertation (MEng)--University of Pretoria, 2017.
Electrical, Electronic and Computer Engineering
MEng
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
19

Karakanta, Alina. "Automatic subtitling: A new paradigm." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/356701.

Full text
Abstract:
Audiovisual Translation (AVT) is a field where Machine Translation (MT) has long found limited success mainly due to the multimodal nature of the source and the formal requirements of the target text. Subtitling is the predominant AVT type, quickly and easily providing access to the vast amounts of audiovisual content becoming available daily. Automation in subtitling has so far focused on MT systems which translate source language subtitles, already transcribed and timed by humans. With recent developments in speech translation (ST), the time is ripe for extended automation in subtitling, with end-to-end solutions for obtaining target language subtitles directly from the source speech. In this thesis, we address the key steps for accomplishing the new paradigm of automatic subtitling: data, models and evaluation. First, we address the lack of representative data by compiling MuST-Cinema, a speech-to-subtitles corpus. Segmenter models trained on MuST-Cinema accurately split sentences into subtitles, and enable automatic data augmentation techniques. Having representative data at hand, we move to developing direct ST models for three scenarios: offline subtitling, dual subtitling, live subtitling. Lastly, we propose methods for evaluating subtitle-specific aspects, such as metrics for subtitle segmentation, a product- and process-based exploration of the effect of spotting changes in the subtitle post-editing process, and finally, a comprehensive survey on subtitlers' user experience and views on automatic subtitling. Our findings show the potential of speech technologies for extending automation in subtitling to provide multilingual access to information and communication.
APA, Harvard, Vancouver, ISO, and other styles
20

Perez, Castaneda Gabriel Antonio. "Evaluation par simulation de la sûreté de fonctionnement de systèmes en contexte dynamique hybride." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2009. http://tel.archives-ouvertes.fr/tel-00383298.

Full text
Abstract:
The search for analytical solutions to reliability evaluation in a dynamic context remains unsolved in the general case. A state of the art presented in Chapter 1 shows that partial approaches exist under particular assumptions. Monte Carlo simulation would be the only recourse, but no efficient tools existed that could simultaneously simulate the discrete evolution of a system and its continuous evolution while taking probabilistic aspects into account. In this context, Chapter 2 introduces the concept of a stochastic hybrid automaton, capable of capturing all the problems posed by dynamic reliability and of evaluating dependability measures through Monte Carlo simulation implemented in the Scilab-Scicos environment. Chapter 3 demonstrates the efficiency of our simulation approach for dependability evaluation in a dynamic context on two test cases, one of which is a benchmark of the dependability community. Our approach addresses the problems posed, notably accounting for the influence of the discrete state, the continuous state and their interaction in the probabilistic evaluation of the performance of a system in which, moreover, the reliability characteristics of the components themselves depend on the continuous and discrete states. Chapter 4 gives an idea of the value of supervisory control as a means of ensuring dependability. The concepts of observer automaton and controller are introduced and illustrated on our test case to show their potential.
APA, Harvard, Vancouver, ISO, and other styles
21

Comelles, Pujadas Elisabet. "Automatic Machine Translation Evaluation: A Qualitative Approach." Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/295703.

Full text
Abstract:
The present study addresses the problem of Automatic Evaluation of Machine Translation (MT) from a linguistic perspective. Most of the studies performed in this area focus on quantitative analyses based on correlation coefficients; however, little has been done as regards a more qualitative approach, going beyond correlations and analysing data in detail. This thesis aims at shedding some light on the suitability, influence and combination of linguistic information to evaluate MT output, not restricting our research to the correlation with human judgements but basing it on a qualitative analysis. More precisely, this research intends to emphasize the effectiveness of linguistic analysis in order to identify and test those linguistic features that help in evaluating traditional concepts of adequacy and fluency. In order to perform this research we have focused on MT output in English, with an application to Spanish so as to test the portability of our approach. The starting point of this work was a linguistic analysis of both MT output and reference segments with the aim of highlighting not only those linguistic errors that an automatic MT evaluation metric must identify, but also those positive linguistic features that must be taken into account, identified and treated as correct linguistic phenomena. Once the linguistic analysis was conducted and in order to confirm our hypotheses and check whether those linguistic phenomena and traits identified in the analysis were helpful to evaluate MT output, we designed and implemented a linguistically-motivated MT metric, VERTa, to evaluate English output. Several experiments were conducted with this first version of VERTa in order to test the suitability of the linguistic features selected and how they should be combined so as to evaluate fluency and adequacy separately. 
Besides using information provided by correlations as a guide, we also performed a detailed analysis of the metric's output every time linguistic features were added and/or combined. After performing these experiments and checking the suitability of the linguistic information used and how it had to be used and combined, VERTa's parameters were adjusted and an updated, optimised version of the metric was ready to be used. With this updated version, and for the sake of comparison, a meta-evaluation of the metric for adequacy, fluency and MT quality was conducted, as well as a comparison to some of the best-known and most widely used MT metrics, showing that it outperformed them all when adequacy and fluency were assessed. Finally, we ported our MT metric to Spanish with the aim of studying its portability: checking which linguistic features in our metric would have to be slightly modified, which changes would have to be performed and, finally, whether the metric would be easy to adapt to a new language. Furthermore, this version of VERTa for Spanish was compared to other well-known metrics used to evaluate Spanish, showing that it also outperformed them.
Aquesta tesi versa sobre el problema de l’avaluació de la traducció automàtica des d’una perspectiva lingüística. La majoria d’estudis realitzats en aquesta àrea són estudis quantitatius basats en coeficients de correlació, tanmateix, molt poca recerca s’ha centrat en un enfocament més qualitatiu, que vagi més enllà de les correlacions i analitzi les dades detalladament. Aquest treball vol portar llum a la idoneïtat, la influència i la combinació de la informació lingüística necessària per avaluar la sortida de traducció automàtica. En concret, es pretén emfasitzar l’efectivitat de l’anàlisi lingüística per identificar i examinar aquells trets lingüístics que ajudin a avaluar els conceptes tradicionals de fluïdesa i adequació. Per tal de realitzar aquest estudi s’ha treballat amb l’anglès com a llengua d’arribada, tot i que també s’ha tingut en compte el castellà en l’última etapa. El punt inicial d’aquest treball ha estat una anàlisi lingüística dels segments d’hipòtesi i de referència per tal de trobar tant aquells errors lingüístics que una mètrica automàtica d’avaluació ha de poder detectar, com identificar aquelles característiques lingüístiques que cal tenir en compte i tractar com a fenòmens lingüísticament correctes. Després d’aquesta anàlisi, s’ha dissenyat i implementat una mètrica d’avaluació automàtica, VERTa, que ha d’ajudar a confirmar les hipòtesis formulades i comprovar si els fenòmens i trets lingüístics detectats en l’anàlisi inicial són útils per avaluar text traduït automàticament. Amb aquesta primera versió de la mètrica s’han realitzat una sèrie d’experiments, així com unes anàlisis quantitatives i qualitatives per comprovar la idoneïtat dels trets lingüístics seleccionats i explorar com s’han de combinar per avaluar la fluïdesa i l’adequació per separat. Després d’aquests experiments i de les anàlisis pertinents, s’han ajustat els paràmetres de la mètrica per tal d’obtenir-ne una nova versió. 
Aquesta nova versió s’ha utilitzat per realitzar una meta-avaluació de la mètrica, comparant-la amb d’altres mètriques d’avaluació àmpliament conegudes i utilitzades dins de l’àrea. Els resultats obtinguts per la VERTa en relació a l’avaluació de fluïdesa i l’adequació han superat els de la resta de mètriques. Finalment, s’ha adaptat la mètrica al castellà per tal d’estudiar quines característiques lingüístiques incloses en la mètrica s’havien de retocar, quins canvis calia fer, i si era fàcil adaptar la mètrica a una nova llengua.
APA, Harvard, Vancouver, ISO, and other styles
22

Akiba, Yasuhiro. "Automatic evaluation methods for machine translation systems." 京都大学 (Kyoto University), 2005. http://hdl.handle.net/2433/144795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Browne, Cameron Bolitho. "Automatic generation and evaluation of recombination games." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/17025/1/Cameron_Browne_Thesis.pdf.

Full text
Abstract:
Many new board games are designed each year, ranging from the unplayable to the truly exceptional. For each successful design there are untold numbers of failures; game design is something of an art. Players generally agree on some basic properties that indicate the quality and viability of a game; however, these properties have remained subjective and open to interpretation. The aims of this thesis are to determine whether such quality criteria may be precisely defined and automatically measured through self-play in order to estimate the likelihood that a given game will be of interest to human players, and whether this information may be used to direct an automated search for new games of high quality. Combinatorial games provide an excellent test bed for this purpose as they are typically deep yet described by simple, well-defined rule sets. To test these ideas, a game description language was devised to express such games and a general game system implemented to play, measure and explore them. Key features of the system include modules for measuring statistical aspects of self-play and synthesising new games through the evolution of existing rule sets. Experiments were conducted to determine whether automated game measurements correlate with rankings of games by human players, and whether such correlations could be used to inform the automated search for new high-quality games. The results support both hypotheses and demonstrate the emergence of interesting new rule combinations.
APA, Harvard, Vancouver, ISO, and other styles
24

Browne, Cameron Bolitho. "Automatic generation and evaluation of recombination games." Queensland University of Technology, 2008. http://eprints.qut.edu.au/17025/.

Full text
Abstract:
Many new board games are designed each year, ranging from the unplayable to the truly exceptional. For each successful design there are untold numbers of failures; game design is something of an art. Players generally agree on some basic properties that indicate the quality and viability of a game; however, these properties have remained subjective and open to interpretation. The aims of this thesis are to determine whether such quality criteria may be precisely defined and automatically measured through self-play in order to estimate the likelihood that a given game will be of interest to human players, and whether this information may be used to direct an automated search for new games of high quality. Combinatorial games provide an excellent test bed for this purpose as they are typically deep yet described by simple, well-defined rule sets. To test these ideas, a game description language was devised to express such games and a general game system implemented to play, measure and explore them. Key features of the system include modules for measuring statistical aspects of self-play and synthesising new games through the evolution of existing rule sets. Experiments were conducted to determine whether automated game measurements correlate with rankings of games by human players, and whether such correlations could be used to inform the automated search for new high-quality games. The results support both hypotheses and demonstrate the emergence of interesting new rule combinations.
APA, Harvard, Vancouver, ISO, and other styles
25

Frkal, Jan. "Systém pro automatické vyhodnocení e-mailových zpráv." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-221053.

Full text
Abstract:
This diploma thesis deals with the design and realisation of a system for automatic evaluation of e-mail messages. The system is built with PHP and a MySQL database and supports automatic synchronisation, during which e-mail messages are downloaded and saved using the IMAP or POP3 protocol. The messages are then analysed and classified into types according to pre-defined keywords. The system also works with black lists and white lists: if the sender of an e-mail is found in the black list during synchronisation, that e-mail is skipped; if the sender is found in the white list, the e-mail is excluded from keyword matching and its type and category are loaded from the list. Most of the values produced by the evaluation of the e-mails are presented as clear statistics, with pie charts and numerical summaries available. Access to the system is protected by a login, so only registered users can use it.
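The synchronisation flow described above lends itself to a compact sketch. The original system is written in PHP; the following Python fragment, with hypothetical names, only illustrates the black-list/white-list/keyword logic:

```python
def evaluate_message(sender, body, blacklist, whitelist, keyword_types):
    """Classify one e-mail the way the synchronisation step does:
    skip black-listed senders, trust white-listed ones, else match keywords."""
    if sender in blacklist:
        return None                      # message skipped entirely
    if sender in whitelist:
        return whitelist[sender]         # type/category taken from the list
    for msg_type, keywords in keyword_types.items():
        if any(kw in body.lower() for kw in keywords):
            return msg_type
    return "unclassified"
```

For example, with `keyword_types = {"invoice": ["invoice", "payment"]}`, a message body mentioning an invoice from an unlisted sender is classified as `"invoice"`, while a black-listed sender yields `None`.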
APA, Harvard, Vancouver, ISO, and other styles
26

de Oliveira, Marcelo Gurgel. "An integrated methodology for the evaluation of the safety impacts of in-vehicle driver warning technologies." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/19162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Feng, Qianli. "Automatic American Sign Language Imitation Evaluator." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461233570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Skimmons, Brian E. "Automated performance evaluation technique." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Hassel, Martin. "Evaluation of automatic text summarization : a practical implementation." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Björklund, Tomas. "Automatic evaluation of breast density in mammographic images." Thesis, KTH, Skolan för teknik och hälsa (STH), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-103788.

Full text
Abstract:
The goal of this master's thesis is to develop a computerized method for automatic estimation of mammographic density in images from five different types of mammography units. Mammographic density is a measurement of the amount of fibroglandular tissue in a breast. It is the single most attributable risk factor for breast cancer; an accurate measurement of mammographic density can increase the accuracy of cancer prediction in mammography. Today it is commonly estimated through visual inspection by a radiologist, which is subjective and results in inter-reader variation. The developed method estimates the density as the ratio of #pixels-containing-dense-tissue over #pixels-containing-any-breast-tissue, and also according to the BI-RADS density categories. To achieve this, each mammographic image is (1) corrected for breast thickness and normalized such that a global threshold can separate dense and non-dense tissue; (2) iteratively thresholded until a good threshold is found, a process monitored and automatically stopped by a classifier trained on sample segmentations, using features based on image intensity characteristics in specified image regions; and (3) filtered to remove noise such as blood vessels from the segmentation. Finally, the ratio of dense tissue is calculated and a BI-RADS density class is assigned based on a calibrated scale (after averaging the ratings of both craniocaudal images for each patient). The calibration is based on the estimated density ratios of over 1300 training samples, compared against radiologists' ratings of the same images. The method was tested on craniocaudal images (not included in the training process) of 703 patients, acquired with different mammography units and rated by radiologists according to the BI-RADS density classes. The agreement with the radiologist ratings in terms of Cohen's weighted kappa is substantial (0.73). 
In 68% of the cases the agreement is exact; in only 1.2% of the cases is the disagreement more than one class.
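The central quantity, the dense-to-breast pixel ratio, reduces to a few lines once a breast mask and a global threshold are available. The sketch below is illustrative only; the BI-RADS cut-offs shown are placeholders, not the thesis's calibrated scale:

```python
import numpy as np

def density_ratio(image, breast_mask, threshold):
    """#pixels-containing-dense-tissue / #pixels-containing-any-breast-tissue."""
    breast_pixels = breast_mask.sum()
    dense_pixels = np.logical_and(breast_mask, image >= threshold).sum()
    return dense_pixels / breast_pixels

def birads_class(ratio, cuts=(0.25, 0.50, 0.75)):
    """Map a density ratio to a BI-RADS class 1-4 using cut-offs on the ratio.
    The cut points here are placeholders, not the thesis's calibrated values."""
    return 1 + sum(ratio >= c for c in cuts)
```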
APA, Harvard, Vancouver, ISO, and other styles
31

O'Riordan, Tim. "Evaluation and automatic analysis of MOOC forum comments." Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/424796/.

Full text
Abstract:
Moderators of Massive Open Online Courses (MOOCs) undertake a dual role. Their work entails not just facilitating an effective learning environment, but also identifying excelling and struggling learners, and providing pedagogical encouragement and direction. Supporting learners is a critical part of moderators' work, and identifying learners' level of critical thinking is an important part of this process. As many thousands of learners may communicate 24 hours a day, 7 days a week using MOOC comment forums, providing support in this environment is a significant challenge for the small numbers of moderators typically engaged in this work. In order to address this challenge, I adopt established coding schemes used for pedagogical content analysis of online discussions to classify comments, and report on several studies I have undertaken which seek to ascertain the reliability of these approaches, establishing associations between these methods and linguistic and other indicators of critical thinking. I develop a simple algorithmic method of classification based on automatically sorting comments according to their linguistic composition, and evaluate an interview-based case study where this algorithm is applied to an ongoing MOOC. The algorithmic method achieved good reliability when applied to a prepared test data set, and when applied to unlabelled comments in a live MOOC and evaluated by MOOC moderators, it was considered to have provided useful, actionable feedback. This thesis provides contributions that help to understand the usefulness of automatic analysis of levels of critical thinking in MOOC comment forums, and as such has implications for future learning analytics research and e-learning policy making.
APA, Harvard, Vancouver, ISO, and other styles
32

Zapata, González José Ricardo. "Comparative evaluation and combination of automatic rhythm description systems." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/123822.

Full text
Abstract:
The automatic analysis of musical rhythm from audio, and more specifically tempo and beat tracking, is one of the fundamental open research problems in Music Information Retrieval (MIR). Automatic beat tracking is a valuable tool for the solution of other MIR problems, as it enables beat-synchronous analysis for different music tasks. Even though automatic rhythm description is a relatively mature research topic in MIR, tempo estimation and beat tracking remain unsolved problems. We describe a new method for the extraction of beat times, with a confidence value, from music audio, based on measuring the mutual agreement between a committee of beat tracking systems. The method can also be used to identify music samples that are challenging for beat tracking, without the need for ground-truth annotations. We also conduct an extensive comparative evaluation of 32 tempo estimation and 16 beat tracking systems.
El análisis automático musical del ritmo en audio, y más concretamente el tempo y la detección de beats (Beat tracking), es uno de los problemas fundamentales en recuperación de información de Musical (MIR). La detección automática de beat es una valiosa herramienta para la solución de otros problemas de MIR, ya que permite el análisis sincronizado de la música con los beats para otras tareas. Describimos un nuevo método para la extracción de beats en señales de audio que mide el grado de confianza de la estimación, basado en la medición del grado de similitud entre un comité de sistemas de detección de beats. Este método automático se puede utilizar también para identificar canciones que son difíciles para la detección de beats. También realizamos una extensa evaluación comparativa de los sistemas actuales de descripción automática ritmo. Para esto, Evaluamos 32 algoritmos de tempo y 16 sistemas de detección de beats.
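The committee-based confidence measure described in this abstract can be approximated by averaging a pairwise agreement score over the committee's beat outputs. This is an illustrative sketch with a fixed tolerance window and hypothetical names, not the thesis's exact formulation:

```python
import numpy as np

def beat_f_measure(ref, est, tol=0.07):
    """F-measure between two beat sequences, with a +/- tol (s) hit window."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    if len(ref) == 0 or len(est) == 0:
        return 0.0
    hits = sum(np.min(np.abs(ref - b)) <= tol for b in est)
    if hits == 0:
        return 0.0
    precision, recall = hits / len(est), hits / len(ref)
    return 2 * precision * recall / (precision + recall)

def mean_mutual_agreement(committee):
    """Average pairwise F-measure over a committee of beat trackers' outputs;
    a low value flags the excerpt as challenging for beat tracking."""
    scores = [beat_f_measure(a, b)
              for i, a in enumerate(committee) for b in committee[i + 1:]]
    return float(np.mean(scores)) if scores else 0.0
```

When the committee members agree closely, the mean agreement approaches 1; excerpts where it drops can be collected automatically as hard cases, without ground truth.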
APA, Harvard, Vancouver, ISO, and other styles
33

Nieto, Oriol. "Discovering structure in music: Automatic approaches and perceptual evaluations." Thesis, New York University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3705329.

Full text
Abstract:

This dissertation addresses the problem of the automatic discovery of structure in music from audio signals by introducing novel approaches and proposing perceptually enhanced evaluations. First, the problem of music structure analysis is reviewed from the perspectives of music information retrieval (MIR) and music perception and cognition (MPC), including a discussion of the limitations and current challenges in both disciplines. When discussing the existing methods of evaluating the outputs of algorithms that discover musical structure, a transparent open-source software package called mir_eval, which contains implementations of these evaluations, is introduced. Then, four MIR algorithms are presented: one to compress music recordings into audible summaries, another to discover musical patterns from an audio signal, and two for the identification of the large-scale, non-overlapping segments of a musical piece. After discussing these techniques, and given the differences in how people perceive the structure of music, the idea of applying more MPC-oriented approaches is considered in order to obtain perceptually relevant evaluations for music segmentation. A methodology to automatically obtain the tracks that are most difficult for machines to annotate is presented, in order to include them in the design of a human study to collect multiple human annotations. To select these tracks, a novel open-source framework called the music structural analysis framework (MSAF) is introduced. This framework contains the most relevant music segmentation algorithms and uses mir_eval to transparently evaluate them. Moreover, MSAF makes use of the JSON annotated music specification (JAMS), a new format that contains multiple annotations for several tasks in a single file, which simplifies dataset design and the analysis of agreement across different human references. The human study to collect additional annotations (which are stored in JAMS files) is described, in which five new annotations for fifty tracks are collected. 
Finally, these additional annotations are analyzed, confirming the problem of ground-truth datasets with a single annotator per track, given the high degree of disagreement among annotators on the challenging tracks. To alleviate this, the annotations are merged to produce a more robust human reference annotation. Lastly, the standard F-measure of the hit-rate measure for evaluating music segmentation is analyzed for the case where access to additional annotations is not possible, and it is shown, via multiple human studies, that precision seems more perceptually relevant than recall.
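The hit-rate measure discussed here, and the precision/recall asymmetry behind the final finding, can be sketched as follows. This is a simplified version with a fixed tolerance window, not the dissertation's evaluation code:

```python
def boundary_hit_rate(ref, est, window=0.5):
    """Hit-rate precision/recall/F for section boundaries (in seconds):
    an estimated boundary is a hit if it falls within +/- window of some
    reference boundary."""
    hits = sum(any(abs(r - e) <= window for r in ref) for e in est)
    precision = hits / len(est) if est else 0.0
    recall = hits / len(ref) if ref else 0.0
    f = 2 * precision * recall / (precision + recall) if hits else 0.0
    return precision, recall, f

# Over-segmenting inflates recall while precision collapses, which is one
# reason precision can track perceived segmentation quality more closely
# than recall.
```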

APA, Harvard, Vancouver, ISO, and other styles
34

Khodayari, Shahrzad. "Automatic Detection of Unspecified Expression Evaluation in FreeRTOS Programs." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-228289.

Full text
Abstract:
Embedded systems are widely used in most electrical devices. They are often complex and safety-critical; therefore, their reliability is significantly important. Among the many techniques for verifying a system, model checking models a system in temporal logic and can be used to assert a desired property on it. CBMC is a bounded model checker for ANSI-C and C++ programs. In this thesis, we extend the CBMC tool to check C/C++ code and automatically detect a form of unspecified behaviour: function calls with arguments that exhibit side effects, which can easily go unnoticed by programmers. In addition, the code can be configured for use with ARM Cortex microcontrollers and FreeRTOS software.
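The targeted pattern is a call such as `f(i++, i++)` in C, where the evaluation order of the arguments is unspecified. The thesis's detector operates inside CBMC on C/C++; the Python sketch below only illustrates the detection idea on Python's closest analogue, flagging calls whose arguments contain assignment expressions. All names here are ours, not part of the CBMC extension:

```python
import ast

def calls_with_side_effecting_args(source):
    """Return line numbers of calls whose arguments contain assignment
    expressions (:=) -- the closest Python analogue of C's f(i++, i++)."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for arg in node.args:
                if any(isinstance(sub, ast.NamedExpr) for sub in ast.walk(arg)):
                    flagged.append(node.lineno)
                    break
    return flagged
```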
APA, Harvard, Vancouver, ISO, and other styles
35

Orăsan, Constantin. "Comparative evaluation of modular automatic summarisation systems using CAST." Thesis, University of Wolverhampton, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

QUIRITA, VICTOR HUGO AYMA. "AN EVALUATION OF AUTOMATIC FACE RECOGNITION METHODS FOR SURVEILLANCE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=24340@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Esta dissertação teve por objetivo comparar o desempenho de diversos algoritmos que representam o estado da arte em reconhecimento facial a imagens de sequências de vídeo. Três objetivos específicos foram perseguidos: desenvolver um método para determinar quando uma face está em posição frontal com respeito à câmera (detector de face frontal); avaliar a acurácia dos algoritmos de reconhecimento com base nas imagens faciais obtidas com ajuda do detector de face frontal; e, finalmente, identificar o algoritmo com melhor desempenho quando aplicado a tarefas de verificação e identificação. A comparação dos métodos de reconhecimento foi realizada adotando a seguinte metodologia: primeiro, foi criado um detector de face frontal que permitiu o captura das imagens faciais frontais; segundo, os algoritmos foram treinados e testados com a ajuda do facereclib, uma biblioteca desenvolvida pelo Grupo de Biometria no Instituto de Pesquisa IDIAP; terceiro, baseando-se nas curvas ROC e CMC como métricas, compararam-se os algoritmos de reconhecimento; e por ultimo, as análises dos resultados foram realizadas e as conclusões estão relatadas neste trabalho. Experimentos realizados sobre os bancos de vídeo: MOBIO, ChokePOINT, VidTIMIT, HONDA, e quatro fragmentos de diversos filmes, indicam que o Inter Session Variability Modeling e Gaussian Mixture Model são os algoritmos que fornecem a melhor acurácia quando são usados em tarefas tanto de verificação quanto de identificação, o que os indica como técnicas de reconhecimento viáveis para o vídeo monitoramento automático em vídeo.
This dissertation aimed to compare the performance of state-of-the-art face recognition algorithms on facial images captured from multiple video sequences. Three specific objectives were pursued: to develop a method for determining when a face is in a frontal position with respect to the camera (frontal face detector); to evaluate the accuracy of recognition algorithms based on the facial images obtained with the help of the frontal face detector; and finally, to identify the algorithm with the best performance when applied to verification and identification tasks in video surveillance systems. The comparison of the recognition methods was performed using the following approach: first, a frontal face detector, which allowed the capture of facial images, was created; second, the algorithms were trained and tested with the help of facereclib, a library developed by the Biometrics Group at the IDIAP Research Institute; third, ROC and CMC curves were used as metrics to compare the recognition algorithms; and finally, the results were analyzed and the conclusions reported in this manuscript. Experiments conducted on the video datasets MOBIO, ChokePoint, VidTIMIT and HONDA, and on four fragments of several films, indicate that the Inter-Session Variability Modelling and Gaussian Mixture Model algorithms provide the best classification accuracy when used in verification and identification tasks, which indicates that they are viable automatic recognition techniques for video surveillance applications.
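A CMC (Cumulative Match Characteristic) curve of the kind used in this comparison ranks the gallery identities by similarity for each probe and reports, for each rank k, the fraction of probes whose true identity falls within the top k. A minimal sketch, assuming a precomputed probe-by-gallery similarity matrix (hypothetical data layout):

```python
import numpy as np

def cmc_curve(similarity, true_ids):
    """Entry k-1 of the returned list is the fraction of probes whose correct
    gallery identity is ranked within the top k.
    similarity: (n_probes, n_gallery) score matrix, higher = more similar;
    true_ids: gallery column index of the correct match for each probe."""
    ranks = []
    for scores, true_id in zip(similarity, true_ids):
        order = np.argsort(scores)[::-1]            # best match first
        ranks.append(int(np.where(order == true_id)[0][0]))
    n = len(ranks)
    return [sum(r <= k for r in ranks) / n for k in range(similarity.shape[1])]
```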
APA, Harvard, Vancouver, ISO, and other styles
37

Allen, Joshua W. (Joshua William). "Predictive chemical kinetics : enabling automatic mechanism generation and evaluation." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/81677.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references.
The use of petroleum-based fuels for transportation accounted for more than 25% of the total energy consumed in 2012, both in the United States and throughout the world. The finite nature of world oil reserves and the effects of burning petroleum-based fuels on the world's climate have motivated efforts to develop alternative, renewable fuels. A major category of alternative fuels is biofuels, which potentially include a wide variety of hydrocarbons, alcohols, aldehydes, ketones, ethers, esters, etc. To select the best species for use as fuel, we need to know if it burns cleanly, controllably, and efficiently. This is especially important when considering novel engine technologies, which are often very sensitive to fuel chemistry. The large number of candidate fuels and the high expense of experimental engine tests motivates the use of predictive theoretical methods to help quickly identify the most promising candidates. This thesis presents several contributions in the areas of predictive chemical kinetics and automatic mechanism generation, particularly in the area of reaction kinetics. First, the accuracy of several methods of automatic, high-throughput estimation of reaction rates are evaluated by comparison to a test set obtained from the NIST Chemical Kinetics Database. The methods considered, including the classic Evans-Polanyi correlation, the "rate rules" method currently used in the RMG software, and a new method based on group contribution theory, are shown to not yet obtain the order-of-magnitude accuracy desired for automatic mechanism generation. Second, a method of very accurate computation of bimolecular reaction rates using ring polymer molecular dynamics (RPMD) is presented. RPMD rate theory enables the incorporation of quantum effects (zero-point energy and tunneling) in reaction kinetics using classical molecular dynamics trajectories in an extended phase space. 
A general-purpose software package named RPMD-rate was developed for conducting such calculations, and the accuracy of this method was demonstrated by investigating the kinetics and kinetic isotope effect of the reaction OH + CH4 --> CH3 + H2O. Third, a general framework for incorporating pressure dependence in thermal unimolecular reactions, which require an inert third body to provide or remove the energy needed for reaction via bimolecular collisions, was developed. Within this framework, several methods of reducing the full, master equation-based model to a set of phenomenological rate coefficients k(T, P) are compared using the chemically-activated reaction of acetyl radical with oxygen as a case study, and recommendations are made as to when each method should be used. This also resulted in a general-purpose code for calculating pressure-dependent kinetics, which was applied to developing an ab initio model of the reaction of the Criegee biradical CH2OO with small carbonyls that reproduces recent experimental results. Finally, the ideas and techniques of estimating reaction kinetics are brought together for the development of a detailed kinetics model of the oxidation of diisopropyl ketone (DIPK), a candidate biofuel representative of species produced from cellulosic biomass conversion using endophytic fungi. The model is evaluated against three experiments covering a range of temperatures, pressures, and oxygen concentrations to show its strengths and weaknesses. Our ability to automatically generate this model and systematically improve its parameters without fitting to the experimental results demonstrates the validity and usefulness of the predictive chemical kinetics paradigm. These contributions are available as part of the Reaction Mechanism Generator (RMG) software package.
by Joshua W. Allen.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
38

Vana, Sudha. "Simulation Evaluation of Measurement-based Automatic Dependent Surveillance-Broadcast." Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1388753168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Nordling, Love. "Evaluation of Generative Neural Networks for Automatic Defect Detection." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428411.

Full text
Abstract:
Quality assurance of mass produced items is prone to errors when performed manually by a human. This has created a need for an automated solution. The emergence of deep neural networks has created systems that can be trained to classify defect from non-defect items. However, to alleviate the need for the large amounts of manual labeling required for most classification networks, several unsupervised methods have been used. This report evaluates the use of a deep autoencoder for unsupervised defect detection. Furthermore, the use of an autoencoder is compared to applying inpainting and a generative adversarial network (GAN) for the same task. The report finds that the autoencoder used could find the largest of the defects tested but not the smaller ones. It is also shown that neither the use of inpainting nor a GAN improved on the autoencoder result. It is of note, however, that these were naive implementations of inpainting and the GAN, lacking some state-of-the-art aspects.
APA, Harvard, Vancouver, ISO, and other styles
40

Gutmann, Franziska [Verfasser], and Ralf [Akademischer Betreuer] Brand. "Automatic evaluations of exercising / Franziska Antoniewicz ; Betreuer: Ralf Brand." Potsdam : Universität Potsdam, 2016. http://d-nb.info/1218400730/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Antoniewicz, Franziska [Verfasser], and Ralf [Akademischer Betreuer] Brand. "Automatic evaluations of exercising / Franziska Antoniewicz ; Betreuer: Ralf Brand." Potsdam : Universität Potsdam, 2016. http://d-nb.info/1218400730/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ahmedt, Aristizabal David Esteban. "Multi-modal analysis for the automatic evaluation of epilepsy." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/132537/1/David_Ahmedt%20Aristizabal_Thesis.pdf.

Full text
Abstract:
Motion recognition technology is proposed to support neurologists in the study of patients' behaviour during epileptic seizures. This system can provide clues on the sub-type of epilepsy that patients have, identify unusual manifestations that require further investigation, and better characterise the temporal evolution of seizures, from their onset through to termination. The incorporation of quantitative methods would assist in developing and formulating a diagnosis in situations where clinical expertise is unavailable. This research provides important supplementary and unbiased data to assist with seizure localization. It is a vital complementary resource in the era of seizure-based detection through electrophysiological data.
APA, Harvard, Vancouver, ISO, and other styles
43

Kovach, Bernard J. "Field Office Automation And Evaluation." NSUWorks, 1992. http://nsuworks.nova.edu/gscis_etd/646.

Full text
Abstract:
Pinkerton Security services has offices throughout the United States, Canada, and the United Kingdom from which security guards are dispatched to client sites. Only a few of the offices are semi-automated with the rest dependent upon the manual collection and transmission of the data to corporate Headquarters in Van Nuys, California. Headquarters processes the data and disburses the payroll checks and invoices. The manual effort of dispatching security guards and recording timekeeping in the field offices has resulted in poor quality and untimely data. Competing firms that have automated these processes have a distinct marketing edge over Pinkerton. The procedure to develop an automated system for Pinkerton began with a comprehensive review of Pinkerton's information processes. The review included visits to several offices and the formation of an operations committee responsible for the detailed design of the new system. Several meetings were held to define field and corporate data requirements. The efforts produced a comprehensive relational data base system called PARS (Pinkerton Automated Resource System). The new Pinkerton security system is a state-of-the-art software system for the security industry. The plan was to install the system in 130 Pinkerton security offices nationwide. Once implementation began, the problem facing Pinkerton was whether Pinkerton was realizing the full benefits of automation and whether PARS was meeting the company's goals and objectives. The purpose of this study was to conduct an investigation into the impact of PARS upon information processes in the first three offices that received the system to determine if PARS was functioning as expected. System deficiencies were to be identified and a list of recommended improvements developed to ensure Pinkerton received the full benefits of automation.
The first phase of the evaluation consisted of a detailed review of Pinkerton, the company's information problems, and the proposed solutions through automation. Using the three offices as a case study, a complete methodology was developed to formally address the information requirements of Pinkerton. Problems the offices had prior to PARS were identified, the automated methods that were proposed and implemented to solve the problems were discussed, and the effect of automation upon office operations was analyzed. The second component of this study consisted of questionnaires that were directed toward the users of the system. The questionnaires were structured to capture the users' perceptions of the effectiveness of the PARS system. Results were summarized by function, by question, and by objective and the findings analyzed. Statistics of various field and corporate processes before and after PARS were also captured to provide an objective measure of the impact of PARS. The results of the case study analysis indicated that through PARS the three offices had resolved their prior information processing problems. The implementation of PARS forced procedural standards and data integrity controls into each office. Further analysis of the findings indicated PARS had achieved the field offices' goals of reducing the incorrect payment of wages, reducing unbillable overtime, improving payroll accuracy, improving billing accuracy, and improving client service. PARS had also permitted staff reductions in the case study offices. The potential savings to Pinkerton in this area alone could approach $3.2 million per year. The results from the questionnaires indicated a high acceptance level by users of the system. Ninety-two percent of the users said PARS provided useful and timely reports and 100% felt PARS supported the company's business objectives.
The users returned an 85% positive response when asked if the system handled changing information requirements effectively and 83% agreed that the system improved the productivity of the office. All offices reported a reduction in paper flow and every user felt they could use PARS effectively in 30 to 60 days. The recommendations derived from the study were to continue the implementation of PARS in the other security offices, upgrade the system documentation, and to resolve the outstanding hardware and software issues. Also, data transmission problems between the field offices and corporate should be corrected so the data could be received and processed in a more reliable and timely manner. Other system features were also requested that would provide the users with additional capabilities. In summary, the results indicated that PARS had been successful in meeting the system's goals and objectives and automation had solved many of the problems in the field offices. Pinkerton should receive the full benefits of automating the field offices through the use of PARS.
APA, Harvard, Vancouver, ISO, and other styles
44

Muellegger, Markus. "Evaluation of Compilers for MATLAB- to C-Code Translation." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-1149.

Full text
Abstract:

MATLAB to C code translation is of increasing interest for science and industry. In detail, two MATLAB to C compilers, denoted as Matlab to C Synthesis (MCS) and Embedded MATLAB C (EMLC), have been studied. Three aspects of automatic code generation have been studied: 1) generation of reference code; 2) target code generation; 3) floating-to-fixed-point conversion. The benchmark code used aimed to cover simple up to more complex code, viewed from a theoretical as well as practical perspective. A fixed-point filter implementation is demonstrated. EMLC and MCS offer several fixed-point design tools. MCS provides better support for C algorithm reference generation, by covering a larger set of the MATLAB language as such. Code generated from EMLC is more suitable for direct target implementation. As a result of the need to guarantee that the EMLC-generated C code allocates memory only statically, MATLAB becomes more constrained by EMLC. Functional correctness was generally achieved for each automatic translation.

APA, Harvard, Vancouver, ISO, and other styles
45

Kuwornu, Delali Korku. "Virtual commissioning of automatic machines: performance evaluation and robotic integration." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This thesis pertains to the virtual commissioning process executed for an automatic machine, the E-continuous for TELEROBOT, to test the controller software of the machine's motion controller. It also looks at how this virtual commissioning process and its results affected the real machine, while focusing on the benefits of the particular platform used and its ability to capture all necessary behavior of the real machine model in the virtual one. The process was divided into three main stages, and the model was passed through each stage to obtain the final model, which was tested and its results reported. The integration of industrial manipulators into the virtual environment, mainly for experimental analysis and virtual commissioning, was also examined to obtain data on the feasibility of these robots for specific functions. Finally, the future possibilities of virtual commissioning, and what could yet be achieved, are discussed.
APA, Harvard, Vancouver, ISO, and other styles
46

Min, Menglei. "Evaluation and Implementation for Pushing Automatic Updates to IoT Devices." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-31483.

Full text
Abstract:
In recent years, the Internet of Things has developed rapidly and has now penetrated into human life and industrial production. It is speculated that the Internet of Things will become ubiquitous in the future, which will bring a series of problems. First, the large number of things will lead to operating system and software updates consuming a lot of manpower and resources. Another problem is that the Internet of Things faces security issues: in recent years, attacks against the Internet of Things, and the tools for carrying them out, have increased considerably. Therefore, achieving secure automatic updates on the Internet of Things is essential. This report is built around such an automatic update system for the Internet of Things. It first elaborates the main motivation for the problem and analyses three existing related works and three security methods for communication. Combining the results of the analysis, it then proposes a secure automatic update solution: the manager and the devices connect and mutually authenticate in real time, while the manager regularly checks a database for a new application version. When the administrator uploads a new version, the manager downloads it and sends it to all devices; each device installs it and finally restarts itself. Next, the report describes in detail how this system was implemented and evaluates it. Finally, the report summarizes the work and introduces future work.
APA, Harvard, Vancouver, ISO, and other styles
47

Lagerstedt, Jennie. "EVALUATION OF AN AUTOMATIC SYSTEM FOR MEASURING HUMAN ECHOLOCATION ABILITY." Thesis, Stockholms universitet, Psykologiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-152197.

Full text
Abstract:
To measure thresholds of human echolocation ability, researchers need an automated system capable of presenting a large set of stimuli. Previous studies have used recorded sounds or simulated sounds, allowing strict stimulus control at the expense of ecological validity. The purpose of this experiment was to test an automated system that uses real objects. Fifteen participants tried the system; the task was to detect the presence of a disc using only sound reflections. Detection thresholds as a function of distance to the reflecting object were determined using an adaptive staircase method. The mean threshold across participants was 1.7 m, which is in line with previous studies using earphone-presented sounds. Fairly large variability across individuals was observed; two individuals performed very well, with thresholds of > 2.5 m. Overall, the present experiment shows that the automated measuring system works well for assessing human echolocation ability.
APA, Harvard, Vancouver, ISO, and other styles
48

Savage, Josh. "The calibration and evaluation of speed-dependent automatic zooming interfaces." Thesis, University of Canterbury. Computer Science and Software Engineering, 2004. http://hdl.handle.net/10092/9616.

Full text
Abstract:
Speed-Dependent Automatic Zooming (SDAZ) is an exciting new navigation technique that couples the user's rate of motion through an information space with the zoom level. The faster a user scrolls in the document, the 'higher' they fly above the work surface. At present, there are few guidelines for the calibration of SDAZ. Previous work by Igarashi & Hinckley (2000) and Cockburn & Savage (2003) fails to give values for predefined constants governing their automatic zooming behaviour. The absence of formal guidelines means that SDAZ implementers are forced to adjust the properties of the automatic zooming by trial and error. This thesis aids calibration by identifying the low-level components of SDAZ. Base calibration settings for these components are then established using a formal evaluation recording participants' comfortable scrolling rates at different magnification levels. To ease our experiments with SDAZ calibration, we implemented a new system that provides a comprehensive graphical user interface for customising SDAZ behaviour. The system was designed to simplify future extensions---for example new components such as interaction techniques and methods to render information can easily be added with little modification to existing code. This system was used to configure three SDAZ interfaces: a text document browser, a flat map browser and a multi-scale globe browser. The three calibrated SDAZ interfaces were evaluated against three equivalent interfaces with rate-based scrolling and manual zooming. The evaluation showed that SDAZ is 10% faster for acquiring targets in a map than rate-based scrolling with manual zooming, and SDAZ is 4% faster for acquiring targets in a text document. Participants also preferred using automatic zooming over manual zooming. No difference was found for the globe browser for acquisition time or preference. 
However, in all interfaces participants commented that automatic zooming was less physically and mentally draining than manual zooming.
APA, Harvard, Vancouver, ISO, and other styles
49

Dahl, Ernest A. "TWARSES The Two Wire Automatic Remote Sensing and Evaluation System." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/608553.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California
The Two Wire Automatic Remote Sensing and Evaluation System (TWARSES) automatically transmits and evaluates information (data) from remote sensors on a common two wire buss. In addition the system presents automatic evaluation and alarms, which provide both location data and sensor readout data of the monitored area. This system is a stand-alone modular system in which a common two wire line installed bow-to-stern and top-to-bottom, connects, integrates, evaluates, and powers a multiplicity of sensors. The United States Navy uses this system to provide safety and survivability by monitoring environmental gases, liquid levels, and power, temperature, and humidity levels on ships and in office buildings. The automatic monitoring system operates in a manner similar to an automatic, multisubscriber, party-line telephone system. The system is controlled by the Scanner/Display unit which interrogates each of the 150 possible sensors according to the program stored in a microprocessor. This patented system provides a separate address for each sensor transponder, permitting all of the transponders to be simply connected in parallel across a common, twisted pair transmission line. The interrogating signal is also used to provide power (6V - 2mA) for the sensor transponders and their associated sensors. This further simplifies the system by eliminating the need for a separate source of power at each sensor location. Each sensor is interrogated with a 15-bit sequence which specifies: (1) the address of the sensor which is to reply, (2) the parameter to be reported (e.g. voltage, temperature, humidity, etc.), and (3) the desired precision (which sets the length of the reply). The interrogation is transmitted as a frequency-shift-keyed signal. Among the various types of interrogation signals which could be used (AM, FM, etc.) frequency shift-keying (FSK) was selected because:
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Yingcai. "Interactive editing and automatic evaluation of direct volume rendered images /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20WU.

Full text
APA, Harvard, Vancouver, ISO, and other styles