
Dissertations / Theses on the topic 'Automaticky generator dat'


Consult the top 50 dissertations / theses for your research on the topic 'Automaticky generator dat.'


1

Naňo, Andrej. "Automatické generování testovacích dat informačních systémů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445520.

Full text
Abstract:
ISAGEN is a tool for the automatic generation of structurally complex test inputs that imitate real communication in the context of modern information systems. Complex, typically tree-structured data currently represents the standard means of transmitting information between nodes in distributed information systems. The automatic generator ISAGEN is founded on the methodology of data-driven testing and uses concrete data from the production environment as the primary characteristic and specification that guides the generation of new, similar data for test cases satisfying given combinatorial adequacy criteria. The main contribution of this thesis is a comprehensive proposal of automated data generation techniques together with an implementation that demonstrates their usage. The created solution enables testers to create more relevant testing data, representing production-like communication in information systems.
APA, Harvard, Vancouver, ISO, and other styles
2

Offutt, Andrew Jefferson VI. "Automatic test data generation." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/9167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ege, Raimund K. "Automatic generation of interfaces using constraints. /." Full text open access at:, 1987. http://content.ohsu.edu/u?/etd,144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kupferschmidt, Benjamin, and Albert Berdugo. "DESIGNING AN AUTOMATIC FORMAT GENERATOR FOR A NETWORK DATA ACQUISITION SYSTEM." International Foundation for Telemetering, 2006. http://hdl.handle.net/10150/604157.

Full text
Abstract:
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California
In most current PCM-based telemetry systems, an instrumentation engineer manually creates the sampling format. This time-consuming and tedious process typically involves manually placing each measurement into the format at the proper sampling rate. The telemetry industry is now moving towards Ethernet-based systems comprised of multiple autonomous data acquisition units, which share a single global time source. The architecture of these network systems greatly simplifies the task of implementing an automatic format generator. Automatic format generation eliminates much of the effort required to create a sampling format because the instrumentation engineer only has to specify the desired sampling rate for each measurement. The system handles the task of organizing the format to comply with the specified sampling rates. This paper examines the issues involved in designing an automatic format generator for a network data acquisition system.
APA, Harvard, Vancouver, ISO, and other styles
5

Holmes, Stephen Terry. "Heuristic generation of software test data." Thesis, University of South Wales, 1996. https://pure.southwales.ac.uk/en/studentthesis/heuristic-generation-of-software-test-data(aa20a88e-32a5-4958-9055-7abc11fbc541).html.

Full text
Abstract:
Incorrect system operation can, at worst, be life threatening or financially devastating. Software testing is a destructive process that aims to reveal software faults. Selection of good test data can be extremely difficult. To ease and assist test data selection, several test data generators have emerged that use a diverse range of approaches. Adaptive test data generators use existing test data to produce further effective test data. It has been observed that there is little empirical data on the adaptive approach. This thesis presents the Heuristically Aided Testing System (HATS), which is an adaptive test data generator that uses several heuristics. A heuristic embodies a test data generation technique. Four heuristics have been developed. The first heuristic, Direct Assignment, generates test data for conditions involving an input variable and a constant. The Alternating Variable heuristic determines a promising direction to modify input variables, then takes ever-increasing steps in this direction. The Linear Predictor heuristic performs linear extrapolations on input variables. The final heuristic, Boundary Follower, uses input domain boundaries as a guide to locate hard-to-find solutions. Several Ada procedures have been tested with HATS: a quadratic equation solver, a triangle classifier, a remainder calculator and a linear search. Collectively they present some common and rare test data generation problems. The weakest testing criterion HATS has attempted to satisfy is all branches. Stronger, mutation-based criteria have been used on two of the procedures. HATS has achieved complete branch coverage on each procedure, except where there is a higher level of control flow complexity combined with non-linear input variables. Both branch and mutation testing criteria have enabled a better understanding of the test data generation problems and contributed to the evolution of heuristics and the development of new heuristics.
This thesis contributes the following to knowledge: Empirical data on the adaptive heuristic approach to test data generation. How input domain boundaries can be used as guidance for a heuristic. An effective heuristic termination technique based on the heuristic's progress. A comparison of HATS with random testing. Properties of the test software that indicate when HATS will take less effort than random testing are identified.
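The Alternating Variable heuristic the abstract describes (probe each input variable, then accelerate in any improving direction with ever-increasing steps) can be sketched roughly as below; the cost function, doubling rule, and termination conditions are illustrative assumptions, not HATS internals.

```python
def alternating_variable_search(cost, x, max_iter=1000):
    """Minimise cost(x) by adjusting one input variable at a time:
    probe each variable in both directions, then keep moving in any
    improving direction with ever-increasing (doubling) steps."""
    x = list(x)
    best = cost(x)
    for _ in range(max_iter):
        if best == 0:                         # target condition satisfied
            return x, best
        improved = False
        for i in range(len(x)):
            for direction in (1, -1):
                step = 1
                trial = list(x)
                trial[i] += direction * step  # exploratory move
                while cost(trial) < best:     # pattern (accelerating) moves
                    x, best = trial, cost(trial)
                    step *= 2
                    trial = list(x)
                    trial[i] += direction * step
                    improved = True
        if not improved:
            return x, best                    # local optimum for all variables
    return x, best
```

For example, to cover a branch guarded by `a == 2*b + 10`, a branch-distance cost such as `abs(a - 2*b - 10)` is driven to zero.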
APA, Harvard, Vancouver, ISO, and other styles
6

Fawzy, Kamel Menatalla Ashraf. "A Method for Automatic Generation of Metadata." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177400.

Full text
Abstract:
The thesis introduces a study about the different ways of generating metadata and implementing them in web pages. Metadata are often called data about data. In web pages, metadata holds the information that might include keywords, a description, author, and other information that helps the user to describe and explain an information resource in order to use, manage and retrieve data easily. Since web pages depend significantly on metadata to increase the traffic in search engines, studying the different methods of generation of metadata is an important issue. Generation of metadata can be made both manually and automatically. The aim of the research is to show the results of applying different methods including a new proposed method of generating automatic metadata using a qualitative study. The goal of the research is to show the enhancement achieved by applying the new proposed method of generating metadata automatically that are implemented in web pages.
APA, Harvard, Vancouver, ISO, and other styles
7

Alam, Mohammad Saquib. "Automatic generation of critical driving scenarios." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288886.

Full text
Abstract:
Despite the tremendous development in the autonomous vehicle industry, the tools for systematic testing are still lacking. Real-world testing is time-consuming and, above all, dangerous. There is also a lack of a framework to automatically generate critical scenarios for testing autonomous vehicles. This thesis develops a general framework for end-to-end testing of an autonomous vehicle in a simulated environment. The framework provides the capability to generate and execute a large number of traffic scenarios in a reliable manner. Two methods are proposed to compute the criticality of a traffic scenario. A so-called critical value is used to learn the probability distribution of the critical scenario iteratively. The obtained probability distribution can be used to sample critical scenarios for testing and for benchmarking a different autonomous vehicle. To describe the static and dynamic participants of the urban traffic scenarios executed by the simulator, the OpenDRIVE and OpenSCENARIO standards are used.
APA, Harvard, Vancouver, ISO, and other styles
8

Kupferschmidt, Benjamin, and Eric Pesciotta. "Automatic Format Generation Techniques for Network Data Acquisition Systems." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606089.

Full text
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Configuring a modern, high-performance data acquisition system is typically a very time-consuming and complex process. Any enhancement to the data acquisition setup software that can reduce the amount of time needed to configure the system is extremely useful. Automatic format generation is one of the most useful enhancements to a data acquisition setup application. By using automatic format generation, an instrumentation engineer can significantly reduce the amount of time spent configuring the system while simultaneously gaining much greater flexibility in creating sampling formats. This paper discusses several techniques that can be used to generate sampling formats automatically while making highly efficient use of the system's bandwidth. This allows the user to obtain most of the benefits of a hand-tuned, manually created format without spending excessive time creating it. One of the primary techniques discussed is an enhancement to the commonly used power-of-two rule for selecting sampling rates, which allows the system to create formats that use a wider variety of rates. The system is also able to handle groups of related measurements that must follow each other sequentially in the sampling format. The paper also covers a packet-based formatting scheme that organizes measurements by common sampling rates: each packet contains a set of measurements that are sampled at a particular rate. A key benefit of using an automatic format generation system with this format is the optimization of sampling rates to achieve the best possible match for each measurement's desired sampling rate.
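The classic power-of-two rule mentioned above can be sketched as follows: round each desired rate up to base·2^k so that every measurement's sampling period divides the major frame evenly. The base rate and the round-up direction are illustrative assumptions, not the paper's enhanced rule.

```python
import math

def power_of_two_rate(desired, base=1.0):
    """Round a desired sampling rate (samples/s) up to base * 2**k,
    the simple power-of-two rule: every resulting rate then divides
    evenly into a frame built on the fastest rate."""
    if desired <= 0:
        raise ValueError("sampling rate must be positive")
    k = max(0, math.ceil(math.log2(desired / base)))
    return base * 2 ** k
```

A measurement requested at 100 samples/s would thus be allocated 128 samples/s, trading some oversampling for a trivially schedulable format.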
APA, Harvard, Vancouver, ISO, and other styles
9

Yu, Xingjiang. "OSM-Based Automatic Road Network Geometries Generation on Unity." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264903.

Full text
Abstract:
Nowadays, while 3D city reconstruction is widely used in important areas like urban design and traffic simulation, frameworks that can efficiently model large-scale road networks from real-world data are of high interest. However, the diversity of road network forms is still a challenge for automatic reconstruction, and the information extracted from the input data largely determines the final displayed result. In this project, OpenStreetMap data is chosen as the only input to a three-stage method that efficiently generates a geometric model of the associated road network in its varied forms. The method is applied to real-world city datasets of different scales; the generated models are rendered and presented on the Unity3D platform and compared with the original road networks in terms of both quality and topology. The results suggest that our method can reconstruct the features of the original road networks in common cases such as three-way intersections, four-way intersections, and roundabouts, while consuming much less time than manual modeling of a large-scale urban scene. The framework contributes an auxiliary tool for quick, multi-purpose reconstruction of city traffic systems, though there is still room to improve the method's universality and modeling quality.
APA, Harvard, Vancouver, ISO, and other styles
10

Zhou, Yu. "AUTOMATIC GENERATION OF WEB APPLICATIONS AND MANAGEMENT SYSTEM." CSUSB ScholarWorks, 2017. https://scholarworks.lib.csusb.edu/etd/434.

Full text
Abstract:
One of the major difficulties in web application design is the tediousness of constructing new web pages from scratch. In traditional web application projects, designers usually design and implement the project step by step, in detail. My project is called "automatic generation of web applications and management system." This web application generator can generate generic and customized web applications based on software engineering theories. A flow-driven methodology, using Business Process Model and Notation (BPMN), drives the project. The modules of the project are: database, web server, HTML page, functionality, financial analysis model, customer, and BPMN. BPMN is the most important part of the entire project, since most of the work and data flow depends on the BPMN flow engine. There are two ways to use the system. One way is to go to the main page, choose a web app template, and click the generate button. The other way is for customers to request special orders; the project then provides suitable software development methodologies to follow. After a software development life cycle, customers receive their required product.
APA, Harvard, Vancouver, ISO, and other styles
11

Wang, Wei. "Automatic Chinese calligraphic font generation with machine learning technology." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Mazidi, Karen. "Infusing Automatic Question Generation with Natural Language Understanding." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc955021/.

Full text
Abstract:
Automatically generating questions from text for educational purposes is an active research area in natural language processing. The automatic question generation system accompanying this dissertation is MARGE, which is a recursive acronym for: MARGE automatically reads, generates and evaluates. MARGE generates questions from both individual sentences and the passage as a whole, and is the first question generation system to successfully generate meaningful questions from textual units larger than a sentence. Prior work in automatic question generation from text treats a sentence as a string of constituents to be rearranged into as many questions as allowed by English grammar rules. Consequently, such systems overgenerate and create mainly trivial questions. Further, none of these systems to date has been able to automatically determine which questions are meaningful and which are trivial. This is because the research focus has been placed on NLG at the expense of NLU. In contrast, the work presented here infuses the question generation process with natural language understanding. From the input text, MARGE creates a meaning analysis representation for each sentence in a passage via the DeconStructure algorithm presented in this work. Questions are generated from sentence meaning analysis representations using templates. The generated questions are automatically evaluated for quality and importance via a ranking algorithm.
APA, Harvard, Vancouver, ISO, and other styles
13

Spedicati, Marco. "Automatic generation of annotated datasets for industrial OCR." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17385/.

Full text
Abstract:
Machine learning algorithms need a lot of data, both for training and for testing. However, appropriate data are not always available. This document presents the work carried out at Datalogic USA's laboratories in Eugene, Oregon, USA, to create data for industrial Optical Character Recognition (OCR) applications, and describes the automatic system that has been built. The images are created by printing and capturing strings of a variable layout, and are then ground-truthed automatically in a later stage. Two datasets are generated, one of which is employed to assess a network's performance.
APA, Harvard, Vancouver, ISO, and other styles
14

Sthamer, Harmen-Hinrich. "The automatic generation of software test data using genetic algorithms." Thesis, University of South Wales, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320726.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Alshraideh, Mohammad. "Use of program and data-specific heuristics for automatic software test data generation." Thesis, University of Hull, 2007. http://hydra.hull.ac.uk/resources/hull:12387.

Full text
Abstract:
The application of heuristic search techniques, such as genetic algorithms, to the problem of automatically generating software test data has been a growing interest for many researchers in recent years. The problem tackled by this thesis is the development of heuristics for test data search for a class of test data generation problems that could not be solved prior to the work done in this thesis because of a lack of an informative cost function. Prior to this thesis, work in applying search techniques to structural test data generation was largely limited to numeric test data and in particular, this left open the problem of generating string test data. Some potential string cost functions and corresponding search operators are presented in this thesis. For string equality, an adaptation of the binary Hamming distance is considered, together with two new string specific match cost functions. New cost functions for string ordering are also defined. For string equality, a version of the edit distance cost function with fine-grained costs based on the difference in character ordinal values was found to be the most effective in an empirical study. A second problem tackled in this thesis is the problem of generating test data for programs whose coverage criterion cost function is locally constant. This arises because the computation produced by many programs leads to a loss of information. The use of flag variables, for example, can lead to information loss. Consequently, conventional instrumentation added to a program receives constant or almost constant input and hence the search receives very little guidance and will often fail to find test data. The approach adopted in this thesis is to exploit the structure and behaviour of the computation from the input values to the test goal, the usual instrumentation point. The new technique depends on introducing program data-state scarcity as an additional search goal. 
The search is guided by a new fitness function made up of two parts, one depending on the branch distance of the test goal, the other depending on the diversity of the data-states produced during execution of the program under test. In addition to the program data-state, the program operations, in the form of program-specific operations, can be used to aid the generation of test data. The use of program-specific operators is demonstrated for strings, and an empirical investigation showed a fivefold increase in performance. This technique can also be generalised to other data types. An empirical investigation of the use of program-specific search operators combined with a data-state scarcity search for flag problems showed a threefold increase in performance.
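A fine-grained edit-distance cost of the kind the abstract credits as most effective for string equality might look like the sketch below: substitutions cost the difference in character ordinal values, giving the search a smooth gradient toward equality. The flat insert/delete weight is an illustrative assumption, not the thesis's exact calibration.

```python
def ordinal_edit_distance(s, t, indel_cost=128):
    """Levenshtein-style distance where substituting two characters
    costs the absolute difference of their ordinal values, so the
    cost shrinks smoothly as candidate strings approach the target."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = abs(ord(s[i - 1]) - ord(t[j - 1]))
            d[i][j] = min(d[i - 1][j] + indel_cost,      # delete s[i-1]
                          d[i][j - 1] + indel_cost,      # insert t[j-1]
                          d[i - 1][j - 1] + sub)         # substitute
    return d[m][n]
```

Unlike a plain Hamming or binary edit distance, `"abd"` scores much closer to `"abc"` than `"abz"` does, which is exactly the guidance a heuristic search needs.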
APA, Harvard, Vancouver, ISO, and other styles
16

Karlapudi, Janakiram. "Analysis on automatic generation of BEPS model from BIM model." Verlag der Technischen Universität Graz, 2020. https://tud.qucosa.de/id/qucosa%3A73547.

Full text
Abstract:
The interlinking of enriched BIM data to Building Energy Performance Simulation (BEPS) models facilitates the data flow throughout the building life cycle. This seamless data transfer from BIM to BEPS models increases design efficiency. To investigate the interoperability between these models, this paper analyses different data transfer methodologies along with input data requirements for the simulation process. Based on the analysed knowledge, a methodology is adopted and demonstrated to identify the quality of the data transfer process. Furthermore, discussions are provided on identified efficiency gaps and future work.
APA, Harvard, Vancouver, ISO, and other styles
17

Cocosco, Cristian A. "Automatic generation of training data for brain tissue classification from MRI." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33965.

Full text
Abstract:
A fully automatic procedure for brain tissue classification from 3D magnetic resonance head images (MRI) is described. The procedure uses feature space proximity measures, and does not make any assumptions about the tissue intensity data distributions. As opposed to existing methods for automatic tissue classification, which are often sensitive to anatomical variability and pathology, the proposed procedure is robust against morphological deviations from the model. A novel method for automatic generation of classifier training samples, using a minimum spanning tree graph-theoretic approach, is proposed in this thesis. Starting from a set of samples generated from prior tissue probability maps (the "model") in a standard, brain-based coordinate system ("stereotaxic space"), the method reduces the fraction of incorrectly labelled samples in this set from 25% down to 2%. The corrected set of samples is then used by a supervised classifier for classifying the entire 3D image. Validation experiments were performed on both real and simulated MRI data; the kappa similarity measure increased from 0.90 to 0.95.
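The minimum-spanning-tree idea — prune training samples whose MST neighbourhood disagrees with their label — can be sketched generically as below; the cut threshold and the majority-label rule are illustrative assumptions, not the thesis's exact graph-theoretic criterion.

```python
import math
from collections import Counter

def mst_prune(points, labels, edge_threshold):
    """Build a minimum spanning tree over all samples (Prim's algorithm),
    cut edges longer than `edge_threshold`, and keep only samples whose
    label matches the majority label of their resulting cluster."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Prim's algorithm: grow the tree outward from sample 0.
    best = {v: (dist(0, v), 0) for v in range(1, n)}
    edges = []
    while best:
        v = min(best, key=lambda u: best[u][0])
        w, u = best.pop(v)
        edges.append((w, u, v))
        for x in best:
            d = dist(v, x)
            if d < best[x][0]:
                best[x] = (d, v)
    # Union-find over the edges short enough to keep.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for w, u, v in edges:
        if w <= edge_threshold:
            parent[find(u)] = find(v)
    # Keep samples agreeing with their cluster's majority label.
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    keep = []
    for members in clusters.values():
        majority = Counter(labels[i] for i in members).most_common(1)[0][0]
        keep.extend(i for i in members if labels[i] == majority)
    return sorted(keep)
```

On two well-separated clusters with one mislabelled point, the mislabelled sample ends up in the "wrong" cluster and is dropped, which is the spirit of the 25% → 2% reduction reported above.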
APA, Harvard, Vancouver, ISO, and other styles
18

Nanda, Yishu. "The automatic computer generation of process flow diagrams from topological data." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Yang, Xile. "Automatic software test data generation from Z specifications using evolutionary algorithms." Thesis, University of South Wales, 1998. https://pure.southwales.ac.uk/en/studentthesis/automatic-software-test-data-generation-from-z-specifications-using-evolutionary-algorithms(fd661850-9e09-4d28-a857-d551612ccc09).html.

Full text
Abstract:
Test data sets have been automatically generated for both numerical and string data types to test the functionality of simple procedures and a good-sized UNIX filing system from their Z specifications. Different structural properties of software systems are covered, such as arithmetic expressions, existential and universal quantifiers, set comprehension, union, intersection and difference, etc. A CASE tool, ZTEST, has been implemented to automatically generate test data sets. Test cases can be derived automatically from the functionality of the Z specifications. The test data sets generated from the test cases check the behaviour of the software systems for both valid and invalid inputs. Test cases are generated for the four boundary values and an intermediate value of the input search domain. For integer input variables, high-quality test data sets can be generated on the search domain boundary and on each side of the boundary for both valid and invalid tests. Adaptive methods such as Genetic Algorithms and Simulated Annealing are used to generate test data sets from the test cases. The GA is chosen as the default test data generator of ZTEST, and direct assignment is used where possible to make the system more efficient. Z is a formal language that can be used to precisely describe the functionality of computer systems; the test data generation method can therefore be applied widely, and will be very useful for systems developed from Z specifications.
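The boundary-value selection the abstract describes (values on the domain boundary, just inside it, just outside it, and an intermediate point) reduces to a few lines for an integer input domain; the exact points chosen here are an illustrative assumption.

```python
def boundary_test_values(lo, hi):
    """For an integer input domain [lo, hi], return test values on the
    boundary and just inside it (valid), an intermediate point, and
    values just outside the boundary (invalid)."""
    if lo >= hi:
        raise ValueError("need a non-trivial domain lo < hi")
    return {
        "valid": [lo, lo + 1, (lo + hi) // 2, hi - 1, hi],
        "invalid": [lo - 1, hi + 1],
    }
```

Invalid values exercise the specification's error behaviour, matching the abstract's "both valid and invalid tests".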
APA, Harvard, Vancouver, ISO, and other styles
20

Lundberg, Gustav. "Automatic map generation from nation-wide data sources using deep learning." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170759.

Full text
Abstract:
The last decade has seen great advances within the field of artificial intelligence. One of the most noteworthy areas is that of deep learning, which is nowadays used in everything from self-driving cars to automated cancer screening. During the same time, the amount of spatial data encompassing not only two but three dimensions has also grown, and whole cities and countries are being scanned. Combining these two technological advances enables the creation of detailed maps with a multitude of applications, civilian as well as military. This thesis aims at combining two data sources covering most of Sweden, laser data from LiDAR scans and surface models from aerial images, with deep learning to create maps of the terrain. The target is to learn a simplified version of orienteering maps, as these are created with high precision by experienced map makers and represent how easy or hard it would be to traverse a given area on foot. The performance on different types of terrain is measured, and it is found that open land and larger bodies of water are identified at a high rate, while trails are hard to recognize. It is further researched how the different densities found in the source data affect the performance of the models: some terrain types, trails for instance, benefit from higher-density data, while other features, like roads and buildings, are predicted with higher accuracy from lower-density data. Finally, the certainty of the predictions is discussed and visualised by measuring the average entropy of predictions in an area. These visualisations highlight that although the predictions are far from perfect, the models are more certain about their predictions when they are correct than when they are not.
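The average-entropy uncertainty score used for the visualisations can be computed generically as below; this is a standard Shannon-entropy sketch, not the thesis's code.

```python
import math

def mean_prediction_entropy(prob_maps):
    """Average Shannon entropy (nats) over a collection of per-pixel
    class-probability vectors: 0 for fully confident predictions,
    log(num_classes) for maximally uncertain ones."""
    total = 0.0
    for probs in prob_maps:
        total += -sum(p * math.log(p) for p in probs if p > 0)
    return total / len(prob_maps)
```

A region where the model waffles between classes thus scores near log 2 (for two classes), while confidently predicted regions score near zero.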
APA, Harvard, Vancouver, ISO, and other styles
21

Hinnerson, Mattias. "Techniques for semi-automatic generation of data cubes from star-schemas." Thesis, Umeå universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-130648.

Full text
Abstract:
The aim of this thesis is to investigate techniques to better automate the process of generating data cubes from star or snowflake schemas. The company Trimma builds cubes manually today, but we investigate doing this more efficiently. We select two basic approaches and implement them as Prototype A and Prototype B. Prototype A is a direct method that communicates directly with a database server. Prototype B is an indirect method that creates configuration files that can later be loaded onto a database server. We evaluate the two prototypes over a star schema case and a snowflake schema case provided by Trimma. The evaluation criteria include completeness, usability, documentation and support, maintainability, license costs, and development speed. Our evaluation indicates that Prototype A generally outperforms Prototype B, and that Prototype A arguably performs better than the manual method currently employed by Trimma.
APA, Harvard, Vancouver, ISO, and other styles
22

Jungfer, Kim Michael. "Semi automatic generation of CORBA interfaces for databases in molecular biology." Thesis, University College London (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Edvardsson, Jon. "Techniques for Automatic Generation of Tests from Programs and Specifications." Doctoral thesis, Linköping : Department of Computer and Information Science, Linköping universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Kraut, Daniel. "Generování modelů pro testy ze zdrojových kódů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403157.

Full text
Abstract:
The aim of this master's thesis is to design and implement a tool for the automatic generation of paths through source code. First, a study was made of model-based testing and of possible designs for the desired automatic generator, based on coverage criteria defined on a CFG model. The main part of the thesis is the tool's design and a description of its implementation. The tool supports many coverage criteria, which allows its user to focus on a specific artefact of the system under test. Moreover, the tool accepts additional requirements on the size of the generated test suite, reflecting practical real-world usage. The generator was implemented in C++, with a web interface in Python that is also used to integrate the tool into the Testos platform.
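Generating test paths from a CFG under a branch-coverage criterion can be sketched with a simple depth-first enumeration that keeps a path only if it covers a new edge; this greedy rule is an illustrative stand-in, not the thesis tool's algorithm.

```python
def branch_covering_paths(cfg, entry, exit_node):
    """Enumerate entry -> exit paths through a CFG (dict mapping each
    node to its successor list), keeping each path that covers at least
    one not-yet-covered edge, until all edges are covered."""
    uncovered = {(u, v) for u, succs in cfg.items() for v in succs}
    paths = []

    def dfs(node, path, seen_edges):
        if node == exit_node:
            if uncovered & seen_edges:           # path adds coverage
                uncovered.difference_update(seen_edges)
                paths.append(path)
            return
        for succ in cfg.get(node, []):
            edge = (node, succ)
            if edge not in seen_edges:           # don't re-walk loops
                dfs(succ, path + [succ], seen_edges | {edge})

    dfs(entry, [entry], set())
    return paths
```

For a diamond-shaped CFG, two paths suffice for full branch coverage; extra requirements on test-suite size, as the abstract mentions, would prune or merge this path set further.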
APA, Harvard, Vancouver, ISO, and other styles
25

Liu, Fangfang. "An ontology-based approach to Automatic Generation of GUI for Data Entry." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/1094.

Full text
Abstract:
This thesis reports an ontology-based approach to the automatic generation of highly tailored GUI components that can make customized data requests for end users. Using this GUI generator, a domain expert without any programming skills can browse the data schema through the ontology file of his/her own field, choose attribute fields according to business needs, and build a highly customized GUI for end users' data-request input. The interface for the domain expert is a tree-view structure that shows not only the domain taxonomy categories but also the relationships between classes. By clicking the checkbox associated with each class, the expert indicates his/her choice of the needed information. These choices are stored in a metadata document in XML. From the viewpoint of programmers, the metadata contains no ambiguity; every class in an ontology is unique. The metadata can be used in various ways; I have carried out the process of GUI generation. Since every class, and every attribute in each class, has been formally specified in the ontology, generating the GUI is automatic. This approach has been applied to a use-case scenario in the meteorological and oceanographic (METOC) area. The resulting features of this prototype are reported in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
26

Doungsa-ard, Chartchai, Keshav P. Dahal, M. Alamgir Hossain, and T. Suwannasart. "An automatic test data generation from UML state diagram using genetic algorithm." IEEE, 2007. http://hdl.handle.net/10454/2492.

Full text
Abstract:
Software testing is part of the software development process, yet it is the first part software developers skip when there is limited time to complete a project. Developers often finish their software construction close to the delivery time and usually do not have enough time to create effective test cases for their programs. Creating test cases manually is a huge amount of work for developers under such time pressure. A tool that automatically generates test cases and test data can help developers create test cases from software designs/models at an early stage of development (before coding). Heuristic techniques can be applied to create quality test data. In this paper, a GA-based test data generation technique is proposed to generate test data from UML state diagrams, so that test data can be generated before coding. The paper details the GA implementation for generating sequences of triggers for UML state diagrams as test cases. The proposed algorithm is demonstrated manually on the example of a vending machine.
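The GA loop described above can be sketched minimally. The toy vending-machine state machine, the fitness definition, and all GA parameters below are illustrative assumptions, not the authors' implementation:

```python
import random

# Toy vending-machine state machine: (state, trigger) -> next state.
# The machine, fitness definition, and GA parameters are illustrative
# assumptions, not taken from the paper.
TRANSITIONS = {
    ("idle", "coin"): "paid",
    ("paid", "coin"): "paid",
    ("paid", "select"): "dispensing",
    ("dispensing", "done"): "idle",
    ("paid", "refund"): "idle",
}
TRIGGERS = ["coin", "select", "done", "refund"]

def fitness(seq):
    """Count the distinct transitions a trigger sequence exercises."""
    state, covered = "idle", set()
    for trig in seq:
        nxt = TRANSITIONS.get((state, trig))
        if nxt is not None:
            covered.add((state, trig))
            state = nxt
    return len(covered)

def evolve(pop_size=30, seq_len=8, generations=50, seed=1):
    """Evolve trigger sequences toward maximal transition coverage."""
    rng = random.Random(seed)
    pop = [[rng.choice(TRIGGERS) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, seq_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:            # point mutation
                child[rng.randrange(seq_len)] = rng.choice(TRIGGERS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Each evolved sequence is a test case: replaying its triggers against the implementation and checking the reached states yields an executable test.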
APA, Harvard, Vancouver, ISO, and other styles
27

Löw, Simon. "Automatic Generation of Patient-specific Gamma Knife Treatment Plans for Vestibular Schwannoma Patients." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273925.

Full text
Abstract:
In this thesis a new fully automatic process for radiotherapy treatment planning with the Leksell Gamma Knife is implemented and evaluated: first, a machine learning algorithm is trained to predict the desired dose distribution; then a convex optimization problem is solved to find the optimal Gamma Knife configuration, using the prediction as the optimization objective. The method is evaluated using Bayesian linear regression, Gaussian processes and convolutional neural networks for the prediction: the quality of the generated treatment plans is compared to the clinical treatment plans, and the relationship between the prediction and the optimization result is analyzed. The convolutional neural network model shows the best performance and predicts realistic treatment plans, which change only minimally under the optimization and are on the same quality level as the clinical plans. The Bayesian linear regression model generates plans on the same quality level, but is not able to predict realistic treatment plans, which leads to substantial changes to the plan under the optimization. The Gaussian process shows the worst performance and is not able to predict plans of the same quality as the clinical plans.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhao, Hongkun. "Automatic wrapper generation for the extraction of search result records from search engines." Diss., Online access via UMI:, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Erande, Abhijit. "Automatic detection of significant features and event timeline construction from temporally tagged data." Kansas State University, 2009. http://hdl.handle.net/2097/1675.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
William H. Hsu
The goal of my project is to summarize large volumes of data and help users visualize how events have unfolded over time. I address the problem of extracting overview terms from a time-tagged corpus of data and discuss some previous work conducted in this area. I use a statistical approach to automatically extract key terms, form groupings of related terms, and display the resulting groups on a timeline. I use a static corpus composed of news stories, as opposed to an online setting where additions to the corpus are made continually. Terms are extracted using a Named Entity Recognizer, and the importance of a term is determined using the χ² (chi-square) measure. My approach does not address the problem of associating time and date stamps with data, and is restricted to corpora that have been explicitly tagged. The quality of the results obtained is gauged subjectively and objectively by measuring the degree to which events known to exist in the corpus were identified by the system.
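The χ² term-importance test mentioned above can be sketched with a 2×2 contingency table comparing a term's frequency inside one time slice against the rest of the corpus. The function name and the example counts are illustrative assumptions, not the thesis code:

```python
def chi_square(term_in_slice, slice_total, term_in_corpus, corpus_total):
    """2x2 chi-square score: is the term over-represented in a time slice?"""
    a = term_in_slice                      # term occurrences inside the slice
    b = slice_total - term_in_slice        # other tokens inside the slice
    c = term_in_corpus - term_in_slice     # term occurrences outside the slice
    d = (corpus_total - slice_total) - c   # other tokens outside the slice
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / den if den else 0.0

# A term concentrated in one slice scores high; one spread exactly
# proportionally to the slice size scores 0.
print(chi_square(30, 1000, 50, 100000) > chi_square(1, 1000, 100, 100000))  # True
```

Terms with the highest scores in a slice become candidate overview terms for that interval of the timeline.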
APA, Harvard, Vancouver, ISO, and other styles
30

Shahaf, Dafna. "Automatic Generation of Issue Maps: Structured, Interactive Outputs for Complex Information Needs." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/210.

Full text
Abstract:
When information is abundant, it becomes increasingly difficult to fit nuggets of knowledge into a single coherent picture. Complex stories spaghetti into branches, side stories, and intertwining narratives; search engines, our most popular navigational tools, are limited in their capacity to explore such complex stories. We propose a methodology for creating structured summaries of information, which we call metro maps. Our proposed algorithm generates a concise structured set of documents that maximizes coverage of salient pieces of information. Most importantly, metro maps explicitly show the relations among retrieved pieces in a way that captures story development. The overarching theme of this work is formalizing characteristics of good maps, and providing efficient algorithms (with theoretical guarantees) to optimize them. Moreover, as information needs vary from person to person, we integrate user interaction into our framework, allowing users to alter the maps to better reflect their interests. Pilot user studies with real-world datasets demonstrate that the method is able to produce maps which help users acquire knowledge efficiently. We believe that metro maps could be powerful tools for any Web user, scientist, or intelligence analyst trying to process large amounts of data.
APA, Harvard, Vancouver, ISO, and other styles
31

Barclay, Peter J. "Object oriented modelling of complex data with automatic generation of a persistent representation." Thesis, Edinburgh Napier University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Ferreira, Fernando Henrique Inocêncio Borba. "Framework de geração de dados de teste para programas orientados a objetos." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-09032013-102901/.

Full text
Abstract:
Test data generation is a mandatory activity of the software testing process. In general, it is carried out by testing practitioners, which makes it costly and its automation needed. Existing frameworks to support this activity are restricted, providing only one data generation technique, a single fitness function to evaluate individuals, and a unique selection algorithm. This work describes the JaBTeG (Java Bytecode Test Generation) framework for test data generation. The main characteristic of JaBTeG is to allow the development of data generation methods by selecting the data generation technique, the fitness function, the selection algorithm and the structural testing criteria. By using JaBTeG, new methods for test data generation can be developed and experimented with. The framework was associated with JaBUTi (Java Bytecode Understanding and Testing) to support test data creation. Four data generation techniques, two fitness functions, and four selection algorithms were developed to validate the approach proposed by the framework. In addition, five programs with different characteristics were tested with data generated using the methods supported by the framework.
APA, Harvard, Vancouver, ISO, and other styles
33

Akinci, Arda. "Universal Command Generator For Robotics And Cnc Machinery." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610579/index.pdf.

Full text
Abstract:
In this study a universal command generator has been designed for robotics and CNC machinery. Encoding techniques have been utilized to represent the commands, and their efficiencies are discussed. The developed algorithm generates the trajectory of the end-effector with linear and circular interpolation in an offline fashion; the corresponding joint states and their error envelopes are computed using a numerical inverse kinematic solver with a predefined precision. Finally, the command encoder employs the resulting data and produces the representation of positions in joint space using the proposed encoding techniques, depending on the error tolerance for each joint. The encoding methods considered in this thesis are: lossless data compression via higher-order finite differences with Huffman coding and arithmetic coding; polynomial fitting with Chebyshev, Legendre and Bernstein polynomials; and finally Fourier and wavelet transformations. The algorithm is simulated for the Puma 560 and Stanford manipulators on a trajectory in order to evaluate the performance of the above-mentioned techniques (i.e. approximation error, memory requirement, number of commands generated). According to the case studies, Chebyshev polynomials were determined to be the most suitable technique for command generation. The proposed methods have been implemented in the MATLAB environment due to its versatile toolboxes. This research paves the way for developing an encoding/decoding standard for an advanced command generator scheme for computer numerically controlled (CNC) machines in the near future.
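The lossless higher-order finite-difference step mentioned above can be sketched as follows; the sample joint positions and helper names are illustrative assumptions, not the thesis implementation, and the subsequent Huffman/arithmetic entropy-coding stage is omitted:

```python
def diff(seq):
    """One finite-difference pass; keeps the first value so it is invertible."""
    return [seq[0]] + [b - a for a, b in zip(seq, seq[1:])]

def undiff(seq):
    """Exact inverse of diff: cumulative sum starting from the kept value."""
    out = [seq[0]]
    for d in seq[1:]:
        out.append(out[-1] + d)
    return out

# Illustrative integer-quantized joint positions (not data from the thesis).
positions = [0, 4, 9, 15, 22, 30, 39, 49]
residuals = diff(diff(positions))       # second-order differences
assert residuals == [0, 4, 1, 1, 1, 1, 1, 1]
assert undiff(undiff(residuals)) == positions   # lossless round trip
```

The small, highly repetitive residuals are what make the entropy-coding stage that follows (Huffman or arithmetic coding) effective, while the round trip stays exact.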
APA, Harvard, Vancouver, ISO, and other styles
34

Lachut, Watkins Alison Elizabeth. "An investigation into adaptive search techniques for the automatic generation of software test data." Thesis, University of Plymouth, 1996. http://hdl.handle.net/10026.1/1618.

Full text
Abstract:
The focus of this thesis is on the use of adaptive search techniques for the automatic generation of software test data. Three adaptive search techniques are used: genetic algorithms (GAs), Simulated Annealing and Tabu search. In addition to these, hybrid search methods have been developed and applied to the problem of test data generation. The adaptive search techniques are compared to random generation to ascertain the effectiveness of adaptive search. The results indicate that GAs and Simulated Annealing outperform random generation in all test programs. Tabu search outperformed random generation in most tests, but it lost its effectiveness as the amount of input data increased. The hybrid techniques have given mixed results. The two best methods, GAs and Simulated Annealing, are then compared to random generation on a program written to optimise capital budgeting; both perform better than random generation, and Simulated Annealing requires less test data than GAs. Further research highlights a need to study the control parameters of all the adaptive search methods and to attain test data covering border conditions.
APA, Harvard, Vancouver, ISO, and other styles
35

Salama, Mohamed Ahmed Said. "Automatic test data generation from formal specification using genetic algorithms and case based reasoning." Thesis, University of the West of England, Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ramnerö, David. "Semi-automatic Training Data Generation for Cell Segmentation Network Using an Intermediary Curator Net." Thesis, Uppsala universitet, Bildanalys och människa-datorinteraktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-332724.

Full text
Abstract:
In this work we create an image analysis pipeline to segment cells from microscopy image data. A portion of the segmented images are manually curated and this curated data is used to train a Curator network to filter the whole dataset. The curated data is used to train a separate segmentation network to improve the cell segmentation. This technique can be easily applied to different types of microscopy object segmentation.
APA, Harvard, Vancouver, ISO, and other styles
37

Ulas, Yaman. "Design Of Advanced Motion Command Generators Utilizing Fpga." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12612054/index.pdf.

Full text
Abstract:
In this study, universal motion command generator systems utilizing a Field Programmable Gate Array (FPGA) and an interface board for Robotics and Computer Numerical Control (CNC) applications have been developed. These command generation systems can be classified into two main groups as polynomial approximation and data compression based methods. In the former type of command generation methods, the command trajectory is firstly divided into segments according to the inflection points. Then, the segments are approximated using various polynomial techniques. The sequence originating from modeling error can be further included to the generated series. In the second type, higher-order differences of a given trajectory (i.e. position) are computed and the resulting data are compressed via lossless data compression techniques. Besides conventional approaches, a novel compression algorithm is also introduced in the study. This group of methods is capable of generating trajectory data at variable rates in forward and reverse directions. The generation of the commands is carried out according to the feed-rate (i.e. the speed along the trajectory) set by the external logic dynamically. These command generation techniques are implemented in MATLAB and then the best ones from each group are realized using FPGAs and their performances are assessed according to the resources used in the FPGA chip, the speed of command generation, and the memory size in Static Random Access Memory (SRAM) chip located on the development board.
APA, Harvard, Vancouver, ISO, and other styles
38

Yarkinoglu, Onur. "Computer Aided Manufacturing (cam) Data Generation For Solid Freeform Fabrication." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608834/index.pdf.

Full text
Abstract:
Rapid prototyping (RP) is a set of fabrication technologies used to produce accurate parts directly from computer-aided design (CAD) data. These technologies are unique in that they use an additive fabrication approach in which a three-dimensional (3D) object is produced directly. In this thesis study, an RP application with a modular architecture is designed and implemented to satisfy the possible requirements of future rapid prototyping studies. After a functional classification, the developed RP software is divided into View, RP and Slice modules. In the RP module, the process parameter selection and optimal build orientation determination steps are carried out. In the Slice module, the slicing and tool path generation steps are performed. The View module is used to visualize the inputs and outputs of the RP software. To provide 3D visualization support for the View module, a fully independent, open-for-development, high-level 3D modeling environment and graphics library called Graphics Framework was developed. The resulting RP application is benchmarked against the RP software packages on the market in terms of memory usage and processing time. As a result of this benchmark, the developed RP software showed performance equivalent to other commercial RP applications and proved its success.
APA, Harvard, Vancouver, ISO, and other styles
39

Carlsson, Martin. "Automatic Code Generation from a Colored Petri Net Specification for Game Development with Unity3D." Thesis, Uppsala universitet, Institutionen för speldesign, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-354159.

Full text
Abstract:
This thesis proposes an approach for automatic code generation from a Colored Petri net specification. Two tools were developed for the aforementioned purpose, a Colored Petri net editor to create and modify Colored Petri nets, and an automatic code generator to generate code from a Colored Petri net specification. Through the use of the editor four models were created, these models were used as input to the automatic code generator. The automatic code generator successfully generated code from the Colored Petri net specification, code in the form of component scripts for the Unity3D game engine. However, the approach used by the code generator had flaws such as introducing overhead in the generated code, failing to deal with concurrency, and restricting the types of Colored Petri nets which could be used as input. The aforementioned tools could be used in the future to research the benefits and disadvantages of modeling game systems with Colored Petri nets, and automatically generating code from Colored Petri nets.
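The basic Colored Petri net firing rule that any such code generator must reproduce can be sketched minimally. The tiny game-flavored net and all names below are illustrative assumptions, not the thesis tooling:

```python
from collections import Counter

# Places hold multisets of colored tokens; a transition fires only when its
# input places contain the required tokens. Net and names are illustrative.
marking = {
    "ready": Counter({"player": 1}),
    "items": Counter({"potion": 2}),
    "used": Counter(),
}

def enabled(marking, inputs):
    """Check that every (place, color) holds at least the required count."""
    return all(marking[place][color] >= n
               for (place, color), n in inputs.items())

def fire(marking, inputs, outputs):
    """Consume input tokens and produce output tokens atomically."""
    if not enabled(marking, inputs):
        raise ValueError("transition not enabled")
    for (place, color), n in inputs.items():
        marking[place][color] -= n
    for (place, color), n in outputs.items():
        marking[place][color] += n

# "use_item": consumes the player token and one potion, returns the player
# token and records the used potion.
use_item_in = {("ready", "player"): 1, ("items", "potion"): 1}
use_item_out = {("ready", "player"): 1, ("used", "potion"): 1}
fire(marking, use_item_in, use_item_out)
```

A generated Unity3D component script would then carry such a marking as its state and expose one method per transition, guarded by the same enabledness check.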
APA, Harvard, Vancouver, ISO, and other styles
40

Lindqvist, Niklas. "Automatic Question Paraphrasing in Swedish with Deep Generative Models." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-294320.

Full text
Abstract:
Paraphrase generation refers to the task of automatically generating a paraphrase given an input sentence or text. Paraphrase generation is a fundamental yet challenging natural language processing (NLP) task and is utilized in a variety of applications such as question answering, information retrieval, conversational systems etc. In this study, we address the problem of paraphrase generation of questions in Swedish by evaluating two different deep generative models that have shown promising results on paraphrase generation of questions in English. The first model is a Conditional Variational Autoencoder (C-VAE) and the other model is an extension of the first one where a discriminator network is introduced into the model to form a Generative Adversarial Network (GAN) architecture. In addition to these models, a method not based on machine-learning was implemented to act as a baseline. The models were evaluated using both quantitative and qualitative measures including grammatical correctness and equivalence to source question. The results show that the deep generative models outperformed the baseline across all quantitative metrics. Furthermore, from the qualitative evaluation it was shown that the deep generative models outperformed the baseline at generating grammatically correct sentences, but there was no noticeable difference in terms of equivalence to the source question between the models.
APA, Harvard, Vancouver, ISO, and other styles
41

Doungsa-ard, Chartchai. "Generation of Software Test Data from the Design Specification Using Heuristic Techniques. Exploring the UML State Machine Diagrams and GA Based Heuristic Techniques in the Automated Generation of Software Test Data and Test Code." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5380.

Full text
Abstract:
Software testing is a tedious and very expensive undertaking. Automatic test data generation is, therefore, proposed in this research to help testers reduce their work as well as ascertain software quality. The concept of test driven development (TDD) has become increasingly popular during the past several years. According to TDD, test data should be prepared before the beginning of code implementation. Therefore, this research asserts that the test data should be generated from the software design documents which are normally created prior to software code implementation. Among such design documents, the UML state machine diagrams are selected as a platform for the proposed automated test data generation mechanism. Such diagrams are selected because they show behaviours of a single object in the system. The genetic algorithm (GA) based approach has been developed and applied in the process of searching for the right amount of quality test data. Finally, the generated test data have been used together with UML class diagrams for JUnit test code generation. The GA-based test data generation methods have been enhanced to take care of parallel path and loop problems of the UML state machines. In addition the proposed GA-based approach is also targeted to solve the diagrams with parameterised triggers. As a result, the proposed framework generates test data from the basic state machine diagram and the basic class diagram without any additional nonstandard information, while most other approaches require additional information or the generation of test data from other formal languages. The transition coverage values for the introduced approach here are also high; therefore, the generated test data can cover most of the behaviour of the system.
APA, Harvard, Vancouver, ISO, and other styles
42

Mairhofer, Stefan. "Search-based software testing and complex test data generation in a dynamic programming language." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4340.

Full text
Abstract:
Manually creating test cases is time consuming and error prone. Search-based software testing (SBST) can help automate this process and thus reduce time and effort and increase quality by automatically generating relevant test cases. Previous research has mainly focused on static programming languages with simple test data inputs such as numbers. In this work we present an approach to search-based software testing for dynamic programming languages that can generate test scenarios and both simple and more complex test data. This approach is implemented as a tool in and for the dynamic programming language Ruby. It uses an evolutionary algorithm to search for tests that give structural code coverage. We have evaluated the system in an experiment on a number of code examples that differ in complexity and the type of input data they require, and we compare our system with the results obtained by a random test case generator. The experiment shows that the presented approach can compete with random testing and, in many situations, finds tests and data that give higher structural code coverage more quickly.
APA, Harvard, Vancouver, ISO, and other styles
43

Pereira, José Casimiro. "Natural language generation in the context of multimodal interaction in Portuguese : Data-to-text based in automatic translation." Doctoral thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/21767.

Full text
Abstract:
Doctorate in Informatics
Abstract in Portuguese not available
To enable interaction by text and/or speech it is essential that we devise systems capable of translating internal data into sentences or texts that can be shown on screen or heard by users. In this context, it is essential that these natural language generation (NLG) systems provide sentences in the native languages of the users (in our case European Portuguese) and enable an easy development and integration process while producing output that is perceived as natural. The creation of high-quality NLG systems is not an easy task, even for a small domain. The main difficulties arise from: classic approaches being very demanding in know-how and development time; a lack of variability in the sentences generated by most generation methods; the difficulty of easily accessing complete tools; a shortage of resources, such as large corpora; and support being available in only a limited number of languages. The main goal of this work was to propose, develop and test a method to convert data to Portuguese text, which can be developed with the smallest possible amount of time and resources while remaining capable of generating utterances with variability and quality. The thesis argues that this goal can be achieved by adopting data-driven language generation (more precisely, generation based on language translation) and following an engineering research methodology. In this thesis, two Data2Text NLG systems are presented. They were designed to provide a way to quickly develop an NLG system that can generate sentences of good quality. The proposed systems use tools that are freely available and can be developed by people with limited linguistic skills. One important characteristic is the use of statistical machine translation techniques; this approach requires only a small natural language corpus, resulting in easier and cheaper development compared to more common approaches.
The main result of this thesis is the demonstration that, by following the proposed approach, it is possible to create systems capable of translating information/data into good-quality sentences in Portuguese. This is done without major effort in resource creation and with the common knowledge of an experienced application developer. The systems created, particularly the hybrid system, are capable of providing a good solution for problems in data-to-text conversion.
APA, Harvard, Vancouver, ISO, and other styles
44

Williams, Robert L. "Synthesis and design of the RSSR spatial mechanism for function generation." Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/41573.

Full text
Abstract:

The purpose of this thesis is to provide a complete package for the synthesis and design of the RSSR spatial function-generating mechanism. In addition to the introductory material, the thesis is divided into three sections. The section on background kinematic theory covers synthesis, analysis, link rotatability, transmission quality, and branching analysis. The second section details the computer application of the kinematic theory: the program RSSRSD has been developed to incorporate the RSSR synthesis and design theory, and an example is included to demonstrate the computer-implemented theory. The third part includes miscellaneous mechanism considerations and recommendations for further research. The theoretical work in this project is a combination of original derivations and applications of theory from the mechanism literature.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
45

Nikolaus, Ulrich, and Julia Dobroschke. "Automatic conversion of PDF-based, layout-oriented typesetting data to DAISY: potentials and limitations." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-38042.

Full text
Abstract:
Only two percent of new books released in Germany are professionally edited for visually impaired people. However, more and more print publications are made available to the public in digital formats through online content delivery platforms like “libreka!”. The automatic conversion of such contents into DAISY would considerably increase the number of publications available in accessible formats. Still, most data available on “libreka!” is published as non-tagged PDF. In this paper, we examine the potential for automatic conversion of “libreka!”-based content into DAISY, while also analyzing the potentials and limitations of current conversion tools.
APA, Harvard, Vancouver, ISO, and other styles
46

Daniel, Jérémie. "Trajectory generation and data fusion for control-oriented advanced driver assistance systems." Phd thesis, Université de Haute Alsace - Mulhouse, 2010. http://tel.archives-ouvertes.fr/tel-00608549.

Full text
Abstract:
Since the origin of the automobile at the end of the 19th century, traffic flow has constantly increased and, unfortunately, so has the number of road accidents. Research studies, such as the one performed by the World Health Organization, show alarming results regarding the number of injuries and fatalities due to these accidents. To reduce these figures, one solution lies in the development of Advanced Driver Assistance Systems (ADAS), whose purpose is to help the Driver in his driving task. This research topic has proven very dynamic and productive over the last decades. Indeed, several systems such as the Anti-lock Braking System (ABS), Electronic Stability Program (ESP), Adaptive Cruise Control (ACC), Parking Manoeuvre Assistant (PMA), Dynamic Bending Light (DBL), etc. are already available on the market, and their benefits are now recognized by most drivers. This first generation of ADAS is usually designed to perform a specific task in the Controller/Vehicle/Environment framework and thus requires only microscopic information, i.e. sensors giving only local information about an element of the Vehicle or of its Environment. In contrast, the next ADAS generation will have to consider more aspects, i.e. information and constraints about the Vehicle and its Environment. Indeed, as they are designed to perform more complex tasks, they need a global view of the road context and the Vehicle configuration. For example, longitudinal control requires information about the road configuration (straight line, bend, etc.) and about the eventual presence of other road users (vehicles, trucks, etc.) to determine the best reference speed. [...]
APA, Harvard, Vancouver, ISO, and other styles
47

Nikolaus, Ulrich, and Julia Dobroschke. "Automatic conversion of PDF-based, layout-oriented typesetting data to DAISY: potentials and limitations." Tagungsband zu: DAISY International Technical Conference : Barrierefreie Aufbereitung von Dokumenten, 21. - 27. September 2009, Leipzig/Germany. - Leipzig : DZB, 2009. - S. 115 - 127, 2009. https://slub.qucosa.de/id/qucosa%3A797.

Full text
Abstract:
Only two percent of new books released in Germany are professionally edited for visually impaired people. However, more and more print publications are made available to the public in digital formats through online content delivery platforms like “libreka!”. The automatic conversion of such contents into DAISY would considerably increase the number of publications available in accessible formats. Still, most data available on “libreka!” is published as non-tagged PDF. In this paper, we examine the potential for automatic conversion of “libreka!”-based content into DAISY, while also analyzing the potentials and limitations of current conversion tools.
APA, Harvard, Vancouver, ISO, and other styles
48

Singh, Inderjeet. "A Mapping Study of Automation Support Tools for Unit Testing." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-15192.

Full text
Abstract:
Unit testing is defined as a test activity usually performed by a developer to demonstrate program functionality and to show that a module meets its requirements specification. Nowadays, unit testing is considered an integral part of the software development cycle. However, unit testing performed by developers is still a major concern because of the time and cost involved. Automation support for unit testing, in the form of various automation tools, could significantly lower the cost of the unit testing phase as well as decrease the time developers spend on actual testing. The problem is how to choose the most appropriate tool to suit the developer's requirements in terms of cost, effort, level of automation, language support, etc. This research work presents results from a systematic literature review aimed at finding all unit testing tools with automation support. In the systematic literature review, we initially identified 1957 studies. After several removal stages, 112 primary studies were listed and 24 tools identified in total. Along with the list of tools, we provide a categorization of all the tools found, based on programming language support, availability (licensed, open source, free), testing technique, level of effort required from the developer, and target domain, which we consider useful properties for a developer deciding which tool to use. Additionally, we categorize the types of errors found by some tools, which could help a developer assess a tool's effectiveness. The main intent of this report is to aid developers in choosing an appropriate unit testing tool; the categorization table of available tools with automated unit testing support eases this process significantly.
This work could also benefit researchers who wish to evaluate the efficiency and effectiveness of each tool and use this information to eventually build a new tool with the same properties as several others.
APA, Harvard, Vancouver, ISO, and other styles
49

Enderlin, Ivan. "Génération automatique de tests unitaires avec Praspel, un langage de spécification pour PHP." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2067/document.

Full text
Abstract:
The works presented in this thesis concern the validation of PHP programs through a new specification language, along with its tools. These works follow three axes: specification language, automatic test data generation and automatic unit test generation. The first contribution is Praspel, a new specification language for PHP, based on Design by Contract. Praspel specifies data with realistic domains, which are new structures allowing data to be validated and generated. Based on a contract written in Praspel, we are able to perform Contract-based Testing, i.e. to use contracts to automatically generate unit tests. The second contribution concerns test data generation. For booleans, integers and floating point numbers, a uniform random generation is used. For arrays, a dedicated constraint solver has been implemented and used. For strings, a grammar description language along with an LL(⋆) compiler compiler and several data generation algorithms are used. Finally, object generation is supported. The third contribution defines contract coverage criteria, which provide test objectives. All these contributions have been implemented and experimented with in tools distributed to the PHP community.
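The contract-driven generation idea described in this abstract can be illustrated with a small sketch. This is a hypothetical transposition to Python, not Praspel's actual PHP syntax or API: a contract declares a domain for each parameter plus a postcondition, and the harness draws uniform random data from the domains (the strategy the abstract names for booleans and integers) and checks the postcondition. All names here (`integer`, `boolean`, `contract_test`, `double_if`) are invented for illustration.

```python
import random

# Domain constructors: each returns a generator drawing uniformly at random.
def integer(lo, hi):
    return lambda rng: rng.randint(lo, hi)

def boolean():
    return lambda rng: rng.choice([True, False])

def contract_test(func, domains, ensures, runs=100, seed=0):
    """Draw random arguments from the domains and check the postcondition."""
    rng = random.Random(seed)
    for _ in range(runs):
        args = [draw(rng) for draw in domains]
        result = func(*args)
        assert ensures(result, *args), f"contract violated for {args}"

# Function under test with an implicit contract: requires n >= 1,
# ensures that the result is never smaller than n.
def double_if(n, flag):
    return n * 2 if flag else n

contract_test(double_if,
              domains=[integer(1, 1000), boolean()],
              ensures=lambda r, n, flag: r >= n)
```

The same pattern scales to richer domains (arrays via a constraint solver, strings via grammars), which is where the thesis's later contributions come in.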
APA, Harvard, Vancouver, ISO, and other styles
50

Bláha, Lukáš. "Analýza automatizovaného generování signatur s využitím Honeypotu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236430.

Full text
Abstract:
In this paper, a system for the automatic processing of attacks using honeypots is discussed. The first goal of the thesis is to become familiar with signatures for detecting malware on the network, especially the analysis and description of existing methods for the automatic generation of signatures using honeypots. The main goal is to use the acquired knowledge in the design and implementation of a tool that detects new malicious software on the network or on an end user's workstation.
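As an illustration of the kind of technique surveyed here (not the thesis's own tool), Honeycomb-style signature generation extracts the longest common substring from payloads captured by a honeypot, on the assumption that payloads of the same attack share an invariant part. A minimal sketch in Python, with invented example payloads:

```python
from difflib import SequenceMatcher
from functools import reduce

def longest_common_substring(a: bytes, b: bytes) -> bytes:
    # SequenceMatcher works on any hashable sequence, including bytes.
    m = SequenceMatcher(None, a, b, autojunk=False)
    match = m.find_longest_match(0, len(a), 0, len(b))
    return a[match.a:match.a + match.size]

def generate_signature(payloads, min_len=4):
    # Fold pairwise: the substring common to all payloads is the candidate
    # signature; discard it if it is too short to be meaningful.
    candidate = reduce(longest_common_substring, payloads)
    return candidate if len(candidate) >= min_len else b""

# Three captured payloads sharing an invariant exploit string.
captures = [
    b"\x90\x90AAAA/bin/EVILSHELL\x01",
    b"BBBB/bin/EVILSHELL padding",
    b"/bin/EVILSHELL\x90\x90CCCC",
]
sig = generate_signature(captures)  # -> b"/bin/EVILSHELL"
```

Real systems add traffic clustering and false-positive filtering on top of this core step, but the sketch shows why honeypot traffic is attractive input: everything a honeypot receives is suspect by definition.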
APA, Harvard, Vancouver, ISO, and other styles
