To view the other types of publications on this topic, follow the link: Automatic data generator.

Dissertations on the topic "Automatic data generator"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 50 dissertations for research on the topic "Automatic data generator".

Next to every entry in the bibliography you will find the "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of disciplines and assemble your bibliography correctly.

1

Kupferschmidt, Benjamin, and Albert Berdugo. „DESIGNING AN AUTOMATIC FORMAT GENERATOR FOR A NETWORK DATA ACQUISITION SYSTEM“. International Foundation for Telemetering, 2006. http://hdl.handle.net/10150/604157.

Full text of the source
Abstract:
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California
In most current PCM based telemetry systems, an instrumentation engineer manually creates the sampling format. This time consuming and tedious process typically involves manually placing each measurement into the format at the proper sampling rate. The telemetry industry is now moving towards Ethernet-based systems comprised of multiple autonomous data acquisition units, which share a single global time source. The architecture of these network systems greatly simplifies the task of implementing an automatic format generator. Automatic format generation eliminates much of the effort required to create a sampling format because the instrumentation engineer only has to specify the desired sampling rate for each measurement. The system handles the task of organizing the format to comply with the specified sampling rates. This paper examines the issues involved in designing an automatic format generator for a network data acquisition system.
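As a rough illustration of the placement problem (a minimal sketch, not the authors' algorithm; all measurement names, rates, and the frame size are invented), a greedy Python routine can slot each measurement into a fixed-length frame at evenly spaced positions:

    # Greedy format placement: highest-rate measurements first, so their
    # slots stay evenly spaced. Illustrative only; real systems must also
    # handle word sizes, subframes, and rate mismatches.
    def build_format(measurements, slots_per_frame):
        """measurements: list of (name, samples_per_frame) pairs."""
        frame = [None] * slots_per_frame
        for name, rate in sorted(measurements, key=lambda m: -m[1]):
            stride = slots_per_frame // rate
            for start in range(stride):
                slots = range(start, slots_per_frame, stride)
                if all(frame[i] is None for i in slots):
                    for i in slots:
                        frame[i] = name
                    break
            else:
                raise ValueError(f"no room for {name} at rate {rate}")
        return frame

    print(build_format([("pressure", 4), ("temp", 2), ("strain", 8)], 16))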
APA, Harvard, Vancouver, ISO, and other citation styles
2

Zhou, Yu. „AUTOMATIC GENERATION OF WEB APPLICATIONS AND MANAGEMENT SYSTEM“. CSUSB ScholarWorks, 2017. https://scholarworks.lib.csusb.edu/etd/434.

Full text of the source
Abstract:
One of the major difficulties in web application design is the tediousness of constructing new web pages from scratch. For traditional web application projects, web application designers usually design and implement the project step by step and in detail. My project is called "automatic generation of web applications and management system." This web application generator can generate generic and customized web applications based on software engineering theories. A flow-driven methodology, expressed in Business Process Model and Notation (BPMN), drives the project. The modules of the project are: database, web server, HTML page, functionality, financial analysis model, customer, and BPMN. BPMN is the most important part of the entire project, because most of the work and data flow depends on the BPMN flow engine. There are two ways to use the project. One way is to go to the main page, choose one web app template, and click the generate button. The other way is for customers to request special orders; the project then provides suitable software development methodologies to follow. After a software development life cycle, customers receive their required product.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Akinci, Arda. „Universal Command Generator For Robotics And Cnc Machinery“. Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610579/index.pdf.

Full text of the source
Abstract:
In this study a universal command generator has been designed for robotics and CNC machinery. Encoding techniques have been utilized to represent the commands, and their efficiencies are discussed. The developed algorithm generates the trajectory of the end-effector with linear and circular interpolation in an offline fashion; the corresponding joint states and their error envelopes are computed with a numerical inverse kinematic solver to a predefined precision. Finally, the command encoder employs the resulting data and produces the representation of positions in joint space using the proposed encoding techniques, depending on the error tolerance for each joint. The encoding methods considered in this thesis are: lossless data compression via higher-order finite differences, Huffman coding and arithmetic coding; polynomial fitting with Chebyshev, Legendre and Bernstein polynomials; and finally Fourier and wavelet transformations. The algorithm is simulated for the Puma 560 and Stanford manipulators on a trajectory in order to evaluate the performance of the above-mentioned techniques (i.e. approximation error, memory requirement, number of commands generated). According to the case studies, Chebyshev polynomials were determined to be the most suitable technique for command generation. The proposed methods have been implemented in the MATLAB environment due to its versatile toolboxes. This research paves the way for an encoding/decoding standard for an advanced command generator scheme for computer numerically controlled (CNC) machines in the near future.
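The Chebyshev result can be made concrete with a small sketch: fit a sampled joint trajectory with Chebyshev polynomials of increasing degree until a tolerance is met, and store the coefficients instead of the raw samples. The trajectory, degree range and tolerance below are illustrative assumptions, not values from the thesis.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    t = np.linspace(-1.0, 1.0, 500)            # normalized time
    theta = 0.8 * np.sin(2.5 * t) + 0.1 * t    # stand-in joint angle profile

    for degree in range(3, 30):
        coeffs = C.chebfit(t, theta, degree)   # least-squares Chebyshev fit
        err = np.max(np.abs(C.chebval(t, coeffs) - theta))
        if err < 1e-4:                         # per-joint error tolerance
            break

    # The (degree + 1) coefficients replace the 500 raw samples.
    print(f"degree {degree}: {degree + 1} coefficients, max error {err:.2e}")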
APA, Harvard, Vancouver, ISO, and other citation styles
4

Naňo, Andrej. „Automatické generování testovacích dat informačních systémů“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445520.

Full text of the source
Abstract:
ISAGEN is a tool for the automatic generation of structurally complex test inputs that imitate real communication in the context of modern information systems. Complex, typically tree-structured data currently represents the standard means of transmitting information between nodes in distributed information systems. The automatic generator ISAGEN is founded on the methodology of data-driven testing and uses concrete data from the production environment as the primary characteristic and specification that guides the generation of new, similar data for test cases satisfying given combinatorial adequacy criteria. The main contribution of this thesis is a comprehensive proposal of automated data generation techniques together with an implementation which demonstrates their usage. The created solution enables testers to create more relevant testing data, representing production-like communication in information systems.
APA, Harvard, Vancouver, ISO, and other citation styles
5

Offutt, Andrew Jefferson VI. „Automatic test data generation“. Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/9167.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Kraut, Daniel. „Generování modelů pro testy ze zdrojových kódů“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403157.

Full text of the source
Abstract:
The aim of this master's thesis is to design and implement a tool for the automatic generation of paths in source code. First, a study of model-based testing was carried out, together with a possible design for the desired automatic generator based on coverage criteria defined on a CFG model. The main part of the thesis is the tool design and a description of its implementation. The tool supports many coverage criteria, which allows its user to focus on a specific artefact of the system under test. Moreover, the tool accepts additional requirements on the size of the generated test suite, reflecting real-world practical usage. The generator was implemented in C++, with a web interface in Python, which is also used to integrate the tool into the Testos platform.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Cousins, Michael Anthony. „Automated structural test data generation“. Thesis, University of Portsmouth, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261234.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Holmes, Stephen Terry. „Heuristic generation of software test data“. Thesis, University of South Wales, 1996. https://pure.southwales.ac.uk/en/studentthesis/heuristic-generation-of-software-test-data(aa20a88e-32a5-4958-9055-7abc11fbc541).html.

Full text of the source
Abstract:
Incorrect system operation can, at worst, be life threatening or financially devastating. Software testing is a destructive process that aims to reveal software faults. Selection of good test data can be extremely difficult. To ease and assist test data selection, several test data generators have emerged that use a diverse range of approaches. Adaptive test data generators use existing test data to produce further effective test data. It has been observed that there is little empirical data on the adaptive approach. This thesis presents the Heuristically Aided Testing System (HATS), which is an adaptive test data generator that uses several heuristics. A heuristic embodies a test data generation technique. Four heuristics have been developed. The first heuristic, Direct Assignment, generates test data for conditions involving an input variable and a constant. The Alternating Variable heuristic determines a promising direction in which to modify input variables, then takes ever-increasing steps in this direction. The Linear Predictor heuristic performs linear extrapolations on input variables. The final heuristic, Boundary Follower, uses input domain boundaries as a guide to locate hard-to-find solutions. Several Ada procedures have been tested with HATS: a quadratic equation solver, a triangle classifier, a remainder calculator and a linear search. Collectively they present some common and rare test data generation problems. The weakest testing criterion HATS has attempted to satisfy is all branches. Stronger, mutation-based criteria have been used on two of the procedures. HATS has achieved complete branch coverage on each procedure, except where there is a higher level of control flow complexity combined with non-linear input variables. Both branch and mutation testing criteria have enabled a better understanding of the test data generation problems and contributed to the evolution of heuristics and the development of new heuristics. This thesis contributes the following to knowledge: empirical data on the adaptive heuristic approach to test data generation; how input domain boundaries can be used as guidance for a heuristic; an effective heuristic termination technique based on the heuristic's progress; a comparison of HATS with random testing. Properties of the test software that indicate when HATS will take less effort than random testing are identified.
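The Alternating Variable heuristic is described concretely enough to sketch. In the toy below (the cost function and step policy are illustrative assumptions, not HATS code), each input variable is probed in both directions, and the step doubles while the cost keeps improving; a cost of zero means the targeted condition is satisfied.

    def alternating_variable_search(cost, x, max_iters=100):
        x = list(x)
        for _ in range(max_iters):
            if cost(x) == 0:                      # target condition satisfied
                return x
            improved = False
            for i in range(len(x)):
                for direction in (-1, 1):         # exploratory move
                    step = 1
                    trial = list(x)
                    trial[i] += direction * step
                    while cost(trial) < cost(x):  # pattern moves: double the step
                        x, improved = trial, True
                        step *= 2
                        trial = list(x)
                        trial[i] += direction * step
            if not improved:
                return x                          # local optimum; a restart would follow
        return x

    # Toy goal: find inputs satisfying x0 == 2 * x1 + 7.
    cost = lambda v: abs(v[0] - 2 * v[1] - 7)
    print(alternating_variable_search(cost, [100, -50]))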
APA, Harvard, Vancouver, ISO, and other citation styles
9

Ege, Raimund K. „Automatic generation of interfaces using constraints“. Full text open access, 1987. http://content.ohsu.edu/u?/etd,144.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Kupferschmidt, Benjamin, and Eric Pesciotta. „Automatic Format Generation Techniques for Network Data Acquisition Systems“. International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606089.

Full text of the source
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Configuring a modern, high-performance data acquisition system is typically a very time-consuming and complex process. Any enhancement to the data acquisition setup software that can reduce the amount of time needed to configure the system is extremely useful. Automatic format generation is one of the most useful enhancements to a data acquisition setup application. By using automatic format generation, an instrumentation engineer can significantly reduce the amount of time spent configuring the system while simultaneously gaining much greater flexibility in creating sampling formats. This paper discusses several techniques that can be used to generate sampling formats automatically while making highly efficient use of the system's bandwidth. This allows the user to obtain most of the benefits of a hand-tuned, manually created format without spending excessive time creating it. One of the primary techniques that this paper discusses is an enhancement to the commonly used power-of-two rule for selecting sampling rates. This allows the system to create formats that use a wider variety of rates. The system is also able to handle groups of related measurements that must follow each other sequentially in the sampling format. This paper also covers a packet-based formatting scheme that organizes measurements by common sampling rates. Each packet contains a set of measurements that are sampled at a particular rate. A key benefit of using an automatic format generation system with this format is the optimization of sampling rates to achieve the best possible match for each measurement's desired sampling rate.
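A minimal sketch of the rate-selection step described above (measurement names and rates are invented): round each requested rate up to the next power of two so nothing is undersampled, then group measurements into per-rate packets.

    import math
    from collections import defaultdict

    def quantize_rate(desired_hz):
        # Classic power-of-two rule: round up so no measurement is undersampled.
        return 2 ** math.ceil(math.log2(desired_hz))

    measurements = {"altitude": 90, "strain_1": 700, "gps_lat": 3, "vibration": 3000}
    packets = defaultdict(list)
    for name, hz in measurements.items():
        packets[quantize_rate(hz)].append(name)

    for rate in sorted(packets):
        print(f"{rate:5d} Hz packet: {packets[rate]}")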
APA, Harvard, Vancouver, ISO, and other citation styles
11

Alshraideh, Mohammad. „Use of program and data-specific heuristics for automatic software test data generation“. Thesis, University of Hull, 2007. http://hydra.hull.ac.uk/resources/hull:12387.

Full text of the source
Abstract:
The application of heuristic search techniques, such as genetic algorithms, to the problem of automatically generating software test data has been a growing interest for many researchers in recent years. The problem tackled by this thesis is the development of heuristics for test data search for a class of test data generation problems that could not be solved prior to the work done in this thesis because of a lack of an informative cost function. Prior to this thesis, work in applying search techniques to structural test data generation was largely limited to numeric test data; in particular, this left open the problem of generating string test data. Some potential string cost functions and corresponding search operators are presented in this thesis. For string equality, an adaptation of the binary Hamming distance is considered, together with two new string-specific match cost functions. New cost functions for string ordering are also defined. For string equality, a version of the edit distance cost function with fine-grained costs based on the difference in character ordinal values was found to be the most effective in an empirical study. A second problem tackled in this thesis is the problem of generating test data for programs whose coverage criterion cost function is locally constant. This arises because the computation produced by many programs leads to a loss of information. The use of flag variables, for example, can lead to information loss. Consequently, conventional instrumentation added to a program receives constant or almost constant input, and hence the search receives very little guidance and will often fail to find test data. The approach adopted in this thesis is to exploit the structure and behaviour of the computation from the input values to the test goal, the usual instrumentation point. The new technique depends on introducing program data-state scarcity as an additional search goal. The search is guided by a new fitness function made up of two parts, one depending on the branch distance of the test goal, the other depending on the diversity of the data-states produced during execution of the program under test. In addition to the program data-state, the program operations, in the form of program-specific search operators, can be used to aid the generation of test data. The use of program-specific operators is demonstrated for strings, and an empirical investigation showed a fivefold increase in performance. This technique can also be generalised to other data types. An empirical investigation of the use of program-specific search operators combined with a data-state scarcity search for flag problems showed a threefold increase in performance.
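The fine-grained string cost can be sketched directly: a standard edit distance in which the substitution cost is the difference in character ordinals, so near-miss strings score lower than distant ones. The insert/delete weight below is an illustrative assumption, not the thesis's exact setting.

    from functools import lru_cache

    def ordinal_edit_distance(a: str, b: str, gap: int = 128) -> int:
        @lru_cache(maxsize=None)
        def d(i: int, j: int) -> int:
            if i == 0:
                return j * gap
            if j == 0:
                return i * gap
            sub = abs(ord(a[i - 1]) - ord(b[j - 1]))  # fine-grained substitution
            return min(d(i - 1, j) + gap,             # deletion
                       d(i, j - 1) + gap,             # insertion
                       d(i - 1, j - 1) + sub)         # (mis)match
        return d(len(a), len(b))

    print(ordinal_edit_distance("hello", "hellp"))  # 1: 'o' and 'p' are adjacent
    print(ordinal_edit_distance("hello", "hella"))  # 14: farther apart, higher cost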
APA, Harvard, Vancouver, ISO, and other citation styles
12

Sthamer, Harmen-Hinrich. „The automatic generation of software test data using genetic algorithms“. Thesis, University of South Wales, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320726.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
13

Yu, Xingjiang. „OSM-Based Automatic Road Network Geometries Generation on Unity“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264903.

Full text of the source
Abstract:
Nowadays, while 3D city reconstruction is widely used in important areas like urban design and traffic simulation, frameworks that can efficiently model large-scale road networks from real-world data are of high interest. However, the diversity of road network forms remains a challenge for automatic reconstruction, and the information extracted from the input data largely determines the final rendered result. In this project, OpenStreetMap data is chosen as the only input of a three-stage method that efficiently generates a geometric model of the associated road network in its varied forms. The method is applied to datasets from real-world cities of different scales; the generated models are rendered and presented on the Unity3D platform and compared with the original road networks in terms of both quality and topology. The results suggest that the method can reconstruct the features of the original road networks in common cases such as three-way and four-way intersections and roundabouts, while consuming far less time than manual modeling of a large-scale urban scene. The framework contributes an auxiliary tool for quick, multi-purpose reconstruction of city traffic systems, though there is still room to improve the generality and quality of the modeling.
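As a hint of the input stage only (the Unity mesh-construction stages are not reproduced, and the tiny embedded extract is invented), drivable ways and node coordinates can be pulled from OpenStreetMap XML like this:

    import xml.etree.ElementTree as ET

    OSM = """<osm>
      <node id="1" lat="59.33" lon="18.06"/><node id="2" lat="59.34" lon="18.07"/>
      <way id="10"><nd ref="1"/><nd ref="2"/><tag k="highway" v="residential"/></way>
    </osm>"""

    root = ET.fromstring(OSM)
    nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
             for n in root.iter("node")}
    roads = [[nodes[nd.get("ref")] for nd in way.iter("nd")]
             for way in root.iter("way")
             if any(t.get("k") == "highway" for t in way.iter("tag"))]
    print(roads)  # one polyline per drivable way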
APA, Harvard, Vancouver, ISO, and other citation styles
14

Wang, Wei. „Automatic Chinese calligraphic font generation with machine learning technology“. Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950605.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
15

Mitteff, Eric. „AUTOMATED ADAPTIVE DATA CENTER GENERATION FOR MESHLESS METHODS“. Master's thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2635.

Full text of the source
Abstract:
Meshless methods have recently received much attention but are yet to reach their full potential, as the required problem setup (i.e. collocation point distribution) is still significant and far from automated. The distribution of points still closely resembles the nodes of finite volume-type meshes, and the free parameter, c, of the radial-basis expansion functions (RBF) still must be tailored specifically to a problem. The localized meshless collocation method investigated requires a local influence region, or topology, used as the expansion medium to produce the required field derivatives. Tests have shown that a regular Cartesian point distribution produces optimal results; however, in order to maintain a locally Cartesian point distribution, a recursive quadtree scheme is herein proposed. The quadtree method allows modeling of irregular geometries and refinement of regions of interest, and it lends itself to full automation, thus reducing problem setup efforts. Furthermore, the construction of the localized expansion regions is closely tied to the point distribution process and, hence, incorporated into the automated sequence. This also allows for the optimization of the RBF free parameter on a local basis to achieve a desired level of accuracy in the expansion. In addition, an optimized auto-segmentation process is adopted to distribute and balance the problem loads throughout a parallel computational environment while minimizing communication requirements.
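A minimal sketch of the recursive quadtree idea (the refinement test and depth limit are illustrative assumptions): subdivide cells wherever refinement is requested and use the cell centres as collocation points, which keeps the distribution locally Cartesian.

    def quadtree_points(x0, y0, size, needs_refinement, depth=0, max_depth=6):
        if depth < max_depth and needs_refinement(x0, y0, size):
            half = size / 2.0
            pts = []
            for dx in (0.0, half):
                for dy in (0.0, half):
                    pts += quadtree_points(x0 + dx, y0 + dy, half,
                                           needs_refinement, depth + 1, max_depth)
            return pts
        return [(x0 + size / 2.0, y0 + size / 2.0)]  # cell centre

    # Refine toward the origin, standing in for a region of interest.
    refine = lambda x, y, s: (x * x + y * y) ** 0.5 < 2 * s
    points = quadtree_points(0.0, 0.0, 1.0, refine)
    print(len(points), "collocation points")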
M.S.M.E.
Department of Mechanical, Materials and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other citation styles
16

Rivers, Kelly. „Automated Data-Driven Hint Generation for Learning Programming“. Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1055.

Full text of the source
Abstract:
Feedback is an essential component of the learning process, but in fields like computer science, which have rapidly increasing class sizes, it can be difficult to provide feedback to students at scale. Intelligent tutoring systems can provide personalized feedback to students automatically, but they can take large amounts of time and expert knowledge to build, especially when determining how to give students hints. Data-driven approaches can be used to provide personalized next-step hints automatically and at scale, by mining previous students' solutions. I have created ITAP, the Intelligent Teaching Assistant for Programming, which automatically generates next-step hints for students in basic Python programming assignments. ITAP is composed of three stages: canonicalization, where a student's code is transformed to an abstracted representation; path construction, where the closest correct state is identified and a series of edits to that goal state are generated; and reification, where the edits are transformed back into the student's original context. With these techniques, ITAP can generate next-step hints for 100% of student submissions, and can even chain these hints together to generate a worked example. Initial analysis showed that hints could be used in practice problems in a real classroom environment, but also demonstrated that students' relationships with hints and help-seeking were complex and required deeper investigation. In my thesis work, I surveyed and interviewed students about their experience with help-seeking and using feedback, and found that students wanted more detail in hints than was initially provided. To determine how hints should be structured, I ran a usability study with programmers at varying levels of knowledge, where I found that more novice students needed much higher levels of content and detail in hints than was traditionally given. I also found that examples were commonly used in the learning process, and could serve an integral role in the feedback provision process. I then ran a randomized control trial experiment to determine the effect of next-step hints on learning and time-on-task in a practice session, and found that having hints available resulted in students spending 13.7% less time during practice while achieving the same learning results as the control group. Finally, I used the data collected during these experiments to measure ITAP's performance over time, and found that generated hints improved as data was added to the system. My dissertation has contributed to the fields of computer science education, learning science, human-computer interaction, and data-driven tutoring. In computer science education, I have created ITAP, which can serve as a practice resource for future programming students during learning. In the learning sciences, I have replicated the expertise reversal effect by finding that more expert programmers want less detail in hints than novice programmers; this finding is important as it implies that programming teachers may provide novices with less assistance than they need. I have contributed to the literature on human-computer interaction by identifying multiple possible representations of hint messages, and analyzing how users react to and learn from these different formats during program debugging. Finally, I have contributed to the new field of data-driven tutoring by establishing that it is possible to always provide students with next-step hints, even without a starting dataset beyond the instructor's solution, and by demonstrating that those hints can be improved automatically over time.
APA, Harvard, Vancouver, ISO, and other citation styles
17

Jones, Charles H., and Lee S. Gardner. „Automated Generation of Telemetry Formats“. International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611414.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
The process of generating a telemetry format is currently more of an ad-hoc art than a science. Telemetry stream formats conform to traditions that seem to be obsolete given today's computing power. Most format designers would have difficulty explaining why they use the development heuristics they use, and even more difficulty explaining why the heuristics work. The formats produced by these heuristics tend to be inefficient in the sense that bandwidth is wasted. This paper makes an important step in establishing a theory on which to base telemetry format construction. In particular, it describes an O(n log n) algorithm for automatically generating telemetry formats. The algorithm also has the potential of efficiently filling a telemetry stream without wasting bits.
APA, Harvard, Vancouver, ISO, and other citation styles
18

Cocosco, Cristian A. „Automatic generation of training data for brain tissue classification from MRI“. Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33965.

Full text of the source
Abstract:
A fully automatic procedure for brain tissue classification from 3D magnetic resonance head images (MRI) is described. The procedure uses feature space proximity measures, and does not make any assumptions about the tissue intensity data distributions. As opposed to existing methods for automatic tissue classification, which are often sensitive to anatomical variability and pathology, the proposed procedure is robust against morphological deviations from the model. A novel method for automatic generation of classifier training samples, using a minimum spanning tree graph-theoretic approach, is proposed in this thesis. Starting from a set of samples generated from prior tissue probability maps (the "model") in a standard, brain-based coordinate system ("stereotaxic space"), the method reduces the fraction of incorrectly labelled samples in this set from 25% down to 2%. The corrected set of samples is then used by a supervised classifier for classifying the entire 3D image. Validation experiments were performed on both real and simulated MRI data; the kappa similarity measure increased from 0.90 to 0.95.
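A toy sketch of the graph-theoretic step (synthetic 2-D feature vectors, an invented edge-length cutoff and component-size threshold; the thesis's actual criteria differ): join candidate samples by a minimum spanning tree in feature space, cut long edges, and keep only samples in large components.

    import numpy as np
    from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(0)
    # Two intensity clusters ("tissue classes") plus a few mislabelled outliers.
    samples = np.vstack([rng.normal(0.0, 0.3, (40, 2)),
                         rng.normal(3.0, 0.3, (40, 2)),
                         rng.uniform(-2.0, 5.0, (5, 2))])

    mst = minimum_spanning_tree(squareform(pdist(samples))).toarray()
    mst[mst > 1.0] = 0.0                        # cut suspiciously long edges
    n_comp, labels = connected_components(mst + mst.T, directed=False)

    keep = np.flatnonzero(np.bincount(labels)[labels] >= 10)
    print(f"kept {keep.size} of {len(samples)} candidate samples")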
APA, Harvard, Vancouver, ISO, and other citation styles
19

Nanda, Yishu. „The automatic computer generation of process flow diagrams from topological data“. Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46464.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
20

Yang, Xile. „Automatic software test data generation from Z specifications using evolutionary algorithms“. Thesis, University of South Wales, 1998. https://pure.southwales.ac.uk/en/studentthesis/automatic-software-test-data-generation-from-z-specifications-using-evolutionary-algorithms(fd661850-9e09-4d28-a857-d551612ccc09).html.

Full text of the source
Abstract:
Test data sets have been automatically generated for both numerical and string data types to test the functionality of simple procedures and a good-sized UNIX filing system from their Z specifications. Different structural properties of software systems are covered, such as arithmetic expressions, existential and universal quantifiers, set comprehension, union, intersection and difference, etc. A CASE tool, ZTEST, has been implemented to automatically generate test data sets. Test cases can be derived automatically from the functionality of the Z specifications. The test data sets generated from the test cases check the behaviour of the software systems for both valid and invalid inputs. Test cases are generated for the four boundary values and an intermediate value of the input search domain. For integer input variables, high-quality test data sets can be generated on the search domain boundary and on each side of the boundary for both valid and invalid tests. Adaptive methods such as Genetic Algorithms and Simulated Annealing are used to generate test data sets from the test cases. GA is chosen as the default test data generator of ZTEST, and direct assignment is used where possible to make the ZTEST system more efficient. Z is a formal language that can be used to precisely describe the functionality of computer systems. Therefore, the test data generation method can be used widely for test data generation of software systems. It will be very useful to systems developed from Z specifications.
APA, Harvard, Vancouver, ISO, and other citation styles
21

Lundberg, Gustav. „Automatic map generation from nation-wide data sources using deep learning“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170759.

Full text of the source
Abstract:
The last decade has seen great advances within the field of artificial intelligence. One of the most noteworthy areas is that of deep learning, which is nowadays used in everything from self-driving cars to automated cancer screening. During the same time, the amount of spatial data encompassing not only two but three dimensions has also grown, and whole cities and countries are being scanned. Combining these two technological advances enables the creation of detailed maps with a multitude of applications, civilian as well as military. This thesis aims at combining two data sources covering most of Sweden, laser data from LiDAR scans and a surface model from aerial images, with deep learning to create maps of the terrain. The target is to learn a simplified version of orienteering maps, as these are created with high precision by experienced map makers and are a representation of how easy or hard it would be to traverse a given area on foot. The performance on different types of terrain is measured, and it is found that open land and larger bodies of water are identified at a high rate, while trails are hard to recognize. It is further researched how the different densities found in the source data affect the performance of the models; some terrain types, trails for instance, benefit from higher-density data, while other features of the terrain, like roads and buildings, are predicted with higher accuracy from lower-density data. Finally, the certainty of the predictions is discussed and visualised by measuring the average entropy of predictions in an area. These visualisations highlight that although the predictions are far from perfect, the models are more certain about their predictions when they are correct than when they are not.
APA, Harvard, Vancouver, ISO, and other citation styles
22

Hinnerson, Mattias. „Techniques for semi-automatic generation of data cubes from star-schemas“. Thesis, Umeå universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-130648.

Full text of the source
Abstract:
The aim of this thesis is to investigate techniques to better automate the process of generating data cubes from star or snowflake schemas. The company Trimma builds cubes manually today, but we investigate doing this more efficiently. We select two basic approaches and implement them as Prototype A and Prototype B. Prototype A is a direct method that communicates directly with a database server. Prototype B is an indirect method that creates configuration files that can later be loaded onto a database server. We evaluate the two prototypes on a star schema case and a snowflake schema case provided by Trimma. The evaluation criteria include completeness, usability, documentation and support, maintainability, license costs, and development speed. Our evaluation indicates that Prototype A generally outperforms Prototype B and that Prototype A arguably performs better than the manual method currently employed by Trimma.
APA, Harvard, Vancouver, ISO, and other citation styles
23

Alam, Mohammad Saquib. „Automatic generation of critical driving scenarios“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288886.

Full text of the source
Abstract:
Despite the tremendous development in the autonomous vehicle industry, the tools for systematic testing are still lacking. Real-world testing is time-consuming and, above all, dangerous. There is also a lack of a framework to automatically generate critical scenarios for testing autonomous vehicles. This thesis develops a general framework for end-to-end testing of an autonomous vehicle in a simulated environment. The framework provides the capability to generate and execute a large number of traffic scenarios in a reliable manner. Two methods are proposed to compute the criticality of a traffic scenario. A so-called critical value is used to learn the probability distribution of the critical scenarios iteratively. The obtained probability distribution can be used to sample critical scenarios for testing and for benchmarking a different autonomous vehicle. To describe the static and dynamic participants of the urban traffic scenarios executed by the simulator, the OpenDrive and OpenScenario standards are used.
APA, Harvard, Vancouver, ISO, and other citation styles
24

Mazidi, Karen. „Infusing Automatic Question Generation with Natural Language Understanding“. Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc955021/.

Full text of the source
Abstract:
Automatically generating questions from text for educational purposes is an active research area in natural language processing. The automatic question generation system accompanying this dissertation is MARGE, which is a recursive acronym for: MARGE automatically reads, generates and evaluates. MARGE generates questions from both individual sentences and the passage as a whole, and is the first question generation system to successfully generate meaningful questions from textual units larger than a sentence. Prior work in automatic question generation from text treats a sentence as a string of constituents to be rearranged into as many questions as allowed by English grammar rules. Consequently, such systems overgenerate and create mainly trivial questions. Further, none of these systems to date has been able to automatically determine which questions are meaningful and which are trivial. This is because the research focus has been placed on NLG at the expense of NLU. In contrast, the work presented here infuses the question generation process with natural language understanding. From the input text, MARGE creates a meaning analysis representation for each sentence in a passage via the DeconStructure algorithm presented in this work. Questions are generated from sentence meaning analysis representations using templates. The generated questions are automatically evaluated for question quality and importance via a ranking algorithm.
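A toy sketch of template-driven generation from a shallow meaning representation (an invented subject-verb-object triple; MARGE's DeconStructure analysis is far richer than this):

    templates = ["Who {verb} {obj}?", "What did {subj} do?"]

    def generate_questions(triple):
        subj, verb, obj = triple
        return [t.format(subj=subj, verb=verb, obj=obj) for t in templates]

    print(generate_questions(("the committee", "approved", "the budget")))
    # ['Who approved the budget?', 'What did the committee do?']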
APA, Harvard, Vancouver, ISO, and other citation styles
25

Fawzy, Kamel Menatalla Ashraf. „A Method for Automatic Generation of Metadata“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177400.

Full text of the source
Abstract:
The thesis introduces a study of the different ways of generating metadata and implementing them in web pages. Metadata are often called data about data. In web pages, metadata holds information that might include keywords, a description, the author, and other details that help the user describe and explain an information resource in order to use, manage and retrieve data easily. Since web pages depend significantly on metadata to increase their traffic from search engines, studying the different methods of generating metadata is an important issue. Generation of metadata can be done both manually and automatically. The aim of the research is to show the results of applying different methods, including a newly proposed method of generating automatic metadata, using a qualitative study. The goal of the research is to show the enhancement achieved by applying the newly proposed method of generating metadata automatically that is implemented in web pages.
APA, Harvard, Vancouver, ISO, and other citation styles
26

Fernandes, Ronald, Michael Graul, Burak Meric and Charles H. Jones. „ONTOLOGY-DRIVEN TRANSLATOR GENERATOR FOR DATA DISPLAY CONFIGURATIONS“. International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605328.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
This paper presents a new approach for the effective generation of translator scripts that can be used to automate the translation of data display configurations from one vendor format to another. Our approach uses the IDEF5 ontology description method to capture the ontology of each vendor format and provides simple rules for performing mappings. In addition, the method includes the specification of mappings between a language-specific ontology and its corresponding syntax specification, that is, either an eXtensible Markup Language (XML) Schema or Document Type Description (DTD). Finally, we provide an algorithm for automatically generating eXtensible Stylesheet Language Transformation (XSLT) scripts that transform XML documents from one language to another. The method is implemented in a graphical tool called the Data Display Translator Generator (DDTG) that supports both inter-language (ontology-to-ontology) and intra-language (syntax-to-ontology) mappings and generates the XSLT scripts. The tool renders the XML Schema or DTD as trees, provides intuitive, user-friendly interfaces for performing the mappings, and provides a report of completed mappings. It also generates data type conversion code when both the source and target syntaxes are XML Schema-based. Our approach has the advantage of performing language mappings at an abstract, ontology level, and facilitates the mapping of tool ontologies to a common domain ontology (in our case, Data Display Markup Language or DDML), thereby eliminating the O(n^2) mapping problem that involves a number of data formats in the same domain.
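To illustrate the kind of artifact such a generator emits (not the DDTG's actual output; the element names are invented), a source-to-target element mapping can be turned into a trivial XSLT stylesheet:

    mapping = {"plotTitle": "chartName", "xAxis": "horizontalAxis"}

    rules = "\n".join(
        f'  <xsl:template match="{src}">\n'
        f'    <{dst}><xsl:apply-templates/></{dst}>\n'
        f'  </xsl:template>' for src, dst in mapping.items())

    print('<xsl:stylesheet version="1.0" '
          'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">\n'
          f'{rules}\n</xsl:stylesheet>')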
APA, Harvard, Vancouver, ISO, and other citation styles
27

Spedicati, Marco. „Automatic generation of annotated datasets for industrial OCR“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17385/.

Full text of the source
Abstract:
Machine learning algorithms need a lot of data, both for training and for testing. However, appropriate data are not always available. This document presents the work that has been carried out at Datalogic USA's laboratories in Eugene, Oregon, USA, to create data for industrial Optical Character Recognition (OCR) applications, and describes the automatic system that has been built. The images are created by printing and capturing strings of a variable layout, and they are ground-truthed at a later stage in an automatic way. Two datasets are generated, one of which is employed to assess a network's performance.
APA, Harvard, Vancouver, ISO, and other citation styles
28

Cawley, Benjamin Matthew. „Automated generation of personal data reports from relational databases“. Thesis, Manchester Metropolitan University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.485987.

Full text of the source
Abstract:
This thesis presents a novel approach for extracting personal data and automatically generating Personal Data Reports (PDRs) from relational databases. Such PDRs can be used, among other purposes, for compliance with Subject Access Requests (SARs) under Data Protection Acts (DPAs). The proposed approach combines the use of graphs and SQL for the construction of PDRs, and its rationale is based on the fact that some relations in a database, which we denote as RDS relations, hold information about Data Subjects (DSs), while relations linked around RDSs contain additional information about the particular DS. Three methods with different usability characteristics are introduced: 1) the GDS Based Method and 2) the By Schema Browsing Method, which generate SAR PDRs, and 3) the T Based Method, which generates General Purpose PDRs. The novelty of these methodologies is that they do not require any prior knowledge of either the database schema or any query language from the users. The work described in this thesis contributes to the gap in knowledge for DPA compliance, as current data protection systems do not provide facilities for generating personal data reports. The performance results of the GDS approach are presented together with precision and recall measures of the T Based Method. An optimization algorithm that reuses already found data, based on heuristics and hash tables, is employed and its effectiveness verified. We conclude that the GDS and schema browsing methods provide an effective solution and that the automated T Based approach is an effective alternative for generating general purpose data reports, giving an average f-score of 76.5%.
APA, Harvard, Vancouver, ISO, and other citation styles
29

Mahmood, Shahid. „A Systematic Review of Automated Test Data Generation Techniques“. Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4349.

Full text of the source
Abstract:
Automated Test Data Generation (ATDG) is an activity that, in the course of software testing, automatically generates test data for the software under test (SUT). It usually makes the testing more efficient and cost-effective. Test Data Generation (TDG) is crucial for software testing because test data is one of the key factors for determining the quality of any software test during its execution. The multi-phased activity of ATDG involves various techniques for each of its phases. This research field is not new by any means, albeit lately new techniques have been devised and a gradual increase in the level of maturity has brought some diversified trends into it. To this end several ATDG techniques are available, but emerging trends in computing have raised the necessity to summarize and assess the current status of this area, particularly for practitioners, future researchers and students. Further, analysis of the ATDG techniques becomes even more important when Miller et al. [4] highlight the hardship in general acceptance of these techniques. Under this scenario only a systematic review can address the issues, because systematic reviews provide evaluation and interpretation of all available research relevant to a particular research question, topic area, or phenomenon of interest. This thesis, by using a trustworthy, rigorous, and auditable methodology, provides a systematic review that is aimed at presenting a fair evaluation of research concerning ATDG techniques of the period 1997-2006. Moreover, it also aims at identifying probable gaps in research about ATDG techniques of the defined period so as to suggest the scope for further research. This systematic review is basically presented on the pattern of [5 and 8] and follows the techniques suggested by [1]. The articles published in journals and conference proceedings during the defined period are of concern in this review. The motive behind this selection is quite logical in the sense that the techniques discussed in the literature of this period might reflect their suitability for the prevailing software environment of today and are believed to fulfil the needs of the foreseeable future. Furthermore, only automated and/or semi-automated ATDG techniques have been chosen for consideration, manual techniques being out of scope. As a result of the preliminary study the review identifies ATDG techniques and relevant articles of the defined period, whereas the detailed study evaluates and interprets all available research relevant to ATDG techniques. For interpretation and elaboration of the discovered ATDG techniques a novel approach called 'Natural Clustering' is introduced. To accomplish the task of systematic review a comprehensive research method has been developed, and from its practical application important results have been gained. These results are presented in statistical/numeric, diagrammatic, and descriptive forms. Additionally, the thesis also introduces various criteria for the classification of the discovered ATDG techniques and presents a comprehensive analysis of the results of these techniques. Some interesting facts have also been highlighted during the course of discussion. Finally, the discussion culminates with inferences and recommendations which emanate from this analysis. As the research work produced in the thesis is based on a rich amount of trustworthy information, it could also serve the purpose of being an up-to-date guide to ATDG techniques.
APA, Harvard, Vancouver, ISO, and other citation styles
30

Liu, Fangfang. „An ontology-based approach to Automatic Generation of GUI for Data Entry“. ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/1094.

Full text of the source
Abstract:
This thesis reports an ontology-based approach to the automatic generation of highly tailored GUI components through which end users can make customized data requests. Using this GUI generator, a domain expert without any programming skills can browse the data schema through the ontology file of his/her own field, choose attribute fields according to the business's needs, and build a highly customized GUI for end users' data request input. The interface for the domain expert is a tree view structure that shows not only the domain taxonomy categories but also the relationships between classes. By clicking the checkbox associated with each class, the expert indicates his/her choice of the needed information. These choices are stored in a metadata document in XML. From the viewpoint of programmers, the metadata contains no ambiguity; every class in an ontology is unique. The metadata can be put to various uses; here it drives the process of GUI generation. Since every class and every attribute in the class has been formally specified in the ontology, generating the GUI is automatic. This approach has been applied to a use case scenario in the meteorological and oceanographic (METOC) area. The resulting features of this prototype are reported in this thesis.
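As a toy illustration of the final step (the XML layout below is invented, not the thesis's metadata schema), a metadata document of chosen attributes can be rendered into an HTML entry form:

    import xml.etree.ElementTree as ET

    metadata = """<request domain="METOC">
      <field name="waveHeight" type="number"/>
      <field name="observationTime" type="datetime-local"/>
      <field name="stationId" type="text"/>
    </request>"""

    root = ET.fromstring(metadata)
    rows = [f'  <label>{f.get("name")}: '
            f'<input name="{f.get("name")}" type="{f.get("type")}"/></label>'
            for f in root.findall("field")]
    print("<form>\n" + "\n".join(rows) + "\n  <button>Submit</button>\n</form>")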
APA, Harvard, Vancouver, ISO, and other citation styles
31

Doungsa-ard, Chartchai, Keshav P. Dahal, M. Alamgir Hossain and T. Suwannasart. „An automatic test data generation from UML state diagram using genetic algorithm“. IEEE, 2007. http://hdl.handle.net/10454/2492.

Full text of the source
Abstract:
Software testing is a part of the software development process. However, it is the first part to be skipped by software developers when there is limited time to complete the project. Software developers often finish their software construction close to the delivery time, so they usually don't have enough time to create effective test cases for testing their programs. Creating test cases manually is a huge amount of work for software developers under such time pressure. A tool which automatically generates test cases and test data can help software developers create test cases from software designs/models at an early stage of software development (before coding). Heuristic techniques can be applied to create quality test data. In this paper, a GA-based test data generation technique is proposed to generate test data from a UML state diagram, so that test data can be generated before coding. The paper details the GA implementation for generating sequences of triggers for the UML state diagram as test cases. The proposed algorithm has been demonstrated manually for the example of a vending machine.
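A compact sketch of the idea with an invented vending-machine transition table and transition coverage as the fitness (the paper's encoding, operators and fitness may differ):

    import random

    TRANSITIONS = {("idle", "coin"): "paid", ("paid", "coin"): "paid",
                   ("paid", "select"): "vending", ("vending", "done"): "idle",
                   ("paid", "refund"): "idle"}
    TRIGGERS = ["coin", "select", "done", "refund"]

    def fitness(seq):                      # number of transitions exercised
        state, covered = "idle", set()
        for trig in seq:
            nxt = TRANSITIONS.get((state, trig))
            if nxt is not None:
                covered.add((state, trig))
                state = nxt
        return len(covered)

    def evolve(pop_size=50, length=12, generations=100):
        pop = [[random.choice(TRIGGERS) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            for _ in range(pop_size - len(survivors)):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, length)   # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:           # mutation
                    child[random.randrange(length)] = random.choice(TRIGGERS)
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = evolve()
    print(fitness(best), best)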
APA, Harvard, Vancouver, ISO, and other citation styles
32

Karlapudi, Janakiram. „Analysis on automatic generation of BEPS model from BIM model“. Verlag der Technischen Universität Graz, 2020. https://tud.qucosa.de/id/qucosa%3A73547.

Full text of the source
Abstract:
The interlinking of enriched BIM data to Building Energy Performance Simulation (BEPS) models facilitates the data flow throughout the building life cycle. This seamless data transfer from BIM to BEPS models increases design efficiency. To investigate the interoperability between these models, this paper analyses different data transfer methodologies along with input data requirements for the simulation process. Based on the analysed knowledge, a methodology is adopted and demonstrated to identify the quality of the data transfer process. Furthermore, discussions are provided on identified efficiency gaps and future work.
APA, Harvard, Vancouver, ISO, and other citation styles
33

Williams, Robert L. „Synthesis and design of the RSSR spatial mechanism for function generation“. Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/41573.

Full text of the source
Abstract:

The purpose of this thesis is to provide a complete package for the synthesis and design of the RSSR spatial function generating mechanism.

In addition to the introductory material this thesis is divided into three sections. The section on background kinematic theory includes synthesis, analysis, link rotatability, transmission quality, and branching analysis. The second division details the computer application of the kinematic theory. The program RSSRSD has been developed to incorporate the RSSR synthesis and design theory. An example is included to demonstrate the computer-implemented theory.

The third part of this thesis includes miscellaneous mechanism considerations and recommendations for further research.

The theoretical work in this project is a combination of original derivations and applications of the theory in the mechanism literature.


Master of Science
APA, Harvard, Vancouver, ISO, and other citation styles
34

Erande, Abhijit. „Automatic detection of significant features and event timeline construction from temporally tagged data“. Kansas State University, 2009. http://hdl.handle.net/2097/1675.

Full text of the source
Abstract:
Master of Science
Department of Computing and Information Sciences
William H. Hsu
The goal of my project is to summarize large volumes of data and help users to visualize how events have unfolded over time. I address the problem of extracting overview terms from a time-tagged corpus of data and discuss some previous work conducted in this area. I use a statistical approach to automatically extract key terms, form groupings of related terms, and display the resultant groups on a timeline. I use a static corpus composed of news stories, as opposed to an on-line setting where continual additions to the corpus are being made. Terms are extracted using a Named Entity Recognizer, and the importance of a term is determined using the χ² measure. My approach does not address the problem of associating time and date stamps with data, and is restricted to corpora that have been explicitly tagged. The quality of the results obtained is gauged subjectively and objectively by measuring the degree to which events known to exist in the corpus were identified by the system.
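For reference, the χ² score of a term in one time slice reduces to the standard 2×2 contingency-table formula; the counts below are invented.

    def chi_squared(a, b, c, d):
        """a: docs in period with term, b: in period without,
           c: other docs with term,  d: other docs without."""
        n = a + b + c + d
        num = n * (a * d - b * c) ** 2
        den = (a + b) * (c + d) * (a + c) * (b + d)
        return num / den if den else 0.0

    # "earthquake" in 30 of 50 stories this week vs. 10 of 950 elsewhere.
    print(round(chi_squared(30, 20, 10, 940), 1))  # large score = overview term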
APA, Harvard, Vancouver, ISO, and other citation styles
35

Jungfer, Kim Michael. „Semi automatic generation of CORBA interfaces for databases in molecular biology“. Thesis, University College London (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272561.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
36

Jiang, Yiming. „Automated Generation of CAD Big Data for Geometric Machine Learning“. The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1576329384392725.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
37

Barclay, Peter J. „Object oriented modelling of complex data with automatic generation of a persistent representation“. Thesis, Edinburgh Napier University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385918.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
38

Edvardsson, Jon. „Techniques for Automatic Generation of Tests from Programs and Specifications“. Doctoral thesis, Linköping : Department of Computer and Information Science, Linköping universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7829.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
39

Löw, Simon. „Automatic Generation of Patient-specific Gamma Knife Treatment Plans for Vestibular Schwannoma Patients“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273925.

Full text of the source
Abstract:
In this thesis a new fully automatic process for radiotherapy treatment planning with the Leksell Gamma Knife is implemented and evaluated: first, a machine learning algorithm is trained to predict the desired dose distribution; then a convex optimization problem is solved to find the optimal Gamma Knife configuration, using the prediction as the optimization objective. The method is evaluated using Bayesian linear regression, Gaussian processes and convolutional neural networks for the prediction. To this end, the quality of the generated treatment plans is compared to that of the clinical treatment plans, and the relationship between the prediction and the optimization result is analyzed. The convolutional neural network model shows the best performance and predicts realistic treatment plans, which change only minimally under the optimization and are on the same quality level as the clinical plans. The Bayesian linear regression model generates plans on the same quality level, but is not able to predict realistic treatment plans, which leads to substantial changes to the plan under the optimization. The Gaussian process shows the worst performance and is not able to predict plans of the same quality as the clinical plans.
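A toy sketch of the two-stage pipeline on a 1-D "dose profile" (the kernels and shapes are invented stand-ins; real Gamma Knife dose models are far more complex): the model's predicted dose becomes the target of a convex nonnegative least-squares problem over candidate shot weights.

    import numpy as np
    from scipy.optimize import nnls

    x = np.linspace(0.0, 1.0, 200)
    centers = np.linspace(0.1, 0.9, 15)
    # Each column: dose deposited along x by one candidate "shot".
    kernels = np.exp(-((x[:, None] - centers[None, :]) ** 2) / 0.002)

    predicted_dose = np.exp(-((x - 0.45) ** 2) / 0.01)  # stand-in ML prediction

    # Convex stage: minimise ||A w - d|| subject to w >= 0.
    weights, residual = nnls(kernels, predicted_dose)
    print(f"{np.count_nonzero(weights > 1e-6)} active shots, residual {residual:.3f}")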
APA, Harvard, Vancouver, ISO, and other citation styles
40

Zhao, Hongkun. „Automatic wrapper generation for the extraction of search result records from search engines“. Diss., Online access via UMI:, 2007.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
41

Deshpande, Monali A. „Automating Multiple Schema Generation using Dimensional Design Patterns“. University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1242762457.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
42

Lachut, Watkins Alison Elizabeth. „An investigation into adaptive search techniques for the automatic generation of software test data“. Thesis, University of Plymouth, 1996. http://hdl.handle.net/10026.1/1618.

The full text of the source
Annotation:
The focus of this thesis is the use of adaptive search techniques for the automatic generation of software test data. Three adaptive search techniques are used: genetic algorithms (GAs), Simulated Annealing, and Tabu search. In addition to these, hybrid search methods have been developed and applied to the problem of test data generation. The adaptive search techniques are compared to random generation to ascertain the effectiveness of adaptive search. The results indicate that GAs and Simulated Annealing outperform random generation in all test programs. Tabu search outperformed random generation in most tests, but it lost its effectiveness as the amount of input data increased. The hybrid techniques have given mixed results. The two best methods, GAs and Simulated Annealing, are then compared to random generation on a program written to optimise capital budgeting; both perform better than random generation, and Simulated Annealing requires less test data than GAs. Further research highlights a need for investigation into the control parameters of all the adaptive search methods and into attaining test data which covers border conditions.
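For a flavour of how one of these adaptive searches drives test data toward a target branch, here is a minimal simulated annealing sketch. The branch-distance fitness function and the cooling schedule are illustrative assumptions, not the thesis's setup.

```python
import math
import random

def fitness(x: float) -> float:
    """Hypothetical branch-distance objective: 0 when the input
    would drive execution down the target branch (here, x == 42)."""
    return abs(x - 42.0)

def simulated_annealing(steps: int = 10_000, temp: float = 100.0) -> float:
    x = random.uniform(-1000, 1000)
    for step in range(steps):
        t = temp * (1 - step / steps) + 1e-9    # linear cooling schedule
        candidate = x + random.gauss(0, 10)     # neighbour move
        delta = fitness(candidate) - fitness(x)
        # Accept improvements always; accept worse moves with a
        # probability that shrinks as the temperature drops.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
    return x

print(simulated_annealing())  # should land near 42
```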
APA, Harvard, Vancouver, ISO, and other citation styles
43

Salama, Mohamed Ahmed Said. „Automatic test data generation from formal specification using genetic algorithms and case based reasoning“. Thesis, University of the West of England, Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252562.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
44

Ramnerö, David. „Semi-automatic Training Data Generation for Cell Segmentation Network Using an Intermediary Curator Net“. Thesis, Uppsala universitet, Bildanalys och människa-datorinteraktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-332724.

The full text of the source
Annotation:
In this work we create an image analysis pipeline to segment cells from microscopy image data. A portion of the segmented images is manually curated, and this curated data is used to train a Curator network to filter the whole dataset. The curated data is used to train a separate segmentation network to improve the cell segmentation. This technique can easily be applied to different types of microscopy object segmentation.
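The pipeline shape described above can be sketched in a few lines; every component below (the Curator score, the training stub, the 0.5 acceptance threshold) is a hypothetical stand-in rather than code from the thesis.

```python
import random

# Toy stand-ins for the real components; everything here is a
# hypothetical illustration of the pipeline shape, not the thesis code.

def curator_score(image, mask):
    """Stand-in for the Curator network's quality score in [0, 1]."""
    return random.random()

def train_segmenter(pairs):
    """Stand-in for training the final segmentation network."""
    print(f"training segmentation net on {len(pairs)} filtered examples")

images = [f"img_{i}" for i in range(1000)]
masks = [f"mask_{i}" for i in range(1000)]   # automatic segmentations

# Keep only image/mask pairs the Curator judges acceptable.
accepted = [(im, m) for im, m in zip(images, masks)
            if curator_score(im, m) > 0.5]
train_segmenter(accepted)
```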
APA, Harvard, Vancouver, ISO, and other citation styles
45

Harper, Michael Richard Jr. „Automated reaction mechanism generation : data collaboration, Heteroatom implementation, and model validation“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65756.

The full text of the source
Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 281-292).
Nearly two-thirds of the United States' transportation fuels are derived from non-renewable fossil fuels. This demand for fossil fuels requires the United States to import ~60% of its total fuel consumption. Relying so heavily on foreign oil is a threat to national security, not to mention that burning all of these fossil fuels produces increased levels of CO₂, a greenhouse gas that contributes to global warming. This is not a sustainable model. The United States government has recently passed legislation that requires greenhouse gas emissions to be reduced to 80% of the 2005 level by the year 2050. Furthermore, new legislation under the Energy Independence and Security Act (EISA) requires that 36 billion gallons of renewable fuel be blended into transportation fuel by 2022. Solving these problems will require the fuel industry to shift away from petroleum fuels to biomass-derived oxygenated hydrocarbon fuels. These fuels are generated through different biological pathways, using different "bugs." The question arises of which fuel molecules we should be burning and, thus, which bugs we should be engineering. Answering that question requires a detailed understanding of the fuel chemistry under a wide range of operating conditions, i.e. temperature, pressure, fuel equivalence ratio, and fuel percentage. Understanding any fuel chemistry fully requires significant collaboration: experimental datasets that span a range of temperatures, pressures, and equivalence ratios; high-level ab initio quantum chemistry calculations for single species and reactions; and a comprehensive reaction mechanism and reactor model that utilizes the theoretical calculations to make predictions. A shortcoming in any of these three fields limits the knowledge gained from the others. This thesis addresses the third field of the collaboration, namely constructing accurate reaction mechanisms for chemical systems. In this thesis, reaction mechanisms are constructed automatically using the Reaction Mechanism Generator (RMG) software package, which has been developed in the Green Group over the last decade. The predictive capability of any mechanism depends on the parameters employed. For kinetic models, these parameters consist of species thermochemistry and reaction rate coefficients. Many parameters have been reported in the literature, and it is beneficial for RMG to utilize these values instead of relying purely on estimation routines. To this end, the PrIMe Warehouse C/H/O chemistry has been validated and a means of incorporating said data in the RMG database has been implemented. Thus, all kinetic models built by RMG may utilize the community's reported thermochemical parameters.
A kinetic model is evaluated by how accurately it can predict experimental data. In this thesis, it was shown that the RMG software, with the PrIMe Warehouse data collaboration, constructs validated kinetic models, by using RMG to predict the pyrolysis and combustion chemistry of the four butanol isomers. The kinetic model has been validated against many unique datasets, including pyrolysis experiments in a flow reactor, opposed-flow and doped methane diffusion flames, jet-stirred reactors, shock tube and rapid compression machine experiments, and low-pressure and atmospheric premixed laminar flames. The mechanism predicts the datasets remarkably well across all operating conditions: speciation data within a factor of three, ignition delays within a factor of two, and laminar burning velocities within 20% of the experimental measurements. This accurate, comprehensively validated kinetic model for the butanol isomers is valuable in itself, and even more so as a demonstration of the state of the art in predictive chemical kinetics. Although the butanol kinetic model was validated against many datasets, the model contained no nitrogen-containing species and had limited pathways for benzene formation. These limitations were due to the RMG software, as RMG was initially written with only carbon, hydrogen, and oxygen chemistry in mind. While this functionality has been sufficient for modeling the combustion of hydrocarbons, the ability to make predictions for other chemical systems, e.g. nitrogen, sulfur, and silicon compounds, with the same tools is desired. As part of this thesis, the hardcoded C/H/O functional groups were removed from the source code and database, enabling the RMG software to model heteroatom chemistry. These changes in the RMG software also allow for robust modeling of aromatic compounds. The future in the transportation sector is uncertain, particularly regarding which fuels our engines will run on. Understanding the elementary chemistry of combustion will be critical in efficiently screening all potential fuel alternatives. This thesis demonstrates one method of understanding fuel chemistry: detailed reaction mechanisms constructed automatically using the RMG software. Specifically, a method for data collaboration between the RMG software and the PrIMe Warehouse has been established, which will facilitate collaboration between researchers working on combustion experiments, theory, and modeling. The RMG software's algorithm of mechanism construction has been validated by comparing the RMG-generated model predictions for the combustion of the butanol isomers against many unique datasets from the literature; many new species thermochemistry and reaction rate kinetics were calculated, and this validation shows RMG to be a capable tool for constructing reaction mechanisms for combustion. Finally, the RMG source code and database have been updated to allow for robust modeling of heteroatom and aromatic chemistry; these two features will be especially important for future modeling of combustion systems as they relate to the formation of harmful pollutants such as NOx and soot.
by Michael Richard Harper, Jr.
Ph.D.
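The abstract notes that a kinetic model's parameters are species thermochemistry and reaction rate coefficients. As a small illustration, the following sketch evaluates a modified Arrhenius rate coefficient, the standard rate-coefficient form used in combustion mechanisms; the parameter values are illustrative, not taken from any RMG model.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def modified_arrhenius(A: float, n: float, Ea: float, T: float) -> float:
    """k(T) = A * T^n * exp(-Ea / (R*T)), the standard rate-coefficient
    form used in combustion mechanisms such as those RMG generates."""
    return A * T**n * math.exp(-Ea / (R * T))

# Illustrative parameters only (not from any validated mechanism).
k = modified_arrhenius(A=1.0e13, n=0.5, Ea=120_000.0, T=1500.0)
print(f"k(1500 K) = {k:.3e}")
```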
APA, Harvard, Vancouver, ISO, and other citation styles
46

Shahaf, Dafna. „Automatic Generation of Issue Maps: Structured, Interactive Outputs for Complex Information Needs“. Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/210.

The full text of the source
Annotation:
When information is abundant, it becomes increasingly difficult to fit nuggets of knowledge into a single coherent picture. Complex stories spaghetti into branches, side stories, and intertwining narratives; search engines, our most popular navigational tools, are limited in their capacity to explore such complex stories. We propose a methodology for creating structured summaries of information, which we call metro maps. Our proposed algorithm generates a concise structured set of documents that maximizes coverage of salient pieces of information. Most importantly, metro maps explicitly show the relations among retrieved pieces in a way that captures story development. The overarching theme of this work is formalizing characteristics of good maps, and providing efficient algorithms (with theoretical guarantees) to optimize them. Moreover, as information needs vary from person to person, we integrate user interaction into our framework, allowing users to alter the maps to better reflect their interests. Pilot user studies with real-world datasets demonstrate that the method is able to produce maps which help users acquire knowledge efficiently. We believe that metro maps could be powerful tools for any Web user, scientist, or intelligence analyst trying to process large amounts of data.
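Coverage objectives of the kind the abstract describes are typically optimized greedily; below is a hedged sketch of greedy max-coverage over documents treated as term sets. This is a generic illustration of the objective's shape, not the thesis's algorithm, which additionally enforces coherence and map structure.

```python
def greedy_max_coverage(docs: list[set[str]], k: int) -> list[int]:
    """Pick k documents greedily to maximize the number of distinct
    covered terms -- the classic (1 - 1/e)-approximation strategy
    for coverage objectives."""
    covered: set[str] = set()
    chosen: list[int] = []
    for _ in range(k):
        best = max((i for i in range(len(docs)) if i not in chosen),
                   key=lambda i: len(docs[i] - covered), default=None)
        if best is None:
            break
        chosen.append(best)
        covered |= docs[best]
    return chosen

docs = [{"election", "vote"}, {"vote", "recount"}, {"court", "ruling"}]
print(greedy_max_coverage(docs, 2))  # picks documents 0 and 2
```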
APA, Harvard, Vancouver, ISO, and other citation styles
47

Ferreira, Fernando Henrique Inocêncio Borba. „Framework de geração de dados de teste para programas orientados a objetos“. Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-09032013-102901/.

The full text of the source
Annotation:
Test data generation is a mandatory activity of the software testing process. In general, it is carried out by testing practitioners, which makes it costly and makes its automation necessary. Existing frameworks that support this activity are restricted, providing only one data generation technique, a single fitness function to evaluate individuals, and a single selection algorithm. This work describes the JaBTeG (Java Bytecode Test Generation) framework for test data generation. The main characteristic of JaBTeG is that it allows the development of data generation methods by selecting the data generation technique, the fitness function, the selection algorithm, and the structural testing criteria. Using JaBTeG, new methods for test data generation can be developed and experimented with. The framework was associated with JaBUTi (Java Bytecode Understanding and Testing) to support test data creation. Four data generation techniques, two fitness functions, and four selection algorithms were developed to validate the approach proposed by the framework. In addition, five programs with different characteristics were tested with data generated using the methods supported by JaBTeG.
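The framework's defining feature, pluggable generation technique, fitness function, and selection algorithm, can be sketched as follows. JaBTeG itself is a Java framework; this Python sketch with illustrative names only shows the plug-in shape, not its actual API.

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import random

# Hypothetical plug-in points mirroring the framework's design: a
# generation technique, a fitness function, and a selection algorithm.

@dataclass
class TestDataGenerator:
    generate: Callable[[], list[float]]                # generation technique
    fitness: Callable[[float], float]                  # fitness function
    select: Callable[[Sequence[float], Callable], list[float]]  # selection

    def run(self, generations: int = 10) -> list[float]:
        population = self.generate()
        for _ in range(generations):
            population = self.select(population, self.fitness)
        return population

gen = TestDataGenerator(
    generate=lambda: [random.uniform(0, 100) for _ in range(20)],
    fitness=lambda x: -abs(x - 50),                    # toy objective
    # Elitist selection: keep the 10 fittest, add 10 fresh individuals.
    select=lambda pop, fit: sorted(pop, key=fit, reverse=True)[:10]
           + [random.uniform(0, 100) for _ in range(10)],
)
print(max(gen.run(), key=gen.fitness))  # best input found, near 50
```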
APA, Harvard, Vancouver, ISO, and other citation styles
48

Lu, Ruodan. „Automated generation of geometric digital twins of existing reinforced concrete bridges“. Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289430.

The full text of the source
Annotation:
The cost and effort of modelling existing bridges from point clouds currently outweighs the perceived benefits of the resulting model. The time required for generating a geometric Bridge Information Model, a holistic data model which has recently become known as a "Digital Twin", of an existing bridge from Point Cloud Data is roughly ten times greater than laser scanning it. There is a pressing need to automate this process. This is particularly true for the highway infrastructure sector, because Bridge Digital Twin Generation is an efficient means for documenting bridge condition data. Based on a two-year inspection cycle, there is a need for at least 315,000 bridge inspections per annum across the United States and the United Kingdom. This explains why there is a huge market demand for less labour-intensive bridge documentation techniques that can efficiently boost bridge management productivity. Previous research has achieved the automatic generation of surface primitives combined with rule-based classification to create labelled cuboids and cylinders from point clouds. While existing methods work well on synthetic datasets or simplified cases, they encounter huge challenges when dealing with real-world bridge point clouds, which are often unevenly distributed and suffer from occlusions. In addition, real bridge topology is much more complicated than idealized cases: real bridge geometries are defined with curved horizontal alignments and varying vertical elevations and cross-sections. These characteristics increase the modelling difficulty, which none of the existing methods can handle reliably. The objective of this PhD research is to devise, implement, and benchmark a novel framework that can reasonably generate labelled geometric object models of constructed bridges comprising concrete elements in an established data format (i.e. Industry Foundation Classes). This objective is achieved by answering the following research questions: (1) how to effectively detect reinforced concrete bridge components in Point Cloud Data? And (2) how to effectively fit 3D solid models in the format of Industry Foundation Classes to the detected point clusters? The proposed framework employs bridge engineering knowledge that mimics the intelligence of human modellers to detect and model reinforced concrete bridge objects in point clouds. This framework directly extracts structural bridge components and then models them without generating low-level shape primitives. Experimental results suggest that the proposed framework performs quickly and reliably on complex and incomplete real-world bridge point clouds that contain occlusions and unevenly distributed points. The results of experiments on ten real-world bridge point clouds indicate that the framework achieves an overall micro-average detection F1-score of 98.4%, an average modelling accuracy (mean cloud-to-cloud distance) of 7.05 cm, and an average modelling time of merely 37.8 seconds. Compared to the laborious and time-consuming manual practice, the proposed framework can realize a direct time-saving of 95.8%. This is the first framework of its kind to achieve such high and reliable performance of geometric digital twin generation of existing bridges. Contributions: This PhD research provides the unprecedented ability to rapidly model geometric bridge concrete elements, based on quantitative measurements.
The presented research activities will create the foundations for generating meaningful digital twins of existing bridges that can be used over the whole lifecycle of a bridge. As a result, the knowledge created in this PhD research will enable the future development of novel, automated applications for real-time condition assessment and retrofit engineering.
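The abstract reports a micro-averaged detection F1-score; as a brief aside, here is how micro-averaging pools per-class counts before computing a single F1. The class names and counts below are illustrative, not the thesis's data.

```python
def micro_f1(counts: list[tuple[int, int, int]]) -> float:
    """Micro-averaged F1 from per-class (TP, FP, FN) counts:
    pool the counts across classes, then compute a single F1."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts for, e.g., (deck, pier, girder) detections.
print(micro_f1([(95, 2, 1), (88, 1, 3), (90, 2, 2)]))
```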
APA, Harvard, Vancouver, ISO, and other citation styles
49

Tracey, Nigel James. „A search-based automated test-data generation framework for safety-critical software“. Thesis, University of York, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325796.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
50

Cao, Haoliang. „Automating Question Generation Given the Correct Answer“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287460.

The full text of the source
Annotation:
In this thesis, we propose an end-to-end deep learning model for a question generation task. Given a Wikipedia article written in English and a segment of text appearing in the article, the model can generate a simple question whose answer is the given text segment. The model is based on an encoder-decoder architecture. Our experiments show that a model with a fine-tuned BERT encoder and a self-attention decoder gives the best performance. We also propose an evaluation metric for the question generation task, which evaluates both the syntactic correctness and the relevance of the generated questions. According to our analysis of sampled data, the new metric gives better evaluations than other popular metrics for sequence-to-sequence tasks.
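A minimal sketch of the encoder-decoder shape described above, assuming the standard Hugging Face `BertModel` and PyTorch's `TransformerDecoder`; the hyperparameters and the omission of answer-span marking and the causal target mask are simplifications, not the thesis's exact model.

```python
import torch.nn as nn
from transformers import BertModel

class QuestionGenerator(nn.Module):
    def __init__(self, vocab_size: int = 30522, d_model: int = 768):
        super().__init__()
        # Pre-trained BERT encoder, fine-tuned during training.
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, src_mask, tgt_ids):
        # Encode the passage (answer-span marking and the causal
        # decoder mask are omitted here for brevity).
        memory = self.encoder(input_ids=src_ids,
                              attention_mask=src_mask).last_hidden_state
        tgt = self.embed(tgt_ids)
        return self.out(self.decoder(tgt, memory))  # token logits
```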
APA, Harvard, Vancouver, ISO, and other citation styles