Academic literature on the topic 'Automatic data generator'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automatic data generator.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Automatic data generator"

1

Kronawitter, Stefan, Sebastian Kuckuk, Harald Köstler, and Christian Lengauer. "Automatic Data Layout Transformations in the ExaStencils Code Generator." Parallel Processing Letters 28, no. 03 (September 2018): 1850009. http://dx.doi.org/10.1142/s0129626418500093.

Full text
Abstract:
Performance optimizations should focus not only on the computations of an application, but also on the internal data layout. A well-known question is whether a struct of arrays or an array of structs results in higher performance for a particular application. Even though the switch from one to the other is fairly simple to implement, testing both transformations can become laborious and error-prone. Additionally, there are more complex data layout transformations, such as color splitting for multi-color kernels in the domain of stencil codes, that are difficult to apply manually. As a remedy, we propose new flexible layout transformation statements for our domain-specific language ExaSlang that support arbitrary affine transformations. Since our code generator applies them automatically to the generated code, these statements enable simple adaptation of the data layout without any other modifications of the application code. This constitutes a big advance in the ease of testing and evaluating different memory layout schemes in order to identify the best one.
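The struct-of-arrays versus array-of-structs trade-off the abstract contrasts can be sketched generically. This is plain Python for illustration only, not ExaSlang or the generator's output:

```python
# Illustrative sketch (not ExaStencils code): the array-of-structs (AoS) and
# struct-of-arrays (SoA) layouts, and a mechanical transformation between them.

def aos_to_soa(records):
    """Turn an array of structs (list of dicts) into a struct of arrays."""
    if not records:
        return {}
    return {field: [r[field] for r in records] for field in records[0]}

def soa_to_aos(columns):
    """Inverse transformation: struct of arrays back to an array of structs."""
    fields = list(columns)
    length = len(columns[fields[0]]) if fields else 0
    return [{f: columns[f][i] for f in fields} for i in range(length)]

cells = [{"u": 1.0, "v": 2.0}, {"u": 3.0, "v": 4.0}]  # AoS layout
soa = aos_to_soa(cells)                               # SoA layout
assert soa_to_aos(soa) == cells                       # the round trip is lossless
```

Because the transformation is mechanical, a code generator can apply it automatically, which is exactly what makes testing both layouts cheap.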
APA, Harvard, Vancouver, ISO, and other styles
2

Crooks, P., and R. H. Perrott. "An automatic data distribution generator for distributed memory machines." Concurrency: Practice and Experience 10, no. 8 (July 1998): 607–29. http://dx.doi.org/10.1002/(sici)1096-9128(199807)10:8<607::aid-cpe330>3.0.co;2-g.

Full text
3

Brdjanin, Drazen, Danijela Banjac, Goran Banjac, and Slavko Maric. "Automated two-phase business model-driven synthesis of conceptual database models." Computer Science and Information Systems 16, no. 2 (2019): 657–88. http://dx.doi.org/10.2298/csis181010014b.

Full text
Abstract:
Existing approaches to business process model-driven synthesis of data models are characterized by a direct synthesis of a target model based on source models represented by concrete notations, where the synthesis is supported by monolithic (semi)automatic transformation programs. This article presents an approach to automated two-phase business process model-driven synthesis of conceptual database models. It is based on the introduction of a domain specific language (DSL) as an intermediate layer between different source notations and the target notation, which splits the synthesis into two phases: (i) automatic extraction of specific concepts from the source model and their DSL-based representation, and (ii) automated generation of the target model based on the DSL-based representation of the extracted concepts. The proposed approach enables development of modular transformation tools for automatic synthesis of the target model based on business process models represented by different concrete notations. In this article we present an online generator, which implements the proposed approach. The generator is implemented as a web-based, service-oriented tool, which enables automatic generation of the initial conceptual database model represented by the UML class diagram, based on business models represented by two concrete notations.
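The two-phase split described in the abstract can be illustrated with a toy pipeline. The model shapes, field names, and stereotypes below are invented for illustration; this is not the authors' generator:

```python
# Sketch of the two-phase idea: (i) extract concepts from a source
# business-process model into a neutral intermediate representation (IR),
# then (ii) generate a target conceptual model from that IR.

def extract_concepts(process_model):
    """Phase 1: source notation -> intermediate (DSL-like) representation."""
    return {"participants": sorted({t["performer"] for t in process_model["tasks"]}),
            "objects": sorted({o for t in process_model["tasks"] for o in t["objects"]})}

def generate_classes(ir):
    """Phase 2: intermediate representation -> target UML-style class list."""
    return [{"class": name, "stereotype": kind[:-1]}
            for kind in ("participants", "objects")
            for name in ir[kind]]

bpmn_like = {"tasks": [{"performer": "Clerk", "objects": ["Order"]},
                       {"performer": "Manager", "objects": ["Order", "Invoice"]}]}
classes = generate_classes(extract_concepts(bpmn_like))
```

The benefit of the intermediate layer is modularity: a new source notation only needs a new phase-1 extractor, and phase 2 stays unchanged.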
4

Šulc, Stanislav, Vít Šmilauer, and Bořek Patzák. "MUPIF WORKFLOW EDITOR AND AUTOMATIC CODE GENERATOR." Acta Polytechnica CTU Proceedings 26 (March 17, 2020): 107–11. http://dx.doi.org/10.14311/app.2020.26.0107.

Full text
Abstract:
Wrapping applications or codes in the MuPIF API model enables easy integration of such APIs into any workflow representing a complex multiphysical simulation. This concept also enables automatic generation of the computational code for a given workflow structure. This article describes a 'workflow generator' tool for the code generation, together with a 'workflow editor' graphical interface for the interactive definition of the workflow structure and its inner data dependencies. The usage is explained on a thermo-mechanical simulation.
5

Mohammed, Sura Jasim. "New Algorithm of Automatic Complex Password Generator Employing Genetic Algorithm." JOURNAL OF UNIVERSITY OF BABYLON for Pure and Applied Sciences 26, no. 2 (January 18, 2018): 295–302. http://dx.doi.org/10.29196/jub.v26i2.546.

Full text
Abstract:
Due to the increase in information sharing, internet popularization, e-commerce transactions, and data transfer, security and authenticity have become important and necessary subjects. In this paper an automated scheme is proposed to generate a strong and complex password based on entered initial data such as text (meaningful and simple information or not), encoding it, and then employing a genetic algorithm, using its crossover and mutation operations to generate data different from the entered data. The generated password is non-guessable and can be used in many different applications and internet services such as social networks, secured systems, distributed systems, and online services. The proposed password generator achieves diffusion, randomness, and confusion, which are necessary, required, and targeted in the resulting password. In addition, the length of the generated password differs from the length of the initial data, and any simple change in the initial data produces a clear change in the generated password. The proposed work was implemented in the Visual Basic programming language.
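The crossover-and-mutation idea in the abstract can be sketched generically. The alphabet, parameters, and overall flow below are invented for illustration; this is not the paper's actual algorithm:

```python
import random

# Hypothetical sketch: derive a password from seed text by applying
# genetic-algorithm crossover and mutation, so the output differs
# strongly from the input while remaining reproducible from it.

ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%"

def crossover(a, b):
    """Splice two candidate strings at a random cut point."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(s, rate=0.3):
    """Replace each character with a random one, with the given probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def generate_password(seed, generations=10):
    pool = [mutate(seed, 1.0), mutate(seed, 1.0)]  # two fully re-encoded variants
    for _ in range(generations):
        child = mutate(crossover(*pool))
        pool = [child, pool[0]]
    return pool[0]
```

Note that a real password generator would additionally need a cryptographically secure source of randomness; `random` is used here only to keep the sketch short.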
6

Wills, Graham, and Leland Wilkinson. "AutoVis: Automatic Visualization." Information Visualization 9, no. 1 (December 18, 2008): 47–69. http://dx.doi.org/10.1057/ivs.2008.27.

Full text
Abstract:
AutoVis is a data viewer that responds to content (text, relational tables, hierarchies, streams, images) and displays the information appropriately (that is, as an expert would). Its design rests on the grammar of graphics, scagnostics and a modeler based on the logic of statistical analysis. We distinguish an automatic visualization system (AVS) from an automated visualization system. The former automatically makes decisions about what is to be visualized. The latter is a programming system for automating the production of charts, graphs and visualizations. An AVS is designed to provide a first glance at data before modeling and analysis are done. AVS is designed to protect researchers from ignoring missing data, outliers, miscodes and other anomalies that can violate statistical assumptions or otherwise jeopardize the validity of models. The design of this system incorporates several unique features: (1) a spare interface: analysts simply drag a data source into an empty window; (2) a graphics generator that requires no user definitions to produce graphs; (3) a statistical analyzer that protects users from false conclusions; and (4) a pattern recognizer that responds to the aspects (density, shape, trend, and so on) that professional statisticians notice when investigating data sets.
7

Klimek, Radosław, Katarzyna Grobler-Dębska, and Edyta Kucharska. "System for automatic generation of logical formulas." MATEC Web of Conferences 252 (2019): 03005. http://dx.doi.org/10.1051/matecconf/201925203005.

Full text
Abstract:
The satisfiability problem (SAT) is one of the classical and also most important problems of the theoretical computer science and has a direct bearing on numerous practical cases. It is one of the most prominent problems in artificial intelligence and has important applications in many fields, such as hardware and software verification, test-case generation, AI planning, scheduling, and data structures that allow efficient implementation of search space pruning. In recent years, there has been a huge development in SAT solvers, especially CDCL-based solvers (Conflict-Driven Clause-Learning) for propositional logic formulas. The goal of this paper is to design and implement a simple but effective system for random generation of long and complex logical formulas with a variety of difficulties encoded inside. The resulting logical formulas, i.e. problem instances, could be used for testing existing SAT solvers. The entire system would be widely available as a web application in the client-server architecture. The proposed system enables generation of syntactically correct logical formulas with a random structure, encoded in a manner understandable to SAT Solvers. Logical formulas can be presented in different formats. A number of parameters affect the form of generated instances, their complexity and physical dimensions. The randomness factor can be entered to every generated formula. The developed application is easy to modify and open for further extensions. The final part of the paper describes examples of solvers’ tests of logical formulas generated by the implemented generator.
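A minimal random formula generator in the spirit of the system described emits a uniform random k-CNF instance in the DIMACS format that SAT solvers read. The parameters (variables, clauses, literals per clause) control the instance's difficulty; this is an illustration, not the authors' generator:

```python
import random

def random_cnf(num_vars, num_clauses, k=3, seed=None):
    """Generate a random k-CNF instance in DIMACS CNF format."""
    rng = random.Random(seed)
    lines = [f"p cnf {num_vars} {num_clauses}"]
    for _ in range(num_clauses):
        variables = rng.sample(range(1, num_vars + 1), k)   # distinct variables
        clause = [v if rng.random() < 0.5 else -v for v in variables]
        lines.append(" ".join(map(str, clause)) + " 0")     # clauses end in 0
    return "\n".join(lines)

print(random_cnf(num_vars=5, num_clauses=3, seed=42))
```

For 3-CNF, varying the clause-to-variable ratio around the well-known phase transition (roughly 4.27) is a common way to tune instance hardness.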
8

Puduppully, Ratish, and Mirella Lapata. "Data-to-text Generation with Macro Planning." Transactions of the Association for Computational Linguistics 9 (2021): 510–27. http://dx.doi.org/10.1162/tacl_a_00381.

Full text
Abstract:
Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or variants thereof. These models generate text that is fluent (but often imprecise) and perform quite poorly at selecting appropriate content and ordering it coherently. To overcome some of these issues, we propose a neural model with a macro planning stage followed by a generation stage, reminiscent of traditional methods which embrace separate modules for planning and surface realization. Macro plans represent the high-level organization of important content such as entities, events, and their interactions; they are learned from data and given as input to the generator. Extensive experiments on two data-to-text benchmarks (RotoWire and MLB) show that our approach outperforms competitive baselines in terms of automatic and human evaluation.
9

ZHANG, YUANRUI, JUN LIU, EMRE KULTURSAY, MAHMUT KANDEMIR, NIKOS PITSIANIS, and XIAOBAI SUN. "AUTOMATIC PARALLEL CODE GENERATION FOR NUFFT DATA TRANSLATION ON MULTICORES." Journal of Circuits, Systems and Computers 21, no. 02 (April 2012): 1240004. http://dx.doi.org/10.1142/s021812661240004x.

Full text
Abstract:
The nonuniform FFT (NuFFT) is widely used in many applications. Focusing on the most time-consuming part of the NuFFT computation, the data translation step, in this paper, we develop an automatic parallel code generation tool for data translation targeting emerging multicores. The key components of this tool are two scalable parallelization strategies, namely, the source-driven parallelization and the target-driven parallelization. Both these strategies employ equally sized geometric tiling and binning to improve data locality while trying to balance workloads across the cores through dynamic task allocation. They differ in the partitioning and scheduling schemes used to guarantee mutual exclusion in data updates. This tool also consists of a code generator and a code optimizer for the data translation. We evaluated our tool on a commercial multicore machine for both 2D and 3D inputs under different sample distributions with large data set sizes. The results indicate that both parallelization strategies have good scalability as the number of cores and the number of dimensions of data space increase. In particular, the target-driven parallelization outperforms the other when samples are nonuniformly distributed. The experiments also show that our code optimizations can bring about 32%–43% performance improvement to the data translation step of NuFFT.
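The geometric binning that both parallelization strategies rely on can be sketched in a generic 2D form. This is an illustration of the binning idea only, not the paper's parallel code:

```python
from collections import defaultdict

# Sketch: nonuniformly distributed samples are binned into equally sized
# geometric tiles, so each core can process one tile's updates with good
# locality and without touching another tile's grid region.

def bin_samples(samples, domain=1.0, tiles_per_dim=4):
    """Assign 2D samples in [0, domain)^2 to a tiles_per_dim x tiles_per_dim grid."""
    bins = defaultdict(list)
    for x, y in samples:
        tx = min(int(x / domain * tiles_per_dim), tiles_per_dim - 1)
        ty = min(int(y / domain * tiles_per_dim), tiles_per_dim - 1)
        bins[(tx, ty)].append((x, y))
    return bins

samples = [(0.1, 0.2), (0.9, 0.9), (0.12, 0.22)]
tiles = bin_samples(samples)
# nearby samples land in the same tile and can be handled by one core
```

Dynamic task allocation then amounts to handing whole tiles to idle cores, which is how workload balance is maintained for skewed sample distributions.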
10

Magdalenić, Ivan, Danijel Radošević, and Dragutin Kermek. "Implementation Model of Source Code Generator." Journal of Communications Software and Systems 7, no. 2 (June 22, 2011): 71. http://dx.doi.org/10.24138/jcomss.v7i2.180.

Full text
Abstract:
The on-demand generation of source code and its execution is essential if computers are expected to play an active role in information discovery and retrieval. This paper presents an implementation model of a source code generator whose purpose is to generate source code on demand. The implementation of the source code generator is fully configurable, and its adaptation to a new application is done by changing the generator configuration, not the generator itself. The advantage of using the source code generator is the rapid and automatic development of a family of applications once the necessary program templates and generator configuration are made. The model of implementation of the source code generator is general, and the implemented source code generator can be used in different areas. We use a source code generator for the dynamic generation of ontology-supported Web services for data retrieval and for building different kinds of web applications.
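The template-plus-configuration pattern the abstract describes can be sketched minimally. The template text and configuration fields below are invented; this is not the paper's generator:

```python
from string import Template

# Sketch of a configurable generator: the generator itself is fixed, and
# adapting it to a new application means changing templates and
# configuration, not the generator code.

TEMPLATE = Template(
    "def get_$entity(${entity}_id):\n"
    "    return db.fetch('$table', ${entity}_id)\n")

def generate_source(config):
    """Emit one accessor function per configured entity."""
    return "\n".join(TEMPLATE.substitute(entity=e["entity"], table=e["table"])
                     for e in config)

config = [{"entity": "user", "table": "users"},
          {"entity": "order", "table": "orders"}]
print(generate_source(config))
```

Swapping in a different `config` (or a different template file) yields a different family of applications with no change to `generate_source` itself, which is the point of the model.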

Dissertations / Theses on the topic "Automatic data generator"

1

Kupferschmidt, Benjamin, and Albert Berdugo. "DESIGNING AN AUTOMATIC FORMAT GENERATOR FOR A NETWORK DATA ACQUISITION SYSTEM." International Foundation for Telemetering, 2006. http://hdl.handle.net/10150/604157.

Full text
Abstract:
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California
In most current PCM based telemetry systems, an instrumentation engineer manually creates the sampling format. This time consuming and tedious process typically involves manually placing each measurement into the format at the proper sampling rate. The telemetry industry is now moving towards Ethernet-based systems comprised of multiple autonomous data acquisition units, which share a single global time source. The architecture of these network systems greatly simplifies the task of implementing an automatic format generator. Automatic format generation eliminates much of the effort required to create a sampling format because the instrumentation engineer only has to specify the desired sampling rate for each measurement. The system handles the task of organizing the format to comply with the specified sampling rates. This paper examines the issues involved in designing an automatic format generator for a network data acquisition system.
2

Zhou, Yu. "AUTOMATIC GENERATION OF WEB APPLICATIONS AND MANAGEMENT SYSTEM." CSUSB ScholarWorks, 2017. https://scholarworks.lib.csusb.edu/etd/434.

Full text
Abstract:
One of the major difficulties in web application design is the tediousness of constructing new web pages from scratch. For traditional web application projects, designers usually design and implement the project step by step, in detail. My project is called "automatic generation of web applications and management system." This web application generator can generate generic and customized web applications based on software engineering theories. A flow-driven methodology, using Business Process Model and Notation (BPMN), drives the project. The modules of the project are: database, web server, HTML page, functionality, financial analysis model, customer, and BPMN. The BPMN section is the most important part of the entire project, because most of the work and data flow depends on the BPMN flow engine. There are two ways to use the project. One way is to go to the main page, choose one web app template, and click the generate button. The other way is for customers to request special orders; the project then provides suitable software development methodologies to follow. After a software development life cycle, customers receive their required product.
3

Akinci, Arda. "Universal Command Generator For Robotics And Cnc Machinery." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610579/index.pdf.

Full text
Abstract:
In this study a universal command generator has been designed for robotics and CNC machinery. Encoding techniques have been utilized to represent the commands, and their efficiencies have been discussed. The developed algorithm generates the trajectory of the end-effector with linear and circular interpolation in an offline fashion; the corresponding joint states and their error envelopes are computed with a numerical inverse kinematic solver to a predefined precision. Finally, the command encoder uses the resulting data and produces the representation of positions in joint space with the proposed encoding techniques, depending on the error tolerance for each joint. The encoding methods considered in this thesis are: lossless data compression via higher-order finite differences, Huffman coding and arithmetic coding techniques; polynomial fitting methods with Chebyshev, Legendre and Bernstein polynomials; and finally Fourier and wavelet transformations. The algorithm is simulated for the Puma 560 and Stanford manipulators on a trajectory in order to evaluate the performance of the above-mentioned techniques (i.e., approximation error, memory requirement, number of commands generated). According to the case studies, Chebyshev polynomials were determined to be the most suitable technique for command generation. The proposed methods have been implemented in the MATLAB environment due to its versatile toolboxes. This research paves the way toward an encoding/decoding standard for an advanced command generator scheme for computer numerically controlled (CNC) machines in the near future.
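One of the encoding families the thesis evaluates, lossless compression via higher-order finite differences, is easy to sketch: a smooth sampled trajectory has small higher-order differences, which encode more compactly. The trajectory values below are invented; this is an illustration of the idea, not the thesis implementation:

```python
# Delta-encode a sampled joint trajectory with repeated finite differencing,
# keeping the leading value of each pass so the sequence can be rebuilt exactly.

def diff_encode(samples, order=2):
    seq = list(samples)
    heads = []
    for _ in range(order):
        heads.append(seq[0])                       # remember the starting value
        seq = [b - a for a, b in zip(seq, seq[1:])]  # take first differences
    return heads, seq

def diff_decode(heads, seq):
    for h in reversed(heads):                      # undo each differencing pass
        out = [h]
        for d in seq:
            out.append(out[-1] + d)
        seq = out
    return seq

trajectory = [t * t for t in range(8)]             # smooth joint positions
heads, deltas = diff_encode(trajectory, order=2)   # second differences: all 2s
assert diff_decode(heads, deltas) == trajectory    # the round trip is lossless
```

The constant residue after differencing is what makes a follow-on entropy coder (e.g., Huffman or arithmetic coding, also evaluated in the thesis) effective.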
4

Naňo, Andrej. "Automatické generování testovacích dat informačních systémů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445520.

Full text
Abstract:
ISAGEN is a tool for the automatic generation of structurally complex test inputs that imitate real communication in the context of modern information systems. Complex, typically tree-structured data currently represents the standard means of transmitting information between nodes in distributed information systems. The automatic generator ISAGEN is founded on the methodology of data-driven testing and uses concrete data from the production environment as the primary characteristic and specification that guides the generation of new, similar data for test cases satisfying given combinatorial adequacy criteria. The main contribution of this thesis is a comprehensive proposal of automated data generation techniques together with an implementation which demonstrates their usage. The created solution enables testers to create more relevant testing data, representing production-like communication in information systems.
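The core data-driven idea, deriving structurally similar test inputs from a concrete production message, can be sketched generically. The message shape and field names below are invented; this is not ISAGEN's algorithm:

```python
import copy
import random

# Sketch: take a tree-structured production message and derive similar test
# inputs by mutating leaf values while preserving the tree's structure.

def mutate_leaves(node, rng, rate=0.5):
    if isinstance(node, dict):
        return {k: mutate_leaves(v, rng, rate) for k, v in node.items()}
    if isinstance(node, list):
        return [mutate_leaves(v, rng, rate) for v in node]
    if isinstance(node, int) and rng.random() < rate:
        return node + rng.randint(1, 100)          # perturb numeric leaves only
    return node

production_msg = {"order": {"id": 42, "items": [{"sku": "A1", "qty": 2}]}}
rng = random.Random(7)
test_inputs = [mutate_leaves(copy.deepcopy(production_msg), rng) for _ in range(3)]
# each derived input keeps the exact tree shape of the production message
```

A real tool would additionally enforce combinatorial adequacy criteria over the generated values; this sketch only shows the structure-preserving mutation step.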
5

Offutt, Andrew Jefferson VI. "Automatic test data generation." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/9167.

Full text
6

Kraut, Daniel. "Generování modelů pro testy ze zdrojových kódů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403157.

Full text
Abstract:
The aim of this master's thesis is to design and implement a tool for the automatic generation of paths in source code. First, a study of model-based testing was carried out, along with a possible design for the desired automatic generator based on coverage criteria defined on a CFG model. The main part of the thesis is the tool design and a description of its implementation. The tool supports many coverage criteria, which allows its user to focus on a specific artefact of the system under test. Moreover, the tool accepts additional requirements on the size of the generated test suite, reflecting real-world practical usage. The generator was implemented in C++, with a web interface in Python, which is also used to integrate the tool into the Testos platform.
7

Cousins, Michael Anthony. "Automated structural test data generation." Thesis, University of Portsmouth, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261234.

Full text
8

Holmes, Stephen Terry. "Heuristic generation of software test data." Thesis, University of South Wales, 1996. https://pure.southwales.ac.uk/en/studentthesis/heuristic-generation-of-software-test-data(aa20a88e-32a5-4958-9055-7abc11fbc541).html.

Full text
Abstract:
Incorrect system operation can, at worst, be life threatening or financially devastating. Software testing is a destructive process that aims to reveal software faults. Selection of good test data can be extremely difficult. To ease and assist test data selection, several test data generators have emerged that use a diverse range of approaches. Adaptive test data generators use existing test data to produce further effective test data. It has been observed that there is little empirical data on the adaptive approach. This thesis presents the Heuristically Aided Testing System (HATS), which is an adaptive test data generator that uses several heuristics. A heuristic embodies a test data generation technique. Four heuristics have been developed. The first heuristic, Direct Assignment, generates test data for conditions involving an input variable and a constant. The Alternating Variable heuristic determines a promising direction to modify input variables, then takes ever increasing steps in this direction. The Linear Predictor heuristic performs linear extrapolations on input variables. The final heuristic, Boundary Follower, uses input domain boundaries as a guide to locate hard-to-find solutions. Several Ada procedures have been tested with HATS; a quadratic equation solver, a triangle classifier, a remainder calculator and a linear search. Collectively they present some common and rare test data generation problems. The weakest testing criterion HATS has attempted to satisfy is all branches. Stronger, mutation-based criteria have been used on two of the procedures. HATS has achieved complete branch coverage on each procedure, except where there is a higher level of control flow complexity combined with non-linear input variables. Both branch and mutation testing criteria have enabled a better understanding of the test data generation problems and contributed to the evolution of heuristics and the development of new heuristics. 
This thesis contributes the following to knowledge: Empirical data on the adaptive heuristic approach to test data generation. How input domain boundaries can be used as guidance for a heuristic. An effective heuristic termination technique based on the heuristic's progress. A comparison of HATS with random testing. Properties of the test software that indicate when HATS will take less effort than random testing are identified.
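Of the four heuristics described, the Alternating Variable heuristic is easy to sketch: probe each input variable in turn, and when a direction improves the branch-distance objective, take ever-increasing steps that way. This is a generic illustration of the technique, not the HATS implementation:

```python
# Minimize a branch-distance objective by alternating over input variables
# and accelerating (doubling the step) in any direction that improves it.

def alternating_variable(objective, inputs, max_iters=1000):
    x = list(inputs)
    for _ in range(max_iters):
        if objective(x) == 0:              # target branch condition satisfied
            return x
        for i in range(len(x)):
            for direction in (-1, 1):
                step = 1
                probe = list(x)
                probe[i] += direction * step
                while objective(probe) < objective(x):
                    x = probe              # accept the improvement
                    step *= 2              # accelerate in the promising direction
                    probe = list(x)
                    probe[i] += direction * step
    return x

# Find inputs reaching a branch like `if a == 2 * b + 7:` by minimizing
# the branch distance |a - (2 * b + 7)|.
solution = alternating_variable(lambda v: abs(v[0] - (2 * v[1] + 7)), [0, 0])
assert solution[0] == 2 * solution[1] + 7
```

As the thesis notes for HATS, such local search works well on linear conditions but can stall on higher control-flow complexity combined with non-linear inputs, which is why multiple heuristics are combined.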
9

Ege, Raimund K. "Automatic generation of interfaces using constraints. /." Full text open access at:, 1987. http://content.ohsu.edu/u?/etd,144.

Full text
10

Kupferschmidt, Benjamin, and Eric Pesciotta. "Automatic Format Generation Techniques for Network Data Acquisition Systems." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606089.

Full text
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Configuring a modern, high-performance data acquisition system is typically a very timeconsuming and complex process. Any enhancement to the data acquisition setup software that can reduce the amount of time needed to configure the system is extremely useful. Automatic format generation is one of the most useful enhancements to a data acquisition setup application. By using Automatic Format Generation, an instrumentation engineer can significantly reduce the amount of time that is spent configuring the system while simultaneously gaining much greater flexibility in creating sampling formats. This paper discusses several techniques that can be used to generate sampling formats automatically while making highly efficient use of the system's bandwidth. This allows the user to obtain most of the benefits of a hand-tuned, manually created format without spending excessive time creating it. One of the primary techniques that this paper discusses is an enhancement to the commonly used power-of-two rule, for selecting sampling rates. This allows the system to create formats that use a wider variety of rates. The system is also able to handle groups of related measurements that must follow each other sequentially in the sampling format. This paper will also cover a packet based formatting scheme that organizes measurements based on common sampling rates. Each packet contains a set of measurements that are sampled at a particular rate. A key benefit of using an automatic format generation system with this format is the optimization of sampling rates that are used to achieve the best possible match for each measurement's desired sampling rate.
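The commonly used power-of-two rule that the paper enhances can be shown minimally: each measurement's desired rate is rounded up to the next power of two so that every rate divides evenly into the format's major frame. The measurement names and rates below are invented, and the paper's actual algorithm goes beyond this baseline:

```python
import math

def power_of_two_rate(desired_hz):
    """Round a desired sampling rate up to the next power of two."""
    return 1 if desired_hz <= 1 else 2 ** math.ceil(math.log2(desired_hz))

requested = {"strain_gauge": 90, "thermocouple": 3, "accelerometer": 1500}
assigned = {name: power_of_two_rate(hz) for name, hz in requested.items()}
# 90 -> 128 Hz, 3 -> 4 Hz, 1500 -> 2048 Hz: every assigned rate divides the
# fastest one, so each measurement recurs at fixed positions in the major frame.
```

The cost of this baseline is oversampling (90 Hz becomes 128 Hz); the enhancement discussed in the paper admits a wider variety of rates to reduce that wasted bandwidth.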

Books on the topic "Automatic data generator"

1

Yu, Meng-Lin. Automatic random logic layout synthesis: A module generator approach. Urbana, Ill: Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1986.

Find full text
2

Automated password generator (APG). Gaithersburg, MD: Dept. of Commerce, National Institute of Standards and Technology, 1994.

Find full text
3

Zhang, Jian, Zhiqiang Zhang, and Feifei Ma. Automatic Generation of Combinatorial Test Data. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-43429-1.

Full text
4

Logunova, Oksana, Petr Romanov, and Elena Il'ina. Processing of experimental data on a computer. INFRA-M Academic Publishing LLC, 2020. http://dx.doi.org/10.12737/1064882.

Full text
Abstract:
The textbook provides information about the main methods and tools for automating computational processes used in data processing; methods for representing and generating models of experimental data; data models and classification of processing tasks; and the organization of the user interface in automated systems for processing experimental data. Contains structured chapters on the specifics of experimental research. The features of using software for processing experimental data are clearly and logically described. Theoretical material and basic algorithms for processing experimental data used in industrial statistics are presented. Examples of processing experimental data in the field of metallurgy and management in higher education are given. Meets the requirements of the Federal state educational standards of higher education of the latest generation. For students and postgraduates of higher educational institutions.
5

Diamant, P. E. Automatic generation of mixed signal test programs from circuit simulation data. Manchester: UMIST, 1994.

Find full text
6

Automatic mesh generation: Application to finite element methods. Chichester: J. Wiley, 1991.

Find full text
7

Strock, O. J. Telemetry computer systems: The new generation. Research Triangle Park, NC: Instrument Society of America, 1988.

Find full text
8

Keßler, Christoph W. Automatic Parallelization: New Approaches to Code Generation, Data Distribution, and Performance prediction. Wiesbaden: Vieweg+Teubner Verlag, 1994.

Find full text
9

Ponzetto, Simone Paolo. Knowledge acquisition from a collaboratively generated encyclopedia. Heidelberg: IOS Press, 2010.

Find full text

Book chapters on the topic "Automatic data generator"

1

Barga, Roger S., and Luciano A. Digiampietri. "Automatic Generation of Workflow Provenance." In Provenance and Annotation of Data, 1–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11890850_1.

Full text
2

Pinto, Fábio, Carlos Soares, and João Mendes-Moreira. "Towards Automatic Generation of Metafeatures." In Advances in Knowledge Discovery and Data Mining, 215–26. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-31753-3_18.

Full text
3

Liang, Yuan, Song-Hai Zhang, and Ralph Robert Martin. "Automatic Data-Driven Room Design Generation." In Next Generation Computer Animation Techniques, 133–48. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69487-0_10.

Full text
4

Zhang, Jian, Zhiqiang Zhang, and Feifei Ma. "Introduction to Combinatorial Testing." In Automatic Generation of Combinatorial Test Data, 1–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45919-5_1.

Full text
5

Zhang, Jian, Zhiqiang Zhang, and Feifei Ma. "Mathematical Construction Methods." In Automatic Generation of Combinatorial Test Data, 17–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45919-5_2.

Full text
6

Zhang, Jian, Zhiqiang Zhang, and Feifei Ma. "One Test at a Time." In Automatic Generation of Combinatorial Test Data, 27–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45919-5_3.

Full text
7

Zhang, Jian, Zhiqiang Zhang, and Feifei Ma. "The IPO Family." In Automatic Generation of Combinatorial Test Data, 41–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45919-5_4.

Full text
8

Zhang, Jian, Zhiqiang Zhang, and Feifei Ma. "Evolutionary Computation and Metaheuristics." In Automatic Generation of Combinatorial Test Data, 51–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45919-5_5.

9. Zhang, Jian, Zhiqiang Zhang, and Feifei Ma. "Backtracking Search." In Automatic Generation of Combinatorial Test Data, 61–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45919-5_6.

10. Zhang, Jian, Zhiqiang Zhang, and Feifei Ma. "Tools and Benchmarks." In Automatic Generation of Combinatorial Test Data, 79–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45919-5_7.


Conference papers on the topic "Automatic data generator"

1. Syahaneim, Raja Asilah Hazwani, Nur Wahida, Siti Intan Shafikah, Zuraini, and Puteri Nor Ellyza. "Automatic Artificial Data Generator: Framework and implementation." In 2016 International Conference on Information and Communication Technology (ICICTM). IEEE, 2016. http://dx.doi.org/10.1109/icictm.2016.7890777.

2. Ohki, Hidehiro, Moriyuki Shirazawa, Keiji Gyohten, Naomichi Sueda, and Seiki Inoue. "Sport Data Animating - An Automatic Animation Generator from Real Soccer Data." In 2009 International Conference on Complex, Intelligent and Software Intensive Systems (CISIS). IEEE, 2009. http://dx.doi.org/10.1109/cisis.2009.185.

3. Cosulschi, Mirel, Adrian Giurca, Bogdan Udrescu, Nicolae Constantinescu, and Mihai Gabroveanu. "HTML Pattern Generator--Automatic Data Extraction from Web Pages." In 2006 Eighth International Symposium on Symbolic and Numeric Algorithms for Scientific Computing. IEEE, 2006. http://dx.doi.org/10.1109/synasc.2006.43.

4. Lmati, Imane, Habib Benlahmar, and Naceur Achtaich. "Towards an automatic generator of mathematical exercises based on semantic." In BDCA'17: 2nd International Conference on Big Data, Cloud and Applications. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3090354.3090356.

5. Deb Nath, Rudra Pratap, Hanif Seddiqui, and Masaki Aono. "A novel automatic property weight generator for semantic data integration." In 2013 16th International Conference on Computer and Information Technology (ICCIT). IEEE, 2014. http://dx.doi.org/10.1109/iccitechn.2014.6997311.

6. Malhotra, Ruchika, Poornima, and Nitish Kumar. "Automatic test data generator: A tool based on search-based techniques." In 2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). IEEE, 2016. http://dx.doi.org/10.1109/icrito.2016.7785020.

7. Di Carlo, Stefano, Giulio Gambardella, Marco Indaco, Daniele Rolfo, and Paolo Prinetto. "MarciaTesta: An Automatic Generator of Test Programs for Microprocessors' Data Caches." In 2011 IEEE 20th Asian Test Symposium (ATS). IEEE, 2011. http://dx.doi.org/10.1109/ats.2011.78.

8. Salagame, Raviprakash R. "Automated Shape Optimization Using Parametric Solid Models and p-Adaptive Finite Element Analysis." In ASME 1997 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/detc97/dac-3762.

Abstract:
A shape optimization capability which is fully integrated with geometry data is developed in this paper. A key feature of the system is the direct use of a parametric solid model as the primary design model for shape optimization. This choice leads to a tight coupling between optimization and geometry, a highly desirable feature for an automated design environment. The approach exploits the parametric engine of the solid modeler to update the geometry at the end of each iteration. Responses are generated using polyFEM, a p-adaptive solver supported by a fully automatic three-dimensional p-element mesh generator. It is illustrated that the use of p-adaptive technology and a parametrized solid model assists in achieving a high degree of automation.
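The iterative loop this abstract describes (update the parametric geometry, re-run the analysis, check the responses) can be sketched as below. Note that `evaluate_mass_and_stress` is a hypothetical stand-in for the solid-modeler update and the p-adaptive polyFEM solve; a toy algebraic response keeps the loop runnable.

```python
def evaluate_mass_and_stress(params):
    """Hypothetical stand-in for one design iteration: in the paper this is
    a parametric solid-model update followed by a p-adaptive FEA solve.
    Here a toy algebraic response keeps the loop runnable."""
    thickness, radius = params
    mass = thickness * radius                   # toy mass response
    stress = 1.0 / (thickness * radius ** 2)    # toy stress response
    return mass, stress

def optimize_shape(params, stress_limit=2.0, step=0.05, iters=100):
    """Shrink the design parameters until the stress constraint becomes
    active, re-evaluating the (mock) model after each geometry update."""
    p = list(params)
    for _ in range(iters):
        trial = [max(0.1, v - step) for v in p]  # propose a lighter design
        mass, stress = evaluate_mass_and_stress(trial)
        if stress > stress_limit:                # constraint violated: stop
            break
        p = trial                                # accept the lighter design
    return p
```

The point of the tight coupling the abstract argues for is exactly that the geometry update inside the loop is automatic, so no manual remodeling is needed between iterations.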
9. Feklisov, Egor, Mihail Zinderenko, and Vladimir Frolov. "Procedural interior generation for artificial intelligence training and computer graphics." In International Conference "Computing for Physics and Technology - CPT2020". Bryansk State Technical University, 2020. http://dx.doi.org/10.30987/conferencearticle_5fce2771c14fa7.77481925.

Abstract:
Since the creation of computers, storing and creating data for various tasks has been a lingering problem. In computer graphics and video games, there has been a constant need for assets. Although storage space is no longer one of developers' prime concerns, the need to automate asset creation remains. The graphical fidelity that modern audiences and applications demand requires a great deal of costly work from artists and designers. Automatic generation of 3D scenes is of critical importance for Artificial Intelligence (AI) robotics training, where the volume of data required by machine learning algorithms is too large for any single person to review. A separate but necessary task for an integrated solution is furniture generation and placement, along with material and lighting randomization. In this paper we propose an interior generator for computer graphics and robotics learning applications. The suggested framework is able to generate and render interiors with furniture at photo-realistic quality. We combine existing algorithms for generating plans and arranging interiors, and add material and lighting randomization. Our solution contains a semantic database of 3D models and materials, which allows the generator to produce realistic randomized scenes with per-pixel masks for training detection and segmentation algorithms.
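The plan-then-furnish pipeline described in this abstract can be illustrated with a minimal seeded rejection-sampling placer; the footprint sizes and seed below are arbitrary illustration values, not the paper's actual algorithm.

```python
import random

def overlap(a, b):
    """Axis-aligned rectangle intersection; rectangles are (x0, y0, x1, y1)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def furnish_room(width, depth, footprints, seed=0, max_tries=200):
    """Randomly place furniture footprints in a rectangular room so that
    no two pieces overlap (rejection sampling); seeding makes the layout
    reproducible, which matters when generating labeled training data."""
    rng = random.Random(seed)
    placed = []
    for w, d in footprints:
        for _ in range(max_tries):
            x = rng.uniform(0, width - w)
            y = rng.uniform(0, depth - d)
            box = (x, y, x + w, y + d)
            if not any(overlap(box, other) for other in placed):
                placed.append(box)
                break
    return placed
```

Varying the seed is the simplest form of the randomization the paper relies on to produce many distinct scenes from one semantic database.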
10. Sharma, M., J. Rodriguez, and N. Langrana. "Automatic Mesh Generation on Annular Profiles With Geometric Constraints." In ASME 1991 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1991. http://dx.doi.org/10.1115/cie1991-0123.

Abstract:
The Finite Element Method (FEM) has been recognized as one of the most effective analysis methods for complicated domains. Unfortunately, any practical application of FEM requires an extensive amount of input data. General pre-processors usually require a large amount of user-defined information. Therefore, in particular applications where more specific options are needed, a special mesh generator is required. In this paper we present an automatic mesh generator for annular models with inherent geometric constraints, such as fiber orientation in composite-like structures/systems. The developed generator is designed to generate 3-D meshes with minimal user input and effort, and it can be used in conjunction with almost any CAD system or FEM software. The code is particularly useful for efficient performance of numerical studies relating mesh density and convergence.
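As an illustration of the kind of structured output such a generator produces, the sketch below lays out node rings for an annulus; the 2-D simplification and node ordering are assumptions for illustration, not the paper's 3-D scheme.

```python
import math

def annular_mesh_nodes(r_inner, r_outer, n_radial, n_circ):
    """Structured node layout for an annulus: (n_radial + 1) concentric
    rings, each sampled at n_circ equally spaced angles."""
    nodes = []
    for i in range(n_radial + 1):
        r = r_inner + (r_outer - r_inner) * i / n_radial
        for j in range(n_circ):
            theta = 2.0 * math.pi * j / n_circ
            nodes.append((r * math.cos(theta), r * math.sin(theta)))
    return nodes
```

Refining `n_radial` and `n_circ` is the knob for the mesh-density convergence studies the abstract mentions.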

Reports on the topic "Automatic data generator"

1. Kolawa, A., B. Strickland, and A. Hicken. Automatic Test Data Generation Tool for Large-Scale Software Systems. Fort Belvoir, VA: Defense Technical Information Center, May 1994. http://dx.doi.org/10.21236/ada289081.

2. Cai, Hubo, JungHo Jeon, Xin Xu, Yuxi Zhang, and Liu Yang. Automating the Generation of Construction Checklists. Purdue University, 2020. http://dx.doi.org/10.5703/1288284317273.

Abstract:
Construction inspection is a critical component of INDOT’s quality assurance (QA) program. Upon receiving an inspection notice/assignment, INDOT inspectors review the plans and specifications to identify the construction quality requirements and conduct their inspections accordingly. This manual approach to gathering inspection requirements from textual documents is time-consuming, subjective, and error-prone. This project addresses this critical issue by developing an inspection requirements database along with a set of tools to automatically gather the inspection requirements and provide field crews with customized construction checklists during the inspection with the specifics of what to check, when to check, and how to check, as well as the risks and the actions to take when noncompliance is encountered. This newly developed toolset eliminates the manual effort required to acquire construction requirements, which will enhance the efficiency of the construction inspection process at INDOT. It also enables the incorporation of field-collected data to automate future compliance checking and facilitate construction documentation.
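The requirement-to-checklist flow described in this abstract can be sketched as a simple filter over structured requirement records; the field names and sample entries below are hypothetical illustrations, not INDOT's actual schema.

```python
# Hypothetical requirement records; the fields mirror the what/when/how
# structure described in the abstract but are NOT the report's real schema.
REQUIREMENTS = [
    {"item": "HMA paving", "what": "Check mat temperature", "when": "during placement",
     "how": "infrared thermometer", "risk": "premature cooling", "action": "reject load"},
    {"item": "HMA paving", "what": "Check lift thickness", "when": "after rolling",
     "how": "core or depth probe", "risk": "under-thickness", "action": "corrective overlay"},
    {"item": "Concrete pour", "what": "Check slump", "when": "at delivery",
     "how": "slump cone test", "risk": "low workability", "action": "reject batch"},
]

def build_checklist(work_item):
    """Filter the requirements database down to a field checklist for one activity."""
    return [f"{r['what']} ({r['when']}, via {r['how']})"
            for r in REQUIREMENTS if r["item"] == work_item]
```

The value of the database approach is that the filter replaces the manual review of plans and specifications before each inspection.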
3. Berney, Ernest, Andrew Ward, and Naveen Ganesh. First generation automated assessment of airfield damage using LiDAR point clouds. Engineer Research and Development Center (U.S.), March 2021. http://dx.doi.org/10.21079/11681/40042.

Abstract:
This research developed an automated software technique for identifying type, size, and location of man-made airfield damage including craters, spalls, and camouflets from a digitized three-dimensional point cloud of the airfield surface. Point clouds were initially generated from Light Detection and Ranging (LiDAR) sensors mounted on elevated lifts to simulate aerial data collection and, later, an actual unmanned aerial system. LiDAR data provided a high-resolution, globally positioned, and dimensionally scaled point cloud exported in a LAS file format that was automatically retrieved and processed using volumetric detection algorithms developed in the MATLAB software environment. Developed MATLAB algorithms used a three-stage filling technique to identify the boundaries of craters first, then spalls, then camouflets, and scaled their sizes based on the greatest pointwise extents. All pavement damages and their locations were saved as shapefiles and uploaded into the GeoExPT processing environment for visualization and quality control. This technique requires no user input between data collection and GeoExPT visualization, allowing for a completely automated software analysis with all filters and data processing hidden from the user.
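The boundary-identification stage can be illustrated with a toy flood fill over a rasterized height grid; the MATLAB three-stage volumetric algorithms in the report are far more involved, and the threshold values here are arbitrary.

```python
def find_depressions(heights, ref_height=0.0, depth_tol=0.05):
    """Group connected grid cells lying below the reference surface into
    depressions (crater-like features) and report their size and depth.
    `heights` is a 2-D list of surface elevations (metres)."""
    rows, cols = len(heights), len(heights[0])
    seen = [[False] * cols for _ in range(rows)]
    found = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or heights[r][c] >= ref_height - depth_tol:
                continue
            stack, cells = [(r, c)], []     # flood-fill one depression
            seen[r][c] = True
            while stack:
                i, j = stack.pop()
                cells.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and not seen[ni][nj]
                            and heights[ni][nj] < ref_height - depth_tol):
                        seen[ni][nj] = True
                        stack.append((ni, nj))
            depth = ref_height - min(heights[i][j] for i, j in cells)
            found.append({"cells": len(cells), "max_depth": depth})
    return found
```

In the report's pipeline, the LAS point cloud would first be rasterized into such a grid, and the extents of each connected region would then be scaled into crater, spall, or camouflet dimensions.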
4. Treadwell, Jonathan R., James T. Reston, Benjamin Rouse, Joann Fontanarosa, Neha Patel, and Nikhil K. Mull. Automated-Entry Patient-Generated Health Data for Chronic Conditions: The Evidence on Health Outcomes. Agency for Healthcare Research and Quality (AHRQ), March 2021. http://dx.doi.org/10.23970/ahrqepctb38.

Abstract:
Background. Automated-entry consumer devices that collect and transmit patient-generated health data (PGHD) are being evaluated as potential tools to aid in the management of chronic diseases. The need exists to evaluate the evidence regarding consumer PGHD technologies, particularly for devices that have not gone through Food and Drug Administration evaluation. Purpose. To summarize the research related to automated-entry consumer health technologies that provide PGHD for the prevention or management of 11 chronic diseases. Methods. The project scope was determined through discussions with Key Informants. We searched MEDLINE and EMBASE (via EMBASE.com), In-Process MEDLINE and PubMed unique content (via PubMed.gov), and the Cochrane Database of Systematic Reviews for systematic reviews or controlled trials. We also searched ClinicalTrials.gov for ongoing studies. We assessed risk of bias and extracted data on health outcomes, surrogate outcomes, usability, sustainability, cost-effectiveness outcomes (quantifying the tradeoffs between health effects and cost), process outcomes, and other characteristics related to PGHD technologies. For isolated effects on health outcomes, we classified the results in one of four categories: (1) likely no effect, (2) unclear, (3) possible positive effect, or (4) likely positive effect. When we categorized the data as “unclear” based solely on health outcomes, we then examined and classified surrogate outcomes for that particular clinical condition. Findings. We identified 114 unique studies that met inclusion criteria. The largest number of studies addressed patients with hypertension (51 studies) and obesity (43 studies). Eighty-four trials used a single PGHD device, 23 used 2 PGHD devices, and the other 7 used 3 or more PGHD devices. Pedometers, blood pressure (BP) monitors, and scales were commonly used in the same studies. 
Overall, we found a “possible positive effect” of PGHD interventions on health outcomes for coronary artery disease, heart failure, and asthma. For obesity, we rated the health outcomes as unclear, and the surrogate outcomes (body mass index/weight) as likely no effect. For hypertension, we rated the health outcomes as unclear, and the surrogate outcomes (systolic BP/diastolic BP) as possible positive effect. For cardiac arrhythmias or conduction abnormalities, we rated the health outcomes as unclear and the surrogate outcome (time to arrhythmia detection) as likely positive effect. The findings were “unclear” regarding PGHD interventions for diabetes prevention, sleep apnea, stroke, Parkinson’s disease, and chronic obstructive pulmonary disease. Most studies did not report harms related to PGHD interventions; the relatively few harms reported were minor and transient, with event rates usually comparable to harms in the control groups. Few studies reported cost-effectiveness analyses, and only for PGHD interventions for hypertension, coronary artery disease, and chronic obstructive pulmonary disease; the findings were variable across different chronic conditions and devices. Patient adherence to PGHD interventions was highly variable across studies, but patient acceptance/satisfaction and usability were generally fair to good. However, device engineers independently evaluated consumer wearable and handheld BP monitors and considered the user experience to be poor, while their assessment of smartphone-based electrocardiogram monitors found the user experience to be good. Student volunteers involved in device usability testing of the Weight Watchers Online app found it well-designed and relatively easy to use. Implications. Multiple randomized controlled trials (RCTs) have evaluated some PGHD technologies (e.g., pedometers, scales, BP monitors), particularly for obesity and hypertension, but health outcomes were generally underreported.
We found evidence suggesting a possible positive effect of PGHD interventions on health outcomes for four chronic conditions. Lack of reporting of health outcomes and insufficient statistical power to assess these outcomes were the main reasons for “unclear” ratings. The majority of studies on PGHD technologies still focus on non-health-related outcomes. Future RCTs should focus on measurement of health outcomes. Furthermore, future RCTs should be designed to isolate the effect of the PGHD intervention from other components in a multicomponent intervention.
5. Thomson, S. MINE DESIGNER - an automated graphical data generation program integrating numerical modelling with the mine design process. Natural Resources Canada/CMSS/Information Management, 1993. http://dx.doi.org/10.4095/328895.

6. Berney, Ernest, Naveen Ganesh, Andrew Ward, J. Newman, and John Rushing. Methodology for remote assessment of pavement distresses from point cloud analysis. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40401.

Abstract:
The ability to remotely assess road and airfield pavement condition is critical to dynamic basing, contingency deployment, convoy entry and sustainment, and post-attack reconnaissance. Current Army processes to evaluate surface condition are time-consuming and require Soldier presence. Recent developments in the area of photogrammetry and light detection and ranging (LiDAR) enable rapid generation of three-dimensional point cloud models of the pavement surface. Point clouds were generated from data collected on a series of asphalt, concrete, and unsurfaced pavements using ground- and aerial-based sensors. ERDC-developed algorithms automatically discretize the pavement surface into cross- and grid-based sections to identify physical surface distresses such as depressions, ruts, and cracks. Depressions can be sized from the point-to-point distances bounding each depression, and surface roughness is determined based on the point heights along a given cross section. Noted distresses are exported to a distress map file containing only the distress points and their locations for later visualization and quality control along with classification and quantification. Further research and automation into point cloud analysis is ongoing with the goal of enabling Soldiers with limited training the capability to rapidly assess pavement surface condition from a remote platform.
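The cross-section distress measurement described above can be sketched as a straightedge deviation check over one row of point heights; this is a simplified illustration, not the ERDC algorithms themselves.

```python
def rut_depth(cross_section):
    """Maximum deviation of the surface below a straightedge spanning a
    cross-section of point heights (metres); needs at least two points."""
    n = len(cross_section)
    left, right = cross_section[0], cross_section[-1]
    worst = 0.0
    for i, h in enumerate(cross_section):
        edge = left + (right - left) * i / (n - 1)  # straightedge height here
        worst = max(worst, edge - h)
    return worst
```

Applied row by row across a rasterized point cloud, this kind of check yields the depression and rut measurements that are then exported to the distress map.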
7. Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data postprocessing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 
3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.
8. Thomson, S. A. MINE DESIGNER - an automated graphical data generation program integrating numerical modelling with the mine design process (SUN version 1.0). Natural Resources Canada/CMSS/Information Management, 1991. http://dx.doi.org/10.4095/328894.

9. Arhin, Stephen, Babin Manandhar, Hamdiat Baba Adam, and Adam Gatiba. Predicting Bus Travel Times in Washington, DC Using Artificial Neural Networks (ANNs). Mineta Transportation Institute, April 2021. http://dx.doi.org/10.31979/mti.2021.1943.

Abstract:
Washington, DC ranks second among United States cities in the share of public transit commuters, with approximately 9% of the working population using the Washington Metropolitan Area Transit Authority (WMATA) Metrobuses to commute. Deducing accurate travel times of these Metrobuses is an important task for transit authorities seeking to provide reliable service to their patrons. This study, using Artificial Neural Networks (ANN), developed prediction models for transit buses to assist decision-makers in improving service quality and patronage. For this study, we used six months of Automatic Vehicle Location (AVL) and Automatic Passenger Counting (APC) data for six Washington Metropolitan Area Transit Authority (WMATA) bus routes operating in Washington, DC. We developed regression models and Artificial Neural Network (ANN) models for predicting travel times of buses for different peak periods (AM, Mid-Day and PM). Our analysis included variables such as number of served bus stops, length of route between bus stops, average number of passengers in the bus, average dwell time of buses, and number of intersections between bus stops. We obtained ANN models for travel times by using an approximation technique incorporating two separate algorithms: Quasi-Newton and Levenberg-Marquardt. The training strategy for the neural network models involved feed-forward and error-back processes that minimized the generated errors. We also evaluated the models with a comparison of the normalized squared errors (NSE). From the results, we observed that the travel times of buses and the dwell times at bus stops generally increased over the time of day. We gathered travel time equations for buses for the AM, Mid-Day and PM Peaks. The lowest NSE for the AM, Mid-Day and PM Peak periods corresponded to training processes using the Quasi-Newton algorithm, which had 3, 2 and 5 perceptron layers, respectively.
These prediction models could be adapted by transit agencies to provide the patrons with accurate travel time information at bus stops or online.
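A minimal version of the regression baseline mentioned in this abstract (not the Quasi-Newton/Levenberg-Marquardt ANNs) can be written in closed form for a single predictor; the stop counts and travel times below are made-up illustration data, not the WMATA dataset.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y ≈ w * x + b, solved in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    w = sxy / sxx
    return w, my - w * mx

# Hypothetical samples: served stops per trip vs. observed travel time
# (minutes), constructed to lie exactly on t = 2 * stops + 4.
stops = [3, 5, 8, 10]
times = [10.0, 14.0, 20.0, 24.0]
w, b = fit_line(stops, times)
```

The study's ANN models extend this idea to several predictors (route length, passenger load, dwell time, intersections) and a nonlinear mapping learned during training.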