Academic literature on the topic 'Numerical integration – Data processing'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Numerical integration – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Numerical integration – Data processing"

1

Kong, Xiaofang, Wenguang Yang, Hong'e Luo, and Baoming Li. "Application of Stabilized Numerical Integration Method in Acceleration Sensor Data Processing." IEEE Sensors Journal 21, no. 6 (March 15, 2021): 8194–203. http://dx.doi.org/10.1109/jsen.2021.3051193.

2

Lv, Chao, and Xiao Hong Pan. "Integration and Implementation of Numerical Design and Manufacturing Data Management System for Beverage Bottle." Applied Mechanics and Materials 155-156 (February 2012): 663–67. http://dx.doi.org/10.4028/www.scientific.net/amm.155-156.663.

Abstract:
To implement numeralization and automation in the manufacturing of beverage bottles, a data management system for the numerical design and manufacturing of beverage bottles was proposed, as required for the rheological tests, the simulation of the stretch-blow process, and the optimization of the processing parameters. Several key technologies are illustrated in detail, including the processing of the experimental data, the seamless integration of the various systems, and the data mining and optimization of processing parameters. Application of this system offers technical support to lessen the experimental workload, improve the manufacturing process, guide workshop operation, shorten the design cycle and improve efficiency.
3

Zadiraka, V. K., L. V. Luts, and I. V. Shvidchenko. "Optimal Numerical Integration." Cybernetics and Computer Technologies, no. 4 (December 31, 2020): 47–64. http://dx.doi.org/10.34229/2707-451x.20.4.4.

Abstract:
Introduction. In many applied problems, such as statistical data processing, digital filtering, computed tomography, pattern recognition, and many others, there is a need for numerical integration with a given (often quite high) accuracy. Classical quadrature formulas cannot always provide the required accuracy, since, as a rule, they do not take into account the oscillation of the integrand. The development of methods for constructing quadrature formulas that are optimal in accuracy (and close to optimal) for the integration of rapidly oscillating functions is therefore an important and topical problem of computational mathematics. The purpose of the article is to construct such formulas for integrands of various degrees of smoothness and for oscillating factors of different types, to derive a priori estimates of their total error, and to apply the theory of quality testing of algorithms-programs to them, so as to create a theory of optimal numerical integration. Results. Quadrature formulas optimal in accuracy (and close to optimal) for calculating the Fourier transform, wavelet transforms, and the Bessel transform were constructed both in the classical formulation of the problem and for interpolation classes of functions, corresponding to the case where the information operator about the integrand is given by a fixed table of its values. The paper considers a passive pure minimax strategy for solving the problem. Within this strategy, the method of "caps" by N. S. Bakhvalov and the method of boundary functions developed at the V. M. Glushkov Institute of Cybernetics of the NAS of Ukraine were used. Great attention is paid to the quality of the error estimates and the methods of obtaining them. The article describes some aspects of the theory of testing algorithms-programs and presents the results of testing the constructed quadrature formulas for integrals of rapidly oscillating functions, together with estimates of their characteristics. For programs that calculate a priori estimates of characteristics, the problem of determining the ranges of admissible values of the control parameters, as well as their best values for integration with the minimum possible error, is considered. Conclusions. The results obtained make it possible to create a theory of optimal integration, which allows one to reasonably choose and efficiently use computational resources to find the value of an integral with a given accuracy or with the minimum possible error. Keywords: quadrature formula, optimal algorithm, interpolation class, rapidly oscillating function, quality testing.
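To illustrate the failure mode this abstract describes, the following minimal Python sketch (not taken from the paper; the frequency and node counts are arbitrary assumptions) applies the classical composite trapezoidal rule to a rapidly oscillating integrand and compares it with the exact value.

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Classical composite trapezoidal rule with n + 1 equally spaced nodes."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

omega = 200.0                    # oscillation frequency, chosen for illustration
exact = np.sin(omega) / omega    # exact value of the integral of cos(omega x) over [0, 1]

for n in (10, 100, 1000):
    approx = trapezoid(lambda x: np.cos(omega * x), 0.0, 1.0, n)
    print(f"n = {n:4d}  trapezoid = {approx:+.6f}  error = {abs(approx - exact):.1e}")
```

Until the node spacing resolves the oscillation (roughly n much larger than omega), the classical rule returns essentially arbitrary values, which is why formulas that treat the oscillating factor analytically are needed.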
4

Wei, Baoli, and Meng Lv. "CAD Integration of Mechanical Numerical Control Board Parts Based on Machining Features." Computer-Aided Design and Applications 18, S3 (October 20, 2020): 176–87. http://dx.doi.org/10.14733/cadaps.2021.s3.176-187.

Abstract:
The development and application of computer-aided design (CAD) technology has led to rapid improvements in product design automation, crafting process automation and numerical control programming automation. A machining feature refers to a basic configuration unit that constitutes a part shape, together with the collection of non-geometric information with engineering semantics attached to it. The integration of mechanical numerical control parts is the integration of part design features and machining features, and each feature corresponds to a set of processing methods. Building on summaries and analyses of previous research, this paper expounds the current status and significance of the integration of mechanical numerical control board parts and elaborates the development background, current status and future challenges of machining features and CAD technology. It introduces a data transfer method for CAD integration and a machining-features-based part integration system, analyzes the design and machining features of the CAD integration of board parts, and constructs the graphics processing model and information reorganization model for the CAD integration of board parts. It then conducts feature description and modeling analysis of the CAD integration of plate parts, discusses the similarity of crafting information in the integration of mechanical numerical control plate parts, and explores the feature information and expression of the feature library for plate part integration.
5

Candy, A. S., A. Avdis, J. Hill, G. J. Gorman, and M. D. Piggott. "Integration of Geographic Information System frameworks into domain discretisation and meshing processes for geophysical models." Geoscientific Model Development Discussions 7, no. 5 (September 11, 2014): 5993–6060. http://dx.doi.org/10.5194/gmdd-7-5993-2014.

Abstract:
Computational simulations of physical phenomena rely on an accurate discretisation of the model domain. Numerical models have increased in sophistication to a level where it is possible to support terrain-following boundaries that conform accurately to real physical interfaces and to resolve multiple scales of spatial resolution. Whilst simulation codes are maturing in this area, pre-processing tools have not developed significantly enough to competently initialise these problems in a rigorous, efficient and recomputable manner. In the relatively disjoint field of Geographic Information Systems (GIS), however, techniques and tools for mapping and analysis of geographical data have matured significantly. If data provenance and recomputability are to be achieved, the manipulation and agglomeration of data in the pre-processing of numerical simulation initialisation data for geophysical models should be integrated into GIS. A new approach to the discretisation of geophysical domains is presented and introduced with a verified implementation. This brings together the technologies of geospatial analysis, meshing and numerical simulation models. The platform enables us to combine and build up features, quickly drafting and updating mesh descriptions with the rigour that established GIS tools provide. This, combined with a systematic workflow, supports strong provenance for model initialisation and encourages the convergence of standards.
6

Choi, Heeyoul, Seungjin Choi, and Yoonsuck Choe. "Parameter Learning for Alpha Integration." Neural Computation 25, no. 6 (June 2013): 1585–604. http://dx.doi.org/10.1162/neco_a_00445.

Abstract:
In pattern recognition, data integration is an important issue, and when properly done, it can lead to improved performance. Data integration can also be used to help model and understand multimodal processing in the brain. Amari proposed α-integration as a principled way of blending multiple positive measures (e.g., stochastic models in the form of probability distributions), enabling an optimal integration in the sense of minimizing the α-divergence. It also encompasses existing integration methods as special cases, for example, the weighted average and the exponential mixture. The parameter α determines the integration characteristics, and the weight vector w assigns the degree of importance to each measure. In most work, however, α and w are given in advance rather than learned. In this letter, we present a parameter learning algorithm for learning α and w from data when multiple integrated target values are available. Numerical experiments on synthetic as well as real-world data demonstrate the effectiveness of the proposed method.
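As context for the letter, the following sketch evaluates Amari's α-mean for a fixed α and weight vector w (the quantities the letter learns from data). The transform used, f_α(z) = z^((1−α)/2) for α ≠ 1 and f_1(z) = log z, follows Amari's definition; the example measures and weights are made up.

```python
import numpy as np

def alpha_integration(measures, w, alpha):
    """Weighted alpha-mean of positive measures (rows of `measures`)."""
    m = np.asarray(measures, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                   # normalize the importance weights
    if alpha == 1.0:                  # limit case: exponential (geometric) mixture
        return np.exp(w @ np.log(m))
    p = (1.0 - alpha) / 2.0
    return (w @ m**p) ** (1.0 / p)

m1 = np.array([0.1, 0.2, 0.3, 0.4])   # two toy positive measures
m2 = np.array([0.4, 0.3, 0.2, 0.1])
print(alpha_integration([m1, m2], [0.5, 0.5], alpha=-1.0))  # weighted arithmetic mean
print(alpha_integration([m1, m2], [0.5, 0.5], alpha=1.0))   # exponential mixture
```

Setting α = −1 recovers the weighted average and α = 1 the exponential mixture, the two special cases named in the abstract.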
7

Seidel, Edward, and Wai-Mo Suen. "Numerical Relativity." International Journal of Modern Physics C 05, no. 02 (April 1994): 181–87. http://dx.doi.org/10.1142/s012918319400012x.

Abstract:
The present status of numerical relativity is reviewed. There are five closely interconnected aspects of numerical relativity: (1) Formulation. The generally covariant Einstein equations are reformulated in a way suitable for numerical study by separating the 4-dimensional spacetime into a 3-dimensional space evolving in time. (2) Techniques. A set of tools is developed for determining gauge choices, setting boundary and initial conditions, handling spacetime singularities, etc. As required by the special physical and mathematical properties of general relativity, such techniques are indispensable for the numerical evolution of spacetime. (3) Coding. The optimal use of parallel processing is crucial for many problems in numerical relativity, due to the intrinsic complexity of the theory. (4) Visualization. Numerical relativity is about the evolution of 3-dimensional geometric structures, which places special demands on visualization. (5) Interpretation and Understanding. The integration of numerical data in relativity into a consistent physical picture is complicated by gauge and coordinate degrees of freedom and other difficulties. We give a brief overview of the progress made in these areas.
8

Weisscher, Steven A. H., Marcio Boechat-Albernaz, Jasper R. F. W. Leuven, Wout M. Van Dijk, Yasuyuki Shimizu, and Maarten G. Kleinhans. "Complementing scale experiments of rivers and estuaries with numerically modelled hydrodynamics." Earth Surface Dynamics 8, no. 4 (November 16, 2020): 955–72. http://dx.doi.org/10.5194/esurf-8-955-2020.

Abstract:
Physical scale experiments enhance our understanding of fluvial, tidal and coastal processes. However, it has proven challenging to acquire accurate and continuous data on water depth and flow velocity due to limitations of the measuring equipment and necessary simplifications during post-processing. A novel means to augment measurements is to numerically model flow over the experimental digital elevation models. We investigated to what extent the numerical hydrodynamic model Nays2D can reproduce unsteady, nonuniform shallow flow in scale experiments and under which conditions a model is preferred to measurements. To this end, we tested Nays2D for one tidal and two fluvial scale experiments and extended Nays2D to allow for flume tilting, which is necessary to steer tidal flow. The modelled water depth and flow velocity closely resembled the measured data for locations where the quality of the measured data was most reliable, and model results may be improved by applying a spatially varying roughness. The implication of the experimental data–model integration is that conducting experiments requires fewer measurements and less post-processing in a simple, affordable and labour-inexpensive manner that results in continuous spatio-temporal data of better overall quality. Also, this integration will aid experimental design.
9

Palamarchuk, Yu O., S. V. Ivanov, and I. G. Ruban. "The digitizing algorithm for precipitation in the atmosphere on the base of radar measurements." Ukrainian hydrometeorological journal, no. 18 (October 29, 2017): 40–47. http://dx.doi.org/10.31481/uhmj.18.2016.05.

Abstract:
There is an increasing demand for automated, high-quality, very-short-range forecasts and nowcasts of precipitation on small scales and at high update frequencies. Current prediction systems use different methods of determining precipitation, such as area tracking, individual cell tracking and numerical models; all approaches are based on radar measurements. World-leading manufacturers of meteorological radars and the accompanying visualization software are introduced in the paper. Advantages of numerical modelling over inertial schemes based on the statistical characteristics of convective processes are outlined. Accordingly, radar data assimilation systems, as a necessary part of numerical models, are being intensively developed, and the use of digital formats for processing radar measurements in numerical algorithms has become important. The focus of this work is the development of a unified code for digital processing of radar signals at the preprocessing, filtration, assimilation and numerical integration steps. The proposed code also includes thinning, screening or superobbing radar data before using them in the assimilation procedures. The informational model manages radar data flows in metadata and binary-array forms and constitutes an official second-generation European standard exchange format for weather radar datasets from different manufacturers. Results of radar measurement processing are presented both for a single radar and for an overlying radar network.
10

Akanbi, Adeyinka, and Muthoni Masinde. "A Distributed Stream Processing Middleware Framework for Real-Time Analysis of Heterogeneous Data on Big Data Platform: Case of Environmental Monitoring." Sensors 20, no. 11 (June 3, 2020): 3166. http://dx.doi.org/10.3390/s20113166.

Abstract:
In recent years, the wide adoption of Internet of Things (IoT)-based technologies has increased the proliferation of monitoring systems, which has in turn exponentially increased the amount of heterogeneous data generated. Processing and analysing this massive amount of data is cumbersome, and is gradually moving from the classical 'batch' extract-transform-load (ETL) approach to real-time processing. In the environmental monitoring and management domain, for instance, time-series data and historical datasets are crucial for prediction models. However, the domain still utilises legacy systems, which complicates real-time analysis of the essential data and integration with big data platforms, and forces reliance on batch processing. Herein, as a solution, a distributed stream processing middleware framework for real-time analysis of heterogeneous environmental monitoring and management data is presented and tested on a cluster using open-source technologies in a big data environment. The system ingests datasets from legacy systems and sensor data from heterogeneous automated weather systems, irrespective of data type, into Apache Kafka topics using Kafka Connect APIs, for processing by the Kafka streaming processing engine. The stream processing engine executes the predictive numerical models and algorithms, represented in event processing (EP) languages, for real-time analysis of the data streams. To prove the feasibility of the proposed framework, we implemented the system using a case study of drought prediction and forecasting based on the Effective Drought Index (EDI) model. First, we transform the predictive model into a form that can be executed by the streaming engine for real-time computing. Second, the model is applied to the ingested data streams and datasets to predict drought through persistent querying of the infinite streams to detect anomalies. Finally, the performance of the distributed stream processing middleware infrastructure is evaluated to determine the real-time effectiveness of the framework.
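The ingest-process-publish loop the abstract describes can be pictured with a minimal consumer/producer sketch using the kafka-python client. The topic names, broker address, message schema and threshold below are illustrative assumptions only; the actual framework uses Kafka Connect for ingestion and a streaming engine executing the EDI model.

```python
import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

# Topic names, server address and threshold are illustrative assumptions.
consumer = KafkaConsumer(
    "weather-observations",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

DROUGHT_THRESHOLD = -1.0  # placeholder for an EDI-based decision rule

for msg in consumer:                  # persistent query over the infinite stream
    obs = msg.value                   # e.g. {"station": ..., "edi": ...}
    if obs.get("edi", 0.0) < DROUGHT_THRESHOLD:
        producer.send("drought-alerts", {"station": obs["station"], "edi": obs["edi"]})
```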

Dissertations / Theses on the topic "Numerical integration – Data processing"

1

Ives, Zachary G. "Efficient query processing for data integration." Thesis, University of Washington (UW restricted), 2002. http://hdl.handle.net/1773/6864.

2

Chen, Chuan. "Numerical algorithms for data processing and analysis." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/277.

Abstract:
Magnetic nanoparticles (NPs) with sizes ranging from 2 to 20 nm in diameter represent an important class of artificial nanostructured materials, since the NP size is comparable to the size of a magnetic domain. They have potential applications in data storage, catalysis, permanent magnetic nanocomposites, and biomedicine. Chapter 1 gives a brief overview of the background of Fe-based bimetallic NPs and their applications in data storage and catalysis. In Chapter 2, L10-ordered FePt NPs with high coercivity were prepared directly from a novel bimetallic acetylenic alternating copolymer P3 by a one-step pyrolysis method without post-thermal annealing. The chemical ordering, morphology and magnetic properties were studied, and magnetic measurements showed a record coercivity of 3.6 T (1 T = 10 kOe) in the FePt NPs. Comparison of FePt NPs synthesized under Ar and Ar/H2 showed that the incorporation of H2 affects the nucleation and promotes the growth of FePt NPs. The L10 FePt NPs were also successfully patterned on a Si substrate by nanoimprint lithography (NIL). The resulting highly ordered ferromagnetic arrays on a desired substrate for bit-patterned media (BPM) were studied and promise bright prospects for progress in data storage. Furthermore, we also report a new FePt-containing metallopolymer P4 as a single-source precursor for metal alloy NP synthesis, in which the metal fractions are on the side chain and their ratio can be easily controlled. This polymer was synthesized from the random copolymer poly(styrene-4-ethynylstyrene) PES-PS and the bimetallic precursor TPy-FePt ([Pt(4'-ferrocenyl-(N^N^N))Cl]Cl) by a Sonogashira coupling reaction. After pyrolysis of P4, the stoichiometry of Fe and Pt atoms in the synthesized NPs is close to 1:1, which is more precise than using TPy-FePt as the precursor. Polymer P4 was also more amenable than TPy-FePt to patterning by high-throughput NIL. Ferromagnetic nanolines, potentially useful as bit-patterned magnetic recording media, were successfully fabricated from P4 and fully characterized. In Chapter 3, a novel organometallic compound TPy-FePd-1 [4'-ferrocenyl-(N^N^N)PdOCOCH3] was synthesized and structurally characterized; its crystal structure showed a coplanar Pd center and a Pd-Pd distance of 3.17 Å. The two metals Fe and Pd were evenly embedded in the molecular dimension and remained tightly coupled, owing to the metal-metal (Pd-Pd) and ligand π-π stacking interactions, all of which facilitated nucleation without sintering during preparation of the FePd NPs. Ferromagnetic FePd NPs of ca. 16.2 nm in diameter were synthesized by one-pot pyrolysis of the single-source precursor TPy-FePd-1 under getter gas, with metal-ion reduction and minimal nanoparticle coalescence; they have a nearly equal atomic ratio (Fe/Pd = 49/51) and exhibited a coercivity of 4.9 kOe at 300 K. By imprinting a mixed chloroform solution of TPy-FePd-1 and polystyrene (PS) on Si, reproducible patterning of nanochains was achieved, owing to the excellent self-assembly properties of TPy-FePd-1 and its incompatibility with PS under slow evaporation of the solvents. FePd nanochains with an average length of ca. 260 nm were evenly dispersed around the PS nanospheres by self-assembly of TPy-FePd-1. The orientation of the FePd nanochains could also be controlled by tuning the morphology of the PS, and their length was shorter in the confined space of the PS. The organic skeletons of TPy-FePd-1 and PS were carbonized and removed by pyrolysis under Ar/H2 (5 wt%), leaving only magnetic FePd alloy nanochains with a domain structure. In addition, a bimetallic complex TPy-FePd-2 was prepared and used as a single-source precursor to synthesize ferromagnetic FePd NPs by one-pot pyrolysis; the resultant FePd NPs have a mean size of 19.8 nm and show a coercivity of 1.02 kOe. The functional group (-NCMe) in TPy-FePd-2 is easily substituted by a pyridyl group, so a random copolymer PS-P4VP was used to coordinate with TPy-FePd-2, and the as-synthesized polymer dispersed the metal fraction evenly along the flexible chain. Fabrication of FePd NPs from these polymers was also investigated, and the NP size could be easily controlled by tuning the metal fraction in the polymer: FePd NPs with mean sizes of 10.9, 14.2 and 17.9 nm were prepared from the metallopolymer with 5 wt%, 10 wt% and 20 wt% metal fractions, respectively. In Chapter 4, molybdenum disulfide (MoS2) monolayers decorated with ferromagnetic FeCo NPs on the edges were synthesized through a one-step pyrolysis of precursor molecules in an argon atmosphere. The FeCo precursor was spin-coated on MoS2 monolayers grown on a Si/SiO2 substrate. Highly ordered body-centered cubic (bcc) FeCo NPs were obtained under optimized pyrolysis conditions, possessing coercivity up to 1000 Oe at room temperature. The FeCo NPs were well positioned along the edge sites of the MoS2 monolayers, and the vibration modes of the Mo and S atoms were confined after FeCo NP decoration, as characterized by Raman spectroscopy. These MoS2 monolayers decorated with ferromagnetic FeCo NPs can be used as novel catalytic materials with magnetic recycling capabilities, and the sizes of the NPs grown on MoS2 monolayers are more uniform than those from other preparation routes. The optimized pyrolysis temperature and conditions provide recipes for decorating related noble catalytic materials. Finally, Chapters 5 and 6 present the concluding remarks and the experimental details of the work described in Chapters 2-4.
3

Jakovljevic, Sasa. "Data collecting and processing for substation integration enhancement." Texas A&M University, 2003. http://hdl.handle.net/1969/93.

4

Mattasantharam, R. (Rubini). "3D web visualization of continuous integration big data." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201812063239.

Abstract:
Continuous Integration (CI) is a practice used to automate the software build and its tests for every code integration to a shared repository. CI runs thousands of test scripts every day in a software organization. Every test produces data such as result logs (errors and warnings), performance measurements and build metrics. This data volume tends to grow at unprecedented rates for the builds produced in the CI system, and the amount of integrated test result data grows over time. Visualizing and manipulating such real-time, dynamic data is a challenge for organizations. 2D visualization of big data has long been in active use in the software industry. Although 2D visualization has numerous advantages, this study focuses on 3D representation of CI big data and its advantages over 2D visualization. Interactivity with the data and the system, and accessibility of the data anytime, anywhere, are two important requirements for the system to be usable; thus, the study focused on creating a 3D user interface to visualize CI system data in a 3D web environment. Three-dimensional user interfaces have been studied by many researchers, who have successfully identified various advantages of 3D visualization along with various interaction techniques and have described how such systems are useful in real-world 3D applications. However, the usability of 3D user interfaces in visualization has not yet reached a desirable level, especially in the software industry with its complex data. The purpose of this thesis is to explore the use of 3D data visualization to help the CI system users of a beneficiary organization interpret and explore CI system data. The study focuses on designing and creating a 3D user interface to provide a more effective and usable system for CI data exploration. The design science research framework was chosen as a suitable research method for the study. The study identifies the advantages of applying 3D visualization to software system data and then explores how 3D visualization could help users explore the software data through visualization and its features. The results reveal that 3D visualization helps the beneficiary organization view and compare multiple datasets in a single screen space, and see both a holistic view of large datasets and focused details of multiple datasets of various categories in a single screen space. The results also indicate that 3D visualization helps the beneficiary organization's CI team represent big data better in 3D than in 2D.
5

Liao, Zhining. "Query processing for data integration from multiple data sources over the Internet." Thesis, University of Ulster, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422192.

6

Lin, Shih-Yung. "Integration and processing of high-resolution moiré-interferometry data." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/40181.

Abstract:
A new hybrid method combining moiré interferometry, a high-resolution data-reduction technique, a two-dimensional data-smoothing method, and the Finite Element Method (FEM) has been successfully developed. This hybrid method has been applied to residual strain analyses of composite panels, strain concentrations around optical fibers embedded in composites, and the cruciform composite shear test. It allows moiré data to be collected with higher precision and accuracy by digitizing overexposed moiré patterns (U and V fields) with appropriate carrier fringes; the resolution of the data is ±20 nm. The data extracted from the moiré patterns are interfaced to an FEM package through an automatic mesh generator, which produces a nonuniform FEM mesh by connecting the digitized data points into triangles. The mesh, which uses the digitized displacement data as boundary conditions, is then fed to and processed by a commercial FEM package. The natural scatter of the displacement data digitized from moiré patterns significantly affects the accuracy of the strain values, so a modified finite-element model with linear spring elements is introduced to allow data smoothing in two-dimensional space. The smoothing is controlled by limiting the stretch of those springs to less than the resolution of the experimental method. With this full-field hybrid method, strain contours from moiré interferometry can be obtained easily and with good accuracy, and if the material properties are known, the stress patterns can also be obtained. In addition, the method can be used to analyze any two-dimensional displacement data, including data from the grid method and holography.
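The final differentiation step of such a hybrid method, strain components from smoothed displacement fields, can be sketched as follows. This is a generic small-strain calculation on a regular grid, not the dissertation's spring-element smoothing or FEM processing; the field names and grid spacings are assumptions.

```python
import numpy as np

def small_strains(U, V, dx, dy):
    """Small-strain components from smoothed displacement fields U(x, y), V(x, y).

    U and V are 2-D arrays on a regular grid; rows vary with y, columns with x.
    """
    dU_dy, dU_dx = np.gradient(U, dy, dx)   # derivatives along axis 0 (y) and axis 1 (x)
    dV_dy, dV_dx = np.gradient(V, dy, dx)
    eps_xx = dU_dx                          # normal strain in x
    eps_yy = dV_dy                          # normal strain in y
    gamma_xy = dU_dy + dV_dx                # engineering shear strain
    return eps_xx, eps_yy, gamma_xy
```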
7

Eberius, Julian. "Query-Time Data Integration." Doctoral thesis, Saechsische Landesbibliothek - Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-191560.

Abstract:
Today, data is collected at ever increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources precludes up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if and in what context the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative concept to traditional up-front integration. It aims at enabling users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced through fully automated retrieval and mapping methods is compensated by answering those queries with ranked lists of alternative results. Each result is then based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. Firstly, we introduce a novel method for Top-k Entity Augmentation, which is able to construct a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state of the art by producing a set of individually consistent but mutually diverse alternative solutions, while minimizing the number of data sources used. Secondly, based on this novel augmentation method, we introduce the DrillBeyond system, which is able to process Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database. The original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality when compared to using separate systems for the two tasks. Finally, we studied the management of large-scale dataset corpora such as data lakes or Open Data platforms, which are used as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for data curation systems managing such corpora, which aims at improving the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions are the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
8

Jia, Hong. "Clustering of categorical and numerical data without knowing cluster number." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1495.

9

Jones, Jonathan A. "Nuclear magnetic resonance data processing methods." Thesis, University of Oxford, 1992. http://ora.ox.ac.uk/objects/uuid:7df97c9a-4e65-4c10-83eb-dfaccfdccefe.

Abstract:
This thesis describes the application of a wide variety of data processing methods, in particular the Maximum Entropy Method (MEM), to data from Nuclear Magnetic Resonance (NMR) experiments. Chapter 1 provides a brief introduction to NMR and to data processing, which is developed in chapter 2. NMR is described in terms of the classical model due to Bloch, and the principles of conventional (Fourier transform) data processing developed. This is followed by a description of less conventional techniques. The MEM is derived on several grounds, and related to both Bayesian reasoning and Shannon information theory. Chapter 3 describes several methods of evaluating the quality of NMR spectra obtained by a variety of data processing techniques; the simple criterion of spectral appearance is shown to be completely unsatisfactory. A Monte Carlo method is described which allows several different techniques to be compared, and the relative advantages of Fourier transformation and the MEM are assessed. Chapter 4 describes in vivo NMR, particularly the application of the MEM to data from Phase Modulated Rotating Frame Imaging (PMRFI) experiments. In this case the conventional data processing is highly unsatisfactory, and MEM processing results in much clearer spectra. Chapter 5 describes the application of a range of techniques to the estimation and removal of splittings from NMR spectra. The various techniques are discussed using simple examples, and then applied to data from the amino acid iso-leucine. The thesis ends with five appendices which contain historical and philosophical notes, detailed calculations pertaining to PMRFI spectra, and a listing of the MEM computer program.
10

Leung, Chi-man, and 梁志文. "Integration of modern GIS into orienteering course planning and map making." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B2977813X.


Books on the topic "Numerical integration – Data processing"

1

Ueberhuber, Christoph W., ed. Numerical integration on advanced computer systems. Berlin: Springer-Verlag, 1994.

2

Keast, Patrick, Graeme Fairweather, and North Atlantic Treaty Organization Scientific Affairs Division, eds. Numerical integration: Recent developments, software, and applications. Dordrecht, Holland: Reidel, 1987.

3

Keast, Patrick. Numerical Integration: Recent Developments, Software and Applications. Dordrecht: Springer Netherlands, 1987.

4

Milstein, G. N. Numerical Integration of Stochastic Differential Equations. Dordrecht: Springer Netherlands, 1995.

5

Trosset, Michael W. Numerical optimization using computer experiments. Hampton, Va: National Aeronautics and Space Administration, Langley Research Center, 1997.

6

Trosset, Michael W. Numerical optimization using computer experiments: NASA contract no. NAS1-19480. Hampton, Va: National Aeronautics and Space Administration, Langley Research Center, 1997.

7

Scholl, Margit Christa. Fallstudie über die Kongruenz von Niederschlagsinformation aus SYNOP-, MODELL- und METEOSAT-Daten: (DK 001.8:551.5/167.7:551.5/519 ... Berlin: D. Reimer, 1991.

8

Boekaerts, P. Autoadaptive cloud identification in Meteosat images. [Noordwijk, the Netherlands]: European Space Agency, 1995.

9

Xu, Xun. Integrating advanced computer-aided design, manufacturing, and numerical control: Principles and implementations. Hershey, PA: Information Science Reference, 2009.

10

Kincaid, David (David Ronald), ed. Numerical mathematics and computing. 7th ed. Boston, MA: Brooks/Cole, Cengage Learning, 2013.


Book chapters on the topic "Numerical integration – Data processing"

1

Ó Ruanaidh, Joseph J. K., and William J. Fitzgerald. "Integration in Bayesian Data Analysis." In Numerical Bayesian Methods Applied to Signal Processing, 161–93. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0717-7_7.

2

Petrelli, Maurizio. "Numerical Integration." In Introduction to Python in Earth Science Data Analysis, 99–115. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78055-5_7.

3

Johansson, Robert. "Data Processing and Analysis." In Numerical Python, 285–311. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4842-0553-2_12.

4

Johansson, Robert. "Data Processing and Analysis." In Numerical Python, 405–41. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4246-9_12.

5

Ueberhuber, Christoph W. "Computers for Numerical Data Processing." In Numerical Computation 1, 68–105. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-642-59118-1_3.

6

Płaszewski, Przemysław, Paweł Macioł, and Krzysztof Banaś. "Finite Element Numerical Integration on GPUs." In Parallel Processing and Applied Mathematics, 411–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14390-8_43.

7

Allahviranloo, Tofigh, Witold Pedrycz, and Armin Esfandiari. "Gauss Integration." In Advances in Numerical Analysis Emphasizing Interval Data, 177–202. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003218173-9.

8

Krużel, Filip, and Krzysztof Banaś. "Finite Element Numerical Integration on PowerXCell Processors." In Parallel Processing and Applied Mathematics, 517–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14390-8_54.

9

Liu, Wei, and Alberto Nannarelli. "Power and Thermal Efficient Numerical Processing." In Handbook on Data Centers, 263–86. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4939-2092-1_8.

10

Van Nuffelen, Bert, Alvaro Cortés-Calabuig, Marc Denecker, Ofer Arieli, and Maurice Bruynooghe. "Data Integration Using ID-Logic." In Notes on Numerical Fluid Mechanics and Multidisciplinary Design, 67–81. Cham: Springer International Publishing, 2004. http://dx.doi.org/10.1007/978-3-540-25975-6_7.


Conference papers on the topic "Numerical integration – Data processing"

1

Wang, M. L., and S. R. Subia. "Displacement Time Histories by Direct Numerical Integration of Acceleration Data." In ASME 1991 Design Technical Conferences. American Society of Mechanical Engineers, 1991. http://dx.doi.org/10.1115/detc1991-0315.

Abstract:
Acceleration measurements often provide the engineer with a means to determine the forces within dynamic structural systems. However, for certain problems, information about the structural motion, i.e., the displacement-time history, may also be of interest. One such application is the evaluation of stiffness in reinforced concrete structures during seismic events; scaled model tests of these events suggest that the stiffness of these structures often degrades drastically. The displacement response during these events is required both for the development and for the evaluation of postulated structural stiffness models. This paper discusses the processing of acceleration data from scaled model tests to obtain displacement-time histories for low-aspect shear walls subjected to simulated seismic loadings. Displacement histories obtained in the time domain are compared with those produced using a frequency-domain system identification analysis.
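As a rough illustration of the processing chain the abstract discusses, the sketch below double-integrates an acceleration record with the trapezoidal rule, detrending between steps to limit the low-frequency drift that double integration amplifies. It is a minimal example, not the paper's method; real seismic processing typically adds baseline correction and filtering.

```python
import numpy as np
from scipy.signal import detrend
from scipy.integrate import cumulative_trapezoid

def displacement_from_acceleration(acc, dt):
    """Direct numerical integration of an acceleration record (a sketch)."""
    acc = detrend(acc)                                    # remove offset and linear trend
    vel = cumulative_trapezoid(acc, dx=dt, initial=0.0)   # acceleration -> velocity
    vel = detrend(vel)
    disp = cumulative_trapezoid(vel, dx=dt, initial=0.0)  # velocity -> displacement
    return disp

# Example: a 2 Hz sinusoidal acceleration sampled at 1 kHz,
# chosen so the true displacement is sin(2*pi*2*t).
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
acc = -np.sin(2 * np.pi * 2 * t) * (2 * np.pi * 2) ** 2
u = displacement_from_acceleration(acc, dt)
```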
2

Tong, Hao, Jing Cui, Yong Li, and Yang Wang. "CAD/CAM Integration System of 3D Micro EDM." In 2007 First International Conference on Integration and Commercialization of Micro and Nanosystems. ASMEDC, 2007. http://dx.doi.org/10.1115/mnc2007-21271.

Abstract:
In 3D scanning micro electro discharge machining (EDM), the CAD/CAM systems used in numerical control (NC) milling cannot be applied directly because of the peculiarity of tool electrode wear. Based on an industrial computer and the RT-Linux software platform, a CAD/CAM integration system for 3D micro EDM was developed. In the developed system, the hardware mainly includes a micro feed mechanism for servo control, an XY worktable, a high-frequency pulse power supply, and monitoring circuits; the functions consist of model design, scanning path planning and simulation, NC code generation and post-processing, real-time compensation of tool electrode wear, and machining control of states and process. A double-buffering method is adopted to transfer blocks of NC machining data. The servo scanning EDM method is used to compensate electrode wear in real time, so that 3D microstructures are machined automatically. Machining experiments covered model design, parameter optimization, and process control. Typical 3D microstructures with space curved surfaces and lines have been machined, such as a micro prism, a micro half tube, and a camber correlation line. The machining process and results show that the CAD/CAM integration system offers good real-time performance, reliability, and general applicability.
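The paper gives no implementation details for the double-buffer transfer; purely as a generic illustration of the idea, the sketch below overlaps production and consumption of NC data blocks through a two-slot queue. The block strings and the print stand-in for the servo interface are hypothetical.

```python
import threading
import queue

buffers = queue.Queue(maxsize=2)      # two slots: fill one block while draining the other

def producer(blocks):
    for block in blocks:
        buffers.put(block)            # blocks when both slots are full
    buffers.put(None)                 # sentinel: no more data

def consumer():
    while (block := buffers.get()) is not None:
        print("machining", block)     # stand-in for sending the block to the servo controller

t = threading.Thread(target=producer, args=(["G01 X1", "G01 Y1", "G01 Z-0.1"],))
t.start()
consumer()
t.join()
```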
3

Choi, Sung-Hyun, and Kyoung-Su Park. "Advanced Numerical Modeling of Nonlinear Elastic Cable With Permanent Stretch Using Cable Driven Parallel Robot." In ASME 2017 Conference on Information Storage and Processing Systems collocated with the ASME 2017 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/isps2017-5404.

Abstract:
Since cable driven parallel robots (CDPRs) have many advantages, they have been used in many industrial fields. Fully constrained CDPRs mainly use Dyneema polyethylene cable because of its lower weight compared with steel wire. However, polyethylene cable has complex elastic characteristics (e.g., permanent stretch and hysteresis). In this paper, an advanced numerical model of a nonlinear elastic cable with permanent stretch was derived for a cable driven parallel robot and simulated for various cable conditions. Based on this advanced nonlinear cable model, simulations were carried out for various cable lengths and tensions. The simulation results are in good agreement with the experimental data.
4

Athavale, Jayati D., Yogendra Joshi, and Minami Yoda. "Experimentally Validated Computational Fluid Dynamics Model for Data Center With Active Tiles." In ASME 2017 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems collocated with the ASME 2017 Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/ipack2017-74108.

Abstract:
This paper presents an experimentally validated room-level computational fluid dynamics (CFD) model for raised-floor data center configurations employing active tiles. Active tiles are perforated floor tiles with integrated fans, which increase the local volume flow rate by redistributing the cold air supplied by the computer room air conditioning (CRAC) unit to the under-floor plenum. In a previous study [1], experiments were conducted to explore the potential of active tiles for economically and efficiently eliminating hot spots in data centers. Our results indicated that active tiles, as the actuators closest to the racks, can significantly and quickly impact the local distribution of cooling resources; they could therefore be used in an appropriate control framework to rapidly mitigate hot spots and maintain local conditions in an energy-efficient manner. The numerical model of the data center room operates in an under-floor supply and ceiling return cooling configuration and consists of one cold aisle with 12 racks arranged on both sides and three CRAC units sited around the periphery of the room. The commercial CFD software package Future Facilities 6SigmaDCX [2], which is specifically designed for data center simulation, is used to develop the model. First, a baseline model using only passive tiles was developed, and experimental data were used to verify and calibrate plenum leakage for the room. Then a CFD model incorporating active tiles was developed for two configurations: (a) a single active tile and 9 passive tiles in the cold aisle; and (b) an aisle populated with 10 (i.e., all) active tiles. The active tiles are modeled as a combination of a grill, fan elements and flow blockages to closely mimic the actual active tile used in the experimental studies. The fan curve for the active tile fans is included in the model to account for changes in flow rate through the tiles in response to changes in plenum pressure. The model with active tiles is validated by comparing the flow rate through the floor tiles, the relative plenum pressure, and the rack inlet temperatures for selected racks with the experimental measurements. The predictions of the CFD model are found to be in good agreement with the experimental data, with average discrepancies between measured and computed values of total flow rate and rack inlet temperature of less than 4% and 1.7 °C, respectively. These validated models were then used to simulate steady-state and transient scenarios following cooling failure. This physics-based and experimentally validated room-level model can be used to predict temperature and flow distributions in a data center using active tiles, and these predictions can in turn be used to identify the optimal number and locations of active tiles to mitigate hot spots without adversely affecting other parts of the data center.
5

Rambo, Jeffrey, and Yogendra Joshi. "Reduced Order Modeling of Steady Turbulent Convection Flows Using the POD." In ASME 2005 Summer Heat Transfer Conference collocated with the ASME 2005 Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems. ASMEDC, 2005. http://dx.doi.org/10.1115/ht2005-72143.

Abstract:
In the characterization and design of complex distributed-parameter thermo/fluid systems, detailed experimental measurements or fine numerical calculations often produce excessively large data sets, rendering more advanced analyses inefficient or impossible. Acquiring the experimental or numerical data is usually a time-consuming task, severely restricting the number and range of parameters and ultimately limiting the portion of the design space that can be explored. To develop low-dimensional models, it is desirable to decompose the system response into a series of dominant physical modes that describe the system, while incurring a minimal loss of accuracy. The proper orthogonal decomposition (POD) has been successful in creating low-dimensional dynamic models of turbulent flows, and here its utility is extended to produce approximate solutions of steady, multi-parameter RANS simulations within predefined limits. The methodology is illustrated through the 2-dimensional analysis of an air-cooled data processing cabinet containing 10 individual servers, each with its own flow rate. The results indicate that a flux-matching procedure can reduce the model size by 4 orders of magnitude while adequately describing the airflow transport properties within engineering accuracy. This low-dimensional description of the flow inside the data processing cabinet can in turn be used to further explore the design space and efficiently optimize the system.
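The decomposition step the abstract refers to can be sketched via the singular value decomposition of a snapshot matrix; the flux-matching step and the RANS model are beyond this toy example, and the variable shapes are assumptions.

```python
import numpy as np

def pod_modes(snapshots, k):
    """Leading POD modes of a snapshot matrix (one flattened flow field per column)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                        # fluctuations about the mean field
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)                # fraction of variance captured per mode
    return mean, U[:, :k], energy[:k]

# A rank-k approximation of a new snapshot x is then
# mean + modes @ (modes.T @ (x - mean)).
```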
6

Gharali, Kobra, and David A. Johnson. "Pressure and Acceleration Determination Methods Using PIV Velocity Data." In ASME 2008 Fluids Engineering Division Summer Meeting collocated with the Heat Transfer, Energy Sustainability, and 3rd Energy Nanotechnology Conferences. ASMEDC, 2008. http://dx.doi.org/10.1115/fedsm2008-55157.

Abstract:
Simultaneous knowledge of the entire pressure, acceleration and velocity fields in a region of a flow is a major factor in understanding and modeling a case under study, especially for fluid dynamics and engineering applications. At present, the accuracy of velocity maps from particle image velocimetry (PIV), using higher-order cross-correlation algorithms with advanced post-processing (filtering, removal and replacement of spurious data, and smoothing), is in an acceptable range. Using PIV velocity data to determine the acceleration and pressure distributions, however, accumulates error: the inaccuracy of the acceleration and pressure data is several times greater than that of the velocity data, so the need for accurate algorithms cannot be ignored. In this paper, a synthetic image generation code is used to create benchmark images for an unsteady forced vortex flow with known velocity, acceleration and pressure data; these known data are necessary to assess the accuracy of the results. Different acceleration methods, including pseudo-tracing, regression and central finite differences, are introduced and compared. In addition, the influence of the parameters involved (the time interval between the velocity fields Δt, cell size, and overlap) is studied on the synthetic images. The results show that the methods depend strongly on the time interval Δt, and increasing it improves the accuracy until a critical Δt is reached. In steady flows the methods are time independent, although the tracing method still introduces a time step. Among all the methods, the tracing method gives the most accurate acceleration results for both steady and unsteady flows. The Navier-Stokes equations are used for pressure estimation since they capture more details of the flow field; the pressure gradients are integrated using a numerical integration method that shows high accuracy for images with no bluff body inside them.
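For reference, the central-finite-difference estimate of acceleration named in the abstract amounts to the material derivative of the velocity field. The sketch below computes the x-component from three consecutive PIV fields; the array layout and grid spacings are assumptions, not the paper's code.

```python
import numpy as np

def material_acceleration(u_prev, u_now, u_next, v_now, dt, dx, dy):
    """Central-difference estimate of the x-acceleration from PIV velocity fields.

    a_x = du/dt + u du/dx + v du/dy, with the local term taken as a central
    difference over fields at t - dt, t, t + dt (2-D arrays on a regular grid).
    """
    du_dt = (u_next - u_prev) / (2.0 * dt)       # local (Eulerian) term
    du_dy, du_dx = np.gradient(u_now, dy, dx)    # spatial derivatives at time t
    return du_dt + u_now * du_dx + v_now * du_dy
```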
7

Freund, S., and S. Kabelac. "Measurement of Local Convective Heat Transfer Coefficients With Temperature Oscillation IR Thermography and Radiant Heating." In ASME 2005 Summer Heat Transfer Conference collocated with the ASME 2005 Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems. ASMEDC, 2005. http://dx.doi.org/10.1115/ht2005-72855.

Abstract:
A method using temperature oscillations to measure local convection coefficients from the outside of a heat-transferring wall has been developed. The method is contact-free, employing radiant heating with a laser and an IR camera for surface temperature measurements. The numerical model extends previous research to three dimensions and allows rapid evaluation of the convection coefficient distribution over sizable heat exchanger areas. The technique relies first on experimental data for the phase lag of the surface temperature response to periodic heating, and second on a numerical model of the heat-transferring wall that computes the local convection coefficients from the processed data. The temperature data processing includes an algorithm for temperature drift compensation and single-frequency discrete Fourier transformations. The inverse heat conduction problem of deriving a surface map of convection coefficients from the phase-lag data is solved with a new numerical approach based on a complex 3-D finite-difference method. To validate the experimental approach, measurements of the temperature response of a semi-infinite specimen were analyzed; the results agreed with the analytical solution to within 1.6%. The numerical model was verified by comparison with data generated by the FEM program ANSYS. The results of preliminary experiments investigating the local Nusselt number of water entering a tube are in agreement with established correlations. Future applications of this method will involve an aerodynamic vortex generator in a wind tunnel and plate heat exchangers. Another possible application of the experimental method is the non-destructive testing of materials known as lock-in thermography.
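The single-frequency discrete Fourier transform used in the data processing reduces to projecting the temperature trace onto a complex exponential at the modulation frequency. Below is a minimal sketch with synthetic data; the real method also references the heating waveform's phase, which this toy example omits.

```python
import numpy as np

def phase_at_frequency(signal, t, f_mod):
    """Phase of a sampled signal at the modulation frequency f_mod.

    Single-frequency DFT: project onto exp(-2*pi*i*f_mod*t) and take the argument.
    The phase lag is this value minus the heating phase at the same frequency.
    """
    z = np.sum(signal * np.exp(-2j * np.pi * f_mod * t))
    return np.angle(z)

# Example: a trace with a known 0.5 rad lag at 1 Hz is recovered.
t = np.linspace(0.0, 10.0, 5001)
sig = np.cos(2 * np.pi * 1.0 * t - 0.5)
print(phase_at_frequency(sig, t, 1.0))   # approximately -0.5
```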
8

Foster, R. E., and T. A. Shedd. "PIV Measurements Within the Waves of Wavy and Wavy-Annular Horizontal Two-Phase Flow." In ASME 2005 Summer Heat Transfer Conference collocated with the ASME 2005 Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems. ASMEDC, 2005. http://dx.doi.org/10.1115/ht2005-72669.

Abstract:
A novel technique of microscopic Particle Image Velocimetry (PIV) is presented for two-phase annular, wavy-annular and stratified flow. Seeding of opaque particles in a water/dye flow allows the acquisition of instantaneous film velocity data in the film cross-section at the center of the tube, in the form of digital image pairs. An image processing algorithm is also described that allows numerical velocities to be distilled from the particle images by commercial PIV software. The approach yields promising results for stratified and wavy-annular flows; however, highly bubbly flows remain difficult to image and post-process. Initial data images are presented in raw and processed form.
9

Moriceau, Véronique. "Numerical data integration for cooperative question-answering." In the Workshop KRAQ'06. Morristown, NJ, USA: Association for Computational Linguistics, 2006. http://dx.doi.org/10.3115/1641493.1641501.

10

Knudsen, Ole O̸ystein, Egil Giertsen, Per Schjo̸lberg-Henriksen, Erling O̸stby, Knut Grythe, and Fro̸ydis Oldervoll. "SmartPipe: Self Diagnostic Pipelines and Risers." In ASME 2007 26th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2007. http://dx.doi.org/10.1115/omae2007-29485.

Abstract:
The principal objective of the SmartPipe R&D project is to develop a concept for online monitoring of the technical condition of offshore pipelines and risers, suitable for integration into E-field concepts and integrated process management approaches. This will be achieved by developing and linking different technologies into a condition monitoring system that enables rapid detection of dangerous situations, optimal use of production facilities and improved safety control. Sub-goals include developing a distributed sensor network, local power supply and communication infrastructure, improving the models for materials degradation, and developing better numerical tools for structural integrity analyses. The parameters that must be measured to establish a complete condition monitoring system for offshore pipelines have been identified, and on this basis the necessary sensor network and sensor principles are evaluated and adapted. Integrating the sensors into the pipe structure, with emphasis on a packaging concept that withstands the loads introduced during installation and operation, is a major challenge. Adequate communication infrastructure and concepts for energy supply are also evaluated. Data retrieved from the sensor network will be processed through a coupling of numerical analysis tools and improved models for materials degradation, reflecting the effects of local conditions and previous load history. The results will be presented through a graphical user interface, allowing visualization and the possibility to run simulations regarding future operation.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Numerical integration – Data processing"

1

Ernenwein, Eileen, Michael L. Hargrave, Jackson Cothren, and George Avery. Streamlined Archaeo-geophysical Data Processing and Integration for DoD Field Use. Fort Belvoir, VA: Defense Technical Information Center, April 2012. http://dx.doi.org/10.21236/ada571820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dodge, D. Report on a new architecture to support integration and processing of seismic data from heterogeneous sources. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1658693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Powers, Michael H. Improving Ground Penetrating Radar Imaging in High Loss Environments by Coordinated System Development, Data Processing, Numerical Modeling, & Visualization ... Office of Scientific and Technical Information (OSTI), June 2003. http://dx.doi.org/10.2172/838446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wright, David L. Improving Ground Penetrating Radar Imaging in High Loss Environments by Coordinated System Development, Data Processing, Numerical Modeling, & Visualization. Office of Scientific and Technical Information (OSTI), December 2004. http://dx.doi.org/10.2172/850393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wright, David, Michael Powers, Charles Oden, and Craig Moulton. Improving Ground Penetrating Radar Imaging in High Loss Environments by Coordinated System Development, Data Processing, Numerical Modeling, and Visualization Methods with Applications to Site Characterization. Office of Scientific and Technical Information (OSTI), October 2006. http://dx.doi.org/10.2172/895009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wright, David L. Improving Ground Penetrating Radar Imaging in High Loss Environments by Coordinated System Development, Data Processing, Numerical Modeling, and Visualization Methods with Applications to Site Characterization. Office of Scientific and Technical Information (OSTI), June 2003. http://dx.doi.org/10.2172/838443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shabelnyk, Tetiana V., Serhii V. Krivenko, Nataliia Yu Rotanova, Oksana F. Diachenko, Iryna B. Tymofieieva, and Arnold E. Kiv. Integration of chatbots into the system of professional training of Masters. [N. p.], June 2021. http://dx.doi.org/10.31812/123456789/4439.

Full text
Abstract:
The article presents and describes innovative training technologies in the professional training of Masters. High-quality training of students in technical specialties requires rethinking the purpose, learning outcomes, and means of teaching professional disciplines under modern educational conditions. The experience of implementing a chatbot tool in teaching the discipline "Mathematical modeling of socio-economic systems" in the educational and professional program 124 System Analysis is described. The generalized structure of the investment analysis chatbot information system is characterized: input information, an information processing system, and output information, forming a closed cycle of direct and feedback interaction. The information processing system is represented by accounting and analytical data management blocks. The investment analysis chatbot will help masters in system analysis manage the investment process efficiently: making sound decisions, understanding investment analysis within the broader structure of financial management, and optimizing risks, all through a working mobile application. The chatbot will also allow systematic assessment of the advantages and disadvantages of investment projects or of a system analyst's line of activity, while increasing interest in performing practical tasks. The software set for developing the chatbot integrated into training comprises the Kotlin programming language and the Retrofit library for network interaction, used for receiving and transmitting data and linking processes via the HTTP API. Based on the results of the study, it is noted that integrating a chatbot into the training of Masters supports the development of their professional activity, enabling them to become competent specialists and contributing to the organization of high-quality training.
APA, Harvard, Vancouver, ISO, and other styles
8

Blundell, S. User guide: the DEM Breakline and Differencing Analysis Tool—gridded elevation model analysis with a convenient graphical user interface. Engineer Research and Development Center (U.S.), August 2022. http://dx.doi.org/10.21079/11681/45040.

Full text
Abstract:
Gridded elevation models of the earth’s surface derived from airborne lidar data or other sources can provide qualitative and quantitative information about the terrain and its surface features through analysis of the local spatial variation in elevation. The DEM Breakline and Differencing Analysis Tool was developed to extract and display micro-terrain features and vegetative cover based on the numerical modeling of elevation discontinuities or breaklines (breaks-in-slope), slope, terrain ruggedness, local surface optima, and the local elevation difference between first surface and bare earth input models. Using numerical algorithms developed in-house at the U.S. Army Engineer Research and Development Center, Geospatial Research Laboratory, various parameters are calculated for each cell in the model matrix in an initial processing phase. The results are combined and thresholded by the user in different ways for display and analysis. A graphical user interface provides control of input models, processing, and display as color-mapped overlays. Output displays can be saved as images, and the overlay data can be saved as raster layers for input into geographic information systems for further analysis.
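As a rough illustration of the kinds of per-cell parameters such a tool computes, the Python sketch below derives slope from a bare-earth grid, forms a first-surface-minus-bare-earth height layer, and thresholds the slope for display. The function name, parameters, and threshold are assumptions for illustration, not the tool's actual algorithms:

```python
import numpy as np

def dem_features(bare_earth, first_surface, cell_size, slope_thresh_deg=30.0):
    """Derive simple per-cell features from gridded elevation models:
    slope from the bare-earth DEM, and vegetation/structure height from
    the first-surface minus bare-earth difference."""
    # Slope from central differences on the elevation grid.
    dz_dy, dz_dx = np.gradient(bare_earth, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Difference model: height of canopy/features above the ground surface.
    height = first_surface - bare_earth
    # User-controlled thresholding for display as a binary overlay.
    steep = slope_deg > slope_thresh_deg
    return slope_deg, height, steep
```

The resulting arrays could be written out as raster layers for further GIS analysis, in the spirit of the tool's color-mapped overlays.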
APA, Harvard, Vancouver, ISO, and other styles
9

DeMarle, David, and Andrew Bauer. In situ visualization with temporal caching. Engineer Research and Development Center (U.S.), January 2022. http://dx.doi.org/10.21079/11681/43042.

Full text
Abstract:
In situ visualization is a technique in which plots and other visual analyses are performed in tandem with numerical simulation processes in order to better utilize HPC machine resources. Especially in unattended exploratory engineering simulation analyses, events may occur during the run that justify supplemental processing. Sometimes, though, when the events do occur, the phenomena of interest include the physics that precipitated them, and that preceding physics may be the key insight into understanding what is being simulated. In situ temporal caching is the temporary storing of produced data in memory for possible later analysis, including time-varying visualization. The later analysis and visualization still occur during the simulation run, but not until after the significant events have been detected. In this article, we demonstrate how temporal caching can be used with in-line in situ visualization to reduce simulation run time while still capturing essential simulation results.
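The idea of temporal caching can be sketched in a few lines of Python: recent steps are kept in a bounded in-memory buffer and handed to the visualization pipeline only once an event is detected, so the physics preceding the event is not lost. This is a minimal sketch with hypothetical names; the article's actual implementation lives inside an in situ visualization framework:

```python
from collections import deque

class TemporalCache:
    """Fixed-length in-memory cache of recent simulation steps, flushed to
    the visualization pipeline only when an event of interest is detected."""

    def __init__(self, max_steps=50):
        # Oldest steps fall off automatically once the window is full.
        self.buffer = deque(maxlen=max_steps)

    def store(self, step, fields):
        self.buffer.append((step, fields))

    def flush_if(self, event_detected, visualize):
        # On an event, replay the cached window so the visualization
        # includes the time steps that preceded the event.
        if event_detected:
            for step, fields in self.buffer:
                visualize(step, fields)
            self.buffer.clear()
```

Because the buffer is bounded, memory use stays fixed regardless of run length, which is what makes the approach viable alongside a running simulation.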
APA, Harvard, Vancouver, ISO, and other styles
10

Modlo, Yevhenii O., Serhiy O. Semerikov, Stanislav L. Bondarevskyi, Stanislav T. Tolmachev, Oksana M. Markova, and Pavlo P. Nechypurenko. Methods of using mobile Internet devices in the formation of the general scientific component of bachelor in electromechanics competency in modeling of technical objects. [N. p.], February 2020. http://dx.doi.org/10.31812/123456789/3677.

Full text
Abstract:
An analysis of the experience of professional training of bachelors of electromechanics in Ukraine and abroad shows that one of the leading trends in its modernization is the synergistic integration of various engineering branches (mechanical, electrical, and electronic engineering and automation) into mechatronics for the design, manufacture, operation, and maintenance of electromechanical equipment. Teaching mechatronics calls for the meaningful integration of the various disciplines of professional and practical training of bachelors of electromechanics based on the concept of modeling, and for the technological integration of various organizational forms and teaching methods based on the concept of mobility. Within this approach, the leading learning tools for bachelors of electromechanics are mobile Internet devices (MID): multimedia mobile devices that provide wireless access to information and communication Internet services for collecting, organizing, storing, processing, transmitting, and presenting all kinds of messages and data. The authors describe the main possibilities of using MID in learning: ensuring equal access to education, personalized learning, instant feedback and evaluation of learning outcomes, mobile learning, productive use of time spent in classrooms, creation of mobile learning communities, support for situated learning, development of continuous seamless learning, bridging the gap between formal and informal learning, minimizing educational disruption in conflict and disaster areas, assisting learners with disabilities, improving the quality of communication and institutional management, and maximizing cost-efficiency. The bachelor of electromechanics' competency in modeling technical objects is a personal and vocational ability comprising a system of knowledge, skills, and experience in learning and research activities on modeling mechatronic systems, together with a positive value attitude towards it; the bachelor of electromechanics should be ready and able to use methods and software/hardware modeling tools for process analysis, system synthesis, and evaluation of reliability and effectiveness in solving practical problems in the professional field. The structure of this competency is reflected in three groups of competencies: general scientific, general professional, and specialized professional. The technique of using MID in teaching bachelors of electromechanics to model technical objects is implemented through partial methods for using MID in forming the general scientific component of the competency, illustrated with the academic disciplines "Higher mathematics", "Computers and programming", "Engineering mechanics", and "Electrical machines". The leading tools for forming this general scientific component are mobile augmented reality tools (to visualize the structure of objects and modeling results), mobile computer mathematical systems (universal tools used at all stages of modeling learning), cloud-based spreadsheets (as modeling tools) and text editors (to write the program description of a model), mobile computer-aided design systems (to create and view the physical properties of models of technical objects), and mobile communication tools (to organize joint activity in modeling).
APA, Harvard, Vancouver, ISO, and other styles