Journal articles on the topic 'Numerical integration – Data processing'

To see the other types of publications on this topic, follow the link: Numerical integration – Data processing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Numerical integration – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kong, Xiaofang, Wenguang Yang, Hong'e Luo, and Baoming Li. "Application of Stabilized Numerical Integration Method in Acceleration Sensor Data Processing." IEEE Sensors Journal 21, no. 6 (March 15, 2021): 8194–203. http://dx.doi.org/10.1109/jsen.2021.3051193.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lv, Chao, and Xiao Hong Pan. "Integration and Implementation of Numerical Design and Manufacturing Data Management System for Beverage Bottle." Applied Mechanics and Materials 155-156 (February 2012): 663–67. http://dx.doi.org/10.4028/www.scientific.net/amm.155-156.663.

Full text
Abstract:
To achieve digitization and automation in the manufacturing of beverage bottles, a data management system for the numerical design and manufacturing of beverage bottles was proposed; the system is required for the rheological tests, the simulation of the stretch-blow process, and the optimization of the processing parameters. Several key technologies were also illustrated in detail, including the processing of experimental data, the seamless integration of the various systems, and the data mining and optimization of processing parameters. Application of this system provides technical support that reduces the experimental workload, improves the manufacturing process, guides operations in the workshop, shortens the design cycle and improves efficiency.
APA, Harvard, Vancouver, ISO, and other styles
3

Zadiraka, V. K., L. V. Luts, and I. V. Shvidchenko. "Optimal Numerical Integration." Cybernetics and Computer Technologies, no. 4 (December 31, 2020): 47–64. http://dx.doi.org/10.34229/2707-451x.20.4.4.

Full text
Abstract:
Introduction. In many applied problems, such as statistical data processing, digital filtering, computed tomography, pattern recognition, and many others, there is a need for numerical integration, moreover, with a given (often quite high) accuracy. Classical quadrature formulas cannot always provide the required accuracy, since, as a rule, they do not take into account the oscillation of the integrand. In this regard, the development of methods for constructing optimal in accuracy (and close to them) quadrature formulas for the integration of rapidly oscillating functions is rather important and topical problem of computational mathematics. The purpose of the article is to use the example of constructing optimal in accuracy (and close to them) quadrature formulas for calculating integrals for integrands of various degrees of smoothness and for oscillating factors of different types and constructing a priori estimates of their total error, as well as applying to them of the theory of testing the quality of algorithms-programs to create a theory of optimal numerical integration. Results. The optimal in accuracy (and close to them) quadrature formulas for calculating the Fourier transform, wavelet transforms, and Bessel transform were constructed both in the classical formulation of the problem and for interpolation classes of functions corresponding to the case when the information operator about the integrand is given by a fixed table of its values. The paper considers a passive pure minimax strategy for solving the problem. Within the framework of this strategy, we used the method of “caps” by N. S. Bakhvalov and the method of boundary functions developed at the V.M. Glushkov Institute of Cybernetics of the NAS of Ukraine. Great attention is paid to the quality of the error estimates and the methods to obtain them. The article describes some aspects of the theory of algorithms-programs testing and presents the results of testing the constructed quadrature formulas for calculating integrals of rapidly oscillating functions and estimates of their characteristics. The problem of determining the ranges of admissible values of control parameters of programs for calculating integrals with the required accuracy, as well as their best values for integration with the minimum possible error, is considered for programs calculating a priori estimates of characteristics. Conclusions. The results obtained make it possible to create a theory of optimal integration, which makes it possible to reasonably choose and efficiently use computational resources to find the value of the integral with a given accuracy or with the minimum possible error. Keywords: quadrature formula, optimal algorithm, interpolation class, rapidly oscillating function, quality testing.
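As a rough, self-contained illustration of why oscillation-aware quadrature matters (this is a generic Filon-type rule, not the authors' optimal formulas), the following Python sketch compares an ordinary trapezoidal rule with a rule that integrates the cosine factor exactly against a piecewise-linear interpolant of the smooth factor; the integrand, frequency and node count are arbitrary choices.

```python
# Hedged illustration (not the paper's optimal formulas): a Filon-type rule that
# integrates the oscillatory factor cos(omega*x) exactly against a piecewise-linear
# interpolant of f, compared with the ordinary trapezoidal rule on the same nodes.
import math

def trapezoid(f, a, b, n, omega):
    h = (b - a) / n
    s = 0.5 * (f(a) * math.cos(omega * a) + f(b) * math.cos(omega * b))
    s += sum(f(a + i * h) * math.cos(omega * (a + i * h)) for i in range(1, n))
    return h * s

def filon_linear(f, a, b, n, omega):
    # On each panel, f is replaced by a linear interpolant and the moments of
    # cos(omega*x) are evaluated analytically, so the oscillation costs nothing.
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        c0, c1 = f(x0), (f(x1) - f(x0)) / h
        m0 = (math.sin(omega * x1) - math.sin(omega * x0)) / omega
        m1 = h * math.sin(omega * x1) / omega + \
             (math.cos(omega * x1) - math.cos(omega * x0)) / omega ** 2
        total += c0 * m0 + c1 * m1
    return total

omega, f = 200.0, math.exp
exact = (math.exp(1) * (math.cos(omega) + omega * math.sin(omega)) - 1) / (1 + omega ** 2)
print("trapezoid error :", abs(trapezoid(f, 0.0, 1.0, 50, omega) - exact))
print("Filon-type error:", abs(filon_linear(f, 0.0, 1.0, 50, omega) - exact))
```

With 50 nodes and omega = 200, the trapezoidal rule cannot resolve the oscillation while the Filon-type rule keeps only the interpolation error of the smooth factor, which is the effect the accuracy-optimal formulas above exploit systematically.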
APA, Harvard, Vancouver, ISO, and other styles
4

Wei, Baoli, and Meng Lv. "CAD Integration of Mechanical Numerical Control Board Parts Based on Machining Features." Computer-Aided Design and Applications 18, S3 (October 20, 2020): 176–87. http://dx.doi.org/10.14733/cadaps.2021.s3.176-187.

Full text
Abstract:
The development and application of computer-aided design (CAD) technology has led to rapid improvements in product design automation, crafting process automation and numerical control programming automation. Machining feature refers to basic configuration units that constitute part shapes and the collection of non-geometric information with engineering semantics attached to it. The integration of mechanical numerical control parts is the integration of part design features and machining features, and each feature corresponds to a set of processing methods. Based on the summaries and analyses of previous research works, this paper expounded the current status and significance of mechanical numerical control board part integration, elaborated the development background, current status and future challenges of machining features and CAD technology, introduced a data transfer method of CAD integration and machining features-based part integration system, analyzed the design and machining features of CAD integration of board parts, constructed the graphics processing model and information reorganization model for CAD integration of board parts; conducted the feature description and modeling analysis of CAD integration of plate parts; discussed the crafting information similarity of mechanical numerical control plate part integration; explored the feature information and expression of feature library for plate parts integration.
APA, Harvard, Vancouver, ISO, and other styles
5

Candy, A. S., A. Avdis, J. Hill, G. J. Gorman, and M. D. Piggott. "Integration of Geographic Information System frameworks into domain discretisation and meshing processes for geophysical models." Geoscientific Model Development Discussions 7, no. 5 (September 11, 2014): 5993–6060. http://dx.doi.org/10.5194/gmdd-7-5993-2014.

Full text
Abstract:
Abstract. Computational simulations of physical phenomena rely on an accurate discretisation of the model domain. Numerical models have increased in sophistication to a level where it is possible to support terrain-following boundaries that conform accurately to real physical interfaces, and resolve a multiscale of spatial resolutions. Whilst simulation codes are maturing in this area, pre-processing tools have not developed significantly enough to competently initialise these problems in a rigorous, efficient and recomputable manner. In the relatively disjoint field of Geographic Information Systems (GIS) however, techniques and tools for mapping and analysis of geographical data have matured significantly. If data provenance and recomputability are to be achieved, the manipulation and agglomeration of data in the pre-processing of numerical simulation initialisation data for geophysical models should be integrated into GIS. A new approach to the discretisation of geophysical domains is presented, and introduced with a verified implementation. This brings together the technologies of geospatial analysis, meshing and numerical simulation models. This platform enables us to combine and build up features, quickly drafting and updating mesh descriptions with the rigour that established GIS tools provide. This, combined with the systematic workflow, supports a strong provenance for model initialisation and encourages the convergence of standards.
APA, Harvard, Vancouver, ISO, and other styles
6

Choi, Heeyoul, Seungjin Choi, and Yoonsuck Choe. "Parameter Learning for Alpha Integration." Neural Computation 25, no. 6 (June 2013): 1585–604. http://dx.doi.org/10.1162/neco_a_00445.

Full text
Abstract:
In pattern recognition, data integration is an important issue, and when properly done, it can lead to improved performance. Also, data integration can be used to help model and understand multimodal processing in the brain. Amari proposed α-integration as a principled way of blending multiple positive measures (e.g., stochastic models in the form of probability distributions), enabling an optimal integration in the sense of minimizing the α-divergence. It also encompasses existing integration methods as its special case, for example, a weighted average and an exponential mixture. The parameter α determines integration characteristics, and the weight vector w assigns the degree of importance to each measure. In most work, however, α and w are given in advance rather than learned. In this letter, we present a parameter learning algorithm for learning α and w from data when multiple integrated target values are available. Numerical experiments on synthetic as well as real-world data demonstrate the effectiveness of the proposed method.
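For readers unfamiliar with the construction, a minimal numerical sketch of the α-mean itself, assuming the usual α-representation from Amari's work; the learning of α and w described in the letter is not reproduced here.

```python
# A minimal sketch of alpha-integration of positive measures (assumed form of
# Amari's alpha-mean; the learning of alpha and w from the paper is omitted).
import numpy as np

def alpha_mean(measures, w, alpha):
    """Blend positive measures m_i with weights w using the alpha-representation
    f(u) = log(u) if alpha == 1 else u**((1 - alpha) / 2)."""
    m = np.asarray(measures, dtype=float)   # shape: (n_measures, n_points)
    w = np.asarray(w, dtype=float)[:, None]
    if alpha == 1.0:
        return np.exp(np.sum(w * np.log(m), axis=0))   # geometric-type blend
    p = (1.0 - alpha) / 2.0
    return np.sum(w * m ** p, axis=0) ** (1.0 / p)

p1 = np.array([0.2, 0.5, 0.3])
p2 = np.array([0.4, 0.4, 0.2])
print(alpha_mean([p1, p2], [0.5, 0.5], alpha=-1.0))  # alpha = -1: weighted average
print(alpha_mean([p1, p2], [0.5, 0.5], alpha=1.0))   # alpha = 1: geometric blend
```

Setting α = -1 recovers the weighted average and α = 1 the (unnormalized) geometric blend underlying the exponential mixture mentioned in the abstract.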
APA, Harvard, Vancouver, ISO, and other styles
7

Seidel, Edward, and Wai-Mo Suen. "NUMERICAL RELATIVITY." International Journal of Modern Physics C 05, no. 02 (April 1994): 181–87. http://dx.doi.org/10.1142/s012918319400012x.

Full text
Abstract:
The present status of numerical relativity is reviewed. There are five closely interconnected aspects of numerical relativity: (1) Formulation. The general covariant Einstein equations are reformulated in a way suitable for numerical study by separating the 4-dimensional spacetime into a 3-dimensional space evolving in time. (2) Techniques. A set of tools is developed for determining gauge choices, setting boundary and initial conditions, handling spacetime singularities, etc. As required by the special physical and mathematical properties of general relativity, such techniques are indispensable for the numerical evolutions of spacetime. (3) Coding. The optimal use of parallel processing is crucial for many problems in numerical relativity, due to the intrinsic complexity of the theory. (4) Visualization. Numerical relativity is about the evolutions of 3-dimensional geometric structures. There are special demands on visualization. (5) Interpretation and Understanding. The integration of numerical data in relativity into a consistent physical picture is complicated by gauge and coordinate degrees of freedom and other difficulties. We give a brief overview of the progress made in these areas.
APA, Harvard, Vancouver, ISO, and other styles
8

Weisscher, Steven A. H., Marcio Boechat-Albernaz, Jasper R. F. W. Leuven, Wout M. Van Dijk, Yasuyuki Shimizu, and Maarten G. Kleinhans. "Complementing scale experiments of rivers and estuaries with numerically modelled hydrodynamics." Earth Surface Dynamics 8, no. 4 (November 16, 2020): 955–72. http://dx.doi.org/10.5194/esurf-8-955-2020.

Full text
Abstract:
Abstract. Physical scale experiments enhance our understanding of fluvial, tidal and coastal processes. However, it has proven challenging to acquire accurate and continuous data on water depth and flow velocity due to limitations of the measuring equipment and necessary simplifications during post-processing. A novel means to augment measurements is to numerically model flow over the experimental digital elevation models. We investigated to what extent the numerical hydrodynamic model Nays2D can reproduce unsteady, nonuniform shallow flow in scale experiments and under which conditions a model is preferred to measurements. To this end, we tested Nays2D for one tidal and two fluvial scale experiments and extended Nays2D to allow for flume tilting, which is necessary to steer tidal flow. The modelled water depth and flow velocity closely resembled the measured data for locations where the quality of the measured data was most reliable, and model results may be improved by applying a spatially varying roughness. The implication of the experimental data–model integration is that conducting experiments requires fewer measurements and less post-processing in a simple, affordable and labour-inexpensive manner that results in continuous spatio-temporal data of better overall quality. Also, this integration will aid experimental design.
APA, Harvard, Vancouver, ISO, and other styles
9

Palamarchuk, Yu O., S. V. Ivanov, and I. G. Ruban. "The digitizing algorithm for precipitation in the atmosphere on the base of radar measurements." Ukrainian hydrometeorological journal, no. 18 (October 29, 2017): 40–47. http://dx.doi.org/10.31481/uhmj.18.2016.05.

Full text
Abstract:
There is an increasing demand for automated high-quality very-short-range forecasts and nowcasts of precipitation on small scales and at high update frequencies. Current prediction systems use different methods of determining precipitation, such as area tracking, individual cell tracking and numerical models. All approaches are based on radar measurements. World-leading manufacturers of meteorological radars and the accompanying visualization software are introduced in the paper. Advantages of numerical modelling over inertial schemes built on statistical characteristics of convective processes are outlined. Along this line, radar data assimilation systems, as a necessary part of numerical models, are being intensively developed. In response, the use of digital formats for the processing of radar measurements in numerical algorithms has become important. The focus of this work is the development of a unified code for the digital processing of radar signals at the preprocessing, filtration, assimilation and numerical integration steps. The proposed code also includes thinning, screening or superobbing of radar data before passing them to the assimilation procedures. The informational model manages radar data flows in the form of metadata and binary arrays. The model follows the official second-generation European standard exchange format for weather radar datasets from different manufacturers. Results of radar measurement processing are presented both for a single radar and for an overlapping radar network.
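As one concrete example of the pre-processing steps mentioned above, here is a hedged Python sketch of "superobbing", i.e. averaging raw radar gates into coarser range/azimuth boxes before assimilation; the box sizes and the reflectivity variable are assumptions made for illustration, not the paper's configuration.

```python
# Hedged sketch of radar "superobbing": averaging raw reflectivity observations
# into coarser range/azimuth boxes before data assimilation (grid sizes assumed).
import numpy as np

def superob(rng_m, az_deg, refl_dbz, d_range=2000.0, d_az=5.0):
    """Average observations falling into (d_range x d_az) boxes; returns
    box centres and mean reflectivity, discarding empty boxes."""
    ir = (np.asarray(rng_m) // d_range).astype(int)
    ia = (np.asarray(az_deg) // d_az).astype(int)
    boxes = {}
    for r, a, z in zip(ir, ia, refl_dbz):
        boxes.setdefault((r, a), []).append(z)
    return [((r + 0.5) * d_range, (a + 0.5) * d_az, float(np.mean(v)))
            for (r, a), v in sorted(boxes.items())]   # (range_m, az_deg, mean_dBZ)

# toy example: three raw gates collapse into two super-observations
print(superob([500, 1500, 2500], [1.0, 2.0, 7.0], [20.0, 26.0, 35.0]))
```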
APA, Harvard, Vancouver, ISO, and other styles
10

Akanbi, Adeyinka, and Muthoni Masinde. "A Distributed Stream Processing Middleware Framework for Real-Time Analysis of Heterogeneous Data on Big Data Platform: Case of Environmental Monitoring." Sensors 20, no. 11 (June 3, 2020): 3166. http://dx.doi.org/10.3390/s20113166.

Full text
Abstract:
In recent years, the application and wide adoption of Internet of Things (IoT)-based technologies have increased the proliferation of monitoring systems, which has consequently exponentially increased the amounts of heterogeneous data generated. Processing and analysing the massive amount of data produced is cumbersome and gradually moving from classical ‘batch’ processing—extract, transform, load (ETL) technique to real-time processing. For instance, in environmental monitoring and management domain, time-series data and historical dataset are crucial for prediction models. However, the environmental monitoring domain still utilises legacy systems, which complicates the real-time analysis of the essential data, integration with big data platforms and reliance on batch processing. Herein, as a solution, a distributed stream processing middleware framework for real-time analysis of heterogeneous environmental monitoring and management data is presented and tested on a cluster using open source technologies in a big data environment. The system ingests datasets from legacy systems and sensor data from heterogeneous automated weather systems irrespective of the data types to Apache Kafka topics using Kafka Connect APIs for processing by the Kafka streaming processing engine. The stream processing engine executes the predictive numerical models and algorithms represented in event processing (EP) languages for real-time analysis of the data streams. To prove the feasibility of the proposed framework, we implemented the system using a case study scenario of drought prediction and forecasting based on the Effective Drought Index (EDI) model. Firstly, we transform the predictive model into a form that could be executed by the streaming engine for real-time computing. Secondly, the model is applied to the ingested data streams and datasets to predict drought through persistent querying of the infinite streams to detect anomalies. As a conclusion of this study, a performance evaluation of the distributed stream processing middleware infrastructure is calculated to determine the real-time effectiveness of the framework.
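To make the ingestion-plus-streaming idea concrete, a minimal hedged sketch follows; it assumes the kafka-python client, a hypothetical topic name and message schema, and uses a crude standardized rainfall deficit in place of the full Effective Drought Index computation described in the paper.

```python
# Hedged sketch (assumes the kafka-python client, a hypothetical topic name, and a
# simplified index in place of the full Effective Drought Index computation).
import json
from collections import deque
from kafka import KafkaConsumer

consumer = KafkaConsumer("weather-observations",          # hypothetical topic
                         bootstrap_servers="localhost:9092",
                         value_deserializer=lambda b: json.loads(b.decode("utf-8")))

window = deque(maxlen=365)          # rolling one-year window of daily rainfall
for record in consumer:
    rain = float(record.value.get("precip_mm", 0.0))
    window.append(rain)
    if len(window) == window.maxlen:
        mean = sum(window) / len(window)
        deficit = (rain - mean) / (mean + 1e-9)   # crude standardized deficit
        if deficit < -0.8:                        # assumed anomaly threshold
            print("possible drought signal:", record.value.get("station"), deficit)
```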
APA, Harvard, Vancouver, ISO, and other styles
11

Eh Chonga, Shorayha, Maheyzah Md. Siraj, Nurul Adibah Rahmat, and Mazura Mat Din. "Integration of PSO and Clustering algorithms for privacy preserving data mining." International Journal Artificial Intelligent and Informatics 2, no. 2 (April 1, 2022): 108–16. http://dx.doi.org/10.33292/ijarlit.v2i2.42.

Full text
Abstract:
Privacy Preserving Data Mining (PPDM) has become an important research area. Several issues and problems related to PPDM have been identified. Information loss occurs when the original data are modified to keep those data private, and PPDM also tends to lower data quality. The aim of this research is to minimize information loss and increase the accuracy of the mining result while maintaining the privacy level of the data. A randomization approach based on optimization and clustering algorithms is proposed in order to minimize the information loss and improve the accuracy of data clustering quality for PPDM results. There are three main objectives for this research. The first is to perform data pre-processing through the normalization process and the k-Anonymity algorithm. The second objective is to minimize data loss and increase the accuracy of the data mining result using Particle Swarm Optimization and clustering algorithms. The third objective is to evaluate and benchmark the performance of the enhanced PPDM in terms of privacy level and data quality. A diabetes dataset is used in this research, and all attribute values are numerical. The outcome of this research is that the privacy level of the dataset is increased while the information loss is minimized. The experimental results also show that the accuracy of data clustering quality can be preserved using the PSO algorithm.
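A hedged sketch of the general recipe only: min-max normalisation followed by a small particle swarm searching for cluster centres. The parameters, fitness function and stand-in data are illustrative and are not the paper's PPDM formulation.

```python
# Hedged sketch: min-max normalisation, then a tiny PSO that searches k cluster
# centres by minimising the within-cluster sum of squared distances. Parameters,
# fitness and data are illustrative, not the paper's PPDM method.
import numpy as np

rng = np.random.default_rng(0)

def normalise(x):                       # min-max scaling to [0, 1]
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo + 1e-12)

def sse(centres, x):                    # within-cluster sum of squared distances
    d = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()

def pso_cluster(x, k=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = k * x.shape[1]
    pos = rng.random((particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([sse(p.reshape(k, -1), x) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        f = np.array([sse(p.reshape(k, -1), x) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest.reshape(k, -1)

data = normalise(rng.normal(size=(100, 4)))   # stand-in for numerical attributes
print(pso_cluster(data, k=2))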
APA, Harvard, Vancouver, ISO, and other styles
12

He, Qing Qiang, Jia Sun, Jun You Zhao, Bao Min Yuan, and Li Jian Xu. "Numerical Analysis of Multi-Pass H-Beam Hot Rolling Processing." Applied Mechanics and Materials 190-191 (July 2012): 385–89. http://dx.doi.org/10.4028/www.scientific.net/amm.190-191.385.

Full text
Abstract:
Hot rolling is a basic metal forming technique used to transform a preformed shape into final products in forms more suitable for further processing. As the hot stock progresses through the forming surfaces, its shape eventually reaches a constant state. Under the assumption that the forming process has reached a steady-state condition, a simulation technique based on element re-meshing has been constructed to analyze the H-beam hot rolling process. The technique includes the following approaches: the solution was halted as soon as the steady-state criteria were met, and the plane of elements that first satisfied the steady-state criteria was written to a database (SSES for short); a two-dimensional model was created to model the hot stock cooling between two roll passes, and a geometric part was generated and meshed with quadrilateral elements to transfer the nodal temperatures; a new three-dimensional model extruded from the two-dimensional model was then constructed to model the next roll pass, transferring the nodal temperatures and the equivalent plastic strain (PEEQ) at the element integration points, which identifies the extent of plastic deformation for the classical metal plasticity models, from the new two-dimensional model and the first three-dimensional model, respectively. A Gleeble-1500 tester was used to obtain the true stress and true plastic strain data for modeling the yield behavior of the Q235 material. The effectiveness of the simulation technique is demonstrated by a simulation of an 11-pass H-beam rolling process.
APA, Harvard, Vancouver, ISO, and other styles
13

Sarajaervi, Martin, and Henk Keers. "Ray-based modeling and imaging in viscoelastic media using graphics processing units." GEOPHYSICS 84, no. 5 (September 1, 2019): S425—S436. http://dx.doi.org/10.1190/geo2018-0510.1.

Full text
Abstract:
In seismic data processing, the amplitude loss caused by attenuation should be taken into account. The basis for this is provided by a 3D attenuation model described by the quality factor [Formula: see text], which is used in viscoelastic modeling and imaging. We have accomplished viscoelastic modeling and imaging using ray theory and the ray-Born approximation. This makes it possible to take [Formula: see text] into account using complex-valued and frequency-dependent traveltimes. We have developed a unified parallel implementation for modeling and imaging in the frequency domain and carried out the numerical integration on a graphics processing unit. A central part of the implementation is an efficient technique for computing large integrals. We applied the integration method to the 3D SEG/EAGE overthrust model to generate synthetic seismograms and imaging results. The attenuation effects are accurately modeled in the seismograms and compensated for in the imaging algorithm. The results indicate a significant improvement in computational efficiency compared to a parallel central processing unit baseline.
APA, Harvard, Vancouver, ISO, and other styles
14

Chen, Lisheng, Zhiyi Li, Taili Chen, Guoqiang Zhao, and Yi Shen. "A TIM-based Method of Automatic Generation of FE Models for Pit Engineering." Journal of Physics: Conference Series 2287, no. 1 (June 1, 2022): 012046. http://dx.doi.org/10.1088/1742-6596/2287/1/012046.

Full text
Abstract:
BIM (Building Information Modeling) technology has developed rapidly in many fields of civil engineering. However, in pit engineering, the method of automatically generating FE models from BIM models is not as mature as it is in tunnel engineering. There are still some problems, such as the lack of stratum data and construction data, limits on the geometric models, and the weak integration of numerical results with the BIM platform. To solve these problems, this paper presents a TIM (Tunnel Information Modeling)-based digital solution, which is more specific to underground engineering. Firstly, a normalized numerical pit model is built. The TIM models, stratum data and construction data, all of which are required by the numerical model, are integrated in the iS3 platform. A program is then developed to obtain the data that have been supplemented or modified from the databases, automatically draw the profiles, and generate the input files for automatic calculation. Integrated TIM/FEM software based on the iS3 platform is developed to import the result files and complete the post-processing, scheme assessment and optimization. This method has been successfully applied in the pit project of the Haikou Wenmingdong Tunnel, shortening the modeling time and improving the modeling quality and the level of TIM/FEM integration.
APA, Harvard, Vancouver, ISO, and other styles
15

Nechuiviter, Olesia, Serhii Ivanov, and Kyrylo Kovalchuk. "Optimal integration of highly oscillating functions in general form." Physico-mathematical modelling and informational technologies, no. 33 (September 3, 2021): 68–72. http://dx.doi.org/10.15407/fmmit2021.33.068.

Full text
Abstract:
The development of information technology contributes to the improvement of mathematical models of phenomena and processes in many technical scientific areas. In particular, modern methods of digital signal and image processing use algorithms with new information operators. Cubature formulas are constructed for the approximate calculation of integrals of highly oscillating functions of many variables for various types of data. The paper deals with the estimation of the error of the numerical integration of highly oscillating functions of general form on the class of differentiable functions of three variables, in the case when information about the functions is given by their traces on the corresponding planes. The results obtained make it possible to assess the quality of cubature formulas for the approximate calculation of triple integrals of highly oscillating functions of general form.
APA, Harvard, Vancouver, ISO, and other styles
16

Kalakoti, Yogesh, Shashank Yadav, and Durai Sundar. "SurvCNN: A Discrete Time-to-Event Cancer Survival Estimation Framework Using Image Representations of Omics Data." Cancers 13, no. 13 (June 22, 2021): 3106. http://dx.doi.org/10.3390/cancers13133106.

Full text
Abstract:
The utility of multi-omics in personalized therapy and cancer survival analysis has been debated and demonstrated extensively in the recent past. Most of the current methods still suffer from data constraints such as high-dimensionality, unexplained interdependence, and subpar integration methods. Here, we propose SurvCNN, an alternative approach to process multi-omics data with robust computer vision architectures, to predict cancer prognosis for Lung Adenocarcinoma patients. Numerical multi-omics data were transformed into their image representations and fed into a Convolutional Neural network with a discrete-time model to predict survival probabilities. The framework also dichotomized patients into risk subgroups based on their survival probabilities over time. SurvCNN was evaluated on multiple performance metrics and outperformed existing methods with a high degree of confidence. Moreover, comprehensive insights into the relative performance of various combinations of omics datasets were probed. Critical biological processes, pathways and cell types identified from downstream processing of differentially expressed genes suggested that the framework could elucidate elements detrimental to a patient’s survival. Such integrative models with high predictive power would have a significant impact and utility in precision oncology.
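The discrete-time survival head can be illustrated independently of the CNN: the model emits a hazard per time interval, and the survival curve is the running product of (1 - hazard). A minimal numpy sketch with made-up hazards follows; the image representation of the omics data and the network itself are omitted.

```python
# Hedged numpy sketch of the discrete-time survival idea: a model outputs a hazard
# h_j for each time interval j, and the survival curve is the running product of
# (1 - h_j). The CNN that produces the hazards from omics "images" is omitted.
import numpy as np

def survival_from_hazards(hazards):
    """hazards: array (n_patients, n_intervals) with values in (0, 1)."""
    return np.cumprod(1.0 - hazards, axis=1)

hazards = np.array([[0.05, 0.10, 0.20],     # higher-risk patient
                    [0.01, 0.02, 0.05]])    # lower-risk patient
surv = survival_from_hazards(hazards)
print(surv)                                    # P(T > t_j) per interval
risk_groups = (surv[:, -1] < 0.8).astype(int)  # crude dichotomisation by final survival
print(risk_groups)
```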
APA, Harvard, Vancouver, ISO, and other styles
17

Zheng, Jibin, Kangle Zhu, Zhiyong Niu, Hongwei Liu, and Qing Huo Liu. "Generalized Dechirp-Keystone Transform for Radar High-Speed Maneuvering Target Detection and Localization." Remote Sensing 13, no. 17 (August 25, 2021): 3367. http://dx.doi.org/10.3390/rs13173367.

Full text
Abstract:
The multivariate range function of a high-speed maneuvering target induces modulations on both the envelope and the phase, i.e., the range cell migration (RCM) and Doppler frequency migration (DFM), which degrade the long-time coherent integration used for detection and localization. To solve this problem, many long-time coherent integration methods have been proposed. Based on the mechanisms of typical methods, this paper names two signal processing modes, i.e., the processing unification (PU) mode and the processing separation (PS) mode, and presents their general forms. Thereafter, based on the principle of the PS mode, a novel long-time coherent integration method, known as the generalized dechirp-keystone transform (GDKT), is proposed for radar high-speed maneuvering target detection and localization. The computational cost, energy integration, peak-to-sidelobe level (PSL), resolution, and anti-noise performance of the GDKT are analyzed and compared with those of the maximum likelihood estimation (MLE) method and the keystone transform-dechirp (KTD) method. With mathematical analyses and numerical simulations, we validate two main superiorities of the GDKT, namely (1) the statistically optimal anti-noise performance and (2) the low computational cost. Real radar data are also used to validate the GDKT. It is worth noting that, based on closed analytical formulae of the MLE method, KTD method, and GDKT, several doubts in radar high-speed maneuvering target detection and localization are mathematically interpreted, such as the blind speed sidelobe (BSSL) and the relationship between the PU and PS modes.
APA, Harvard, Vancouver, ISO, and other styles
18

Padmanabhan, C., R. C. Barlow, T. E. Rook, and R. Singh. "Computational Issues Associated with Gear Rattle Analysis." Journal of Mechanical Design 117, no. 1 (March 1, 1995): 185–92. http://dx.doi.org/10.1115/1.2826105.

Full text
Abstract:
This paper proposes a new procedure for formulating the gear rattle type problem analytically before attempting a numerical solution. It also outlines appropriate evaluation criteria for direct time domain integration algorithms used to solve such problems. The procedure is necessary due to the non-analytical nature of the mathematical formulation describing vibro-impacts, which can lead to numerical “stiffness” problems. The method is essentially an “intelligent” pre-processing stage and is based on our experience in simulating such systems. Important concepts such as model order reduction, gear or clutch stiffness contact ratio, appropriate choice of non-dimensionalization parameters are illustrated through examples. Several case studies of increasing complexity are solved using various well known numerical algorithms; solutions are compared qualitatively and quantitatively using the proposed evaluation criteria, and specific numerical problems are identified. Some of the simulation models have also been validated by comparing predictions with experimental data.
APA, Harvard, Vancouver, ISO, and other styles
19

Nicola, Claudiu-Ionel, Marcel Nicola, Maria-Cristina Niţu, Dumitru Sacerdoţianu, and Ancuţa-Mihaela Aciu. "Real-Time Sensorless Control of the PMSM based on Genetic Algorithm, Sliding Mode Observer, and SCADA Integration." Annals of the University of Craiova, Electrical Engineering Series 45, no. 1 (December 30, 2021): 24–31. http://dx.doi.org/10.52846/aucee.2021.1.04.

Full text
Abstract:
"This paper presents an application for real-time implementation of the Permanent Magnet Synchronous Motor (PMSM) sensorless control system and its integration into Supervisory Control And Data Acquisition (SCADA). Starting from the operating equations of the PMSM and by implementing the global Field Oriented Control (FOC) control strategy, in which the saturation of the integral component of the PI controller is prevented by using an antiwindup technique, the numerical simulations performed in Matlab/Simulink lead to good performance, which recommends the real-time implementation. For the optimal tuning of the PI speed controller of the PMSM the genetic algorithm (GA) is used. Numerical simulations are performed in order to choose the type of Digital Signal Processing (DSP) used for the real-time implementation, considering that a global criterion of successful implementation is the performance/cost ratio. Besides, the integration into SCADA provides flexibility of the control system but also the possibility of online/offline processing from the point of view of other specific requirements. Among them we mention the energy quality analysis, whose first exponent calculated also in realtime is Total Harmonic Distortion (THD). Real-time implementations are performed in Matlab/Simulink and LabVIEW programming environments. According to the trend of the last years, the use of an Internet of Things (IoT) platform for viewing the variables of the control process on the Internet plays an important role. "
APA, Harvard, Vancouver, ISO, and other styles
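A hedged sketch of the anti-windup idea mentioned in the abstract above: a discrete PI controller whose integrator is frozen while the output is saturated. The gains, limits and the toy plant are placeholders, not the GA-tuned values from the paper.

```python
# Hedged sketch of a discrete PI speed controller with clamping anti-windup
# (gains, limits and sample time are placeholders, not the GA-tuned values).
class PIAntiWindup:
    def __init__(self, kp, ki, ts, u_min, u_max):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, ref, meas):
        err = ref - meas
        u_unsat = self.kp * err + self.integral
        u = min(max(u_unsat, self.u_min), self.u_max)
        # anti-windup: only integrate while the output is not saturated
        if u == u_unsat:
            self.integral += self.ki * self.ts * err
        return u

ctrl = PIAntiWindup(kp=0.8, ki=40.0, ts=1e-4, u_min=-10.0, u_max=10.0)
speed = 0.0
for _ in range(5):
    iq_ref = ctrl.step(ref=100.0, meas=speed)   # torque-producing current command
    speed += 0.02 * iq_ref                      # toy first-order motor response
    print(round(iq_ref, 3), round(speed, 3))
```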
20

Astsatryan, Hrachya, Hayk Grogoryan, Eliza Gyulgyulyan, Anush Hakobyan, Aram Kocharyan, Wahi Narsisian, Vladimir Sahakyan, et al. "Weather Data Visualization and Analytical Platform." Scalable Computing: Practice and Experience 19, no. 2 (May 10, 2018): 79–86. http://dx.doi.org/10.12694/scpe.v19i2.1351.

Full text
Abstract:
This article aims to present a web-based interactive visualization and analytical platform for weather data in Armenia by integrating the three existing infrastructures for observational data, numerical weather prediction, and satellite image processing. The weather data used in the platform consists of near-surface atmospheric elements including air temperature, pressure, relative humidity, wind and precipitation. The visualization and analytical platform has been implemented for 2-m surface temperature. The platform gives Armenian State Hydrometeorological and Monitoring Service analytical capabilities to analyze the in-situ observations, model and satellite image data per station and region for a given period.
APA, Harvard, Vancouver, ISO, and other styles
21

Guo, Chen, and Yan He. "Effect of Mean Wind Speed Variations on Power Generation of Wind Farm." Applied Mechanics and Materials 215-216 (November 2012): 1298–307. http://dx.doi.org/10.4028/www.scientific.net/amm.215-216.1298.

Full text
Abstract:
First, two methods are introduced for constructing sequences of wind speed data sets whose elements increase in Mean Wind Speed (MWS) in an orderly fashion, and a numerical integration method that relies on the Weibull fitting result and power curve data to calculate Power Generation (PG) is proposed in this paper. Then, using measured data from three wind farms, PG at different heights is calculated and comparative studies are made, employing the proposed data set processing and PG calculation methods. The research results indicate that the PG calculation method has high reliability, and that the Equivalent Available Duration (EAD) increases by about 50-60 h when the MWS increases by 0.1 m/s. The results provide an important basis for studies on the relationship between PG variation and measured data correction methods.
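The numerical integration idea can be written down compactly: the expected power is the integral of the turbine power curve P(v) weighted by the fitted Weibull density f(v; k, c). A hedged Python sketch with an illustrative power curve and Weibull parameters (not the paper's measured data):

```python
# Hedged sketch: expected power output is the integral of the turbine power curve
# P(v) against the fitted Weibull wind-speed density f(v; k, c). The Weibull
# parameters and the power-curve table below are illustrative only.
import numpy as np

def weibull_pdf(v, k, c):
    return (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

# toy power curve (m/s -> kW): cut-in 3, rated 12 at 2000 kW, cut-out 25
curve_v = np.array([0, 3, 6, 9, 12, 25, 25.01])
curve_p = np.array([0, 0, 400, 1200, 2000, 2000, 0])

def expected_power(k, c, n=2000):
    v = np.linspace(0.0, 30.0, n)
    p = np.interp(v, curve_v, curve_p)            # power curve lookup
    return np.trapz(p * weibull_pdf(v, k, c), v)  # mean power over the wind regime, kW

for c in (7.0, 7.1):                              # the scale c rises with the MWS
    kw = expected_power(k=2.0, c=c)
    print(f"scale c = {c}: mean power {kw:.0f} kW, annual energy {kw * 8760 / 1e3:.0f} MWh")
```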
APA, Harvard, Vancouver, ISO, and other styles
22

Kanto, Yasuhiro. "General Post Processing Program for J-Integral Calculation around Arbitrary Shaped Cracks." Advanced Materials Research 33-37 (March 2008): 827–32. http://dx.doi.org/10.4028/www.scientific.net/amr.33-37.827.

Full text
Abstract:
In this paper, a general post processing program for J-integral calculation is developed to apply to arbitrary shaped cracks in a three dimensional body. Usually J-integral calculation programs are options for specific stress analysis programs and they are not applicable to results from different analysis programs. In most cases, there are many limitations in analysis models or shapes of cracks. This situation is not favorable for users. This paper will demonstrate a development of a post-processing program to calculate J-integral for arbitrary shaped cracks in a three dimensional body. This program requires only discrete data of displacements and stresses at nodal or numerical integration points. Users can use their own programs for stress analyses and calculate J-integral after that. In this paper, errors in approximation will be discussed as the first stage of the development.
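For reference, the standard two-dimensional contour statement that such a post-processor evaluates numerically from discrete stress and displacement data (crack lying along the x-axis) is

J = \int_{\Gamma} \Big( W \,\mathrm{d}y - T_i \,\frac{\partial u_i}{\partial x}\,\mathrm{d}s \Big), \qquad W = \int_{0}^{\varepsilon_{kl}} \sigma_{ij}\,\mathrm{d}\varepsilon_{ij}, \qquad T_i = \sigma_{ij} n_j,

where Γ is any contour enclosing the crack tip, W is the strain energy density, T is the traction on Γ with outward normal n, and u is the displacement field. Path independence allows the contour to be routed through the numerical integration points, where the discrete stresses supplied to the post-processor are typically most accurate.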
APA, Harvard, Vancouver, ISO, and other styles
23

Denisov, M. S., and A. A. Egorov. "PROPERTIES OF WAVEFIELD EXTRAPOLATION OPERATORS (ON THE EXAMPLE OF PREDICTION OF GHOST REFLECTIONS)." Russian Journal of geophysical technologies 1, no. 1 (February 1, 2020): 33–59. http://dx.doi.org/10.18303/2619-1563-2019-1-33.

Full text
Abstract:
Seismic deghosting algorithms involve wavefield extrapolation. The operator of such a transformation is integral, and when applied to discrete seismic data, its approximation is used, which corresponds to a method of numerical integration. The paper examines the limits of applicability of the approximation by the method of cells and the method of rectangles. It is shown that when processing 3D seismograms recorded using traditional survey geometries, correct ghost prediction is possible only after interpolation. When processing 2D seismic gathers, it is possible to predict and remove ghost waves for deep and shallow streamers. The streamer shape can be arbitrary. The results of the study and the conclusions made are valid not only for ghost prediction operators, but also for all seismic exploration tasks that involve wavefield extrapolation.
APA, Harvard, Vancouver, ISO, and other styles
24

Shukla, Piyush K., and Ozen Ozer. "The Interplay Between Missing Data and Out-of-Order Measurements using Data Fusion in Wireless Sensor Networks." Fusion: Practice and Applications 1, no. 2 (2020): 66–78. http://dx.doi.org/10.54216/fpa.010202.

Full text
Abstract:
Multi-sensor data transmission is a key part of target detection, and reducing delay in the transmission of the data is essential. Delays may occur in certain technological circumstances, and they happen significantly often in wireless sensor networks; processing such data to keep track of and make predictions about targets of interest might result in errors due to the inherent nature of the data. The Kalman filter and other algorithms with equivalent functionality are most useful for their principal application, estimating the states of dynamic systems. The difficulty of modeling and filtering such delayed states and missing data is dealt with synergistically throughout this proposed work, to ensure that the best possible results are obtained. Filtering methods similar to the optimal Kalman filter are widely utilized in fusing measurement data at different levels. The proposed technique includes filtering delayed states while also using observations that have been randomly excluded, and then putting those filtered delayed states and measurements to use in a data fusion process; one such application is the fusion of images. Numerical simulations are essential for the performance evaluation of the integrated scheme. State delay, as well as data that are absent at random, are both accommodated in four distinct alternative algorithms, which are investigated and whose results are given in this paper; these include gain fusion, the H-infinity a posteriori filter and the H-infinity risk-sensitive filter. Global filtering approaches are updated and evaluated through numerical simulations of a sensor data integration scenario carried out in MATLAB. In addition, we provide a nonlinear observer based on the gain of the continuous-time data fusion filter; using a Lyapunov energy function, asymptotic convergence of the system is established. The filtering algorithms and observers described in the current proposed work therefore make a definite step towards handling state delays and randomly missing data synergistically in wireless sensor networks.
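One standard device for randomly missing measurements, which the filters discussed above build on, is to run the Kalman time update every step and simply skip the measurement update when no sample arrives. A hedged toy sketch follows; the paper's delayed-state and H-infinity variants are not reproduced.

```python
# Hedged sketch of one standard device for randomly missing data in a Kalman filter:
# the time update always runs, the measurement update is skipped when the sample is
# absent. Toy 1-D constant-velocity model; the paper's delayed-state and H-infinity
# variants are not reproduced here.
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])          # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise
R = np.array([[0.5]])                    # measurement noise

x, P = np.zeros(2), np.eye(2)
measurements = [1.0, None, 3.1, None, None, 6.2]   # None marks a missing sample

for z in measurements:
    x, P = F @ x, F @ P @ F.T + Q                  # prediction
    if z is not None:                              # update only if data arrived
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
    print(np.round(x, 2))
```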
APA, Harvard, Vancouver, ISO, and other styles
25

Ochałek, Agnieszka, Tomasz Lipecki, Wojciech Jaśkowski, and Mateusz Jabłoński. "Modeling and Analysis of Integrated Bathymetric and Geodetic Data for Inventory Surveys of Mining Water Reservoirs." E3S Web of Conferences 35 (2018): 04005. http://dx.doi.org/10.1051/e3sconf/20183504005.

Full text
Abstract:
A significant part of hydrography is bathymetry, which is its empirical component. Bathymetry is the study of the underwater depth of waterways and reservoirs, and the graphic presentation of the measured data in the form of bathymetric maps, cross-sections and three-dimensional bottom models. Bathymetric measurements are based on the Global Positioning System and devices for hydrographic measurements: an echo sounder and a side-scan sonar. In this research the authors focus on the case of obtaining and processing bathymetric data and building numerical bottom models of two post-mining reclaimed water reservoirs: Dwudniaki Lake in Wierzchosławice and a flooded quarry in Zabierzów. The report also includes an analysis of data from still-operating mining water reservoirs located in Poland to depict how bathymetry can be used in the mining industry. A significant issue is the integration of bathymetric data with geodetic data from tacheometry and terrestrial laser scanning measurements.
APA, Harvard, Vancouver, ISO, and other styles
26

Bolandi, H., M. H. Ashtari Larki, M. Abedi, and M. Esmailzade. "GPS based onboard orbit determination system providing fault management features for a LEO satellite." Journal of Navigation 66, no. 4 (May 3, 2013): 539–59. http://dx.doi.org/10.1017/s0373463313000179.

Full text
Abstract:
This paper presents accurate orbit determination (OD) of the Iran University of Science and Technology Satellite (IUSTSAT) from Global Positioning System (GPS) data. The GPS position data are treated as pseudo-measurements within an onboard orbit determination process that is based on the numerical integration of the equations of motion using an earth gravity model and applying an Extended Kalman Filter for the data processing. In this paper, through accurate tuning of GPS duty cycle and on/off time intervals, a solution is suggested to achieve the desired OD accuracy despite power constraints. Moreover, a new scheme for automatic fault management in the orbit determination system is derived that provides fault detection and accommodation features.
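A hedged sketch of only the propagation step referred to above: two-body equations of motion integrated with a fixed-step Runge-Kutta scheme. The Earth gravity model actually used and the Extended Kalman Filter update against the GPS pseudo-measurements are omitted.

```python
# Hedged sketch of the orbit propagation step only: two-body equations of motion
# integrated with fixed-step RK4. The higher-order gravity model and the Extended
# Kalman Filter update against GPS data are omitted.
import numpy as np

MU = 3.986004418e14                      # Earth's GM, m^3/s^2

def deriv(state):
    r, v = state[:3], state[3:]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# circular LEO at roughly 500 km altitude
r0 = 6878e3
state = np.array([r0, 0, 0, 0, np.sqrt(MU / r0), 0])
for _ in range(90):                      # propagate 90 steps of 10 s
    state = rk4_step(state, 10.0)
print("position (km):", np.round(state[:3] / 1e3, 1))
```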
APA, Harvard, Vancouver, ISO, and other styles
27

Cravero, Ania, Sebastián Pardo, Patricio Galeas, Julio López Fenner, and Mónica Caniupán. "Data Type and Data Sources for Agricultural Big Data and Machine Learning." Sustainability 14, no. 23 (December 2, 2022): 16131. http://dx.doi.org/10.3390/su142316131.

Full text
Abstract:
Sustainable agriculture is currently being challenged under climate change scenarios since extreme environmental processes disrupt and diminish global food production. For example, drought-induced increases in plant diseases and rainfall caused a decrease in food production. Machine Learning and Agricultural Big Data are high-performance computing technologies that allow analyzing a large amount of data to understand agricultural production. Machine Learning and Agricultural Big Data are high-performance computing technologies that allow the processing and analysis of large amounts of heterogeneous data for which intelligent IT and high-resolution remote sensing techniques are required. However, the selection of ML algorithms depends on the types of data to be used. Therefore, agricultural scientists need to understand the data and the sources from which they are derived. These data can be structured, such as temperature and humidity data, which are usually numerical (e.g., float); semi-structured, such as those from spreadsheets and information repositories, since these data types are not previously defined and are stored in No-SQL databases; and unstructured, such as those from files such as PDF, TIFF, and satellite images, since they have not been processed and therefore are not stored in any database but in repositories (e.g., Hadoop). This study provides insight into the data types used in Agricultural Big Data along with their main challenges and trends. It analyzes 43 papers selected through the protocol proposed by Kitchenham and Charters and validated with the PRISMA criteria. It was found that the primary data sources are Databases, Sensors, Cameras, GPS, and Remote Sensing, which capture data stored in Platforms such as Hadoop, Cloud Computing, and Google Earth Engine. In the future, Data Lakes will allow for data integration across different platforms, as they provide representation models of other data types and the relationships between them, improving the quality of the data to be integrated.
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Chien-Hsun, and Chih-Ju Chou. "A Study of Digital Measurement and Analysis Technology for Transformer Excitation Magnetizing Curve." Energies 16, no. 1 (December 23, 2022): 164. http://dx.doi.org/10.3390/en16010164.

Full text
Abstract:
Transformer excitation magnetizing curves (TEMC) reflect the dynamic operation characteristics of iron-core materials. Using numerical analysis and the waveform recording function of digital oscilloscopes, we developed a cost-effective method for determining the TEMC. This approach eliminates the need for conventional analog integrator circuits. To address the potential obstacles to the digital generation of TEMC—namely, curve offsets and curve transients—we proposed solutions involving Fourier filtering and determining the initial point of integration. The results indicate that the proposed approach yields results consistent with those of conventional analog integrator circuits and highlight its promise for applications in data processing.
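The digital replacement of the analog integrator amounts to numerically integrating the sensed voltage and removing the offset so the loop does not drift. A hedged sketch with placeholder waveforms and core parameters (not the authors' measurement setup):

```python
# Hedged sketch of the digital-integration idea: flux is the cumulative integral of
# the sensed voltage, with the DC offset removed so the B-H loop does not drift
# (turns, core area, path length and the sampled waveforms are placeholders).
import numpy as np

fs, f0 = 100_000, 50                    # sample rate and mains frequency, Hz
t = np.arange(0, 0.04, 1 / fs)          # two cycles
i_exc = 0.5 * np.sin(2 * np.pi * f0 * t) ** 3          # stand-in excitation current
v_sec = 10.0 * np.cos(2 * np.pi * f0 * t) + 0.02       # stand-in voltage with offset

v_sec = v_sec - v_sec.mean()            # remove offset before integrating
flux = np.cumsum((v_sec[:-1] + v_sec[1:]) / 2) / fs    # trapezoidal integration, V*s
flux = flux - flux.mean()               # centre the loop

N_sec, area, path, N_pri = 200, 4e-4, 0.25, 200
B = flux / (N_sec * area)               # flux density, T
H = N_pri * i_exc[1:] / path            # field intensity, A/m
print(B[:5], H[:5])                     # pairs (H, B) trace the magnetizing curve
```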
APA, Harvard, Vancouver, ISO, and other styles
29

Segales, Antonio R., Phillip B. Chilson, and Jorge L. Salazar-Cerreño. "Considerations for improving data quality of thermo-hygrometer sensors on board unmanned aerial systems for planetary boundary layer research." Atmospheric Measurement Techniques 15, no. 8 (April 29, 2022): 2607–21. http://dx.doi.org/10.5194/amt-15-2607-2022.

Full text
Abstract:
Abstract. Small unmanned aerial systems (UASs) are becoming a good candidate technology for solving the observational gap in the planetary boundary layer (PBL). Additionally, the rapid miniaturization of thermodynamic sensors over the past years has allowed for more seamless integration with small UASs and more simple system characterization procedures. However, given that the UAS alters its immediate surrounding air to stay aloft by nature, such integration can introduce several sources of bias and uncertainties to the measurements if not properly accounted for. If weather forecast models were to use UAS measurements, then these errors could significantly impact numerical predictions and hence influence the weather forecasters' situational awareness and their ability to issue warnings. Therefore, some considerations for sensor placement are presented in this study, as well as flight patterns and strategies to minimize the effects of UAS on the weather sensors. Moreover, advanced modeling techniques and signal processing algorithms are investigated to compensate for slow sensor dynamics. For this study, dynamic models were developed to characterize and assess the transient response of commonly used temperature and humidity sensors. Consequently, an inverse dynamic model processing (IDMP) algorithm that enhances signal restoration is presented and demonstrated on simulated data. This study also provides contributions on model stability analysis necessary for proper parameter tuning of the sensor measurement correction method. A few real case studies are discussed where the application and results of the IDMP through strong thermodynamic gradients of the PBL are shown. The conclusions of this study provide information regarding the effectiveness of the overall process of mitigating undesired distortions in the data sampled with a UAS to help increase the data quality and reliability.
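For a first-order sensor obeying tau*dTm/dt + Tm = T_air, the inverse step is essentially T_air ≈ Tm + tau*dTm/dt, followed by some smoothing for noise. A hedged sketch of that idea follows; the paper's IDMP algorithm and its stability analysis are not reproduced.

```python
# Hedged sketch of the inverse idea for a first-order sensor: if the probe obeys
# tau*dTm/dt + Tm = T_air, then T_air can be restored as Tm + tau*dTm/dt. A simple
# moving average stands in for the stability/regularisation treatment in the paper.
import numpy as np

dt, tau = 0.1, 2.0                        # sample interval and assumed time constant, s
t = np.arange(0, 30, dt)
t_air = 20.0 - 0.2 * t                    # true profile: steady cooling during ascent

# simulate the sluggish measurement with the first-order model
t_meas = np.empty_like(t_air)
t_meas[0] = t_air[0]
for k in range(1, len(t)):
    t_meas[k] = t_meas[k - 1] + dt / tau * (t_air[k - 1] - t_meas[k - 1])

deriv = np.gradient(t_meas, dt)
restored = t_meas + tau * deriv           # inverse (lag-correction) step
smooth = np.convolve(restored, np.ones(9) / 9, mode="same")  # mild noise suppression
print("lag error before:", np.abs(t_meas - t_air).max())
print("lag error after :", np.abs(smooth - t_air)[10:-10].max())
```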
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Yan Yong, Xing Rong Jia, Gong Liu Yang, and Yong Miao Wang. "Comprehensive CEP Evaluation Method for Calculating Positioning Precision of Navigation Systems." Applied Mechanics and Materials 341-342 (July 2013): 955–60. http://dx.doi.org/10.4028/www.scientific.net/amm.341-342.955.

Full text
Abstract:
This study aims to evaluate the navigation performance of navigation systems. To this end, we comprehensively investigate 12 methods for calculating circular error probability (CEP). Using data processing techniques, we classify these methods into three categories: parameterization of one-dimensional variables, computation of bi-normal random variables, and numerical integration. From these methods, we develop algorithms and programs, which are subsequently applied in evaluating the positioning accuracy of navigation systems. Using the algorithms and programs, we perform simulations to evaluate applicability and evaluation accuracy under the same initial conditions. Practical test data are employed to validate the results. Using the comparison of the methods as bases, we put forward a comprehensive practical method regarding sample number and evaluation accuracy.
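One member of the numerical integration family can be sketched directly: integrate the bivariate normal error density over a disc and bisect on the radius until the disc contains 50% of the probability. A hedged Python sketch assuming independent, zero-mean position errors:

```python
# Hedged sketch of the numerical-integration family of CEP methods: the probability
# inside a circle of radius R is computed for a zero-mean bivariate normal error
# (independent components assumed), and R is bisected until it contains 50 %.
import math

def prob_inside(R, sx, sy, n=2000):
    # integrate over x; the y-extent of the circle is handled with the normal CDF
    total, h = 0.0, 2 * R / n
    for i in range(n):
        x = -R + (i + 0.5) * h
        half = math.sqrt(max(R * R - x * x, 0.0))
        py = math.erf(half / (sy * math.sqrt(2.0)))          # P(|y| <= half)
        total += math.exp(-0.5 * (x / sx) ** 2) / (sx * math.sqrt(2 * math.pi)) * py * h
    return total

def cep(sx, sy):
    lo, hi = 0.0, 10.0 * max(sx, sy)
    for _ in range(60):                                      # bisection on the radius
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if prob_inside(mid, sx, sy) < 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

print(cep(10.0, 10.0))        # circular case: close to 1.1774 * sigma
print(cep(10.0, 5.0))
```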
APA, Harvard, Vancouver, ISO, and other styles
31

Cong, Xiaoying, Ulrich Balss, Fernando Rodriguez Gonzalez, and Michael Eineder. "Mitigation of Tropospheric Delay in SAR and InSAR Using NWP Data: Its Validation and Application Examples." Remote Sensing 10, no. 10 (September 21, 2018): 1515. http://dx.doi.org/10.3390/rs10101515.

Full text
Abstract:
The neutral atmospheric delay has a great impact on synthetic aperture radar (SAR) absolute ranging and on differential interferometry. In this paper, we demonstrate its effective mitigation by means of the direct integration method using two products from the European Centre for Medium-Range Weather Forecasts: ERA-Interim and operational data. Firstly, we briefly review the modeling of the neutral atmospheric delay for the direct integration method, focusing on the different refractivity models and constant coefficients available. Secondly, a thorough validation of the method is performed using two approaches. In the first approach, numerical weather prediction (NWP) derived zenith path delay (ZPD) is validated against ZPD from permanent GNSS (global navigation satellite system) stations on a global scale, demonstrating a mean accuracy of 14.5 mm for ERA-Interim. Local analysis shows a 1 mm improvement using operational data. In the second approach, NWP derived slant path delay (SPD) is validated against SAR SPD measured on corner reflectors in more than 300 TerraSAR-X High Resolution SpotLight acquisitions, demonstrating an accuracy in the centimeter range for both ERA-Interim and operational data. Finally, the application of this accurate delay estimate for the mitigation of the impact of the neutral atmosphere on SAR absolute ranging and on differential interferometry, both for individual interferograms and multi-temporal processing, is demonstrated.
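The direct integration idea for the zenith component can be sketched in a few lines: ZPD = 1e-6 * integral of the refractivity N(h) over height, with N built from pressure, temperature and water vapour. The profile below and the two-term refractivity coefficients are illustrative stand-ins for the NWP fields and constants discussed in the paper.

```python
# Hedged sketch of the zenith path delay computation: ZPD = 1e-6 * integral of the
# refractivity N(h) along the vertical. A crude exponential profile and commonly
# quoted two-term refractivity coefficients are used for illustration only.
import numpy as np

h = np.linspace(0, 30_000, 3001)                  # height above station, m
P = 1013.25 * np.exp(-h / 8500.0)                 # pressure, hPa (scale height assumed)
T = 288.15 - 0.0065 * np.minimum(h, 11_000)       # temperature, K (simple lapse rate)
e = 12.0 * np.exp(-h / 2500.0)                    # water vapour pressure, hPa

N = 77.6 * P / T + 3.73e5 * e / T ** 2            # refractivity (N-units)
zpd_m = 1e-6 * np.trapz(N, h)                     # zenith path delay in metres
print(f"ZPD ~ {zpd_m:.3f} m")                     # a few metres, mostly hydrostatic
```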
APA, Harvard, Vancouver, ISO, and other styles
32

Xu, Bing, and Yong Cai. "A multiple-data-based efficient global optimization algorithm and its parallel implementation for automotive body design." Advances in Mechanical Engineering 10, no. 8 (August 2018): 168781401879434. http://dx.doi.org/10.1177/1687814018794341.

Full text
Abstract:
The purpose of this article is to improve the convergence efficiency of the traditional efficient global optimization method. Furthermore, we try a graphics processing unit–based parallel computing method to improve the computing efficiency of the efficient global optimization method for both mathematical and practical engineering problems. First, we propose a multiple-data-based efficient global optimization algorithm instead of the multiple-surrogates-based efficient global optimization algorithm. Second, a novel graphics processing unit–based general-purpose computing technology is adopted to accelerate the solution efficiency of our multiple-data-based efficient global optimization algorithm. Third, a hybrid parallel computing approach using the OpenMP and compute unified device architecture is adopted to further improve the solution efficiency of forward problems in practical application. This is accomplished by integrating the graphics processing unit–based finite element method numerical analysis system into the optimization software. The numerical results show that for the same problem, the optimal result of the multiple-data-based efficient global optimization algorithm is consistently better than the multiple-surrogates-based efficient global optimization algorithm with the same optimization iterations. In addition, the graphics processing unit–based parallel simulation system helps in the reduction of the calculation time for practical engineering problems. The multiple-data-based efficient global optimization method performs stably in both high-order mathematical functions and large-scale nonlinear practical engineering optimization problems. An added benefit is that the computational time and accuracy are no longer obstacles.
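For context, the acquisition criterion at the core of efficient global optimization is the expected improvement of a surrogate prediction over the best value found so far. A hedged sketch follows; the surrogate mean and standard deviation are taken as given, and the multiple-data construction and GPU parallelization of the paper are not reproduced.

```python
# Hedged sketch of the expected-improvement criterion used by efficient global
# optimization; the surrogate mean/std come from some fitted model (not shown), and
# the paper's multiple-data and GPU-parallel aspects are not reproduced.
import math

def expected_improvement(mu, sigma, f_min):
    if sigma <= 0.0:
        return 0.0
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_min - mu) * cdf + sigma * pdf

# candidate points: predicted mean and uncertainty from a surrogate model
candidates = [(-0.2, 0.05), (0.1, 0.40), (0.3, 0.90)]
f_min = 0.0                                   # best objective value seen so far
for mu, sigma in candidates:
    print(mu, sigma, round(expected_improvement(mu, sigma, f_min), 4))
```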
APA, Harvard, Vancouver, ISO, and other styles
33

Oreshina, M., and A. Badina. "Scientific aspects of information exchange in electronic document management systems." E-Management 3, no. 2 (August 29, 2020): 55–62. http://dx.doi.org/10.26425/2658-3445-2020-2-55-62.

Full text
Abstract:
The regularities governing the development of information exchange in the virtual space have been examined, and electronic document management in an organization using information systems has been considered. It has been shown that the intellectualization of information systems is based on a powerful mathematical apparatus that enables computer programs to work with texts, built on models that include the patterns necessary for the design of modern information retrieval systems, as well as systems for the automatic classification and analysis of texts. A document flow model based on a number of mathematical regularities, numerical methods of multivariate integration and the theory of mixed queuing networks has been proposed. This model makes it possible to evaluate the multivariate integration of the devices of electronic document management systems (EDMS), contributes to the formation of statistical data on document processing, enables the formation of information flows in electronic document management systems with minimal time costs, and has a convenient user interface for programs that interact with its objects. This model, based on the definition of a set of options for data transmission channels, a set of options for EDMS data collection equipment, a set of options for data processing equipment, and a set of end devices of the EDMS communication environment, makes it possible to analyse the functionality of electronic document management systems. The proposed EDMS model shows that a number of functions of complex document management in an organization can be automated and optimized on the basis of mathematical methods, which will reduce the time needed for document preparation, routing and processing. Document management in an organization using this EDMS ensures the optimization of information flows in the organization and contributes to the formation of important management decisions, taking into account all risk factors.
APA, Harvard, Vancouver, ISO, and other styles
34

Prokopchina, S. V. "METHODOLOGICAL FOUNDATIONS OF KNOWLEDGE METROLOGY FOR MEASUREMENT INTELLECTUALIZATION." SOFT MEASUREMENTS AND COMPUTING 1, no. 12 (2020): 5–17. http://dx.doi.org/10.36871/2618-9976.2020.12.001.

Full text
Abstract:
The functioning of modern information systems takes place in conditions of considerable uncertainty and heterogeneity of information flows, which makes it difficult to process such information: solutions are inaccurate, unstable, unfounded and not interpreted, which has been repeatedly noted in the scientific literature. This article suggests one of the methodological ways to eliminate these difficulties based on the application of the regularizing Bayesian approach (RBA) and Bayesian intelligent measurements (BIM). Information processing, both in the form of data and in the form of knowledge, is performed in BIM systems on special scales, scales with dynamic constraints that ensure the integration of numerical and linguistic information based on a modified Bayesian decision inference rule. Traceability of measurement solutions is ensured by introducing a parallel branch of metrological support for information processing. The semantic components of conjugate scales of the SDC type implement the interpretability (explainability) of measurements. The introduction of metrological justification of knowledge in technologies based on RBA provides a significant increase in the efficiency of information processing, both in artificial intelligence systems that traditionally work with knowledge, and in measurement systems for the intellectualization of measurements by ensuring the stability, interpretability and required quality of the resulting solutions. In addition, it is the metrological support of information technologies that will allow for the reliable integration of artificial intelligence and measurement technologies, which is an urgent need of modern information systems.
APA, Harvard, Vancouver, ISO, and other styles
35

Zimovets, Alena I., Maxim Y. Zotkin, Anatoly D. Khomonenko, and Eugeniy L. Yakovlev. "Identification of Space Object Based on Integration of Data from Various Sources and Fuzzy Inference." H&ES Research 12, no. 5 (2020): 4–13. http://dx.doi.org/10.36724/2409-5419-2020-12-5-4-13.

Full text
Abstract:
One of the most important tasks in monitoring near-Earth space is the recognition of space objects, which includes the subtasks of classifying space objects by type (spacecraft, launch vehicle, elements released during the launch or operation of spacecraft, fragments of destruction, etc.) and identifying them (nationality, intended purpose, degree of danger, functional state, etc.). The aim of the work is to increase the efficiency and accuracy of recognizing various space objects by integrating data obtained from radar, radio-engineering, optoelectronic and promising quantum-optical (laser-optical) means and processing them using fuzzy inference algorithms and/or neural networks combined with fuzzy inference. Domestic and foreign means of monitoring near-Earth space are considered, and their technical characteristics are presented and compared. Solving this problem serves important national economic and environmental goals, since most of the space objects in Earth orbit are space debris. To solve the problem, a rule base is proposed for fuzzy inference of the most appropriate approach to determining various types of objects for given conditions and a given composition of space control facilities. In addition, a fuzzy neural network was trained in the ANFIS editor using information and analytical reports from the multi-channel monitoring telescope MMT-9; the structure of the generated fuzzy neural network is shown. Based on the comparison, it is shown that the classification of space objects using neural networks combined with fuzzy inference is more accurate than fuzzy inference based on the Mamdani algorithm alone, but requires long training. It is also shown that this approach allows the modern capabilities of space control facilities to be used more efficiently while maintaining high recognition accuracy. Conclusions are drawn about the results obtained; the membership functions used, numerical calculations and models built in the Matlab environment are presented.
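The sketch below shows the general flavour of a Mamdani-style fuzzy classification in Python; the input feature, membership functions and the two rules are invented for illustration and do not reproduce the paper's rule base or the MMT-9 data.

import numpy as np

def tri(x, a, b, c):
    # triangular membership function with support [a, c] and peak at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# hypothetical normalized feature of a tracked object
brightness_var = 0.8                      # strong brightness variation suggests a tumbling fragment

# fuzzification of the input
mu_tumbling = tri(brightness_var, 0.4, 1.0, 1.6)
mu_stable = tri(brightness_var, -0.6, 0.0, 0.6)

# two toy Mamdani rules: IF tumbling THEN debris; IF stable THEN spacecraft
y = np.linspace(0.0, 1.0, 201)            # output axis: 0 = spacecraft, 1 = debris
debris_set = np.minimum(mu_tumbling, tri(y, 0.5, 1.0, 1.5))
spacecraft_set = np.minimum(mu_stable, tri(y, -0.5, 0.0, 0.5))
aggregated = np.maximum(debris_set, spacecraft_set)

# centroid defuzzification on the uniform grid
score = (aggregated * y).sum() / (aggregated.sum() + 1e-12)
print("debris score (0..1):", round(float(score), 2))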
APA, Harvard, Vancouver, ISO, and other styles
36

Yu, Biao, Wei You, Dong-Ming Fan, Yong Su, and Zemede M. Nigatu. "A comparison of GRACE temporal gravity field models recovered with different processing details." Geophysical Journal International 227, no. 2 (July 19, 2021): 1392–417. http://dx.doi.org/10.1093/gji/ggab279.

Full text
Abstract:
SUMMARY The Gravity Recovery and Climate Experiment (GRACE) mission has been providing abundant information regarding the mass changes of the Earth in terms of time-series of temporal gravity field models since 2002. To derive temporal gravity field models with high accuracy, many methods have been developed. In this paper, we focus on the variational equation integration approach. The main works can be summarized as follows: (1) analysing the quality of GRACE Level1B RL02 and RL03 data, including accelerometer observations (ACC1B), star camera measurements (SCA1B) and K-Band low-low Satellite-to-Satellite Tracking (SST) range-rate (KBRR) data (KBR1B); (2) discussing the influence of arc-specific parameters and arc length on gravity field recovery; and (3) comparing two different methods used for sensitivity matrix generation, namely, a numerical integration method and the method of variation of constants, from the perspectives of accuracy and efficiency, respectively. Based on these analyses, discussions and comparisons, a new time-series of GRACE monthly gravity field models in terms of spherical harmonic coefficients complete to degree and order 60, called SWJTU-GRACE-RL02p, was derived by using the modified variational equation integration approach based on GRACE Level1B RL03 data, covering the period from April 2002 to October 2011 with some gaps in between due to poor quality or missing GRACE data. Thus we are looking at results some 10 yr in the past. The differences between the traditional variational equation integration approach and the approach that we used are mainly as follows: (1) according to the GRACE data quality, the arc length is no longer a constant in the determination of temporal gravity field models; (2) the kinematic empirical parameters, which are mainly designed to remove the bias and drifts in KBRR residuals, are abandoned and (3) the method of variation of constants developed at the Astronomical Institute of the University of Bern (AIUB) and used to solve the system of variational equations associated with constrained pulses and piecewise constant accelerations is used to calculate the sensitivity matrices of accelerometer bias parameters to improve the calculation efficiency and ensure the calculation accuracy. To validate the quality of SWJTU-GRACE-RL02p, these models were compared with the old models of SWJTU-GRACE-RL01, which have been published on the website of the International Centre for Global Earth Models (http://icgem.gfz-potsdam.de/series), and the official products [i.e. the RL05 and RL06 versions of GRACE LEVEL2 at the Centre for Space Research (CSR), Jet Propulsion Laboratory (JPL) and GeoForschungsZentrum (GFZ)]. Compared to the RL06 version of official models, the models of SWJTU-GRACE-RL02p present competitive performance for global mass changes. Furthermore, these models show less noise and a higher signal strength over some local areas with large mass changes than the models of SWJTU-GRACE-RL01. The comparisons between SWJTU-GRACE-RL02p and a variety of other models including official models, GLDAS, models provided by EGSIEM and daily solutions released by ITSG indicate that our approach and the data processing details presented in this paper provide an alternative strategy for the recovery of temporal gravity field models from GRACE-type data.
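As a toy illustration of the variational-equation idea mentioned above (not the GRACE processing chain), the following Python sketch integrates a one-dimensional oscillator state together with its sensitivity to a force-model parameter; this is the kind of sensitivity information that the variational equations provide for orbit and gravity field estimation.

import numpy as np
from scipy.integrate import solve_ivp

k = 1.3  # nominal force-model parameter of the toy dynamics x'' = -k*x

def rhs(t, y):
    x, v, sx, sv = y            # state (x, v) and its sensitivity to k (sx = dx/dk, sv = dv/dk)
    return [v,
            -k * x,
            sv,
            -k * sx - x]        # variational equation obtained by differentiating the dynamics w.r.t. k

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0], rtol=1e-9, atol=1e-12)
print("x(T)      =", sol.y[0, -1])
print("dx/dk (T) =", sol.y[2, -1])   # one entry of the sensitivity matrix at the final time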
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Liyong, and Hamdi A. Tchelepi. "Conditional Statistical Moment Equations for Dynamic Data Integration in Heterogeneous Reservoirs." SPE Reservoir Evaluation & Engineering 9, no. 03 (June 1, 2006): 280–88. http://dx.doi.org/10.2118/92973-pa.

Full text
Abstract:
Summary An inversion method for the integration of dynamic (pressure) data directly into statistical moment equations (SMEs) is presented. The method is demonstrated for incompressible flow in heterogeneous reservoirs. In addition to information about the mean, variance, and correlation structure of the permeability, a few permeability measurements are assumed to be available. Moreover, a few measurements of the dependent variable are available. The first two statistical moments of the dependent variable (pressure) are conditioned on all available information directly. An iterative inversion scheme is used to integrate the pressure data into the conditional statistical moment equations (CSMEs). That is, the available information is used to condition, or improve the estimates of, the first two moments of permeability, pressure, and velocity directly. This is different from Monte Carlo (MC)-based geostatistical inversion techniques, where conditioning on dynamic data is performed for one realization of the permeability field at a time. In the MC approach, estimates of the prediction uncertainty are obtained from statistical post-processing of a large number of inversions, one per realization. Several examples of flow in heterogeneous domains in a quarter-five-spot setting are used to demonstrate the CSME-based method. We found that as the number of pressure measurements increases, the conditional mean pressure becomes more spatially variable, while the conditional pressure variance gets smaller. Iteration of the CSME inversion loop is necessary only when the number of pressure measurements is large. Use of the CSME simulator to assess the value of information in terms of its impact on prediction uncertainty is also presented. Introduction The properties of natural geologic formations (e.g., permeability) rarely display uniformity or smoothness. Instead, they usually show significant variability and complex patterns of correlation. The detailed spatial distributions of reservoir properties, such as permeability, are needed to make performance predictions using numerical reservoir simulation. Unfortunately, only limited data are available for the construction of these detailed reservoir-description models. Consequently, our incomplete knowledge (uncertainty) about the property distributions in these highly complex natural geologic systems means that significant uncertainty accompanies predictions of reservoir flow performance. To deal with the problem of characterizing reservoir properties that exhibit such variability and complexity of spatial correlation patterns when only limited data are available, a probabilistic framework is commonly used. In this framework, the reservoir properties (e.g., permeability) are assumed to be a random space function. As a result, flow-related properties such as pressure, velocity, and saturations are random functions. We assume that the available information about the permeability field includes a few measurements in addition to the spatial correlation structure, which we take here as the two-point covariance. This incomplete knowledge (uncertainty) about the detailed spatial distribution of permeability is the only source of uncertainty in our problem. Uncertainty about the detailed distribution of the permeability field in the reservoir leads to uncertainty in the computed predictions of the flow field (e.g., pressure).
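A compact Python sketch of the kind of moment conditioning the abstract refers to follows; it is a standard Gaussian update of the first two moments of a 1-D field on a few point measurements, not the CSME solver itself, and the grid, covariance model and measurement values are assumptions.

import numpy as np

x = np.linspace(0.0, 1.0, 50)
mean = np.zeros_like(x)                                   # prior mean of the "pressure" field
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)      # prior two-point covariance

obs_idx = np.array([10, 30])                              # measurement locations (grid indices)
obs_val = np.array([1.2, -0.4])
noise = 1e-3

H = np.zeros((len(obs_idx), len(x)))                      # observation operator (point sampling)
H[np.arange(len(obs_idx)), obs_idx] = 1.0

S = H @ cov @ H.T + noise * np.eye(len(obs_idx))
K = cov @ H.T @ np.linalg.inv(S)                          # gain
mean_c = mean + K @ (obs_val - H @ mean)                  # conditional mean
cov_c = cov - K @ H @ cov                                 # conditional covariance

print("conditional variance at the observed points:", np.diag(cov_c)[obs_idx])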
APA, Harvard, Vancouver, ISO, and other styles
38

Khurana, M., and H. Winarto. "Development and validation of an efficient direct numerical optimisation approach for aerofoil shape design." Aeronautical Journal 114, no. 1160 (October 2010): 611–28. http://dx.doi.org/10.1017/s0001924000004097.

Full text
Abstract:
Abstract An intelligent shape optimisation architecture is developed, validated and applied in the design of a high-altitude long-endurance (HALE) aerofoil. The direct numerical optimisation (DNO) approach, integrating a geometrical shape parameterisation model coupled to a validated flow solver and a population-based search algorithm, is applied in the design process. The merit of the DNO methodology is measured by computational time efficiency and the feasibility of the optimal solution. Gradient-based optimisers are not suitable for multi-modal solution topologies. Thus, a novel particle swarm optimiser with adaptive mutation (AM-PSO) is developed. The effect on the global optimum of applying the PARSEC function, and a modified variant of it, as the shape parameterisation model is verified. Optimisation efficiency is addressed by mapping the solution topology for HALE aerofoil designs and by computing the sensitivity of the objective function to the aerofoil shape variables. Variables with minimal influence are identified and eliminated from the shape optimisation simulations. Variable elimination has a negligible effect on the aerodynamics of the global optimum, with a significant reduction in design iterations to convergence. A novel data-mining technique is further applied to verify the accuracy of the AM-PSO solutions. The post-processing analysis of the swarm optimisation solutions indicates that a hybrid optimisation methodology, integrating global and local gradient-based search methods, yields a true optimum.
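For orientation, here is a minimal particle-swarm loop in Python with a crude mutation step; the test objective, coefficients and mutation schedule are placeholders and do not reproduce the paper's AM-PSO operator, PARSEC parameterisation or flow solver.

import numpy as np

def objective(x):                        # stand-in for the aerodynamic objective (Rastrigin-like)
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

rng = np.random.default_rng(1)
n, d, iters = 30, 5, 200
pos = rng.uniform(-5, 5, (n, d))
vel = np.zeros((n, d))
pbest, pbest_f = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_f)].copy()

for t in range(iters):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    # crude "adaptive mutation": perturb a shrinking fraction of particles to escape local optima
    mut = rng.random(n) < 0.2 * (1 - t / iters)
    pos[mut] += rng.normal(0.0, 0.5, (mut.sum(), d))
    f = objective(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best objective found:", objective(gbest[None, :])[0])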
APA, Harvard, Vancouver, ISO, and other styles
39

Luo, Lin, Fu Wen Hu, and Nai Song Zhang. "An Interactive and Artistic Monitoring System for Machine Condition." Applied Mechanics and Materials 509 (February 2014): 192–97. http://dx.doi.org/10.4028/www.scientific.net/amm.509.192.

Full text
Abstract:
Most machine health monitoring techniques are likely to suffer from ineffective selection of state features and from the increasing redundancy of raw signals. As a new strategy, an interactive and artistic monitoring approach is presented. Its basic concept is to transform raw signals into artistic graphs rather than obscure waveforms. The entire procedure includes three steps: signal acquisition, plotting artistic graphs and interactive diagnosis. In the case of numerical control machine tools, an interactive and artistic monitoring prototype system is developed based on the integration of the open-source Arduino platform and the open-source Processing language. Experimental results indicate that the artistic visualization of measured data facilitates the identification of machine condition and the diagnosis of observed symptoms.
APA, Harvard, Vancouver, ISO, and other styles
40

McMillan, Matthew Leslie, Marten Jurg, Martin Leary, and Milan Brandt. "Programmatic generation of computationally efficient lattice structures for additive manufacture." Rapid Prototyping Journal 23, no. 3 (April 18, 2017): 486–94. http://dx.doi.org/10.1108/rpj-01-2016-0014.

Full text
Abstract:
Purpose Additive manufacturing (AM) enables the fabrication of complex geometries beyond the capability of traditional manufacturing methods. Complex lattice structures have enabled engineering innovation; however, the use of traditional computer-aided design (CAD) methods for the generation of lattice structures is inefficient, time-consuming and can present challenges to process integration. In an effort to improve the implementation of lattice structures into engineering applications, this paper aims to develop a programmatic lattice generator (PLG). Design/methodology/approach The PLG method is computationally efficient; has direct control over the quality of the stereolithographic (STL) file produced; enables the generation of more complex lattice than traditional methods; is fully programmatic, allowing batch generation and interfacing with process integration and design optimization tools; capable of generating a lattice STL file from a generic input file of node and connectivity data; and can export a beam model for numerical analysis. Findings This method has been successfully implemented in the generation of uniform, radial and space filling lattices. Case studies were developed which showed a reduction in processing time greater than 60 per cent for a 3,375 cell lattice over traditional CAD software. Originality/value The PLG method is a novel design for additive manufacture (DFAM) tool with unique advantages, including full control over the number of facets that represent a lattice strut, allowing optimization of STL data to minimize file size, while maintaining suitable resolution for the implemented AM process; programmatic DFAM capability that overcomes the learning curve of traditional CAD when producing complex lattice structures, therefore is independent of designer proficiency and compatible with process integration; and the capability to output both STL files and associated data for numerical analysis, a unique DFAM capability not previously reported.
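To illustrate only the programmatic flavour of such a generator, the short Python sketch below builds node and strut connectivity for a uniform cubic lattice, from which a beam model could be exported; the cell counts and the restriction to axis-aligned struts are assumptions, and the STL facet generation step is omitted.

import itertools

nx = ny = nz = 3                                          # unit cells per side (assumed)
pts = list(itertools.product(range(nx + 1), range(ny + 1), range(nz + 1)))
index = {p: i for i, p in enumerate(pts)}                 # node id lookup by integer coordinate

struts = []
for (x, y, z) in pts:
    for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):  # axis-aligned struts only
        q = (x + dx, y + dy, z + dz)
        if q in index:
            struts.append((index[(x, y, z)], index[q]))

print(f"{len(pts)} nodes, {len(struts)} struts")
# a beam-model export would write the node table plus one "n1 n2" line per strut;
# STL generation would triangulate a prism around each strut with a chosen facet count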
APA, Harvard, Vancouver, ISO, and other styles
41

Mandelbaum, Yaakov, Ilan Gadasi, Avraham Chelly, Zeev Zalevsky, and Avi Karsenty. "Small Signals’ Study of Thermal Induced Current in Nanoscale SOI Sensor." Journal of Sensors 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/1961734.

Full text
Abstract:
A new nanoscale SOI dual-mode modulator is investigated as a function of optical and thermal activation modes. In order to accurately characterize the device specifications towards its future integration in microelectronics circuitry, current time variations are studied and compared for “large signal” constant temperature changes, as well as for “small signal” fluctuating temperature sources. An equivalent circuit model is presented to define the parameters, which are assessed by numerical simulation. Provided that the thermal response is fast enough, the device can be operated as a modulator via thermal stimulation or, alternatively, can be used as a thermal sensor/imager. We present here the design, simulation, and model of the next generation, which appears capable of speeding up the processing capabilities. This novel device can serve as a building block towards the development of optical/thermal data processing, paving the way to all-optical processors based on silicon chips fabricated via a typical microelectronics fabrication process.
APA, Harvard, Vancouver, ISO, and other styles
42

Mrówczyńska, Maria. "Elements of an algorithm for optimizing a parameter-structural neural network." Reports on Geodesy and Geoinformatics 101, no. 1 (June 1, 2016): 27–35. http://dx.doi.org/10.1515/rgg-2016-0019.

Full text
Abstract:
Abstract The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic numerical algorithms in areas where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling processes. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid the necessity of arbitrarily defining the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.
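The following Python sketch shows one selection layer of a GMDH-style network: every pair of inputs feeds a quadratic partial model, and the candidates are ranked on a held-out set; stacking such layers yields the self-organizing structure. The synthetic data and the quadratic form are illustrative assumptions, not the paper's chimney-deformation model.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 1.5 * X[:, 0] * X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=200)   # toy target

train, valid = slice(0, 150), slice(150, 200)
candidates = []
for i, j in combinations(range(X.shape[1]), 2):
    # quadratic partial description in the input pair (i, j)
    A = np.column_stack([np.ones(len(X)), X[:, i], X[:, j],
                         X[:, i]**2, X[:, j]**2, X[:, i] * X[:, j]])
    coef, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
    err = np.mean((A[valid] @ coef - y[valid])**2)                   # external (validation) criterion
    candidates.append(((i, j), err))

candidates.sort(key=lambda c: c[1])
print("best input pair:", candidates[0][0], "validation MSE:", round(candidates[0][1], 4))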
APA, Harvard, Vancouver, ISO, and other styles
43

Zarzycki, Robert, Rafał Kobyłecki, and Zbigniew Bis. "Numerical Analysis of the Combustion of Gases Generated during Biomass Carbonization." Entropy 22, no. 2 (February 5, 2020): 181. http://dx.doi.org/10.3390/e22020181.

Full text
Abstract:
The paper deals with the analysis of the combustion of volatiles evolved during thermolysis (thermal treatment) of biomass feedstock. The process is tailored to produce charcoal (biochar), heat and electricity and the whole system consists of a carbonizer, afterburning chamber and steam recovery boiler. In order to maintain safe operation of the carbonizer the process temperature has to be maintained at an acceptable level and thus the majority of gases evolved during biomass processing have to be combusted outside in the afterburning chamber. In this paper the combustion of those gases in a specially-designed combustion chamber was investigated numerically. The calculation results indicated that the production of the biochar has to be carried out with tight integration and management of the heat produced from the combustion of the volatiles and the emission of CO and methane may be maintained at a low level by optimization of the combustion process. The most promising effects were achieved in cases C4 and C5 where the gas was fed tangentially into the afterburning chamber. The calculation results were then used for the design and manufacture of a pilot reactor—from which the parameters and operational data will be presented and discussed in a separate paper.
APA, Harvard, Vancouver, ISO, and other styles
44

Zaslavskaya, Veronika L., Roman K. Zaslavsky, and Svetlana V. Prokopchina. "INTELLIGENT PROCESSING OF BIG DATA IN SMALL BUSINESS TASKS BASED ON BAYESIAN INTELLIGENT TECHNOLOGIES." SOFT MEASUREMENTS AND COMPUTING 12, no. 61 (2022): 65–74. http://dx.doi.org/10.36871/2618-9976.2022.12.005.

Full text
Abstract:
The article is devoted to solving the urgent problem of creating intelligent big data processing systems for small businesses. The specificity of the information flows of small businesses, in particular in the service sector, is shown. The main tasks of increasing the efficiency of small businesses through digitalization are identified. The methodology of the regularizing Bayesian approach (RBA), Bayesian intelligent measurement (BIM) technologies and a digital platform for managing small businesses and creating monitoring and decision support systems are proposed. On the basis of this platform and the RBA methodology, a working sample of a complex for service enterprises (fitness centers) has been developed. The unique properties of the intelligent complex are the ability to integrate different types of information and to intelligently process large data streams in the form of numerical and linguistic information under conditions of considerable uncertainty. In the software package for the service sector, various situations that arise when working with clients can be simulated, and digital images of clients can be created to personalize work with each of them, organizing effective and profitable operation of small business companies.
APA, Harvard, Vancouver, ISO, and other styles
45

Zhou, Lin, Wei Hu, Zhen Jia, Xinfang Li, Yaru Li, Tianyun Su, and Qingsheng Guo. "Integrated Visualization Approach for Real-Time and Dynamic Assessment of Storm Surge Disasters for China’s Seas." ISPRS International Journal of Geo-Information 9, no. 1 (January 15, 2020): 51. http://dx.doi.org/10.3390/ijgi9010051.

Full text
Abstract:
For improved prevention and reduction of marine disasters, China’s marine authorities and emergency response agencies require a solution that provides risk assessment, early warning, and decision-making support. This paper proposes a comprehensive approach to disaster assessment that involves automated long-term operation, a spatial information visualization method and systematic integration. The proposed approach provides functions for numerical ocean models with forecast results, automated processing of massive data, multiple disaster/element coupled assessment, and multidimensional display and expression. With regard to storm surge disasters, the approach proposed in this paper adopts a four-tier structure and the functions of each tier are described separately. The original data are comprised of a combination of statistical analysis data and real-time data obtained from the unstructured grid Finite Volume Community Ocean Model. Automated data processing methods and assessment theories incorporating an indicator system and weighted parameters are used for the assessment. By applying 2D/3D visualization technology, assessment results are displayed via several modes for ease of operation and comprehension. The validity of the approach was verified by applying it to Typhoon Hato (No. 1713). Compared with the results of the post-disaster investigation, the assessment results of the proposed approach proved the reliability of the system.
APA, Harvard, Vancouver, ISO, and other styles
46

Andráš, Imrich, Pavol Dolinský, Linus Michaeli, and Ján Šaliga. "Sparse Signal Acquisition via Compressed Sensing and Principal Component Analysis." Measurement Science Review 18, no. 5 (October 1, 2018): 175–82. http://dx.doi.org/10.1515/msr-2018-0025.

Full text
Abstract:
Abstract This paper presents a way of acquiring a sparse signal by taking only a limited number of samples; sampling and compression are performed in one step by the analog to information conversion. The signal is recovered with minimal information loss from the reduced data record via compressed sensing reconstruction. Several methods of analog to information conversion are described with focus on numerical complexity and implementation in existing embedded devices. Two novel analog to information conversion methods are proposed, distinctive by their computational simplicity - direct subsampling and subsampling with integration. Proposed sensing methods are intended for and evaluated with real water parameter signals measured by a wireless sensor network. Compressed sensing proves to reduce the data transfer rate by >80 % with very little signal processing performed at the sensing side and no appreciable distortion of the reconstructed signal.
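A small Python sketch of the acquisition-and-recovery idea: a signal that is sparse in a DCT dictionary is directly subsampled and then recovered with orthogonal matching pursuit. The signal, sampling ratio and sparsity level are invented for illustration, and the paper's two converter designs are not modelled.

import numpy as np
from scipy.fft import idct

n, m = 256, 64
Psi = idct(np.eye(n), norm='ortho', axis=0)        # columns form an orthonormal DCT synthesis dictionary

c = np.zeros(n)
c[[5, 12, 40]] = [1.0, -0.7, 0.4]                  # sparse spectrum of the "sensor" signal
x = Psi @ c                                        # full signal that is never acquired completely

rng = np.random.default_rng(3)
rows = np.sort(rng.choice(n, m, replace=False))    # direct subsampling: keep m of n samples
A, y = Psi[rows, :], x[rows]

# orthogonal matching pursuit (3 iterations = known sparsity of this toy example)
resid, support, coef = y.copy(), [], np.zeros(0)
for _ in range(3):
    support.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

x_hat = Psi[:, support] @ coef
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))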
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Chih-Sung, and Yih Jeng. "Improving GPR Imaging of the Buried Water Utility Infrastructure by Integrating the Multidimensional Nonlinear Data Decomposition Technique into the Edge Detection." Water 13, no. 21 (November 8, 2021): 3148. http://dx.doi.org/10.3390/w13213148.

Full text
Abstract:
Although ground-penetrating radar (GPR) is effective at detecting shallow buried objects, more effort is still needed to apply it to the investigation of buried water utility infrastructure. Edge detection is a well-known image processing technique that may improve the resolution of GPR images. In this study, we briefly review the theory of edge detection and discuss several popular edge detectors as examples, and then apply an enhanced edge detecting method to GPR data processing. This method integrates the multidimensional ensemble empirical mode decomposition (MDEEMD) algorithm into standard edge detecting filters. MDEEMD is implemented mainly for data reconstruction to increase the signal-to-noise ratio before edge detection. A quantitative marginal spectrum analysis is employed to support the data reconstruction and facilitate the final data interpretation. The results of the numerical model study, followed by a field example, suggest that the MDEEMD edge detector is a competent method for processing and interpreting GPR data of a buried hot spring well, which cannot be efficiently handled by conventional techniques. Moreover, the proposed method can readily be considered a valuable tool for investigating other kinds of buried water utility infrastructure.
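The sketch below isolates the edge-detection step alone, applying a Sobel gradient magnitude to a synthetic radargram-like image in Python; the MDEEMD reconstruction and marginal spectrum analysis used in the paper are not reproduced, and the synthetic reflector is an assumption.

import numpy as np
from scipy.ndimage import sobel

# synthetic "radargram": a hyperbola-like reflector on a flat background
z, x = np.mgrid[0:128, 0:128]
section = np.exp(-((z - (40 + 0.02 * (x - 64)**2))**2) / 20.0)

gx = sobel(section, axis=1)                    # horizontal gradient
gz = sobel(section, axis=0)                    # vertical (depth) gradient
edges = np.hypot(gx, gz)                       # gradient magnitude highlights reflector boundaries

row, col = np.unravel_index(edges.argmax(), edges.shape)
print(f"strongest edge response near depth sample {row}, trace {col}")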
APA, Harvard, Vancouver, ISO, and other styles
48

Trainer, Amelia, Benoit Forget, and Jeremy Conlin. "IMPROVED THERMAL SCATTERING DATA PREPARATION." EPJ Web of Conferences 247 (2021): 09027. http://dx.doi.org/10.1051/epjconf/202124709027.

Full text
Abstract:
Convenient access to accurate nuclear data, particularly data describing low-energy neutrons, is crucial for trustworthy simulations of thermal nuclear systems. Obtaining the scattering kernel for thermal neutrons (i.e., neutrons with energy ~1 eV or less) can be a difficult problem, since the neutron energy is not sufficient to break molecular bonds, and thus the neutrons must often interact with a much larger structure. The “scattering law” S(α, β), which is a function of the unitless momentum transfer α and energy transfer β, is used to relate the material’s phonon frequency distribution to the scattering kernel. LEAPR (a module of NJOY) and GASKET are two nuclear data processing codes that can be used to prepare the scattering law; they use different approaches to approximate the same equations. LEAPR uses the “phonon expansion method”, which involves iterative convolution. Iteratively solving convolution integrals is an expensive calculation to perform (to ease this calculation, LEAPR uses trapezoidal integration for the convolution). GASKET uses a more direct approach that, while avoiding the iterative convolutions, can become numerically unstable for some α, β combinations. When both methods are properly converged, they tend to agree quite well. The agreement, and the departures from agreement, are presented here.
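A bare-bones Python sketch of the phonon expansion's iterative convolution follows; the β grid, the toy one-phonon term and the expansion order are simplifications assumed for illustration and do not follow LEAPR's grids, normalisation or treatment of the frequency spectrum.

import numpy as np
from math import factorial

beta = np.linspace(-20.0, 20.0, 801)
dbeta = beta[1] - beta[0]

T1 = np.exp(-beta**2 / 8.0)                     # toy one-phonon term from a model spectrum
T1 /= T1.sum() * dbeta                          # normalise to unit area (simple rectangle rule)

alpha, lam = 1.0, 1.0                           # unitless momentum transfer and a Debye-Waller-like factor
S = np.zeros_like(beta)
Tn = T1.copy()
for n in range(1, 8):                           # phonon expansion terms n = 1..7
    S += np.exp(-alpha * lam) * (alpha * lam)**n / factorial(n) * Tn
    Tn = np.convolve(Tn, T1, mode='same') * dbeta   # iterative convolution: T_{n+1} = T_n * T_1

print("integral of S over beta:", S.sum() * dbeta)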
APA, Harvard, Vancouver, ISO, and other styles
49

Sun, Shuwen, Yunfei Qiao, Zhentao Gao, and Chao Huang. "Research on Machining Error Analysis and Traceability Method of Globoidal Indexing Cam Profile." Machines 10, no. 3 (March 21, 2022): 219. http://dx.doi.org/10.3390/machines10030219.

Full text
Abstract:
The profile of the globoidal indexing cam is a spatially undevelopable surface. It requires a special computer numerical control (CNC) machine tool for batch production, and its machining quality is affected by the motion error of each part of the machine tool and by the clamping and positioning error of the workpiece. Firstly, a mathematical model of the machine tool errors in machining the globoidal cam surface is derived, and the influence of these errors on the machined surface is given. Secondly, an error tracking method for globoidal cam profile machining errors, based on grouping by error sensitivity coefficients, is proposed, which improves the data processing speed and the accuracy of the tracking results. Finally, the error analysis and traceability method for the globoidal cam is verified by experiments, and the error traceability results are fed back into the machining process. The machining quality of the globoidal cam is improved by error compensation, which provides a key technology for the integration of the design, manufacture, and measurement of the globoidal cam.
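As a toy illustration of sensitivity-coefficient ranking (the cam's actual kinematic error model is not reproduced), the Python sketch below perturbs each of four assumed error sources of a simple forward model and ranks the resulting profile deviations, which is the kind of grouping an error-traceability method can rely on.

import numpy as np

def profile_point(errors):
    # hypothetical forward model: tool-tip position as a function of four error sources
    e_x, e_y, e_angle, e_index = errors
    r, theta = 50.0, np.deg2rad(30.0 + e_index)
    x = (r + e_x) * np.cos(theta + np.deg2rad(e_angle))
    y = (r + e_y) * np.sin(theta + np.deg2rad(e_angle))
    return np.array([x, y])

nominal = profile_point(np.zeros(4))
names = ["x offset", "y offset", "tilt", "indexing"]
sens = []
for i in range(4):
    d = np.zeros(4)
    d[i] = 1e-3                                      # small perturbation of one error source
    sens.append(np.linalg.norm(profile_point(d) - nominal) / 1e-3)

order = np.argsort(sens)[::-1]                       # rank sources; grouping follows from the ranking
print("sensitivity ranking:", [(names[i], round(sens[i], 3)) for i in order])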
APA, Harvard, Vancouver, ISO, and other styles
50

Putra, Ahmad Pratama. "EARTHQUAKE HAZARD ASSESSMENT IN CENTER OF JAKARTA BASED ON SEISMICITY DATA, DEM, SLOPE, FAULT, AND GIS." Jurnal Sains dan Teknologi Mitigasi Bencana 14, no. 1 (August 5, 2019): 16–26. http://dx.doi.org/10.29122/jstmb.v14i1.3559.

Full text
Abstract:
According to the newest finding in 2016 about the Baribis thrust, if the fault is extended straight from Cibatu to Tangerang, it is roughly found to pass through several sub-districts of Jakarta. Meanwhile, the Center of Jakarta is the part of the capital city where many governmental, economic, and business activities are conducted. Geographic Information System (GIS) techniques are also commonly used for monitoring and damage assessment for many natural and geological hazards. In the present study, GIS techniques have been used to generate various thematic layers to assess earthquake hazard with a suitable numerical ranking scheme, mesh processing, and spatial data integration. The results show that the proposed model provides a reasonable earthquake potential index (EPI) from elevation, slope, magnitude, active fault, and epicentre parameters compared to the peak ground acceleration (PGA) in the Center of Jakarta. The EPI map shows that only a small area in the Center of Jakarta has a very high EPI. The very high EPI area lies mostly in the southeastern part of the study area, specifically in the Menteng sub-district. The map also illustrates that, spatially, the EPI becomes smaller toward the more northward areas of the study region. The EPI resulting from the calculation of surface parameters shows the same indication, or trend, as the PGA.
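A toy Python sketch of the weighted-overlay idea behind an earthquake potential index follows; the layer values, weights and class breaks are invented and do not correspond to the paper's ranking scheme.

import numpy as np

# normalized thematic layers on a small mesh: elevation, slope, magnitude,
# distance to active fault, distance to epicentre (all scaled to 0..1, higher = more hazard)
rng = np.random.default_rng(7)
layers = rng.random((5, 20, 20))
weights = np.array([0.15, 0.15, 0.30, 0.25, 0.15])      # assumed weighting scheme

epi = np.tensordot(weights, layers, axes=1)             # weighted sum per mesh cell
classes = np.digitize(epi, [0.2, 0.4, 0.6, 0.8])        # very low .. very high
print("share of mesh cells in the highest class:", (classes == 4).mean())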
APA, Harvard, Vancouver, ISO, and other styles
