Academic literature on the topic 'Combined method of processing incomplete data'

Create accurate references in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Combined method of processing incomplete data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Combined method of processing incomplete data"

1

Clabbers, Max T. B., Tim Gruene, James M. Parkhurst, Jan Pieter Abrahams, and David G. Waterman. "Electron diffraction data processing with DIALS." Acta Crystallographica Section D Structural Biology 74, no. 6 (May 30, 2018): 506–18. http://dx.doi.org/10.1107/s2059798318007726.

Abstract:
Electron diffraction is a relatively novel alternative to X-ray crystallography for the structure determination of macromolecules from three-dimensional nanometre-sized crystals. The continuous-rotation method of data collection has been adapted for the electron microscope. However, there are important differences in geometry that must be considered for successful data integration. The wavelength of electrons in a TEM is typically around 40 times shorter than that of X-rays, implying a nearly flat Ewald sphere, and consequently low diffraction angles and a high effective sample-to-detector distance. Nevertheless, the DIALS software package can, with specific adaptations, successfully process continuous-rotation electron diffraction data. Pathologies encountered specifically in electron diffraction make data integration more challenging. Errors can arise from instrumentation, such as beam drift or distorted diffraction patterns from lens imperfections. The diffraction geometry brings additional challenges such as strong correlation between lattice parameters and detector distance. These issues are compounded if calibration is incomplete, leading to uncertainty in experimental geometry, such as the effective detector distance and the rotation rate or direction. Dynamic scattering, absorption, radiation damage and incomplete wedges of data are additional factors that complicate data processing. Here, recent features of DIALS as adapted to electron diffraction processing are shown, including diagnostics for problematic diffraction geometry refinement, refinement of a smoothly varying beam model and corrections for distorted diffraction images. These novel features, combined with the existing tools in DIALS, make data integration and refinement feasible for electron crystallography, even in difficult cases.
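The flat-Ewald-sphere argument above is Bragg's law with a short wavelength. A back-of-the-envelope check (the wavelength and d-spacing values are typical textbook numbers, not figures from the paper):

```python
import math

def bragg_angle_deg(wavelength_angstrom, d_spacing_angstrom):
    """Bragg's law, lambda = 2*d*sin(theta): returns theta in degrees."""
    return math.degrees(math.asin(wavelength_angstrom / (2.0 * d_spacing_angstrom)))

d = 2.0                                # a 2 Å lattice spacing
xray = bragg_angle_deg(1.0, d)         # ~1 Å X-rays
electron = bragg_angle_deg(0.025, d)   # ~0.025 Å electrons (about 40x shorter)
# The electron scattering angle is ~40x smaller, so the Ewald sphere is nearly flat
```

With these numbers the X-ray reflection sits near 14.5 degrees while the electron reflection sits near 0.36 degrees, which is why the geometry behaves as if the effective sample-to-detector distance were very large.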
2

Kolosok, Irina, and Liudmila Gurina. "A bad data detection approach to EPS state estimation based on fuzzy sets and wavelet analysis." E3S Web of Conferences 216 (2020): 01029. http://dx.doi.org/10.1051/e3sconf/202021601029.

Abstract:
The paper offers an algorithm for the detection of erroneous measurements (bad data) that occur during cyberattacks against systems for data acquisition, processing and transfer and that cannot be detected by conventional measurement validation methods in EPS state estimation. The combined application of wavelet analysis and the theory of fuzzy sets when processing SCADA and WAMS measurements produces higher accuracy of the estimates obtained under incomplete and uncertain data and demonstrates the efficiency of the proposed approach for practical computations on simulated cyberattacks.
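As a loose illustration of how wavelet coefficients and fuzzy membership can be combined to flag bad data (a toy sketch, not the authors' algorithm: the single-level Haar transform, the ramp membership function and both thresholds are assumptions):

```python
import numpy as np

def haar_detail(signal):
    """Single-level Haar wavelet detail coefficients (scaled pairwise
    differences): large magnitudes mark abrupt jumps."""
    x = np.asarray(signal, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def suspicion(detail, soft=1.0, hard=3.0):
    """Fuzzy membership in the 'bad data' set: 0 below `soft`, 1 above
    `hard`, a linear ramp in between."""
    mag = np.abs(detail)
    return np.clip((mag - soft) / (hard - soft), 0.0, 1.0)

# Steady measurements with one injected spike (e.g. a spoofed SCADA value)
meas = np.array([10.0, 10.1, 9.9, 10.0, 25.0, 10.1, 10.0, 9.9])
scores = suspicion(haar_detail(meas))
bad_pairs = np.where(scores > 0.5)[0]   # pair 2 covers samples 4 and 5
```

Only the pair containing the spike gets a non-zero suspicion score; the steady pairs map to zero.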
3

Liang, P., G. Q. Zhou, Y. L. Lu, X. Zhou, and B. Song. "HOLE-FILLING ALGORITHM FOR AIRBORNE LIDAR POINT CLOUD DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (February 8, 2020): 739–45. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-739-2020.

Abstract:
Due to occlusion by objects or the complexity of the measured terrain during airborne lidar scanning, holes inevitably appear in the point cloud data after filtering and other processing. The incomplete data inevitably affect the quality of the reconstructed digital elevation model, so repairing incomplete point cloud data has become an urgent problem. To solve the problem of hole repair in point cloud data, a hole repair algorithm based on an improved moving least squares method is proposed in this paper after a study of existing hole repair algorithms. Firstly, the algorithm extracts the boundary of the point cloud based on the triangular mesh model. Then a k-nearest-neighbour search is used to obtain the k nearest neighbours of each boundary point. Finally, according to the boundary points and their k nearest neighbours, the improved moving least squares method is used to fit the hole surface and realize the repair. Implemented in C++ and MATLAB, the feasibility of the algorithm is tested with specific application examples. The experimental results show that the algorithm can effectively repair holes in point cloud data with high precision; the filled hole area connects smoothly with the boundary.
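The fitting step of such a pipeline can be illustrated with an ordinary (non-moving) least-squares quadratic patch; the sketch below is a deliberate simplification of the paper's improved MLS fit, with the ring geometry invented for the example:

```python
import numpy as np

def fit_quadratic_patch(points):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    to the (x, y, z) points surrounding a hole."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_patch(coeffs, x, y):
    """Evaluate the fitted surface at a point inside the hole."""
    return coeffs @ np.array([1.0, x, y, x**2, x * y, y**2])

# Two rings of points sampled from z = x^2 + y^2, standing in for the hole
# boundary points and their k nearest neighbours
t = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
rings = [np.column_stack([r * np.cos(t), r * np.sin(t), np.full_like(t, r**2)])
         for r in (1.0, 2.0)]
pts = np.vstack(rings)
c = fit_quadratic_patch(pts)
z_fill = eval_patch(c, 0.0, 0.0)   # recovered height at the hole centre (~0)
```

Because the surrounding points lie exactly on a paraboloid, the fit recovers the true quadratic coefficients and interpolates the hole centre correctly.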
4

Kang, C. L., T. N. Lu, M. M. Zong, F. Wang, and Y. Cheng. "POINT CLOUD SMOOTH SAMPLING AND SURFACE RECONSTRUCTION BASED ON MOVING LEAST SQUARES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (February 7, 2020): 145–51. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-145-2020.

Abstract:
In point cloud data processing, smooth sampling and surface reconstruction are important steps. With current point cloud sampling methods, the point distribution is not uniform, the feature information is incomplete, and the reconstructed model surface is not smooth. This paper proposes a method for smooth sampling and surface reconstruction of point clouds using the moving least squares method. The paper first introduces the traditional moving least squares method in detail and then proposes an improved moving least squares method for point cloud smooth sampling and surface reconstruction. The algorithm was implemented in C++ with the Point Cloud Library (PCL); voxel-grid sampling and uniform sampling were compared with moving least squares smooth sampling, and after sampling a greedy triangulation algorithm was used for surface reconstruction. The experimental results show that the improved moving least squares method samples the point cloud more uniformly than voxel-grid sampling and makes feature information more prominent. The surface reconstructed from the moving least squares samples is smooth, whereas the surfaces reconstructed from the voxel-grid and uniformly sampled data are rough, with coarse triangular facets. Point cloud smooth sampling and surface reconstruction based on the moving least squares method thus better preserve point cloud feature information and model smoothness. The superiority and effectiveness of the method are demonstrated, providing a reference for subsequent studies of point cloud sampling and surface reconstruction.
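The core MLS smoothing step, projecting a noisy point onto a locally weighted least-squares plane, can be sketched as follows (a simplified illustration; the paper's improved MLS and PCL-based pipeline involve considerably more machinery, and the bandwidth `h` is an assumption):

```python
import numpy as np

def mls_smooth(points, query, h=0.5):
    """Project `query` onto a locally weighted least-squares plane -- the
    core smoothing step of moving least squares, in simplified form."""
    d2 = np.sum((points - query) ** 2, axis=1)
    w = np.exp(-d2 / h**2)                          # Gaussian weights
    centroid = (w[:, None] * points).sum(axis=0) / w.sum()
    diffs = points - centroid
    # Weighted covariance; its smallest eigenvector is the plane normal
    cov = (w[:, None, None] * np.einsum('ni,nj->nij', diffs, diffs)).sum(axis=0)
    normal = np.linalg.eigh(cov)[1][:, 0]           # eigh sorts ascending
    # Remove the off-plane component of the query point
    return query - np.dot(query - centroid, normal) * normal

# A flat 3x3 patch of points on z = 0 and a query point lifted off the plane
grid = np.array([[i, j, 0.0] for i in (-1.0, 0.0, 1.0) for j in (-1.0, 0.0, 1.0)])
smoothed = mls_smooth(grid, np.array([0.1, 0.2, 0.5]))   # lands back on z = 0
```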
5

Courtenay, Lloyd A., and Diego González-Aguilera. "Geometric Morphometric Data Augmentation Using Generative Computational Learning Algorithms." Applied Sciences 10, no. 24 (December 21, 2020): 9133. http://dx.doi.org/10.3390/app10249133.

Abstract:
The fossil record is notorious for being incomplete and distorted, frequently conditioning the type of knowledge that can be extracted from it. In many cases, this often leads to issues when performing complex statistical analyses, such as classification tasks, predictive modelling, and variance analyses, such as those used in Geometric Morphometrics. Here different Generative Adversarial Network architectures are experimented with, testing the effects of sample size and domain dimensionality on model performance. For model evaluation, robust statistical methods were used. Each of the algorithms were observed to produce realistic data. Generative Adversarial Networks using different loss functions produced multidimensional synthetic data significantly equivalent to the original training data. Conditional Generative Adversarial Networks were not as successful. The methods proposed are likely to reduce the impact of sample size and bias on a number of statistical learning applications. While Generative Adversarial Networks are not the solution to all sample-size related issues, combined with other pre-processing steps these limitations may be overcome. This presents a valuable means of augmenting geometric morphometric datasets for greater predictive visualization.
6

Ibric, Svetlana, Zorica Djuric, Jelena Parojcic, and Jelena Petrovic. "Artificial intelligence in pharmaceutical product formulation: Neural computing." Chemical Industry and Chemical Engineering Quarterly 15, no. 4 (2009): 227–36. http://dx.doi.org/10.2298/ciceq0904227i.

Abstract:
The properties of a formulation are determined not only by the ratios in which the ingredients are combined but also by the processing conditions. Although the relationships between the ingredient levels, processing conditions, and product performance may be known anecdotally, they can rarely be quantified. In the past, formulators tended to use statistical techniques to model their formulations, relying on response surfaces to provide a mechanism for optimization. However, the optimization by such a method can be misleading, especially if the formulation is complex. More recently, advances in mathematics and computer science have led to the development of alternative modeling and data mining techniques which work with a wider range of data sources: neural networks (an attempt to mimic the processing of the human brain); genetic algorithms (an attempt to mimic the evolutionary process by which biological systems self-organize and adapt), and fuzzy logic (an attempt to mimic the ability of the human brain to draw conclusions and generate responses based on incomplete or imprecise information). In this review the current technology will be examined, as well as its application in pharmaceutical formulation and processing. The challenges, benefits and future possibilities of neural computing will be discussed. Note: this article has been retracted; see http://dx.doi.org/10.2298/CICEQ1104000E.
7

Wu, Haopeng, Zhiying Lu, Jianfeng Zhang, Xin Li, Mingyue Zhao, and Xudong Ding. "Facial Expression Recognition Based on Multi-Features Cooperative Deep Convolutional Network." Applied Sciences 11, no. 4 (February 4, 2021): 1428. http://dx.doi.org/10.3390/app11041428.

Abstract:
This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often cause overfitting problems or incomplete information due to insufficient data and the manual selection of features. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on the overall features of the face and the trend of key parts. The processing of video data is the first stage. The method of ensemble of regression trees (ERT) is used to obtain the overall contour of the face. Then, an attention model is used to pick out the parts of the face that are more susceptible to expressions. Under the combined effect of these two methods, an image that can be called a local feature map is obtained. After that, the video data are sent to MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained through the sequence of images, the selection of key parts can better learn the changes in facial expressions brought about by subtle facial movements. By combining local features and global features, the proposed method can acquire more information, leading to better performance. The experimental results show that MC-DCN can achieve recognition rates of 95%, 78.6% and 78.3% on the three datasets SAVEE, MMI, and edited GEMEP, respectively.
8

Shi, Haobin, Meng Xu, Kao-Shing Hwang, and Chia-Hung Hung. "A posture measurement approach for an articulated manipulator by RGB-D cameras." International Journal of Advanced Robotic Systems 16, no. 2 (March 1, 2019): 172988141983813. http://dx.doi.org/10.1177/1729881419838130.

Abstract:
This article addresses the safety problems that arise where robots and operators are highly coupled in a working space. A method to model an articulated robot manipulator by cylindrical geometries based on partial cloud points is proposed. Firstly, images with point cloud data containing the posture of a robot with five resolution links are captured by a pair of RGB-D cameras. Secondly, point cloud clustering and Gaussian noise filtering are applied to the images to separate the point cloud data of three links from the combined images. Thirdly, an ideal cylindrical model is fitted to the processed point cloud data segmented by the random sample consensus method, so that the three joint angles corresponding to the three major links are computed. The original method for calculating the normal vector of the point cloud data is the cylindrical model segmentation method, but its accuracy of posture measurement is low when the point cloud data are incomplete. To solve this problem, a principal axis compensation method is proposed, which is not affected by the number of points in a point cloud cluster. The original method and the proposed method are used to estimate the three joint angles of the manipulator system in experiments. Experimental results show that, compared with the original method, the average error is reduced by 27.97% and the sample standard deviation of the error is reduced by 54.21%. The proposed method is 0.971 piece/s slower than the original method in terms of image processing velocity, but it is still feasible, and the purpose of posture measurement is achieved.
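A principal axis of a partial point cluster can be estimated from its covariance structure; the PCA sketch below illustrates the general idea behind axis-based compensation, not the authors' exact method (the cylinder-like test points are invented):

```python
import numpy as np

def principal_axis(points):
    """Dominant direction of a point cluster: eigenvector of the covariance
    matrix with the largest eigenvalue (classic PCA)."""
    centred = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(centred.T @ centred)
    axis = eigvecs[:, -1]              # eigh sorts ascending: last is largest
    return axis / np.linalg.norm(axis)

# Points winding around a thin cylinder-like link lying along the x axis
t = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([t, 0.01 * np.sin(20 * t), 0.01 * np.cos(20 * t)])
axis = principal_axis(pts)
tilt_deg = np.degrees(np.arccos(min(1.0, abs(axis[0]))))  # tilt from x axis
```

Because the axis comes from the whole cluster's covariance rather than a fitted surface, it degrades gracefully when part of the cylinder surface is missing.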
9

Kryzhanivskyy, Ye I., V. Ya Grudz, V. Ya Grudz (jun.), R. B. Tereshchenko, and R. M. Govdjak. "OPTIMIZING THE MODES OF COMPRESSOR STATIONS UNDER THE CONDITIONS OF THEIR PARTIAL LOAD." Oil and Gas Power Engineering, no. 1(31) (June 26, 2019): 36–42. http://dx.doi.org/10.31471/1993-9868-2019-1(31)-36-42.

Abstract:
The principle of constructing stochastic and deterministic mathematical models for predicting the operating modes of gas pumping units at compressor stations of main gas pipelines, and for their optimization under incomplete loading of the gas transmission system, is proposed. Calculation of the parameters of the technological mode of operation of gas pumping units at compressor stations is based on the combined characteristics of centrifugal superchargers, whose processing by means of mathematical statistics allowed analytical expressions for the characteristic models to be obtained. The calculation method can be implemented for forecasting the operation modes of the compressor station under both single-stage and multi-stage compression of gas. The source data for the calculation comprise the upstream and downstream pressure and temperature of the gas, the pumped volume and the physical properties of the gas. To predict the operating modes of compressor stations, the optimal distribution of load between workshops equipped with multi-type gas-pumping units is carried out, provided that the energy consumed to compress the given volume of gas under known boundary conditions is reduced. The principle of constructing an optimality criterion for conditions of incomplete loading of the gas transmission system is shown. To illustrate the proposed method of optimizing the operation modes of compressor stations under incomplete loading, an example is given of the calculation of optimal operating modes of a hypothetical two-shop compressor station with various types of aggregates. The calculation is based on the construction of analytical mathematical models of their characteristics with the criterion of minimum power for gas compression at a given output and given upstream and downstream pressures at the station.
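The load-distribution step, splitting a required flow between units so that total power is minimal, can be sketched with hypothetical quadratic power characteristics (the coefficients and units below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical power characteristics P_i(q), in MW, for two pumping units
def p1(q):
    return 2.0 + 0.10 * q + 0.004 * q**2

def p2(q):
    return 1.5 + 0.20 * q + 0.002 * q**2

def optimal_split(total_flow, step=0.01):
    """Brute-force search for the flow split minimising total power."""
    q1 = np.arange(0.0, total_flow + step, step)
    power = p1(q1) + p2(total_flow - q1)
    i = int(np.argmin(power))
    return q1[i], total_flow - q1[i], power[i]

q1, q2, p_min = optimal_split(50.0)
# At the optimum the marginal powers dP1/dq and dP2/dq are (nearly) equal
```

For these curves the minimum sits at an even 25/25 split; with asymmetric coefficients the search shifts load toward the more efficient unit, which is the essence of the optimality criterion.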
10

Wei, Xu, Jun Yang, Mingjiu Lv, Wenfeng Chen, Xiaoyan Ma, Saiqiang Xia, and Ming Long. "Removal of Micro-Doppler Effects in ISAR Imaging Based on the Joint Processing of Singular Value Decomposition and Complex Variational Mode Extraction." Mathematical Problems in Engineering 2022 (April 27, 2022): 1–15. http://dx.doi.org/10.1155/2022/6141278.

Abstract:
For inverse synthetic aperture radar (ISAR) imaging of targets with micromotion parts, the removal of micro-Doppler (m-D) effects is the key procedure. However, under the condition of a sparse aperture, the echo pulse is limited or incomplete, giving rise to the difficulty of eliminating m-D effects. Thus, a novel m-D effects removal algorithm is proposed, which can effectively eliminate m-D effects, as well as the interference introduced by noise and sparse aperture in the ISAR image of the main body. The proposed algorithm mainly includes two processing steps. First, combined with the cut-off point determined by the normalized singular value difference spectrum, the rough estimation of the main body is achieved by singular value decomposition (SVD). Then, the variational mode extraction (VME) is extended to complex variational mode extraction (CVME). The constrained variational problem constructed by bandwidth and spectral overlap constraints is solved by the alternating direction method of multipliers (ADMM), and the precise estimation of the main body is obtained. Experimental results based on both simulated and measured data demonstrate that the proposed algorithm can acquire the high-resolution ISAR image of the main body under noisy and sparse conditions.
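The first processing step, truncating the SVD at a cut-off chosen from the normalised singular value difference spectrum, can be sketched as follows (a toy illustration on a synthetic diagonal matrix; the CVME/ADMM refinement stage is not shown):

```python
import numpy as np

def svd_body_estimate(echo_matrix):
    """Keep the singular components before the largest drop in the
    normalised singular-value difference spectrum and reconstruct."""
    U, s, Vt = np.linalg.svd(echo_matrix, full_matrices=False)
    s_norm = s / s[0]
    diff = s_norm[:-1] - s_norm[1:]      # normalised difference spectrum
    k = int(np.argmax(diff)) + 1         # cut-off: components before the drop
    return (U[:, :k] * s[:k]) @ Vt[:k], k

# Synthetic echo: two strong 'main body' components, six weak clutter ones
spectrum = [10.0, 6.0, 0.05, 0.04, 0.03, 0.02, 0.01, 0.005]
echo = np.diag(spectrum)
estimate, k = svd_body_estimate(echo)    # keeps the two strong components
```

The large gap between the second and third normalised singular values places the cut-off at k = 2, so only the strong components survive in the reconstruction.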

Dissertations / Theses on the topic "Combined method of processing incomplete data"

1

Кузнєцова, Наталія Володимирівна. "Методи і моделі аналізу, оцінювання та прогнозування ризиків у фінансових системах." Doctoral thesis, Київ, 2018. https://ela.kpi.ua/handle/123456789/26340.

Abstract:
The work was carried out at the Institute for Applied System Analysis of the National Technical University of Ukraine 'Igor Sikorsky Kyiv Polytechnic Institute'.
The dissertation develops a systemic methodology for the analysis and assessment of financial risks, based on the principles of system analysis and risk management as well as the proposed principles of adaptive and dynamic risk management. The methodology includes: a combined method for processing incomplete and missing data; a probabilistic-statistical method for estimating the risk of financial losses; a dynamic risk estimation method involving the construction of various types of survival models; a method of structural-parametric adaptation; the application of a scorecard to the risk analysis of financial systems; and a neuro-fuzzy method for augmenting the sample with rejected applications. It contains criteria for taking information risk into account, for assessing the quality of data, forecasts and decisions, a quadratic criterion for the quality of risk treatment, and an integral characteristic for evaluating the effectiveness of risk management methods. The practical value of the results obtained lies in the creation of an extended information technology and a decision support information system based on the proposed systemic methodology.
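A combined treatment of incomplete data generally tailors the handling of gaps to each feature. As a loose illustration of that idea only (the thresholds, the fallback rule and the regression fill below are assumptions of this sketch, not the dissertation's actual method):

```python
import numpy as np

def combined_impute(X, low=0.1, high=0.4):
    """Per-feature treatment of incomplete data: mean-fill lightly affected
    columns, regression-fill moderately affected ones from the best
    correlated fully observed column, and flag heavily affected columns
    for removal. Thresholds and rules are assumptions of this sketch."""
    X = np.array(X, dtype=float)
    n, m = X.shape
    drop = []
    for j in range(m):
        mask = np.isnan(X[:, j])
        rate = mask.mean()
        if rate == 0.0:
            continue
        if rate > high:
            drop.append(j)                      # too incomplete to impute
        elif rate <= low:
            X[mask, j] = np.nanmean(X[:, j])    # cheap mean imputation
        else:
            complete = [c for c in range(m) if not np.isnan(X[:, c]).any()]
            if not complete:                    # no usable predictor column
                X[mask, j] = np.nanmean(X[:, j])
                continue
            best = max(complete, key=lambda c: abs(
                np.corrcoef(X[~mask, j], X[~mask, c])[0, 1]))
            slope, intercept = np.polyfit(X[~mask, best], X[~mask, j], 1)
            X[mask, j] = slope * X[mask, best] + intercept
    return X, drop

nan = np.nan
data = np.array([[0., 1., 3.], [1., 3., 4.], [2., nan, 5.], [3., 7., 6.],
                 [4., 9., nan], [5., nan, 8.], [6., 13., 9.], [7., nan, 10.],
                 [8., 17., 11.], [9., 19., 12.]])
filled, dropped = combined_impute(data)   # column 1 obeys y = 2*x + 1
```

Column 1 (30% missing) is regression-filled exactly because it is linear in column 0, while column 2 (10% missing) falls back to its mean.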

Book chapters on the topic "Combined method of processing incomplete data"

1

Kureychik, Viktor, and Alexandra Semenova. "Combined Method for Integration of Heterogeneous Ontology Models for Big Data Processing and Analysis." In Advances in Intelligent Systems and Computing, 302–11. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57261-1_30.

2

Ludwig, Horst, and Wilhelm Scigalla. "Pressure- and Temperature-Induced Inactivation of Microorganisms." In High Pressure Effects in Molecular Biophysics and Enzymology. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195097221.003.0025.

Abstract:
Bacteria are unstable when the temperature or pressure is sufficiently high. Their inactivation by pressure is the result of a complicated interplay of both temperature and pressure effects. The p-T stability diagram of bacteria is similar to that of proteins, but the inactivation kinetics of bacteria indicate that the lethal event cannot be the denaturation of the most sensitive proteins in the cell, for in that case one would expect a lag time followed by a sudden inactivation when the last copy of those sensitive proteins was destroyed. On the contrary, it appears as if the kinetics are caused by a single damage mechanism. In addition, some evidence suggests that the membrane is involved. Therefore, it seems that membrane-associated proteins play a major role in the inactivation of bacteria. The inactivation of bacterial spores shows an even more complex T-p interrelationship. The reason is that two different processes are combined in spore inactivation: the germination of dormant spores at comparatively low pressures and the inactivation of the germinated specimens at high pressures. Thus, special procedures are needed for effective spore inactivation. Microorganisms are killed when the surrounding hydrostatic pressure is sufficiently high. This finding provides the basis for developing a physical sterilization method for drugs and food. Initial experiments in this area were carried out nearly 100 years ago (Hite, 1899), but technical shortcomings and incomplete scientific knowledge impeded their utilization at that time. Recently, the application of high pressure in food preservation and processing has garnered new interest (Hayashi, 1989; Balny et al., 1992). To collect precise kinetic data for pressure-induced degermination, we constructed a device consisting of 10 pressure vessels that could be thermostated in two groups of five. Each vessel had an inner diameter of 1.2 cm and an inner length of 12 cm. The samples were separated from the pressure medium, water, by polyethylene tubes or bags. The maximum pressure was 7 kbar. The vegetative bacteria were always freshly cultured from a single organism before each experimental run. The preparations were allowed to grow to the exponential phase and were used in experiments just before the stationary phase was reached.
3

Saracevic, Muzafer H., Aybeyan Selimi, and Selver Pepić. "Implementation of Encryption and Data Hiding in E-Health Application." In Handbook of Research on Intelligent Data Processing and Information Security Systems, 25–42. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1290-6.ch002.

Abstract:
This chapter presents the possibilities of applying cryptography and steganography in design advanced methods of medical software. The proposed solution has two modules: medical data encryption and medical data hiding. In the first module for the encryption of patient data a Catalan crypto-key is used combined with the LatticePath combinatorial problem. In the second module for hiding patient data, the Catalan stego-key and medical image is used. The objective of the second part is to explain and investigate the existing author's method to steganography based on the Catalan numbers in the design of medical software. The proposed solution is implemented in the Java programming language. In the experimental part, cryptanalysis and steganalysis of the proposed solution were given. Cryptanalysis is based on time and storage complexity, leaking information and machine learning-based identification of the encryption method. Also, steganalysis is based on the amount of information per pixel in stego image, approximate entropy and bit distribution in stego-images.
4

Niinimäki, Marko, Felipe Abaunza, Tapio Niemi, Peter Thanisch, and Jukka Kommeri. "Energy-Efficient Query Processing in a Combined Database and Web Service Environment." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 62–88. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5017-4.ch004.

Abstract:
The energy-efficiency of server hardware, web server software, and databases has been widely studied. However, studies that combine these aspects are rare. In this chapter, the authors present an energy-efficiency evaluation of a web/database application in a Windows/IIS/MSSQL environment running on an industrial grade Intel server. Moreover, they provide a wide overview of related research and technologies. Researchers have noticed that despite energy-saving technologies, energy consumption of data centers is still growing. To resolve this dilemma, the authors explore the background and propose concrete solutions. They concentrate on the following aspects: server BIOS/operating system energy optimization (limited impact) and “bursting” (i.e., queuing requests and then executing them in bursts). The authors have used the bursting method with both database and web/database applications. Their results indicate about 10% energy savings using this method. The authors analyse the model using statistical tools and present an equation to express the quality of service vs. burst wait time relationship.
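The 'bursting' idea, queuing requests and serving them in batches so the hardware can idle between bursts, can be sketched minimally as follows (an illustration only, not the authors' Windows/IIS/MSSQL implementation; the batch size and wait time are assumptions):

```python
import time
from queue import Queue, Empty

def burst_drain(requests, handle, wait_s=0.01, max_batch=8):
    """Serve queued requests in bursts: idle for `wait_s` (letting the CPU
    drop into a low-power state), then wake and drain up to `max_batch`
    requests in one go."""
    results = []
    while True:
        time.sleep(wait_s)               # the QoS-vs-energy trade-off knob
        batch = []
        while len(batch) < max_batch:
            try:
                batch.append(requests.get_nowait())
            except Empty:
                break
        results.extend(handle(r) for r in batch)
        if requests.empty():
            return results

q = Queue()
for i in range(20):
    q.put(i)
answers = burst_drain(q, lambda r: r * r)   # 20 requests served in 3 bursts
```

Raising `wait_s` lengthens idle periods (more energy saved) at the cost of per-request latency, which is exactly the quality-of-service vs. burst-wait-time relationship the chapter models.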
5

Danilova, Natalia, and David Stupples. "Semantic Approach to Web-Based Discovery of Unknowns to Enhance Intelligence Gathering." In Ontologies and Big Data Considerations for Effective Intelligence, 196–213. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-2058-0.ch006.

Abstract:
A semantic Web-based search method is introduced that automates the correlation of topic-related content for the discovery of hitherto unknown intelligence from disparate and widely diverse Web sources. This method is in contrast to traditional search methods that are constrained to specific or narrowly defined topics. The method is based on algorithms from Natural Language Processing combined with techniques adapted from grounded theory and Dempster-Shafer theory to significantly enhance the discovery of related Web-sourced intelligence. This paper describes the development of the method by showing the integration of the mathematical models used. Real-world worked examples demonstrate the effectiveness of the method, with supporting performance analysis showing that the quality of the extracted content is significantly enhanced compared with traditional Web-search approaches.
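The Dempster-Shafer fusion step the method relies on reduces to Dempster's rule of combination over mass functions. A minimal sketch (the two 'sources' and their mass assignments are invented for illustration):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; conflicting mass is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sources assessing whether a Web item is Relevant (R) or Irrelevant (I)
R, I = frozenset({'R'}), frozenset({'I'})
either = frozenset({'R', 'I'})            # mass left unassigned (ignorance)
src1 = {R: 0.6, either: 0.4}
src2 = {R: 0.7, I: 0.1, either: 0.2}
fused = dempster_combine(src1, src2)      # belief in R is reinforced
```

Note how the combined mass on R exceeds either source's individual assignment: agreeing evidence reinforces, while the small conflicting mass is normalised out.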
6

Zhu, Huawei, Yihao Mao, Yiming Zhao, Liyuan Shi, and Yanli Shao. "Short-Term Traffic Flow Prediction Method Based on Spatio-Temporal Characteristics of Complex Road Network." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220060.

Abstract:
With the development of urbanization, the number of residents' motor vehicles has increased sharply, and the traffic congestion problem has become increasingly serious. The construction of Intelligent Traffic Systems (ITS) has become the main means to alleviate traffic congestion. Short-term traffic flow prediction has guiding significance for residents' travel planning and the intelligent management of transportation, and has become one of the research hotspots in the intelligent transportation field. Therefore, a short-term traffic flow prediction method based on the spatio-temporal characteristics of complex road networks is proposed to further improve prediction accuracy and reduce prediction cost. Firstly, a graph convolutional network (GCN) capable of processing non-Euclidean data structures is used to extract the spatial characteristics of traffic flow data. Then, a long short-term memory (LSTM) neural network is used to process the temporal characteristics. Finally, the two are combined to realize effective processing of the spatio-temporal characteristics of traffic flow data. Experimental results on a real traffic flow dataset prove the feasibility and effectiveness of the proposed method, which can provide a basis for intelligent traffic control and smart city construction.
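The spatial half of such a pipeline is a graph convolution over the road network's adjacency matrix. A toy sketch of one propagation step (the three-sensor line graph and flow values are invented, and the LSTM stage is omitted):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: H = D^(-1/2) (A + I) D^(-1/2) X W,
    i.e. every node mixes its features with its neighbours' (the
    activation function is omitted for brevity)."""
    A_hat = A + np.eye(A.shape[0])                 # adjacency + self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W

# Three road sensors in a line (0 -- 1 -- 2), one flow feature per sensor
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
X = np.array([[10.0], [20.0], [30.0]])             # current traffic flows
H = gcn_layer(A, X, W=np.eye(1))                   # neighbourhood-mixed flows
```

In a full GCN+LSTM model, a sequence of such spatially mixed feature matrices would then be fed to the LSTM to capture the temporal dynamics.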
7

Aggarwal, Swati, and Shambeel Azim. "Outliers, Missing Values, and Reliability." In Handbook of Research on Fuzzy and Rough Set Theory in Organizational Decision Making, 316–30. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-1008-6.ch014.

Abstract:
Reliability is a major concern in qualitative research. Most of the current research deals with finding the reliability of the data, but not much work is reported on how to improve the reliability of the unreliable data. This paper discusses three important aspects of the data pre-processing: how to detect the outliers, dealing with the missing values and finally increasing the reliability of the dataset. Here authors have suggested a framework for pre-processing of the inter-judged data which is incomplete and also contains erroneous values. The suggested framework integrates three approaches, Krippendorff's alpha for reliability computation, frequency based outlier detection method and a hybrid fuzzy c-means and multilayer perceptron based imputation technique. The proposed integrated approach results in an increase of reliability for the dataset which can be used to make strong conclusions.
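The frequency-based outlier detection step can be sketched in a few lines (the share threshold and the rating values are assumptions of this illustration, not the chapter's parameters):

```python
from collections import Counter

def frequency_outliers(values, min_share=0.05):
    """Flag a rating as an outlier when it accounts for less than
    `min_share` of all observations."""
    counts = Counter(values)
    n = len(values)
    return {v for v, c in counts.items() if c / n < min_share}

# Inter-judge ratings with two rare, typo-like values
ratings = [3] * 10 + [4] * 8 + [5] * 1 + [99] * 1
flagged = frequency_outliers(ratings, min_share=0.10)   # {5, 99}
```

In the chapter's framework the flagged values would then be treated as missing and handed to the fuzzy c-means/multilayer-perceptron imputation stage.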
8

Johann, Felix, David Becker, Matthias Becker, and E. Sinem Ince. "Multi-Scenario Evaluation of the Direct Method in Strapdown Airborne and Shipborne Gravimetry." In International Association of Geodesy Symposia. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/1345_2020_127.

Abstract:
In recent years, it was shown that the quality of strapdown airborne gravimetry using a navigation-grade strapdown inertial measurement unit (IMU) could be on par with “classical” airborne gravimeters as the 2-axis stabilized LaCoste and Romberg S-type gravimeter. Basically, two processing approaches exist in strapdown gravimetry. Applying the indirect method (also referred to as “inertial navigation approach” or “one-step approach”), all observations – raw GNSS observations or position solutions, IMU specific force and angular rate measurements – are combined in a single Kalman Filter. Alternatively, applying the direct method (also referred to as “accelerometry approach” or “cascaded approach”), GNSS position solutions are numerically differentiated twice to get the vehicle’s kinematic acceleration, which is then directly removed from the IMU specific force measurement in order to obtain gravity. In the scope of this paper, test runs for the application of strapdown airborne and shipborne gravimetry are evaluated using an iMAR iNAV-RQH-1003 IMU. Results of the direct and the indirect methods are compared to each other. Additionally, a short introduction to the processing scheme of the Chekan-AM gravimeter data is given and differences between Chekan-AM and strapdown results of the shipborne campaigns are analysed. Using the same data set, the cross-over residuals suggest a similar accuracy of 0.39 mGal for the Chekan-AM and 0.41 mGal for the adjusted strapdown results (direct method).
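In its simplest vertical-channel form, the direct method reduces to 'gravity = specific force minus twice-differentiated position'. A 1-D sketch under idealised assumptions (constant climb acceleration, no noise, and none of the filtering, lever-arm or attitude corrections real processing requires):

```python
import numpy as np

def direct_method_gravity(positions, specific_force, dt):
    """Direct (accelerometry) approach, vertical channel only: differentiate
    GNSS positions twice numerically and remove the kinematic acceleration
    from the measured specific force."""
    velocity = np.gradient(positions, dt)
    kinematic_acc = np.gradient(velocity, dt)
    return specific_force - kinematic_acc

dt = 1.0
t = np.arange(0.0, 10.0, dt)
height = 0.5 * 0.2 * t**2                     # climb at a constant 0.2 m/s^2
f_measured = np.full_like(t, 9.81 + 0.2)      # accelerometer senses both
g_est = direct_method_gravity(height, f_measured, dt)
# Away from the series edges the estimate recovers g = 9.81 m/s^2
```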
9

N. Mihăilescu, Cristian, Mihai Oane, Natalia Mihăilescu, Carmen Ristoscu, Muhammad Arif Mahmood, and Ion N. Mihăilescu. "A New Approach to Solve Non-Fourier Heat Equation via Empirical Methods Combined with the Integral Transform Technique in Finite Domains." In Matrix Theory - Classics and Advances [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.104499.

Abstract:
This chapter deals with the validity and limits of the integral transform technique on finite domains. The integral transform technique, based upon eigenvalues and eigenfunctions, can serve as an appropriate tool for solving the Fourier heat equation in the case of both laser and electron beam processing. The crux of the method is that its solutions demonstrate strong convergence after only 10 eigenvalue iterations. Nevertheless, the method is difficult to extend to the case of non-Fourier equations. A solution is possible, but it is cumbersome, converges weakly and requires extra boundary conditions. To overcome this difficulty, a new mixed approach is proposed in this chapter, resorting to experimental data in order to support a more appropriate solution. The proposed method opens, in our opinion, a beneficial prospect for both laser and electron beam processing.
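As a minimal illustration of the eigenfunction-expansion idea, the sketch below evaluates the truncated series solution of the 1D Fourier heat equation on a finite domain; the initial profile is our assumed example, not the chapter's. Consistent with the convergence observation above, about 10 eigenvalues already reproduce the initial condition to better than 0.1%:

```python
import numpy as np

def heat_series(x, t, alpha=1.0, L=1.0, n_terms=10):
    """Truncated eigenfunction solution of u_t = alpha * u_xx on [0, L] with
    u(0, t) = u(L, t) = 0 and the assumed initial profile u(x, 0) = x(L - x),
    whose Fourier sine coefficients are b_n = 8 L^2 / (n pi)^3 for odd n."""
    u = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        if n % 2 == 0:
            continue                                   # even coefficients vanish
        b_n = 8.0 * L**2 / (n * np.pi)**3
        lam = (n * np.pi / L)**2                       # eigenvalue
        u += b_n * np.exp(-alpha * lam * t) * np.sin(n * np.pi * x / L)
    return u
```

The rapid decay of both b_n (as 1/n³) and the exponential factors is what gives the strong convergence; for non-Fourier (hyperbolic) heat equations the corresponding expansion loses this property, which motivates the chapter's mixed approach.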
10

Das, Abhishek, and Mihir Narayan Mohanty. "An Useful Review on Optical Character Recognition for Smart Era Generation." In Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality, 1–41. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4703-8.ch001.

Abstract:
In this chapter, the authors review optical character recognition (OCR). The study covers both typed and handwritten character recognition. Online and offline character recognition, the two modes of data acquisition in the field of OCR, are also studied. As deep learning is the emerging machine learning method in the field of image processing, the authors describe the method and its application in earlier works. From the study of the recurrent neural network (RNN), a special class of deep neural network is proposed for the recognition purpose. Further, a convolutional neural network (CNN) is combined with the RNN to check its performance. For this piece of work, Odia numerals and characters are taken as input and well recognized. The efficacy of the proposed method is explained in the result section.
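The CNN-plus-RNN flow the chapter describes can be caricatured at the shape level. In this toy forward pass, a crude pooling stage stands in for the CNN, each pooled image column becomes one RNN time step, and all weights are random and untrained; the function name and sizes are our assumptions, not the chapter's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_rnn_ocr_forward(image, n_classes=10, hidden=32):
    """Shape-level sketch of a CNN+RNN recognizer: pool the image into
    column features, read them recurrently left to right, and map the
    final hidden state to softmax class scores."""
    h, w = image.shape
    pooled = image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # "CNN" stage
    feats = pooled.T                                   # (time steps, features)
    Wx = rng.standard_normal((feats.shape[1], hidden)) * 0.1
    Wh = rng.standard_normal((hidden, hidden)) * 0.1
    Wo = rng.standard_normal((hidden, n_classes)) * 0.1
    s = np.zeros(hidden)
    for x_t in feats:                                  # recurrent read-out
        s = np.tanh(x_t @ Wx + s @ Wh)
    logits = s @ Wo
    scores = np.exp(logits - logits.max())             # stable softmax
    return scores / scores.sum()
```

Reading columns as a sequence is what lets the recurrent part handle variable-width character strings, which is the main reason such hybrids are preferred over a plain CNN classifier in OCR.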

Conference papers on the topic "Combined method of processing incomplete data"

1

Cheng, Zhong, Rongqiang Xu, Jianbing Chen, Ning Li, Xiaolong Yu, Xiangxiang Ding, and Jie Cao. "Rapid Development of Multi-Source Heterogeneous Drilling Data Service System." In SPE/IADC Middle East Drilling Technology Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/202199-ms.

Abstract:
The digital oil and gas field is a highly complex integrated information system, and with the continuous expansion of business scale and needs, oil companies constantly raise new and higher requirements for digital transformation. In previous system construction, we adopted multiple phases, vendors, technologies and methods, resulting in the problem of data silos and fragmentation. The result of these data management problems is that decisions are often made using incomplete information. Even when the desired data are accessible, the requirements for gathering and formatting them may limit the amount of analysis performed before a timely decision must be made. Therefore, building an integrated data platform through advanced computer technologies such as big data, cloud computing and the IOT (internet of things), and providing unified data services to improve the company's bottom line, has become our current goal. As part of the digital oilfield, offshore drilling operations are one of the potential areas where data processing and advanced analytics can be used to increase revenue, lower costs and reduce risks. Building a data mining and analytics engine that uses multiple drilling data sources is a difficult challenge. The workflow of data processing and the timeliness of the analysis are major considerations in developing a data service solution. Most current analytical engines require more than one tool to form a complete system. Therefore, adopting an integrated system that combines all required tools will significantly help an organization address the above challenges in a timely manner. This paper provides a technical overview of the offshore drilling data service system currently developed and deployed. The data service system consists of four subsystems:
the static data management system, covering structured data (job reports) and unstructured data (design documentation and research reports); the real-time data management system; the third-party software data management system, integrating major industry software databases; and the cloud-based data visualization and application system, providing dynamic analysis results to achieve timely optimization of operations. Through a unified logical data model, the system realizes quick access to third-party software data and application support. These subsystems are fully integrated and interact with each other as microservices, providing a one-stop solution for real-time drilling optimization and monitoring. This data service system has become a powerful decision support tool for the drilling operations team. The lessons learned and experience gained from the system services presented here provide valuable guidance for future E&P demands and the industrial revolution.
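The "unified logical data model" that ties the four subsystems together can be sketched as a small facade over heterogeneous sources. All names and fields below are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class WellRecord:
    """Minimal unified record shared by all subsystems (hypothetical fields)."""
    well_id: str
    source: str
    payload: dict

class UnifiedDataService:
    """Each subsystem registers an adapter that maps its native format to
    WellRecord, so consumers query one facade instead of separate silos."""

    def __init__(self) -> None:
        self._adapters: Dict[str, Callable[[str], List[WellRecord]]] = {}

    def register(self, source: str,
                 adapter: Callable[[str], List[WellRecord]]) -> None:
        self._adapters[source] = adapter

    def query(self, well_id: str) -> List[WellRecord]:
        records: List[WellRecord] = []
        for adapter in self._adapters.values():
            records.extend(adapter(well_id))           # fan out to every source
        return records
```

The point of the facade is exactly the silo problem the abstract describes: each subsystem keeps its own storage, but decision-support consumers see one logical model.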
2

Soni, S., and I. Sharma. "An imputation-based method for fuzzy clustering of incomplete data." In 2017 International Conference on Communication and Signal Processing (ICCSP). IEEE, 2017. http://dx.doi.org/10.1109/iccsp.2017.8286431.

3

Fielden, John, Henry Y. Kwong, and Jacob Wilbrink. "Reconstructing MR images from incomplete Fourier data using the maximum entropy method." In Medical Imaging V: Image Processing, edited by Murray H. Loew. SPIE, 1991. http://dx.doi.org/10.1117/12.45212.

4

Yao, Hongbing, Xia Ye, Jun Zhou, Guilin Ding, and Anzhi He. "Research on sampling method for three-dimensional flow field reconstruction of incomplete data." In ICO20:Optical Information Processing, edited by Yunlong Sheng, Songlin Zhuang, and Yimo Zhang. SPIE, 2006. http://dx.doi.org/10.1117/12.668318.

5

Zheng, Songwang, Cao Chen, Lei Han, Xiaoyong Zhang, and Xiaojun Yan. "A Processing Method for Combined Fatigue Accelerated Test Data." In ASME Turbo Expo 2017: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gt2017-64414.

Abstract:
To carry out a combined low and high cycle fatigue (CCF) test on turbine blades in a bench environment, it is imperative to simulate the vibration loads of the turbine blades in the field. Due to the low vibration stress of turbine blades in the working state, the test time would be very long if the test vibration stress were equal to the real vibration stress in the working state. Therefore, an accelerated test is used when the test life reaches the target value (typically 10^7 cycles). During the accelerated test, each blade is tested at two or more times the real vibration stress; that means some specimens are tested under two vibration stress levels. In this case, a reasonable data processing method becomes very important. For this reason, a data processing method for the CCF accelerated test is proposed in this paper. The test data are iterated on the basis of the S-N curve. Finally, ten real turbine blades are tested in a bench environment, one of them under two vibration stress levels. The test data are processed using the proposed method to obtain the unaccelerated life data.
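The conversion such a processing method must perform, mapping a life measured at elevated vibration stress back to the working stress, can be illustrated with a Basquin-type S-N relation. The function name and the exponent value are our assumptions, not the paper's fitted parameters:

```python
def equivalent_life(n_test, s_test, s_work, m=8.0):
    """Map a fatigue life measured at elevated vibration stress back to the
    working stress via a Basquin-type S-N relation, N * S**m = const.
    The exponent m is material-dependent."""
    return n_test * (s_test / s_work) ** m

# Doubling the vibration stress with m = 8 shortens the required test
# duration by a factor of 2**8 = 256.
n_work = equivalent_life(1e5, 2.0, 1.0)
```

This shows why acceleration is attractive: a bench test of 10^5 cycles at doubled stress stands in for a working-stress life in the 10^7 regime, at the cost of needing a careful iteration on the S-N curve when one specimen sees two stress levels.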
6

Li, Jun-wu, and Zhi-fu Yu. "A method of fusion recognition based on the characteristic of target and incomplete data." In 2012 11th International Conference on Signal Processing (ICSP 2012). IEEE, 2012. http://dx.doi.org/10.1109/icosp.2012.6491932.

7

Secmen, Mustafa, Evren Ekmekci, and Gonul Turhan-Sayan. "Electromagnetic Target Recognition with the Fused MUSIC Spectrum Matrix Method: Applications and Performance Analysis for Incomplete Frequency Data." In 2007 IEEE 15th Signal Processing and Communications Applications. IEEE, 2007. http://dx.doi.org/10.1109/siu.2007.4298848.

8

Benkouider, Yasmine Kheira, Moussa Sofiane Karoui, Yannick De ville, and Shahram Hosseini. "A new multiplicative nonnegative matrix factorization method for unmixing hyperspectral images combined with multispectral data." In 2017 25th European Signal Processing Conference (EUSIPCO). IEEE, 2017. http://dx.doi.org/10.23919/eusipco.2017.8081254.

9

He, Z. Y., Z. G. Wang, W. Li, and J. Wu. "A Combined Task Analysis Method for Data Selection in Mandarin Isolated Word Recognition System." In 2008 6th International Symposium on Chinese Spoken Language Processing (ISCSLP). IEEE, 2008. http://dx.doi.org/10.1109/chinsl.2008.ecp.65.

10

Wang, QiongHua, WenJu He, and WeiDong Sun. "Combined contextual classification method for large scale land covering based on multi-resolution satellite data." In International Symposium on Multispectral Image Processing and Pattern Recognition, edited by Yongji Wang, Jun Li, Bangjun Lei, and Jingyu Yang. SPIE, 2007. http://dx.doi.org/10.1117/12.749490.


Reports on the topic "Combined method of processing incomplete data"

1

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data postprocessing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 
3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.
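The surface-normal idea adapted from Guldur et al. (2014) can be caricatured in a few lines: fit a plane to a point-cloud patch by PCA and report each point's signed offset along the normal. Real damage assessment adds segmentation, curved-surface handling and thresholds; the function name here is ours:

```python
import numpy as np

def normal_deviation(points):
    """Fit a plane to an (N, 3) point-cloud patch via PCA (SVD) and return
    each point's signed offset along the plane normal; large offsets flag
    candidate deformation."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                                    # least-variance direction
    return (points - centroid) @ normal                # signed deviations
```

On an undamaged flat patch the deviations vanish, while a locally deformed point stands out, which is the signal the automated assessment thresholds on.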