Journal articles on the topic "Combined method of processing incomplete data"

To see the other types of publications on this topic, follow the link: Combined method of processing incomplete data.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles.


Consult the top 50 journal articles for your research on the topic "Combined method of processing incomplete data."

Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen source in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online whenever the publication metadata provide these options.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Clabbers, Max T. B., Tim Gruene, James M. Parkhurst, Jan Pieter Abrahams, and David G. Waterman. "Electron diffraction data processing with DIALS." Acta Crystallographica Section D Structural Biology 74, no. 6 (May 30, 2018): 506–18. http://dx.doi.org/10.1107/s2059798318007726.

Abstract:
Electron diffraction is a relatively novel alternative to X-ray crystallography for the structure determination of macromolecules from three-dimensional nanometre-sized crystals. The continuous-rotation method of data collection has been adapted for the electron microscope. However, there are important differences in geometry that must be considered for successful data integration. The wavelength of electrons in a TEM is typically around 40 times shorter than that of X-rays, implying a nearly flat Ewald sphere, and consequently low diffraction angles and a high effective sample-to-detector distance. Nevertheless, the DIALS software package can, with specific adaptations, successfully process continuous-rotation electron diffraction data. Pathologies encountered specifically in electron diffraction make data integration more challenging. Errors can arise from instrumentation, such as beam drift or distorted diffraction patterns from lens imperfections. The diffraction geometry brings additional challenges such as strong correlation between lattice parameters and detector distance. These issues are compounded if calibration is incomplete, leading to uncertainty in experimental geometry, such as the effective detector distance and the rotation rate or direction. Dynamic scattering, absorption, radiation damage and incomplete wedges of data are additional factors that complicate data processing. Here, recent features of DIALS as adapted to electron diffraction processing are shown, including diagnostics for problematic diffraction geometry refinement, refinement of a smoothly varying beam model and corrections for distorted diffraction images. These novel features, combined with the existing tools in DIALS, make data integration and refinement feasible for electron crystallography, even in difficult cases.
2

Kolosok, Irina, and Liudmila Gurina. "A bad data detection approach to EPS state estimation based on fuzzy sets and wavelet analysis." E3S Web of Conferences 216 (2020): 01029. http://dx.doi.org/10.1051/e3sconf/202021601029.

Abstract:
The paper offers an algorithm for the detection of erroneous measurements (bad data) that occur during cyberattacks against systems for data acquisition, processing and transfer, and that cannot be detected by conventional methods of measurement validation at EPS state estimation. The combined application of wavelet analysis and the theory of fuzzy sets when processing the SCADA and WAMS measurements produces higher accuracy of the estimates obtained under incomplete and uncertain data and demonstrates the efficiency of the proposed approach for practical computations in simulated cyberattacks.
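For readers who want the core idea in runnable form: below is a toy sketch (not the authors' SCADA/WAMS pipeline) in which a one-level Haar wavelet detail flags abrupt jumps in a measurement series and a fuzzy ramp membership grades how suspect each sample pair is. The ramp shape, the MAD-based threshold, and all constants are illustrative assumptions.

```python
import numpy as np

def haar_detail(x):
    """One-level Haar detail coefficients; d[i] covers samples 2i, 2i+1."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def suspect_membership(x, spread=3.0):
    """Fuzzy grade in [0, 1] of each sample pair being bad data."""
    d = np.abs(haar_detail(x))
    mad = np.median(np.abs(d - np.median(d))) + 1e-12  # robust scale
    z = d / (1.4826 * mad)                             # ~z-score of details
    return np.clip((z - spread) / spread, 0.0, 1.0)    # ramp membership

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
meas = np.sin(2 * np.pi * 3 * t) + 0.02 * rng.standard_normal(t.size)
meas[100] += 2.5                      # injected spike ("cyberattack" sample)
mu = suspect_membership(meas)
print("most suspect pair starts at sample", 2 * int(np.argmax(mu)))
```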
3

Liang, P., G. Q. Zhou, Y. L. Lu, X. Zhou, and B. Song. "HOLE-FILLING ALGORITHM FOR AIRBORNE LIDAR POINT CLOUD DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (February 8, 2020): 739–45. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-739-2020.

Abstract:
Abstract. Due to the occlusion of objects or the complexity of the measured terrain in the scanning process of airborne lidar, holes inevitably appear in the point cloud data after filtering and other processing. The incomplete data inevitably affect the quality of the reconstructed digital elevation model, so repairing the incomplete point cloud data has become an urgent problem. To solve the problem of hole repair in point cloud data, a hole repair algorithm based on an improved moving least squares method is proposed in this paper, building on a study of existing hole repair algorithms. Firstly, the algorithm extracts the boundary of the point cloud based on the triangular mesh model. Then we use k-nearest neighbor search to obtain the k-nearest neighbor points of each boundary point. Finally, according to the boundary points and their k-nearest neighbors, the improved moving least squares method is used to fit the hole surface and thereby repair the hole. Implemented in C++ and MATLAB, the feasibility of the algorithm is tested on specific application examples. The experimental results show that the algorithm can effectively repair holes in point cloud data with high precision: the filled hole area connects smoothly with the boundary.
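The k-nearest-neighbour plus moving-least-squares fitting step described above can be sketched compactly. In the illustration below, boundary extraction from the triangular mesh is omitted, and the quadratic basis, Gaussian weights, and bandwidth h are assumptions, not the paper's choices; it fills a query position from the k nearest boundary points.

```python
import numpy as np
from scipy.spatial import cKDTree

def mls_fill(boundary_pts, query_xy, k=12, h=0.5):
    """Fit z = f(x, y) near each query point with weighted least squares
    over its k nearest boundary points (a basic moving-least-squares step)."""
    tree = cKDTree(boundary_pts[:, :2])
    filled = []
    for q in np.atleast_2d(query_xy):
        dist, idx = tree.query(q, k=k)
        nb = boundary_pts[idx]
        x, y, z = nb[:, 0] - q[0], nb[:, 1] - q[1], nb[:, 2]
        A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        w = np.exp(-(dist / h) ** 2)            # Gaussian MLS weights
        sw = np.sqrt(w)
        c, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
        filled.append([q[0], q[1], c[0]])       # f at q = constant term
    return np.array(filled)

# toy demo: points on a paraboloid with a hole punched around the origin
rng = np.random.default_rng(1)
xy = rng.uniform(-2, 2, (400, 2))
xy = xy[np.hypot(xy[:, 0], xy[:, 1]) > 0.6]
pts = np.column_stack([xy, 0.25 * (xy[:, 0] ** 2 + xy[:, 1] ** 2)])
print(mls_fill(pts, np.array([[0.0, 0.0]])))    # ~[0, 0, 0] expected
```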
4

Kang, C. L., T. N. Lu, M. M. Zong, F. Wang, and Y. Cheng. "POINT CLOUD SMOOTH SAMPLING AND SURFACE RECONSTRUCTION BASED ON MOVING LEAST SQUARES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (February 7, 2020): 145–51. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-145-2020.

Abstract:
Abstract. In point cloud data processing, smooth sampling and surface reconstruction are important stages. Current point cloud sampling methods suffer from a non-uniform point distribution, incomplete feature information, and reconstructed model surfaces that are not smooth. This paper proposes a method for smooth sampling and surface reconstruction of point clouds using the moving least squares method. It first reviews the traditional moving least squares method in detail and then proposes an improved moving least squares method for point cloud smooth sampling and surface reconstruction. The algorithm is implemented in C++ with the Point Cloud Library (PCL); voxel grid sampling and uniform sampling are compared with moving least squares smooth sampling, after which a greedy triangulation algorithm performs surface reconstruction. The experimental results show that the improved moving least squares method samples the point cloud more uniformly than voxel grid sampling and makes the feature information more prominent. The surface reconstructed from moving-least-squares-sampled data is smooth, whereas the surfaces reconstructed from voxel-grid-sampled and uniformly sampled data are rough, with coarse triangular facets. Point cloud smooth sampling and surface reconstruction based on the moving least squares method thus better preserve point cloud feature information and model smoothness. The superiority and effectiveness of the method are demonstrated, providing a reference for subsequent studies of point cloud sampling and surface reconstruction.
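As a point of comparison for the sampling experiments described above, here is a dependency-free sketch of the voxel-grid baseline (one centroid per occupied voxel). It is a generic illustration, not PCL's own VoxelGrid filter.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Replace all points falling in the same voxel by their centroid."""
    ijk = np.floor(points / voxel).astype(np.int64)          # voxel indices
    _, inv, counts = np.unique(ijk, axis=0, return_inverse=True,
                               return_counts=True)
    inv = inv.ravel()
    centroids = np.zeros((counts.size, points.shape[1]))
    np.add.at(centroids, inv, points)                        # sum per voxel
    return centroids / counts[:, None]                       # -> mean

rng = np.random.default_rng(2)
cloud = rng.uniform(0, 1, (10_000, 3))
print(cloud.shape, "->", voxel_downsample(cloud, voxel=0.1).shape)
```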
5

Courtenay, Lloyd A., and Diego González-Aguilera. "Geometric Morphometric Data Augmentation Using Generative Computational Learning Algorithms." Applied Sciences 10, no. 24 (December 21, 2020): 9133. http://dx.doi.org/10.3390/app10249133.

Abstract:
The fossil record is notorious for being incomplete and distorted, frequently conditioning the type of knowledge that can be extracted from it. In many cases, this often leads to issues when performing complex statistical analyses, such as classification tasks, predictive modelling, and variance analyses, such as those used in Geometric Morphometrics. Here different Generative Adversarial Network architectures are experimented with, testing the effects of sample size and domain dimensionality on model performance. For model evaluation, robust statistical methods were used. Each of the algorithms were observed to produce realistic data. Generative Adversarial Networks using different loss functions produced multidimensional synthetic data significantly equivalent to the original training data. Conditional Generative Adversarial Networks were not as successful. The methods proposed are likely to reduce the impact of sample size and bias on a number of statistical learning applications. While Generative Adversarial Networks are not the solution to all sample-size related issues, combined with other pre-processing steps these limitations may be overcome. This presents a valuable means of augmenting geometric morphometric datasets for greater predictive visualization.
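To make the augmentation setup concrete, here is a bare-bones vanilla GAN training loop on stand-in tabular landmark data. The layer sizes, optimizer settings, and the random "real" data are placeholders; none of this reproduces the architectures or loss functions benchmarked in the paper.

```python
import torch
import torch.nn as nn

d_feat, d_noise = 24, 16                      # e.g. 12 2-D landmarks, flattened
G = nn.Sequential(nn.Linear(d_noise, 64), nn.ReLU(), nn.Linear(64, d_feat))
D = nn.Sequential(nn.Linear(d_feat, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, d_feat)          # stand-in for real shape data
for step in range(200):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, d_noise))
    # discriminator step: push real -> 1, fake -> 0
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step: try to fool the discriminator
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

synthetic = G(torch.randn(100, d_noise)).detach()   # augmented sample
print(synthetic.shape)
```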
6

Ibric, Svetlana, Zorica Djuric, Jelena Parojcic, and Jelena Petrovic. "Artificial intelligence in pharmaceutical product formulation: Neural computing." Chemical Industry and Chemical Engineering Quarterly 15, no. 4 (2009): 227–36. http://dx.doi.org/10.2298/ciceq0904227i.

Abstract:
The properties of a formulation are determined not only by the ratios in which the ingredients are combined but also by the processing conditions. Although the relationships between the ingredient levels, processing conditions, and product performance may be known anecdotally, they can rarely be quantified. In the past, formulators tended to use statistical techniques to model their formulations, relying on response surfaces to provide a mechanism for optimization. However, the optimization by such a method can be misleading, especially if the formulation is complex. More recently, advances in mathematics and computer science have led to the development of alternative modeling and data mining techniques which work with a wider range of data sources: neural networks (an attempt to mimic the processing of the human brain); genetic algorithms (an attempt to mimic the evolutionary process by which biological systems self-organize and adapt), and fuzzy logic (an attempt to mimic the ability of the human brain to draw conclusions and generate responses based on incomplete or imprecise information). In this review the current technology will be examined, as well as its application in pharmaceutical formulation and processing. The challenges, benefits and future possibilities of neural computing will be discussed. This article has been retracted; see the retraction notice at http://dx.doi.org/10.2298/CICEQ1104000E.
7

Wu, Haopeng, Zhiying Lu, Jianfeng Zhang, Xin Li, Mingyue Zhao, and Xudong Ding. "Facial Expression Recognition Based on Multi-Features Cooperative Deep Convolutional Network." Applied Sciences 11, no. 4 (February 4, 2021): 1428. http://dx.doi.org/10.3390/app11041428.

Abstract:
This paper addresses the problem of Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often suffer from overfitting or incomplete information due to insufficient data and manual feature selection. Instead, our proposed network, called the Multi-features Cooperative Deep Convolutional Network (MC-DCN), maintains focus on the overall features of the face and the trend of its key parts. The processing of video data is the first stage: the ensemble of regression trees (ERT) method is used to obtain the overall contour of the face, and an attention model then picks out the parts of the face that are more susceptible to expressions. Under the combined effect of these two methods, an image that can be called a local feature map is obtained. After that, the video data are fed into MC-DCN, which contains parallel sub-networks. While the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts makes it easier to learn the changes in facial expressions brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. The experimental results show that MC-DCN can achieve recognition rates of 95%, 78.6% and 78.3% on the three datasets SAVEE, MMI, and edited GEMEP, respectively.
8

Shi, Haobin, Meng Xu, Kao-Shing Hwang, and Chia-Hung Hung. "A posture measurement approach for an articulated manipulator by RGB-D cameras." International Journal of Advanced Robotic Systems 16, no. 2 (March 1, 2019): 172988141983813. http://dx.doi.org/10.1177/1729881419838130.

Abstract:
The objective of this article is the safety problem that arises where robots and operators are highly coupled in a shared working space. A method to model an articulated robot manipulator with cylindrical geometries based on partial cloud points is proposed. Firstly, images with point cloud data containing the posture of a robot with five revolute links are captured by a pair of RGB-D cameras. Secondly, point cloud clustering and Gaussian noise filtering are applied to the images to separate the point cloud data of three links from the combined images. Thirdly, an ideal cylindrical model is fitted to the processed point cloud data, segmented by the random sample consensus method, so that the three joint angles corresponding to the three major links can be computed. The original method for calculating the normal vector of the point cloud data is the cylindrical model segmentation method, but its posture measurement accuracy is low when the point cloud data are incomplete. To solve this problem, a principal axis compensation method is proposed that is not affected by the number of points in a cluster. The original method and the proposed method are used to estimate the three joint angles of the manipulator system in experiments. Experimental results show that, compared with the original method, the average error is reduced by 27.97% and the sample standard deviation of the error is reduced by 54.21%. The proposed method is 0.971 images per second slower than the original method in terms of image processing speed, but it remains feasible, and the purpose of posture measurement is achieved.
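The "principal axis" of a link's point cluster can be estimated with a plain PCA step, which is one common reading of the compensation idea above. The sketch below is a generic illustration on synthetic clusters, not the paper's procedure.

```python
import numpy as np

def principal_axis(points):
    """Unit direction of the dominant singular vector of the centered cluster."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def joint_angle(axis_a, axis_b):
    """Angle (deg) between two link axes, sign ignored."""
    c = abs(np.dot(axis_a, axis_b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

rng = np.random.default_rng(3)
t = rng.uniform(0, 1, (500, 1))
link1 = t * np.array([1.0, 0.0, 0.0]) + 0.01 * rng.standard_normal((500, 3))
link2 = (t * np.array([np.cos(0.6), np.sin(0.6), 0.0])
         + 0.01 * rng.standard_normal((500, 3)))
print(joint_angle(principal_axis(link1), principal_axis(link2)))  # ~34.4 deg
```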
9

Kryzhanivskyy, Ye I., V. Ya Grudz, V. Ya Grudz (jun.), R. B. Tereshchenko, and R. M. Govdjak. "OPTIMIZING THE MODES OF COMPRESSOR STATIONS UNDER THE CONDITIONS OF THEIR PARTIAL LOAD." Oil and Gas Power Engineering, no. 1(31) (June 26, 2019): 36–42. http://dx.doi.org/10.31471/1993-9868-2019-1(31)-36-42.

Abstract:
The principle of constructing stochastic and deterministic mathematical models for predicting the operating modes of gas pumping units at compressor stations of main gas pipelines, and for optimizing them under incomplete loading of the gas transmission system, is proposed. Calculation of the parameters of the technological operating mode of the compressor stations is based on the combined characteristics of centrifugal superchargers, whose processing by means of mathematical statistics allowed analytical expressions for the characteristic models to be obtained. The calculation method can be applied to forecasting the operating modes of a compressor station under both single-stage and multi-stage gas compression. The source data for the calculation comprise the upstream and downstream pressure and temperature of the gas, the pumped volume, and the physical properties of the gas. To predict the operating modes of compressor stations, the load is optimally distributed between workshops equipped with gas pumping units of several types, provided that the energy consumed to compress the given volume of gas under known boundary conditions is reduced. The principle of constructing an optimality criterion for the conditions of incomplete loading of the gas transmission system is shown. To illustrate the proposed method of optimizing the operating modes of compressor stations under incomplete loading, an example is given of calculating the optimal operating modes of a hypothetical two-shop compressor station with various types of units. The calculation is based on analytical mathematical models of their characteristics, using the criterion of minimum power for gas compression at a given output and given upstream and downstream pressures at the station.
10

Wei, Xu, Jun Yang, Mingjiu Lv, Wenfeng Chen, Xiaoyan Ma, Saiqiang Xia, and Ming Long. "Removal of Micro-Doppler Effects in ISAR Imaging Based on the Joint Processing of Singular Value Decomposition and Complex Variational Mode Extraction." Mathematical Problems in Engineering 2022 (April 27, 2022): 1–15. http://dx.doi.org/10.1155/2022/6141278.

Abstract:
For inverse synthetic aperture radar (ISAR) imaging of targets with micromotion parts, the removal of micro-Doppler (m-D) effects is the key procedure. However, under the condition of a sparse aperture, the echo pulse is limited or incomplete, giving rise to the difficulty of eliminating m-D effects. Thus, a novel m-D effects removal algorithm is proposed, which can effectively eliminate m-D effects, as well as the interference introduced by noise and sparse aperture in the ISAR image of the main body. The proposed algorithm mainly includes two processing steps. First, combined with the cut-off point determined by the normalized singular value difference spectrum, the rough estimation of the main body is achieved by singular value decomposition (SVD). Then, the variational mode extraction (VME) is extended to complex variational mode extraction (CVME). The constrained variational problem constructed by bandwidth and spectral overlap constraints is solved by the alternating direction method of multipliers (ADMM), and the precise estimation of the main body is obtained. Experimental results based on both simulated and measured data demonstrate that the proposed algorithm can acquire the high-resolution ISAR image of the main body under noisy and sparse conditions.
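The SVD stage of this pipeline can be shown in miniature: pick the cut-off at the largest drop of the normalized singular-value difference spectrum and keep the low-rank part as the rough main-body estimate. The CVME/ADMM refinement is beyond a short sketch, and the demo data are synthetic.

```python
import numpy as np

def svd_body_estimate(echo):
    """Rough main-body estimate: truncate at the largest normalized
    singular-value gap (difference spectrum), keep the low-rank part."""
    u, s, vt = np.linalg.svd(echo, full_matrices=False)
    diff = (s[:-1] - s[1:]) / s[0]            # normalized difference spectrum
    r = int(np.argmax(diff)) + 1              # cut off after the biggest drop
    return (u[:, :r] * s[:r]) @ vt[:r], r

rng = np.random.default_rng(4)
body = np.outer(rng.standard_normal(64), rng.standard_normal(128))  # rank-1 body
micro = 0.05 * rng.standard_normal((64, 128))                       # m-D + noise
est, rank = svd_body_estimate(body + micro)
print("chosen rank:", rank, "residual:", np.linalg.norm(est - body))
```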
11

Zhou, Sujing. "Numerical Analysis of Digital Twin System Modeling Methods Aided by Graph-Theoretic Combinatorial Optimization." Discrete Dynamics in Nature and Society 2022 (February 2, 2022): 1–13. http://dx.doi.org/10.1155/2022/8598041.

Abstract:
This paper applies a digital twin system modeling method to an in-depth study and analysis of graph-theoretic combinatorial optimization. By studying a digital twin modeling method that integrates digital modeling with graph-theoretic combination, it offers new ideas and approaches for numerical optimization analysis and provides theoretical support for the safe, stable, and economical operation of the system. It proposes a digital twin model solution based on a big data platform, focusing on nearest-neighbor affinity propagation (AP) clustering combined with graph theory to address the asynchronous and incomplete nature of real-time digital twin monitoring data; the algorithm is applied to data preprocessing in the big-data-based digital twin model with good results. The paper also presents a web-based digital twin system driven by practical needs: after analyzing and comparing existing models and combining them with digital twin technology, it details the differences and connections between the various levels of numerical analysis and their implementation in areas such as user management, equipment health management, product quality management, and 3D workshop navigation, and builds a detailed digital twin system model on this basis to realize remote online monitoring, analysis, and management. For the numerical analysis process, the key technologies of production line modeling and simulation operation control based on the digital twin are studied first, and a rapid-response manufacturing system based on a digital twin is designed and validated. Secondly, a scheduling technology framework for capacity simulation evaluation and optimization is established; batching optimization, outsourcing decision, and rolling scheduling techniques are proposed, forming a priority-rule-based batching optimization algorithm that realizes batch processing, outsourcing decisions, and rolling scheduling of production orders to optimize equipment utilization and capacity. Finally, digital twin-based modeling is designed, and the validation results demonstrate the system's superior performance in achieving information interaction between physical and virtual production lines, optimizing numerical analysis, and displaying results.
12

Yang, Yutu, Xiaolin Zhou, Ying Liu, Zhongkang Hu, and Fenglong Ding. "Wood Defect Detection Based on Depth Extreme Learning Machine." Applied Sciences 10, no. 21 (October 24, 2020): 7488. http://dx.doi.org/10.3390/app10217488.

Abstract:
The deep learning feature extraction method and extreme learning machine (ELM) classification method are combined to establish a depth extreme learning machine model for wood image defect detection. The convolution neural network (CNN) algorithm alone tends to provide inaccurate defect locations, incomplete defect contour and boundary information, and inaccurate recognition of defect types. The nonsubsampled shearlet transform (NSST) is used here to preprocess the wood images, which reduces the complexity and computation of the image processing. CNN is then applied to manage the deep algorithm design of the wood images. The simple linear iterative clustering algorithm is used to improve the initial model; the obtained image features are used as ELM classification inputs. ELM has faster training speed and stronger generalization ability than other similar neural networks, but the random selection of input weights and thresholds degrades the classification accuracy. A genetic algorithm is used here to optimize the initial parameters of the ELM to stabilize the network classification performance. The depth extreme learning machine can extract high-level abstract information from the data, does not require iterative adjustment of the network weights, has high calculation efficiency, and allows CNN to effectively extract the wood defect contour. The distributed input data feature is automatically expressed in layer form by deep learning pre-training. The wood defect recognition accuracy reached 96.72% in a test time of only 187 ms.
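The extreme learning machine itself is only a few lines: random hidden weights, one nonlinearity, and a closed-form pseudo-inverse solve for the output weights. A generic sketch follows; the NSST preprocessing, CNN feature extraction, and genetic-algorithm tuning from the paper are omitted.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=128, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # random hidden layer
        T = np.eye(n_classes)[y]                    # one-hot targets
        self.beta = np.linalg.pinv(H) @ T           # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)             # toy "defect / no defect"
print("train acc:", (ELM(64).fit(X, y).predict(X) == y).mean())
```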
13

Ruan, Yuexiang. "Design of Intelligent Recognition English Translation Model Based on Deep Learning." Journal of Mathematics 2022 (February 18, 2022): 1–10. http://dx.doi.org/10.1155/2022/5029770.

Abstract:
Nowadays, intercommunication and translation among global languages has become an indispensable condition for friendly communication among human beings around the world. Advances in computer technology have carried machine translation from academic research to industrial application, and deep learning, a new and popular branch of machine learning, has achieved excellent results in research fields such as natural language processing. This paper improves the performance of machine translation based on deep learning networks and studies intelligent recognition in English-Chinese machine translation models. The research mainly focuses on solving the out-of-vocabulary (OOV) problem of machine translation for unregistered and rare words. It combines stemming technology with the data compression algorithm Byte Pair Encoding (BPE) and proposes a different subword-based word sequence segmentation method. Using this method, the English text is segmented into word sequences composed of subword units, while the Chinese text is segmented into character sequences composed of Chinese characters using a unigram model. Secondly, the work also prevents the decoder from producing incomplete translations by adopting a deep attention mechanism that improves the decoder's ability to obtain context information. Inspired by the traditional attention calculation process, the improved attention uses a two-layer calculation structure to focus on the connection between the context vectors at different decoder time steps. Based on the neural machine translation model Google Neural Machine Translation (GNMT), the paper conducts experimental analysis of the above improvements on three datasets of different scales. Experimental results verify that the improved method can solve the OOV problem and improve the accuracy of model translation.
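The BPE step the paper builds on has a well-known textbook form (Sennrich-style pair merging). Here is a tiny self-contained version on a toy word-frequency table; the corpus and merge count are illustrative only.

```python
from collections import Counter

def bpe_merges(word_freqs, n_merges=10):
    """Learn byte-pair-encoding merges from a {word: frequency} table."""
    vocab = {tuple(w) + ("</w>",): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for sym, f in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += f
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = {}
        for sym, f in vocab.items():                 # apply the merge in place
            out, i = [], 0
            while i < len(sym):
                if i + 1 < len(sym) and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1]); i += 2
                else:
                    out.append(sym[i]); i += 1
            new_vocab[tuple(out)] = f
        vocab = new_vocab
    return merges

print(bpe_merges({"lower": 5, "low": 7, "newest": 6, "widest": 3}, 6))
```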
14

Bartlett, Christopher W., Brett G. Klamer, Steven Buyske, Stephen A. Petrill, and William C. Ray. "Forming Big Datasets through Latent Class Concatenation of Imperfectly Matched Databases Features." Genes 10, no. 9 (September 19, 2019): 727. http://dx.doi.org/10.3390/genes10090727.

Abstract:
Informatics researchers often need to combine data from many different sources to increase statistical power and study subtle or complicated effects. Perfect overlap of measurements across academic studies is rare since virtually every dataset is collected for a unique purpose and without coordination across parties not-at-hand (i.e., informatics researchers in the future). Thus, incomplete concordance of measurements across datasets poses a major challenge for researchers seeking to combine public databases. In any given field, some measurements are fairly standard, but every organization collecting data makes unique decisions on instruments, protocols, and methods of processing the data. This typically denies literal concatenation of the raw data since constituent cohorts do not have the same measurements (i.e., columns of data). When measurements across datasets are similar prima facie, there is a desire to combine the data to increase power, but mixing non-identical measurements could greatly reduce the sensitivity of the downstream analysis. Here, we discuss a statistical method that is applicable when certain patterns of missing data are found; namely, it is possible to combine datasets that measure the same underlying constructs (or latent traits) when there is only partial overlap of measurements across the constituent datasets. Our method, ROSETTA empirically derives a set of common latent trait metrics for each related measurement domain using a novel variation of factor analysis to ensure equivalence across the constituent datasets. The advantage of combining datasets this way is the simplicity, statistical power, and modeling flexibility of a single joint analysis of all the data. Three simulation studies show the performance of ROSETTA on datasets with only partially overlapping measurements (i.e., systematically missing information), benchmarked to a condition of perfectly overlapped data (i.e., full information). The first study examined a range of correlations, while the second study was modeled after the observed correlations in a well-characterized clinical, behavioral cohort. Both studies consistently show significant correlations >0.94, often >0.96, indicating the robustness of the method and validating the general approach. The third study varied within and between domain correlations and compared ROSETTA to multiple imputation and meta-analysis as two commonly used methods that ostensibly solve the same data integration problem. We provide one alternative to meta-analysis and multiple imputation by developing a method that statistically equates similar but distinct manifest metrics into a set of empirically derived metrics that can be used for analysis across all datasets.
15

Lin, Tim T., and Felix J. Herrmann. "Compressed wavefield extrapolation." GEOPHYSICS 72, no. 5 (September 2007): SM77—SM93. http://dx.doi.org/10.1190/1.2750716.

Abstract:
An explicit algorithm for the extrapolation of one-way wavefields is proposed that combines recent developments in information theory and theoretical signal processing with the physics of wave propagation. Because of excessive memory requirements, explicit formulations for wave propagation have proven to be a challenge in 3D. By using ideas from compressed sensing, we are able to formulate the (inverse) wavefield extrapolation problem on small subsets of the data volume, thereby reducing the size of the operators. Compressed sensing entails a new paradigm for signal recovery that provides conditions under which signals can be recovered from incomplete samplings by nonlinear recovery methods that promote sparsity of the to-be-recovered signal. According to this theory, signals can be successfully recovered when the measurement basis is incoherent with the representation in which the wavefield is sparse. In this new approach, the eigenfunctions of the Helmholtz operator are recognized as a basis that is incoherent with curvelets that are known to compress seismic wavefields. By casting the wavefield extrapolation problem in this framework, wavefields can be successfully extrapolated in the modal domain, despite evanescent wave modes. The degree to which the wavefield can be recovered depends on the number of missing (evanescent) wavemodes and on the complexity of the wavefield. A proof of principle for the compressed sensing method is given for inverse wavefield extrapolation in 2D, together with a pathway to 3D during which the multiscale and multiangular properties of curvelets, in relation to the Helmholtz operator, are exploited. The results show that our method is stable, has reduced dip limitations, and handles evanescent waves in inverse extrapolation.
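The sparsity-promoting recovery that compressed sensing rests on can be illustrated with plain iterative soft thresholding (ISTA) on a random Gaussian system; the curvelet/Helmholtz setting of the paper is replaced here by a toy measurement matrix, and the regularization weight is an arbitrary demo choice.

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min ||Ax - b||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(6)
n, m, k = 200, 80, 5                          # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # incoherent measurement matrix
x_hat = ista(A, A @ x_true, lam=0.01)
print("approx. support recovered:", np.nonzero(np.abs(x_hat) > 0.05)[0])
```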
16

Бурлака, О. А., С. В. Яхін та В. В. Дудник. "Експериментальні дослідження процесу транспортування зерна елеватором зернозбирального комбайну". Вісник Полтавської державної аграрної академії, № 1 (29 березня 2019): 232–40. http://dx.doi.org/10.31210/visnyk2019.01.28.

Abstract:
The purpose of this study is to substantiate a method of monitoring the parameters of the grain stream flow of various crops in the scraper elevator of a grain combine harvester using an improved capacitive-wave type sensor. Methods of the research. The primary data of the experimental studies, obtained with the help of a specially developed device, have been processed using mathematical and statistical analysis methods. The research results. The quality of grain transportation by the scraper elevators used on combine harvesters and in the transportation lines of grain processing enterprises does not meet modern requirements: there is incomplete unloading and grain crushing by the scrapers, chain and sprockets of the elevator, and the operating conditions of the combine grain transportation system while threshing low-yield crops with a significant share of the non-cereal part in the yield are not taken into account. Studying the characteristics of grain flow during transportation and unloading by the scraper elevator, we improved and used an experimental installation additionally equipped with a system for monitoring the intensity of transportation. In the course of the experimental studies, the flow intensity was studied as a function of the elevator throughput and the transportation speed, and multiple linear regression was used to describe this dependence for each type of crop. According to the results of the experiments, the intensity of the grain flow can be considered as the degree of loading of the scraper elevator relative to the nominal value. In the transport systems of grain processing machines, this intensity is almost always nominal, whereas on grain combine harvesters partial incomplete loading is observed due to diverse harvesting conditions. The elements of scientific novelty. The experimental installation designed, manufactured and used in our research is protected, as an object of intellectual property, by patents for inventions and utility models; the method of studying the technological processes of grain transportation by the elevator also has patent protection, which confirms the scientific novelty of this publication. Practical importance. The results of the research have been used to improve the grain transportation line of the KZS-9-1 "Slavutych" combine harvester and are significant for the further optimization of grain transportation modes with similar scraper elevators.
17

Iwao, Takanori. "A New Method for CTD Data Processing. Temperature-Conductivity Combined Method." Oceanography in Japan 10, no. 4 (2001): 309–21. http://dx.doi.org/10.5928/kaiyou.10.309.

18

Wang, Jin Peng, Bao Xiang Liu, Zhen Dong Li, and Li Chao Feng. "Incomplete Concept Lattice Data Analytical Method Research Based on Rough Set Theory." Applied Mechanics and Materials 50-51 (February 2011): 180–84. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.180.

Abstract:
Concept lattices and rough sets are powerful tools for data analysis and processing and have been successfully applied in many fields. However, the decision information in many information systems is incomplete. In this paper, the definition of an incomplete concept lattice is proposed, and several relations are established between the incomplete concept lattice and rough sets. Importantly, the paper gives a new attribute reduction algorithm for incomplete concept lattices that addresses the inefficiency of reduction strategies based on the discernibility matrix. Compared with discernibility-matrix-based attribute reduction for incomplete concept lattices, this reduction algorithm lowers the spatial-temporal complexity.
19

Zhao, Meng-hua, and Xiao-peng Chen. "A Combined Data Processing Method on Water Impact Force Measurement." Journal of Hydrodynamics 24, no. 5 (October 2012): 692–701. http://dx.doi.org/10.1016/s1001-6058(11)60293-x.

20

Stündl, László. "Complex assessment of inland water fish stocks." Acta Agraria Debreceniensis, no. 25 (April 11, 2007): 74–80. http://dx.doi.org/10.34101/actaagrar/25/3039.

Abstract:
In domestic fish production, natural waters have yielded about 7-8 thousand tons for several years. Considering the almost 130 thousand hectares of natural water, this output is rather low, approximately 55-60 kg/ha of mixed fish. Although various natural waters can differ significantly in their yields, the results were low on the majority of the territories. In the case of our extensive still waters and rivers, the reason can undoubtedly be found in the combined effect of over-fishing and the lack of opportunities for the fish stock to reproduce. Fishery built on planning presupposes the best possible knowledge, under the given circumstances, of the parameters of the water area and its fish stock. Lacking this knowledge, it is not possible to establish the optimal usefulness of the resources; what is more, the management can make faulty decisions - as a result of a lack of information - which can risk the success of later activities. It is known that many factors have an impact on the success of the fishery, and certain information about the water area and the fish stock is necessary to manage the fishery in a planned way. One part of this information is available, while the other part is incomplete or not deep enough. The necessary data differ in nature and can be obtained from different places by different methods. As the first step in executing the field surveys and processing the data, I developed a complex model which contains the steps of estimating the fish stock in a unified system, and I based the sampling on it. Part of the model is a fish faunistic survey, as well as a morphological survey of the water area. The information gained from these is important for refining the sampling setup for stock estimation (duration, number of net-rows) and for assigning its place (places best representing the physical characteristics of the given water area). The major stages of the stock survey are: A) faunistic survey, B) physical survey of the bed, and C) sampling with the help of gill-nets, followed by evaluation in the computer module. The results of the research create a methodological and technical background for the fish faunistic and population biology surveys still performed in different ways in our country; by applying these methods together, all basic information about natural waters which helps decision-making concerning fisheries can be obtained effectively.
21

Chen, Yutong, and Yongchuan Tang. "An Improved Approach of Incomplete Information Fusion and Its Application in Sensor Data-Based Fault Diagnosis." Mathematics 9, no. 11 (June 4, 2021): 1292. http://dx.doi.org/10.3390/math9111292.

Abstract:
The Dempster–Shafer evidence theory has been widely used in the field of data fusion. However, with further research, incomplete information under the open world assumption has been identified as a new type of uncertain information. The classical Dempster combination rules can hardly solve the related problems of incomplete information under the open world assumption, and some information entropies, such as the Deng entropy, are likewise not applicable under this assumption. Therefore, this paper proposes a new methodological framework to process uncertain information and fuse incomplete data. The method is based on an extension of the Deng entropy to the open world assumption, the negation of the basic probability assignment (BPA), and the generalized combination rule. The proposed method can solve the problem of incomplete information under the open world assumption and obtain more uncertain information through the negation processing of the BPA, which improves the accuracy of the results. The results of applying this method to fault diagnosis of an electronic rotor show that, compared with other uncertain information processing and fusion methods, the proposed method has wider adaptability and higher accuracy and is more suitable for practical engineering applications.
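For orientation, classical (closed-world) Dempster combination of two basic probability assignments is sketched below; the paper's generalized rule additionally allows mass on the empty set, which this sketch normalizes away as conflict.

```python
from itertools import product

def dempster(m1, m2):
    """Combine two BPAs given as {frozenset: mass} under the closed world."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                       # mass lost to conflict K
    if conflict >= 1.0:
        raise ValueError("total conflict, Dempster's rule undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.1, A | B: 0.3}                  # sensor 1
m2 = {A: 0.5, B: 0.2, A | B: 0.3}                  # sensor 2
print(dempster(m1, m2))
```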
22

Li, Yan, Hua Jun Liu, Guang Lang Bian, and Miao Hui Liu. "Aerocraft Real-Time Measured Data Processing Method Based on Combined Filtering." Applied Mechanics and Materials 716-717 (December 2014): 983–86. http://dx.doi.org/10.4028/www.scientific.net/amm.716-717.983.

Abstract:
To solve the problems that result from using a single filtering method alone to process real-time data measured on an aerocraft, a new method combining filters with a Savitzky-Golay smoothing filter is proposed for processing real-time measurement data; it classifies and segments the measured aerocraft trajectory data according to priority and time domain. The method provides a useful principle and control procedure for combined filters under different conditions to improve filtering efficiency, and the combined filtering results meet the accuracy requirements of aerocraft real-time data processing in different measured sections.
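The general flavour of such combined filtering can be shown with SciPy primitives: a median filter first knocks out isolated spikes, then Savitzky-Golay smoothing is applied segment by segment. The fixed two-segment split below stands in for the priority/time-domain segmentation of the paper.

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

rng = np.random.default_rng(7)
t = np.linspace(0, 4, 800)
x = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
x[rng.choice(t.size, 15, replace=False)] += 3.0     # telemetry spikes

despiked = medfilt(x, kernel_size=5)                # stage 1: outlier removal
smooth = np.empty_like(despiked)
for lo, hi in [(0, 400), (400, 800)]:               # stage 2: per-segment S-G
    smooth[lo:hi] = savgol_filter(despiked[lo:hi], window_length=31, polyorder=3)
print("rms error:", np.sqrt(np.mean((smooth - np.sin(2 * np.pi * t)) ** 2)))
```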
23

Semenov, K. K., and G. N. Solopchenko. "Combined method of metrological self-tracking of measurement data processing programs." Measurement Techniques 54, no. 4 (July 2011): 378–86. http://dx.doi.org/10.1007/s11018-011-9736-6.

24

Park, Mi-Ra, and Jun-Ki Min. "An Efficient Processing Method of Top-k(g) Skyline Group Queries for Incomplete Data." KIPS Transactions:PartD 17D, no. 1 (February 28, 2010): 17–24. http://dx.doi.org/10.3745/kipstd.2010.17d.1.017.

25

Lavrukhin, A. A., and A. S. Tukanova. "Interval Approach to Magnetotelluric Data Processing." Journal of Physics: Conference Series 2096, no. 1 (November 1, 2021): 012127. http://dx.doi.org/10.1088/1742-6596/2096/1/012127.

Abstract:
Abstract. The article presents a new approach to estimating the frequency characteristics of the impedance tensor for processing magnetotelluric data. The approach is based on applying interval analysis methods to the solution of a system of linear equations. As a reference method for comparison, a combined robust algorithm is used (with data discarded by the coherence criterion, median estimation, and the weighted least squares method). This algorithm is compared with the results of the proposed interval computational algorithm, which is based on the method of J. Rohn as implemented in the intvalpy Python library. Computational experiments were performed on natural magnetotelluric field data. The interval approach can be successfully applied to the processing of magnetotelluric data.
26

Xu, Jiang, Siqian Liu, Zhikui Chen, and Yonglin Leng. "A Hybrid Imputation Method Based on Denoising Restricted Boltzmann Machine." International Journal of Grid and High Performance Computing 10, no. 2 (April 2018): 1–13. http://dx.doi.org/10.4018/ijghpc.2018040101.

Abstract:
Data imputation is an important issue in data processing and analysis which has a serious impact on the results of data mining and learning. Most existing algorithms either use whole data sets for imputation or consider only the correlation among records. To address these problems, the article proposes a hybrid method to fill incomplete data. In order to reduce interference and computation, a denoising restricted Boltzmann machine model is developed for robust feature extraction from incomplete data and for clustering. The article then proposes partial-distance and co-occurrence-matrix strategies to measure the correlation between records and between attributes, respectively. Finally, the quantified correlation is converted to weights for imputation. Compared with different algorithms, the experimental results confirm the effectiveness and efficiency of the proposed method in data imputation.
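The partial-distance strategy in isolation is a one-function idea: compute distances only over co-observed attributes and rescale by the fraction observed. The RBM feature extraction and co-occurrence weighting are left out of this sketch.

```python
import numpy as np

def partial_distance(a, b):
    """Euclidean distance over co-observed attributes, rescaled so records
    with many missing values are not spuriously 'close'."""
    mask = ~(np.isnan(a) | np.isnan(b))
    if not mask.any():
        return np.inf
    d2 = np.sum((a[mask] - b[mask]) ** 2)
    return np.sqrt(d2 * a.size / mask.sum())

x = np.array([1.0, 2.0, np.nan, 4.0])
y = np.array([1.5, np.nan, 3.0, 4.5])
print(partial_distance(x, y))   # uses attributes 0 and 3 only, scaled by 4/2
```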
27

Diekfuss, Jed A., Kim D. Barber Foss, Alexis B. Slutsky-Ganesh, Dustin R. Grooms, Anna J. Saltman, Scott Bonnette, Kate Berz, et al. "DOES FEAR OF MOVEMENT ALTER BRAIN ACTIVITY? INVESTIGATING THE NEURAL MARKERS OF KINESIOPHOBIA IN PEDIATRIC PATIENTS WITH PATELLOFEMORAL PAIN." Orthopaedic Journal of Sports Medicine 9, no. 7_suppl3 (July 1, 2021): 2325967121S0012. http://dx.doi.org/10.1177/2325967121s00122.

Abstract:
Background: Patellofemoral pain (PFP) is a chronic knee condition that inhibits movement quality and can cascade into kinesiophobia (i.e., fear of pain/movement). Despite its high prevalence in adolescent girls, PFP etiology has remained elusive, potentially due to an incomplete understanding of how pain, motor control, and kinesiophobia interact with central nervous system (CNS) functioning. Discovering linkages between motor control, kinesiophobia, and the CNS for patients with PFP could provide novel opportunities to refine neural therapeutic strategies for enhanced, personalized treatment approaches. Hypothesis/Purpose: To identify neural markers of kinesiophobia in pediatric patients with PFP. Methods: Adolescent girls clinically diagnosed with PFP (n = 14; 14.3 ± 3.2 yrs) were positioned supine in a magnetic resonance imaging (MRI) scanner. A modified Clarke's test (patellofemoral grind test) was administered to the left knee of patients during brain functional MRI (fMRI) acquisition. The experimenter placed a hand at the superior patellar border, applying intermittent distal pressure while the patient contracted her quadriceps. Patients also completed a quadriceps contraction test without application of external pressure (control). Kinesiophobia was measured using the Tampa Scale of Kinesiophobia. fMRI analyses compared brain activation between the two tasks, and correlation analyses were performed to assess relationships between task-induced brain activity and kinesiophobia. Statistical corrections were made to account for multiple, voxel-wise comparisons. Results: As evidenced in Table 1 and Figure 1A, neuroimaging analyses revealed distinct bidirectional activation during the Clarke's test compared to the control (all p_corrected < .05, all z_max > 3.1). Table 1 and Figure 1B further highlight that greater kinesiophobia was associated with increased brain activity during the Clarke's test in two clusters (both p_corrected < .05, both z_max > 3.1), but no relationships between kinesiophobia and brain activity during the control test were observed (all p_corrected > .05, all z_max < 3.1). Conclusion: The current results indicate that the Clarke's test induced differential pain-related brain activity relative to the control (e.g., paracingulate gyrus). The patients' degree of kinesiophobia was also related to brain activity in regions important for sensorimotor control, attention, and pain. Collectively, these data indicate that PFP may be due to alterations in nociceptive processing throughout the CNS, providing novel complementary pathways for targeted restoration of patellofemoral joint dysfunction. Future research should consider combined mechanistic pain profiling, sensorimotor, psychometric, and CNS assessments to identify patients most susceptible to kinesiophobia to refine treatment approaches that work towards a goal of disease prevention.
28

Yuguang, Niu, Wang Shilin, and Du Ming. "A Combined Markov Chain Model and Generalized Projection Nonnegative Matrix Factorization Approach for Fault Diagnosis." Mathematical Problems in Engineering 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/7067025.

Abstract:
The presence of sets of incomplete measurements is a significant issue in the real-world application of multivariate statistical process monitoring models for industrial process fault detection. Since the missing data in the incomplete measurements are usually correlated with some of the available variables, these measurements can be used if an efficient algorithm is presented. To resolve the problem, a novel method combining the Markov chain model and generalized projection nonnegative matrix factorization (MCM-GPNMF) is proposed to detect and diagnose the faults in industrial processes. The basic idea of the approach is to use MCM-GPNMF to extract the dominant variables from incomplete process data and to combine them with statistical process monitoring techniques. The T²_G and SPE_G statistics are defined as online monitoring quantities for fault detection, and corresponding contribution plots are also considered for fault isolation. The proposed method is applied to a 1000 MW unit boiler process. The simulation results clearly illustrate the feasibility of the proposed method.
29

Yu, Yong Tao, and Ying Ding. "Sea-Battlefield Situation Assessment Method Based on Data Mining." Advanced Materials Research 677 (March 2013): 460–65. http://dx.doi.org/10.4028/www.scientific.net/amr.677.460.

Abstract:
Describing a complex, dynamic sea-battlefield situation efficiently is the major problem facing operational decision support. This paper studies sea-battlefield situation assessment based on data mining technology. First, the data mining tasks for sea-battlefield situation assessment are analyzed and determined; second, a sea-battlefield situation data mart is built; third, the sea-battlefield situation datasets undergo data pre-processing operations to solve the problems of inaccurate and incomplete data; fourth, the attributes of the situation assessment data are simplified to improve the efficiency of rule extraction; finally, a genetic algorithm is used to extract the optimal sea-battlefield situation assessment rules from the simplified assessment rules.
30

Göppel, Simon, Jürgen Frikel, and Markus Haltmeier. "Feature Reconstruction from Incomplete Tomographic Data without Detour." Mathematics 10, no. 8 (April 15, 2022): 1318. http://dx.doi.org/10.3390/math10081318.

Abstract:
In this paper, we consider the problem of feature reconstruction from incomplete X-ray CT data. Such incomplete data problems occur when the number of measured X-rays is restricted either to limit radiation exposure or due to practical constraints that make the detection of certain rays challenging. Since image reconstruction from incomplete data is a severely ill-posed (unstable) problem, the reconstructed images may suffer from characteristic artefacts or missing features, thus significantly complicating subsequent image processing tasks (e.g., edge detection or segmentation). In this paper, we introduce a framework for the robust reconstruction of convolutional image features directly from CT data without the need of computing a reconstructed image first. Within our framework, we use non-linear variational regularization methods that can be adapted to a variety of feature reconstruction tasks and to several limited data situations. The proposed variational regularization method minimizes an energy functional being the sum of a feature dependent data-fitting term and an additional penalty accounting for specific properties of the features. In our numerical experiments, we consider instances of edge reconstructions from angular under-sampled data and show that our approach is able to reliably reconstruct feature maps in this case.
31

Qin, Hongwu, Huifang Li, Xiuqin Ma, Zhangyun Gong, Yuntao Cheng, and Qinghua Fei. "Data Analysis Approach for Incomplete Interval-Valued Intuitionistic Fuzzy Soft Sets." Symmetry 12, no. 7 (June 27, 2020): 1061. http://dx.doi.org/10.3390/sym12071061.

Abstract:
The model of interval-valued intuitionistic fuzzy soft sets is an excellent novel solution for managing the uncertainty and fuzziness of data. However, when this model is applied in practice, there are often missing data for a variety of reasons. To handle this problem, this paper presents new data processing approaches for an incomplete interval-valued intuitionistic fuzzy soft set. Missing data are ignored if the percentages of the missing degrees of membership and non-membership in the total degrees of membership and non-membership, for both the related parameter and the related object, are below the threshold values; otherwise, they are filled in. The proposed filling method fully considers and employs the characteristics of the interval-valued intuitionistic fuzzy soft set itself. A case is presented to illustrate the proposed method. The results of experiments on thirty randomly generated datasets show that the overall accuracy rate of our filling method reaches 80.1%. Finally, we give one real-life application to illustrate our proposed method.
32

Hoff, Axel A. "Chaos Control and Neural Classification." Zeitschrift für Naturforschung A 49, no. 4-5 (May 1, 1994): 589–93. http://dx.doi.org/10.1515/zna-1994-4-511.

Abstract:
Abstract. Chaotic behaviour in biological neural networks is known from various experiments. The recent finding that it is possible to "control" chaotic systems may help answer the question of whether chaos plays an active role in neural information processing. It is demonstrated that a method for chaos control proposed by Pyragas can be used to let a chaotic system act like an autoassociative memory for time-signal inputs. Specifically, a combined chaotic and chaos-control system can reconstruct unstable periodic orbits from incomplete information. The potential relevance of these findings for neural information processing is pointed out.
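Pyragas control has a standard form, F(t) = K[y(t − τ) − y(t)]. Below it is demonstrated on the Rössler system with a naive Euler integrator; the gain and delay are hand-picked for the demo and are not taken from the paper, and locking onto a periodic orbit is not guaranteed for other settings.

```python
import numpy as np

def rossler_pyragas(K=0.2, tau=5.9, dt=0.01, steps=60_000):
    """Euler integration of the Rossler system with delayed feedback
    F(t) = K * [x(t - tau) - x(t)] added to the first equation."""
    a, b, c = 0.2, 0.2, 5.7
    lag = int(tau / dt)
    x, y, z = 1.0, 1.0, 1.0
    hist = np.zeros(steps)                    # buffer of past x(t) values
    for n in range(steps):
        f = K * (hist[n - lag] - x) if n >= lag else 0.0
        x, y, z = (x + dt * (-y - z + f),
                   y + dt * (x + a * y),
                   z + dt * (b + z * (x - c)))
        hist[n] = x
    return hist

x = rossler_pyragas()
late = x[-20_000:]                            # after transients
peaks = late[1:-1][(late[1:-1] > late[:-2]) & (late[1:-1] > late[2:])]
print("spread of late x-maxima:", peaks.max() - peaks.min())  # ~0 if locked
```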
33

Barnes, Gary, and John Lumley. "Processing gravity gradient data." GEOPHYSICS 76, no. 2 (March 2011): I33—I47. http://dx.doi.org/10.1190/1.3548548.

Abstract:
As the demand for high-resolution gravity gradient data increases and surveys are undertaken over larger areas, new challenges for data processing have emerged. In the case of full-tensor gradiometry, the processor is faced with multiple derivative measurements of the gravity field with useful signal content down to a few hundred meters’ wavelength. Ideally, all measurement data should be processed together in a joint scheme to exploit the fact that all components derive from a common source. We have investigated two methods used in commercial practice to process airborne full-tensor gravity gradient data; the methods result in enhanced, noise-reduced estimates of the tensor. The first is based around Fourier operators that perform integration and differentiation in the spatial frequency domain. By transforming the tensor measurements to a common component, the data can be combined in a way that reduces noise. The second method is based on the equivalent-source technique, where all measurements are inverted into a single density distribution. This technique incorporates a model that accommodates low-order drift in the measurements, thereby making the inversion less susceptible to correlated time-domain noise. A leveling stage is therefore not required in processing. In our work, using data generated from a geologic model along with noise and survey patterns taken from a real survey, we have analyzed the difference between the processed data and the known signal to show that, when considering the Gzz component, the modified equivalent-source processing method can reduce the noise level by a factor of 2.4. The technique has proven useful for processing data from airborne gradiometer surveys over mountainous terrain where the flight lines tend to be flown at vastly differing heights.
34

Kim, Jong Hyen, and Byeong Seok Ahn. "The Hierarchical VIKOR Method with Incomplete Information: Supplier Selection Problem." Sustainability 12, no. 22 (November 18, 2020): 9602. http://dx.doi.org/10.3390/su12229602.

Abstract:
To solve a multi-criteria decision-making problem, many attempts have been made to alleviate the difficulty of obtaining precise preference information, which is attributed to time pressure, lack of data and domain knowledge, limited attention and information-processing capabilities, etc. Structuring a decision problem hierarchically is known to be an efficient way of dealing with complexity and identifying the major components of the problem. In this paper, we propose the hierarchical VIKOR method, which uses incomplete alternatives' values as well as incomplete criteria weights, extending previous works that mostly consider intervals or fuzzy numbers under a flat structure of criteria. It ranks alternatives by aggregating group utility and individual regret scores, which are computed from linear programs. To show how to use the proposed method, we work through an international supplier selection problem that affects the organization's sustainable growth.
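For orientation, the sketch below computes the classical VIKOR quantities: the group utility S, the individual regret R, and the compromise score Q. The paper's actual contribution, linear programs that handle incomplete weights and values under a hierarchy, is not reproduced; fixed weights and a flat criteria set are stand-in assumptions.

```python
# A sketch of the classical VIKOR aggregation with fixed, complete weights.
import numpy as np

X = np.array([[250., 16., 12.],    # alternatives (rows) x criteria (columns)
              [200., 16., 8.],
              [300., 32., 16.]])
w = np.array([0.4, 0.3, 0.3])      # assumed criteria weights
benefit = np.array([False, True, True])   # first criterion is a cost

best = np.where(benefit, X.max(0), X.min(0))
worst = np.where(benefit, X.min(0), X.max(0))
D = w * (best - X) / (best - worst)       # weighted normalized distances
S, R = D.sum(1), D.max(1)                 # group utility, individual regret
v = 0.5                                   # weight of the majority strategy
Q = v*(S - S.min())/(S.max() - S.min()) + (1 - v)*(R - R.min())/(R.max() - R.min())
print(np.argsort(Q))                      # compromise ranking, best first
```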
35

Boyadzhieva, Desislava, and Georgi Gluhchev. "A Combined Method for On-Line Signature Verification." Cybernetics and Information Technologies 14, no. 2 (July 15, 2014): 92–97. http://dx.doi.org/10.2478/cait-2014-0022.

Abstract:
A combined method for on-line signature verification is presented in this paper, covering all the steps needed to develop a signature recognition system: signature data pre-processing, feature extraction and selection, verification, and system evaluation. Neural networks (NNs) are used for verification. The influence of the signature forgery type (random and skilled) on the verification results is investigated as well. The experiments are carried out on the SUsig database, which consists of genuine and forged signatures of 89 users. The average accuracy is 98.46%.
36

Chervyakov, N. I., P. A. Lyakhov, and A. R. Orazaev. "3D-generalization of impulse noise removal method for video data processing." Computer Optics 44, no. 1 (February 2020): 92–100. http://dx.doi.org/10.18287/2412-6179-co-577.

Abstract:
The paper proposes a generalized method of adaptive median impulse-noise filtering for video data processing. The method is based on the combined use of iterative processing and transformation of the median-filtering result based on the Lorentz distribution. Four different combinations of the method's algorithmic blocks are proposed. The experimental part of the paper compares the quality of the proposed method with known analogues. Video distorted by impulse noise with pixel distortion probabilities from 1% to 99% inclusive was used for the simulation. Numerical assessment of denoising quality based on mean square error (MSE) and structural similarity (SSIM) showed that the proposed method outperforms the known approaches in all considered cases. The results can be used in practical applications of digital video processing, for example in video surveillance, identification systems, and control of industrial processes.
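The classical single-frame building block behind such schemes is the adaptive median filter, which grows its window until the median is not itself an impulse and replaces only the pixels judged corrupted. The sketch below shows that baseline; the paper's Lorentz-based transformation and 3D temporal extension are not reproduced.

```python
# A sketch of the single-frame adaptive median baseline for a 2-D grayscale
# image given as a NumPy array.
import numpy as np

def adaptive_median(img, smax=7):
    pad = smax // 2
    p = np.pad(img, pad, mode='edge')
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            for s in range(3, smax + 1, 2):          # grow window until valid
                h = s // 2
                w = p[i+pad-h:i+pad+h+1, j+pad-h:j+pad+h+1]
                zmin, zmed, zmax = w.min(), np.median(w), w.max()
                if zmin < zmed < zmax:               # median is not an impulse
                    if not (zmin < img[i, j] < zmax):
                        out[i, j] = zmed             # replace corrupted pixel only
                    break
    return out
```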
37

Dong, Yu Hua, and Hai Chun Ning. "Exterior Ballistic Data Processing by SVD and Wavelet Transform." Advanced Materials Research 562-564 (August 2012): 1394–97. http://dx.doi.org/10.4028/www.scientific.net/amr.562-564.1394.

Abstract:
This paper proposes a method combining the wavelet transform with SVD (singular value decomposition) and studies the elimination of abnormal data in trajectory measurements. After wavelet decomposition of the observed data, the approximate and detail components are combined to reconstruct the phase space. The singular-entropy increment criterion is applied to the observed input matrix of the SVD to select the singular values, and the original signal is then reconstructed by the inverse SVD transform. This method overcomes the end-distortion problem of phase-space reconstruction with a Hankel matrix: the phase space reconstructed from the wavelet-decomposition components is orthogonal, which further improves the accuracy of noise reduction and abnormal-data detection by SVD. Processing of experimental data shows the effectiveness of the proposed method.
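For the SVD stage alone, the sketch below embeds a 1-D signal into a Hankel trajectory matrix, keeps the leading singular components, and averages anti-diagonals back into a signal. A fixed rank stands in for the singular-entropy increment criterion, and the wavelet stage is omitted; both are assumptions made for brevity.

```python
# A sketch of SVD denoising via a Hankel trajectory matrix: embed, truncate,
# average anti-diagonals back to a 1-D signal.
import numpy as np

def hankel_svd_denoise(x, L=20, rank=2):
    N = len(x)
    H = np.array([x[i:i+L] for i in range(N - L + 1)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]           # keep leading components
    y, c = np.zeros(N), np.zeros(N)
    for i in range(Hr.shape[0]):                        # average anti-diagonals
        y[i:i+L] += Hr[i]
        c[i:i+L] += 1
    return y / c

t = np.linspace(0, 1, 200)
noisy = np.sin(2*np.pi*5*t) + 0.3*np.random.default_rng(0).standard_normal(200)
clean = hankel_svd_denoise(noisy)
```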
38

McDonald, Thomas, Mark Robinson, and GuiYun Tian. "Spatial resolution enhancement of rotational-radar subsurface datasets using combined processing method." Journal of Physics: Conference Series 2090, no. 1 (November 1, 2021): 012001. http://dx.doi.org/10.1088/1742-6596/2090/1/012001.

Abstract:
Effective visualisation of railway tunnel subsurface features (e.g. voids, utilities) provides critical insight into structural health and underpins planning of essential targeted predictive maintenance. Subsurface visualisation here utilises a rotating ground penetrating radar antenna system for 360° point cloud data capture. This technology has been constructed by our industry partner Railview Ltd and requires the development of complementary signal processing algorithms to improve feature localisation. The main novelty of this work is the extension of Shrestha and Arai's Combined Processing Method (CPM) to 360° Ground Penetrating Radar (360GPR) datasets, for first-time application in the context of railway tunnel structural health inspection. Initial experimental acquisition of a sample rotational transect for CPM enhancement is achieved by scanning a test section of tunnel sidewall - featuring predefined target geometry - with the rotating antenna. Next, frequency data separately undergo Inverse Fast Fourier Transform (IFFT) and Multiple Signal Classification (MUSIC) processing to recover temporal responses. Numerical implementation steps are explicitly provided for both MUSIC and two associated spatial smoothing algorithms, addressing an identified information deficit in the field. The described IFFT amplitude is combined with the high spatial resolution of MUSIC via the CPM relation. Finally, temporal responses are compared qualitatively and quantitatively, evidencing the significant enhancement capabilities of CPM.
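A compact way to see the pipeline is frequency-domain MUSIC for time-delay estimation: a smoothed covariance is built from frequency sub-bands, the noise subspace is extracted, and a pseudospectrum is scanned over candidate delays. The sketch below is my simplified reading, assuming a uniform frequency grid, not the paper's exact implementation; CPM then multiplies the IFFT range profile (amplitude) by this pseudospectrum (resolution) on a common time axis.

```python
# A simplified sketch of frequency-domain MUSIC with forward spatial
# smoothing, for stepped-frequency data S on a uniform frequency grid.
import numpy as np

def music_profile(S, freqs, n_targets, times, L=None):
    n = len(S)
    L = L or n // 2
    R = np.zeros((L, L), dtype=complex)          # smoothed covariance from
    for i in range(n - L + 1):                   # sliding frequency sub-bands
        s = S[i:i+L]
        R += np.outer(s, s.conj())
    R /= n - L + 1
    _, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, :L - n_targets]                    # noise subspace
    P = np.empty(len(times))
    for k, t in enumerate(times):
        a = np.exp(-2j*np.pi*freqs[:L]*t)        # steering vector for delay t
        P[k] = 1.0 / np.linalg.norm(En.conj().T @ a)**2
    return P / P.max()

# CPM idea: combined = ifft_amplitude(times) * music_profile(...), so the
# product keeps IFFT's amplitude information at MUSIC's resolution.
```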
39

Xu, Bo. "Artificial Intelligence Teaching System and Data Processing Method Based on Big Data." Complexity 2021 (May 5, 2021): 1–11. http://dx.doi.org/10.1155/2021/9919401.

Abstract:
With the rapid development of big data, artificial intelligence teaching systems have been widely developed and have become tools for independent learning in many universities. Their defining characteristic is freedom from the time and space constraints of traditional teaching, building a brand-new learning environment; this is the mainstream trend of future learning. As a carrier of students' autonomous learning, an artificial intelligence teaching system provides rich learning resources and tools on the one hand and, on the other, gradually accumulates large amounts of data on learning behaviour and learning status, which constitute valuable dynamic resources for the in-depth study of online learning. Based on domestic and foreign research on learning analytics and common big data analysis methods, and guided by practical learning-evaluation goals, this paper proposes an artificial intelligence teaching system that uses big data analysis methods, together with a modelling framework for online learning evaluation, and builds predictive evaluation models from student data to assess learning outcomes. The evaluation results enable teachers to predict, after a period of teaching, whether students can successfully complete a course; learning problems can be discovered in time from the final evaluation, and targeted interventions can be made for at-risk students. The scientific and objective learning evaluation obtained through data analysis not only provides teachers with relevant information for personalized guidance but also improves the adaptive and personalized services of the learning platform, greatly reducing teachers' workload. Artificial intelligence teaching evaluation can help educators understand problems in teaching, adjust teaching strategies in time, and improve teaching results.
40

Saputra, Darman. "Analysis of Factors Affecting Economic Growth in Bangka Belitung Province, Indonesia with LSDV And FGLS Methods." Integrated Journal of Business and Economics 2, no. 1 (February 1, 2018): 24. http://dx.doi.org/10.33019/ijbe.v2i1.42.

Abstract:
The Least Square Dummy Variable (LSDV) method can be used to estimate the parameters of an incomplete one-way fixed-effect panel data regression model, producing the best model with the GDP data of GRASB. The variable for which heteroscedasticity does not occur and whose model has the smallest sum of squared errors is Mining and Processing Industry; this variable affects per capita income. The Feasible Generalized Least Squares (FGLS) method can be used to estimate the regression parameters of an incomplete one-way random-effect panel data model; it produces the best model with the non-oil-and-gas GRDP data, and the qualifying variables are the processing industry, services, and agriculture, forestry, and fisheries. From these models it can be concluded that non-oil-and-gas GRDP has three factors that affect per capita income in Bangka Belitung. This should serve as a reference for local government to further improve quality and production in agriculture and services, because this potential is more promising for the future. The data in this paper were analyzed with R.
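LSDV amounts to ordinary least squares with one dummy variable per entity, and it works unchanged on unbalanced (incomplete) panels. A minimal sketch with made-up data:

```python
# A sketch of LSDV on an unbalanced panel: fixed effects enter as entity
# dummies in ordinary least squares. All data are invented for illustration.
import numpy as np

entity = np.array([0, 0, 0, 1, 1, 2, 2, 2])      # unequal group sizes
x = np.array([1.0, 2.0, 3.0, 1.5, 2.5, 1.0, 2.0, 4.0])
effects = np.array([1, 1, 1, 3, 3, 5, 5, 5])     # true entity intercepts
y = 2.0 * x + effects + 0.1 * np.random.default_rng(0).standard_normal(8)

D = (entity[:, None] == np.unique(entity)[None, :]).astype(float)  # dummies
X = np.column_stack([x, D])                      # common slope + one dummy each
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("slope:", beta[0], "fixed effects:", beta[1:])
```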
41

Vuong, Le Tien. "On the application of fuzzy set theory in relational database." Journal of Computer Science and Cybernetics 1, no. 4 (August 6, 2015): 10–15. http://dx.doi.org/10.15625/1813-9663/1/4/6699.

Abstract:
In this paper we introduce a model of a retrieval system based on the theory of fuzzy sets, for which we construct a suitable storage structure built on inverted files, including incomplete inverted files for collections of fuzzy data in the database. Within this framework we present a method for fuzzy query processing in the system and describe several retrieval strategies. The general response system is also defined, and the capability of these strategies is investigated.
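In a fuzzy inverted file, each posting carries a membership grade rather than a plain occurrence flag, and queries combine grades with fuzzy connectives (min for AND, max for OR). The sketch below is an illustrative toy index, not the paper's storage structure:

```python
# A toy fuzzy inverted file: postings carry membership grades, and a
# conjunctive query combines grades with min.
inverted = {
    "reliable": {"doc1": 0.9, "doc3": 0.4},
    "cheap":    {"doc1": 0.6, "doc2": 0.8},
}

def fuzzy_and(term_a, term_b, index):
    a, b = index.get(term_a, {}), index.get(term_b, {})
    return {d: min(a[d], b[d]) for d in a.keys() & b.keys()}

# documents ranked by combined membership grade
print(sorted(fuzzy_and("reliable", "cheap", inverted).items(),
             key=lambda kv: -kv[1]))
```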
42

Bi, Anqi, Wenhao Ying, and Lu Zhao. "Fast Enhanced Exemplar-Based Clustering for Incomplete EEG Signals." Computational and Mathematical Methods in Medicine 2020 (May 8, 2020): 1–11. http://dx.doi.org/10.1155/2020/4147807.

Abstract:
The diagnosis and treatment of epilepsy is a significant direction for both machine learning and brain science. This paper proposes a fast enhanced exemplar-based clustering (FEEC) method for incomplete EEG signals. The algorithm first compresses the potential exemplar list and reduces the pairwise similarity matrix. After processing the most complete data in the first stage, FEEC extends the exemplar list with the few incomplete data. A new compressed similarity matrix is then constructed, greatly reducing the scale of the matrix. Finally, FEEC optimizes the new target function by the enhanced α-expansion move method. Owing to the pairwise relationship, FEEC also generalizes well. In contrast to other exemplar-based models, the performance of the proposed clustering algorithm is comprehensively verified by experiments on two datasets.
43

Ren, Qi Liang, Ming Ming Li, Qing Li, Jin Shan Chen, and Bo Li. "Improved Model for Strategic-Loading-Station Location Problem under Incomplete Information." Key Engineering Materials 474-476 (April 2011): 151–57. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.151.

Abstract:
In order to plan the layout of strategic loading stations scientifically, the strategic-loading-station location problem is proposed: determine the locations of the stations and the transportation plan subject to general location constraints and problem-specific constraints. Based on the theory of capacitated facility location with multiple sourcing in a two-stage supply chain, a mixed integer linear programming model is established whose objective is the whole logistics cost, comprising transportation and location costs. Small and medium-sized instances can be solved exactly by optimization software using a branch-and-bound algorithm. Large instances can be solved heuristically by a genetic algorithm that uses a combined 0-1 and priority-based encoding to represent solutions, a combination of cost-based and randomly generated initial solutions, a solution-repair strategy together with a penalty method and a special fitness function to deal with illegal individuals, and a greedy method to solve the transshipment subproblem heuristically. Computational tests on randomly generated data demonstrate the practical feasibility of this method.
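The exact-solution route can be sketched with an off-the-shelf MILP modeller. The instance below (sites, capacities, demands, and costs are all invented for illustration, not the paper's data) opens stations and ships with multiple sourcing, minimizing fixed-opening plus transportation cost:

```python
# A sketch of the capacitated location MILP with multiple sourcing in PuLP.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

sites, demands = ["s1", "s2"], ["d1", "d2", "d3"]
fixed = {"s1": 100, "s2": 120}              # station opening costs
cap = {"s1": 60, "s2": 80}                  # station capacities
need = {"d1": 30, "d2": 25, "d3": 35}       # demand at each point
cost = {("s1", "d1"): 4, ("s1", "d2"): 6, ("s1", "d3"): 5,
        ("s2", "d1"): 7, ("s2", "d2"): 3, ("s2", "d3"): 4}

m = LpProblem("loading_station_location", LpMinimize)
y = {i: LpVariable(f"open_{i}", cat=LpBinary) for i in sites}
x = {(i, j): LpVariable(f"ship_{i}_{j}", lowBound=0) for i in sites for j in demands}

m += lpSum(fixed[i] * y[i] for i in sites) + \
     lpSum(cost[i, j] * x[i, j] for i in sites for j in demands)
for j in demands:                           # demand met, multiple sourcing allowed
    m += lpSum(x[i, j] for i in sites) == need[j]
for i in sites:                             # ship only from opened stations
    m += lpSum(x[i, j] for j in demands) <= cap[i] * y[i]
m.solve()                                   # branch and bound via the CBC solver
```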
44

Miropolsky, A., and A. Fischer. "Utilizing Diverse Feature Data for Reconstruction of Scanned Object as a Basis for Inspection." Journal of Computing and Information Science in Engineering 7, no. 3 (July 11, 2007): 211–24. http://dx.doi.org/10.1115/1.2768370.

Abstract:
Inspection of machined objects is one of the most important quality control tasks in the manufacturing industry. Ideally, inspection processes should be able to work directly on scan point data. Scan data, however, are typically very large scale (i.e., many points), unorganized, noisy, and incomplete. Therefore, direct processing of scanned points is problematic. Many of these problems may be reduced if reconstruction methods exploit diverse scan data, that is, information about the properties of the scanned object. This paper describes this concept and proposes new methods for extraction and processing of diverse scan data: (1) extraction (detection of a scanned object’s sharp features by the sharp feature detection method) and (2) processing (scan data reduction by the geometric bilateral filter method). The proposed methods are applied directly on the scanned points and are completely automatic, fast, and straightforward to implement. Finally, this paper demonstrates the integration of the proposed methods into the computational inspection process.
45

Jiang, Li, Hao Chen, Yueqi Ouyang, and Canbing Li. "A Multisource Retrospective Audit Method for Data Quality Optimization and Evaluation." International Journal of Distributed Sensor Networks 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/195015.

Abstract:
With the rapid development of information technology and the coming of the era of big data, various data are constantly emerging, characterized by autonomy and heterogeneity. How to optimize data quality and evaluate the effect has become a challenging problem. First, a heterogeneous data integration model based on retrospective audit is proposed to locate the original data source and match the data. Second, to improve the quality of the integrated data, a retrospective audit model and associative audit rules are proposed to fix incomplete and incorrect data from multiple heterogeneous data sources. The integration model is divided into four modules: original heterogeneous data, data structure, data processing, and data retrospective audit. Finally, assessment criteria such as redundancy, sparsity, and accuracy are defined to evaluate the effect of the optimized data quality. Experimental results show that the quality of the integrated data is significantly higher than that of the original data.
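The three assessment criteria can be computed directly from a table. The sketch below uses plausible definitions (duplicate share, missing-cell share, and match rate against a trusted reference), which are my assumptions rather than the paper's exact formulas:

```python
# Plausible formulas for redundancy, sparsity, and accuracy of a dataset.
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 2, 3], "value": [10, 10, None, 7]})
truth = {1: 10, 2: 5, 3: 7}                      # trusted reference values

redundancy = 1 - df.drop_duplicates().shape[0] / df.shape[0]
sparsity = df.isna().to_numpy().mean()
checked = df.dropna().drop_duplicates()
accuracy = (checked["value"] == checked["id"].map(truth)).mean()
print(redundancy, sparsity, accuracy)
```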
46

Chen, Qi, and Yu Ming Ma. "The Research on Cloud Platform Considered Privacy Household Load Data Processing." Advanced Materials Research 1049-1050 (October 2014): 1929–33. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1929.

Abstract:
To address the randomness of user behaviour, the lack of effective privacy protection, and the shortage of massive-data processing capacity, this paper proposes a household load data processing method with privacy protection that combines data mining and cloud computing. First, it gives the platform architecture and an individual protection model. Then it proposes the Mask-k_means database encryption method, with algorithm parallelization implemented in MapReduce. Finally, the method is verified, and household load data processing is carried out on statistical and real-measured data respectively. One conclusion is that the feasibility of cluster analysis across multiple users is low, while single-user load analysis is highly feasible; another is that the method is simple and practical. This provides a new way to handle household load big data under smart electricity consumption.
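One way to read the Mask-k_means idea is that load profiles are masked with a secret, geometry-preserving transform before clustering, so the cloud platform never sees raw consumption values. The sketch below uses a simple affine mask, which is my assumption; the paper's actual encryption may differ:

```python
# A sketch of masked clustering: an affine mask preserves relative distances,
# so k-means on masked profiles recovers the same grouping.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
loads = rng.random((100, 24))          # 100 households, 24-hour load profiles
a, b = 3.7, 0.5                        # secrets held on the household side
masked = a * loads + b                 # preserves distances up to scale

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(masked)
# the partition matches clustering the raw profiles, since k-means
# depends only on relative distances
```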
47

Song, Jie, Qiang He, Feifei Chen, Ye Yuan, and Ge Yu. "PoBery: Possibly-complete Big Data Queries with Probabilistic Data Placement and Scanning." ACM/IMS Transactions on Data Science 2, no. 3 (August 31, 2021): 1–28. http://dx.doi.org/10.1145/3465375.

Abstract:
In big data query processing there is a trade-off between query accuracy and query efficiency; sampling approaches, for example, trade query completeness for efficiency. In this article, we argue that query performance can be significantly improved by slightly losing the possibility of query completeness, that is, the chance that a query is complete. To quantify this possibility we define a new concept, Probability of query Completeness (hereinafter referred to as PC). For example, if a query is executed 100 times, PC = 0.95 guarantees that there are no more than 5 incomplete results among the 100 results. Leveraging probabilistic data placement and scanning, we trade PC for query performance. In the article we propose PoBery (POssibly-complete Big data quERY), a method that supports neither complete queries nor incomplete queries, but possibly-complete queries. The experimental results conducted on HiBench prove that PoBery can significantly accelerate queries while ensuring the PC; specifically, it is guaranteed that the percentage of complete queries is larger than the given PC confidence. Through comparison with state-of-the-art key-value stores, we show that while Drill-based PoBery performs as fast as Drill on complete queries, it is 1.7×, 1.1×, and 1.5× faster on average than Drill, Impala, and Hive, respectively, on possibly-complete queries.
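The flavour of the trade-off can be seen in a toy placement model (my construction, not PoBery's actual scheme): if each record relevant to a query lands uniformly in one of n blocks, scanning only k blocks completes the query with probability (k/n)^m for m relevant records, so a scan budget can be derived from a PC target:

```python
# A toy model of the PC trade-off: how many of n blocks must be scanned so
# that a query touching m uniformly placed records is complete with prob >= PC.
def blocks_to_scan(n_blocks, m_records, pc_target):
    for k in range(1, n_blocks + 1):
        if (k / n_blocks) ** m_records >= pc_target:
            return k
    return n_blocks

print(blocks_to_scan(100, 3, 0.95))   # -> 99: skip one block, keep PC >= 0.95
```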
48

Yaparov, D. D., and A. L. Shestakov. "Numerical Method for Processing the Results of Dynamic Measurements." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 21, no. 4 (November 2021): 115–25. http://dx.doi.org/10.14529/ctcr210410.

Abstract:
The problem of processing data obtained during dynamic measurements is one of the central problems in measuring technology. Purpose of the study. The article studies the stability, with respect to errors in the initial data, of a method for processing the results of dynamic measurements; developing algorithms for such processing is therefore an urgent task. Materials and methods. The article proposes an algorithm, based on the finite-difference approach, for processing the data obtained during dynamic measurements. The mathematical model of the problem, associated with restoring the input signal under incomplete and noisy initial data, is as follows: the noisy output signal is known, and the input signal is restored using the transfer function of the sensor, which is presented as a differential equation describing the state of the dynamic system in real time. The computational scheme is based on finite-difference analogues of partial derivatives, and the Tikhonov regularization method is used to construct a numerical model of the sensor. The stability of methods for solving high-order differential equations is likewise one of the central problems of data processing in automatic control systems. Based on a generalized quasi-optimal choice of the regularization parameter in the Lavrent'ev method, a dependence was found relating the regularization parameter to the parameters of the dynamic measuring system, the noise index, and the required level of accuracy. Results. The main goal of the computational experiment was to construct a numerical solution of the problem under consideration. Standard test functions simulating various physical processes were used as input signals; the output signal was computed with the proposed numerical method and contaminated with 5% additive noise. Conclusion. The input signal was restored from the noisy output. The deviation of the reconstructed signal from the initial one was no more than 0.05 in all experiments, which indicates the stability of the method with respect to noisy data.
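As a rough illustration of the idea, the sketch below simulates a first-order sensor (a stand-in assumption; the paper's sensor model is more general), adds 5% noise to the output, and restores the input with a finite-difference derivative tempered by Tikhonov smoothing with a fixed regularization parameter in place of the paper's Lavrent'ev-style choice:

```python
# A sketch of regularized input restoration for a first-order sensor
# T*y' + y = u, discretized by finite differences.
import numpy as np

T, dt, alpha = 0.1, 0.01, 1e-3
t = np.arange(0, 2, dt)
u_true = np.sin(2*np.pi*t)                       # test input signal
y = np.zeros_like(t)
for k in range(len(t) - 1):                      # simulate the sensor output
    y[k+1] = y[k] + dt * (u_true[k] - y[k]) / T
y += 0.05 * np.random.default_rng(0).standard_normal(len(t))   # 5% noise

# naive inversion u = T*y' + y amplifies noise; Tikhonov smoothing tempers it
D = (np.eye(len(t), k=1) - np.eye(len(t))) / dt  # forward differences
# (the last row of D is an edge artifact of this sketch)
u_rec = np.linalg.solve(np.eye(len(t)) + alpha * (D.T @ D), T * (D @ y) + y)
```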
49

Gao, Yun Feng, and Ning Xu. "Data Processing with Combined Homotopy Methods for a Class of Nonconvex Optimization Problems." Advanced Materials Research 1046 (October 2014): 403–6. http://dx.doi.org/10.4028/www.scientific.net/amr.1046.403.

Abstract:
Building on existing theoretical results, this paper studies the realization of combined homotopy methods for optimization problems over a specific class of nonconvex constrained regions. For this nonconvex region we give a construction method for the quasi-normal, prove that the chosen mappings of the constraint gradients are positively independent, and show that the feasible region in SLM satisfies the quasi-normal cone condition. We then construct the combined homotopy equation under the quasi-normal cone condition and, through numerical examples and data processing, obtain favourable results.
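For reference, a generic combined homotopy for min f(x) subject to g_i(x) ≤ 0 couples a perturbed KKT system with the starting point and tracks the path from μ = 1 down to μ = 0. The rendering below is the standard construction with quasi-normal directions η_i(x) replacing constraint gradients on the nonconvex region; it is my sketch of the form, not the paper's exact equation:

```latex
H(w, w^{(0)}, \mu) =
\begin{pmatrix}
(1-\mu)\Bigl(\nabla f(x) + \sum_{i=1}^{m} \lambda_i\, \eta_i(x)\Bigr) + \mu\,(x - x^{(0)}) \\[4pt]
\lambda_i\, g_i(x) - \mu\, \lambda_i^{(0)} g_i\bigl(x^{(0)}\bigr), \qquad i = 1, \dots, m
\end{pmatrix}
= 0, \qquad w = (x, \lambda), \quad \mu \in (0, 1].
```

At μ = 1 the unique solution is the chosen interior starting point w^{(0)}; the solution curve is then followed numerically toward μ = 0, where the first block reduces to the KKT stationarity condition with η_i in place of the constraint gradients.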
50

Li, Han, Zhao Liu, and Ping Zhu. "An Engineering Domain Knowledge-Based Framework for Modelling Highly Incomplete Industrial Data." International Journal of Data Warehousing and Mining 17, no. 4 (October 2021): 48–66. http://dx.doi.org/10.4018/ijdwm.2021100103.

Abstract:
Missing values in industrial data restrict its applications. Although such incomplete data contain enough information for engineers to support subsequent development, there are still too many missing values for algorithms to establish precise models, because engineering domain knowledge is not considered and valuable information is not fully captured. This article therefore proposes an engineering domain knowledge-based framework for modelling incomplete industrial data. The raw datasets are partitioned and processed at different scales. First, hierarchical features are combined to decrease the missing ratio. To fill missing values in the special data identified for classifying the samples, samples with only part of their features present are fully utilized, rather than removed, to establish a local imputation model. The samples are then divided into groups to transfer information between them. A series of industrial datasets is analyzed to verify the feasibility of the proposed method.
