To view the other types of publications on this topic, follow the link: Statistical processing of real data.

Journal articles on the topic "Statistical processing of real data"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for research on the topic "Statistical processing of real data".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its online abstract, provided the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Parygin, D. S., V. P. Malikov, A. V. Golubev, N. P. Sadovnikova, T. M. Petrova, and A. G. Finogeev. "Categorical data processing for real estate objects valuation using statistical analysis". Journal of Physics: Conference Series 1015 (May 2018): 032102. http://dx.doi.org/10.1088/1742-6596/1015/3/032102.
2

Sun, Hong Feng, Ying Li, and Hong Lv. "Statistical Analysis of the Massive Traffic Data Based on Cloud Platform". Advanced Materials Research 717 (July 2013): 662–66. http://dx.doi.org/10.4028/www.scientific.net/amr.717.662.

Abstract:
Currently, with the rapid development of various geographic data acquisition technologies, data-intensive geographic computation is becoming more and more important. Urban motor vehicles equipped with GPS, namely transport vehicles, can collect large amounts of urban traffic information in real time. If these massive transport-vehicle data can be collected and analyzed in real time, they provide accurate, up-to-date base information for monitoring traffic status over large areas as well as for intelligent traffic management. Based on the requirements for organizing, processing, and statistically analyzing the massive urban traffic data, a new framework for massive data-intensive computation on a cloud platform is proposed, employing Bigtable, MapReduce, and other technologies.
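The MapReduce-style statistics pipeline this abstract describes can be illustrated in miniature; the sketch below uses plain Python in place of the paper's Bigtable/MapReduce cloud stack, and the record layout (segment id, speed) is a hypothetical stand-in for the vehicles' GPS reports.

```python
from collections import defaultdict

# Hypothetical GPS reports from transport vehicles: (road_segment_id, speed_kmh).
records = [("seg-1", 42.0), ("seg-2", 18.5), ("seg-1", 55.0), ("seg-2", 21.5)]

# "Map": emit a (key, value) pair per report, keyed by road segment.
mapped = ((seg, speed) for seg, speed in records)

# "Shuffle": group emitted values by key.
groups = defaultdict(list)
for seg, speed in mapped:
    groups[seg].append(speed)

# "Reduce": one statistic per key, here the mean speed per segment.
mean_speed = {seg: sum(v) / len(v) for seg, v in groups.items()}
print(mean_speed)  # {'seg-1': 48.5, 'seg-2': 20.0}
```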
3

Liu, Hua, and Nan Zhang. "Data Processing in the Key Factors Affecting China's Endowment Real Estate Enterprises Financing". Applied Mechanics and Materials 730 (January 2015): 349–52. http://dx.doi.org/10.4028/www.scientific.net/amm.730.349.

Abstract:
The financing problem is one of the main factors restricting the development of endowment real estate enterprises in China. By analyzing the present situation of endowment real estate enterprise financing and reviewing the relevant literature, we summarize 20 general influence factors. Using a data processing model based on principal component analysis to examine these 20 factors with the help of the SPSS 19.0 statistical analysis software, we identify the key factors that affect endowment real estate enterprise financing. Aiming at these key factors, we put forward specific measures to promote the smooth development of endowment real estate enterprise financing in China.
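The paper's key-factor extraction runs principal component analysis in SPSS 19.0; a minimal scikit-learn sketch of the same idea (PCA on standardized factors, then ranking factors by their loadings) might look as follows, with a synthetic 60 × 20 factor matrix standing in for the survey data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))        # hypothetical: 60 firms, 20 influence factors

X_std = StandardScaler().fit_transform(X)     # PCA on standardized factors
pca = PCA().fit(X_std)

# Keep enough components to explain ~80% of the variance, then rank the
# original factors by the magnitude of their loadings on those components.
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.80)) + 1
loadings = np.abs(pca.components_[:k]).sum(axis=0)
key_factors = np.argsort(loadings)[::-1][:5]
print("retained components:", k, "top factor indices:", key_factors)
```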
4

Liu, Mou Zhong, and Min Sun. "Application of Multidimensional Data Model in the Traffic Accident Data Warehouse". Applied Mechanics and Materials 548-549 (April 2014): 1857–61. http://dx.doi.org/10.4028/www.scientific.net/amm.548-549.1857.

Abstract:
The traffic administration department records real-time accident information and updates the corresponding database as part of its daily traffic routines. Studying and analyzing these data is of great significance. In this paper, we propose a Multi-dimensional Data Warehouse Model (M-DWM) that combines the data warehouse concept with multi-dimensional data processing theory. The model can greatly improve the efficiency of statistical analysis and data mining.
5

Vollmar, Melanie, James M. Parkhurst, Dominic Jaques, Arnaud Baslé, Garib N. Murshudov, David G. Waterman, and Gwyndaf Evans. "The predictive power of data-processing statistics". IUCrJ 7, no. 2 (February 27, 2020): 342–54. http://dx.doi.org/10.1107/s2052252520000895.

Abstract:
This study describes a method to estimate the likelihood of success in determining a macromolecular structure by X-ray crystallography and experimental single-wavelength anomalous dispersion (SAD) or multiple-wavelength anomalous dispersion (MAD) phasing based on initial data-processing statistics and sample crystal properties. Such a predictive tool can rapidly assess the usefulness of data and guide the collection of an optimal data set. The increase in data rates from modern macromolecular crystallography beamlines, together with a demand from users for real-time feedback, has led to pressure on computational resources and a need for smarter data handling. Statistical and machine-learning methods have been applied to construct a classifier that displays 95% accuracy for training and testing data sets compiled from 440 solved structures. Applying this classifier to new data achieved 79% accuracy. These scores already provide clear guidance as to the effective use of computing resources and offer a starting point for a personalized data-collection assistant.
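The abstract does not fix a single classifier family, so the following sketch only illustrates the train/test workflow behind the reported accuracies; the random forest, the six per-dataset features, and the synthetic labels are all assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(440, 6))   # hypothetical data-processing statistics per dataset
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0.0, 0.5, 440) > 0).astype(int)  # 1 = solved

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```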
6

Zhao, Yu Qian, and Zhi Gang Li. "FPGA Implementation of Real-Time Adaptive Bidirectional Equalization for Histogram". Advanced Materials Research 461 (February 2012): 215–19. http://dx.doi.org/10.4028/www.scientific.net/amr.461.215.

Abstract:
According to the characteristics of infrared images, a contrast enhancement algorithm is presented. The principle of FPGA-based adaptive bidirectional plateau histogram equalization is given in this paper. The plateau value is obtained by finding local maxima and the global maximum in the statistical histogram. The statistical histogram is modified by the plateau value and balanced in gray scale and gray spacing. Test data were generated from a single-frame image and processed by the FPGA-based real-time adaptive bidirectional plateau histogram equalization. The simulation results indicate that the scheme meets the requirements well in both image-processing quality and processing speed.
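A software sketch of plateau histogram equalization (the core of the FPGA design) is shown below, assuming an 8-bit image; unlike the paper, the plateau value is passed in explicitly rather than derived adaptively from local and global histogram maxima:

```python
import numpy as np

def plateau_equalize(img, plateau):
    """Histogram equalization with the histogram clipped at a plateau value,
    so dominant gray levels (e.g. a uniform infrared background) do not
    swamp the gray-level mapping."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    clipped = np.minimum(hist, plateau)                 # cap each bin
    cdf = np.cumsum(clipped).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)          # gray-level look-up table
    return lut[img]

rng = np.random.default_rng(0)
ir = rng.normal(90, 10, (64, 64)).clip(0, 255).astype(np.uint8)  # synthetic IR frame
print(plateau_equalize(ir, plateau=50).max())
```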
7

Liu, Yuxi, Yiping Zhu, and Mingzhe Wei. "Application of Point Cloud Data Processing in River Regulation". Marine Technology Society Journal 55, no. 2 (March 1, 2021): 198–204. http://dx.doi.org/10.4031/mtsj.55.2.15.

Abstract:
Geotextile materials are often used in river regulation projects to reduce sand loss caused by water erosion and thus ensure a stable and safe river bed. In order to measure the overlap width in the geotextile-laying procedure, we propose a point cloud data processing method that uses point cloud data obtained by 3-D imaging sonar to perform automatic measurements. First, random sample consensus (RANSAC) point cloud segmentation and outlier filtering based on statistical analysis of point density are used to extract the upper and lower plane data of the geotextile. Second, cluster classification is used to obtain the edge point cloud. Lastly, edge characteristic parameters are extracted by linear fitting, and the overlap width of the laid geotextile is calculated. Results show that this measurement scheme is feasible, robust, and accurate enough to meet the requirements of real-life engineering.
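The segmentation and filtering steps read like standard point cloud operations; a minimal sketch with the third-party Open3D library (a stand-in for the authors' implementation, with illustrative parameter values) could be:

```python
import numpy as np
import open3d as o3d

# Hypothetical sonar point cloud: a noisy plane plus scattered outliers.
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(0, 1, (500, 2)), rng.normal(0, 0.005, 500)]
clutter = rng.uniform(-0.5, 1.5, (50, 3))
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.vstack([plane, clutter]))

# Statistical outlier filtering based on neighborhood density, then
# RANSAC plane segmentation to recover one geotextile plane.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
model, inliers = pcd.segment_plane(distance_threshold=0.01, ransac_n=3,
                                   num_iterations=1000)
print("plane %.2fx + %.2fy + %.2fz + %.2f = 0 with %d inliers"
      % (*model, len(inliers)))
```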
8

Daume III, H., and D. Marcu. "Domain Adaptation for Statistical Classifiers". Journal of Artificial Intelligence Research 26 (June 21, 2006): 101–26. http://dx.doi.org/10.1613/jair.1872.

Abstract:
The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the "in-domain" test data is drawn from a distribution that is related, but not identical, to the "out-of-domain" distribution of the training data. We consider the common case in which labeled out-of-domain data is plentiful, but labeled in-domain data is scarce. We introduce a statistical formulation of this problem in terms of a simple mixture model and present an instantiation of this framework to maximum entropy classifiers and their linear chain counterparts. We present efficient inference algorithms for this special case based on the technique of conditional expectation maximization. Our experimental results show that our approach leads to improved performance on three real world tasks on four different data sets from the natural language processing domain.
9

Majumdar, Chitradeep, Miguel Lopez-Benitez, and Shabbir N. Merchant. "Real Smart Home Data-Assisted Statistical Traffic Modeling for the Internet of Things". IEEE Internet of Things Journal 7, no. 6 (June 2020): 4761–76. http://dx.doi.org/10.1109/jiot.2020.2969318.
10

Lytvynenko, T. I. "Problem of data analysis and forecasting using decision trees method". PROBLEMS IN PROGRAMMING, no. 2-3 (June 2016): 220–26. http://dx.doi.org/10.15407/pp2016.02-03.220.

Abstract:
This study describes an application of the decision tree approach to the problem of data analysis and forecasting. Data processing is based on real observations representing sales levels in the period between 2006 and 2009. R (a programming language and software environment) is used as the tool for statistical computing. The paper compares the method with well-known approaches and solutions in order to improve the accuracy of the obtained results.
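The study is carried out in R; an equivalent sketch in Python with scikit-learn (synthetic monthly sales standing in for the 2006-2009 observations) shows the same decision-tree forecasting idea:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical monthly sales with trend, seasonality, and noise.
rng = np.random.default_rng(0)
months = np.arange(48)
sales = 100 + 2 * months + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 48)

X = np.c_[months, months % 12]       # features: time index and calendar month
tree = DecisionTreeRegressor(max_depth=4).fit(X[:36], sales[:36])  # 3 years to train

# Note: plain regression trees cannot extrapolate a trend beyond the
# training range, which is one reason such comparisons are worth running.
pred = tree.predict(X[36:])          # forecast the held-out fourth year
print("MAE:", np.mean(np.abs(pred - sales[36:])))
```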
11

Somplak, Radovan, Zlata Smidova, Veronika Smejkalova, and Vlastimir Nevrly. "Statistical Evaluation of Large-Scale Data Logistics System". MENDEL 24, no. 2 (December 21, 2018): 9–16. http://dx.doi.org/10.13164/mendel.2018.2.009.

Abstract:
Data recording struggles with the occurrence of errors, which worsen the accuracy of follow-up calculations. Achieving satisfactory results requires data processing that eliminates the influence of errors. This paper applies a data reconciliation technique to mine data recorded from moving vehicles. The database collects information about the start and end point of each route (GPS coordinates) and its total duration. The presented methodology smooths the available data and yields an estimate of the transportation time through individual parts of the entire recorded route. This provides valuable information which can be used for further transportation planning. First, the proposed mathematical model is tested on a simplified example. The real-data application requires preprocessing, within which the anticipated routes are designed; the database is thereby supplemented with information on the probable speed of each vehicle. The mathematical model is based on weighted least squares data reconciliation, organized iteratively. Because the calculation is time-consuming, a linearised model is computed to initialize the values for the complex model. Attention is also paid to the weight setting. The weighting system is designed to reflect the quality of specific data and the dependence on traffic frequency. In this respect the model is not strict, which leaves the possibility to adapt to the current data. The case study focuses on GPS data of shipping vehicles in a particular city in the Czech Republic with several types of roads.
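The core of the method, weighted least squares reconciliation, has a closed form for linear constraints. A minimal sketch (a three-segment route whose segment times must sum to the recorded trip duration; all numbers hypothetical):

```python
import numpy as np

x = np.array([130.0, 250.0, 95.0])   # noisy per-segment travel-time estimates (s)
total = 450.0                         # trusted end-to-end duration from GPS stamps
w = np.array([1.0, 0.25, 1.0])        # weights ~ 1/variance; segment 2 is noisier

# Minimize sum(w_i * (xhat_i - x_i)^2) subject to sum(xhat_i) = total:
# xhat = x - W^-1 A^T (A W^-1 A^T)^-1 (A x - b), with A = [1 1 1], b = total.
A = np.ones((1, 3))
Winv = np.diag(1.0 / w)
correction = Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, A @ x - [total])
xhat = x - correction.ravel()
print(xhat, xhat.sum())               # reconciled times; the sum now equals 450
```

Note how the noisier middle segment absorbs most of the 25-second mismatch, which is exactly the behavior the paper's quality-dependent weighting aims for.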
12

Branisavljević, Nemanja, Zoran Kapelan, and Dušan Prodanović. "Improved real-time data anomaly detection using context classification". Journal of Hydroinformatics 13, no. 3 (January 6, 2011): 307–23. http://dx.doi.org/10.2166/hydro.2011.042.

Abstract:
The number of automated measuring and reporting systems used in water distribution and sewer systems is dramatically increasing and, as a consequence, so is the volume of data acquired. Since real-time data is likely to contain a certain amount of anomalous values and data acquisition equipment is not perfect, it is essential to equip the SCADA (Supervisory Control and Data Acquisition) system with automatic procedures that can detect the related problems and assist the user in monitoring and managing the incoming data. A number of different anomaly detection techniques and methods exist and can be used with varying success. To improve the performance, these methods must be fine tuned according to crucial aspects of the process monitored and the contexts in which the data are classified. The aim of this paper is to explore if the data context classification and pre-processing techniques can be used to improve the anomaly detection methods, especially in fully automated systems. The methodology developed is tested on sets of real-life data, using different standard and experimental anomaly detection procedures including statistical, model-based and data-mining approaches. The results obtained clearly demonstrate the effectiveness of the suggested anomaly detection methodology.
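As a minimal illustration of why context classification helps, the sketch below applies a per-context z-score rule (one of the simpler statistical detectors of the kind the paper evaluates alongside model-based and data-mining ones; the regimes and thresholds are hypothetical):

```python
import numpy as np

def detect_anomalies(values, contexts, z_thresh=3.0):
    """Flag values that are anomalous within their own context class; a single
    global threshold would miss them when regimes differ strongly."""
    values, contexts = np.asarray(values), np.asarray(contexts)
    flags = np.zeros(len(values), dtype=bool)
    for c in np.unique(contexts):
        sel = contexts == c
        mu, sigma = values[sel].mean(), values[sel].std()
        flags[sel] = np.abs(values[sel] - mu) > z_thresh * sigma
    return flags

rng = np.random.default_rng(0)
flow = np.r_[rng.normal(10, 1, 200), rng.normal(60, 5, 200)]  # dry vs. storm regime
ctx = np.r_[np.zeros(200), np.ones(200)]
flow[50] = 25.0        # anomalous for dry weather, unremarkable globally
print(np.where(detect_anomalies(flow, ctx))[0])
```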
13

Lauritsen, K. B., S. Syndergaard, H. Gleisner, M. E. Gorbunov, F. Rubek, M. B. Sørensen, and H. Wilhelmsen. "Processing and validation of refractivity from GRAS radio occultation data". Atmospheric Measurement Techniques 4, no. 10 (October 4, 2011): 2065–71. http://dx.doi.org/10.5194/amt-4-2065-2011.

Abstract:
Abstract. We discuss the processing of GRAS radio occultation (RO) data done at the GRAS Satellite Application Facility. The input data consists of operational near-real time bending angles from December 2010 from the Metop-A satellite operated by EUMETSAT. The data are processed by an Abel inversion algorithm in combination with statistical optimization based on a two-parameter fit to an MSIS climatology. We compare retrieved refractivity to analyses from ECMWF. It is found that for global averages, the mean differences to ECMWF analyses are smaller than 0.2% below 30 km (except near the surface), with standard deviations around 0.5% for altitudes between 8 and 25 km. The current processing is limited by several factors, which are discussed. In particular, the penetration depth for rising occultations is generally poor, which is related to the tracking of the L2 signal. Extrapolation of the difference between the L1 and L2 signals below the altitude where L2 is lost is possible and would generally allow deeper penetration of retrieved refractivity profiles into the lower troposphere.
14

Lauritsen, K. B., S. Syndergaard, H. Gleisner, M. E. Gorbunov, F. Rubek, M. B. Sørensen, and H. Wilhelmsen. "Processing and validation of refractivity from GRAS radio occultation data". Atmospheric Measurement Techniques Discussions 4, no. 2 (April 13, 2011): 2189–205. http://dx.doi.org/10.5194/amtd-4-2189-2011.

Abstract:
Abstract. We discuss the processing of GRAS radio occultation (RO) data done at the GRAS Satellite Application Facility. The input data consists of operational near-real time bending angles from December 2010 from the Metop-A satellite operated by EUMETSAT. The data are processed by an Abel inversion algorithm in combination with statistical optimization based on a two-parameter fit to an MSIS climatology. We compare retrieved refractivity to analyses from ECMWF. It is found that for global averages, the mean differences to ECMWF analyses are smaller than 0.2% below 30 km (except near the surface), with standard deviations around 0.5% for altitudes between 8 and 25 km. The current processing is limited by several factors, which are discussed. In particular, the penetration depth for rising occultations is generally poor, which is related to the tracking of the L2 signal. Extrapolation of the difference between the L1 and L2 signals below the altitude where L2 is lost is possible and would generally allow deeper penetration of retrieved refractivity profiles into the lower troposphere.
15

Zhang, Chunzhen, and Lei Hou. "Data middle platform construction: The strategy and practice of National Bureau of Statistics of China". Statistical Journal of the IAOS 36, no. 4 (November 25, 2020): 979–86. http://dx.doi.org/10.3233/sji-200754.

Abstract:
To address the data ‘islandization’ issue in the statistical field and to take advantage of the opportunity of the Statistical Cloud construction, the National Bureau of Statistics of China (NBS) started adopting the concept of a “data middle platform” for data resource planning. With it, NBS aims to build a comprehensive data capability platform that includes data collection and exchange; data sharing and integration; data organizing and processing; data modeling and analyses; data management and governance; and data service and application. The statistical data middle platform provides the basic capability for data application support. It also enables data to form a closed loop between the data middle platform and the business system, and eventually realizes the ‘servitization’ of statistical data that meets internal and societal requirements. As a new innovative development, the statistical data middle platform will not only solve the long-standing data island problem of NBS but will also provide a basic guarantee for greater use of the data potential, and thus will help official statistics to transform from statistical analysis to predictive analysis, from single-domain to cross-domain, from passive analysis to active analysis, and from non-real-time to real-time analysis. The paper was prepared under the kind mentorship of Ronald Jansen, Assistant Director and Chief of Data Innovation at the UN Statistics Division in New York.
16

Fedotov, S. N. "An approach to computer analysis of the ligand binding assay data on example of radioligand assay data". Journal of Bioinformatics and Computational Biology 18, no. 02 (April 2020): 2050014. http://dx.doi.org/10.1142/s0219720020500146.

Abstract:
As a rule, receptor-ligand assay data are fitted by logistic functions (the 4PL model, the 5PL model, Feldman's model). Preparing the initial estimates for the parameters of these functions is an important problem in processing receptor-ligand interaction data. This study presents a new mathematical approach for calculating initial estimates that lie closer to the true parameter values. The main idea is to use a modified linear least squares method to calculate the parameters of the 4PL model and Feldman's model. The convergence of the model parameters to the true values is verified on simulated data with different statistical scatter. Results of processing real data with the 4PL model and Feldman's model are also presented, and the parameter values calculated by the presented method are compared with those from a nonlinear method. The developed approach has demonstrated its efficiency in calculating the parameters of complex Feldman's models with up to 4 ligands and 4 sites.
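For reference, the 4PL curve is y = d + (a − d) / (1 + (x/c)^b). The sketch below illustrates the general idea of linearizing the logistic to seed a nonlinear fit; it is not the paper's exact modified linear least squares formulation, and the assay data are simulated:

```python
import numpy as np
from scipy.optimize import curve_fit

def fourpl(x, a, b, c, d):
    # a: response at zero dose, d: response at infinite dose,
    # c: inflection point (EC50), b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

x = np.array([0.1, 0.3, 1, 3, 10, 30, 100], float)
rng = np.random.default_rng(0)
y = fourpl(x, 100.0, 1.2, 5.0, 2.0) + rng.normal(0, 1.5, x.size)

# Initial estimates: fix a, d from the curve ends, then note that
# log((a - y) / (y - d)) = b*log(x) - b*log(c) is linear in log(x).
a0, d0 = y.max() + 1.0, y.min() - 1.0
z = np.log((a0 - y) / (y - d0))
b0, intercept = np.polyfit(np.log(x), z, 1)
p0 = (a0, b0, np.exp(-intercept / b0), d0)

popt, _ = curve_fit(fourpl, x, y, p0=p0)   # refine with nonlinear least squares
print("a, b, c, d =", np.round(popt, 2))
```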
17

Carrillo, Rafael E., Martin Leblanc, Baptiste Schubnel, Renaud Langou, Cyril Topfel, and Pierre-Jean Alet. "High-Resolution PV Forecasting from Imperfect Data: A Graph-Based Solution". Energies 13, no. 21 (November 3, 2020): 5763. http://dx.doi.org/10.3390/en13215763.

Abstract:
Operating power systems with large amounts of renewables requires predicting future photovoltaic (PV) production with fine temporal and spatial resolution. State-of-the-art techniques combine numerical weather predictions with statistical post-processing, but their resolution is too coarse for applications such as local congestion management. In this paper we introduce computing methods for multi-site PV forecasting, which exploit the intuition that PV systems provide a dense network of simple weather stations. These methods rely entirely on production data and address the real-life challenges that come with them, such as noise and gaps. Our approach builds on graph signal processing for signal reconstruction and for forecasting with a linear, spatio-temporal autoregressive (ST-AR) model. It also introduces a data-driven clear-sky production estimation for normalization. The proposed framework was evaluated over one year on both 303 real PV systems under commercial monitoring across Switzerland, and 1000 simulated ones based on high-resolution weather data. The results demonstrate the performance and robustness of the approach: with gaps of four hours on average in the input data, the average daytime NRMSE over a six-hour forecasting horizon (in 15 min steps) and over all systems is 13.8% and 9% for the real and synthetic data sets, respectively.
18

Carter, Jennifer L. W., Amit K. Verma, and Nishan M. Senanayake. "Harnessing Legacy Data to Educate Data-Enabled Structural Materials Engineers". MRS Advances 5, no. 7 (2020): 319–27. http://dx.doi.org/10.1557/adv.2020.132.

Abstract:
Data-driven materials design informed by legacy data-sets can enable the education of a new workforce, promote openness of the scientific process in the community, and advance our physical understanding of complex material systems. The performance of structural materials, which is controlled by competing factors of composition, grain size, particle size/distribution, and residual strain, cannot be modelled with single-mechanism physics. The design of an optimal processing route must account for the coupled nature of the creation of such factors, and requires students to learn machine learning and statistical modelling principles not taught in the conventional undergraduate or graduate level Materials Science and Engineering curricula. Therefore, modified curricula with opportunities for experiential learning are paramount for workforce development. Projects with real-world data provide an opportunity for students to establish fluency in the iterative steps needed to solve relevant scientific and engineering process design questions.
19

Xu, Ya Min. "Particle Size Measurements Using a New Signal Processing Method". Applied Mechanics and Materials 239-240 (December 2012): 1011–17. http://dx.doi.org/10.4028/www.scientific.net/amm.239-240.1011.

Abstract:
Transmission fluctuation spectrometry (TFS) is a new method for particle analysis based on the statistical fluctuations of a transmission signal. With a simple optical arrangement and easy operation, the method can be applied to real-time, online measurements. The fluctuating transmission signal is analyzed using first-order band-pass filters, and the experimental data are obtained in the frequency domain. The particle size distribution (PSD) and particle concentration are extracted from the experimental data with modified Chahine iterations. It is found that measurements using band-pass filters achieve better PSD resolution than those with low-pass filters.
20

Delorme, Arnaud, Tim Mullen, Christian Kothe, Zeynep Akalin Acar, Nima Bigdely-Shamlo, Andrey Vankov, and Scott Makeig. "EEGLAB, SIFT, NFT, BCILAB, and ERICA: New Tools for Advanced EEG Processing". Computational Intelligence and Neuroscience 2011 (2011): 1–12. http://dx.doi.org/10.1155/2011/130714.

Abstract:
We describe a set of complementary EEG data collection and processing tools recently developed at the Swartz Center for Computational Neuroscience (SCCN) that connect to and extend the EEGLAB software environment, a freely available and readily extensible processing environment running under Matlab. The new tools include (1) a new and flexible EEGLAB STUDY design facility for framing and performing statistical analyses on data from multiple subjects; (2) a neuroelectromagnetic forward head modeling toolbox (NFT) for building realistic electrical head models from available data; (3) a source information flow toolbox (SIFT) for modeling ongoing or event-related effective connectivity between cortical areas; (4) a BCILAB toolbox for building online brain-computer interface (BCI) models from available data, and (5) an experimental real-time interactive control and analysis (ERICA) environment for real-time production and coordination of interactive, multimodal experiments.
21

Jia, Er Hui, Bin Li, and Tao Zhang. "A Novel Baseline Correction Algorithm". Applied Mechanics and Materials 220-223 (November 2012): 2248–52. http://dx.doi.org/10.4028/www.scientific.net/amm.220-223.2248.

Abstract:
This paper studies baseline correction algorithms for subtracting the background of real-world signals. A novel baseline correction algorithm is proposed that can be solved by random signal processing. With respect to generalized statistical features of the raw data, an appropriate standard deviation threshold is set to reliably extract the true baseline points. In this generalized sense, the background at each signal point is substituted by the statistical features of its local window. Using the proposed algorithm, we establish a time-varying signal baseline independently and accurately. Performance evaluation shows that the proposed algorithm is more precise and more tolerant of real-world data than previous ones.
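A sketch of the local-window idea follows: points whose local standard deviation stays below a threshold are taken as true baseline points, and the baseline is interpolated between them. The threshold rule and window size here are illustrative choices, not the paper's:

```python
import numpy as np

def correct_baseline(y, window=25, k=1.5):
    """Subtract a time-varying baseline estimated from low-variance regions."""
    half = window // 2
    local_std = np.array([y[max(i - half, 0):i + half + 1].std()
                          for i in range(len(y))])
    is_base = local_std < k * np.median(local_std)       # illustrative threshold
    idx = np.arange(len(y))
    baseline = np.interp(idx, idx[is_base], y[is_base])  # fill under the peaks
    return y - baseline, baseline

t = np.linspace(0, 10, 500)
rng = np.random.default_rng(0)
peak = np.exp(-0.5 * ((t - 5) / 0.2) ** 2)             # one sharp peak
raw = peak + 0.3 * t + rng.normal(0, 0.01, t.size)     # drifting background + noise
corrected, base = correct_baseline(raw)
print(abs(corrected.max() - 1.0) < 0.2)                # peak height recovered
```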
22

Pulinets, Sergey, Dimitar Ouzounov, Dmitry Davidenko, and Pavel Budnikov. "Principles of organizing earthquake forecasting based on multiparameter sensor-WEB monitoring data". E3S Web of Conferences 196 (2020): 03004. http://dx.doi.org/10.1051/e3sconf/202019603004.

Abstract:
The paper describes an approach that, based on multiparameter monitoring of atmospheric and ionospheric parameters using ground-based and satellite measurements, selects from the data stream a time interval indicating the beginning of the final stage of earthquake preparation, and finally applies intelligent data processing to produce a short-term forecast for a time interval of 2 weeks to 1 day before the main shock. Based on the physical model of lithosphere-atmosphere-ionosphere coupling, precursors are selected whose ensemble is observed only during precursory periods; their identification rests on morphological features determined by the physical mechanism of their generation, not on amplitude selection based on statistical data processing. Using the developed prototype of the automatic processing service, the possibility of real-time monitoring of the situation in a seismically active region will be demonstrated for the territory of the Kamchatka region and the Kuril Islands.
23

K, Mahesh, M. V. Vijayakumar, and Y. H. Gangadharaiah. "A Statistical Analysis and Datamining Approach for Wind Speed Predication". INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 14, no. 2 (December 18, 2014): 5464–78. http://dx.doi.org/10.24297/ijct.v14i2.2077.

Abstract:
The wind power industry has seen unprecedented growth in the last few years. The surge in orders for wind turbines has resulted in a producers' market. This market imbalance, the relative immaturity of the wind industry, and rapid developments in data processing technology have created an opportunity to improve the performance of wind farms and change misconceptions surrounding their operations. This research offers a new paradigm for the wind power industry: data-driven modeling. Each wind mast generates extensive data for many parameters, registered as frequently as every minute. As the predictive performance approach is novel to the wind industry, it is essential to establish a viable research road map. This paper proposes a data-mining-based methodology for long-term wind forecasting, suitable for dealing with large real databases. The paper includes a case study based on a real database of five years of wind speed data for a site; the wind power density was determined using the Weibull and Rayleigh probability density functions, and the wind speed was predicted with a data-mining methodology using Artificial Neural Networks (ANN) and a PROLOG program designed to calculate the monthly mean wind speed.
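The Weibull part of the analysis is compact enough to sketch: fit the two-parameter Weibull to wind speeds and evaluate the mean power density 0.5·ρ·E[v³] = 0.5·ρ·c³·Γ(1 + 3/k). The sample below is simulated rather than mast data:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

# Hypothetical hourly wind speeds (m/s) over five years.
v = stats.weibull_min.rvs(2.0, scale=7.0, size=5 * 8760, random_state=0)

# Two-parameter Weibull fit (location fixed at 0, as is usual for wind speeds).
k, _, c = stats.weibull_min.fit(v, floc=0)

rho = 1.225                                       # air density, kg/m^3
p_density = 0.5 * rho * c**3 * gamma(1 + 3 / k)   # mean wind power density, W/m^2
print(f"k = {k:.2f}, c = {c:.2f} m/s, power density = {p_density:.0f} W/m^2")
```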
24

Vaci, Nemanja, Qiang Liu, Andrey Kormilitzin, Franco De Crescenzo, Ayse Kurtulmus, Jade Harvey, Bessie O'Dell, et al. "Natural language processing for structuring clinical text data on depression using UK-CRIS". Evidence Based Mental Health 23, no. 1 (February 2020): 21–26. http://dx.doi.org/10.1136/ebmental-2019-300134.

Abstract:
Background: Utilisation of routinely collected electronic health records from secondary care offers unprecedented possibilities for medical science research but can also present difficulties. One key issue is that medical information is presented as free-form text and, therefore, requires time commitment from clinicians to manually extract salient information. Natural language processing (NLP) methods can be used to automatically extract clinically relevant information. Objective: Our aim is to use natural language processing (NLP) to capture real-world data on individuals with depression from the Clinical Record Interactive Search (CRIS) clinical text to foster the use of electronic healthcare data in mental health research. Methods: We used a combination of methods to extract salient information from electronic health records. First, clinical experts define the information of interest and subsequently build the training and testing corpora for statistical models. Second, we built and fine-tuned the statistical models using active learning procedures. Findings: Results show a high degree of accuracy in the extraction of drug-related information. Contrastingly, a much lower degree of accuracy is demonstrated in relation to auxiliary variables. In combination with state-of-the-art active learning paradigms, the performance of the model increases considerably. Conclusions: This study illustrates the feasibility of using natural language processing models and proposes a research pipeline to be used for accurately extracting information from electronic health records. Clinical implications: Real-world, individual patient data are an invaluable source of information, which can be used to better personalise treatment.
25

Rodrigues, F. A. A., J. S. Nobre, R. Vigélis, V. Liesenberg, R. C. P. Marques, and F. N. S. Medeiros. "A Fast Approach for the Log-Cumulants Method Applied to Intensity SAR Image Processing". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W12-2020 (November 6, 2020): 499–503. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w12-2020-499-2020.

Abstract:
Abstract. Synthetic aperture radar (SAR) image processing and analysis rely on statistical modeling and parameter estimation of the probability density functions that characterize data. The method of log-cumulants (MoLC) is a reliable alternative for parameter estimation of SAR data models and image processing. However, numerical methods are usually applied to estimate parameters using MoLC, and it may lead to a high computational cost. Thus, MoLC may be unsuitable for real-time SAR imagery applications such as change detection and marine search and rescue, for example. Our paper introduces a fast approach to overcome this limitation of MoLC, focusing on parameter estimation of single-channel SAR data modeled by the G0I distribution. Experiments with simulated and real SAR data demonstrate that our approach performs faster than MoLC, while the precision of the estimation is comparable with that of the original MoLC. We tested the fast approach with multitemporal data and applied the arithmetic-geometric distance to real SAR images for change detection on the ocean. The experiments showed that the fast MoLC outperformed the original estimation method with regard to the computational time.
26

Bhatt, Nirav, and Amit Thakkar. "An efficient approach for low latency processing in stream data". PeerJ Computer Science 7 (March 10, 2021): e426. http://dx.doi.org/10.7717/peerj-cs.426.

Abstract:
Stream data is data that is generated continuously from different data sources, ideally defined as data that has no discrete beginning or end. Processing stream data is a part of big data analytics that aims at querying the continuously arriving data and extracting meaningful information from the stream. Although such streams were earlier processed with batch analytics, nowadays there are applications like the stock market, patient monitoring, and traffic analysis where it makes a drastic difference whether the output is generated within hours or within minutes. The primary goal of any real-time stream processing system is to process the stream data as soon as it arrives. Correspondingly, analytics of the stream data also needs to take surrounding dependent data into account. For example, stock market analytics results are often useless if we do not consider the associated or dependent parameters which affect the result. In a real-world application, these dependent stream data usually arrive from a distributed environment. Hence, the stream processing system has to be designed to deal with delays in the arrival of such data from distributed sources. We have designed a stream processing model which can deal with all the possible latency and provide an end-to-end low-latency system. We have performed stock market prediction considering affecting parameters, such as USD, oil price, and gold price, with an equal arrival rate. We have calculated the Normalized Root Mean Square Error (NRMSE), which simplifies the comparison among models with different scales. A comparative analysis of the experiments presented in the report shows a significant improvement in the result when the affecting parameters are considered. In this work, we have used a statistical approach to forecast the probability of possible data latency arising from distributed sources. Moreover, we have performed preprocessing of stream data to ensure at-least-once delivery semantics. In the direction of providing low latency in processing, we have also implemented exactly-once processing semantics. Extensive experiments have been performed with varying sizes of the window and data arrival rates. We conclude that system latency can be reduced when the window size is equal to the data arrival rate.
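The NRMSE used for the comparison has several common variants; the sketch below normalizes the RMSE by the observed range, which is one standard choice (the abstract does not spell out which normalization the authors use):

```python
import numpy as np

def nrmse(actual, predicted):
    """Root mean square error normalized by the range of the observations,
    making scores comparable across series with different scales."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / (actual.max() - actual.min())

stock = np.array([101.0, 103.5, 102.2, 104.8, 106.1])      # hypothetical closes
forecast = np.array([100.5, 103.0, 103.0, 104.0, 106.5])
print(round(nrmse(stock, forecast), 4))
```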
27

Stewart, Robert D., and Mick Watson. "poRe GUIs for parallel and real-time processing of MinION sequence data". Bioinformatics 33, no. 14 (March 9, 2017): 2207–8. http://dx.doi.org/10.1093/bioinformatics/btx136.
28

Roy Frieden, B. "Information and estimation in image processing". Proceedings, Annual Meeting, Electron Microscopy Society of America 45 (August 1987): 14–17. http://dx.doi.org/10.1017/s0424820100125142.

Abstract:
Despite the skill and determination of electro-optical system designers, the images acquired using their best designs often suffer from blur and noise. The aim of an “image enhancer” such as myself is to improve these poor images, usually by digital means, such that they better resemble the true, “optical object,” input to the system. This problem is notoriously “ill-posed,” i.e. any direct approach at inversion of the image data suffers strongly from the presence of even a small amount of noise in the data. In fact, the fluctuations engendered in neighboring output values tend to be strongly negative-correlated, so that the output spatially oscillates up and down, with large amplitude, about the true object. What can be done about this situation? As we shall see, various concepts taken from statistical communication theory have proven to be of real use in attacking this problem. We offer below a brief summary of these concepts.
29

Lachaud, Christian Michel, and Olivier Renaud. "A tutorial for analyzing human reaction times: How to filter data, manage missing values, and choose a statistical model". Applied Psycholinguistics 32, no. 2 (March 25, 2011): 389–416. http://dx.doi.org/10.1017/s0142716410000457.

Abstract:
This tutorial for the statistical processing of reaction times collected through a repeated-measures design is addressed to researchers in psychology. It aims at making explicit some important methodological issues, at orienting researchers to the existing solutions, and at providing them some evaluation tools for choosing the most robust and precise way to analyze their data. The methodological issues we tackle concern data filtering, missing values management, and statistical modeling (F1, F2, F1 + F2, quasi-F, mixed-effects models with hierarchical or with crossed factors). For each issue, references and remedy suggestions are given. In addition, modeling techniques are compared on real data and a benchmark is given for estimating the precision and robustness of each technique.
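One of the tutorial's modeling options, a mixed-effects model with a per-subject random effect, can be sketched with statsmodels; the filtering thresholds and the simulated repeated-measures data below are illustrative only:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated RTs (ms): 20 subjects x 2 conditions x 20 trials each.
rng = np.random.default_rng(0)
subj = np.repeat(np.arange(20), 40)
cond = np.tile(np.repeat(["easy", "hard"], 20), 20)
rt = (500 + (cond == "hard") * 60 + rng.normal(0, 30, 800)
      + np.repeat(rng.normal(0, 25, 20), 40))        # per-subject random offset
df = pd.DataFrame({"subject": subj, "condition": cond, "rt": rt})

# Filtering: absolute cutoffs, then a 3-SD distributional trim.
df = df[(df.rt > 200) & (df.rt < 2000)]
df = df[np.abs(df.rt - df.rt.mean()) < 3 * df.rt.std()]

# Mixed-effects model: fixed effect of condition, random intercept per subject.
fit = smf.mixedlm("rt ~ condition", df, groups=df["subject"]).fit()
print(fit.params["condition[T.hard]"])               # estimated condition effect
```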
30

Iyer, V., S. Shetty, and S. S. Iyengar. "Statistical Methods in AI: Rare Event Learning Using Associative Rules and Higher-Order Statistics". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-4/W2 (July 10, 2015): 119–30. http://dx.doi.org/10.5194/isprsannals-ii-4-w2-119-2015.

Abstract:
Rare event learning has not been actively researched until recently due to the unavailability of algorithms which deal with big samples. This research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from the perspective of real-time algorithms. The computing framework is independent of the number of input samples, the application domain, and whether streams are labelled or label-less. A sampling overlap algorithm such as Brooks-Iyengar is used for dealing with noisy sensor streams. We extend the existing noise pre-processing algorithms using data-cleaning trees. Pre-processing using an ensemble of trees with bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove that temporal-window-based sampling from sensor data streams converges after n samples using Hoeffding bounds, which can be used for fast prediction of new samples in real time. The data-cleaning tree model uses a nonparametric node splitting technique, which can be learned iteratively and scales linearly in memory consumption for any size of input stream. The improved task-based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. We show using empirical datasets that the explicit rule learning computation is linear in time and depends only on the number of leafs present in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields a minimum number (m) of leafs, keeping pre-processing computation to n × t log m, compared to N² for the Gram matrix. We also show that the task-based feature induction yields higher Quality of Data (QoD) in the feature space compared to kernel methods using the Gram matrix.
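The Hoeffding argument referenced here, in its classical i.i.d. form for samples bounded in [a, b] (the paper's bound for correlated spatio-temporal streams may be stated differently), gives the window size n needed for accuracy ε at confidence 1 − δ:

```latex
P\left(\left|\bar{X}_n - \mathbb{E}[\bar{X}_n]\right| \ge \epsilon\right)
  \le 2\exp\!\left(-\frac{2n\epsilon^2}{(b-a)^2}\right)
\quad\Longrightarrow\quad
n \ge \frac{(b-a)^2}{2\epsilon^2}\,\ln\frac{2}{\delta}.
```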
31

Qu, Hongchun, Qingsheng Zhu, Mingwei Guo, and Zhonghua Lu. "An Intelligent Learning Approach to L-Grammar Extraction from Image Sequences of Real Plants". International Journal on Artificial Intelligence Tools 18, no. 06 (December 2009): 905–27. http://dx.doi.org/10.1142/s0218213009000457.

Abstract:
In this paper, we propose an automatic analyzing and transforming approach to L-system grammar extraction from real plants. Instead of using manually designed rules and cumbersome parameters, our method establishes the relationship between L-system grammars and the iterative trend of botanical entities, which reflects the endogenous factors behind the plant branching process. To realize this goal, we use a digital camera to take multiple images of unfoliaged (leafless) plants and capture the topological and geometrical data of plant entities using image processing methods. The data are then stored in specific data structures. A hidden-Markov-based statistical model is then employed to reveal the hidden relations of plant entities, which have been classified into categories based on their statistical properties extracted by a classic EM algorithm; the hidden relations are integrated into the target L-system as grammars. Results show that our method is capable of automatically generating L-grammars for a given unfoliaged plant no matter what branching type it belongs to.
32

Hryniów, Krzysztof, and Andrzej Dzieliński. "Probabilistic Sequence Mining – Evaluation and Extension of ProMFS Algorithm for Real-Time Problems". International Journal of Electronics and Telecommunications 58, no. 4 (December 1, 2012): 323–26. http://dx.doi.org/10.2478/v10177-012-0044-0.

Abstract:
Sequential pattern mining is an extensively studied method for data mining. One of the newer and less documented approaches is the estimation of statistical characteristics of a sequence in order to create model sequences that can be used to speed up the process of sequence mining. This paper proposes extensive modifications to one such algorithm, ProMFS (a probabilistic algorithm for mining frequent sequences), which notably increase the algorithm's processing speed by significantly reducing its computational complexity. The new version of the algorithm is evaluated on real-life and artificial data sets and proven to be useful in real-time applications and problems.
33

Schmid, Claudia, Robert L. Molinari, Reyna Sabina, Yeun-Ho Daneshzadeh, Xiangdong Xia, Elizabeth Forteza, and Huiqin Yang. "The Real-Time Data Management System for Argo Profiling Float Observations". Journal of Atmospheric and Oceanic Technology 24, no. 9 (September 1, 2007): 1608–28. http://dx.doi.org/10.1175/jtech2070.1.

Abstract:
Abstract Argo is an internationally coordinated program directed at deploying and maintaining an array of 3000 temperature and salinity profiling floats on a global 3° latitude × 3° longitude grid. Argo floats are deployed from research vessels, merchant ships, and aircraft. After launch they sink to a prescribed pressure level (typically 1000–2000 dbar), where most floats remain for 10 days. The floats then return to the surface, collecting temperature and salinity profiles. At the surface they transmit the data to a satellite and sink again to repeat the cycle. As of 10 August 2006 there are 2489 floats reporting data. The International Argo Data Management Team oversees the development and implementation of the data management protocols of Argo. Two types of data systems are active—real time and delayed mode. The real-time system receives the transmissions from the Argo floats, extracts the data, checks their quality, and makes them available to the users. The objective of the real-time system is to provide Argo profiles to the operational and research community within 24 h of their measurement. This requirement makes it necessary to control the quality of the data automatically. The delayed-mode quality control is directed at a more detailed look at the profiles using statistical methods and scientific review of the data. In this paper, the real-time data processing and quality-control methodology is described in detail. Results of the application of these procedures to Argo profiles are described.
34

Suma, V. "Data Mining based Prediction of Demand in Indian Market for Refurbished Electronics". Journal of Soft Computing Paradigm 2, no. 2 (May 22, 2020): 101–10. http://dx.doi.org/10.36548/jscp.2020.2.007.

Abstract:
There has been increasing demand in the e-commerce market for refurbished products across India during the last decade. Despite this demand, very little research has been done in this domain. The real-world business environment, market factors, and the varying customer behavior of the online market are often ignored in the conventional statistical models evaluated by existing research. In this paper, we perform an extensive analysis of the Indian e-commerce market using a data-mining approach to predict demand for refurbished electronics. The impact of real-world factors on demand and its variables is also analyzed. Real-world datasets from three random e-commerce websites are considered for the analysis. Data accumulation, processing, and validation are carried out by means of efficient algorithms. Based on the results of this analysis, it is evident that highly accurate predictions can be made with the proposed approach despite the impacts of varying customer behavior and market factors. The results of the analysis are represented graphically and can be used for further analysis of the market and the launch of new products.
35

Suma, V. "Data Mining based Prediction of Demand in Indian Market for Refurbished Electronics". Journal of Soft Computing Paradigm 2, no. 3 (July 6, 2020): 153–59. http://dx.doi.org/10.36548/jscp.2020.3.002.

Abstract:
There has been increasing demand in the e-commerce market for refurbished products across India during the last decade. Despite this demand, very little research has been done in this domain. The real-world business environment, market factors, and the varying customer behavior of the online market are often ignored in the conventional statistical models evaluated by existing research. In this paper, we perform an extensive analysis of the Indian e-commerce market using a data-mining approach to predict demand for refurbished electronics. The impact of real-world factors on demand and its variables is also analyzed. Real-world datasets from three random e-commerce websites are considered for the analysis. Data accumulation, processing, and validation are carried out by means of efficient algorithms. Based on the results of this analysis, it is evident that highly accurate predictions can be made with the proposed approach despite the impacts of varying customer behavior and market factors. The results of the analysis are represented graphically and can be used for further analysis of the market and the launch of new products.
36

Farmanbar, Mina, and Chunming Rong. "Triangulum City Dashboard: An Interactive Data Analytic Platform for Visualizing Smart City Performance". Processes 8, no. 2 (February 24, 2020): 250. http://dx.doi.org/10.3390/pr8020250.

Abstract:
Cities are becoming smarter by incorporating hardware technology, software systems, and network infrastructure that provide Information Technology (IT) systems with real-time awareness of the real world. What makes a “smart city” functional is the combined use of advanced infrastructure technologies to deliver its core services to the public in a remarkably efficient manner. City dashboards have drawn increasing interest from both city operators and citizens. Dashboards can gather, visualize, analyze, and inform regional performance to support the sustainable development of smart cities. They provide useful tools for evaluating and facilitating urban infrastructure components and services. This work proposes an interactive web-based data visualization and data analytics toolkit supported by big data aggregation tools. The system proposed is a cloud-based prototype that supports visualization and real-time monitoring of city trends while processing and displaying large data sets on a standard web browser. However, it is capable of supporting online analysis processing by answering analytical queries and producing graphics from multiple resources. The aim of this platform is to improve communication between users and urban service providers and to give citizens an overall view of the city’s state. The conceptual framework and architecture of the proposed platform are explored, highlighting design challenges and providing insight into the development of smart cities. Moreover, results and the potential statistical analysis of important city services offered by the system are introduced. Finally, we present some challenges and opportunities identified through the development of the city data platform.
37

Bousbia-Salah, Assya, and Malika Talha-Kedir. "Time-Frequency Processing Method of Epileptic EEG Signals". Biomedical Engineering: Applications, Basis and Communications 27, no. 02 (March 17, 2015): 1550015. http://dx.doi.org/10.4015/s1016237215500155.

Abstract:
Wavelet transform decomposition of electroencephalogram (EEG) signals has been widely used for the analysis and detection of epileptic seizure of patients. However, the classification of EEG signals is still challenging because of high nonstationarity and high dimensionality. The aim of this work is an automatic classification of the EEG recordings by using statistical features extraction and support vector machine. From a real database, two sets of EEG signals are used: EEG recorded from a healthy person and from an epileptic person during epileptic seizures. Three important statistical features are computed at different sub-bands discrete wavelet and wavelet packet decomposition of EEG recordings. In this study, to select the best wavelet for our application, five wavelet basis functions are considered for processing EEG signals. After reducing the dimension of the obtained data by linear discriminant analysis and principal component analysis (PCA), feature vectors are used to model and to train the efficient support vector machine classifier. In order to show the efficiency of this approach, the statistical classification performances are evaluated, and a rate of 100% for the best classification accuracy is obtained and is compared with those obtained in other studies for the same dataset. However, this method is not meant to replace the clinician but can assist him for his diagnosis and reinforce his decision.
38

Yan, Jun, Junxia Meng, and Jianhu Zhao. "Real-Time Bottom Tracking Using Side Scan Sonar Data Through One-Dimensional Convolutional Neural Networks". Remote Sensing 12, no. 1 (December 20, 2019): 37. http://dx.doi.org/10.3390/rs12010037.

Abstract:
As one of the most commonly used acoustic systems in seabed surveys, the altitude of the side scan sonar from the seafloor is always difficult to determine, especially when raw signal levels and gain information are unavailable. The inaccurate sonar altitudes would limit the applications of sonar image geocoding, target detection, and sediment classification. The sonar altitude can be obtained by using bottom tracking methods, but traditional methods often require manual thresholds or complex post-processing procedures, which cannot ensure accurate and real-time bottom tracking. In this paper, a real-time bottom tracking method of side scan data is proposed based on a one-dimensional convolution neural network. First, according to the characteristics of side scan backscatter strength sequences, positive (bottom sequences) and negative (water column and seabed sequences) samples are extracted to establish the sample sets. Second, a one-dimensional convolution neural network is carefully designed and trained by using the sample set to recognize the bottom sequences. Third, a complete processing procedure of the real-time bottom tracking method is established by traversing each side scan ping data and recognizing the bottom sequences. The auxiliary methods for improving real-time performance and sample data augmentation are also explained in detail. The proposed method is implemented on the measured side scan data from the marine area in Meizhou Bay. The trained network model achieves a 100% recognition of the initial sample set as well as 100% bottom tracking accuracy of the training survey line. The average bottom tracking accuracy of the testing survey lines excluding missed pings reaches 99.2%. By comparison with multi-beam bathymetric data and the statistical analysis of real-time performance, the experimental results prove the validity and accuracy of the proposed real-time bottom tracking method.
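A minimal 1-D CNN of the kind described, classifying fixed-length backscatter-strength windows as bottom vs. non-bottom, can be sketched in PyTorch; the layer sizes and the 64-sample window are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class BottomNet(nn.Module):
    """Binary classifier for 1-D backscatter sequences (bottom vs. other)."""
    def __init__(self, seq_len=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (seq_len // 4), 2)

    def forward(self, x):                 # x: (batch, 1, seq_len)
        return self.classifier(self.features(x).flatten(1))

model = BottomNet()
pings = torch.randn(8, 1, 64)             # 8 hypothetical backscatter windows
print(model(pings).shape)                  # torch.Size([8, 2])
```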
39

Nalić, Jasmina, and Goran Martinovic. "Building a Credit Scoring Model Based on Data Mining Approaches". International Journal of Software Engineering and Knowledge Engineering 30, no. 02 (February 2020): 147–69. http://dx.doi.org/10.1142/s0218194020500072.

Abstract:
Nowadays, one of the biggest challenges in the banking sector is certainly the assessment of a client's creditworthiness. In order to improve the decision-making process and risk management, banks resort to data mining techniques for recognizing hidden patterns within wide-ranging data. The main objective of this study is to build a high-performance customized credit scoring model. The model, named Reliable Client, is based on the bank's real dataset and was originally built by applying four different classification algorithms: decision tree (DT), naive Bayes (NB), generalized linear model (GLM), and support vector machine (SVM). Since it showed the best results and seemed the most appropriate algorithm, the adopted model is based on the GLM algorithm. The results of this model are presented through many performance measures that showed great predictive confidence and accuracy, and we also demonstrate the significant impact of data pre-processing on model performance. Statistical analysis of the model identified the parameters most significant for the model outcome. Finally, the created credit scoring model was evaluated on another set of real data from the same bank.
40

Smith, William L., Elisabeth Weisz, Stanislav V. Kireev, Daniel K. Zhou, Zhenglong Li, and Eva E. Borbas. "Dual-Regression Retrieval Algorithm for Real-Time Processing of Satellite Ultraspectral Radiances". Journal of Applied Meteorology and Climatology 51, no. 8 (August 2012): 1455–76. http://dx.doi.org/10.1175/jamc-d-11-0173.1.

Abstract:
A fast physically based dual-regression (DR) method is developed to produce, in real time, accurate profile and surface- and cloud-property retrievals from satellite ultraspectral radiances observed for both clear- and cloudy-sky conditions. The DR relies on using empirical orthogonal function (EOF) regression "clear trained" and "cloud trained" retrievals of surface skin temperature, surface-emissivity EOF coefficients, carbon dioxide concentration, cloud-top altitude, effective cloud optical depth, and atmospheric temperature, moisture, and ozone profiles above the cloud and below thin or broken cloud. The cloud-trained retrieval is obtained using cloud-height-classified statistical datasets. The result is a retrieval with an accuracy that is much higher than that associated with the retrieval produced by the unclassified regression method currently used in the International Moderate Resolution Imaging Spectroradiometer/Atmospheric Infrared Sounder (MODIS/AIRS) Processing Package (IMAPP) retrieval system. The improvement results from the fact that the nonlinear dependence of spectral radiance on the atmospheric variables, which is due to cloud altitude and associated atmospheric moisture concentration variations, is minimized as a result of the cloud-height-classification process. The detailed method and results from example applications of the DR retrieval algorithm are presented. The new DR method will be used to retrieve atmospheric profiles from Aqua AIRS, MetOp Infrared Atmospheric Sounding Interferometer, and the forthcoming Joint Polar Satellite System ultraspectral radiance data.
41

Chen, Hsian-Min, Hung-Chieh Chen, Clayton Chi-Chang Chen, Yung-Chieh Chang, Yi-Ying Wu, Wen-Hsien Chen, Chiu-Chin Sung, Jyh-Wen Chai and San-Kan Lee. "Comparison of Multispectral Image-Processing Methods for Brain Tissue Classification in BrainWeb Synthetic Data and Real MR Images". BioMed Research International 2021 (7 March 2021): 1–12. http://dx.doi.org/10.1155/2021/9820145.

Annotation:
Accurate quantification of brain tissue is a fundamental and challenging task in neuroimaging. Over the past two decades, statistical parametric mapping (SPM) and FMRIB's Automated Segmentation Tool (FAST) have been widely used to estimate gray matter (GM) and white matter (WM) volumes. However, they cannot reliably estimate cerebrospinal fluid (CSF) volumes. To address this problem, we developed the TRIO algorithm (TRIOA), a new magnetic resonance (MR) multispectral classification method. SPM8, SPM12, FAST, and the TRIOA were evaluated using the BrainWeb database and real magnetic resonance imaging (MRI) data. In this paper, the MR brain images of 140 healthy volunteers (51.5 ± 15.8 y/o) were obtained using a whole-body 1.5 T MRI system (Aera, Siemens, Erlangen, Germany). Before classification, several preprocessing steps were performed, including skull stripping and motion and inhomogeneity correction. After extensive experimentation, the TRIOA was shown to be more effective than SPM and FAST. For real data, all test methods revealed that the participants aged 20–83 years exhibited an age-associated decline in GM and WM volume fractions. For CSF volume estimation, however, SPM8-s and SPM12-m produced differing results, which also differed from those obtained by FAST and the TRIOA. Furthermore, the TRIOA performed consistently better than both SPM and FAST for GM, WM, and CSF volume estimation. Compared with SPM and FAST, the proposed TRIOA provides more accurate MR brain tissue classification and volume measurements, specifically in CSF volume estimation.
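
The TRIOA itself is specified in the paper; as a generic stand-in for multispectral MR tissue classification, the sketch below fits a three-class Gaussian mixture to per-voxel feature vectors (e.g., co-registered T1/T2/PD intensities, here synthetic) and reports class volume fractions:

    # Generic multispectral tissue-classification sketch (not the TRIOA):
    # cluster per-voxel multispectral intensities into three classes and
    # compute volume fractions. Mapping clusters to GM/WM/CSF would need
    # anatomical priors; all values here are synthetic.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    voxels = rng.normal(size=(10000, 3))   # stand-in T1, T2, PD intensities

    gmm = GaussianMixture(n_components=3, random_state=0).fit(voxels)
    labels = gmm.predict(voxels)
    fractions = np.bincount(labels, minlength=3) / labels.size
    print("class volume fractions:", fractions)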
42

Pandey, Manoj, J. S. Ubhi and Kota Solomon Raju. "Computational Acceleration of Real-Time Kernel-Based Tracking System". Journal of Circuits, Systems and Computers 25, No. 04 (2 February 2016): 1650030. http://dx.doi.org/10.1142/s0218126616500304.

Annotation:
Object tracking in real time is one application of video processing where the required computational cost is high due to intensive data processing. To address this problem, this paper presents an embedded solution in which a Hardware/Software (HW/SW) co-design architecture is used to implement the well-known kernel-based tracking system. In this algorithm, the target is searched for in consecutive frames by maximizing the statistical match, estimated as the similarity of color distributions. The whole tracking system is implemented on a low-cost Field Programmable Gate Array (FPGA) device with an image resolution of 1280×720 pixels and a target window size of 160×80 pixels. The HW/SW co-design architecture is proposed to accelerate the computational speed of the system. The performance of the system is evaluated in terms of execution speed and frame rate compared with a software-based implementation, and the hardware cost of the design is compared with other existing methods. The proposed design achieves a 22-fold computational speed-up and a maximum of 60 frames per second (FPS) compared with the software-based design.
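
The statistical match maximized by kernel-based tracking is commonly the Bhattacharyya coefficient between the target's color histogram and that of a candidate window; the tracker shifts the window toward the position that maximizes it. A minimal sketch with synthetic window contents:

    # Core similarity measure of kernel-based (mean-shift style) tracking:
    # the Bhattacharyya coefficient between two colour histograms.
    import numpy as np

    def colour_hist(patch, bins=16):
        h, _ = np.histogram(patch, bins=bins, range=(0, 256))
        return h / h.sum()                  # normalized histogram

    def bhattacharyya(p, q):
        return np.sum(np.sqrt(p * q))       # 1.0 means identical distributions

    rng = np.random.default_rng(3)
    target = rng.integers(0, 256, size=(80, 160))    # 160x80 target window
    candidate = np.clip(target + rng.integers(-5, 6, size=target.shape), 0, 255)
    print(bhattacharyya(colour_hist(target), colour_hist(candidate)))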
43

Li, Na, Xinchen Huang, Huijie Zhao, Xianfei Qiu, Kewang Deng, Guorui Jia, Zhenhong Li, David Fairbairn and Xuemei Gong. "A Combined Quantitative Evaluation Model for the Capability of Hyperspectral Imagery for Mineral Mapping". Sensors 19, No. 2 (15 January 2019): 328. http://dx.doi.org/10.3390/s19020328.

Annotation:
To analyze the factors that influence hyperspectral remote sensing data processing, and to quantitatively evaluate the application capability of hyperspectral data, a combined evaluation model based on the physical process of imaging and on statistical analysis is proposed. The normalized average distance between different classes of ground cover is selected as the evaluation index. The proposed model considers the influence factors of the full radiation transmission process and of the processing algorithms. First- and second-order statistical characteristics (mean and covariance) were applied to calculate the changes introduced by the imaging process, based on radiation energy transfer. The statistical analysis was linked to the remote sensing process and the application performance, which consist of the imaging system parameters and imaging conditions, by building imaging system and processing models. The season (solar zenith angle), sensor parameters (ground sampling distance, modulation transfer function, spectral resolution, spectral response function, and signal-to-noise ratio), and number of features were considered in order to analyze the factors influencing the application capability level. Simulated and real data collected by Hymap in the Dongtianshan area (Xinjiang Province, China) were used to estimate the proposed model's performance in the application of mineral mapping. The predicted application capability of the proposed model is consistent with the theoretical analysis.
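
The paper's exact index definition is not reproduced here; assuming a mean-difference-over-pooled-spread form, a normalized average inter-class distance can be sketched as follows, with synthetic stand-ins for the class samples:

    # Sketch of a normalized average distance between ground-cover classes
    # in feature space, averaged over all class pairs. The formula is an
    # assumption, not the paper's exact definition; data are synthetic.
    import numpy as np
    from itertools import combinations

    def normalized_distance(a, b):
        return np.abs(a.mean(0) - b.mean(0)).sum() / (a.std(0) + b.std(0)).sum()

    rng = np.random.default_rng(4)
    classes = [rng.normal(loc=m, size=(200, 50)) for m in (0.0, 0.5, 1.2)]  # 50 bands
    index = np.mean([normalized_distance(a, b)
                     for a, b in combinations(classes, 2)])
    print("normalized average inter-class distance:", index)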
44

Promyslov, Vitaly, and Kirill Semenkov. "Non-Statistical Method for Validation the Time Characteristics of Digital Control Systems with a Cyclic Processing Algorithm". Mathematics 9, No. 15 (22 July 2021): 1732. http://dx.doi.org/10.3390/math9151732.

Annotation:
The paper discusses the problem of performance and timing parameters with respect to the validation of digital instrumentation and control (I&C) systems. Statistical methods often implicitly assume that the probability distribution law of the estimated parameters is close to normal, and confidence intervals for the parameters are determined on the grounds of this assumption. However, we encountered cases where the delay distribution law in I&C is not normal. In these cases, we used the non-statistical network calculus method for time-parameter estimation. The network calculus method is well elaborated for lossless digital system models with a seamless processing algorithm that depends only on data volume. We consider the extension of the method to I&C systems with considerable changes in the data flow and content-dependent processing disciplines. The model is restricted to systems with cyclic processing algorithms and fast network connections. Network calculus describes the data flow and system parameters in terms of flow envelopes and service curves that are generally unknown in advance. In this paper, we define equations that allow these characteristics to be calculated from experimental data. The correspondence between network calculus and classical statistical estimation methods is discussed. Additionally, we give an example of applying the model to a real I&C system.
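
In network-calculus terms, a token-bucket arrival envelope alpha(t) = b + r*t fitted to measured arrivals, together with an assumed rate-latency service curve beta(t) = R*max(t - T, 0), yields the classical delay bound D <= T + b/R whenever r <= R. A sketch on a synthetic packet trace:

    # Network-calculus sketch: estimate a token-bucket envelope from
    # measured packet timestamps/sizes, then bound delay under an assumed
    # rate-latency service curve. Trace and service parameters are synthetic.
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.sort(rng.uniform(0, 10.0, 200))     # packet arrival times (s)
    size = rng.uniform(100, 1500, 200)         # packet sizes (bytes)
    cum = np.cumsum(size)

    r = cum[-1] / t[-1]                        # sustained-rate estimate
    b = max(np.max(cum - r * t), size.max())   # burst so the envelope dominates

    R, T = 2 * r, 0.01                         # assumed service-curve parameters
    delay_bound = T + b / R                    # classical bound for r <= R
    print(f"envelope: b={b:.0f} B, r={r:.0f} B/s; delay bound {delay_bound*1e3:.2f} ms")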
45

Camacho-Olarte, Jaiber, and Diego Alexander Tibaduiza Burgos. "Real Application For A Data Acquisition System From Sensors Based On Embedded Systems". Engineering Proceedings 2, No. 1 (14 November 2020): 25. http://dx.doi.org/10.3390/ecsa-7-08275.

Annotation:
Data acquisition systems are one of the main components of the sensing and remote monitoring strategies required in a real process. Normally, data acquisition is performed through commercial solutions adapted to a specific application, with expansion capabilities tied to the products (HW/SW) of the same company, which limits the possibilities for expansion. As a contribution to solving this problem, a hardware development project based on embedded systems and focused on the Internet of Things was designed. A data acquisition system was proposed and validated through a real application using the prototype built, monitoring variables of a photovoltaic system, such as voltage and current, to analyze the behavior of the solar panels. Testing and evaluation of the prototype were carried out in several experiments in which the most common failures of a photovoltaic plant were emulated; the recorded data were found to provide the information necessary to identify the moments at which the monitored system presents problems. In this way, it was shown that the developed system can be used as a remote monitoring system, since the information that the device acquires through the current and voltage sensors can be sent to a server through an Internet connection for data processing, graph generation, or statistical analysis according to the requirements. These features allow a friendly presentation of the data to an end user.
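
The acquisition loop of such a system reduces to sampling the sensors and pushing readings to a server. A minimal sketch in which read_sensor() and the endpoint URL are hypothetical placeholders for the embedded hardware and the authors' backend:

    # Acquisition-loop sketch: sample voltage/current and POST readings
    # to a server for processing. read_sensor() and ENDPOINT are
    # hypothetical stand-ins, not the paper's implementation.
    import time, random
    import requests

    ENDPOINT = "http://example.com/api/pv-readings"   # placeholder URL

    def read_sensor():
        # stand-in for the ADC drivers on the embedded board
        return {"voltage_v": 18.0 + random.uniform(-0.5, 0.5),
                "current_a": 2.0 + random.uniform(-0.2, 0.2)}

    while True:
        sample = read_sensor()
        sample["ts"] = time.time()
        try:
            requests.post(ENDPOINT, json=sample, timeout=5)
        except requests.RequestException:
            pass          # buffering/retry logic would go here
        time.sleep(10)    # sampling period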
46

Xu, Wenbin, and Chengbo Yin. "Adaptive Language Processing Based on Deep Learning in Cloud Computing Platform". Complexity 2020 (19 June 2020): 1–11. http://dx.doi.org/10.1155/2020/5828130.

Annotation:
With the continuous advancement of technology, the amount of information and knowledge disseminated on the Internet every day keeps multiplying, and a large amount of bilingual data is produced in the real world. These data are undoubtedly a great asset for statistical machine translation research. Building on sentence-pair quality screening, two corpus screening strategies are first proposed: one based on the sentence-pair length ratio and one based on word-alignment information. The innovation of these two methods is that no additional linguistic resources, such as bilingual dictionaries or syntactic analyzers, are needed as auxiliaries; no manual intervention is required; poor-quality sentence pairs can be selected out automatically; and the approach can be applied to any language pair. Secondly, a domain-adaptive method based on a massive corpus is proposed. It uses the massive-corpus mechanism to carry out automatic model migration across multiple domains: each domain learns its intradomain model independently, while all domains share the same general model. Through the massive-corpus method, these models can be combined and adjusted to make model learning more accurate. Finally, the adaptive method of massive corpus filtering and statistical machine translation based on the cloud platform is verified. Experiments show that both methods are effective and can improve the translation quality of statistical machine translation.
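
The length-ratio filter is the simpler of the two screening strategies: a sentence pair is kept only if the source/target length ratio stays within a band. A sketch with an assumed threshold, not the paper's tuned value:

    # Length-ratio corpus filter sketch: discard bilingual sentence pairs
    # whose word-count ratio falls outside [lo, hi]. Thresholds assumed.
    def length_ratio_ok(src, tgt, lo=0.5, hi=2.0):
        ls, lt = max(len(src.split()), 1), max(len(tgt.split()), 1)
        return lo <= ls / lt <= hi

    pairs = [("the cat sat on the mat", "die Katze sass auf der Matte"),
             ("hello", "dies ist ein sehr langer unpassender Satz")]
    kept = [p for p in pairs if length_ratio_ok(*p)]
    print(kept)   # the badly matched second pair is filtered out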
47

Satyanarayana, A. N., B. Chandrashekara Rao, D. Lalitha and B. Lakshmi. "MODIS data acquisition and utilization for forest fire management in India". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-8 (23 December 2014): 1383–87. http://dx.doi.org/10.5194/isprsarchives-xl-8-1383-2014.

Annotation:
The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument onboard the Terra and Aqua spacecraft scans the earth in 36 spectral bands and covers the entire earth in two days. MODIS data have proved very useful for ocean and land studies, with resolutions ranging from 250 m to 1000 m. The data reception system at Shadnagar (see the block diagram in Fig. 1) receives the data transmitted from the Aqua satellite in X-band on an 8160 MHz SQPSK-modulated carrier at a data rate of 15 Mbps. The down-converted IF signal is fed to the demodulator and bit-synchronizer unit. The data and clock output signals of the bit-synchronizer unit are given to a PC-based DAQLB system, where real-time telemetry processing is carried out and data are recorded onto hard disk in real time. The effectiveness of the system in supporting forest fire management during 2011, 2012, 2013 and 2014 is also presented in the paper, together with near-real-time active fire monitoring, interactive fire visualization, and fire database and statistical analysis functions. Preliminary results on upgrading the satellite receiving system and on expanding the utilization of satellite data for multi-disciplinary resources management are also presented and discussed.
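
Active-fire detection on such data is, in its simplest form, a brightness-temperature threshold test. The sketch below is an illustrative simplification in the spirit of the MODIS fire products, not the operational contextual algorithm, and runs on synthetic brightness temperatures:

    # Highly simplified active-fire test: flag a pixel when its 4-micron
    # brightness temperature and the 4/11-micron difference exceed
    # thresholds. Thresholds illustrative; data synthetic.
    import numpy as np

    rng = np.random.default_rng(7)
    T4 = rng.normal(300, 5, size=(100, 100))   # 4 um brightness temps (K)
    T11 = rng.normal(295, 4, size=(100, 100))  # 11 um brightness temps (K)
    T4[40, 40], T11[40, 40] = 365.0, 310.0     # inject one hot spot

    fire = (T4 > 360.0) & ((T4 - T11) > 10.0)  # simple daytime-style test
    print("fire pixels:", np.argwhere(fire))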
48

Li, Jiwei. "Design and implementation of distributed asynchronous data aided computer information interaction system". Journal of Intelligent & Fuzzy Systems 39, No. 6 (4 December 2020): 9007–14. http://dx.doi.org/10.3233/jifs-189299.

Annotation:
The prevention and control of the novel coronavirus pneumonia epidemic has placed higher requirements on data storage and processing in personnel management systems. A distributed asynchronous data-aided computer information interaction system can solve the problem of concurrent data processing across multiple nodes, whereas traditional computer information interaction systems suffer from poor real-time performance, low precision and weak asynchronous data-processing ability. This paper combines the invocation features of the message-queue asynchronous caching mode with the standardization, cross-language and cross-platform access features of Web services. Through the combination of the two technologies, a flexible and universal asynchronous interaction architecture for distributed systems is established. Based on Web service technology and system-to-system access, task calls and responses between modules make the interaction across the whole system message-driven. Test results show that the proposed system has good real-time performance and strong data-processing ability, and that it is suitable for the data interaction of distributed personnel management systems under epidemic prevention and control.
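
The message-driven pattern the paper builds on can be sketched with in-process queues standing in for the message-queue middleware: modules enqueue task messages and consume response messages instead of making blocking calls, so concurrent requests are buffered and processed asynchronously:

    # Message-driven interaction sketch: a worker thread stands in for a
    # service node; queues stand in for the message-queue middleware.
    import threading, queue

    tasks, results = queue.Queue(), queue.Queue()

    def worker():
        while True:
            msg = tasks.get()
            if msg is None:            # shutdown sentinel
                break
            results.put({"id": msg["id"], "answer": msg["payload"].upper()})
            tasks.task_done()

    threading.Thread(target=worker, daemon=True).start()
    for i, text in enumerate(["status", "report"]):
        tasks.put({"id": i, "payload": text})   # non-blocking "call"
    tasks.join()
    while not results.empty():
        print(results.get())                    # asynchronous "response"
    tasks.put(None)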
49

Golomolzina, Diana Rashidovna, Maxim Alexandrovich Gorodnichev, Evgeny Andreevich Levin, Alexander Nikolaevich Savostyanov, Ekaterina Pavlovna Yablokova, Arthur C. Tsai, Mikhail Sergeevich Zaleshin et al. "Advanced Electroencephalogram Processing". International Journal of E-Health and Medical Communications 5, No. 2 (April 2014): 49–69. http://dx.doi.org/10.4018/ijehmc.2014040103.

Annotation:
The study of electroencephalography (EEG) data can involve independent component analysis and further clustering of the components according to their relation to certain processes in the brain or to external sources of electricity, such as muscular motion impulses, electrical fields induced by power mains, and electrostatic discharges. At present, known methods for clustering components are costly, because they require additional measurements, for example with magnetic resonance imaging (MRI), or have accuracy restrictions if only EEG data are analyzed. A new method and algorithm for the automatic clustering of physiologically similar but statistically independent EEG components is described in this paper. The developed clustering algorithm was compared with the algorithms implemented in the EEGLab toolbox. The paper contains results of testing the algorithms on real EEG data obtained under two experimental tasks: voluntary movement control under the stop-signal paradigm, and syntactic error recognition in written sentences. The experimental evaluation demonstrated more than 90% correspondence between the results of automatic clustering and clustering made by an expert physiologist.
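
The overall pipeline (ICA unmixing, then clustering of components by features) can be sketched as below; the variance/kurtosis features are an illustrative stand-in for the physiologically informed features the paper's algorithm actually uses:

    # EEG pipeline sketch: unmix channels with FastICA, describe each
    # independent component by simple features, then cluster the
    # components. Data are synthetic; features are illustrative.
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)
    eeg = rng.normal(size=(2000, 16))      # samples x channels (synthetic)

    sources = FastICA(n_components=8, random_state=0).fit_transform(eeg)
    features = np.column_stack([sources.var(axis=0),
                                kurtosis(sources, axis=0)])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    print("component cluster labels:", labels)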
50

Michal, Peter, Alena Vagaská, Miroslav Gombár and Ján Kmec. "Mathematical Modelling and Optimization of Technological Process Using Design of Experiments Methodology". Applied Mechanics and Materials 616 (August 2014): 61–68. http://dx.doi.org/10.4028/www.scientific.net/amm.616.61.

Annotation:
The paper deals with the application of statistical methods to evaluating the relationships between the investigated range of input factors and the response in a longitudinal turning process. Our research was aimed at creating a model of the real effects of cutting conditions on the machined surface morphology, applying longitudinal turning of C45 steel with specific parameter values. Design of experiments (DoE) has found increasingly wide application in creating mathematical and statistical models of technological processes. The main part of the paper therefore demonstrates the procedure for statistically processing experimentally obtained data in order to create a prediction model and compare it with theoretical calculation formulas.
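
The DoE procedure amounts to fitting a low-order polynomial prediction model to responses measured on a factorial design. A sketch with two turning factors and invented response values (not the paper's measurements):

    # DoE-style sketch: second-order prediction model fitted to a small
    # full-factorial design. Factor names follow the turning context;
    # the roughness responses are invented for illustration.
    import numpy as np
    from itertools import product
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    speeds = [100.0, 150.0, 200.0]       # cutting speed, m/min
    feeds = [0.1, 0.2, 0.3]              # feed, mm/rev
    X = np.array(list(product(speeds, feeds)))
    Ra = np.array([1.9, 2.6, 3.4, 1.6, 2.2, 3.0, 1.4, 2.0, 2.7])  # invented, um

    model = LinearRegression().fit(PolynomialFeatures(2).fit_transform(X), Ra)
    x_new = PolynomialFeatures(2).fit_transform([[170.0, 0.15]])
    print("predicted Ra:", model.predict(x_new))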