Journal articles on the topic 'Common accuracy of the model'

To see the other types of publications on this topic, follow the link: Common accuracy of the model.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Common accuracy of the model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Hung, Lai-Fa. "A Negative Binomial Regression Model for Accuracy Tests." Applied Psychological Measurement 36, no. 2 (January 24, 2012): 88–103. http://dx.doi.org/10.1177/0146621611429548.

Full text
Abstract:
Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an overdispersion framework and proposes new estimation methods. The parameters in the proposed model can be estimated using the Markov chain Monte Carlo method implemented in WinBUGS and the marginal maximum likelihood method implemented in SAS. An empirical example is examined, and models generated from the empirical data are fitted and discussed.
APA, Harvard, Vancouver, ISO, and other styles
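As a rough illustration of the overdispersion the abstract describes (a sketch, not code from the paper, with hypothetical parameter values), the following draws counts from a Gamma-Poisson mixture, which is exactly the negative binomial distribution, and recovers the dispersion parameter by the method of moments:

```python
import math
import random
import statistics

random.seed(42)

def poisson_sample(lam):
    # Knuth's inversion method; adequate for modest rates
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            return k
        k += 1

def neg_binomial_sample(r, p, n):
    # Gamma-Poisson mixture: a Poisson whose rate is itself Gamma-distributed
    # yields negative binomial counts, for which variance exceeds the mean.
    return [poisson_sample(random.gammavariate(r, (1 - p) / p))
            for _ in range(n)]

counts = neg_binomial_sample(r=2.0, p=0.4, n=5000)
m, v = statistics.mean(counts), statistics.variance(counts)

# Method-of-moments dispersion estimate from v = m + m**2 / r
r_hat = m * m / (v - m)
```

For Poisson data the variance would match the mean; here `v` clearly exceeds `m`, which is the situation motivating the negative binomial extension of the Rasch model.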
2

Semenov, Serhii, Liqiang Zhang, Weiling Cao, Serhii Bulba, Vira Babenko, and Viacheslav Davydov. "Development of a fuzzy GERT-model for investigating common software vulnerabilities." Eastern-European Journal of Enterprise Technologies 6, no. 2 (114) (December 29, 2021): 6–18. http://dx.doi.org/10.15587/1729-4061.2021.243715.

Abstract:
This paper has determined the relevance of the issue related to improving the accuracy of the results of mathematical modeling of the software security testing process. The fuzzy GERT-modeling methods have been analyzed. The necessity and possibility of improving the accuracy of the results of mathematical formalization of the process of studying software vulnerabilities under the conditions of fuzziness of input and intermediate data have been determined. To this end, based on the mathematical apparatus of fuzzy network modeling, a fuzzy GERT model has been built for investigating software vulnerabilities. A distinctive feature of this model is that it takes into consideration the probabilistic characteristics of transitions from state to state along with time characteristics. As part of the simulation, the following stages of the study were performed. To schematically describe the procedures for studying software vulnerabilities, a structural model of this process was constructed. A "reference GERT model" was developed for investigating software vulnerabilities. The process was described in the form of a standard GERT network. The algorithm of equivalent transformations of the GERT network has been improved; it differs from known algorithms by considering an extended range of typical structures of parallel branches between neighboring nodes. Analytical expressions are presented to calculate the average time spent in the branches and the probability of successful completion of studies in each node. These probabilistic-temporal characteristics were calculated in accordance with data on the simplified equivalent fuzzy GERT network for the process of investigating software vulnerabilities. Comparative studies were conducted to confirm the accuracy and reliability of the results obtained.
The results of the experiment showed that in comparison with the reference model, the fuzziness of the input characteristic of the time of conducting studies of software vulnerabilities was reduced, which made it possible to improve the accuracy of the simulation results.
3

Gray, Samuel H. "Gaussian beam migration of common-shot records." GEOPHYSICS 70, no. 4 (July 2005): S71—S77. http://dx.doi.org/10.1190/1.1988186.

Abstract:
Gaussian beam migration is a depth migration method whose accuracy rivals that of migration by wavefield extrapolation — so-called “wave-equation migration” — and whose efficiency rivals that of Kirchhoff migration. This migration method can image complicated geologic structures, including very steep dips, in areas where the seismic velocity varies rapidly. However, applications of prestack Gaussian beam migration either have been limited to common-offset common-azimuth data volumes, and thus are inflexible, or suffer from multiarrival inaccuracies in a common-shot implementation. In order to optimize both the flexibility and accuracy of Gaussian beam migration, I present a common-shot implementation that handles multipathing in a natural way. This allows the migration of data sets that can include a variety of azimuths, and it allows a simplified treatment of near-surface issues. Application of this method to model data typical of Canadian Foothills structures and to model data that includes a complicated salt body demonstrates the accuracy and versatility of the migration.
4

Wang, Jian Hua, Zhao Yang, and Yu Ping Wu. "Calibration Accuracy and Reconstruction Accuracy of Stereovision System." Applied Mechanics and Materials 321-324 (June 2013): 1499–503. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.1499.

Abstract:
Error in stereovision reconstruction comes from feature extraction, correspondence, and calibration. This paper focuses on the relation between the reconstruction accuracy and the calibration accuracy of a stereovision system. A model of a stereovision system is established, consisting of two cameras whose coordinate frames are not parallel. An array of points in the common field of view of the two cameras is projected onto the image planes of the left and right cameras, forming the left and right images. After the calibration parameters of the stereovision system are changed, including the intrinsic parameters of the cameras and their relative position and pose, the array of points is reconstructed and compared with the original positions. The main factors affecting the reconstruction errors are discussed.
5

LAYTON, WILLIAM, and ROGER LEWANDOWSKI. "A HIGH ACCURACY LERAY-DECONVOLUTION MODEL OF TURBULENCE AND ITS LIMITING BEHAVIOR." Analysis and Applications 06, no. 01 (January 2008): 23–49. http://dx.doi.org/10.1142/s0219530508001043.

Abstract:
In 1934, J. Leray proposed a regularization of the Navier–Stokes equations whose limits were weak solutions of the Navier–Stokes equations. Recently, a modification of the Leray model, called the Leray-alpha model, has attracted interest for turbulent flow simulations. One common drawback of the Leray-type regularizations is their low accuracy. Increasing the accuracy of a simulation based on a Leray regularization requires cutting the averaging radius, i.e. remeshing and resolving on finer meshes. This article analyzes a family of Leray-type models of arbitrarily high orders of accuracy for a fixed averaging radius. We establish the basic theory of the entire family including limiting behavior as the averaging radius decreases to zero (a simple extension of results known for the Leray model). We also give a more technically interesting result on the limit as the order of the models increases with a fixed averaging radius. Because of this property, increasing the accuracy of the model is potentially cheaper than decreasing the averaging radius (or meshwidth), and high-order models are doubly interesting.
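For orientation, a standard statement of the Leray regularization and of the deconvolution operator underlying such models (a textbook formulation, not reproduced from the article) is:

$$ u_t + (\bar u \cdot \nabla)u - \nu \Delta u + \nabla p = f, \qquad \bar u = (I - \alpha^2 \Delta)^{-1} u, $$

and the order-$N$ Leray-deconvolution model advects by $D_N \bar u$, where $D_N = \sum_{n=0}^{N}(I - G)^n$ is the van Cittert deconvolution operator for the filter $G u = \bar u$, so that $u - D_N \bar u = O(\alpha^{2N+2})$ for smooth $u$: accuracy increases with $N$ at a fixed averaging radius $\alpha$.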
6

Tucki, Karol, Anna Bączyk, Bartłomiej Rek, and Izabela Wielewska. "The CFD Analysis of the Combustion Chamber in Common Rail Engines." MATEC Web of Conferences 252 (2019): 04001. http://dx.doi.org/10.1051/matecconf/201925204001.

Abstract:
This paper reports on the development of a geometrical model of the combustion chamber. The results obtained from tests were applied to numerical simulations performed on the Farymann 18WM engine. The analysis was carried out in the ANSYS Fluent environment using the finite volume method. Based on the results, it can be stated that: (1) differences between test and simulation results arise from grid limitations, the accuracy of the calculation models, and the simplifications and limitations of the models; (2) numerical simulations are helpful in determining the parameters of test objects without the need for a test set-up; and (3) unrestrained adjustment of simulation parameters enables modification of the technical parameters of devices to assess their impact on the particular model.
7

Fu, Xiao Lei, Bao Ming Jin, Xiao Lei Jiang, and Cheng Chen. "One-dimensional soil temperature assimilation experiment based on unscented particle filter and Common Land Model." E3S Web of Conferences 38 (2018): 03009. http://dx.doi.org/10.1051/e3sconf/20183803009.

Abstract:
Data assimilation is an efficient way to improve simulation/prediction accuracy in many fields of the geosciences, especially in meteorological and hydrological applications. This study takes the unscented particle filter (UPF) as an example and tests its performance under two different probability distributions for the observation error, Gaussian and uniform, in two experiments with different assimilation frequencies: (1) assimilating hourly in situ soil surface temperature, and (2) assimilating the original Moderate Resolution Imaging Spectroradiometer (MODIS) Land Surface Temperature (LST) once per day. The numerical experiment results show that the filter performs better when the assimilation frequency is increased. In addition, the UPF is efficient at improving the simulation/prediction accuracy of soil variables (e.g., soil temperature), though it is not sensitive to the probability distribution of the observation error in soil temperature assimilation.
8

McMeekin, Kevin, Frédéric Sirois, Maxime Tousignant, and Philippe Bocher. "Improving the accuracy of time-harmonic FE simulations in induction heating applications." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 36, no. 2 (March 6, 2017): 526–34. http://dx.doi.org/10.1108/compel-05-2016-0203.

Abstract:
Purpose: Surface heat treatment by induction heating (10-100 kHz) requires precise prediction and control of the depth of the induced phase transformation. This paper aims at identifying common issues with the measurement and modeling of magnetic properties used in induction heating simulations, and it proposes ways to improve the situation.
Design/methodology/approach: In particular, it is demonstrated how the intrinsic magnetic properties (i.e. the B-H curve) of a sample can change during the magnetic characterization process itself, due to involuntary annealing of the sample. Then, for a B-H curve assumed to be perfectly known, a comparison is performed between multiple models, each one representing the magnetic properties of steel in time-harmonic (TH) finite element method simulations. Finally, a new model called the "power-equivalent model" is proposed. This model provides the best possible accuracy for a known nonlinear and hysteretic B-H curve used in TH simulations.
Findings: By carefully following the guidelines identified in this paper, error reductions in the range of 5-10 per cent can be achieved, both at the experimental and modeling levels. The new power-equivalent model is also expected to be more generic than existing models.
Originality/value: This paper highlights common pitfalls in the measurement and modeling of magnetic properties and suggests ways to improve the situation.
9

Svozil, Daniel, and Pavel Jungwirth. "Cluster Model for the Ionic Product of Water: Accuracy and Limitations of Common Density Functional Methods." Journal of Physical Chemistry A 110, no. 29 (July 2006): 9194–99. http://dx.doi.org/10.1021/jp0614648.

10

Zhang, Bing, Shanqi Chen, Zhixian Lin, Shaoxuan Wang, Zhen Wang, Daochuan Ge, Dingqing Guo, Jian Lin, Fang Wang, and Jin Wang. "A rapid modeling method and accuracy criteria for common-cause failures in Risk Monitor PSA model." Nuclear Engineering and Technology 53, no. 1 (January 2021): 103–10. http://dx.doi.org/10.1016/j.net.2020.06.021.

11

Yang, Sheng-I., and P. Corey Green. "Comparison of Data Grouping Strategies on Prediction Accuracy of Tree-Stem Taper for Six Common Species in the Southeastern US." Forests 13, no. 2 (January 20, 2022): 156. http://dx.doi.org/10.3390/f13020156.

Abstract:
Clustering data into similar characteristic groups is a commonly-used strategy in model development. However, the impact of data grouping strategies on modeling stem taper has not been well quantified. The objective of this study was to compare the prediction accuracy of different data grouping strategies. Specifically, a population-level model was compared to the models fitted with grouped data based on taxonomic rank, tree form and size. A total of 3678 trees were used in the analyses, which included six common species in upland hardwood forests of the southeastern U.S. Results showed that overall predictions are more accurate when building stem taper models at the species, species group or division level rather than at the population level. The prediction accuracy was not considerably improved between species-specific functions and models fitted with species-related groups for the four hardwood species examined. Grouping data by taxonomic rank provided more reliable predictions than height-to-diameter ratio (H–D ratio) or diameter at breast height (DBH). The form/size-related grouping methods (i.e., data grouped by H–D ratio or DBH) generally did not improve the prediction precision compared to a population-level model. In this study, the effect of sample size in model fitting showed a minimal impact on prediction accuracy. The methodology presented in this study provides a modeling strategy for mixed-species data, which will be of practical importance when data grouping is needed for developing stem taper models.
12

Kronish, Ian M., Donald Edmondson, Daichi Shimbo, Jonathan A. Shaffer, Lawrence R. Krakoff, and Joseph E. Schwartz. "A Comparison of the Diagnostic Accuracy of Common Office Blood Pressure Measurement Protocols." American Journal of Hypertension 31, no. 7 (April 20, 2018): 827–34. http://dx.doi.org/10.1093/ajh/hpy053.

Abstract:
BACKGROUND: The optimal approach to measuring office blood pressure (BP) is uncertain. We aimed to compare BP measurement protocols that differed based on numbers of readings within and between visits and by assessment method.
METHODS: We enrolled a sample of 707 employees without known hypertension or cardiovascular disease, and obtained 6 standardized BP readings during each of 3 office visits at least 1 week apart, using mercury sphygmomanometer and BpTRU oscillometric devices (18 readings per participant) for a total of 12,645 readings. We used confirmatory factor analysis to develop a model estimating "true" office BP that could be used to compare the probability of correctly classifying participants' office BP status using differing numbers and types of office BP readings.
RESULTS: Averaging 2 systolic BP readings across 2 visits correctly classified participants as having BP below or above the 140 mm Hg threshold at least 95% of the time if the averaged reading was <134 or >149 mm Hg, respectively. Our model demonstrated that more confidence was gained by increasing the number of visits with readings than by increasing the number of readings within a visit. No clinically significant confidence was gained by dropping the first reading vs. averaging all readings, nor by measuring with a manual mercury device vs. with an automated oscillometric device.
CONCLUSIONS: Averaging 2 BP readings across 2 office visits appeared to best balance increased confidence in office BP status with efficiency of BP measurement, though the preferred measurement strategy may vary with the clinical context.
13

Hancock, P. A., and Willem B. Verwey. "Where in the world is the speed/accuracy trade-off?" Behavioral and Brain Sciences 20, no. 2 (June 1997): 310–11. http://dx.doi.org/10.1017/s0140525x97301447.

Abstract:
Even though Plamondon's kinematic model fits the data well, we do not share the view that it explains movements other than ballistic ones. The model does not account for closed-loop control, which is the more common type of movement in everyday life, nor does it account for recent data indicating interference with ongoing processing.
14

Nishiyama, S., K. Mitsui, Y. Yamazaki, H. Mizuta, and K. Yana. "Effect of Common Driving Sources to the Feedback Analysis of Heart Rate Variability." Methods of Information in Medicine 46, no. 02 (2007): 202–5. http://dx.doi.org/10.1055/s-0038-1625407.

Abstract:
Objectives: This paper examines the operational characteristics of multivariate autoregressive analysis applied to simultaneous recordings of the instantaneous heart rate (IHR) and the change in systolic blood pressure (SBP).
Methods: The multivariate autoregressive model has been utilized to reveal the feedback characteristics between IHR and SBP. The model assumes the presence of an independent set of driving forces that activate the system. However, it is likely that the driving forces are correlated due to the presence of a common fluctuation source. This paper examines the effect of the presence of correlated components in the driving forces on the estimation accuracy of the impulse responses characterizing the feedback properties. A two-dimensional autoregressive model driven by two correlated 1/f noises was chosen for the analysis of the operational characteristics. The driving force was generated by a moving average system which simulates non-integer-order integration.
Results: Computer simulation revealed that the mean square estimation errors of the impulse responses increase sharply as the relative power of the common driving force exceeds 50%. However, the estimation accuracy and bias are found to be within a permissible range in practice.
Conclusions: These findings ensure the practical validity of utilizing multivariate autoregressive models for the feedback analysis between IHR and SBP where both signals have a common driving force.
15

Guo, Ping, Gui Fang Chen, and Yan Xia Wang. "Constructing Phylogenetic Tree Based on Three-Parameter Model." Key Engineering Materials 474-476 (April 2011): 2193–97. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.2193.

Abstract:
The Neighbor Joining algorithm (Neighbor-Joining or NJ) is a common way to construct phylogenetic trees. To enhance the accuracy of the constructed phylogenetic trees, we use the Kimura three-parameter model to calculate the distance between sequences, yielding an improved NJ. By computer simulation, we made a comparative analysis of the accuracy of the improved NJ, UPGMA, and the original NJ; the results show that the improved NJ is superior to UPGMA and the original NJ.
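As a small illustration of the distance computation mentioned in the abstract (a sketch of the Kimura three-parameter model, not the authors' code), the K3P distance counts transitions and the two transversion classes separately:

```python
import math

# Substitution classes in Kimura's three-parameter (K3P) model:
TRANSITIONS = {frozenset("AG"), frozenset("CT")}          # proportion P
TRANSVERSIONS_AT_GC = {frozenset("AT"), frozenset("GC")}  # proportion Q
# the remaining mismatches (A<->C, G<->T) have proportion R

def k3p_distance(seq1, seq2):
    """K3P distance between two aligned DNA sequences of equal length."""
    n = len(seq1)
    P = Q = R = 0
    for a, b in zip(seq1, seq2):
        if a == b:
            continue
        pair = frozenset((a, b))
        if pair in TRANSITIONS:
            P += 1
        elif pair in TRANSVERSIONS_AT_GC:
            Q += 1
        else:
            R += 1
    P, Q, R = P / n, Q / n, R / n
    return -0.25 * (math.log(1 - 2 * P - 2 * Q)
                    + math.log(1 - 2 * P - 2 * R)
                    + math.log(1 - 2 * Q - 2 * R))

d_same = k3p_distance("ACGTACGTAC", "ACGTACGTAC")  # identical sequences
d_one = k3p_distance("ACGTACGTAC", "GCGTACGTAC")   # one A<->G transition
```

A matrix of such pairwise distances is what the improved NJ would then cluster into a tree.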
16

Kuo, Hsing-Chia, and Wei-Yuan Dzan. "Marine Propeller Geometric Models for NC Machining Efficiency and Accuracy." Journal of Ship Production 17, no. 02 (May 1, 2001): 97–102. http://dx.doi.org/10.5957/jsp.2001.17.2.97.

Abstract:
Via the theories of computational geometry and differential geometry, the equation of the pressure side of a propeller blade with a constant pitch is presented. The model for defining the maximal admissible ball-end cutter radius used in the NC machining of the propeller blade surface is deduced. The models for numerical analysis and for calculation of step lengths and path intervals are also provided. In addition, the related geometric model for calculating the actual maximal error by using the envelope surface of the cutter is presented. Finally, the feasibility and reliability of the proposed models and methods are verified by an example. It is also verified that the proposed method provides improved machining efficiency and accuracy relative to many other common contemporary methods.
17

Wang, Qiang, Cong Li, Jia Jie Cui, Lai Wei Jiang, Meng Wang, and De Kai Lu. "Application Research of the Polynomial Fitting Model in Transformation of Plane Rectangular Coordinate." Applied Mechanics and Materials 571-572 (June 2014): 143–47. http://dx.doi.org/10.4028/www.scientific.net/amm.571-572.143.

Abstract:
Since the earth is a spheroid, the coordinate systems of different countries are not the same. Because China is a vast country, many places in China have also established independent coordinate systems. In engineering applications, transformation between different rectangular coordinate systems is often necessary. This paper studies the polynomial fitting model for rectangular coordinate transformation and explores, using real data, how the accuracy is affected by the choice of fitting model and by the number of common points selected for the same model. Finally, it compares the accuracy of the different fitting models and identifies the optimal model for the survey area.
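A minimal sketch of the idea (with hypothetical common points, not the paper's data): fitting a first-order polynomial (affine) transformation between two plane rectangular systems by least squares over the common points:

```python
# First-order (affine) polynomial model between two plane coordinate systems:
#   X = a0 + a1*x + a2*y,   Y = b0 + b1*x + b2*y
# fitted via the normal equations (G^T G) c = G^T t.

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_affine(src, dst):
    G = [[1.0, x, y] for x, y in src]       # design matrix
    GtG = [[sum(G[i][r] * G[i][c] for i in range(len(G))) for c in range(3)]
           for r in range(3)]
    coeffs = []
    for j in (0, 1):                        # one fit per target coordinate
        Gtt = [sum(G[i][r] * dst[i][j] for i in range(len(G))) for r in range(3)]
        coeffs.append(solve(GtG, Gtt))
    return coeffs

# Hypothetical common points: target frame is the source shifted by (100, 200)
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(x + 100, y + 200) for x, y in src]
a, b = fit_affine(src, dst)
X = a[0] + a[1] * 5 + a[2] * 5   # transform the point (5, 5)
Y = b[0] + b[1] * 5 + b[2] * 5
```

Higher-order polynomial models add terms such as `x*y`, `x**2`, and `y**2` to the design matrix; comparing their residuals over check points is the kind of accuracy study the paper describes.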
18

Chen, Liang, Heng Nian, and Yunyang Xu. "Impedance Aggregation Method of Multiple Wind Turbines and Accuracy Analysis." Energies 12, no. 11 (May 28, 2019): 2035. http://dx.doi.org/10.3390/en12112035.

Abstract:
The sequence domain impedance modeling of wind turbines (WTs) has been widely used in the stability analysis between WTs and weak grids with high line impedance. An aggregated impedance model of the wind farm is required in the system-level analysis. However, directly aggregating WT small-signal impedance models will lead to an inaccurate aggregated impedance model due to the mismatch of reference frame definitions among different WT subsystems, which may lead to inaccuracy in the stability analysis. In this paper, we analyze the impacts of the reference frame mismatch between a local small-signal impedance model and a global one on the accuracy of the aggregated impedance and on the accuracy of the impedance-based stability analysis. The results revealed that the impact is related to the power distribution of the studied network. It was found that the influence of the mismatch on the stability analysis became subtle when the subsystems were under balanced loading. Considering that balanced loading is a common configuration in practical applications, direct impedance aggregation by local small-signal models can be applied due to its acceptable accuracy.
19

Matviychuk, Yevgen, Ellen Steimers, Erik von Harbou, and Daniel J. Holland. "Improving the accuracy of model-based quantitative nuclear magnetic resonance." Magnetic Resonance 1, no. 2 (July 2, 2020): 141–53. http://dx.doi.org/10.5194/mr-1-141-2020.

Abstract:
Low spectral resolution and extensive peak overlap are the common challenges that preclude quantitative analysis of nuclear magnetic resonance (NMR) data with the established peak integration method. While numerous model-based approaches overcome these obstacles and enable quantification, they intrinsically rely on rigid assumptions about functional forms for peaks, which are often insufficient to account for all unforeseen imperfections in experimental data. Indeed, even in spectra with well-separated peaks whose integration is possible, model-based methods often achieve suboptimal results, which in turn raises the question of their validity for more challenging datasets. We address this problem with a simple model adjustment procedure, which draws its inspiration directly from the peak integration approach that is almost invariant to lineshape deviations. Specifically, we assume that the number of mixture components along with their ideal spectral responses are known; we then aim to recover all useful signals left in the residual after model fitting and use it to adjust the intensity estimates of modelled peaks. We propose an alternative objective function, which we found particularly effective for correcting imperfect phasing of the data, a critical step in the processing pipeline. Application of our method to the analysis of experimental data shows an accuracy improvement of 20-40 % compared to simple least-squares model fitting.
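The residual-adjustment idea can be sketched in a toy one-peak example (hypothetical lineshapes and values, not the authors' algorithm): fit a known peak shape by least squares, then fold the integral of the residual back into the intensity estimate:

```python
import math

# The "true" peak is wider than the model assumes, so plain least squares
# misestimates the total intensity; adding back the residual's integral,
# in the spirit of peak integration, corrects the estimate.

def lorentzian(x, center, width):
    return width ** 2 / ((x - center) ** 2 + width ** 2)

dx = 0.01
xs = [i * dx for i in range(1000)]
true_width, model_width = 0.12, 0.10          # deliberate model mismatch
signal = [3.0 * lorentzian(x, 5.0, true_width) for x in xs]
shape = [lorentzian(x, 5.0, model_width) for x in xs]

# One-parameter least squares: a = <shape, signal> / <shape, shape>
a = sum(s * y for s, y in zip(shape, signal)) / sum(s * s for s in shape)

# Fold the residual's integral back into the intensity estimate
residual = [y - a * s for s, y in zip(shape, signal)]
area_shape = sum(shape) * dx
a_adj = a + sum(residual) * dx / area_shape

true_area = sum(signal) * dx
err_fit = abs(a * area_shape - true_area)      # plain least squares
err_adj = abs(a_adj * area_shape - true_area)  # residual-adjusted
```

Because the adjustment restores exactly the signal area the model missed, `err_adj` collapses to numerical noise while the plain fit retains a visible bias.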
20

Golman, Russell, David Hagmann, and John H. Miller. "Polya’s bees: A model of decentralized decision-making." Science Advances 1, no. 8 (September 2015): e1500253. http://dx.doi.org/10.1126/sciadv.1500253.

Abstract:
How do social systems make decisions with no single individual in control? We observe that a variety of natural systems, including colonies of ants and bees and perhaps even neurons in the human brain, make decentralized decisions using common processes involving information search with positive feedback and consensus choice through quorum sensing. We model this process with an urn scheme that runs until hitting a threshold, and we characterize an inherent tradeoff between the speed and the accuracy of a decision. The proposed common mechanism provides a robust and effective means by which a decentralized system can navigate the speed-accuracy tradeoff and make reasonably good, quick decisions in a variety of environments. Additionally, consensus choice exhibits systemic risk aversion even while individuals are idiosyncratically risk-neutral. This too is adaptive. The model illustrates how natural systems make decentralized decisions, illuminating a mechanism that engineers of social and artificial systems could imitate.
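A minimal sketch of such an urn scheme (illustrative parameters, not the authors' model): each draw reinforces the sampled option (positive feedback), and the group commits when one option reaches a quorum threshold; raising the threshold slows the decision:

```python
import random

random.seed(7)

def decide(threshold, edge=0.1):
    # Option A has a small quality edge that tilts the sampling probability.
    a, b = 1, 1   # initial evidence "marbles" for options A and B
    steps = 0
    while a < threshold and b < threshold:
        steps += 1
        p_a = a * (1 + edge) / (a * (1 + edge) + b)
        if random.random() < p_a:
            a += 1   # reinforce A
        else:
            b += 1   # reinforce B
    return ("A" if a >= threshold else "B"), steps

def stats(threshold, trials=400):
    wins = total_steps = 0
    for _ in range(trials):
        choice, steps = decide(threshold)
        wins += (choice == "A")
        total_steps += steps
    return wins / trials, total_steps / trials

# A low quorum decides quickly; a high quorum takes longer, which is the
# speed-accuracy tradeoff the model characterizes.
acc_low, t_low = stats(threshold=5)
acc_high, t_high = stats(threshold=50)
```

The threshold plays the role of the quorum in ant and bee colonies: it is the single knob by which the decentralized system navigates the tradeoff.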
21

Wang, Yong Jian, Jing Feng Chen, and Si Qin Li. "Fuel Common Rail Injection System of RT-Flex Marine Intelligent Diesel Engine and its Simulation Dynamic Model." Advanced Materials Research 732-733 (August 2013): 23–28. http://dx.doi.org/10.4028/www.scientific.net/amr.732-733.23.

Abstract:
Based on a comprehensive analysis of the overall structure and working performance of the RT-Flex marine intelligent diesel engine, and a full understanding of the composition and working principle of this engine's fuel common rail injection system, mathematical models of the system's components are designed, taking the 7RT-Flex60C as the simulation object. Using the Matlab/Simulink simulation tool, a simulation model of the fuel common rail injection system is set up; the simulation data are then compared with rack test data, and the accuracy and effectiveness of the model are tested and verified. The resulting model provides a convenient and practical method for the design, optimization, and simulation of fuel common rail injection systems.
22

Fei, Rong, Shasha Li, Xinhong Hei, Qingzheng Xu, Fang Liu, and Bo Hu. "The Driver Time Memory Car-Following Model Simulating in Apollo Platform with GRU and Real Road Traffic Data." Mathematical Problems in Engineering 2020 (March 17, 2020): 1–18. http://dx.doi.org/10.1155/2020/4726763.

Abstract:
Car following is the most common phenomenon in single-lane traffic. The accuracy of acceleration prediction can be effectively improved by incorporating the driver's memory into car-following behaviour. In addition, the Apollo autonomous driving platform launched by Baidu Inc. provides fast testing of car-following models. Therefore, this paper proposes a car-following model with driver time memory (CFDT) based on real-world traffic data. The CFDT model is first constructed with an embedded gated recurrent unit (GRU) network. Second, the NGSIM dataset is used to obtain trajectory data of small vehicles with similar driving behaviours from common real-road vehicle trajectories, with data preprocessing according to the response time of drivers. The model is then calibrated to obtain the driver's driving memory and the optimal parameters of the model and its structure. Finally, the Apollo simulation platform with high-speed automatic driving technology is used for verification with a 3D visualization interface. Comparative experiments on vehicle-following characteristics show that the CFDT model is effective and robust and improves the simulation accuracy. Meanwhile, the model is tested and validated on the Apollo simulation platform to ensure its accuracy and utility.
23

Morgut, Mitja, and Enrico Nobile. "Numerical Predictions of Cavitating Flow around Model Scale Propellers by CFD and Advanced Model Calibration." International Journal of Rotating Machinery 2012 (2012): 1–11. http://dx.doi.org/10.1155/2012/618180.

Abstract:
The numerical predictions of the cavitating flow around two model scale propellers in uniform inflow are presented and discussed. The simulations are carried out using a commercial CFD solver. The homogeneous model is used and the influence of three widespread mass transfer models, on the accuracy of the numerical predictions, is evaluated. The mass transfer models in question share the common feature of employing empirical coefficients to adjust mass transfer rate from water to vapour and back, which can affect the stability and accuracy of the predictions. Thus, for a fair and congruent comparison, the empirical coefficients of the different mass transfer models are first properly calibrated using an optimization strategy. The numerical results obtained, with the three different calibrated mass transfer models, are very similar to each other for two selected model scale propellers. Nevertheless, a tendency to overestimate the cavity extension is observed, and consequently the thrust, in the most severe operational conditions, is not properly predicted.
24

Khomsah, Siti, and Agus Sasmito Aribowo. "Text-Preprocessing Model Youtube Comments in Indonesian." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 4, no. 4 (August 20, 2020): 648–54. http://dx.doi.org/10.29207/resti.v4i4.2035.

Abstract:
YouTube is the most widely used platform in Indonesia, reaching 88% of the country's internet users. The volume of YouTube comments in Indonesian has increased massively, and these datasets can be used to elaborate the polarization of public opinion on government policies. The main challenge in opinion analysis is preprocessing, especially normalizing noise such as stop words and slang words. This research aims to design several preprocessing models for the YouTube comment dataset and to observe their effect on the accuracy of sentiment analysis. The types of preprocessing used include standard Indonesian text processing, deleting stop words and subjects or objects, and converting slang according to the Indonesian Dictionary (KBBI). Four preprocessing scenarios are designed to see the impact of each type of preprocessing on the accuracy of the model. The investigation uses two feature sets: unigrams and a combination of unigrams and bigrams. Count-Vectorizer and TF-IDF-Vectorizer are used to extract valuable features. The experiments show that unigram features perform better than the combination of unigram and bigram features. Transforming slang words into standard words raises the accuracy of the model, and removing stop words also contributes to increasing accuracy. In conclusion, the combination of preprocessing steps consisting of standard preprocessing, stop-word removal, and conversion of Indonesian slang to common words based on the Indonesian Dictionary (KBBI) raises accuracy by almost 3.5% with the unigram feature.
APA, Harvard, Vancouver, ISO, and other styles
25

Zhao, Lin, Jiachang Jiang, Liang Li, Chun Jia, and Jianhua Cheng. "High-Accuracy Real-Time Kinematic Positioning with Multiple Rover Receivers Sharing Common Clock." Remote Sensing 13, no. 4 (February 23, 2021): 823. http://dx.doi.org/10.3390/rs13040823.

Full text
Abstract:
Since the traditional real-time kinematic positioning method is limited by the reduced satellite visibility in deprived navigational environments, we propose an improved RTK method with multiple rover receivers sharing a common clock. The proposed method can enhance observational redundancy by blending the observations from each rover receiver together so that the model strength is improved. Integer ambiguity resolution of the proposed method is challenged by the presence of several inter-receiver biases (IRB). The IRB, including the inter-receiver code bias (IRCB) and the inter-receiver phase bias (IRPB), is calibrated by the pre-estimation method because of its temporal stability. Multiple BeiDou Navigation Satellite System (BDS) dual-frequency datasets were collected to test the proposed method. The experimental results show that the IRCB and IRPB under the common clock mode are sufficiently stable for ambiguity resolution. Compared with the traditional method, the ambiguity resolution success rate and positioning accuracy of the proposed method can be improved by 19.5% and 46.4%, respectively, in restricted satellite visibility environments.
APA, Harvard, Vancouver, ISO, and other styles
26

Yonghong, Wang. "A New Method of Predicting the Performance of Gas Turbine Engines." Journal of Engineering for Gas Turbines and Power 113, no. 1 (January 1, 1991): 106–11. http://dx.doi.org/10.1115/1.2906516.

Full text
Abstract:
This paper points out that the turbine performance computation method used widely at present in solving the performance of gas turbine engines is a numerically unstable algorithm. Therefore a new method, namely an inverse algorithm, is proposed. This paper then further proposes a new mathematical model for solving the stable performance of gas turbine engines. It has the features of not only being suitable for an inverse algorithm for turbine performance, but also having fewer dimensions than existing models. It has the advantages of high accuracy, rapid convergence, good stability, fewer computations, and so forth. It has been fully proven that the accuracy of the new model is much greater than that of the common model for gas turbine engines. Additionally, the time consumed for solving the new model is approximately 1/4~1/10 of that for the common model. Therefore, it is valuable in practice.
APA, Harvard, Vancouver, ISO, and other styles
27

Kane, Jeffrey M., Phillip J. van Mantgem, Laura B. Lalemand, and MaryBeth Keifer. "Higher sensitivity and lower specificity in post-fire mortality model validation of 11 western US tree species." International Journal of Wildland Fire 26, no. 5 (2017): 444. http://dx.doi.org/10.1071/wf16081.

Full text
Abstract:
Managers require accurate models to predict post-fire tree mortality to plan prescribed fire treatments and examine their effectiveness. Here we assess the performance of a common post-fire tree mortality model with an independent dataset of 11 tree species from 13 National Park Service units in the western USA. Overall model discrimination was generally strong, but performance varied considerably among species and sites. The model tended to have higher sensitivity (proportion of correctly classified dead trees) and lower specificity (proportion of correctly classified live trees) for many species, indicating an overestimation of mortality. Variation in model accuracy (percentage of live and dead trees correctly classified) among species was not related to sample size or percentage observed mortality. However, we observed a positive relationship between specificity and a species-specific bark thickness multiplier, indicating that overestimation was more common in thin-barked species. Accuracy was also quite low for thinner bark classes (<1cm) for many species, leading to poorer model performance. Our results indicate that a common post-fire mortality model generally performs well across a range of species and sites; however, some thin-barked species and size classes would benefit from further refinement to improve model specificity.
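The three rates assessed in this validation follow directly from a 2×2 confusion table; a minimal sketch with made-up counts that reproduce the reported pattern (high sensitivity, low specificity, i.e. mortality overestimation):

```python
def classification_rates(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = proportion of correctly classified dead trees;
    specificity = proportion of correctly classified live trees;
    accuracy = proportion of all trees correctly classified."""
    sensitivity = tp / (tp + fn)                  # dead trees predicted dead
    specificity = tn / (tn + fp)                  # live trees predicted live
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts, not the paper's data: many live trees predicted dead
# (fp high) gives high sensitivity but low specificity.
sens, spec, acc = classification_rates(tp=90, fn=10, tn=60, fp=40)
```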
APA, Harvard, Vancouver, ISO, and other styles
28

Yu, Anqian, Yang Yi, Zhengqi Gu, Ledian Zheng, Zhengtong Han, Hongbo Hu, and Zhuangzhi Liu. "Research on wind buffeting noise characteristics of a vehicle based on ILRN and DES-ILRN methods." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235, no. 1 (August 5, 2020): 114–27. http://dx.doi.org/10.1177/0954407020944667.

Full text
Abstract:
In view of the unsatisfactory calculation accuracy of common turbulence models, two improved models, called improved low-Reynolds-number (ILRN) k−ε and Detached Eddy Simulation (DES)-ILRN, are proposed. Based on the low-Reynolds-number (LRN k−ε) turbulence model, the revisions in this paper include the introduction of a turbulence time scale, an eddy viscosity coefficient limitation concept and a separation perception model. In addition, the DES model is improved by introducing the ILRN model. The accuracy of the two models is validated through a wind tunnel test. Then, the wind buffeting noise from the rear windows is computed with the ILRN and DES-ILRN turbulence models, and the obtained results are compared with a road test. The accuracy of the ILRN and DES-ILRN turbulence models in calculating the wind buffeting noise of a real vehicle is validated by experiment. The ILRN and DES-ILRN models are applied to study the wind buffeting noise, and good calculation results are obtained. They are promising methods for solving wind buffeting noise problems.
APA, Harvard, Vancouver, ISO, and other styles
29

MA, LILI, YANGQUAN CHEN, and KEVIN L. MOORE. "RATIONAL RADIAL DISTORTION MODELS OF CAMERA LENSES WITH ANALYTICAL SOLUTION FOR DISTORTION CORRECTION." International Journal of Information Acquisition 01, no. 02 (June 2004): 135–47. http://dx.doi.org/10.1142/s0219878904000173.

Full text
Abstract:
The common approach to radial distortion is by means of polynomial approximation, which introduces distortion-specific parameters into the camera model and requires estimation of these distortion parameters. The task of estimating radial distortion is to find a radial distortion model that allows easy undistortion as well as satisfactory accuracy. This paper presents a new class of rational radial distortion models with easy analytical undistortion formulae. Experimental results are presented to show that with this class of rational radial distortion models, satisfactory and comparable accuracy can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
30

B, Usha Sri. "Effective Heart Disease Prediction Model Through Voting Technique." International Journal of Engineering Technology and Management Sciences 4, no. 5 (September 28, 2020): 10–13. http://dx.doi.org/10.46647/ijetms.2020.v04i05.003.

Full text
Abstract:
Machine learning has various practical applications that solve many issues in a range of domains. One such domain is health care, where the most common application of machine learning is the prediction of an outcome based upon existing data. Machine learning has been shown to be an effective technique in assisting the health care industry to make intelligent and effective decisions. The model tries to learn patterns from the existing dataset, and it is later applied to an unknown dataset to predict the outcome effectively. Classification is the most effective technique for prediction of an outcome. There are many classification algorithms used for prediction, but only a few predict with good accuracy, whereas the remaining algorithms predict with less accuracy. To improve the accuracy of weak algorithms, this paper presents a new method called ensemble classification, where the accuracy is enhanced by combining multiple classifiers and prediction is then done by a voting technique. Experiments were done on a heart disease dataset; through the ensemble approach the accuracy was enhanced, and a GUI was developed where users can check for themselves whether they have a probability of getting heart disease or not. The results of the study showed that an ensemble method such as the voting technique played a key role in improving the prediction accuracy of weak classifiers and also identified risk factors for the occurrence of heart disease. An accuracy of 90% was achieved with the voting technique, and the performance of the process was further enhanced with a feature selection implementation, with the results showing significant improvement in prediction accuracy.
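The hard-voting step the paper relies on can be sketched as a plain majority vote over several classifiers' predictions (the classifier outputs below are hypothetical):

```python
from collections import Counter

def majority_vote(predictions: list[list[int]]) -> list[int]:
    """Combine per-classifier predictions by hard voting.
    predictions[i] holds classifier i's label for every sample."""
    n_samples = len(predictions[0])
    voted = []
    for j in range(n_samples):
        votes = Counter(clf_preds[j] for clf_preds in predictions)
        voted.append(votes.most_common(1)[0][0])  # most frequent label wins
    return voted

# Three hypothetical weak classifiers voting on four patients (1 = disease).
preds = [
    [1, 0, 1, 0],   # classifier A
    [1, 1, 0, 0],   # classifier B
    [1, 0, 1, 1],   # classifier C
]
combined = majority_vote(preds)
```

Even when each individual classifier is only moderately accurate, the voted prediction tends to be more reliable as long as the classifiers' errors are not strongly correlated.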
APA, Harvard, Vancouver, ISO, and other styles
31

Iqbal, M. Mohamed, and K. Latha. "An effective community-based link prediction model for improving accuracy in social networks." Journal of Intelligent & Fuzzy Systems 42, no. 3 (February 2, 2022): 2695–711. http://dx.doi.org/10.3233/jifs-211821.

Full text
Abstract:
Link prediction plays a predominant role in complex network analysis. It aims to determine the probability that future links exist, based on available information. The existing standard classical similarity-indices-based link prediction models assume that all common neighbour nodes have a similar effect on link probability. Nevertheless, common neighbour nodes residing in different communities may contribute differently in real-world networks. In this paper, a novel community-information-based link prediction model is proposed in which every neighbouring node's community information (community centrality) is considered to predict the link between a given node pair. In the proposed model, the given social network graph is divided into different communities, and community centrality is calculated for every derived community based on the degree, closeness, and betweenness basic graph centrality measures. Afterwards, new community-centrality-based similarity indices are introduced to compute the community centralities, which are applied to nine existing basic similarity indices. The empirical analysis on 13 real-world social network datasets shows that the proposed model yields a better prediction accuracy of 97% than existing models. Moreover, the proposed model is parallelized efficiently to work on large complex networks using the Spark GraphX Big-Data-based parallel graph processing technique, and it attains a lower execution time of 250 seconds.
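The contrast the abstract draws can be illustrated with the classical common-neighbours index versus a community-weighted variant; the weighting function here is a hypothetical stand-in for the paper's community centrality measure:

```python
def common_neighbor_score(adj, u, v):
    """Classical common-neighbours similarity index: |N(u) ∩ N(v)|."""
    return len(adj[u] & adj[v])

def community_weighted_score(adj, community_weight, u, v):
    """Variant in the spirit of the paper: each shared neighbour contributes
    its community centrality instead of a flat count. community_weight is an
    illustrative mapping, not the paper's exact centrality formula."""
    return sum(community_weight[z] for z in adj[u] & adj[v])

# Tiny undirected graph as adjacency sets, with made-up node weights.
adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"a", "c"}}
w = {"a": 0.9, "b": 0.5, "c": 0.8, "d": 0.4}
plain = common_neighbor_score(adj, "b", "d")          # flat count of shared neighbours
weighted = community_weighted_score(adj, w, "b", "d") # centrality-weighted count
```

The flat index treats both shared neighbours of "b" and "d" identically, whereas the weighted variant lets a highly central shared neighbour contribute more to the predicted link probability.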
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Hao, Ke Li, and Chi Xu. "A New Generation of ResNet Model Based on Artificial Intelligence and Few Data Driven and Its Construction in Image Recognition Model." Computational Intelligence and Neuroscience 2022 (March 19, 2022): 1–10. http://dx.doi.org/10.1155/2022/5976155.

Full text
Abstract:
The paper proposes an A-ResNet model to improve ResNet. The residual attention module with shortcut connection is introduced to enhance the focus on the target object; the dropout layer is introduced to prevent the overfitting phenomenon and improve the recognition accuracy; the network architecture is adjusted to accelerate the training convergence speed and improve the recognition accuracy. The experimental results show that the A-ResNet model achieves a top-1 accuracy improvement of about 2% compared with the traditional ResNet network. Image recognition is one of the core technologies of computer vision, but its application in the field of tea is relatively small, and tea recognition still relies on sensory review methods. A total of 1,713 images of eight common green teas were collected, and the modeling effects of different network depths and different optimization algorithms were explored from the perspectives of predictive ability, convergence speed, model size, and recognition equilibrium of recognition models.
APA, Harvard, Vancouver, ISO, and other styles
33

Subbanna, Nagesh, Matthias Wilms, Anup Tuladhar, and Nils D. Forkert. "An Analysis of the Vulnerability of Two Common Deep Learning-Based Medical Image Segmentation Techniques to Model Inversion Attacks." Sensors 21, no. 11 (June 4, 2021): 3874. http://dx.doi.org/10.3390/s21113874.

Full text
Abstract:
Recent research in computer vision has shown that original images used for training of deep learning models can be reconstructed using so-called inversion attacks. However, the feasibility of this attack type has not been investigated for complex 3D medical images. Thus, the aim of this study was to examine the vulnerability of deep learning techniques used in medical imaging to model inversion attacks and investigate multiple quantitative metrics to evaluate the quality of the reconstructed images. For the development and evaluation of model inversion attacks, the public LPBA40 database, consisting of 40 brain MRI scans with corresponding segmentations of the gyri and deep grey matter brain structures, was used to train two popular deep convolutional neural networks, namely a U-Net and SegNet, and corresponding inversion decoders. The Matthews correlation coefficient, the structural similarity index measure (SSIM), and the magnitude of the deformation field resulting from non-linear registration of the original and reconstructed images were used to evaluate the reconstruction accuracy. A comparison of the similarity metrics revealed that the SSIM is best suited to evaluate the reconstruction accuracy, followed closely by the magnitude of the deformation field. The quantitative evaluation of the reconstructed images revealed SSIM scores of 0.73±0.12 and 0.61±0.12 for the U-Net and the SegNet, respectively. The qualitative evaluation showed that training images can be reconstructed with some degradation due to blurring but can be correctly matched to the original images in the majority of cases. In conclusion, the results of this study indicate that it is possible to reconstruct patient data used for training of convolutional neural networks and that the SSIM is a good metric to assess the reconstruction accuracy.
APA, Harvard, Vancouver, ISO, and other styles
34

Jiang, Huiping, Demeng Wu, Rui Jiao, and Zongnan Wang. "Analytical Comparison of Two Emotion Classification Models Based on Convolutional Neural Networks." Complexity 2021 (February 24, 2021): 1–9. http://dx.doi.org/10.1155/2021/6625141.

Full text
Abstract:
Electroencephalography (EEG) is the measurement of neuronal activity in different areas of the brain through the use of electrodes. As EEG signal technology has matured over the years, it has been applied in various methods to EEG emotion recognition, most significantly including the use of convolutional neural networks (CNN). However, these methods are still not ideal, and shortcomings have been found in the results of some models of EEG feature extraction and classification. In this study, two CNN models were selected for the extraction and classification of preprocessed data, namely, common spatial patterns- (CSP-) CNN and wavelet transform- (WT-) CNN. Using the CSP-CNN, we first used the common spatial patterns model to reduce dimensionality and then applied the CNN directly to extract and classify the features of the EEG; with the WT-CNN model, we used the wavelet transform to extract EEG features, thereafter applying the CNN for classification. The EEG classification results of these two classification models were subsequently analyzed and compared, with the average classification accuracy of the CSP-CNN model found to be 80.56% and the average classification accuracy of the WT-CNN model measured at 86.90%. Thus, the findings of this study show that the average classification accuracy of the WT-CNN model was 6.34% higher than that of the CSP-CNN.
APA, Harvard, Vancouver, ISO, and other styles
35

Zhang, Haili, and Guohua Zou. "Cross-Validation Model Averaging for Generalized Functional Linear Model." Econometrics 8, no. 1 (February 24, 2020): 7. http://dx.doi.org/10.3390/econometrics8010007.

Full text
Abstract:
Functional data is a common and important type in econometrics and has become increasingly easy to collect in the big data era. To improve estimation accuracy and reduce forecast risks with functional data, in this paper we propose a novel cross-validation model averaging method for the generalized functional linear model, where the scalar response variable is related to a random function predictor by a link function. We establish an asymptotic theoretical result on the optimality of the weights selected by our method when the true model is not in the candidate model set. Our simulations show that the proposed method often performs better than the commonly used model selection and averaging methods. We also apply the proposed method to Beijing second-hand house price data.
APA, Harvard, Vancouver, ISO, and other styles
36

Leblanc, Maryse L., Daniel C. Cloutier, Katrine A. Stewart, and Chantal Hamel. "Calibration and validation of a common lambsquarters (Chenopodium album) seedling emergence model." Weed Science 52, no. 1 (February 2004): 61–66. http://dx.doi.org/10.1614/p2002-109.

Full text
Abstract:
Studies were conducted to calibrate and validate a mathematical model previously developed to predict common lambsquarters seedling emergence at different corn seedbed preparation times. The model was calibrated for different types of soil by adjusting the base temperature of common lambsquarters seedling emergence to the soil texture. A relationship was established with the sand mineral fraction of the soil and was integrated into the model. The calibrated model provided a good fit of the field data and was accurate in predicting cumulative weed emergence in different soil types. The validation was done using data collected independently at a site located 80 km from the original experimental area. There were no differences between observed and predicted values. The accuracy of the model is very satisfactory because the emergence of common lambsquarters populations was accurately predicted at the 95% probability level. This model is one of the first to take into consideration seedbed preparation time and soil texture. This common lambsquarters emergence model could be adapted to model other weed species whose emergence is limited by low spring temperature.
APA, Harvard, Vancouver, ISO, and other styles
37

Nguyen, Tam The, and Tung Thanh Nguyen. "PERSONA: A personalized model for code recommendation." PLOS ONE 16, no. 11 (November 16, 2021): e0259834. http://dx.doi.org/10.1371/journal.pone.0259834.

Full text
Abstract:
Code recommendation is an important feature of modern software development tools to improve the productivity of programmers. The current advanced techniques in code recommendation mostly focus on the crowd-based approach. The basic idea is to collect a large pool of available source code, extract the common code patterns, and utilize the patterns for recommendations. However, programmers differ in multiple aspects, including coding preferences, styles, levels of experience, and knowledge about libraries and frameworks. These differences lead to various usages of code elements. When the code of multiple programmers is combined and mined, such differences disappear, which could limit the accuracy of the code recommendation tool for a specific programmer. In this paper, we develop a code recommendation technique that focuses on the personal coding patterns of programmers. We propose Persona, a personalized code recommendation model. It learns personalized code patterns for each programmer based on their coding history, while also combining them with project-specific and common code patterns. Persona supports recommending code elements including variable names, class names, methods, and parameters. The empirical evaluation suggests that our recommendation tool based on Persona is highly effective. It recommends the next identifier with a top-1 accuracy of 60-65% and outperforms the baseline approaches.
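A personalized recommender in the spirit described, blending per-programmer counts with crowd-wide counts, might look like the following sketch; the mixing weight `alpha` and the bigram representation are assumptions for illustration, not Persona's actual model:

```python
def recommend_next(personal, common, prev_token, alpha=0.7, k=1):
    """Rank candidate next identifiers by blending the programmer's personal
    bigram counts with crowd-wide counts. alpha is a hypothetical mixing
    weight favouring the personal history."""
    candidates = set(personal.get(prev_token, {})) | set(common.get(prev_token, {}))

    def score(c):
        p = personal.get(prev_token, {}).get(c, 0)   # this programmer's usage
        q = common.get(prev_token, {}).get(c, 0)     # everyone else's usage
        return alpha * p + (1 - alpha) * q

    return sorted(candidates, key=score, reverse=True)[:k]

# Made-up counts: which identifier tends to follow "logger."?
personal = {"logger.": {"debug": 5, "info": 1}}
common = {"logger.": {"info": 90, "error": 30}}
top = recommend_next(personal, common, "logger.")
```

Here the crowd signal for `info` is strong enough to outweigh the programmer's personal preference for `debug`; lowering `alpha` would shift the balance further toward the crowd.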
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Dongcheng, Yanghuan Xu, Bowei Duan, Yongmei Wang, Mingming Song, Huaxin Yu, and Hongmin Liu. "Intelligent Recognition Model of Hot Rolling Strip Edge Defects Based on Deep Learning." Metals 11, no. 2 (January 27, 2021): 223. http://dx.doi.org/10.3390/met11020223.

Full text
Abstract:
The edge of a hot rolling strip corresponds to the area where surface defects often occur. The morphologies of several common edge defects are similar to one another, thereby leading to easy error detection. To improve the detection accuracy of edge defects, the authors of this paper first classified the common edge defects and then made a dataset of edge defect images on this basis. Subsequently, edge defect recognition models were established on the basis of LeNet-5, AlexNet, and VggNet-16 by using a convolutional neural network as the core. Through multiple groups of training and recognition experiments, the model’s accuracy and recognition time of a single defect image were analyzed and compared with recognition models with different learning rates and sample batches. The experimental results showed that the recognition model based on the AlexNet had a maximum accuracy of 93.5%, and the average recognition time of a single defect image was 0.0035 s, which could meet the industry requirement. The research results in this paper provide a new method and thought for the fine detection of edge defects in hot rolling strips and have practical significance for improving the surface quality of hot rolling strips.
APA, Harvard, Vancouver, ISO, and other styles
39

Fu, Zhequan, Shangsheng Li, Xiangping Li, Bo Dan, and Xukun Wang. "A Neural Network with Convolutional Module and Residual Structure for Radar Target Recognition Based on High-Resolution Range Profile." Sensors 20, no. 3 (January 21, 2020): 586. http://dx.doi.org/10.3390/s20030586.

Full text
Abstract:
In a conventional neural network, great depth is required to achieve high recognition accuracy. Additionally, a saturation problem may arise, wherein the recognition accuracy decreases as the number of network layers increases. To tackle this problem, a neural network model is proposed that incorporates a micro convolutional module and a residual structure. Such a model exhibits few hyper-parameters and can be extended flexibly. In the meantime, to further enhance the separability of features, a novel loss function is proposed, integrating boundary constraints and center clustering. According to the experimental results with a simulated dataset of HRRP signals obtained from thirteen 3D CAD object models, the presented model is capable of achieving higher recognition accuracy and robustness than other common network structures.
APA, Harvard, Vancouver, ISO, and other styles
40

Mehrabani-Zeinabad, Kamran, Marziyeh Doostfatemeh, and Seyyed Mohammad Taghi Ayatollahi. "An Efficient and Effective Model to Handle Missing Data in Classification." BioMed Research International 2020 (November 25, 2020): 1–11. http://dx.doi.org/10.1155/2020/8810143.

Full text
Abstract:
Missing data is one of the most important causes of reduced classification accuracy. Many real datasets suffer from missing values, especially in the medical sciences. Imputation is a common way to deal with incomplete datasets. There are various imputation methods that can be applied, and the choice of the best method depends on dataset conditions such as sample size, missing percentage, and missing mechanism. Therefore, the better solution is to classify incomplete datasets without imputation and without any loss of information. The structure of the “Bayesian additive regression trees” (BART) model is improved with the “Missingness Incorporated in Attributes” (MIA) approach to solve its inefficiency in handling the missingness problem. The implementation of MIA within BART is named “BART.m”. As the abilities of BART.m have not been investigated for classification of incomplete datasets, this simulation-based study aimed to provide such a resource. The results indicate that BART.m can be used even for datasets with 90% missing values and, more importantly, it diagnoses the irrelevant variables and removes them on its own. BART.m outperforms common models for classification with incomplete data in terms of accuracy and computational time. Based on the revealed properties, it can be said that BART.m is a high-accuracy model for classification of incomplete datasets which avoids any assumptions and preprocessing steps.
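The “Missingness Incorporated in Attributes” idea, routing missing values down a branch chosen at training time instead of imputing them, can be sketched for a single tree split (a deliberate simplification of BART.m's internals):

```python
def split_with_missingness(x, threshold, missing_goes_left):
    """MIA-style split: a missing value (None) is routed to whichever branch
    was selected for missing cases when the split was learned, so no
    imputation is needed. Illustrative sketch, not the BART.m implementation."""
    if x is None:
        return "left" if missing_goes_left else "right"
    return "left" if x <= threshold else "right"

# One observed-low, one observed-high, and one missing value at a single split.
branches = [split_with_missingness(x, threshold=2.5, missing_goes_left=False)
            for x in [1.0, 3.0, None]]
```

Because the missing-value direction is itself learned from the data, missingness can carry predictive information rather than being discarded or filled in.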
APA, Harvard, Vancouver, ISO, and other styles
41

Danková, Zuzana, Pavol Žúbor, Marián Grendár, Katarína Zelinová, Marianna Jagelková, Igor Stastny, Andrea Kapinová, et al. "Predictive accuracy of the breast cancer genetic risk model based on eight common genetic variants: The BACkSIDE study." Journal of Biotechnology 299 (June 2019): 1–7. http://dx.doi.org/10.1016/j.jbiotec.2019.04.014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ghosh, Sarada, Guruprasad Samanta, and Manuel De la Sen. "Multi-Model Approach and Fuzzy Clustering for Mammogram Tumor to Improve Accuracy." Computation 9, no. 5 (May 18, 2021): 59. http://dx.doi.org/10.3390/computation9050059.

Full text
Abstract:
Breast cancer is one of the most common diseases among women; it seriously affects health and threatens life. Presently, mammography is the most important criterion for diagnosing breast cancer. In this work, images of breast cancer mass detection in mammograms with 1024×1024 pixels are used as the dataset. This work investigates the performance of various approaches to classification techniques. Overall, the support vector machine (SVM) performs better in terms of log-loss and classification accuracy rate than the other underlying models. Therefore, further extensions (i.e., a multi-model ensemble method, a Fuzzy c-means (FCM) clustering and SVM combination method, and an FCM-clustering-based SVM model) and a comparison with SVM have been performed in this work. Segmentation by the FCM clustering technique allows one piece of data to belong to two or more clusters, and the segmented image is used to enhance the tumor shape. Simulation gives an accuracy and an area under the ROC curve for mini-MIAS of 91.39% and 0.964, respectively, which confirm the effectiveness of the proposed algorithm (FCM-based SVM). This method increases the classification accuracy in the case of a malignant tumor. The simulation is based on the R software.
APA, Harvard, Vancouver, ISO, and other styles
43

Zhan, Wei, Qing Lu, and Yue Quan Shang. "Prediction of Traffic Volume in Highway Tunnel Group Region Based on Grey Markov Model." Advanced Materials Research 712-715 (June 2013): 2981–85. http://dx.doi.org/10.4028/www.scientific.net/amr.712-715.2981.

Full text
Abstract:
Based on an investigation and analysis of the traffic volume in a highway tunnel group region, the development trend of traffic volume is analyzed with a Grey model. The prediction accuracy is then improved by Markov optimization. The method in this paper offers better prediction accuracy and practicality over a given period than other common prediction methods. It can be used for prediction analysis of traffic volume and for early warning by highway management.
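The Grey-model step can be illustrated with a textbook GM(1,1) forecaster; this minimal sketch covers only the Grey part and omits the paper's Markov optimization:

```python
import math

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey model: fit the series x0, forecast `steps` ahead.
    Textbook sketch, not the paper's Markov-optimized variant."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                 # accumulated series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]     # mean background values
    y = x0[1:]
    m = n - 1
    # Least squares for y[k] = -a*z[k] + b via 2x2 normal equations.
    szz = sum(v * v for v in z)
    sz, sy = sum(z), sum(y)
    szy = sum(v * w for v, w in zip(z, y))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):
        # Solution of the whitened equation dx1/dt + a*x1 = b.
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # De-accumulate to recover forecasts of the original series.
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

# Made-up monthly traffic volumes growing roughly 10% per period.
forecast = gm11_forecast([100.0, 110.0, 121.0, 133.1], steps=2)
```

On a near-exponential series like this, GM(1,1) extrapolates the growth trend; the Markov step in the paper would then correct the residual errors of these raw Grey forecasts.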
APA, Harvard, Vancouver, ISO, and other styles
44

Hintze, John M., Craig S. Wells, Amanda M. Marcotte, and Benjamin G. Solomon. "Decision-Making Accuracy of CBM Progress-Monitoring Data." Journal of Psychoeducational Assessment 36, no. 1 (September 9, 2017): 74–81. http://dx.doi.org/10.1177/0734282917729263.

Full text
Abstract:
This study examined the diagnostic accuracy associated with decision making as typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading growth of a two-word increase per week across 15 consecutive weeks. Results indicated that an unacceptably high proportion of cases were falsely identified as nonresponsive to intervention when a common 4-point decision rule was applied under typical levels of probe reliability. As the reliability and the stringency of the decision-making rule increased, such errors decreased. The findings are particularly relevant to those who use a multi-tiered response-to-intervention model for evaluating formative changes associated with instructional intervention and evaluating responsiveness to intervention across multiple tiers of intervention.
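The common 4-point decision rule referred to above can be sketched directly; the aimline and monitoring scores below are hypothetical:

```python
def nonresponsive(scores, aimline):
    """Common 4-point CBM decision rule: flag a student as nonresponsive if
    4 consecutive monitoring scores fall below the goal (aim) line."""
    below = 0
    for observed, expected in zip(scores, aimline):
        below = below + 1 if observed < expected else 0
        if below == 4:
            return True
    return False

# Aimline modelling two words of growth per week from a baseline of 20.
aim = [20 + 2 * week for week in range(8)]
flagged = nonresponsive([21, 23, 25, 24, 25, 27, 28, 29], aim)
```

Because each weekly score carries measurement error, a student whose true growth matches the aimline can still trip this rule by chance, which is the false-identification problem the study quantifies.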
APA, Harvard, Vancouver, ISO, and other styles
45

Kioutsioukis, I., and S. Galmarini. "De praeceptis ferendis: good practice in multi-model ensembles." Atmospheric Chemistry and Physics 14, no. 21 (November 11, 2014): 11791–815. http://dx.doi.org/10.5194/acp-14-11791-2014.

Full text
Abstract:
Ensembles of air quality models have been formally and empirically shown to outperform single models in many cases. Evidence suggests that ensemble error is reduced when the members form a diverse and accurate ensemble. Diversity and accuracy are hence two factors that should be taken care of while designing ensembles in order for them to provide better predictions. Theoretical aspects like the bias–variance–covariance decomposition and the accuracy–diversity decomposition are linked together and support the importance of creating an ensemble that incorporates both these elements. Hence, the common practice of unconditionally averaging models without prior manipulation limits the advantages of ensemble averaging. We demonstrate the importance of ensemble accuracy and diversity through an inter-comparison of ensemble products for which a sound mathematical framework exists, and provide specific recommendations for model selection and weighting for multi-model ensembles. The sophisticated ensemble averaging techniques, following proper training, were shown to have higher skill across all distribution bins compared to solely averaging the ensemble forecasts.
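The accuracy–diversity (ambiguity) decomposition behind this argument can be demonstrated numerically: the squared error of the ensemble mean never exceeds the average member squared error, and the gap equals the members' diversity. The forecasts below are made up:

```python
def ensemble_gain(member_preds, truth):
    """Compare the average member MSE with the MSE of the ensemble mean.
    By the ambiguity decomposition, ensemble MSE <= average member MSE."""
    m = len(member_preds)
    n = len(truth)
    ens = [sum(p[i] for p in member_preds) / m for i in range(n)]  # mean forecast

    def mse(pred):
        return sum((a - b) ** 2 for a, b in zip(pred, truth)) / n

    avg_member_mse = sum(mse(p) for p in member_preds) / m
    return avg_member_mse, mse(ens)

# Three diverse (error-decorrelated) hypothetical forecasts of the same signal.
truth = [1.0, 2.0, 3.0]
avg_mse, ens_mse = ensemble_gain(
    [[1.2, 2.1, 2.8], [0.8, 1.8, 3.3], [1.1, 2.2, 2.9]], truth)
```

Because the members err in different directions, their mean cancels much of the individual error, which is exactly why unconditional averaging of a non-diverse ensemble forgoes most of the potential gain.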
APA, Harvard, Vancouver, ISO, and other styles
46

Zhao, Lei, and Shu Gui Liu. "Mathematical Model and Error Analysis of Coordinate Measuring Arm with Revo Revolving Body." Applied Mechanics and Materials 52-54 (March 2011): 156–61. http://dx.doi.org/10.4028/www.scientific.net/amm.52-54.156.

Full text
Abstract:
A new type of coordinate measuring arm with a Revo revolving body, which can realize quick measuring in a spherical domain, has higher stability, faster measuring speed and higher accuracy than a common coordinate measuring arm. We use the method of space coordinate transformation to solve the problem that the transformation matrix from the Revo body to the test head cannot be obtained by the DH method, and apply DH theory to build the mathematical model of the system. The system model is verified by sketching. The error model is built, and the effect of each error source on measurement accuracy is analyzed in depth. This presents a theoretical foundation for further research on improving the accuracy of this new type of coordinate measuring arm.
APA, Harvard, Vancouver, ISO, and other styles
47

Putkonen, Jaakko, and Terry Swanson. "Accuracy of cosmogenic ages for moraines." Quaternary Research 59, no. 2 (March 2003): 255–61. http://dx.doi.org/10.1016/s0033-5894(03)00006-1.

Full text
Abstract:
Analyses of all published cosmogenic exposure ages for moraine boulders show an average age range of 38% between the oldest and youngest boulders from each moraine. This range conflicts with the common assumption that the ages of surface boulders are the same as the age of the landform. The wide spread in boulder ages is caused by erosion of the moraine surface and the consequent exhumation of fresh boulders. A diffusion model of surface degradation explains the age range and shows that a randomly sampled small set of boulders (n = 3–7) will always yield a lower age limit for the moraine. The model indicates that for identical dating accuracy, six to seven boulders are needed from old and tall moraines (40,000–100,000 yr, 50–100 m initial height) but only one to four boulders from small moraines (20,000–100,000 yr, 10–20 m). By following these guidelines the oldest obtained boulder age will be ≥90% of the moraine age (95% probability). This result is only weakly sensitive to a broad range of soil erosion rates. Our analysis of published boulder ages indicates that <3% of all moraine boulders have prior exposure, and 85% of these boulders predate the dated moraine.
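The headline statistic, the percent age range between the oldest and youngest boulder on a moraine, is a one-liner (the ages below are invented to reproduce a 38% spread):

```python
def moraine_age_spread(ages):
    """Percent spread between the oldest and youngest boulder exposure ages
    on one moraine, relative to the oldest age."""
    oldest, youngest = max(ages), min(ages)
    return 100.0 * (oldest - youngest) / oldest

# Hypothetical exposure ages (yr) for four boulders sampled from one moraine.
spread = moraine_age_spread([20000, 17500, 14000, 12400])
```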
APA, Harvard, Vancouver, ISO, and other styles
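The sampling logic behind the boulder-count guideline can be illustrated with a toy Monte Carlo: if exhumation makes each boulder's apparent age some fraction of the true moraine age, the oldest of n sampled boulders approaches the true age as n grows. The uniform 60–100% spread below is an arbitrary assumption for illustration, not the paper's diffusion model.

```python
import random

def simulate_max_age_fraction(n_boulders, n_trials=10_000, seed=0):
    """Toy Monte Carlo: each boulder's apparent exposure age is a
    random fraction of the true moraine age (here uniform on
    [0.6, 1.0], a made-up spread). We take the OLDEST of n sampled
    boulders as the age estimate and return how often it reaches
    >= 90% of the true age."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        oldest = max(rng.uniform(0.6, 1.0) for _ in range(n_boulders))
        if oldest >= 0.9:
            hits += 1
    return hits / n_trials

# Sampling more boulders raises the chance that the oldest one
# lies close to the true moraine age.
p1 = simulate_max_age_fraction(1)
p7 = simulate_max_age_fraction(7)
```

Under this toy model the probability is 1 − 0.75^n analytically, so seven boulders do far better than one, mirroring the paper's qualitative conclusion.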
48

Yao, Yuhua, Huimin Xu, Manzhi Li, Zhaohui Qi, and Bo Liao. "Recent Advances on Prediction of Human Papillomaviruses Risk Types." Current Drug Metabolism 20, no. 3 (May 22, 2019): 236–43. http://dx.doi.org/10.2174/1389200220666190118110012.

Full text
Abstract:
Background: Some studies have shown that Human Papillomavirus (HPV) is strongly associated with cervical cancer. Cervical cancer remains the fourth most common cancer affecting women worldwide. Thus, it is both challenging and essential to detect high-risk types of human papillomaviruses. Methods: To discriminate whether an HPV type is high-risk or not, many epidemiological and experimental methods have been proposed recently. For HPV risk type prediction, there have also been a few computational studies, all based on Machine Learning (ML) techniques but adopting different feature extraction methods. Therefore, we summarize and discuss several classical approaches that have achieved good results for HPV risk type prediction. Results: This review summarizes the common methods for detecting human papillomavirus. The main methods are sequence-derived features, text-based classification, the gap-kernel method, ensemble SVM, the word statistical model, the position-specific statistical model, and the mismatch kernel method (SVM). Among these methods, the position-specific statistical model achieves a relatively high accuracy rate (accuracy = 97.18%). The word statistical model is also a novel approach, which extracts information about HPV from the protein "sequence space" to predict high-risk HPV types (accuracy = 95.59%). These methods could potentially be used to improve prediction of high-risk HPV types. Conclusion: Judging from the prediction accuracies, classification results are more accurate when mathematical models are established. Thus, adopting mathematical methods to predict HPV risk types will be a main goal of future research.
APA, Harvard, Vancouver, ISO, and other styles
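The "word statistical model" idea the abstract mentions treats a biological sequence as a bag of length-k substrings (k-mers) whose frequencies feed a classifier. A minimal sketch of that feature extraction step, with a made-up toy sequence (not data from the review):

```python
from collections import Counter

def kmer_features(sequence, k=3):
    """Represent a protein/DNA sequence by the relative frequencies of
    its length-k substrings (k-mers). The resulting feature vector can
    then feed a classifier such as an SVM."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

# Toy example: "ACGTACGT" has six overlapping 3-mers,
# with ACG and CGT each appearing twice.
feats = kmer_features("ACGTACGT", k=3)
```

Real HPV risk-type classifiers in the reviewed work use richer, position-specific statistics; this sketch only shows the basic word-counting representation they build on.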
49

Krishnan, Surenthiran, Pritheega Magalingam, and Roslina Ibrahim. "Hybrid deep learning model using recurrent neural network and gated recurrent unit for heart disease prediction." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5467. http://dx.doi.org/10.11591/ijece.v11i6.pp5467-5476.

Full text
Abstract:
This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) combined with multiple gated recurrent units (GRU), long short-term memory (LSTM), and the Adam optimizer. The proposed model achieved an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating the RNN with multiple GRUs in Keras, with TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent existing RNN models have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of the existing models are low accuracy due to the complex build-up of the neural network, a high number of redundant neurons in the network model, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models, and results showed that the proposed model using an RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance. This is the highest accuracy result for an RNN using the Cleveland dataset and is promising for making early heart disease predictions for patients.
APA, Harvard, Vancouver, ISO, and other styles
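SMOTE, which the abstract credits for handling the imbalanced Cleveland dataset, synthesizes new minority-class samples by interpolating between existing ones. A minimal SMOTE-style sketch (interpolating between random minority pairs rather than k-nearest neighbors, and not the paper's exact pipeline):

```python
import random

def smote_like_oversample(minority, n_new, seed=0):
    """Simplified SMOTE-style oversampling: create each synthetic
    minority sample by linearly interpolating between two randomly
    chosen minority points. (Full SMOTE interpolates toward one of a
    point's k nearest neighbors.)"""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

# Toy 2-D minority class (made-up points, not the Cleveland data).
minority = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4)]
new_points = smote_like_oversample(minority, 5)
```

Because each synthetic point lies on a segment between two real minority points, the new samples stay inside the minority region instead of merely duplicating records, which is what lets the classifier see a balanced training set without overfitting to repeats.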
50

Lepage, Kyle Q., Mark A. Kramer, and Uri T. Eden. "Some Sampling Properties of Common Phase Estimators." Neural Computation 25, no. 4 (April 2013): 901–21. http://dx.doi.org/10.1162/neco_a_00422.

Full text
Abstract:
The instantaneous phase of neural rhythms is important to many neuroscience-related studies. In this letter, we show that the statistical sampling properties of three instantaneous phase estimators commonly employed to analyze neuroscience data share common features, allowing an analytical investigation into their behavior. These three phase estimators—the Hilbert, complex Morlet, and discrete Fourier transform—are each shown to maximize the likelihood of the data, assuming the observation of different neural signals. This connection, explored with the use of a geometric argument, is used to describe the bias and variance properties of each of the phase estimators, their temporal dependence, and the effect of model misspecification. This analysis suggests how prior knowledge about a rhythmic signal can be used to improve the accuracy of phase estimates.
APA, Harvard, Vancouver, ISO, and other styles
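One of the three estimators discussed, the discrete Fourier transform, reads a rhythm's instantaneous phase off a single DFT coefficient. A self-contained sketch on a synthetic cosine with a known phase (the signal parameters are illustrative, not from the paper):

```python
import cmath
import math

def dft_phase(signal, k):
    """Estimate the phase of frequency bin k from the single DFT
    coefficient X[k] = sum_t x[t] * exp(-2*pi*i*k*t/N)."""
    n = len(signal)
    coeff = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(signal))
    return cmath.phase(coeff)

# A cosine with a known phase: x[t] = cos(2*pi*3*t/64 + 0.5).
n, k, true_phase = 64, 3, 0.5
sig = [math.cos(2 * math.pi * k * t / n + true_phase) for t in range(n)]
est = dft_phase(sig, k)  # recovers approximately 0.5
```

For a pure sinusoid exactly on bin k this recovers the phase to machine precision; the letter's contribution is characterizing the bias and variance of such estimators when the signal model is misspecified or noisy.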