Journal articles on the topic 'Linear prediction'

To see the other types of publications on this topic, follow the link: Linear prediction.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Linear prediction.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Matsumoto, T. "Online Bayesian Modeling and Prediction of Nonlinear Systems." Methods of Information in Medicine 46, no. 02 (2007): 96–101. http://dx.doi.org/10.1055/s-0038-1625379.

Abstract:
Objectives: Given time-series data from an unknown target system, one often wants to build a model for the system behind the data and make predictions. If the target system can be assumed to be linear, there are established means of modeling and predicting it. If, however, one cannot assume the system is linear, linear theories have natural limitations in terms of modeling and predictive capability. This paper attempts to construct a model from time-series data and make online predictions when the linearity assumption is not valid. Methods: The problem is formulated within a Bayesian framework implemented by the Sequential Monte Carlo method. Online Bayesian learning/prediction requires computation of a posterior distribution in a sequential manner as each datum arrives. The Sequential Monte Carlo method computes importance weights in order to draw samples from the posterior distribution. The scheme is tested against time-series data from a noisy Roessler system. Results: The test time-series data are the x-coordinate of the trajectory generated by a noisy Roessler system. Attempts are made at online reconstruction of the attractor and online prediction of the time-series data. Conclusions: The proposed algorithm appears to be functional. The algorithm should be tested against real-world data.
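For readers who want a concrete feel for the approach this abstract describes, the sketch below shows a bootstrap particle filter: particles are propagated through an assumed transition model, importance weights are updated from the observation likelihood, and a one-step-ahead prediction is read off before each new datum is absorbed. The random-walk state model, noise levels and synthetic series are illustrative assumptions, not the paper's model of the noisy Roessler system.

```python
# Minimal sequential Monte Carlo sketch (bootstrap particle filter) for online
# one-step-ahead prediction of a noisy time series. All model choices are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_predict(y, n_particles=500, proc_std=0.1, obs_std=0.5):
    """Return one-step-ahead predictions for the series y."""
    particles = rng.normal(y[0], 1.0, size=n_particles)      # initial particle cloud
    weights = np.full(n_particles, 1.0 / n_particles)
    predictions = []
    for obs in y:
        # propagate particles through the assumed random-walk transition
        particles = particles + rng.normal(0.0, proc_std, size=n_particles)
        # predict before the new observation is incorporated
        predictions.append(np.sum(weights * particles))
        # importance weights from the Gaussian observation likelihood
        weights *= np.exp(-0.5 * ((obs - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # resample when the effective sample size collapses
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(predictions)

# synthetic data standing in for the x-coordinate of a noisy attractor
t = np.linspace(0, 20, 400)
series = np.sin(t) + rng.normal(0, 0.2, t.size)
pred = particle_filter_predict(series)
print("mean absolute one-step error:", np.mean(np.abs(pred - series)))
```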
2

Rather, Akhter Mohiuddin. "A Hybrid Intelligent Method of Predicting Stock Returns." Advances in Artificial Neural Systems 2014 (September 7, 2014): 1–7. http://dx.doi.org/10.1155/2014/246487.

Abstract:
This paper proposes a novel method for predicting stock returns by means of a hybrid intelligent model. Initial predictions are obtained from a linear model; the resulting prediction errors are collected and fed into a recurrent neural network, specifically an autoregressive moving reference neural network. The recurrent neural network minimizes the prediction errors thanks to its nonlinear processing and its configuration. These prediction errors are used to obtain final predictions by a summation method as well as by a multiplication method. The proposed model is thus a hybrid of a linear and a nonlinear model. The model has been tested on stock data obtained from the National Stock Exchange of India. The results indicate that the proposed model can be a promising approach to predicting future stock movements.
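As a rough illustration of the hybrid idea summarised above, the sketch below fits a linear model to lagged returns, fits a second model to the linear model's errors, and combines the two by summation. A scikit-learn MLP stands in for the paper's autoregressive moving reference recurrent network, and the synthetic return series and five-lag window are assumptions.

```python
# Sketch of a hybrid linear + nonlinear-residual predictor (summation method).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
returns = rng.normal(0, 1, 300).cumsum() * 0.01          # stand-in for stock returns
lags, X, y = 5, [], []
for i in range(lags, len(returns)):
    X.append(returns[i - lags:i])
    y.append(returns[i])
X, y = np.array(X), np.array(y)

linear = LinearRegression().fit(X, y)
residuals = y - linear.predict(X)                         # errors of the linear stage

nonlinear = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, residuals)

# final prediction = linear forecast + nonlinear correction of its errors
hybrid = linear.predict(X) + nonlinear.predict(X)
print("linear MSE :", np.mean((y - linear.predict(X)) ** 2))
print("hybrid MSE :", np.mean((y - hybrid) ** 2))
```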
3

Bai, Chao, and Haiqi Li. "Simultaneous prediction in the generalized linear model." Open Mathematics 16, no. 1 (August 24, 2018): 1037–47. http://dx.doi.org/10.1515/math-2018-0087.

Abstract:
This paper studies prediction based on a composite target function that allows one to simultaneously predict the actual and the mean values of the unobserved regressand in the generalized linear model. The best linear unbiased prediction (BLUP) of the target function is derived. Studies show that our BLUP has better properties than some other predictions. Simulations confirm its better finite-sample performance.
4

den Brinker, Albertus C., Harish Krishnamoorthi, and Evgeny A. Verbitskiy. "Similarities and Differences Between Warped Linear Prediction and Laguerre Linear Prediction." IEEE Transactions on Audio, Speech, and Language Processing 19, no. 1 (January 2011): 24–33. http://dx.doi.org/10.1109/tasl.2010.2042130.

5

Sharifi, Alireza, Yagob Dinpashoh, and Rasoul Mirabbasi. "Daily runoff prediction using the linear and non-linear models." Water Science and Technology 76, no. 4 (April 28, 2017): 793–805. http://dx.doi.org/10.2166/wst.2017.234.

Abstract:
Runoff prediction, as a nonlinear and complex process, is essential for designing canals, water management and planning, flood control and predicting soil erosion. There are a number of techniques for runoff prediction based on the hydro-meteorological and geomorphological variables. In recent years, several soft computing techniques have been developed to predict runoff. There are some challenging issues in runoff modeling including the selection of appropriate inputs and determination of the optimum length of training and testing data sets. In this study, the gamma test (GT), forward selection and factor analysis were used to determine the best input combination. In addition, GT was applied to determine the optimum length of training and testing data sets. Results showed the input combination based on the GT method with five variables has better performance than other combinations. For modeling, among four techniques: artificial neural networks, local linear regression, an adaptive neural-based fuzzy inference system and support vector machine (SVM), results indicated the performance of the SVM model is better than other techniques for runoff prediction in the Amameh watershed.
6

Möst, Lisa, Matthias Schmid, Florian Faschingbauer, and Torsten Hothorn. "Predicting birth weight with conditionally linear transformation models." Statistical Methods in Medical Research 25, no. 6 (September 30, 2016): 2781–810. http://dx.doi.org/10.1177/0962280214532745.

Abstract:
Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs.
7

Li, Tongxin, Ruixiao Yang, Guannan Qu, Guanya Shi, Chenkai Yu, Adam Wierman, and Steven Low. "Robustness and Consistency in Linear Quadratic Control with Untrusted Predictions." ACM SIGMETRICS Performance Evaluation Review 50, no. 1 (June 20, 2022): 107–8. http://dx.doi.org/10.1145/3547353.3522658.

Abstract:
We study the problem of learning-augmented predictive linear quadratic control. Our goal is to design a controller that balances "consistency", which measures the competitive ratio when predictions are accurate, and "robustness", which bounds the competitive ratio when predictions are inaccurate. We propose a novel λ-confident controller and prove that it maintains a competitive ratio upper bound of 1 + min{O(λ²ε) + O((1 − λ)²), O(1) + O(λ²)}, where λ ∈ [0,1] is a trust parameter set based on the confidence in the predictions, and ε is the prediction error. Further, motivated by online learning methods, we design a self-tuning policy that adaptively learns the trust parameter λ with a competitive ratio that depends on ε and the variation of system perturbations and predictions. We show that its competitive ratio is bounded from above by 1 + O(ε)/(Θ(1) + Θ(ε)) + O(μVar), where μVar measures the variation of perturbations and predictions. It implies that by automatically adjusting the trust parameter online, the self-tuning scheme ensures a competitive ratio that does not scale up with the prediction error ε.
8

Sreehari, E., and Pradeep G. S. Ghantasala. "Climate Changes Prediction Using Simple Linear Regression." Journal of Computational and Theoretical Nanoscience 16, no. 2 (February 1, 2019): 655–58. http://dx.doi.org/10.1166/jctn.2019.7785.

Abstract:
The rise in global temperatures, frequent natural disasters, rising sea levels and shrinking polar regions have made understanding and predicting global climate phenomena a pressing problem. Prediction is a matter of prime importance, and computer simulations are run to predict climate variables such as temperature, precipitation and rainfall. India is an agricultural country in which around 60% of the people depend upon agriculture. Rainfall prediction is therefore an important task: early prediction of rainfall can help farmers as well as the wider population. This paper presents a simple linear regression technique for the early prediction of rainfall. It can help farmers take appropriate decisions on crop yields, and it may also offer scope to analyze the occurrence of floods or droughts. The simple linear regression methodology was applied to a dataset collected over six years at Coonoor in the Nilgiris district of Tamil Nadu state. The experiments show that the simple linear regression methodology yields appropriate results for rainfall prediction.
9

Gardner, William R. "Burst excited linear prediction." Journal of the Acoustical Society of America 102, no. 6 (1997): 3250. http://dx.doi.org/10.1121/1.419565.

10

Jimenez, Daniel A. "Piecewise Linear Branch Prediction." ACM SIGARCH Computer Architecture News 33, no. 2 (May 2005): 382–93. http://dx.doi.org/10.1145/1080695.1070002.

11

Magi, Carlo, Jouni Pohjalainen, Tom Bäckström, and Paavo Alku. "Stabilised weighted linear prediction." Speech Communication 51, no. 5 (May 2009): 401–11. http://dx.doi.org/10.1016/j.specom.2008.12.005.

12

Kauppinen, Jyrki K., Pekka E. Saarinen, and Matti R. Hollberg. "Linear prediction in spectroscopy." Journal of Molecular Structure 324, no. 1-2 (July 1994): 61–74. http://dx.doi.org/10.1016/0022-2860(94)08227-8.

13

Nourani, Vahid, Selin Uzelaltinbulat, Fahreddin Sadikoglu, and Nazanin Behfar. "Artificial Intelligence Based Ensemble Modeling for Multi-Station Prediction of Precipitation." Atmosphere 10, no. 2 (February 15, 2019): 80. http://dx.doi.org/10.3390/atmos10020080.

Abstract:
The aim of ensemble precipitation prediction in this paper was to achieve the best performance via artificial intelligence (AI) based modeling. To this end, ensemble AI based modeling was proposed for the prediction of monthly precipitation with three different AI models (feed forward neural network, FFNN; adaptive neural fuzzy inference system, ANFIS; and least square support vector machine, LSSVM) for the seven stations located in the Turkish Republic of Northern Cyprus (TRNC). Two scenarios were examined, each with a specific input set. Scenario 1 was developed for predicting each station's precipitation from its own data at previous time steps, while in scenario 2 the central station's data were imposed into the models, in addition to each station's data, as exogenous input. Afterwards, ensemble modeling was applied to improve the performance of the precipitation predictions. For this purpose, two linear and one non-linear ensemble techniques were used and the obtained outcomes were compared. In terms of efficiency measures, the averaging methods employing scenario 2 and the non-linear ensemble method revealed higher prediction efficiency. Also, in terms of skill score, the non-linear neural ensemble method could enhance prediction efficiency by up to 44% in the verification step.
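The ensemble step described in this abstract can be illustrated with a few lines of code: predictions from several base models are combined by simple averaging, by error-weighted averaging, and by a small neural combiner. The synthetic data and the three stand-in base models are assumptions; they replace the FFNN, ANFIS and LSSVM models used in the study.

```python
# Sketch of linear and non-linear ensembling of several base predictions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
truth = rng.gamma(2.0, 30.0, 120)                       # stand-in for monthly precipitation (mm)
# three base "models" of different quality, simulated as noisy copies of the truth
preds = np.stack([truth + rng.normal(0, s, truth.size) for s in (8, 12, 20)], axis=1)

simple_avg = preds.mean(axis=1)                          # linear ensemble 1: simple average
mse = ((preds - truth[:, None]) ** 2).mean(axis=0)       # per-model error
weights = (1.0 / mse) / (1.0 / mse).sum()
weighted_avg = preds @ weights                           # linear ensemble 2: error-weighted average
combiner = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
nonlinear_ens = combiner.fit(preds, truth).predict(preds)  # non-linear neural ensemble

for name, p in [("simple", simple_avg), ("weighted", weighted_avg), ("non-linear", nonlinear_ens)]:
    print(f"{name} ensemble RMSE: {np.sqrt(np.mean((truth - p) ** 2)):.2f}")
```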
14

Genç, Onur, Bilal Gonen, and Mehmet Ardıçlıoğlu. "A comparative evaluation of shear stress modeling based on machine learning methods in small streams." Journal of Hydroinformatics 17, no. 5 (April 28, 2015): 805–16. http://dx.doi.org/10.2166/hydro.2015.142.

Abstract:
Predicting shear stress distribution has proved to be a critical problem to solve. Hence, the basic objective of this paper is to develop predictions of shear stress distribution with machine learning algorithms, including artificial neural networks, classification and regression trees, and generalized linear models. The data set, which is large and feature-rich, is utilized to improve machine learning-based predictive models and extract the most important predictive factors. The 10-fold cross-validation approach was used to determine the performance of the prediction methods. The predictive performances of the proposed models were found to be very close to each other. However, the results indicated that the artificial neural network, which has an R value of 0.92 ± 0.03, achieved the best overall performance on the 10-fold holdout sample. The predictions of all machine learning models were well correlated with the measurement data.
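The evaluation protocol mentioned here, comparing several regressors with 10-fold cross-validation, looks roughly like the sketch below. The synthetic hydraulic features and the particular scikit-learn estimators are assumptions standing in for the study's models and data.

```python
# Sketch of a 10-fold cross-validation comparison of several regressors.
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (200, 4))                          # e.g. depth, velocity, slope, width
y = 3 * X[:, 0] + np.sin(4 * X[:, 1]) + 0.1 * rng.normal(size=200)

models = {
    "ANN":  MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
    "CART": DecisionTreeRegressor(max_depth=5, random_state=0),
    "GLM":  LinearRegression(),
}
cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```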
15

Reza, Md Selim, Huiling Zhang, Md Tofazzal Hossain, Langxi Jin, Shengzhong Feng, and Yanjie Wei. "COMTOP: Protein Residue–Residue Contact Prediction through Mixed Integer Linear Optimization." Membranes 11, no. 7 (June 30, 2021): 503. http://dx.doi.org/10.3390/membranes11070503.

Abstract:
Protein contact prediction helps reconstruct the tertiary structure that greatly determines a protein’s function; therefore, contact prediction from the sequence is an important problem. Recently there has been exciting progress on this problem, but many of the existing methods still offer only low prediction accuracy. In this paper, we present a new mixed integer linear programming (MILP)-based consensus method: a Consensus scheme based On a Mixed integer linear opTimization method for prOtein contact Prediction (COMTOP). The MILP-based consensus method combines the strengths of seven selected protein contact prediction methods, including CCMpred, EVfold, DeepCov, NNcon, PconsC4, plmDCA, and PSICOV, by optimizing the number of correctly predicted contacts and achieving a better prediction accuracy. The proposed hybrid protein residue–residue contact prediction scheme was tested in four independent test sets. For 239 highly non-redundant proteins, the method showed a prediction accuracy of 59.68%, 70.79%, 78.86%, 89.04%, 94.51%, and 97.35% for top-5L, top-3L, top-2L, top-L, top-L/2, and top-L/5 contacts, respectively. When tested on the CASP13 and CASP14 test sets, the proposed method obtained accuracies of 75.91% and 77.49% for top-L/5 predictions, respectively. COMTOP was further tested on 57 non-redundant α-helical transmembrane proteins and achieved prediction accuracies of 64.34% and 73.91% for top-L/2 and top-L/5 predictions, respectively. For all test datasets, the improvement of COMTOP in accuracy over the seven individual methods increased with the increasing number of predicted contacts. For example, COMTOP performed much better for large numbers of contact predictions (such as top-5L and top-3L) than for small numbers of contact predictions such as top-L/2 and top-L/5. The results and analysis demonstrate that COMTOP can significantly improve the performance of the individual methods; therefore, COMTOP is more robust against different types of test sets. COMTOP also showed better/comparable predictions when compared with the state-of-the-art predictors.
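To make the mixed integer linear programming step more tangible, the toy sketch below selects a fixed number of candidate contacts so that the summed scores of several base predictors are maximised, using binary decision variables and the PuLP modeller. The random score matrix, the selection budget and this bare-bones objective are assumptions; the paper's actual MILP formulation is considerably richer.

```python
# Toy MILP consensus: pick n_select candidates maximising summed predictor scores.
import numpy as np
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

rng = np.random.default_rng(4)
n_candidates, n_methods, n_select = 40, 7, 10
votes = rng.random((n_candidates, n_methods))            # scores from 7 base predictors
consensus = votes.sum(axis=1).tolist()                   # per-candidate consensus score

prob = LpProblem("contact_consensus", LpMaximize)
x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(n_candidates)]
prob += lpSum(consensus[i] * x[i] for i in range(n_candidates))   # maximise agreement
prob += lpSum(x) == n_select                                      # pick exactly n_select contacts
prob.solve(PULP_CBC_CMD(msg=False))

chosen = [i for i in range(n_candidates) if x[i].value() == 1]
print("selected candidate contacts:", chosen)
```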
16

Besalatpour, A., M. Hajabbasi, S. Ayoubi, A. Gharipour, and A. Jazi. "Prediction of soil physical properties by optimized support vector machines." International Agrophysics 26, no. 2 (April 1, 2012): 109–15. http://dx.doi.org/10.2478/v10247-012-0017-7.

Abstract:
The potential use of optimized support vector machines with a simulated annealing algorithm in developing prediction functions for estimating soil aggregate stability and soil shear strength was evaluated. The predictive capabilities of support vector machines in comparison with traditional regression prediction functions were also studied. In the results, the support vector machines achieved greater accuracy in predicting both soil shear strength and soil aggregate stability compared to traditional multiple-linear regression. The coefficient of correlation (R) between the measured and predicted soil shear strength values using the support vector machine model was 0.98, while it was 0.52 using the multiple-linear regression model. Furthermore, a lower mean square error value of 0.06 was obtained using the support vector machine model in the prediction of soil shear strength as compared to the multiple-linear regression model. The ERROR% value for soil aggregate stability prediction using the multiple-linear regression model was 14.59%, while a lower ERROR% value of 4.29% was observed for the support vector machine model. The mean square error values for soil aggregate stability prediction using the multiple-linear regression and support vector machine models were 0.001 and 0.012, respectively. It appears that utilization of the optimized support vector machine approach with a simulated annealing algorithm in developing soil property prediction functions could be a suitable alternative to commonly used regression methods.
17

Krisna, G. Saminath, and R. Indra Gandhi. "Educational Institute Future Intake Prediction System Based on Linear SVC." International Journal of Trend in Scientific Research and Development 2, no. 3 (April 30, 2018): 978–81. http://dx.doi.org/10.31142/ijtsrd11174.

18

Hasibuan, Lilis Harianti, and Syarto Musthofa. "Penerapan Metode Regresi Linear Sederhana Untuk Prediksi Harga Beras di Kota Padang [Application of the Simple Linear Regression Method for Predicting Rice Prices in Padang City]." JOSTECH: Journal of Science and Technology 2, no. 1 (March 31, 2022): 85–95. http://dx.doi.org/10.15548/jostech.v2i1.3802.

Abstract:
The purpose of this research is to obtain predictions of rice prices. Linear regression is used as the method for predicting the rice price in the next period X(t). In this study, the actual rice price Y(t) is the effect variable and the time period is the causal variable. The linear regression equation obtained is Y' = 13562.561 + 9.041958X. The accuracy of the prediction results was tested using RMSE, giving a value of 0.126. The prediction of rice prices using the linear regression method can be said to be in the very good category; the RMSE value in the test is very small and meets the standard.
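A minimal version of the simple linear regression workflow described above, fitting price against the time index and reporting RMSE, is sketched below. The example prices are made up; the study's fitted equation was Y' = 13562.561 + 9.041958X with an RMSE of 0.126.

```python
# Sketch: fit Y' = a + bX by least squares and report RMSE of the fit.
import numpy as np

months = np.arange(1, 13)                                 # time period X(t)
price = 13500 + 9.0 * months + np.random.default_rng(5).normal(0, 25, 12)  # made-up rice prices Y(t)

slope, intercept = np.polyfit(months, price, 1)           # least-squares fit
fitted = intercept + slope * months
rmse = np.sqrt(np.mean((price - fitted) ** 2))
print(f"Y' = {intercept:.3f} + {slope:.6f}X, RMSE = {rmse:.3f}")

next_month = 13
print("predicted price for period 13:", intercept + slope * next_month)
```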
19

Islam, Md Sariful, and Thomas W. Crawford. "Assessment of Spatio-Temporal Empirical Forecasting Performance of Future Shoreline Positions." Remote Sensing 14, no. 24 (December 16, 2022): 6364. http://dx.doi.org/10.3390/rs14246364.

Abstract:
Coasts and coastlines in many parts of the world are highly dynamic in nature, where large changes in the shoreline position can occur due to natural and anthropogenic influences. The prediction of future shoreline positions is of great importance for better planning and management of coastal areas. With an aim to assess different methods of prediction, this study investigates the performance of future shoreline position predictions by quantifying how prediction performance varies depending on the time depths of input historical shoreline data and the time horizons of predicted shorelines. Multi-temporal Landsat imagery, from 1988 to 2021, was used to quantify the rates of shoreline movement for different time periods. Predictions using simple extrapolation of the end point rate (EPR), linear regression rate (LRR), weighted linear regression rate (WLR), and the Kalman filter method were used to predict future shoreline positions. Root mean square error (RMSE) was used to assess prediction accuracies. For time depth, our results revealed that the more shorelines used in calculating and predicting shoreline change rates, the better the predictive performance. For the time horizon, prediction accuracies were substantially higher for the immediate future years (138 m/year) compared to the more distant future (152 m/year). Our results also demonstrated that the forecast performance varied temporally and spatially by time period and region. Though the study area is located in coastal Bangladesh, this study has the potential for forecasting applications to other deltas and vulnerable shorelines globally.
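Two of the change-rate estimators named above can be written down in a few lines: the end point rate (EPR) uses only the first and last shoreline, while the weighted linear regression rate (WLR) fits a line to all positions with weights derived from the positional uncertainty. The transect positions, uncertainties and the exact weighting convention below are assumptions for illustration.

```python
# Sketch of end point rate (EPR) and weighted linear regression rate (WLR) on one transect.
import numpy as np

years    = np.array([1988, 1995, 2003, 2010, 2016, 2021], dtype=float)
position = np.array([0.0, -40.0, -95.0, -150.0, -210.0, -260.0])   # metres, made-up transect
uncert   = np.array([15.0, 15.0, 10.0, 10.0, 5.0, 5.0])            # positional uncertainty, m

epr = (position[-1] - position[0]) / (years[-1] - years[0])         # end point rate
# np.polyfit weights the unsquared residuals, so w = 1/uncert weights
# squared residuals by 1/uncert**2 (an assumed weighting convention)
wlr, intercept = np.polyfit(years, position, 1, w=1.0 / uncert)

print(f"EPR = {epr:.2f} m/yr, WLR = {wlr:.2f} m/yr")
print("WLR-predicted position in 2031:", intercept + wlr * 2031.0)
```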
20

Wang, Bo, Muhammad Shahzad, Xianglin Zhu, Khalil Ur Rehman, and Saad Uddin. "A Non-linear Model Predictive Control Based on Grey-Wolf Optimization Using Least-Square Support Vector Machine for Product Concentration Control in l-Lysine Fermentation." Sensors 20, no. 11 (June 11, 2020): 3335. http://dx.doi.org/10.3390/s20113335.

Abstract:
l-Lysine is produced by a complex non-linear fermentation process. A non-linear model predictive control (NMPC) scheme is proposed to control product concentration in real time for enhancing production. However, product concentration cannot be directly measured in real time. Least-square support vector machine (LSSVM) is used to predict product concentration in real time. Grey-Wolf Optimization (GWO) algorithm is used to optimize the key model parameters (penalty factor and kernel width) of LSSVM for increasing its prediction accuracy (GWO-LSSVM). The proposed optimal prediction model is used as a process model in the non-linear model predictive control to predict product concentration. GWO is also used to solve the non-convex optimization problem in non-linear model predictive control (GWO-NMPC) for calculating optimal future inputs. The proposed GWO-based prediction model (GWO-LSSVM) and non-linear model predictive control (GWO-NMPC) are compared with the Particle Swarm Optimization (PSO)-based prediction model (PSO-LSSVM) and non-linear model predictive control (PSO-NMPC) to validate their effectiveness. The comparative results show that the prediction accuracy, adaptability, real-time tracking ability, overall error and control precision of GWO-based predictive control is better compared to PSO-based predictive control.
21

Zhu, Guang, and Ad Bax. "Improved linear prediction of damped NMR signals using modified “forward-backward” linear prediction." Journal of Magnetic Resonance (1969) 100, no. 1 (October 1992): 202–7. http://dx.doi.org/10.1016/0022-2364(92)90379-l.

22

Haberman, Shelby J. "Application of Best Linear Prediction and Penalized Best Linear Prediction to ETS Tests." ETS Research Report Series 2020, no. 1 (April 16, 2020): 1–25. http://dx.doi.org/10.1002/ets2.12290.

23

Yanto, Musli, Sigit Sanjaya, Yulasmi Yulasmi, Dodi Guswandi, and Syafri Arlis. "Implementation multiple linear regresion in neural network predict gold price." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 3 (June 1, 2021): 1635. http://dx.doi.org/10.11591/ijeecs.v22.i3.pp1635-1642.

Abstract:
The movement of gold prices in the previous period was crucial for investors; however, fluctuations in gold price movements always occur. The problem in this study is how to apply multiple linear regression (MRL) in artificial neural network (ANN) prediction of gold prices. MRL is a mathematical calculation technique used to measure the correlation between variables. The results of the MRL analysis ensure that the network pattern that is formed can provide precise and accurate prediction results. In addition, this study aims to develop an existing predictive pattern model. The correlation test using MRL gives a correlation of 62%, so the test results are said to have a significant effect on gold price movements. The predictions generated using an ANN have a mean squared error (MSE) value of 0.004264%. The benefit of this study is an overview of a gold price prediction pattern model obtained through learning and through testing the accuracy of the predictor variables.
24

Yu, Haipeng, Jaap Milgen, Egbert Knol, Rohan Fernando, and Jack C. Dekkers. "32 A Bayesian Hierarchical Model to Integrate Growth Models into Genomic Evaluation of Pigs." Journal of Animal Science 99, Supplement_3 (October 8, 2021): 18–19. http://dx.doi.org/10.1093/jas/skab235.030.

Abstract:
Abstract Genomic prediction has advanced genetic improvement by enabling more accurate estimates of breeding values at an early age. Although genomic prediction is efficient in predicting traits dominated by additive genetic effects within common settings, prediction in the presence of non-additive genetic effects and genotype by environmental interactions (GxE) remains a challenge. Previous studies have attempted to address these challenges by statistical modeling, while the augmentation of statistical models with biological information has received relatively little attention. A pig growth model assumes growth performance is a nonlinear functional interaction between the animal’s genetic potential for underlying latent growth traits and environmental factors and has the potential to capture GxE and non-additive genetic effects. The objective of this study was to integrate a nonlinear stable Gompertz function of three latent growth traits and age into genomic prediction models using Bayesian hierarchical modeling. The three latent growth traits were modeled as a linear combination of systematic environmental, marker, and residual effects. The model was applied to daily body weight data from ~83 to ~186 days of age on 4,039 purebred boars that were genotyped for 24K markers. Bias and prediction accuracy of genomic predictions of selection candidates were assessed by extending the linear regression method of predictions based on part and whole data to a non-linear setting. The accuracy (bias) of genomic predictions was 0.58 (0.82), 0.46 (0.90), 0.54 (0.78), and 0.60 (0.84) for the three latent growth traits and average daily gain derived from integrated nonlinear model, respectively, compared to 0.58 (0.87) for genomic predictions of average daily gain using standard linear models. In subsequent work, the growth model will be extended to include daily feed intake and carcass composition data. Resulting models are expected to substantially advance genetic improvement in pigs across environments. Funded by USDA-NIFA grant # 2020-67015-31031.
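As a small illustration of the growth-curve building block mentioned in this abstract, the sketch below fits a Gompertz function of age to daily body-weight records with scipy. The parameterisation and the synthetic weights are assumptions; the study embeds such latent growth parameters in a Bayesian hierarchical genomic model rather than fitting animals one at a time.

```python
# Sketch: fit a Gompertz growth curve to synthetic daily body weights.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(age, mature_wt, b, k):
    """Gompertz curve: asymptotic weight, displacement b, rate k (all assumed)."""
    return mature_wt * np.exp(-b * np.exp(-k * age))

rng = np.random.default_rng(6)
age = np.arange(83, 187, dtype=float)                     # days on test, as in the abstract
true_wt = gompertz(age, 230.0, 4.5, 0.011)
observed = true_wt + rng.normal(0, 2.0, age.size)         # noisy daily body weights, kg

params, _ = curve_fit(gompertz, age, observed, p0=[200.0, 4.0, 0.01])
print("estimated (mature weight, b, k):", np.round(params, 4))
```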
25

Hanif, Mulia, Maman Abdurohman, and Aji Gautama Putrada. "Rice consumption prediction using linear regression method for smart rice box system." Jurnal Teknologi dan Sistem Komputer 8, no. 4 (May 25, 2020): 284–88. http://dx.doi.org/10.14710/jtsiskom.2020.13353.

Abstract:
Currently, the smart rice box has applied the Internet of Things (IoT) but without a prediction of when the rice runs out, which reflects the amount of rice consumption. This study applies linear regression to predict when the rice runs out in an IoT-based smart rice box and analyzes its performance. The prediction used a dataset obtained by measuring a smart rice box equipped with a load cell weight sensor and an Hx711 module. The weight sensor accuracy was an RMSE of between 56 and 170 grams. The linear regression method applied to the smart rice box to predict the rice running out has an MSE value of 0.2588 with a prediction window of 43 days. An R-squared value of less than one is obtained with a predictive threshold of 24 days.
26

Bijma, Piter, and John A. Woolliams. "Prediction of Rates of Inbreeding in Populations Selected on Best Linear Unbiased Prediction of Breeding Value." Genetics 156, no. 1 (September 1, 2000): 361–73. http://dx.doi.org/10.1093/genetics/156.1.361.

Abstract:
Predictions for the rate of inbreeding (ΔF) in populations with discrete generations undergoing selection on best linear unbiased prediction (BLUP) of breeding value were developed. Predictions were based on the concept of long-term genetic contributions using a recently established relationship between expected contributions and rates of inbreeding and a known procedure for predicting expected contributions. Expected contributions of individuals were predicted using a linear model, μi(x) = α + βsi, where si denotes the selective advantage as a deviation from the contemporaries, which was the sum of the breeding values of the individual and the breeding values of its mates. The accuracy of predictions was evaluated for a wide range of population and genetic parameters. Accurate predictions were obtained for populations of 5–20 sires. For 20–80 sires, systematic underprediction of on average 11% was found, which was shown to be related to the goodness of fit of the linear model. Using simulation, it was shown that a quadratic model would give accurate predictions for those schemes. Furthermore, it was shown that, contrary to random selection, ΔF less than halved when the number of parents was doubled and that in specific cases ΔF may increase with the number of dams.
27

Peng, Luna. "Stock Price Prediction of “Google” based on Machine Learning." BCP Business & Management 34 (December 14, 2022): 912–18. http://dx.doi.org/10.54691/bcpbm.v34i.3111.

Abstract:
By 2022, many countries had declared the end of the epidemic, which is both an opportunity and a challenge for many investors. More and more investors are manipulating prices to influence the stock market, so investors want to predict stock prices in order to make suitable investments. The author starts from the platform YouTube to study the price trend of this stock and makes predictions to analyze whether there are traces of the factors affecting the stock price, based on linear regression and random forest regression models. The author first backtested the price of this stock and analyzed the data according to the highest and lowest days. Then, the author used linear regression and random forest regression to predict the price. The error of the linear regression prediction results was within 5%, within the normal range, but the accuracy of the random forest regression 5-day prediction was much lower (65%). This shows that the linear regression stock price prediction model is more credible and is worthy of reference for investors.
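The comparison described in this abstract can be reproduced in spirit with the sketch below: a linear regression and a random forest are fitted to lagged closing prices and their out-of-sample percentage errors are compared on a held-out window. The synthetic price path, five-day lag window and 30-day test split are assumptions.

```python
# Sketch: linear regression vs. random forest on lagged prices, compared out of sample.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
price = 100 + np.cumsum(rng.normal(0.1, 1.0, 400))               # stand-in for daily closes

lags = 5
X = np.column_stack([price[i:len(price) - lags + i] for i in range(lags)])
y = price[lags:]
split = len(y) - 30                                              # hold out the last 30 days
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    mape = np.mean(np.abs(model.predict(X_te) - y_te) / y_te) * 100
    print(f"{name}: mean absolute percentage error = {mape:.2f}%")
```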
28

Schmitt, Thomas, Tobias Rodemann, and Jürgen Adamy. "The Cost of Photovoltaic Forecasting Errors in Microgrid Control with Peak Pricing." Energies 14, no. 9 (April 29, 2021): 2569. http://dx.doi.org/10.3390/en14092569.

Abstract:
Model predictive control (MPC) is widely used for microgrids or unit commitment due to its ability to respect the forecasts of loads and generation of renewable energies. However, while there are lots of approaches to accounting for uncertainties in these forecasts, their impact is rarely analyzed systematically. Here, we use a simplified linear state space model of a commercial building including a photovoltaic (PV) plant and real-world data from a 30 day period in 2020. PV predictions are derived from weather forecasts and industry peak pricing is assumed. The effect of prediction accuracy on the resulting cost is evaluated by multiple simulations with different prediction errors and initial conditions. Analysis shows a mainly linear correlation, while the exact shape depends on the treatment of predictions at the current time step. Furthermore, despite a time horizon of 24h, only the prediction accuracy of the first 75min was relevant for the presented setting.
29

Yuan, Xiaojun, Dake Chen, Cuihua Li, Lei Wang, and Wanqiu Wang. "Arctic Sea Ice Seasonal Prediction by a Linear Markov Model." Journal of Climate 29, no. 22 (October 26, 2016): 8151–73. http://dx.doi.org/10.1175/jcli-d-15-0858.1.

Abstract:
A linear Markov model has been developed to predict sea ice concentration (SIC) in the pan-Arctic region at intraseasonal to seasonal time scales, which represents an original effort to use a reduced-dimension statistical model in forecasting Arctic sea ice year-round. The model was built to capture covariabilities in the atmosphere–ocean–sea ice system defined by SIC, sea surface temperature, and surface air temperature. Multivariate empirical orthogonal functions of these variables served as building blocks of the model. A series of model experiments were carried out to determine the model's dimension. The predictive skill of the model was evaluated by anomaly correlation and root-mean-square errors in a cross-validated fashion. On average, the model is superior to the predictions by anomaly persistence, damped anomaly persistence, and climatology. The model shows good skill in predicting SIC anomalies within the Arctic basin during summer and fall. Long-term trends partially contribute to the model skill. However, the model still beats the anomaly persistence for all targeted seasons after linear trends are removed. In winter and spring, the predictability is found only in the seasonal ice zone. The model has higher anomaly correlation in the Atlantic sector than in the Pacific sector. The model predicts well the interannual variability of sea ice extent (SIE) but underestimates its accelerated long-term decline, resulting in a systematic model bias. This model bias can be reduced by the constant or linear regression bias corrections, leading to an improved correlation skill of 0.92 by the regression bias correction for the 2-month-lead September SIE prediction.
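A condensed sketch of the linear Markov idea follows: the fields are reduced with principal components (standing in for the multivariate EOFs), a linear operator A is estimated by least squares so that z(t+1) ≈ A z(t), and forecasts are produced by applying A repeatedly. The random data and the choice of 10 retained modes are assumptions.

```python
# Sketch of a reduced-dimension linear Markov model in principal-component space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
fields = rng.normal(size=(480, 300))           # 480 months x 300 grid points (SIC, SST, SAT stacked)

pca = PCA(n_components=10)
z = pca.fit_transform(fields)                   # principal-component time series

# least-squares Markov operator: minimise ||Z2 - Z1 A^T||
Z1, Z2 = z[:-1], z[1:]
A = np.linalg.lstsq(Z1, Z2, rcond=None)[0].T

def forecast(state, n_steps):
    """Propagate a PC-space state n_steps ahead with the Markov operator."""
    for _ in range(n_steps):
        state = A @ state
    return state

two_month_lead = forecast(z[-1], 2)             # e.g. a 2-month-lead prediction in PC space
print("predicted leading PCs:", np.round(two_month_lead[:3], 3))
predicted_field = pca.inverse_transform(two_month_lead[None, :])[0]   # back to physical space
```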
30

Han, F., X. Huang, E. Teye, H. Gu, H. Dai, and L. Yao. "A nondestructive method for fish freshness determination with electronic tongue combined with linear and non-linear multivariate algorithms." Czech Journal of Food Sciences 32, no. 6 (November 27, 2014): 532–37. http://dx.doi.org/10.17221/88/2014-cjfs.

Abstract:
An electronic tongue coupled with linear and non-linear multivariate algorithms was used to address the drawbacks of fish freshness detection. Parabramis pekinensis fish samples stored at 4°C were used. Total volatile basic nitrogen (TVB-N) and total viable count (TVC) of the samples were measured. Fisher linear discriminant analysis (Fisher LDA) and support vector machine (SVM) were applied comparatively to classify the samples stored for different numbers of days. The results revealed that the SVM model was better than the Fisher LDA model, with a higher identification rate of 97.22% in the prediction set. Partial least squares (PLS) and support vector regression (SVR) were applied comparatively to predict the TVB-N and TVC values. The quantitative models were evaluated by the root mean square error of prediction (RMSEP) and the correlation coefficient in the prediction set (Rpre). The results revealed that the SVR model was superior to the PLS model, with RMSEP = 5.65 mg/100 g and Rpre = 0.9491 for TVB-N prediction and RMSEP = 0.73 log CFU/g and Rpre = 0.904 for TVC prediction. This study demonstrated that the electronic tongue together with SVM and SVR has great potential for convenient and nondestructive detection of fish freshness.
31

Hemalatha, G., K. Srinivasa Rao, and D. Arun Kumar. "Weather Prediction using Advanced Machine Learning Techniques." Journal of Physics: Conference Series 2089, no. 1 (November 1, 2021): 012059. http://dx.doi.org/10.1088/1742-6596/2089/1/012059.

Abstract:
Prediction of weather conditions is important for making efficient decisions. In general, the relationship between the input weather parameters and the output weather condition is non-linear, and predicting weather conditions under a non-linear relationship poses a challenging task. Traditional methods of weather prediction sometimes deviate in predicting the weather conditions due to the non-linear relationship between the input features and the output condition. Motivated by this factor, we propose a neural network based model for weather prediction. The superiority of the proposed model is tested with weather data collected from the India Meteorological Department (IMD). The performance of the model is tested with various metrics.
32

Kim, Jae Hyun, Sang Woo Moon, Choongrak Kim, and Jiwoong Lee. "Pointwise Modeling for Predicting Visual Field Progression in Korean Glaucoma Patients." Journal of the Korean Ophthalmological Society 63, no. 11 (November 15, 2022): 918–27. http://dx.doi.org/10.3341/jkos.2022.63.11.918.

Abstract:
Purpose: To evaluate the utility of pointwise modeling for predicting visual field (VF) progression in Korean glaucoma patients. Methods: Open-angle glaucoma or glaucoma suspect patients with ≥ 10 VFs, who were followed up for ≥ 6 years, were included. Linear, exponential, and polynomial regression of threshold values at each test point against time were performed. Model fit was evaluated based on the root mean squared error (RMSE) for the entire longitudinal VF series. To evaluate prediction ability, VFs from the first 5 years were used to estimate model parameters, followed by calculation of threshold values for 1, 2, 3, and 5 years to obtain RMSE. Prediction ability was compared with regard to the initial threshold value and also the central and peripheral VF areas. Results: Four hundred thirty-nine eyes (280 patients) were included. The mean follow-up duration and number of VF tests were 9.64 years and 13.02, respectively. When fitting the entire VF series, the polynomial model had the lowest RMSE (p < 0.001). For 1-year predictions, the linear model had the lowest RMSE, while the exponential model had the lowest RMSE for 3- and 5-year predictions (p < 0.001). For 1- and 2-year predictions, the exponential and linear models had the lowest RMSEs, with initial sensitivities of 0-7 and 20-27 decibels (dB), respectively (p < 0.001). Compared to the exponential model, the linear model had lower RMSE for 1-year but higher RMSE for 3- and 5-year predictions in the peripheral VF area (p < 0.001). For the central VF area, the exponential model had lower RMSEs for 2-, 3-, and 5-year predictions compared to the linear model (p ≤ 0.015). Conclusions: The linear model outperformed the exponential model for short-term predictions, while the exponential model was better for long-term predictions. The prediction performance of the exponential model was superior to that of the linear model for central VFs, and for test points with lower initial sensitivities.
33

Bultheel, Adhemar, and Marc Van Barel. "Linear prediction: mathematics and engineering." Bulletin of the Belgian Mathematical Society - Simon Stevin 1, no. 1 (1994): 1–58. http://dx.doi.org/10.36045/bbms/1103408452.

34

Welham, Sue, Brian Cullis, Beverley Gogel, Arthur Gilmour, and Robin Thompson. "Prediction in linear mixed models." Australian & New Zealand Journal of Statistics 46, no. 3 (September 2004): 325–47. http://dx.doi.org/10.1111/j.1467-842x.2004.00334.x.

35

Kondoz, A. M. "Pulsed residual excited linear prediction." IEE Proceedings - Vision, Image, and Signal Processing 142, no. 2 (1995): 105. http://dx.doi.org/10.1049/ip-vis:19951801.

36

den Brinker, A. C., V. Voitishchuk, and S. J. L. van Eijndhoven. "IIR-Based Pure Linear Prediction." IEEE Transactions on Speech and Audio Processing 12, no. 1 (January 2004): 68–75. http://dx.doi.org/10.1109/tsa.2003.815524.

37

Corsten, L. C. A. "Interpolation and optimal linear prediction." Statistica Neerlandica 43, no. 2 (June 1989): 69–84. http://dx.doi.org/10.1111/j.1467-9574.1989.tb01249.x.

38

Su, Huan-Yu. "Spike code-excited linear prediction." Journal of the Acoustical Society of America 103, no. 3 (March 1998): 1248. http://dx.doi.org/10.1121/1.423208.

39

Percy, David F. "Prediction for generalized linear models." Journal of Applied Statistics 20, no. 2 (January 1993): 285–91. http://dx.doi.org/10.1080/02664769300000023.

40

Schick, Anton, and Wolfgang Wefelmeyer. "Prediction in invertible linear processes." Statistics & Probability Letters 77, no. 12 (July 2007): 1322–31. http://dx.doi.org/10.1016/j.spl.2007.03.018.

41

Pech, Ratha, Dong Hao, Yan-Li Lee, Ye Yuan, and Tao Zhou. "Link prediction via linear optimization." Physica A: Statistical Mechanics and its Applications 528 (August 2019): 121319. http://dx.doi.org/10.1016/j.physa.2019.121319.

42

Cai, T. Tony, and Peter Hall. "Prediction in functional linear regression." Annals of Statistics 34, no. 5 (October 2006): 2159–79. http://dx.doi.org/10.1214/009053606000000830.

43

Atal, B. S. "The history of linear prediction." IEEE Signal Processing Magazine 23, no. 2 (March 2006): 154–61. http://dx.doi.org/10.1109/msp.2006.1598091.

44

Despotovic, Vladimir, Tomas Skovranek, and Zoran Peric. "One-parameter fractional linear prediction." Computers & Electrical Engineering 69 (July 2018): 158–70. http://dx.doi.org/10.1016/j.compeleceng.2018.05.020.

45

Skovranek, Tomas, Vladimir Despotovic, and Zoran Peric. "Two-dimensional fractional linear prediction." Computers & Electrical Engineering 77 (July 2019): 37–46. http://dx.doi.org/10.1016/j.compeleceng.2019.04.021.

46

Prandoni, P., and M. Vetterli. "R/D optimal linear prediction." IEEE Transactions on Speech and Audio Processing 8, no. 6 (2000): 646–55. http://dx.doi.org/10.1109/89.876298.

47

Vaidyanathan, P. P. "The Theory of Linear Prediction." Synthesis Lectures on Signal Processing 2, no. 1 (January 2007): 1–184. http://dx.doi.org/10.2200/s00086ed1v01y200712spr003.

48

Ekman, L. Anders, W. Bastiaan Kleijn, and Manohar N. Murthi. "Regularized Linear Prediction of Speech." IEEE Transactions on Audio, Speech, and Language Processing 16, no. 1 (January 2008): 65–73. http://dx.doi.org/10.1109/tasl.2007.909448.

49

Dowling, E. M., R. D. DeGroat, D. A. Linebarger, L. I. Scharf, and M. Vis. "Reduced polynomial order linear prediction." IEEE Signal Processing Letters 3, no. 3 (March 1996): 92–94. http://dx.doi.org/10.1109/97.481165.

50

Thyssen, Jes. "Linear prediction based noise suppression." Journal of the Acoustical Society of America 120, no. 6 (2006): 3452. http://dx.doi.org/10.1121/1.2409449.

