Journal articles on the topic 'Error component random parameter logit model'




Consult the top 44 journal articles for your research on the topic 'Error component random parameter logit model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jelić Milković, Sanja, Ružica Lončarić, Krunoslav Zmaić, David Kranjac, and Maurizio Canavari. "Choice Experiment Performed on the Fresh Black Slavonian Pig’s Meat: A Preliminary Study." Poljoprivreda 27, no. 2 (December 22, 2021): 75–83. http://dx.doi.org/10.18047/poljo.27.2.10.

Abstract:
Until now, no research had been carried out in Croatia on consumer preferences for a particular agricultural and food product using a choice experiment. Therefore, little data are available about Croatian consumers' preferences regarding social concerns (sustainability, biodiversity, rural development, and animal welfare) in consumer choice and behavior favoring local pig breeds, in this case the Black Slavonian Pig. The data were collected with a survey questionnaire administered online to a sample of n = 100 Croatian consumers using a hypothetical choice experiment. The data were analyzed with three logit models: a multinomial logit model (MNL), a random parameter logit model (RPL), and an error component random parameter logit model (RPL-EC), in order to examine consumers' heterogeneous preferences for fresh ham meat of the Black Slavonian Pig. The results suggest that Croatian consumers appreciated a darker red fresh pork than that obtained from Black Slavonian Pigs reared outdoors and semi-outdoors. They also preferred fresh meat bearing a geographical information label, such as 'continental Croatia' or 'continental Croatia + PDO', to fresh meat without a label.
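For orientation, the error component random parameter logit specification that recurs throughout this list can be written, in generic notation of our own rather than that of any single paper here, as a random-coefficients utility with an extra alternative-specific error component (LaTeX):

U_{njt} = \beta_n^{\top} x_{njt} + d_{jt}\,\mu_n + \varepsilon_{njt},
\qquad \beta_n \sim N(\beta, \Sigma), \quad \mu_n \sim N(0, \sigma_\mu^2),

where \varepsilon_{njt} is i.i.d. type-I extreme value and the indicator d_{jt} flags the alternatives (typically the non-status-quo options) that share the error component \mu_n, inducing correlation among them.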
2

Gu, Gaofeng, Tao Feng, Chixing Zhong, Xiaoxi Cai, and Jiang Li. "The Effects of Life Course Events on Car Ownership and Sustainable Mobility Tools Adoption Decisions: Results of an Error Component Random Parameter Logit Model." Sustainability 13, no. 12 (June 16, 2021): 6816. http://dx.doi.org/10.3390/su13126816.

Abstract:
Life course events can change household travel demand dramatically. Recent studies of car ownership have examined the impacts of life course events on the purchasing, replacing, and disposing of cars. However, with the increasing diversification of mobility tools, changing the fleet size is not the only option for adapting to the changes caused by life course events. People have various options with the development of sustainable mobility tools, including electric cars, electric bikes, and car sharing. In order to determine the impacts of life course events on car ownership and the choice of mobility tool type, a stated choice experiment was conducted. The experiment also investigated how the attributes of mobility tools relate to their acceptance. Based on the existing literature, we identified the attributes of mobility tools and several life course events considered influential in car ownership decisions and in the choice of new types of mobility tools. An error component random parameter logit model was estimated, in which heterogeneity across people with respect to the current car and specific mobility tools is considered. The results indicate that people are inclined not to sell their current car when they choose an electric bike or a shared car. Regarding life course events, the birth of a baby increases the probability of purchasing an additional car, while it decreases the probability of purchasing an electric bike or joining a car-sharing scheme. Moreover, the estimated error components imply that there is unobserved heterogeneity across respondents in the choice of sustainable mobility tools and in the decision about the household's current car.
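To make the estimation idea concrete, the following is a minimal sketch (ours, not the authors' code) of the simulated log-likelihood of an error component random parameter logit for panel choice data; the dimensions, the normal mixing distributions, and the placement of the error component on the non-status-quo alternatives are illustrative assumptions:

import numpy as np

def sim_loglik(theta, X, y, n_draws=200, seed=0):
    """Simulated log-likelihood of an RPL-EC model (illustrative sketch).

    X: (N, T, J, K) alternative attributes; y: (N, T) chosen alternative.
    theta packs the means b and log-std devs s of K random coefficients,
    plus the log-std dev of one error component shared by alternatives 1..J-1.
    """
    rng = np.random.default_rng(seed)  # fixed draws; Halton draws are common in practice
    N, T, J, K = X.shape
    b, s = theta[:K], np.exp(theta[K:2 * K])
    sigma_ec = np.exp(theta[2 * K])
    ll = 0.0
    for n in range(N):
        beta_n = b + s * rng.standard_normal((n_draws, K))   # beta_n ~ N(b, diag(s^2))
        mu_n = sigma_ec * rng.standard_normal((n_draws, 1))  # error component draw
        p_seq = np.ones(n_draws)
        for t in range(T):
            v = beta_n @ X[n, t].T                           # (n_draws, J) utilities
            v[:, 1:] += mu_n                                 # EC loads on non-status-quo
            v -= v.max(axis=1, keepdims=True)                # numerical stability
            p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
            p_seq *= p[:, y[n, t]]                           # probability of observed choice
        ll += np.log(p_seq.mean())                           # average over draws
    return ll

Maximizing this function over theta (for instance with scipy.optimize.minimize applied to its negative) gives simulated maximum likelihood estimates; a significant sigma_ec is what abstracts in this list describe as unobserved heterogeneity captured by the error components.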
3

Fatimata, LO, BA Demba Bocar, and DIOP Aba. "ASYMPTOTIC PROPERTIES IN THE PROBIT-ZERO-INFLATED BINOMIAL REGRESSION MODEL." Journal of Computer Science and Applied Mathematics 3, no. 2 (September 18, 2021): 68–81. http://dx.doi.org/10.37418/jcsam.3.2.3.

Abstract:
Zero-inflated regression models have had wide application recently and have proven useful in modeling data with many zeros. The zero-inflated binomial (ZIB) regression model is an extension of the ordinary binomial distribution that takes into account the excess of zeros. In comparing the probit model to the logistic model, many authors believe that there is little theoretical justification for choosing one formulation over the other in most circumstances involving binary responses. The logit model is considered computationally simpler, but it is based on a more restrictive assumption of error independence, although many other generalizations have dealt with that assumption as well. By contrast, the probit model assumes that random errors have a multivariate normal distribution. This assumption makes the probit model attractive because the normal distribution provides a good approximation to many other distributions. In this paper, we develop a maximum likelihood estimation procedure for the parameters of a zero-inflated binomial regression model with a probit link function for both components of the model. We establish the existence, consistency, and asymptotic normality of the proposed estimator.
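For readers unfamiliar with the model, a zero-inflated binomial with probit links for both components has, in standard notation (ours, not the paper's), the mixture form (LaTeX):

P(Y_i = 0) = p_i + (1 - p_i)\,(1 - \pi_i)^{m_i}, \qquad
P(Y_i = k) = (1 - p_i)\binom{m_i}{k}\pi_i^{k}(1 - \pi_i)^{m_i - k}, \quad k = 1, \dots, m_i,

with p_i = \Phi(z_i^{\top}\gamma) and \pi_i = \Phi(x_i^{\top}\beta), where \Phi is the standard normal cdf; maximum likelihood estimates \gamma and \beta jointly.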
4

Belgiawan, Prawira Fajarindra, Raden Aswin Rahadi, Annisa Rahmani Qastharin, Lidia Mayangsari, Reza Ashari Nasution, and Sudarso Kaderi Wiryono. "The Commuting Mode Choice of Students of Institut Teknologi Bandung, Indonesia." Journal of Regional and City Planning 32, no. 2 (August 10, 2021): 150–64. http://dx.doi.org/10.5614/jpwk.2021.32.2.4.

Abstract:
This research explored the commuting mode preferences of students living near Institut Teknologi Bandung when a new mode of transportation (i.e., carpool) is introduced to the selection list. Six alternative modes were presented: minibus, car, motorcycle, car-based ride-sourcing, motorcycle-based ride-sourcing, and carpool. The data collection process was conducted using a questionnaire-based stated-preference survey. It included eight sets of labeled scenarios with a number of attributes: travel time, travel cost, waiting time, number of transfers, access and egress time, frequency, congestion time, baggage cost, and parking cost. A total of 1416 observations were acquired for further analysis. A mixed logit (MXL) model with a random cost parameter and random error components was used. From the MXL results, we found that travel cost had no significant influence on the selection of commuting mode among students. This result was unforeseen given the characteristics of Indonesian consumers, who are notoriously sensitive to price. However, based on the results for several significant attributes of carpool, as well as the value of travel time savings and demand calculations, we suggest that carpooling is a valid alternative transport mode for campus commuting. As a pioneering study on student commuting mode selection, this study provides valid and dependable evidence on how students around the ITB main campus choose their transportation methods. Keywords: carpool, ITB students, mixed logit, elasticity, value of time.
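The value of travel time savings invoked here is conventionally the ratio of the marginal utilities of time and cost; in a mixed logit with a random cost coefficient this ratio must be handled with care (e.g., simulated over the cost distribution). Schematically (our notation, in LaTeX):

\mathrm{VTTS} = \frac{\partial V / \partial(\text{travel time})}{\partial V / \partial(\text{travel cost})} = \frac{\beta_{\text{time}}}{\beta_{\text{cost}}}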
5

Zhang, Jing-Jing, and Ting Wang. "Strong consistency rate of estimators in heteroscedastic errors-in-variables model for negative association samples." Filomat 32, no. 13 (2018): 4639–54. http://dx.doi.org/10.2298/fil1813639z.

Abstract:
This article is concerned with the estimation problem for heteroscedastic partially linear errors-in-variables (EV) models. We derive the strong consistency rate for estimators of the slope parameter and the nonparametric component in the case of known error variance with negative association (NA) random errors. Meanwhile, when the error variance is unknown, the strong consistency rates for the estimators of the slope parameter and the nonparametric component, as well as the variance function, are considered for NA samples. In general, we conclude that all estimators can achieve the strong consistency rate o(n^{-1/4}).
6

Smallwood, David. "The Relationship Between the Variance of Energy Estimates and the Uncertainty Parameter." Journal of the IEST 46, no. 1 (September 14, 2003): 103–9. http://dx.doi.org/10.17764/jiet.46.1.f0278302vk23h648.

Abstract:
The relationship between the uncertainty (4π times the product of rms duration, D_t, and rms bandwidth, D_f) and the variance of energy estimates of random transients is examined. If a transient can be modeled with the product model, it is shown that for many cases the normalized error in the energy estimate, as measured by the uncertainty, is approximately the same as the normalized error determined from the more exact statistical duration and bandwidth. The uncertainty is a quantitative measure of the expected energy estimate errors. If the transient has a significant random component, a small uncertainty parameter implies a large error in the energy estimate. Attempts to resolve a time/frequency spectrum near the uncertainty limits of a transient with a significant random component will result in large errors in the spectral estimates.
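In symbols (our paraphrase of this abstract and the companion paper below), the uncertainty parameter and its link to the normalized variance of the energy estimate read (LaTeX):

U = 4\pi D_t D_f, \qquad \frac{\operatorname{Var}(\hat{m}_0)}{\left[E(\hat{m}_0)\right]^2} \approx \frac{1}{U},

so a small U, i.e., a transient resolved close to the uncertainty limit, implies a large relative error in the energy estimate.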
7

Smallwood, David. "The Variance of Energy Estimates for the Product Model." Shock and Vibration 10, no. 4 (2003): 211–21. http://dx.doi.org/10.1155/2003/219481.

Abstract:
A product model, in which {x(t)} is the product of a slowly varying random window, {w(t)}, and a stationary random process, {g(t)}, is defined. A single realization of the process will be denoted x(t). This is slightly different from the usual definition of the product model, where the window is typically defined as deterministic. An estimate of the energy (the zero-order temporal moment; only in special cases is this physical energy) of the random process {x(t)} is defined as m_0 = ∫_{-∞}^{∞} |x(t)|² dt = ∫_{-∞}^{∞} |w(t)g(t)|² dt. Relationships for the mean and variance of the energy estimate m_0 are then developed. It is shown that for many cases the uncertainty (4π times the product of rms duration, D_t, and rms bandwidth, D_f) is approximately the inverse of the normalized variance of the energy. The uncertainty is a quantitative measure of the expected error in the energy estimate. If a transient has a significant random component, a small uncertainty parameter implies a large error in the energy estimate. Attempts to resolve a time/frequency spectrum near the uncertainty limits of a transient with a significant random component will result in large errors in the spectral estimates.
8

Pattnaik, Sarojrani, D. Benny Karunakar, and PK Jha. "A prediction model for the lost wax process through fuzzy-based artificial neural network." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 228, no. 7 (October 8, 2013): 1259–71. http://dx.doi.org/10.1177/0954406213507701.

Abstract:
The application of the investment casting process is rapidly increasing, specifically for near-net-shape manufacturing of complex and small engineering components. The process begins with making wax patterns, thereafter employing a precision mould, dewaxing, pouring molten alloy, and knocking out the shell, followed by minor finishing operations. This study is about predicting the quality of responses of the wax patterns, namely linear shrinkage and surface roughness, using a fuzzy-based artificial neural network. The process parameters considered are injection temperature, injection pressure, and holding time, and experiments have been performed as per Taguchi's L18 orthogonal array. As the optimum parameter levels were different for the two responses, fuzzy logic reasoning has been used to combine both objectives and transform the experimental results into a single performance index known as the multi-response performance index. Later, modelling of the process has been done using an artificial neural network with the experimental process parameters as inputs and the multi-response performance index obtained from fuzzy modelling as output. Further, experiments have been conducted at random combinations of parameter levels to validate the developed model, and it has been found that the actual results agreed well with the predicted values on the basis of mean absolute percentage error and correlation plots.
9

Bajdik, Chris D., and David C. Schneider. "Models of the Fish Yield from Lakes: Does the Random Component Matter?" Canadian Journal of Fisheries and Aquatic Sciences 48, no. 4 (April 1, 1991): 619–22. http://dx.doi.org/10.1139/f91-079.

Abstract:
Generalized linear models were used to investigate the sensitivity of parameter estimates to the choice of the random error assumption in models of fisheries data. We examined models of fish yield from lakes as a function of (i) Ryder's morphoedaphic index, (ii) lake area, lake depth, and concentration of dissolved solids, and (iii) fishing effort. Models were fit using a normal, log-normal, gamma, or Poisson distribution to generate the random error. Plots of standardized Pearson residuals and standardized deviance residuals were used to evaluate the distributional assumptions. For each data set, observations were found to be consistent with several distributions; however, some distributions were shown to be clearly inappropriate. Inappropriate distributional assumptions produced substantially different parameter estimates. Generalized linear models allow a variety of distributional assumptions to be incorporated in a model, and thereby let us study their effects.
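The study's central move, keeping the linear predictor fixed while swapping the random (error) component, maps directly onto modern GLM software. Below is a hedged sketch using the statsmodels API on synthetic yield-effort data; the data-generating process and the log link are illustrative assumptions, not the paper's setup:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
effort = rng.uniform(1.0, 10.0, 200)
yield_ = 5.0 * effort * rng.gamma(shape=5.0, scale=0.2, size=200)  # skewed positive response
X = sm.add_constant(np.log(effort))

# Same linear predictor, three different random-component assumptions.
for family in (sm.families.Gaussian(sm.families.links.Log()),
               sm.families.Gamma(sm.families.links.Log()),
               sm.families.Poisson(sm.families.links.Log())):
    res = sm.GLM(yield_, X, family=family).fit()
    # Pearson residuals are one of the diagnostics the paper uses
    print(type(family).__name__, res.params.round(3), res.resid_pearson.std().round(2))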
10

Baranov, L. A., E. P. Balakina, and A. I. Godyaev. "The Object According State Prediction to Diagnostic Data." Journal of Physics: Conference Series 2096, no. 1 (November 1, 2021): 012121. http://dx.doi.org/10.1088/1742-6596/2096/1/012121.

Abstract:
A methodology for predicting the state of an object from diagnostic data is considered. A selected parameter that determines the state of the object is measured in real time at a fixed sampling step. From the measurement data, the future value of this parameter is predicted. This operation is implemented by an extrapolator of order l: a polynomial of degree l constructed by the least squares method from the results of previous measurements. The model of the change process of the diagnosed parameter is a random function of time, described as the sum of a stationary centered random component and a deterministic change of the mathematical expectation. A method for estimating the prediction error, and the influence of the extrapolator parameters on its value, are presented.
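A minimal version of such an order-l least-squares extrapolator over a sliding window, our illustration with arbitrary window length and order, is a few lines of NumPy:

import numpy as np

def extrapolate(samples, order=2, window=10, steps_ahead=1):
    """Predict a future value from the last `window` samples with a
    degree-`order` least-squares polynomial (illustrative sketch)."""
    y = np.asarray(samples, dtype=float)[-window:]
    t = np.arange(len(samples))[-window:]
    coeffs = np.polyfit(t, y, deg=order)              # LSQ fit on the window
    return np.polyval(coeffs, len(samples) - 1 + steps_ahead)

# e.g. one-step-ahead prediction of a slowly drifting signal:
print(extrapolate(np.sin(0.1 * np.arange(100)), order=2))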
11

Zhang, Jingjing, and Linran Zhang. "Strong consistency rates of estimators in semi-parametric errors-in-variables model with missing responses." Filomat 33, no. 18 (2019): 6073–89. http://dx.doi.org/10.2298/fil1918073z.

Abstract:
In this article, we focus on the semi-parametric errors-in-variables model with missing responses: y_i = ξ_i β + g(t_i) + ε_i, x_i = ξ_i + μ_i, where the y_i are response variables missing at random, the (ξ_i, t_i) are design points, the ξ_i are potential variables observed with measurement errors μ_i, and the unknown slope parameter β and the nonparametric component g(·) need to be estimated. We apply three different approaches to estimate β and g(·). Under appropriate conditions, we study the strong consistency rates of the proposed estimators. In general, we conclude that all estimators can achieve the strong consistency rate o(n^{-1/4}).
12

Rezapour, Mahdi, and Khaled Ksaibati. "Accommodating Taste and Scale Heterogeneity for Front-Seat Passenger’ Choice of Seat Belt Usage." Mathematics 9, no. 5 (February 24, 2021): 460. http://dx.doi.org/10.3390/math9050460.

Abstract:
There is growing interest in implementing mixed models to account for heterogeneity across population observations. However, it has been argued that the assumption of independent and identically distributed (i.i.d.) error terms might not be realistic: for some observations the scale of the error is greater than for others, so that the scale of the error terms varies across observations. As the standard mixed model cannot account for this attribute of the observations, an extended model allowing for scale heterogeneity has been proposed to relax the assumption of equal error scale across observations. Thus, in this study we extended the mixed model to a model with heterogeneity in scale, the generalized multinomial logit model (GMNL), to see whether accounting for scale heterogeneity, by adding more flexibility to the distribution, would improve the model fit. The study used choice data on seat belt use among front-seat passengers in Wyoming, with all attributes being individual-specific. The results highlight that although the effect of the scale parameter was significant, the scale effect was trivial, and accounting for it at the cost of added parameters resulted in a loss of model fit compared with the standard mixed model. Besides the standard mixed model and the GMNL, models with correlated random parameters were considered. Despite significant correlations across the majority of the random parameters, the goodness-of-fit measures favor the more parsimonious models with no correlation. These results are specific to the dataset used here: heterogeneity in the observations on front-seat passengers' seat belt use may simply not be extreme enough to require an extra layer accounting for scale heterogeneity at the cost of added parameters. An extensive discussion of the parameter estimation and the mathematical formulation of the methods is provided in the paper.
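The scale-heterogeneity extension discussed here is usually written in the GMNL form of Fiebig et al. (2010), where an individual-specific scale σ_n multiplies the mean coefficients; the notation below is the generic one, not copied from this paper (LaTeX):

\beta_n = \sigma_n \beta + \left[\gamma + \sigma_n (1 - \gamma)\right] \eta_n,
\qquad \sigma_n = \exp(\bar{\sigma} + \tau \varepsilon_n), \quad
\eta_n \sim N(0, \Sigma), \ \varepsilon_n \sim N(0, 1),

where \tau governs scale heterogeneity (\tau = 0 recovers the standard mixed logit) and \gamma controls how the residual taste heterogeneity scales with \sigma_n.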
13

Sarajcev, Petar, and Antun Meglic. "Error analysis of multi-step day-ahead PV production forecasting with chained regressors." Journal of Physics: Conference Series 2369, no. 1 (November 1, 2022): 012051. http://dx.doi.org/10.1088/1742-6596/2369/1/012051.

Abstract:
This paper presents a comprehensive error analysis of a day-ahead photovoltaic (PV) production multi-step forecasting model that uses chained support vector regression (SVR). A principal component analysis (PCA) is also implemented to investigate possible improvements in the SVR prediction accuracy. Special attention was given to the hyper-parameter tuning of the chained SVR and PCA+SVR models; specifically, to the dispersion of the prediction errors when fine-tuning the model with an experimental halving random search algorithm implemented within scikit-learn, i.e., HalvingRandomSearchCV (HRSCV). The obtained results were compared with the traditional randomized search technique, i.e., RandomizedSearchCV (RSCV). The chained SVR model prediction errors were analysed for several different parameter distribution settings. After repeated fine-tuning and prediction runs, it was observed that the HRSCV tends to choose sub-optimal hyper-parameters in certain scenarios, as elaborated in the paper. Moreover, when analysing the prediction errors of the same model fine-tuned repeatedly with the HRSCV and RSCV, it was found that HRSCV produces larger errors and more inconsistency (variability) in the prediction results. The introduction of PCA to the chained SVR model, at the same time, reduces the influence of exogenous variables and, on average, increases its performance and decreases prediction errors regardless of the optimization technique used.
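The two scikit-learn tuners being compared can be exercised side by side in a few lines. This is a generic sketch on synthetic data (the SVR hyper-parameter ranges are illustrative), not the paper's chained multi-step setup:

import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_regression
from sklearn.experimental import enable_halving_search_cv  # noqa: F401 (enables HalvingRandomSearchCV)
from sklearn.model_selection import HalvingRandomSearchCV, RandomizedSearchCV
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=8, noise=0.5, random_state=0)
params = {"C": loguniform(1e-1, 1e3),
          "gamma": loguniform(1e-4, 1e0),
          "epsilon": loguniform(1e-3, 1e0)}

for Search in (RandomizedSearchCV, HalvingRandomSearchCV):
    best = Search(SVR(), params, random_state=0).fit(X, y)
    print(Search.__name__, best.best_params_, round(best.best_score_, 3))

# Repeating this loop over many random_state values and comparing the spread of
# test errors reproduces the kind of dispersion analysis the paper performs.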
14

López-Galán, Belinda, and Tiziana de-Magistris. "Testing Emotional Eating Style in Relation to Willingness to Pay for Nutritional Claims." Nutrients 11, no. 8 (August 1, 2019): 1773. http://dx.doi.org/10.3390/nu11081773.

Abstract:
In the face of the high prevalence of non-communicable diseases, nutritional claims represent a useful tool to help people make healthier food choices. However, recent research notes that when some people experience an intense emotional state, they increase their food consumption, particularly of energy-dense and sweet foods. Consequently, this study aims to assess whether an emotional eating (EE) style influences the purchase of food products carrying these claims. To this end, a real choice experiment (RCE) was conducted with 306 participants who were asked to evaluate different types of toast. An error component random parameter logit (ECRPL) model was used to analyze their preferences for toast carrying reduced-fat and low-salt claims and the effect of variation in the EE score on individual preferences. The findings of this study suggest that emotional eating negatively impacts purchasing behavior related to nutritional claims. In particular, willingness to pay for every unit of toast with nutritional claims decreased by between 9% and 16% as the individual EE score increased. In this regard, to increase the effectiveness of nutritional claims, policymakers and the private sector should consider the management of individuals' emotional states when designing public health policies and marketing strategies, respectively.
15

Миргород, Володимир Федорович, and Ірина Маратівна Гвоздева. "ОЦІНКА ПОТУЖНОСТІ КРИТЕРІЇВ ТРЕНДУ." Aerospace technic and technology, no. 7 (August 31, 2020): 129–36. http://dx.doi.org/10.32620/aktt.2020.7.18.

Abstract:
An approach to the selection and comparison of criteria used in the analysis of time series of recorded parameters of the technical state of power plants based on gas turbine engines is proposed. The approach rests on an important characteristic of trend criteria, namely their power, where the criteria are treated as tests distinguishing composite hypotheses. For the analysis, we propose a statistical data-generation model in the form of a combination of a deterministic trend and a random component. The deterministic component is considered as a linear approximation from its Taylor series expansion. This assumption is justified by the need to detect a trend within the shortest period of time, over which the trend component admits a linear approximation. The random component is taken as a sample from a population of independent random variables with a normal distribution. The most common trend criteria were selected for analysis: Student's criterion for equality of means; Fisher's variance-ratio criterion; and the correlation criterion and its varieties. The null hypothesis is that the time series is a sample from a population of independent random variables; the alternative is that the sample contains a linear trend. Trend statistics of the relevant criteria are generated on a moving or sectional disjoint analysis window of a given size. The trend-development parameter was defined as the ratio of the trend growth during the analysis window to the standard deviation of the random component. For the considered trend criteria, we obtained the dependence of their power on the trend-development parameter and on the probability of a type I error (false alarm), as well as the operating characteristics of the criteria. The analysis was performed by analytical estimates and statistical modeling. It was established that, under the alternative, the statistics of the correlation criterion and of Fisher's criterion quickly become normalized, while the Student statistic does not change its type. A comparison of the power of the trend criteria at equal probabilities of a type I error establishes the advantage of Student's criterion, while the correlation criterion performs worst. Estimates of the power of trend criteria are important for applied work, since they allow one to establish the probability of a type II error (missing a trend).
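The power computation described can be mimicked by direct Monte Carlo. The sketch below (our illustration, not the authors' code) estimates the power of the Student mean-difference criterion as a function of the trend-development parameter q, i.e., trend growth over the window divided by the noise standard deviation:

import numpy as np
from scipy import stats

def power_student(q, n=50, alpha=0.05, reps=2000, seed=2):
    """Monte Carlo power of a two-sample t-test against a linear trend."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n)
    hits = 0
    for _ in range(reps):
        y = q * t + rng.standard_normal(n)        # linear trend + N(0,1) noise
        first, second = y[: n // 2], y[n // 2:]   # split the analysis window
        if stats.ttest_ind(first, second).pvalue < alpha:
            hits += 1
    return hits / reps

print([round(power_student(q), 2) for q in (0.0, 0.5, 1.0, 2.0)])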
16

Guo, Qinghua, Fuchu Dai, and Zhiqiang Zhao. "Comparison of Two Bayesian-MCMC Inversion Methods for Laboratory Infiltration and Field Irrigation Experiments." International Journal of Environmental Research and Public Health 17, no. 3 (February 10, 2020): 1108. http://dx.doi.org/10.3390/ijerph17031108.

Abstract:
Bayesian parameter inversion approaches are dependent on the original forward models linking subsurface physical properties to measured data, which usually require a large number of iterations. Fast alternative systems to forward models are commonly employed to make the stochastic inversion problem computationally tractable. This paper compared the effect of the original forward model constructed by the HYDRUS-1D software and two different approximations: the Artificial Neural Network (ANN) alternative system and the Gaussian Process (GP) surrogate system. The model error of the ANN was quantified using a principal component analysis, while the model error of the GP was measured using its own variance. There were two groups of measured pressure head data of undisturbed loess for parameter inversion: one group was obtained from a laboratory soil column infiltration experiment and the other was derived from a field irrigation experiment. Strong correlations between the pressure head values simulated by random posterior samples indicated that the approximate forward models are reliable enough to be included in the Bayesian inversion framework. The approximate forward models significantly improved the inversion efficiency by comparing the observed and the optimized results with a similar accuracy. In conclusion, surrogates can be considered when the forward models are strongly nonlinear and the computational costs are prohibitive.
17

Pane, Rahmawati, and Sutarman. "Estimation of Heteroskedasticity Semiparametric Regression Curve Using Fourier Series Approach." Journal of Research in Mathematics Trends and Technology 2, no. 1 (February 24, 2020): 14–20. http://dx.doi.org/10.32734/jormtt.v2i1.3744.

Abstract:
A heteroskedastic semiparametric regression model consists of two main components, i.e., a parametric component and a nonparametric component. The model assumes that the data (x_i', t_i, y_i) follow y_i = x_i' β + f(t_i) + σ_i ε_i, where i = 1, 2, ..., n, x_i' = (1, x_{i1}, x_{i2}, ..., x_{ir}), and t_i is the predictor variable. The parameter vector β = (β_1, β_2, ..., β_r)' ∈ ℝ^r is unknown, and f(t_i) is also unknown and assumed to lie in C[0, π]. The random errors ε_i are independent with zero mean and variance σ². Estimation of the heteroskedastic semiparametric regression model was conducted to evaluate the parametric and nonparametric components. The nonparametric component f(t_i) was approximated by the Fourier series F(t) = bt + (1/2)α_0 + Σ_{k=1}^{K} α_k cos(kt). The estimate was obtained by means of Weighted Penalized Least Squares (WPLS): min_{f ∈ C(0,π)} { n^{-1} (y − Xβ − f)' W^{-1} (y − Xβ − f) + λ ∫_0^π [f''(t)]² dt }. The WPLS solution provides the nonparametric component f̂_λ(t) = M(λ) y* for a matrix M(λ), and the parametric component β̂ = [X' T(λ) X]^{-1} X' T(λ) y.
18

Rober, S. J., and Y. C. Shin. "Control of Cutting Force for End Milling Processes Using an Extended Model Reference Adaptive Control Scheme." Journal of Manufacturing Science and Engineering 118, no. 3 (August 1, 1996): 339–47. http://dx.doi.org/10.1115/1.2831035.

Abstract:
In this work an extended Model Reference Adaptive Control (MRAC) technique is used to control the cutting force of an end milling process. The technique incorporates Zero Phase Error Tracking Control (ZPETC) into the MRAC system. The extended MRAC controller remains stable even in the presence of marginally stable and nonminimum-phase process zeros. A modified recursive least-squares estimation algorithm is used for on-line parameter identification. Simulation results are presented to compare the extended MRAC controller to the standard MRAC controller. A microprocessor system is used to implement adaptive force control of a single-input single-output milling process, where the microprocessor monitors the system cutting forces and controls the desired feedrate. A constant cutting force is maintained in the presence of time-varying plant gains and a high random component of the output force. Experimental results are presented for the standard MRAC and extended MRAC controllers for comparison.
19

Xu, Xingkui, Chunfeng Wu, Qingyu Hou, and Zhigang Fan. "Gyro Error Compensation in Optoelectronic Platform Based on a Hybrid ARIMA-Elman Model." Algorithms 12, no. 1 (January 11, 2019): 22. http://dx.doi.org/10.3390/a12010022.

Abstract:
As an important angle sensor of the optoelectronic platform, the gyro plays a vital role: its output accuracy determines the stabilization and tracking accuracy of the whole system. It is known that the commonly used fixed-bandwidth filters, single neural network models, and linear models cannot compensate for gyro error well, and so they cannot satisfactorily meet engineering needs. In this paper, a novel hybrid ARIMA-Elman model is proposed. Because it fully combines the strong linear approximation capability of the ARIMA model and the superior nonlinear compensation capability of a neural network, the proposed model is suitable for handling gyro error, especially its non-stationary random component. Then, to solve the problem that the parameters of the ARIMA model and the initial weights of the Elman neural network are difficult to determine, a differential algorithm is initially utilized for parameter selection. Compared with other commonly used optimization algorithms (e.g., the traditional least-squares identification method and the genetic algorithm), the intelligent differential algorithm overcomes the shortcoming of premature convergence and has higher optimization speed and accuracy. In addition, the drift error is obtained using lifting-wavelet separation and reconstruction, and, in order to weaken the randomness of the data sequence, an ashing operation and a Jarque-Bera test have been added to the processing pipeline. In this study, actual gyro data were collected, and the experimental results show that the proposed method has higher compensation accuracy and faster network convergence compared with other commonly used error-compensation methods. Finally, the hybrid method is used to compensate for gyro error collected in other states. The test results illustrate that the proposed algorithm can effectively improve error-compensation accuracy and has good generalization performance.
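The hybrid idea (a linear ARIMA stage plus a nonlinear network on its residuals) can be sketched as follows. This is our illustration: scikit-learn has no Elman network, so an MLP stands in for it, and the ARIMA order and synthetic drift series are arbitrary assumptions:

import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
drift = np.cumsum(0.01 + 0.001 * rng.standard_normal(500))   # synthetic gyro drift

arima_res = ARIMA(drift, order=(2, 1, 1)).fit()              # linear component
resid = arima_res.resid

p = 5                                                        # residual lag order
Xr = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
yr = resid[p:]
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xr, yr)

# hybrid one-step-ahead forecast = ARIMA forecast + predicted nonlinear residual
one_step = arima_res.forecast(1)[0] + net.predict(resid[-p:].reshape(1, -1))[0]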
20

Guo, Xiangying, Changkun Li, Zhong Luo, and Dongxing Cao. "Modal Parameter Identification of Structures Using Reconstructed Displacements and Stochastic Subspace Identification." Applied Sciences 11, no. 23 (December 2, 2021): 11432. http://dx.doi.org/10.3390/app112311432.

Abstract:
A method of modal parameter identification of structures using reconstructed displacements was proposed in the present research. The proposed method was developed based on the stochastic subspace identification (SSI) approach and used displacements reconstructed from measured accelerations as inputs. These reconstructed displacements suppress the high-frequency component of the measured acceleration data. Therefore, in comparison with acceleration-based modal analysis, the operational modal analysis obtained more reliable and stable identification parameters from displacements regardless of the model order. However, because displacement is difficult to measure directly, various types of noise interference occur when an acceleration sensor is used, causing a trend-term drift error in the integrated displacement. A moving-average, low-frequency-attenuation, frequency-domain integral was used to reconstruct the displacements, and a moving time window was used in combination with the SSI method to identify the structural modal parameters. First, the measured accelerations were used to estimate displacements. Due to the interference of noise and the influence of initial conditions, the integrated displacement inevitably contains a drift term. The moving-average method was then used in combination with a filter to effectively eliminate random fluctuation interference in the measurement data and reduce the influence of random errors. The real displacement of the structure was obtained through repeated smoothing, filtering, and integration. Finally, using the reconstructed displacements as inputs, the improved SSI method was employed to identify the modal parameters of the structure.
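The displacement-reconstruction step (integrating the filtered acceleration twice while removing the drift term) looks roughly like this in SciPy; the cut-off frequency, filter order, and synthetic signal are illustrative assumptions:

import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.signal import butter, detrend, filtfilt

fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0.0, 5.0, 1.0 / fs)
acc = np.sin(2 * np.pi * 10 * t) + 0.05 * np.random.default_rng(4).standard_normal(t.size)

b, a = butter(4, 1.0 / (fs / 2.0), btype="highpass")   # 1 Hz high-pass suppresses drift
vel = detrend(cumulative_trapezoid(filtfilt(b, a, acc), t, initial=0.0))
disp = detrend(cumulative_trapezoid(filtfilt(b, a, vel), t, initial=0.0))
# disp now approximates the drift-free displacement fed to the SSI stage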
21

Villarini, Gabriele, and Witold F. Krajewski. "Sensitivity Studies of the Models of Radar-Rainfall Uncertainties." Journal of Applied Meteorology and Climatology 49, no. 2 (February 1, 2010): 288–309. http://dx.doi.org/10.1175/2009jamc2188.1.

Abstract:
It is well acknowledged that there are large uncertainties associated with the operational quantitative precipitation estimates produced by the U.S. national network of the Weather Surveillance Radar-1988 Doppler (WSR-88D). These errors result from the measurement principles, parameter estimation, and the not fully understood physical processes. Even though comprehensive quantitative evaluation of the total radar-rainfall uncertainties has been the object of earlier studies, an open question remains concerning how the error model results are affected by parameter values and correction setups in the radar-rainfall algorithms. This study focuses on the effects of different exponents in the reflectivity–rainfall (Z–R) relation [Marshall–Palmer, default Next Generation Weather Radar (NEXRAD), and tropical] and the impact of an anomalous propagation removal algorithm. To address this issue, the authors apply an empirically based model in which the relation between true rainfall and radar rainfall could be described as the product of a systematic distortion function and a random component. Additionally, they extend the error model to describe the radar-rainfall uncertainties in an additive form. This approach is fully empirically based, and rain gauge measurements are considered as an approximation of the true rainfall. The proposed results are based on a large sample (6 yr) of data from the Oklahoma City radar (KTLX) and processed through the Hydro-NEXRAD software system. The radar data are complemented with the corresponding rain gauge observations from the Oklahoma Mesonet and the Agricultural Research Service Micronet.
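Schematically (our notation, following the abstract's description), the error model relates the gauge-approximated true rainfall to the radar estimate through a systematic distortion function h(·) and a random component, with the additive variant considered as an extension (LaTeX):

R_{\mathrm{true}} = h(R_{\mathrm{radar}})\,\varepsilon \quad \text{(multiplicative form)}, \qquad
R_{\mathrm{true}} = h(R_{\mathrm{radar}}) + \eta \quad \text{(additive form)}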
22

Chitturi, Sathya R., Daniel Ratner, Richard C. Walroth, Vivek Thampy, Evan J. Reed, Mike Dunne, Christopher J. Tassone, and Kevin H. Stone. "Automated prediction of lattice parameters from X-ray powder diffraction patterns." Journal of Applied Crystallography 54, no. 6 (November 30, 2021): 1799–810. http://dx.doi.org/10.1107/s1600576721010840.

Abstract:
A key step in the analysis of powder X-ray diffraction (PXRD) data is the accurate determination of unit-cell lattice parameters. This step often requires significant human intervention and is a bottleneck that hinders efforts towards automated analysis. This work develops a series of one-dimensional convolutional neural networks (1D-CNNs) trained to provide lattice parameter estimates for each crystal system. A mean absolute percentage error of approximately 10% is achieved for each crystal system, which corresponds to a 100- to 1000-fold reduction in lattice parameter search space volume. The models learn from nearly one million crystal structures contained within the Inorganic Crystal Structure Database and the Cambridge Structural Database and, due to the nature of these two complementary databases, the models generalize well across chemistries. A key component of this work is a systematic analysis of the effect of different realistic experimental non-idealities on model performance. It is found that the addition of impurity phases, baseline noise and peak broadening present the greatest challenges to learning, while zero-offset error and random intensity modulations have little effect. However, appropriate data modification schemes can be used to bolster model performance and yield reasonable predictions, even for data which simulate realistic experimental non-idealities. In order to obtain accurate results, a new approach is introduced which uses the initial machine learning estimates with existing iterative whole-pattern refinement schemes to tackle automated unit-cell solution.
23

Kain, Morgan P., Ben M. Bolker, and Michael W. McCoy. "A practical guide and power analysis for GLMMs: detecting among treatment variation in random effects." PeerJ 3 (September 17, 2015): e1226. http://dx.doi.org/10.7717/peerj.1226.

Abstract:
In ecology and evolution generalized linear mixed models (GLMMs) are becoming increasingly used to test for differences in variation by treatment at multiple hierarchical levels. Yet, the specific sampling schemes that optimize the power of an experiment to detect differences in random effects by treatment/group remain unknown. In this paper we develop a blueprint for conducting power analyses for GLMMs focusing on detecting differences in variance by treatment. We present parameterization and power analyses for random-intercepts and random-slopes GLMMs because of their generality as focal parameters for most applications and because of their immediate applicability to emerging questions in the field of behavioral ecology. We focus on the extreme case of hierarchically structured binomial data, though the framework presented here generalizes easily to any error distribution model. First, we determine the optimal ratio of individuals to repeated measures within individuals that maximizes power to detect differences by treatment in among-individual variation in intercept, among-individual variation in slope, and within-individual variation in intercept. Second, we explore how power to detect differences in target variance parameters is affected by total variation. Our results indicate heterogeneity in power across ratios of individuals to repeated measures, with an optimal ratio determined by both the target variance parameter and total sample size. Additionally, power to detect each variance parameter was low overall (in most cases >1,000 total observations per treatment were needed to achieve 80% power) and decreased with increasing variance in non-target random effects. With growing interest in variance as the parameter of inquiry, these power analyses provide a crucial component for designing experiments focused on detecting differences in variance. We hope to inspire novel experimental designs in ecology and evolution investigating the causes and implications of individual-level phenotypic variance, such as the adaptive significance of within-individual variation.
24

Qi, Chongchong, Binhan Huang, Mengting Wu, Kun Wang, Shan Yang, and Guichen Li. "Concrete Strength Prediction Using Different Machine Learning Processes: Effect of Slag, Fly Ash and Superplasticizer." Materials 15, no. 15 (August 4, 2022): 5369. http://dx.doi.org/10.3390/ma15155369.

Abstract:
Blast furnace slag (BFS) and fly ash (FA), as mining-associated solid wastes with good pozzolanic effects, can be combined with a superplasticizer to prepare concrete with less cement. Considering the important influence of strength on concrete design, random forest (RF) and particle swarm optimization (PSO) methods were combined to construct a prediction model and carry out hyper-parameter tuning in this study. Principal component analysis (PCA) was used to reduce the dimension of the input features. The correlation coefficient (R), the explained variance score (EVS), the mean absolute error (MAE) and the mean squared error (MSE) were used to evaluate the performance of the model. R = 0.954, EVS = 0.901, MAE = 3.746, and MSE = 27.535 for the optimal RF-PSO model on the testing set indicated high generalization ability. After PCA dimensionality reduction, the R value decreased from 0.954 to 0.88, indicating that PCA was not necessary for the current dataset. Sensitivity analysis showed that cement was the most important feature, followed by water, superplasticizer, fine aggregate, BFS, coarse aggregate and FA, which is useful for the design of concrete schemes in practical projects. The method proposed in this study for estimating the compressive strength of BFS-FA-superplasticizer concrete fills a research gap and has potential engineering application value.
25

Tett, Simon F. B., Kuniko Yamazaki, Michael J. Mineter, Coralia Cartis, and Nathan Eizenberg. "Calibrating climate models using inverse methods: case studies with HadAM3, HadAM3P and HadCM3." Geoscientific Model Development 10, no. 9 (September 28, 2017): 3567–89. http://dx.doi.org/10.5194/gmd-10-3567-2017.

Abstract:
Optimisation methods were successfully used to calibrate parameters in an atmospheric component of a climate model using two variants of the Gauss–Newton line-search algorithm: (1) a standard Gauss–Newton algorithm in which, in each iteration, all parameters were perturbed and (2) a randomised block-coordinate variant in which, in each iteration, a random sub-set of parameters was perturbed. The cost function to be minimised used multiple large-scale multi-annual average observations and was constrained to produce net radiative fluxes close to those observed. These algorithms were used to calibrate the HadAM3 (third Hadley Centre Atmospheric Model) model at N48 resolution and the HadAM3P model at N96 resolution. For the HadAM3 model, cases with 7 and 14 parameters were tried. All ten 7-parameter cases using HadAM3 converged to cost function values similar to that of the standard configuration. For the 14-parameter cases several failed to converge, with the random variant in which 6 parameters were perturbed being most successful. Multiple sets of parameter values were found that produced multiple models very similar to the standard configuration. HadAM3 cases that converged were coupled to an ocean model and run for 20 years starting from a pre-industrial HadCM3 (3rd Hadley Centre Coupled model) state, resulting in several models whose global-average temperatures were consistent with pre-industrial estimates. For the 7-parameter cases the Gauss–Newton algorithm converged in about 70 evaluations. For the 14-parameter algorithm, with 6 parameters being randomly perturbed, about 80 evaluations were needed for convergence. However, when 8 parameters were randomly perturbed, algorithm performance was poor. Our results suggest the computational cost for the Gauss–Newton algorithm scales between P and P^2, where P is the number of parameters being calibrated. For the HadAM3P model three algorithms were tested. Algorithms in which seven parameters were perturbed and three out of seven parameters randomly perturbed produced final configurations comparable to the standard hand-tuned configuration. An algorithm in which 6 out of 13 parameters were randomly perturbed failed to converge. These results suggest that automatic parameter calibration using atmospheric models is feasible and that the resulting coupled models are stable. Thus, automatic calibration could replace human-driven trial and error. However, convergence and costs are likely sensitive to details of the algorithm.
26

Cameron, Ian R., Roberta Parish, James W. Goudie, and Catherine A. Bealle Statland. "Modelling the Crown Profile of Western Hemlock (Tsuga heterophylla) with a Combination of Component and Aggregate Measures of Crown Size." Forests 11, no. 3 (February 28, 2020): 281. http://dx.doi.org/10.3390/f11030281.

Abstract:
Research Highlights: We present statistical methods for using crown measurement data from multiple destructive sampling studies to model crown profiles in the Tree and Stand Simulator (TASS) and evaluate it using component (branch-level) and aggregate (tree-level) predictions. Combining data collected under different sampling protocols offered unique challenges. Background and Objectives: The approach to modelling crown profiles was based on Mitchell's monograph on Douglas-fir growth and simulated dynamics. The functional form defines the potential crown size and shape and governs the rate of crown expansion. With the availability of additional data, we are able to update these functions as part of ongoing TASS development and demonstrate the formulation and fitting of new crown profile equations for stand-grown western hemlock (Tsuga heterophylla (Raf.) Sarg.). Materials and Methods: Detailed measurements on 1616 branches from 153 trees were collected for TASS development over a 40-year period. Data were collected under two different sampling protocols and the methods were designed to allow the use of data from both protocols. Data collected on all branches were then introduced through the application of the ratio of length of each of the selected branches to the largest branch in the internode (RL). Results: A mixed-effects model with two random effects, which accounted for tree-level variation, provided the best fit. From that, a model that expressed one parameter as a function of another with one random effect was developed to complement the structure of the Tree and Stand Simulator (TASS). The models generally over-estimated crown size when compared to the projected crown area recorded from field measurements, and a scalar adjustment factor of 0.89 was applied that minimised the mean-squared error of the differences. The new model is fit from direct measures of crown radius and predicts narrower crown shapes than previous functions used in TASS.
27

Olanrewaju, Rasaki Olawale. "Integer-valued Time Series Model via Generalized Linear Models Technique of Estimation." International Annals of Science 4, no. 1 (April 29, 2018): 35–43. http://dx.doi.org/10.21467/ias.4.1.35-43.

Abstract:
The paper establishes the need for separate positive-integer time series models, approached via a proposal encompassing both continuous and discrete time series models. Positive-integer time series data are counts of events per constant interval of time, for which the conditional mean and variance depend on immediate past observations. Such dependency among observations is best described by a Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model with a Poisson-distributed error term, owing to the positive-integer range of values. As a result, an integer GARCH model with a Poisson-distributed error term was formulated in this paper and called the Integer Generalized Autoregressive Conditional Heteroscedasticity (INGARCH) model. The Iteratively Reweighted Least Squares (IRLS) parameter estimation technique of the Generalized Linear Models (GLM) framework was adopted to estimate the parameters of the two split models, the linear and log-linear INGARCH models, deduced from the identity link function and the logarithmic link function, respectively. The estimation follows from the log-likelihood function generated from the GLM via the random component, which follows a Poisson distribution. A study of monthly successful auction bids from 2003 to 2015 was carried out. The Probability Integral Transform (PIT) and scoring rules pointed to better uniformity for the linear INGARCH than for the log-linear INGARCH in describing first-order autocorrelation, serial dependence, and positive conditional effects among covariates based on the immediate past. The linear INGARCH model outperformed the log-linear INGARCH model with (AIC = 10514.47, BIC = 10545.01, QIC = 34128.56) and (AIC = 37588.83, BIC = 37614.28, QIC = 37587.3), respectively.
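The two fitted specifications correspond to the standard linear and log-linear INGARCH forms (generic notation, not copied from the paper; LaTeX):

Y_t \mid \mathcal{F}_{t-1} \sim \mathrm{Poisson}(\lambda_t), \qquad
\lambda_t = \delta + \sum_{i=1}^{p} \alpha_i Y_{t-i} + \sum_{j=1}^{q} \beta_j \lambda_{t-j}
\quad \text{(linear, identity link)},

\log \lambda_t = \delta + \sum_{i=1}^{p} \alpha_i \log(Y_{t-i} + 1) + \sum_{j=1}^{q} \beta_j \log \lambda_{t-j}
\quad \text{(log-linear, log link)}.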
28

Heidari, Ali Asghar, Mehdi Akhoondzadeh, and Huiling Chen. "A Wavelet PM2.5 Prediction System Using Optimized Kernel Extreme Learning with Boruta-XGBoost Feature Selection." Mathematics 10, no. 19 (September 29, 2022): 3566. http://dx.doi.org/10.3390/math10193566.

Abstract:
The fine particulate matter (PM2.5) concentration has been a vital source of information and an essential indicator for measuring and studying the concentration of other air pollutants. It is crucial to realize more accurate predictions of PM2.5 and establish a high-accuracy PM2.5 prediction model due to its social impacts and cross-field applications in geospatial engineering. To further boost the accuracy of PM2.5 prediction results, this paper proposes a new wavelet PM2.5 prediction system (called the WD-OSMSSA-KELM model) based on a new, improved variant of the salp swarm algorithm (OSMSSA), the kernel extreme learning machine (KELM), wavelet decomposition, and Boruta-XGBoost (B-XGB) feature selection. First, we applied the B-XGB feature selection to identify the best features for predicting hourly PM2.5 concentrations. Then, we applied the wavelet decomposition (WD) algorithm to obtain the multi-scale decomposition results and single-branch reconstruction of PM2.5 concentrations to mitigate the prediction error produced by time series data. In the next stage, we optimized the parameters of the KELM model under each reconstructed component. An improved version of the SSA is proposed to achieve higher performance than the basic SSA optimizer and to avoid local stagnation problems. In this work, we propose new operators based on oppositional-based learning and simplex-based search to mitigate the core problems of the conventional SSA. In addition, we utilized a time-varying parameter instead of the main parameter of the SSA. To further boost the exploration trends of the SSA, we propose using random leaders to guide the swarm towards new regions of the feature space based on a conditional structure. After optimization, the optimized model was utilized to predict the PM2.5 concentrations, and different error metrics were applied to evaluate the model's performance and accuracy. The proposed model was evaluated on an hourly database, six air pollutants, and six meteorological features collected from the Beijing Municipal Environmental Monitoring Center. The experimental results show that the proposed WD-OSMSSA-KELM model can predict the PM2.5 concentration with superior performance (R: 0.995, RMSE: 11.906, MdAE: 2.424, MAPE: 9.768, KGE: 0.963, R2: 0.990) compared to the WD-CatBoost, WD-LightGBM, WD-XGBoost, and WD-Ridge methods.
29

Usmanov, Zafar, and Abdunabi Kosimov. "Testing the classifier adapted to recognize the languages of works based on the Latin alphabet." Analysis and data processing systems, no. 2 (June 18, 2021): 83–94. http://dx.doi.org/10.17212/2782-2001-2021-2-83-94.

Abstract:
Using the example of a model collection of 10 texts in five languages (English, German, Spanish, Italian, and French) written in Latin script, the article establishes the applicability of the γ-classifier for automatic recognition of the language of a work based on the frequencies of the 26 common Latin alphabetic letters. The mathematical model of the γ-classifier is represented as a triad. Its first component is a digital portrait (DP) of the text, the distribution of the frequencies of alphabetic unigrams in the text; the second component is a set of formulas for calculating the distances between the DPs of texts; and the third is a machine learning algorithm that implements the hypothesis of 'homogeneity' of works written in one language and 'heterogeneity' of works written in different languages. Tuning the algorithm on a table of pairwise distances between all works of the model collection consisted in determining an optimal value of the real parameter γ for which the error of violating the 'homogeneity' hypothesis is minimized. The γ-classifier trained on the texts of the model collection showed high (100%) accuracy in recognizing the languages of the works. For testing the classifier, an additional six random texts were selected, of which five were in the same languages as the texts of the model collection. By the nearest-neighbor (minimum-distance) method, all new texts confirmed their homogeneity with the corresponding pairs of monolingual works. The sixth text, in Romanian, showed its heterogeneity in relation to all elements of the collection. At the same time, by minimum distance it was closest first to two texts in Spanish and then to two works in Italian.
30

Bosak, Alla, Dmytro Matushkin, Volodymyr Dubovyk, Sviatoslav Homon, and Leonid Kulakovskyi. "Determination of the Concepts of Building a Solar Power Forecasting Model." Scientific Horizons 24, no. 10 (January 26, 2022): 9–16. http://dx.doi.org/10.48077/scihor.24(10).2021.9-16.

Abstract:
Since Ukraine imposes fines for imbalances in solar power generation on the day-ahead energy market, forecasting electricity generation is an important component of solar power plant operation. To forecast the active power generation of photovoltaic panels, a mathematical model should be developed that considers the main factors affecting the volume of energy generation. In this article, the main factors affecting the performance of solar panels were analysed using correlation analysis. The data sets for building the forecasting model were obtained from a solar power plant in the Kyiv region. Two types of data sets were used for the factor analysis and model building: data at a 10-minute time interval and daily data. For each data set, the input parameters were selected using correlation analysis. Taking the determining factors into account, models are built for the function mapping meteorological factors to the volume of electricity generation. It is established that models with finer-grained forecasts of climatic parameters can determine the potential electricity production of the solar power plant for the day ahead with a lower mean absolute error. The best accuracy of the model for predicting electric power generation over the 10-minute interval is obtained with the random forest ensemble model. It is determined that models without solar radiation intensity parameters among the inputs have an unsatisfactory coefficient of determination. Therefore, further research will focus on combining a model forecasting day-ahead solar radiation at 10-minute discreteness with a model determining the amount of electricity generation. The predicted values of solar radiation will be the input parameter of the forecasting model described in the article.
APA, Harvard, Vancouver, ISO, and other styles
31

Qiu, Jianxiu, Jianzhi Dong, Wade T. Crow, Xiaohu Zhang, Rolf H. Reichle, and Gabrielle J. M. De Lannoy. "The benefit of brightness temperature assimilation for the SMAP Level-4 surface and root-zone soil moisture analysis." Hydrology and Earth System Sciences 25, no. 3 (March 29, 2021): 1569–86. http://dx.doi.org/10.5194/hess-25-1569-2021.

Full text
Abstract:
Abstract. The Soil Moisture Active Passive (SMAP) Level-4 (L4) product provides global estimates of surface soil moisture (SSM) and root-zone soil moisture (RZSM) via the assimilation of SMAP brightness temperature (Tb) observations into the NASA Catchment Land Surface Model (CLSM). Here, using in situ measurements from 2474 sites in China, we evaluate the performance of soil moisture estimates from the L4 data assimilation (DA) system and from a baseline “open-loop” (OL) simulation of CLSM without Tb assimilation. Using random forest regression, the efficiency of the L4 DA system (i.e., the performance improvement in DA relative to OL) is attributed to eight control factors related to the CLSM as well as τ–ω radiative transfer model (RTM) components of the L4 system. Results show that the Spearman rank correlation (R) for L4 SSM with in situ measurements increases for 77 % of the in situ measurement locations (relative to that of OL), with an average R increase of approximately 14 % (ΔR=0.056). RZSM skill is improved for about 74 % of the in situ measurement locations, but the average R increase for RZSM is only 7 % (ΔR=0.034). Results further show that the SSM DA skill improvement is most strongly related to the difference between the RTM-simulated Tb and the SMAP Tb observation, followed by the error in precipitation forcing data and estimated microwave soil roughness parameter h. For the RZSM DA skill improvement, these three dominant control factors remain the same, although the importance of soil roughness exceeds that of the Tb simulation error, as the soil roughness strongly affects the ingestion of DA increments and further propagation to the subsurface. For the skill of the L4 and OL estimates themselves, the top two control factors are the precipitation error and the SSM–RZSM coupling strength error, both of which are related to the CLSM component of the L4 system. Finally, we find that the L4 system can effectively filter out errors in precipitation. Therefore, future development of the L4 system should focus on improving the characterization of the SSM–RZSM coupling strength.
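The attribution step described above (per-site skill change regressed on control factors with a random forest) can be sketched as follows; the synthetic series and the factor set are placeholders, not SMAP data:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_sites, n_factors = 500, 8
truth = rng.normal(size=(n_sites, 100))          # stand-in in situ series per site
ol = truth + rng.normal(0, 1.0, truth.shape)     # open-loop estimate
da = truth + rng.normal(0, 0.8, truth.shape)     # DA estimate (slightly better)

# Per-site Spearman skill and skill improvement (Delta R)
r_ol = np.array([spearmanr(t, o)[0] for t, o in zip(truth, ol)])
r_da = np.array([spearmanr(t, d)[0] for t, d in zip(truth, da)])
delta_r = r_da - r_ol

# Attribute Delta R to hypothetical control factors with a random forest
factors = rng.normal(size=(n_sites, n_factors))  # e.g. Tb error, precip error, roughness h
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(factors, delta_r)
print("factor importances:", rf.feature_importances_.round(3))
```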
APA, Harvard, Vancouver, ISO, and other styles
32

Gisen, David C., Cornelia Schütz, and Roman B. Weichert. "Development of behavioral rules for upstream orientation of fish in confined space." PLOS ONE 17, no. 2 (February 18, 2022): e0263964. http://dx.doi.org/10.1371/journal.pone.0263964.

Full text
Abstract:
Improving the effectiveness of fishways requires a better understanding of fish behavior near hydraulic structures, especially of upstream orientation. One of the most promising approaches to this problem is the use of model behavioral rules. We developed a three-dimensional individual-based model based on observed brown trout (Salmo trutta fario) movement in a laboratory flume and tested it against two hydraulically different flume setups. We used the model to examine which of five behavioral rule versions would best explain upstream trout orientation. The versions differed in the stimulus for swim angle selection. The baseline stimulus was positive rheotaxis with a random component. It was supplemented by attraction towards either lower velocity magnitude, constant turbulence kinetic energy, increased flow acceleration, or shorter wall distance. We found that the baseline stimulus version already explained large parts of the observed behavior. Mixed results for velocity magnitude, turbulence kinetic energy, and flow acceleration indicated that the brown trout did not orient primarily by means of these flow features. The wall distance version produced significantly improved results, suggesting that wall distance was the dominant orientation stimulus for brown trout in our hydraulic conditions. The absolute root mean square error (RMSE) was small for the best parameter set (RMSE = 9 for setup 1, RMSE = 6 for setup 2). Our best explanation for these results is dominance of the visual sense favored by absence of challenging hydraulic stimuli. We conclude that under similar conditions (moderate flow and visible walls), wall distance could be a relevant stimulus in confined space, particularly for fishway studies and design in IBMs, laboratory, and the field.
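A minimal sketch of how such a behavioral rule might combine stimuli, assuming a circular-mean blend of positive rheotaxis and wall attraction plus Gaussian noise (the paper's actual rule forms and parameter values are not reproduced):

```python
import math
import random

def swim_angle(flow_dir: float, wall_dir: float, w_wall: float = 0.5,
               sigma: float = 0.3) -> float:
    """Pick a swim angle (radians): positive rheotaxis (head upstream,
    i.e. against flow_dir) with a random component, blended with
    attraction towards the nearer wall (wall_dir). Illustrative only."""
    rheotaxis = flow_dir + math.pi          # face upstream
    noise = random.gauss(0.0, sigma)        # baseline random component
    # circular-mean style blend of the two stimuli
    x = (1 - w_wall) * math.cos(rheotaxis) + w_wall * math.cos(wall_dir)
    y = (1 - w_wall) * math.sin(rheotaxis) + w_wall * math.sin(wall_dir)
    return math.atan2(y, x) + noise

# One step of a simulated trout: flow going in +x, wall off to the left (+y)
print(swim_angle(flow_dir=0.0, wall_dir=math.pi / 2))
```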
APA, Harvard, Vancouver, ISO, and other styles
33

Tang, Yun Chao, Wen Xian Feng, and Ye Zhang. "Eccentric and Nonuniform Axial Force Analysis Based on Steel Tube Components of Recycled Resource." Key Engineering Materials 579-580 (September 2013): 228–33. http://dx.doi.org/10.4028/www.scientific.net/kem.579-580.228.

Full text
Abstract:
Making good use of recycled waste solids, with concrete confined by steel tubes, has become one of the most significant tasks for green, low-carbon production and environmentally friendly sustainable development, especially now that the earthquakes of recent years have produced a host of waste solids and concrete debris. The engineering application of recycled-aggregate-concrete-filled (RAC-filled) steel tube is a new challenge. In practical engineering applications, the forces on steel tube components are nonlinear and random. Owing to the difficulty of conventional methods, revealing the mechanisms of nonuniform axial force in steel tube components has become a main subject. First, this article adopts the finite element method (FEM) to simulate the actual forces and to analyze the mechanics of axial force in steel tube components under uniform, nonuniform, and eccentric conditions, thus building a mechanical model. Then the effect on the load capacity of the steel tube caused by local factors, such as the force conditions of the components and deformation during steel tube production, is analyzed. Finally, force simulations of circular steel tube components with a diameter of 165 mm and wall thicknesses of 4 mm and 2 mm are carried out separately; it turns out that the error for the component with a thickness of 2 mm is 4.96%, while the other is 7.35%, so the simulation and experimental results fit relatively well. Besides, eccentric load capacity is related to eccentricity: the larger the eccentricity, the smaller the load capacity. Moreover, it should be pointed out that the nonuniform load capacity is greater than the centralized eccentric axial load capacity. The research results demonstrate that parameter selection, force conditions, and the manufacturing precision of the steel tube and its components exert a relatively high influence on the load capacity of the component. The results also indicate that the variety of force conditions and deformation has to be considered in applications of RAC-filled steel tube. Thus, the research provides a reference and basis for RAC-filled steel structure design and manufacturing.
APA, Harvard, Vancouver, ISO, and other styles
34

Topór, Tomasz, and Krzysztof Sowiżdżał. "Application of machine learning tools for seismic reservoir characterization study of porosity and saturation type." Nafta-Gaz 78, no. 3 (March 2022): 165–75. http://dx.doi.org/10.18668/ng.2022.03.01.

Full text
Abstract:
The application of machine learning (ML) tools and data-driven modeling has become a standard approach for solving many problems in exploration geology and has contributed to the discovery of new reservoirs. This study explores an application of the machine learning ensemble methods random forest (RF) and extreme gradient boosting (XGBoost) to derive porosity and saturation type (gas/water) in multi-horizon sandstone formations from Miocene deposits of the Carpathian Foredeep. The training of the ML algorithms was divided into two stages. First, the RF algorithm was used to compute porosity based on seismic attributes and well location coordinates. The obtained results were then used as an extra feature in saturation-type modeling with the XGBoost algorithm. XGBoost was run with and without well location coordinates to evaluate the influence of spatial information on modeling performance. The hyperparameters for each model were tuned using a Bayesian optimization algorithm. To check the robustness of the trained models, 10-fold cross-validation was performed. The results were evaluated using standard regression and classification metrics on the training and testing sets. The root mean square error (RMSE) for porosity prediction with RF for training and testing was close to 0.053, providing no evidence of overfitting. Feature importance analysis revealed that the most influential variables for porosity prediction were the spatial coordinates and the seismic sweetness attribute. The results of XGBoost modeling (variant 1) demonstrated that the algorithm could accurately predict saturation type despite the class imbalance issue. The sensitivity of XGBoost on the training and testing data was high, equaling 0.862 and 0.920, respectively. The XGBoost model relied on the computed porosity and the spatial coordinates. The sensitivity results for both training and testing sets dropped significantly, by about 10%, when the well location coordinates were removed (variant 2). In this case, the three most influential features were the computed porosity, the seismic amplitude contrast, and the iso-frequency component (15 Hz) attribute. The obtained results were imported into Petrel software to present the spatial distribution of porosity and saturation type. The latter parameter is given with a probability distribution, which allows for identifying potential target zones enriched in gas.
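A compact sketch of the two-stage pipeline described above (RF porosity regression feeding an XGBoost saturation classifier) on synthetic stand-in data; the attribute names, target construction, and hyperparameters are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBClassifier

rng = np.random.default_rng(2)
n = 1000
seismic = rng.normal(size=(n, 4))   # stand-in seismic attributes
coords = rng.uniform(size=(n, 2))   # well location X, Y
porosity = 0.1 + 0.05 * seismic[:, 0] + 0.02 * coords[:, 0] + rng.normal(0, 0.01, n)
saturation = (porosity + 0.1 * seismic[:, 1] > 0.12).astype(int)  # toy gas/water label

# Stage 1: RF predicts porosity from attributes + coordinates
X1 = np.hstack([seismic, coords])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X1, porosity)
porosity_hat = rf.predict(X1)

# Stage 2: predicted porosity becomes an extra feature for saturation type
X2 = np.hstack([X1, porosity_hat[:, None]])
xgb = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss").fit(X2, saturation)
print("train accuracy:", xgb.score(X2, saturation))
```

In the study itself, the hyperparameters at each stage would be tuned by Bayesian optimization and checked with 10-fold cross-validation rather than left at the defaults above.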
APA, Harvard, Vancouver, ISO, and other styles
35

Boccia, Flavio, and Daniela Covino. "Corporate social responsibility and biotechnological foods: an experimental study on consumer’s behaviour." Nutrition & Food Science, December 17, 2021. http://dx.doi.org/10.1108/nfs-10-2021-0293.

Full text
Abstract:
Purpose New food technologies based on biotechnological organisms are increasingly becoming a cause for debate and conflicting discussions. This paper aims to investigate hypothetical consumer behaviour, and the willingness to pay (WtP), towards a specific type of genetically modified food in relation to particular indications on the label about the implementation of corporate social responsibility (CSR) initiatives by manufacturing companies. Design/methodology/approach For this purpose, a choice experiment was used on a representative sample of more than 1,300 Italian families, interviewing the household member in charge of buying choices within the selected household. A random parameter logit-error component model allows for heterogeneity in consumer preferences and potential correlation across utilities and across taste parameters. Beyond investigating consumers' preferences for the product through a choice experiment, the aim was to detect the drivers of purchase, the preference heterogeneity across consumers' choices, and the WtP for products with those features. Findings The results offer a topic for further discussion and are useful for companies' strategies, to understand how to address such concerns through appropriate CSR policies. The main results are: CSR initiatives always have a strong effect on consumer choice; price is consistently important, exerting a negative influence on individuals' decision-making; consumers may know the possible effects of genetically modified foods, but that does not always translate into purchase behaviour. Originality/value The research considers a particular link between genetically modified food and CSR not previously addressed in detail; moreover, it builds on the authors' own previous research as its natural continuation and development, and it is also important for future research.
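A random parameter logit-error component model of this kind is typically estimated by simulating choice probabilities over draws of the random tastes and the shared error component. A minimal sketch of that simulation, with invented attributes and coefficient values (not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(3)
R = 500  # simulation draws

def choice_probs(X, price, b_mean, b_sd, price_coef, ec_sd, ec_mask):
    """Simulated choice probabilities for a random parameter logit with an
    error component. X: (J, K) attributes, price: (J,), ec_mask: (J,) flags
    the alternatives sharing the error component (e.g. non-opt-out ones).
    All names and values here are illustrative."""
    probs = np.zeros(X.shape[0])
    for _ in range(R):
        beta = rng.normal(b_mean, b_sd)         # random taste draws (K,)
        ec = rng.normal(0.0, ec_sd) * ec_mask   # shared error component
        v = X @ beta + price_coef * price + ec  # utilities
        e = np.exp(v - v.max())
        probs += e / e.sum()
    return probs / R

X = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # 2 CSR-label dummies, opt-out row
price = np.array([2.5, 3.0, 0.0])
p = choice_probs(X, price, b_mean=np.array([0.8, 0.5]), b_sd=np.array([0.4, 0.4]),
                 price_coef=-0.3, ec_sd=1.0, ec_mask=np.array([1.0, 1.0, 0.0]))
print("simulated choice probabilities:", p.round(3))
```

Given estimated coefficients, the WtP for an attribute then follows as the negative ratio of its taste coefficient to the price coefficient.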
APA, Harvard, Vancouver, ISO, and other styles
36

Mansfield, Carol, Willings Botha, Gerard T. Vondeling, Kathleen Klein, Kongming Wang, Jasmeet Singh, and Michelle D. Hackshaw. "Patient preferences for features of HER2-targeted treatment of advanced or metastatic breast cancer: a discrete-choice experiment study." Breast Cancer, September 8, 2022. http://dx.doi.org/10.1007/s12282-022-01394-6.

Full text
Abstract:
Abstract Background We aimed to quantify patients’ benefit-risk preferences for attributes associated with human epidermal growth factor receptor 2 (HER2)-targeted breast cancer treatments and estimate minimum acceptable benefits (MABs), denominated in additional months of progression-free survival (PFS), for given treatment-related adverse events (AEs). Methods We conducted an online discrete-choice experiment (DCE) among patients with self-reported advanced/metastatic breast cancer in the United States, United Kingdom, and Japan (N = 302). In a series of nine DCE questions, respondents chose between two hypothetical treatment profiles created by an experimental design. Profiles were defined by six attributes with varying levels: PFS, nausea/vomiting, diarrhea, liver function problems, risk of heart failure, and risk of serious lung damage and infections. Data were analyzed using an error component random-parameters logit model. Results Among the attributes, patients placed the most importance on a change in PFS from 5 to 26 months; change from no diarrhea to severe diarrhea was the least important. Avoiding a 15% risk of heart failure had the largest MAB (5.8 additional months of PFS), followed by avoiding a 15% risk of serious lung damage and infections (4.6 months), possible severe liver function problems (4.2 months), severe nausea/vomiting (3.7 months), and severe diarrhea (2.3 months) compared with having none of the AEs. The relative importance of 21 additional months of PFS (increasing from 5 to 26 months) increased for women with HER2-negative disease and those with children. Conclusions Patients valued PFS gain higher than the potential risk of AEs when deciding between hypothetical breast cancer treatments.
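The minimum acceptable benefit figures quoted above are ratios of preference-weight estimates: the utility of avoiding an adverse event divided by the per-month utility of PFS. A worked toy example, where the coefficients are invented so as to reproduce the reported 5.8-month figure:

```python
# Illustrative only: the coefficients below are made up, not the study's estimates.
u_per_month_pfs = 0.10          # marginal utility of one added month of PFS
u_avoid_heart_failure = 0.58    # utility gain from avoiding a 15% heart-failure risk

# Minimum acceptable benefit: months of PFS that exactly offset accepting the AE
mab_months = u_avoid_heart_failure / u_per_month_pfs
print(f"MAB = {mab_months:.1f} additional months of PFS")  # -> 5.8, matching the text
```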
APA, Harvard, Vancouver, ISO, and other styles
37

Zhao, J. "Scaled weighted total least-squares adjustment for partial errors-in-variables model." Journal of Geodetic Science 6, no. 1 (December 16, 2016). http://dx.doi.org/10.1515/jogs-2016-0010.

Full text
Abstract:
Scaled total least-squares (STLS) unifies LS, data LS, and TLS through different choices of a scale parameter. The function of the scale parameter is to balance the effect of the random errors of the coefficient matrix and of the observation vector on the estimate of the unknown parameters. Unfortunately, there has been no discussion of how to determine the scale parameter; consequently, the STLS solution cannot be obtained because the scale parameter is unknown. In addition, the STLS method cannot be applied to the structured EIV case, where the coefficient matrix contains fixed elements, repeated random elements in different locations, or both. To circumvent these shortcomings, the study generalizes it to a scaled weighted TLS (SWTLS) problem based on the partial errors-in-variables (EIV) model. The maximum likelihood method is employed to derive the variance components of the observations and the coefficient matrix, and the ratio of the variance components is proposed as the scale parameter. The existing STLS and WTLS methods are just special cases of the SWTLS method. The numerical results show that the proposed method proves to be more effective in some aspects.
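The model and the proposed scale-parameter choice can be written compactly; the following display is a plausible reconstruction from the abstract's description, not the paper's exact notation:

```latex
% Errors-in-variables observation model with random errors in both
% the observation vector y and the coefficient matrix A
\[
  y = (A - E_A)\,x + e, \qquad
  e \sim (0,\;\sigma_0^2 Q_y), \qquad
  \operatorname{vec}(E_A) \sim (0,\;\sigma_A^2 Q_A),
\]
% Scale parameter taken as the ratio of the estimated variance components,
% balancing coefficient-matrix errors against observation errors
\[
  \gamma = \frac{\hat{\sigma}_A^2}{\hat{\sigma}_0^2}.
\]
```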
APA, Harvard, Vancouver, ISO, and other styles
38

Hoover, Lauren, Tanmoy Bhowmik, Shamsunnahar Yasmin, and Naveen Eluru. "Understanding Crash Risk Using a Multi-Level Random Parameter Binary Logit Model: Application to Naturalistic Driving Study Data." Transportation Research Record: Journal of the Transportation Research Board, May 18, 2022, 036119812210909. http://dx.doi.org/10.1177/03611981221090943.

Full text
Abstract:
This study presents a framework to employ naturalistic driving study (NDS) data to understand and predict crash risk at a disaggregate trip level accommodating for the influence of trip characteristics (such as trip distance, trip proportion by speed limit, trip proportion on urban/rural facilities) in addition to the traditional crash factors. Recognizing the rarity of crash occurrence in NDS data, the research employs a matched case-control approach for preparing the estimation sample. The study also conducts an extensive comparison of different case-to-control ratios including 1:4, 1:9, 1:14, 1:19, and 1:29. The model parameters estimated with these control ratios are reasonably similar (except for the constant). Employing the 1:9 sample, a multi-level random parameters binary logit model is estimated where multiple forms of unobserved variables are tested including (a) common unobserved effects for each case-control panel, (b) common unobserved factors affecting the error margin in the trip distance variable, and (c) random effects for all independent variables. The estimated model is calibrated by modifying the constant parameter to generate a population conforming crash risk model. The calibrated model is employed to predict crash risk of trips not considered in model estimation. This study is a proof of concept that NDS data can be used to predict trip-level crash risk and can be used by future researchers to develop crash risk models.
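Because the model is estimated on a matched case-control sample, its constant must be calibrated before population-level crash-risk prediction. One standard prior-correction of the intercept is sketched below; the paper's exact calibration may differ, and the numbers are hypothetical:

```python
import math

def calibrate_intercept(b0_sample: float, sample_rate: float, pop_rate: float) -> float:
    """Prior-correction of a logit constant estimated on a case-control sample
    (one standard approach; not necessarily the authors' exact procedure).
    sample_rate: crash share in the matched sample (e.g. 1/(1+9) for 1:9),
    pop_rate: crash share among all trips in the population."""
    return b0_sample - math.log((sample_rate / (1 - sample_rate)) /
                                (pop_rate / (1 - pop_rate)))

# 1:9 case-control sample -> 10% crashes in-sample; assume 0.1% in the population
print(calibrate_intercept(b0_sample=-1.2, sample_rate=0.10, pop_rate=0.001))
```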
APA, Harvard, Vancouver, ISO, and other styles
39

de Paula Oliveira, Thiago, Georgie Bruinvels, Charles R. Pedlar, Brian Moore, and John Newell. "Modelling menstrual cycle length in athletes using state-space models." Scientific Reports 11, no. 1 (August 20, 2021). http://dx.doi.org/10.1038/s41598-021-95960-1.

Full text
Abstract:
The ability to predict an individual's menstrual cycle length to a high degree of precision could help female athletes to track their period and tailor their training and nutrition correspondingly. Such individualisation is possible and necessary, given the known inter-individual variation in cycle length. To achieve this, a hybrid predictive model was built using data on 16,524 cycles collected from a sample of 2125 women (mean age 34.38 years, range 18.00–47.10, number of menstrual cycles ranging from 4 to 53). A mixed-effect state-space model was fitted to capture the within-subject temporal correlation, incorporating a Bayesian approach for process forecasting to predict the duration (in days) of the next menstrual cycle. The modelling procedure was split into three steps: (1) a time trend component using a random walk with an overdispersion parameter, (2) an autocorrelation component using an autoregressive moving-average model, and (3) a linear predictor to account for covariates (e.g. injury, stomach cramps, training intensity). The inclusion of an overdispersion parameter suggested that 26.36% [23.68%, 29.17%] of cycles in the sample were overdispersed. The random walk standard deviation for a non-overdispersed cycle is 27.41 ± 1.05 [1.00, 1.09] days, while under an overdispersed cycle the menstrual cycle variance increases by 4.78 [4.57, 5.00] days. To assess the performance and prediction accuracy of the model, each woman's last observation was used as test data. The root mean square error (RMSE), concordance correlation coefficient, and Pearson correlation coefficient (r) between the observed and predicted values were calculated. The model had an RMSE of 1.6412 days, a precision of 0.7361, and an overall accuracy of 0.9871. In conclusion, the hybrid model presented here is a helpful approach for predicting menstrual cycle length, which in turn can be used to support female athlete wellness.
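The out-of-sample evaluation described at the end (RMSE and concordance correlation between observed and predicted cycle lengths) can be sketched directly; the observed and predicted values below are toy numbers:

```python
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

def concordance_cc(obs, pred):
    """Lin's concordance correlation coefficient between observed and
    predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mx, my = obs.mean(), pred.mean()
    sx, sy = obs.var(), pred.var()
    sxy = np.mean((obs - mx) * (pred - my))
    return float(2 * sxy / (sx + sy + (mx - my) ** 2))

obs = np.array([28, 30, 27, 33, 29])    # held-out last cycles (toy numbers)
pred = np.array([29, 29, 28, 31, 29])   # model forecasts
print("RMSE:", rmse(obs, pred), "CCC:", concordance_cc(obs, pred))
```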
APA, Harvard, Vancouver, ISO, and other styles
40

Starychenko, Yevhenii, Andriy Skrypnyk, Vitalina Babenko, Nataliia Klymenko, and Kateryna Tuzhyk. "Food Security Indices In Ukraine: Forecast Methods And Trends." Studies of Applied Economics 38, no. 4 (February 10, 2021). http://dx.doi.org/10.25115/eea.v38i4.4000.

Full text
Abstract:
The paper offers a calculation procedure for an integrated Food Security Index (FSI) based on a three-component analysis: economic accessibility, physical security, and sufficiency of consumption. It proposes a methodology for forecasting the Food Security Indices under conditions of macroeconomic instability. It shows that the standard forecasting methodology, based on separating the trend (deterministic) and random components, cannot be applied when intervals of economic growth alternate with crises. Particular emphasis is placed on the Jackson-Watson methodology, which is based on an analysis of the internal structure of the process. A three-parameter ARIMA model was used in the forecast estimates of the Food Security Indices. The applied methods are complemented by exponential smoothing, a damped trend model, linear and exponential smoothing (the Brown method), the two-parameter smoothing method (Holt's method), and the Pearl-Reed curve model. The research offers a food security risk assessment procedure based on the properties of the econometric forecast errors. Based on data mining, the results of forecasting the individual indices and the integrated Food Security Index indicate a satisfactory state that is unlikely to change in the near term.
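A minimal sketch of the three-parameter ARIMA forecasting step named above, using statsmodels; the series values and the (1, 1, 1) order are assumptions, since the abstract does not report them:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Toy annual food-security index series standing in for the real FSI data
fsi = np.array([61.2, 62.0, 60.5, 58.9, 59.7, 61.1, 62.4, 61.8, 63.0, 63.5])

# ARIMA(p, d, q); the order used by the authors is not reported in the
# abstract, so (1, 1, 1) here is an assumption
model = ARIMA(fsi, order=(1, 1, 1)).fit()
print(model.forecast(steps=3))  # three-period-ahead index forecast
```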
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Yizhang, Lingyu Liu, Zhongmin Wang, Tianying Chang, Ke Li, Wenqing Xu, Yong Wu, Hua Yang, and Daoli Jiang. "To Estimate Performance of Artificial Neural Network Model Based on Terahertz Spectrum: Gelatin Identification as an Example." Frontiers in Nutrition 9 (July 14, 2022). http://dx.doi.org/10.3389/fnut.2022.925717.

Full text
Abstract:
There is a need to identify important foods and traditional Chinese medicines (TCM) at low cost, and highly accurate identification is more likely to be achieved by THz-TDS. In this study, feedforward neural networks based on terahertz spectra are employed to predict the animal origin of gelatins, and their suitability for the task is examined using parallel models built with random sample partitions and random initializations. It is found that the generalization performance of feedforward ANNs on the original data is not satisfactory, although predictions on trained samples can be accurate. A multivariate scattering correction is conducted to enhance prediction accuracy, and 20 additional models verify the effectiveness of this treatment. A special partition of the total dataset is conducted based on statistics of the parallel models, and its influence on ANN performance is investigated with another 20 models. The performance of these models is unsatisfactory because of notable differences between the training and test sets according to principal component analysis. By comparing the distribution of the first two principal components before and after multivariate scattering correction, we found that the reciprocal of the minimum number of line segments required for error-free classification in the 2-D feature space can be viewed as an index of the linear separability of the data. A rise in the proposed linear separability lowers the requirement for harsh parameter tuning of ANN models and tolerates random initialization. The difference in the principal components of samples between a training set and a data set determines whether a partition is acceptable and whether a model will generalize. A rapid way to estimate the performance of an ANN before sufficient tuning on a classification mission is to compare differences between groups and differences within groups. Given that a curve missing a representative peak is discussed in this article, the analysis based on gelatin THz spectra may be helpful for studies of other featureless species.
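The multivariate scattering correction step can be sketched with the common regress-on-the-mean-spectrum form (one standard MSC variant; the paper may use a different one):

```python
import numpy as np

def msc(spectra: np.ndarray) -> np.ndarray:
    """Multivariate scatter correction: regress each spectrum on the mean
    spectrum and remove the fitted additive offset and multiplicative slope."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        corrected[i] = (s - intercept) / slope
    return corrected

# Synthetic stand-in: a common spectral shape with per-sample scatter
rng = np.random.default_rng(4)
base = 1.5 + np.sin(np.linspace(0.0, 3.0, 128))        # common THz curve shape
a = rng.uniform(0.8, 1.2, (20, 1))                     # multiplicative scatter
b = rng.uniform(-0.1, 0.1, (20, 1))                    # additive offset
spectra = a * base + b + rng.normal(0, 0.01, (20, 128))
print(msc(spectra).std(axis=0).max())  # residual spread shrinks after correction
```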
APA, Harvard, Vancouver, ISO, and other styles
42

Maurice, Wanyonyi. "Modelling Factors Affecting Lung Capacity." Journal of Advances in Mathematics and Computer Science, January 16, 2020, 1–18. http://dx.doi.org/10.9734/jamcs/2019/v34i630229.

Full text
Abstract:
This study aims to model the factors affecting lung capacity using a linear regression model. The study employed a multiple regression model fitted to the factors affecting lung capacity, which include age, gender, smoking, and height. The objectives of the study were: fitting a regression model to the factors affecting lung capacity, and determining the relationship of age and height with lung capacity. The study also aims to predict the value of lung capacity using the fitted model. The data used in this study came from a secondary source, obtained from Marin [1]; the dataset is publicly available on their website and has 725 observations. Since a multiple linear regression model was employed, the model was of the form LungCapacity = β0 + β1·Age + β2·Height + ε, where LungCapacity is the dependent variable, β0, β1, and β2 are the coefficients (parameters) to be estimated, Age and Height are the independent variables, and ε is the random error component. The methods of parameter estimation discussed in this study include the maximum likelihood estimator and the least squares estimator. The data were analyzed using SPSS and R, statistical software used for data analysis. From the analysis of variance table, a p-value of 0.00 was recorded, which is less than alpha (alpha = 0.05); this implies that the overall model is significant. From the model formulated, it was concluded that height and age greatly affect lung capacity. The model can be used to predict the value of lung capacity provided the values of Age and Height are known. The descriptive statistics also indicate that gender and smoking greatly affect lung capacity.
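A short sketch of fitting this model by ordinary least squares, with synthetic stand-in data in place of the Marin dataset:

```python
import numpy as np
import statsmodels.api as sm

# Toy stand-ins for the 725-observation dataset described in the abstract
rng = np.random.default_rng(5)
age = rng.uniform(3, 19, 725)       # years
height = rng.uniform(45, 82, 725)   # inches
lung_capacity = 1.1 + 0.16 * age + 0.07 * height + rng.normal(0, 1, 725)

X = sm.add_constant(np.column_stack([age, height]))
fit = sm.OLS(lung_capacity, X).fit()
print(fit.params)    # beta0, beta1 (Age), beta2 (Height)
print(fit.f_pvalue)  # overall model significance, as in the ANOVA table
```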
APA, Harvard, Vancouver, ISO, and other styles
43

Guo, Qiao, Shiqing Cheng, Fenghuang Zeng, Yang Wang, Chuan Lu, Chaodong Tan, and Guiliang Li. "Reservoir Permeability Prediction Based on Analogy and Machine Learning Methods: Field Cases in DLG Block of Jing’an Oilfield, China." Lithosphere 2022, Special 12 (September 30, 2022). http://dx.doi.org/10.2113/2022/5249460.

Full text
Abstract:
Reservoir permeability, generally determined by experimental or well-testing methods, is an essential parameter in oil and gas field development. In this paper, we present a novel analogy and machine learning method to predict reservoir permeability. First, the core test and production data of 24 other blocks (analog blocks) are compiled with reference to the DLG block (target block) of Jing’an Oilfield, and the permeability analogy parameters, including porosity, shale content, reservoir thickness, oil saturation, liquid production, and production pressure difference, are selected using Pearson and principal component analysis. Then, the fuzzy matter element method is used to calculate the similarity between the target block and the analog blocks. According to the similarity results, the reservoir permeability of the DLG block is predicted by a reservoir engineering method (the relationship between core permeability and porosity of QK-D7 in similar blocks) and by machine learning methods (random forest, gradient boosting decision tree, light gradient boosting machine, and categorical boosting). Comparing the prediction accuracy of the two approaches using the coefficient of determination (R2) and the root mean square error (RMSE) shows that the CatBoost model has the higher accuracy in predicting reservoir permeability, with an R2 of 0.951 and an RMSE of 0.139. Finally, the CatBoost model is selected to predict the reservoir permeability of 121 oil wells in the DLG block. This work uses simple logging and production data to quickly and accurately predict reservoir permeability without coring and testing. At the same time, the prediction results are applied to the formulation of the DLG block development strategy, which provides a new idea for applying machine learning to predict oilfield parameters.
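A minimal sketch of the winning CatBoost step on stand-in data, scored with the same R2 and RMSE metrics; the feature construction and hyperparameters are illustrative, not the authors':

```python
import numpy as np
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(6)
n = 800
# Stand-ins for the six analogy parameters named in the abstract
X = rng.normal(size=(n, 6))  # porosity, shale content, thickness, oil sat., liquid prod., dP
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.1, n)  # toy permeability target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = CatBoostRegressor(iterations=500, depth=4, verbose=False).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```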
APA, Harvard, Vancouver, ISO, and other styles
44

Sun Tao and Yuan Jian-Mei. "Prediction of band gap of transition metal sulfide with Janus structure by deep learning atomic feature representation method." Acta Physica Sinica, 2023, 0. http://dx.doi.org/10.7498/aps.72.20221374.

Full text
Abstract:
With the development of artificial intelligence, machine learning (ML) is increasingly widely used in materials computation. To apply ML to the prediction of material properties, the first requirement is an effective representation of material features. In this paper, an atomic feature representation method is studied that yields low-dimensional, densely distributed atomic feature vectors, and it is applied to band gap prediction in materials design. Based on the types and numbers of atoms in material chemical formulas, a Transformer encoder is used as the model structure, and a large corpus of chemical formula data is used to train feature vectors for the elements. Cluster analysis of the atomic feature vectors of the main group elements shows that the element features can distinguish element categories. Principal component analysis of the atomic feature vectors of the main group elements shows that their projection onto the first principal component reflects the number of outermost electrons of the corresponding element, which illustrates the effectiveness of the atomic features extracted with the Transformer model. Subsequently, the atomic feature representation was used to represent material features, and three ML methods, namely random forest (RF), kernel ridge regression (KRR), and support vector regression (SVR), were used to predict the band gap of two-dimensional transition metal chalcogenides MXY (M a transition metal; X and Y different chalcogen elements) with the Janus structure. The hyperparameters of the ML models were determined by parameter search, and to obtain stable results the models were tested by 5-fold cross-validation. The results from the three ML models show that the mean absolute error of predictions based on the deep-learning atomic feature vectors is smaller than that obtained with the traditional Magpie method or the Atom2Vec method. With the atomic feature vectors proposed in this paper, the KRR model achieves the best prediction accuracy compared with the Magpie and Atom2Vec representations. This also shows that the proposed atomic feature vector is a low-dimensional, densely distributed feature vector with a certain correlation between its features. Visual analysis and numerical experiments on material property prediction show that the deep-learning-based atomic feature representation proposed in this paper can effectively characterize material features and can be applied to the task of material band gap prediction.
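Of the three regressors compared, kernel ridge regression performed best; a compact sketch of KRR with 5-fold cross-validation on stand-in feature vectors (dimensions, kernel, and hyperparameters are assumptions):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n, d = 200, 32
X = rng.normal(size=(n, d))                        # stand-in learned atomic feature vectors
y = X[:, :4].sum(axis=1) + rng.normal(0, 0.1, n)   # toy band-gap target

krr = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.05)
scores = cross_val_score(krr, X, y, cv=5, scoring="neg_mean_absolute_error")
print("5-fold MAE:", -scores.mean())
```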
APA, Harvard, Vancouver, ISO, and other styles