
Journal articles on the topic "Performance estimation problems"


Consult the top 50 journal articles for your research on the topic "Performance estimation problems".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Qiu, Li, Zhiyuan Ren, and Jie Chen. "Fundamental performance limitations in estimation problems". Communications in Information and Systems 2, no. 4 (2002): 371–84. http://dx.doi.org/10.4310/cis.2002.v2.n4.a3.

2

Chen, Zhenmin, and Feng Miao. "Interval and Point Estimators for the Location Parameter of the Three-Parameter Lognormal Distribution". International Journal of Quality, Statistics, and Reliability 2012 (August 8, 2012): 1–6. http://dx.doi.org/10.1155/2012/897106.

Abstract
The three-parameter lognormal distribution is the extension of the two-parameter lognormal distribution to meet the need of the biological, sociological, and other fields. Numerous research papers have been published for the parameter estimation problems for the lognormal distributions. The inclusion of the location parameter brings in some technical difficulties for the parameter estimation problems, especially for the interval estimation. This paper proposes a method for constructing exact confidence intervals and exact upper confidence limits for the location parameter of the three-parameter lognormal distribution. The point estimation problem is discussed as well. The performance of the point estimator is compared with the maximum likelihood estimator, which is widely used in practice. Simulation result shows that the proposed method is less biased in estimating the location parameter. The large sample size case is discussed in the paper.
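For orientation, the sketch below fits a three-parameter (shifted) lognormal by plain maximum likelihood with scipy; it is only a generic baseline under assumed parameter values, not the confidence-interval construction proposed in the cited paper.

import numpy as np
from scipy import stats

# Simulate a three-parameter lognormal sample; true_loc is the location (threshold) parameter.
true_shape, true_loc, true_scale = 0.5, 10.0, 2.0
data = stats.lognorm.rvs(true_shape, loc=true_loc, scale=true_scale, size=500,
                         random_state=np.random.default_rng(0))

# scipy's fit returns (shape, loc, scale); loc is the location parameter discussed above.
shape_hat, loc_hat, scale_hat = stats.lognorm.fit(data)
print(f"estimated location: {loc_hat:.3f} (true {true_loc})")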
3

WALTHER, B. A., and S. MORAND. "Comparative performance of species richness estimation methods". Parasitology 116, no. 4 (April 1998): 395–405. http://dx.doi.org/10.1017/s0031182097002230.

Abstract
In most real-world contexts the sampling effort needed to attain an accurate estimate of total species richness is excessive. Therefore, methods to estimate total species richness from incomplete collections need to be developed and tested. Using real and computer-simulated parasite data sets, the performances of 9 species richness estimation methods were compared. For all data sets, each estimation method was used to calculate the projected species richness at increasing levels of sampling effort. The performance of each method was evaluated by calculating the bias and precision of its estimates against the known total species richness. Performance was evaluated with increasing sampling effort and across different model communities. For the real data sets, the Chao2 and first-order jackknife estimators performed best. For the simulated data sets, the first-order jackknife estimator performed best at low sampling effort but, with increasing sampling effort, the bootstrap estimator outperformed all other estimators. Estimator performance increased with increasing species richness, aggregation level of individuals among samples and overall population size. Overall, the Chao2 and the first-order jackknife estimation methods performed best and should be used to control for the confounding effects of sampling effort in studies of parasite species richness. Potential uses of and practical problems with species richness estimation methods are discussed.
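As background, the standard incidence-based forms of the two best-performing estimators are easy to state in code; the sketch below uses the textbook bias-corrected Chao2 and first-order jackknife formulas on toy presence/absence data and may differ in detail from the variants used in the cited study.

import numpy as np

def chao2(incidence):
    # incidence: samples x species binary matrix (1 = species detected in that sample)
    freq = incidence.sum(axis=0)
    s_obs = np.count_nonzero(freq)
    q1 = np.count_nonzero(freq == 1)      # species found in exactly one sample
    q2 = np.count_nonzero(freq == 2)      # species found in exactly two samples
    return s_obs + q1 * (q1 - 1) / (2 * (q2 + 1))   # bias-corrected Chao2

def jackknife1(incidence):
    freq = incidence.sum(axis=0)
    s_obs = np.count_nonzero(freq)
    q1 = np.count_nonzero(freq == 1)
    m = incidence.shape[0]                # number of samples
    return s_obs + q1 * (m - 1) / m

toy = (np.random.default_rng(1).random((20, 60)) < 0.1).astype(int)
print(chao2(toy), jackknife1(toy))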
4

Liu, Shaojie, Yulong Zhang, Zhiqiang Gao, Yangquan Chen, Donghai Li, and Min Zhu. "Desired Dynamics-Based Generalized Inverse Solver for Estimation Problems". Processes 10, no. 11 (October 26, 2022): 2193. http://dx.doi.org/10.3390/pr10112193.

Abstract
An important task for estimators is to solve the inverse. However, as the designs of different estimators for solving the inverse vary widely, it is difficult for engineers to be familiar with all of their properties and to design suitable estimators for different situations. Therefore, we propose a more structurally unified and functionally diverse estimator, called generalized inverse solver (GIS). GIS is inspired by the desired dynamics of control systems and understanding of the generalized inverse. It is similar to a closed-loop system, structurally consisting of nominal models and an error-correction mechanism (ECM). The nominal models can be model-based, semi-model-based, or even model-free, depending on prior knowledge of the system. In addition, we design the ECM of GIS based on desired dynamics parameterization by following a simple and meaningful rule, where states are directly used in the ECM to accelerate the convergence of GIS. A case study considering a rotary flexible link shows that GIS can greatly improve the noise suppression performance with lower loss of dynamic estimation performance, when compared with other common observers at the same design bandwidth. Moreover, the dynamic estimation performances of the three GIS approaches (i.e., model-based, semi-model-based, and model-free) are almost the same under the same parameters. These results demonstrate the strong robustness of GIS (although by means of the uniform design method). Finally, some control cases are studied, including a comparison with DOB and ESO, in order to illustrate their approximate equivalence to GIS.
5

Panić, Branislav, Jernej Klemenc, and Marko Nagode. "Improved Initialization of the EM Algorithm for Mixture Model Parameter Estimation". Mathematics 8, no. 3 (March 7, 2020): 373. http://dx.doi.org/10.3390/math8030373.

Abstract
A commonly used tool for estimating the parameters of a mixture model is the Expectation–Maximization (EM) algorithm, which is an iterative procedure that can serve as a maximum-likelihood estimator. The EM algorithm has well-documented drawbacks, such as the need for good initial values and the possibility of being trapped in local optima. Nevertheless, because of its appealing properties, EM plays an important role in estimating the parameters of mixture models. To overcome these initialization problems with EM, in this paper, we propose the Rough-Enhanced-Bayes mixture estimation (REBMIX) algorithm as a more effective initialization algorithm. Three different strategies are derived for dealing with the unknown number of components in the mixture model. These strategies are thoroughly tested on artificial datasets, density–estimation datasets and image–segmentation problems and compared with state-of-the-art initialization methods for the EM. Our proposal shows promising results in terms of clustering and density-estimation performance as well as in terms of computational efficiency. All the improvements are implemented in the rebmix R package.
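To see why EM initialization matters, the short sketch below fits the same one-dimensional mixture with two built-in initializers from scikit-learn's GaussianMixture; the REBMIX initializer itself lives in the rebmix R package and is not reproduced here.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-4, 1, 300),
                       rng.normal(0, 0.5, 300),
                       rng.normal(5, 2, 300)]).reshape(-1, 1)

for init in ("random", "kmeans"):
    gm = GaussianMixture(n_components=3, init_params=init, n_init=1,
                         random_state=0).fit(data)
    # lower_bound_ is the final log-likelihood bound reached by EM from this start
    print(init, round(gm.lower_bound_, 3), np.sort(gm.means_.ravel()).round(2))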
6

Guarino, Cassandra M., Mark D. Reckase, and Jeffrey M. Wooldridge. "Can Value-Added Measures of Teacher Performance Be Trusted?" Education Finance and Policy 10, no. 1 (January 2015): 117–56. http://dx.doi.org/10.1162/edfp_a_00153.

Abstract
We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures true teacher effects in all scenarios, and the potential for misclassifying teachers as high- or low-performing can be substantial. A dynamic ordinary least squares estimator is more robust across scenarios than other estimators. Misspecifying dynamic relationships can exacerbate estimation problems.
7

ODEN, J. TINSLEY, SERGE PRUDHOMME, TIM WESTERMANN, JON BASS, and MARK E. BOTKIN. "ERROR ESTIMATION OF EIGENFREQUENCIES FOR ELASTICITY AND SHELL PROBLEMS". Mathematical Models and Methods in Applied Sciences 13, no. 03 (March 2003): 323–44. http://dx.doi.org/10.1142/s0218202503002520.

Abstract
In this paper, a method for deriving computable estimates of the approximation error in eigenvalues or eigenfrequencies of three-dimensional linear elasticity or shell problems is presented. The analysis for the error estimator follows the general approach of goal-oriented error estimation for which the error is estimated in so-called quantities of interest, here the eigenfrequencies, rather than global norms. A general theory is developed and is then applied to the linear elasticity equations. For the shell analysis, it is assumed that the shell model is not completely known and additional errors are introduced due to modeling approximations. The approach is then based on recovering three-dimensional approximations from the shell eigensolution and employing the error estimator developed for linear elasticity. The performance of the error estimator is demonstrated on several test problems.
8

Mao, Zhi Jie, Zhi Jun Yan, Hong Wei Li, and Jin Meng. "Exact Cramér–Rao Lower Bound for Interferometric Phase Estimator". Advanced Materials Research 1004-1005 (August 2014): 1419–26. http://dx.doi.org/10.4028/www.scientific.net/amr.1004-1005.1419.

Abstract
We are concerned with the problem of interferometric phase estimation using multiple baselines. Simple closed-form expressions for computing the Cramér–Rao lower bound (CRLB) for general phase estimation problems are derived. Performance analysis of the interferometric phase estimation is carried out based on Monte Carlo simulations and CRLB calculation. We show that by utilizing the Cramér–Rao lower bound we are able to determine the combination of baselines that will enable us to achieve the most accurate estimation performance.
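For context, the widely quoted single-baseline, L-look bound on interferometric phase variance can be coded in a few lines; the multi-baseline closed forms derived in the paper are not reproduced here, and the numbers below are purely illustrative.

import numpy as np

def insar_phase_crlb(coherence, looks):
    # Textbook CRLB on the variance (rad^2) of the interferometric phase estimate.
    gamma = np.asarray(coherence, dtype=float)
    return (1.0 - gamma**2) / (2.0 * looks * gamma**2)

print("phase std (rad):", float(np.sqrt(insar_phase_crlb(coherence=0.7, looks=10))))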
9

Gao, Jing, Kehan Bai, and Wenhao Gui. "Statistical Inference for the Inverted Scale Family under General Progressive Type-II Censoring". Symmetry 12, no. 5 (May 5, 2020): 731. http://dx.doi.org/10.3390/sym12050731.

Abstract
Two estimation problems are studied based on the general progressively censored samples, and the distributions from the inverted scale family (ISF) are considered as prospective life distributions. One is the exact interval estimation for the unknown parameter θ , which is achieved by constructing the pivotal quantity. Through Monte Carlo simulations, the average 90 % and 95 % confidence intervals are obtained, and the validity of the above interval estimation is illustrated with a numerical example. The other is the estimation of R = P ( Y < X ) in the case of ISF. The maximum likelihood estimator (MLE) as well as approximate maximum likelihood estimator (AMLE) is obtained, together with the corresponding R-symmetric asymptotic confidence intervals. With Bootstrap methods, we also propose two R-asymmetric confidence intervals, which have a good performance for small samples. Furthermore, assuming the scale parameters follow independent gamma priors, the Bayesian estimator as well as the HPD credible interval of R is thus acquired. Finally, we make an evaluation on the effectiveness of the proposed estimations through Monte Carlo simulations and provide an illustrative example of two real datasets.
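As a quick illustration of the R = P(Y < X) part of the problem, the sketch below uses the inverted exponential (CDF exp(-a/x)), one member of the inverted scale family, and compares the closed-form value of R with a plug-in maximum likelihood estimate; the parameter values are assumed, and the AMLE, bootstrap, and Bayesian procedures of the paper are not shown.

import numpy as np

rng = np.random.default_rng(3)
a, b, n = 2.0, 1.0, 200                      # X ~ IE(a), Y ~ IE(b); true R = a / (a + b)
x = a / rng.exponential(scale=1.0, size=n)   # if U ~ Exp(1), then a/U has CDF exp(-a/x)
y = b / rng.exponential(scale=1.0, size=n)

a_hat = n / np.sum(1.0 / x)                  # MLE of a for the inverted exponential
b_hat = n / np.sum(1.0 / y)
print("true R =", a / (a + b), "plug-in MLE =", a_hat / (a_hat + b_hat))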
10

Ayansola, Olufemi, and Adebowale Adejumo. "On the Performance of Some Estimation Methods in Models with Heteroscedasticity and Autocorrelated Disturbances (A Monte-Carlo Approach)". Mathematical Modelling and Applications 9, no. 1 (April 2, 2024): 23–31. http://dx.doi.org/10.11648/j.mma.20240901.13.

Abstract
The proliferation of panel data studies has been greatly motivated by the availability of data and by a greater capacity for modelling the complexity of human behaviour than a single cross-section or time series, and this has led to the rise of challenging methodologies for estimating such data sets. In practice, panel data are bound to exhibit autocorrelation, heteroscedasticity, or both. Since the presence of heteroscedasticity and autocorrelated errors in panel data models biases the standard errors and leads to less efficient results, this study searches for an estimator that can handle these twin problems when they co-exist in panel data; robust inference in their presence therefore needs to be addressed simultaneously. A Monte-Carlo simulation was designed to investigate the finite sample properties of five estimation methods: the Between Estimator (BE), Feasible Generalized Least Squares (FGLS), the Maximum Estimator (ME), the Modified Maximum Estimator (MME), and a new Proposed Estimator (PE), applied to simulated data contaminated with heteroscedasticity and autocorrelated errors. Under the root mean square error and absolute bias criteria, the Proposed Estimator is asymptotically more efficient and consistent than the other estimators in the study when these problems are present, across all combined levels of autocorrelated remainder errors and fixed heteroscedastic individual effects. For this reason, PE has the better performance among the estimators considered.
11

Vakhania, Nodari. "Probabilistic quality estimations for combinatorial optimization problems". Georgian Mathematical Journal 25, no. 1 (March 1, 2018): 123–34. http://dx.doi.org/10.1515/gmj-2017-0041.

Abstract
The computational complexity of an algorithm is traditionally measured for the worst and the average case. The worst-case estimation guarantees a certain worst-case behavior of a given algorithm, although it might be rough, since in “most instances” the algorithm may have a significantly better performance. The probabilistic average-case analysis claims to derive an average performance of an algorithm, say, for an “average instance” of the problem in question. That instance may be far away from the average of the problem instances arising in a given real-life application, and so the average case analysis would also provide a non-realistic estimation. We suggest that, in general, a wider use of probabilistic models for a more accurate estimation of the algorithm efficiency could be possible. For instance, the quality of the solutions delivered by an approximation algorithm may also be estimated in the “average” probabilistic case. Such an approach would deal with the estimation of the quality of the solutions delivered by the algorithm for the most common (for a given application) problem instances. As we illustrate, the probabilistic modeling can also be used to derive an accurate time complexity performance measure, distinct from the traditional probabilistic average-case time complexity measure. Such an approach could, in particular, be useful when the traditional average-case estimation is still rough or is not possible at all.
12

Hernández-Sanjaime, Rocío, Martín González, and Jose J. López-Espín. "Estimation of Multilevel Simultaneous Equation Models through Genetic Algorithms". Mathematics 8, no. 12 (November 24, 2020): 2098. http://dx.doi.org/10.3390/math8122098.

Abstract
Problems in estimating simultaneous equation models when error terms are not intertemporally uncorrelated have motivated the introduction of a new multivariate model referred to as the Multilevel Simultaneous Equation Model (MSEM). The maximum likelihood estimation of the parameters of an MSEM has been set forth. Because of the difficulties associated with the solution of the system of likelihood equations, the maximum likelihood estimator cannot be obtained through exhaustive search procedures. A hybrid metaheuristic that combines a genetic algorithm and an optimization method has been developed to overcome both technical and analytical limitations in the general case when the covariance structure is unknown. The behaviour of the hybrid metaheuristic has been discussed by varying different tuning parameters. A simulation study has been included to evaluate the adequacy of this estimator when error terms are not serially independent. Finally, the performance of this estimation approach has been compared with other alternatives.
13

Nowak, Thorsten, and Andreas Eidloth. "Dynamic multipath mitigation applying unscented Kalman filters in local positioning systems". International Journal of Microwave and Wireless Technologies 3, no. 3 (March 25, 2011): 365–72. http://dx.doi.org/10.1017/s1759078711000274.

Abstract
Multipath propagation is still one of the major problems in local positioning systems today. Especially in indoor environments, the received signals are disturbed by blockages and reflections. This can lead to a large bias in the user's time-of-arrival (TOA) value. Thus multipath is the most dominant error source for positioning. In order to improve the positioning performance in multipath environments, recent multipath mitigation algorithms based upon the concept of sequential Bayesian estimation are used. The presented approach tries to overcome the multipath problem by estimating the channel dynamics, using unscented Kalman filters (UKF). Simulations on artificial and measured channels from indoor as well as outdoor environments show the profit of the proposed estimator model. Furthermore, the quality of channel estimation applying the UKF and the channel sounding capabilities of the estimator are shown.
14

Hedar, Abdel-Rahman, Amira A. Allam, and Alaa Fahim. "Estimation of Distribution Algorithms with Fuzzy Sampling for Stochastic Programming Problems". Applied Sciences 10, no. 19 (October 3, 2020): 6937. http://dx.doi.org/10.3390/app10196937.

Abstract
Generating practical methods for simulation-based optimization has attracted a great deal of attention recently. In this paper, the estimation of distribution algorithms are used to solve nonlinear continuous optimization problems that contain noise. One common approach to dealing with these problems is to combine sampling methods with optimal search methods. Sampling techniques have a serious problem when the sample size is small, so estimating the objective function values with noise is not accurate in this case. In this research, a new sampling technique is proposed based on fuzzy logic to deal with small sample sizes. Then, simulation-based optimization methods are designed by combining the estimation of distribution algorithms with the proposed sampling technique and other sampling techniques to solve the stochastic programming problems. Moreover, additive versions of the proposed methods are developed to optimize functions without noise in order to evaluate different efficiency levels of the proposed methods. In order to test the performance of the proposed methods, different numerical experiments were carried out using several benchmark test functions. Finally, three real-world applications are considered to assess the performance of the proposed methods.
15

Wang, Hongjian, Jinlong Xu, Aihua Zhang, Cun Li, and Hongfei Yao. "Support Vector Regression-Based Adaptive Divided Difference Filter for Nonlinear State Estimation Problems". Journal of Applied Mathematics 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/139503.

Abstract
We present a support vector regression-based adaptive divided difference filter (SVRADDF) algorithm for improving the low state estimation accuracy of nonlinear systems, which are typically affected by large initial estimation errors and imprecise prior knowledge of process and measurement noises. The derivative-free SVRADDF algorithm is significantly simpler to compute than other methods and is implemented using only functional evaluations. The SVRADDF algorithm involves the use of the theoretical and actual covariance of the innovation sequence. Support vector regression (SVR) is employed to generate the adaptive factor to tune the noise covariance at each sampling instant when the measurement update step executes, which improves the algorithm’s robustness. The performance of the proposed algorithm is evaluated by estimating states for (i) an underwater nonmaneuvering target bearing-only tracking system and (ii) maneuvering target bearing-only tracking in an air-traffic control system. The simulation results show that the proposed SVRADDF algorithm exhibits better performance when compared with a traditional DDF algorithm.
16

Li, Z., and J. Wang. "Least squares image matching: A comparison of the performance of robust estimators". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-1 (November 7, 2014): 37–44. http://dx.doi.org/10.5194/isprsannals-ii-1-37-2014.

Abstract
Least squares image matching (LSM) has been extensively applied and researched for high matching accuracy. However, it still suffers from some problems. Firstly, it needs an appropriate estimate of the initial value. However, in practical applications, initial values may contain some biases from the inaccurate positions of keypoints. Such biases, if high enough, may lead to a divergent solution. If all the matching biases have exactly the same magnitude and direction, then they can be regarded as systematic errors. Secondly, malfunction of an imaging sensor may happen, which generates dead or stuck pixels on the image. This can be referred to as outliers statistically. Because least squares estimation is well known for its inability to resist outliers, all these mentioned deviations from the model determined by LSM cause a matching failure. To solve these problems, with simulation data and real data, a series of experiments considering systematic errors and outliers are designed, and a variety of robust estimation methods including the RANSAC-based method, the M estimator, the S estimator and the MM estimator are applied and compared in LSM. In addition, an evaluation criterion directly related to the ground truth is proposed for performance comparison of these robust estimators. It is found that the robust estimators show robustness against these deviations compared with LSM. Among these robust estimators, the M and MM estimators have the best performance.
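For readers unfamiliar with M estimation, the sketch below fits a generic Huber M estimator by iteratively reweighted least squares on a plain linear model with injected outliers; the LSM observation model and the S and MM estimators compared in the paper are not implemented here.

import numpy as np

def huber_irls(A, y, k=1.345, iters=50):
    p = np.linalg.lstsq(A, y, rcond=None)[0]                       # ordinary LS start
    for _ in range(iters):
        r = y - A @ p
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # robust scale (MAD)
        u = np.abs(r / s)
        w = np.where(u <= k, 1.0, k / np.maximum(u, 1e-12))        # Huber weights
        p = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return p

rng = np.random.default_rng(4)
A = np.column_stack([np.ones(100), rng.normal(size=100)])
y = A @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)
y[:5] += 10.0                                                      # gross outliers
print(huber_irls(A, y))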
17

Mahmoud, Magdi S. "Robust stability and ℋ∞-estimation for uncertain discrete systems with state-delay". Mathematical Problems in Engineering 7, no. 5 (2001): 393–412. http://dx.doi.org/10.1155/s1024123x01001703.

Abstract
In this paper, we investigate the problems of robust stability and ℋ∞-estimation for a class of linear discrete-time systems with time-varying norm-bounded parameter uncertainty and unknown state-delay. We provide complete results for robust stability with prescribed performance measure and establish a version of the discrete Bounded Real Lemma. Then, we design a linear estimator such that the estimation error dynamics is robustly stable with a guaranteed ℋ∞-performance irrespective of the parametric uncertainties and unknown state delays. A numerical example is worked out to illustrate the developed theory.
18

Luan, Yusi, Mengxuan Jiang, Zhenxiang Feng, and Bei Sun. "Estimation of Feeding Composition of Industrial Process Based on Data Reconciliation". Entropy 23, no. 4 (April 16, 2021): 473. http://dx.doi.org/10.3390/e23040473.

Abstract
For an industrial process, the estimation of feeding composition is important for analyzing production status and making control decisions. However, random errors or even gross ones inevitably contaminate the actual measurements. Feeding composition is conventionally obtained via discrete and low-rate artificial testing. To address these problems, a feeding composition estimation approach based on data reconciliation procedure is developed. To improve the variable accuracy, a novel robust M-estimator is first proposed. Then, an iterative robust hierarchical data reconciliation and estimation strategy is applied to estimate the feeding composition. The feasibility and effectiveness of the estimation approach are verified on a fluidized bed roaster. The proposed M-estimator showed better overall performance.
19

Geyda, Alexander, and Igor Lysenko. "System Potential Estimation with Regard to Digitalization: Main Ideas and Estimation Example". Information 11, no. 3 (March 20, 2020): 164. http://dx.doi.org/10.3390/info11030164.

Abstract
The article outlines the main concept and examples of mathematical models needed to estimate system potential and digitalization performance indicators. Such an estimation differs in that it is performed with predictive mathematical models. The purpose of such an estimation is to enable a set of problems of system design and functional design of information technologies to be solved as mathematical problems, predictively and analytically. The hypothesis of the research is that the quality of system functioning in changing conditions can be evaluated analytically, based on predictive mathematical models. We suggested a property of the system potential (or system capability) that describes the effects of the compliance of changing system functioning with changing conditions analytically and predictively. Thus, it describes the performance of the use of information operations to realize functioning in changing conditions. The example includes the system's environment graph-theoretic models and the system's models regarding IT use to react to changing environments. As a result of the suggested models and methods, the quantitative estimation of system potential regarding information technology use becomes possible depending on the parameters and variables of the problems to be solved. Use cases of decision problems based on such indicators include choosing the optimal information technology implementation and synthesizing information operation characteristics.
20

Lu, Yongzhong, Min Zhou, Shiping Chen, David Levy, and Jicheng You. "A Perspective of Conventional and Bio-inspired Optimization Techniques in Maximum Likelihood Parameter Estimation". Journal of Autonomous Intelligence 1, no. 2 (October 23, 2018): 1. http://dx.doi.org/10.32629/jai.v1i2.28.

Abstract
Maximum likelihood estimation is a method of estimating the parameters of a statistical model in statistics. It has been widely used in many disciplines, such as econometrics, data modelling in nuclear and particle physics, and geographical satellite image classification. Over the past decade, although many conventional numerical approximation approaches have been successfully developed to solve maximum likelihood parameter estimation problems, bio-inspired optimization techniques have shown promising performance and gained incredible recognition as an attractive solution to such problems. This review paper attempts to offer a comprehensive perspective of conventional and bio-inspired optimization techniques in maximum likelihood parameter estimation so as to highlight the challenges and key issues and to encourage further research.
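As a reminder of what the conventional numerical route looks like, the sketch below maximizes a normal log-likelihood with a general-purpose optimizer from scipy; a bio-inspired technique would simply swap out the optimizer while keeping the same objective (the model and data here are assumed for illustration).

import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
data = rng.normal(loc=3.0, scale=1.5, size=400)

def neg_log_lik(params):
    mu, log_sigma = params                  # optimize log(sigma) so sigma stays positive
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
print("mu_hat =", res.x[0], "sigma_hat =", float(np.exp(res.x[1])))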
21

Liu, Juan, and Qingfeng Huang. "Tradeoff between estimation performance and sensor usage in distributed localisation problems". International Journal of Ad Hoc and Ubiquitous Computing 1, no. 4 (2006): 230. http://dx.doi.org/10.1504/ijahuc.2006.010504.

22

Standsyah, Rahmawati Erma, Bambang Widjanarko Otok, and Agus Suharsono. "Fixed Effect Meta-Analytic Structural Equation Modeling (MASEM) Estimation Using Generalized Method of Moments (GMM)". Symmetry 13, no. 12 (November 29, 2021): 2273. http://dx.doi.org/10.3390/sym13122273.

Abstract
The fixed effect meta-analytic structural equation modeling (MASEM) model assumes that the population effect is homogeneous across studies. It was first developed analytically using Generalized Least Squares (GLS) and computationally using Weighted Least Squares (WLS) methods. The fixed effect MASEM had not previously been estimated analytically with a moment-based estimation method. One of the classic moment-based estimation methods is the Generalized Method of Moments (GMM); GMM can estimate data whose studies have parameter uncertainty problems, and it also has high accuracy under data heterogeneity. Therefore, this study estimates the fixed effect MASEM model using GMM. The symmetry of this research lies in proving the goodness of the estimator and in assessing its performance analytically and numerically. The resulting estimator was proven to be good, unbiased, and consistent. To show the performance of the obtained estimator, a comparison was carried out on the same data as the MASEM using GLS. The results show that the estimation of MASEM using GMM yields an SE value for each coefficient that is smaller than in the estimation of MASEM using GLS. Interactive GMM for the determination of the optimal weight in GMM gave better results in this study and therefore needs to be developed in order to obtain a Random Model MASEM estimator using GMM that is much more reliable and accurate in performance.
23

Wu, Mingjie, and Wenhao Gui. "Estimation and Prediction for Nadarajah-Haghighi Distribution under Progressive Type-II Censoring". Symmetry 13, no. 6 (June 3, 2021): 999. http://dx.doi.org/10.3390/sym13060999.

Abstract
The paper discusses the estimation and prediction problems for the Nadarajah-Haghighi distribution using progressive type-II censored samples. For the unknown parameters, we first calculate the maximum likelihood estimates through the Expectation–Maximization algorithm. In order to choose the best Bayesian estimator, a loss function must be specified. When the loss is essentially symmetric, it is reasonable to use the square error loss function. However, for some estimation problems, the actual loss is often asymmetric. Therefore, we also need to choose an asymmetric loss function. Under the balanced squared error and symmetric squared error loss functions, the Tierney and Kadane method is used for calculating different kinds of approximate Bayesian estimates. The Metropolis-Hasting algorithm is also provided here. In addition, we construct a variety of interval estimations of the unknown parameters including asymptotic intervals, bootstrap intervals, and highest posterior density intervals using the sample derived from the Metropolis-Hasting algorithm. Furthermore, we compute the point predictions and predictive intervals for a future sample when facing the one-sample and two-sample situations. At last, we compare and appraise the performance of the provided techniques by carrying out a simulation study and analyzing a real rainfall data set.
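To make the Bayesian machinery concrete, the sketch below runs a bare-bones random-walk Metropolis-Hastings sampler on a deliberately simple one-parameter posterior (exponential likelihood, gamma prior, both assumed for illustration); the Nadarajah-Haghighi likelihood under progressive type-II censoring is not reproduced.

import numpy as np

rng = np.random.default_rng(6)
data = rng.exponential(scale=1 / 2.0, size=50)          # true rate = 2

def log_post(lam):
    if lam <= 0:
        return -np.inf
    log_lik = len(data) * np.log(lam) - lam * data.sum()
    log_prior = (2 - 1) * np.log(lam) - 1.0 * lam       # Gamma(2, 1) prior
    return log_lik + log_prior

chain, lam = [], 1.0
for _ in range(5000):
    prop = lam + 0.3 * rng.normal()                     # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(lam):
        lam = prop
    chain.append(lam)
print("posterior mean rate:", np.mean(chain[1000:]))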
24

JAYASINGHE, CHATHURI L., and PANLOP ZEEPHONGSEKUL. "NONPARAMETRIC ESTIMATION OF THE REVERSED HAZARD RATE FUNCTION FOR UNCENSORED AND CENSORED DATA". International Journal of Reliability, Quality and Safety Engineering 18, no. 05 (October 2011): 417–29. http://dx.doi.org/10.1142/s0218539311004160.

Abstract
Reversed hazard rate (RHR) function is an important reliability function that is applicable to various fields. Applications can be found in portfolio selection problems in finance, analysis of left-censored data, problems in actuarial science and forensic science involving estimation of exact time of occurrence of a particular event, etc. In this paper, we propose a new nonparametric estimator based on binning techniques for this reliability function for an uncensored sample and then provide an extension for RHR estimation under left censorship. The performance of the proposed estimators were then evaluated using simulations and real data from reliability and related disciplines. The results indicate that the proposed estimator does well for common lifetime distributions with the bin-width selection methods considered.
25

Cevri, Mehmet, and Dursun Üstündag. "Performance Analysis of Gibbs Sampling for Bayesian Extracting Sinusoids". International Journal of Mathematical Models and Methods in Applied Sciences 15 (November 23, 2021): 148–54. http://dx.doi.org/10.46300/9101.2021.15.19.

Abstract
This paper addresses the problem of estimating the parameters of sinusoids from white noisy data by using Gibbs sampling (GS) in a Bayesian framework. Modifications of its algorithm are tested on data generated from synthetic signals, and its performance is compared with conventional estimators such as Maximum Likelihood (ML) and the Discrete Fourier Transform (DFT) under a variety of signal-to-noise ratios (SNR) and different lengths of data sampling (N), with respect to the Cramér-Rao lower bound (CRLB). All simulation results show its effectiveness in frequency and amplitude estimation of sinusoids.
26

Yoon, Jeonghyeon, Jisoo Oh, and Seungku Kim. "Transfer Learning Approach for Indoor Localization with Small Datasets". Remote Sensing 15, no. 8 (April 17, 2023): 2122. http://dx.doi.org/10.3390/rs15082122.

Abstract
Indoor pedestrian localization has been the subject of a great deal of recent research. Various studies have employed pedestrian dead reckoning, which determines pedestrian positions by transforming data collected through sensors into pedestrian gait information. Although several studies have recently applied deep learning to moving object distance estimations using naturally collected everyday life data, this data collection approach requires a long time, resulting in a lack of data for specific labels or a significant data imbalance problem for specific labels. In this study, to compensate for the problems of the existing PDR, a method based on transfer learning and data augmentation is proposed for estimating moving object distances for pedestrians. Consistent high-performance moving object distance estimation is achieved using only a small training dataset, and the problem of the concentration of training data only on labels within a certain range is solved using window warping and scaling methods. The training dataset consists of the three-axes values of the accelerometer sensor and the pedestrian’s movement speed calculated based on GPS coordinates. All data and GPS coordinates are collected through the smartphone. A performance evaluation of the proposed moving pedestrian distance estimation system shows a high distance error performance of 3.59 m with only approximately 17% training data compared to other moving object distance estimation techniques.
27

Alsofyani, Ibrahim Mohd, Tole Sutikno, Yahya A. Alamri, Nik Rumzi Nik Idris, Norjulia Mohamad Nordin, and Aree Wangsupphaphol. "Experimental Evaluation of Torque Performance of Voltage and Current Models using Measured Torque for Induction Motor Drives". International Journal of Power Electronics and Drive Systems (IJPEDS) 5, no. 3 (February 1, 2015): 433. http://dx.doi.org/10.11591/ijpeds.v5.i3.pp433-440.

Abstract
In this paper, two kinds of observers are proposed to investigate torque estimation. The first one is based on a voltage model represented with a low-pass filter (LPF), which is normally used as a replacement for a pure integrator to avoid the integration drift problem due to dc offset or measurement error. The second estimator used is an extended Kalman filter (EKF) as a current model, which takes into account all noise problems. Both estimation algorithms are investigated during the steady and transient states, tested under light load, and then compared with the measured mechanical torque. In all conditions, the torque estimation error for the EKF has remained within a narrow error band and yielded minimum torque ripples, which motivates the use of the EKF estimation algorithm in high-performance control drives of IMs for achieving high dynamic performance.
28

UEHARA, Kazutake, and Fumio OBATA. "C33 Heat Flux Estimation at Heat Sources of Machine Tool by Solving Inverse Problems (Evaluation of machine tool performance)". Proceedings of International Conference on Leading Edge Manufacturing in 21st century: LEM21 2009.5 (2009): 735–38. http://dx.doi.org/10.1299/jsmelem.2009.5.735.

29

Gadgil, Krutuja S., Prabodh Khampariya, and Shashikant M. Bakre. "Investigation of power quality problems and harmonic exclusion in the power system using frequency estimation techniques". Scientific Temper 14, no. 01 (April 22, 2023): 150–56. http://dx.doi.org/10.58414/scientifictemper.2023.14.1.17.

Abstract
This work aims to investigate a problem with power quality and the exclusion of harmonics in the power system using a method called frequency estimation. This study aims to explore the performance of several strategies for estimating phase and frequency under a variety of less-than-ideal situations, such as voltage imbalance, harmonics, dc-offset, and so on. When the grid signals are characterized by dc-offset, it has been shown that most of the approaches are incapable of calculating the frequency of the grid signals. This article introduces a frequency estimation method known as Modified Dual Second Order Generalized Integrator (MDSOGI). This method accurately guesses the frequency under all of the unideal scenarios. Experiments have shown that the findings are accurate. In order to build a control scheme for a shunt active power filter that is able to function under such circumstances, the scheme is further integrated with the theory of instantaneous reactive power. In order to demonstrate enhanced performance, the experimental prototype is being created.
30

Joshi, Ashwini, Angelika Geroldinger, Lena Jiricka, Pralay Senchaudhuri, Christopher Corcoran, and Georg Heinze. "Solutions to problems of nonexistence of parameter estimates and sparse data bias in Poisson regression". Statistical Methods in Medical Research 31, no. 2 (December 21, 2021): 253–66. http://dx.doi.org/10.1177/09622802211065405.

Abstract
Poisson regression can be challenging with sparse data, in particular with certain data constellations where maximum likelihood estimates of regression coefficients do not exist. This paper provides a comprehensive evaluation of methods that give finite regression coefficients when maximum likelihood estimates do not exist, including Firth’s general approach to bias reduction, exact conditional Poisson regression, and a Bayesian estimator using weakly informative priors that can be obtained via data augmentation. Furthermore, we include in our evaluation a new proposal for a modification of Firth’s approach, improving its performance for predictions without compromising its attractive bias-correcting properties for regression coefficients. We illustrate the issue of the nonexistence of maximum likelihood estimates with a dataset arising from the recent outbreak of COVID-19 and an example from implant dentistry. All methods are evaluated in a comprehensive simulation study under a variety of realistic scenarios, evaluating their performance for prediction and estimation. To conclude, while exact conditional Poisson regression may be confined to small data sets only, both the modification of Firth’s approach and the Bayesian estimator are universally applicable solutions with attractive properties for prediction and estimation. While the Bayesian method needs specification of prior variances for the regression coefficients, the modified Firth approach does not require any user input.
31

Liu, Bing, Zhen Chen, and Xiang Dong Liu. "Computationally Efficient Extended Kalman Filter for Nonlinear Systems". Advanced Materials Research 846-847 (November 2013): 1205–8. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1205.

Abstract
A computationally efficient extended Kalman filter is developed for nonlinear estimation problems in this paper. The filter is performed in three stages. First, the state predictions are evaluated by the dynamic model of the system. Then, the dynamic equations of the rectification quantities for the predicted states are designed. Finally, the state estimations are updated by the predicted states with the rectification quantities multiplied by a single scale factor. One advantage of the filter is that the computational cost is reduced significantly, because the matrix coefficients of the rectified equations are constant. It does not need to evaluate the Jacobian matrices or the matrix inversion for updating the gain matrix either. Another advantage is that a single scale factor is introduced to scale the model approximation error, leading to an improved filter performance. The excellent performance of the proposed filter is demonstrated by an example with the application to the estimation problems for the sensorless permanent magnet synchronous motor direct torque control system.
32

Sakuma and Nishi. "Estimation of Building Thermal Performance Using Simple Sensors and Air Conditioners". Energies 12, no. 15 (July 31, 2019): 2950. http://dx.doi.org/10.3390/en12152950.

Abstract
Energy and environmental problems have attracted attention worldwide. Energy consumption in residential sectors accounts for a large percentage of total consumption. Several retrofit schemes, which insulate building envelopes to increase energy efficiency, have been adapted to address residential energy problems. However, these schemes often fail to balance the installment cost with savings from the retrofits. To maximize the benefit, selecting houses with low thermal performance by a cost-effective method is inevitable. Therefore, an accurate, low-cost, and undemanding housing assessment method is required. This paper proposes a thermal performance assessment method for residential housing. The proposed method enables assessments under the existing conditions of residential housings and only requires a simple and affordable monitoring system of power meters for an air conditioner (AC), simple sensors (three thermometers at most), a BLE beacon, and smartphone application. The proposed method is evaluated thoroughly by using both simulation and experimental data. Analysis of estimation errors is also conducted. Our method shows that the accuracy achieved with the proposed three-room model is 9.8% (relative error) for the simulation data. Assessments on the experimental data also show that our proposed method achieved Ua value estimations using a low-cost system, satisfying the requirements of housing assessments for retrofits.
33

Perez-Rodriguez, Ricardo. "An estimation of distribution algorithm for combinatorial optimization problems". International Journal of Industrial Optimization 3, no. 1 (February 3, 2022): 47–67. http://dx.doi.org/10.12928/ijio.v3i1.5862.

Abstract
This paper considers several combinatorial problems regarded as among the most difficult to solve in the combinatorial optimization field, such as the job shop scheduling problem (JSSP), the vehicle routing problem with time windows (VRPTW), and the quay crane scheduling problem (QCSP). A hybrid metaheuristic algorithm that integrates the Mallows model and the Moth-flame algorithm solves these problems. Through an exponential function, the Mallows model emulates the solution space distribution for the problems; meanwhile, the Moth-flame algorithm is in charge of determining how to produce the offspring by a geometric function that helps identify the new solutions. The proposed metaheuristic, called HEDAMMF (Hybrid Estimation of Distribution Algorithm with Mallows model and Moth-Flame algorithm), improves the performance of recent algorithms. Although knowing the algebra of permutations is required to understand the proposed metaheuristic, utilizing the HEDAMMF is justified because certain problems are formulated differently under different circumstances. These problems do not share the same objective function (fitness) and/or the same constraints. Therefore, it is not possible to use a single model problem. The aforementioned approach is able to outperform recent algorithms under different metrics for these three combinatorial problems. Finally, it is possible to conclude that the hybrid metaheuristic performs better than, or at least as effectively as, recent algorithms.
34

Ganor-Stern, Dana. "Can Dyscalculics Estimate the Results of Arithmetic Problems?" Journal of Learning Disabilities 50, no. 1 (August 4, 2016): 23–33. http://dx.doi.org/10.1177/0022219415587785.

Abstract
The present study is the first to examine the computation estimation skills of dyscalculics versus controls using the estimation comparison task. In this task, participants judged whether an estimated answer to a multidigit multiplication problem was larger or smaller than a given reference number. While dyscalculics were less accurate than controls, their performance was well above chance level. The performance of controls but not of those with developmental dyscalculia (DD) improved consistently for smaller problem sizes. The performance of both groups was superior when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, both of which are considered to be the markers of the approximate number system (ANS). Strategy analysis distinguished between an approximated calculation strategy and a sense of magnitude strategy, which does not involve any calculation but relies entirely on the ANS. Dyscalculics used the latter more often than controls. The present results suggest that there is little, if any, impairment in the ANS of adults with DD and that their main deficiency is with performing operations on magnitudes rather than with the representations of the magnitudes themselves.
35

Ge, Pingshu, Ce Zhang, Tao Zhang, Lie Guo, and Qingyang Xiang. "Maximum Correntropy Square-Root Cubature Kalman Filter with State Estimation for Distributed Drive Electric Vehicles". Applied Sciences 13, no. 15 (July 29, 2023): 8762. http://dx.doi.org/10.3390/app13158762.

Abstract
For nonlinear systems, both the cubature Kalman filter (CKF) and square-root cubature Kalman filter (SCKF) can get good estimation performance under Gaussian noise. However, the actual driving environment noise mostly has non-Gaussian properties, leading to a significant reduction in robustness and accuracy for distributed vehicle state estimation. To address such problems, this paper uses the square-root cubature Kalman filter with the maximum correlation entropy criterion (MCSRCKF), establishing a seven degrees of freedom (7-DOF) nonlinear distributed vehicle dynamics model for accurately estimating longitudinal vehicle speed, lateral vehicle speed, yaw rate, and wheel rotation angular velocity using low-cost sensor signals. The co-simulation verification is verified by the CarSim/Simulink platform under double-lane change and serpentine conditions. Experimental results show that the MCSRCKF has high accuracy and enhanced robustness for distributed drive vehicle state estimation problems in real non-Gaussian noise environments.
36

Pinchas, Assaf, Irad Ben-Gal, and Amichai Painsky. "A Comparative Analysis of Discrete Entropy Estimators for Large-Alphabet Problems". Entropy 26, no. 5 (April 28, 2024): 369. http://dx.doi.org/10.3390/e26050369.

Abstract
This paper presents a comparative study of entropy estimation in a large-alphabet regime. A variety of entropy estimators have been proposed over the years, where each estimator is designed for a different setup with its own strengths and caveats. As a consequence, no estimator is known to be universally better than the others. This work addresses this gap by comparing twenty-one entropy estimators in the studied regime, starting with the simplest plug-in estimator and leading up to the most recent neural network-based and polynomial approximate estimators. Our findings show that the estimators’ performance highly depends on the underlying distribution. Specifically, we distinguish between three types of distributions, ranging from uniform to degenerate distributions. For each class of distribution, we recommend the most suitable estimator. Further, we propose a sample-dependent approach, which again considers three classes of distribution, and report the top-performing estimators in each class. This approach provides a data-dependent framework for choosing the desired estimator in practical setups.
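For reference, the two simplest estimators that usually enter such comparisons fit in a few lines: the plug-in (maximum likelihood) entropy estimator and its Miller-Madow bias correction; the neural-network and polynomial-approximation estimators studied in the paper are not reproduced here.

import numpy as np

def plugin_entropy(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))                        # entropy in nats

def miller_madow(counts):
    n = counts.sum()
    k_observed = np.count_nonzero(counts)
    return plugin_entropy(counts) + (k_observed - 1) / (2 * n)

rng = np.random.default_rng(7)
samples = rng.integers(0, 1000, size=500)                # large alphabet, few samples
counts = np.bincount(samples, minlength=1000)
print(plugin_entropy(counts), miller_madow(counts))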
37

Sun, Huan Huan, Jun Bi, and Sai Shao. "The State of Charge Estimation of Lithium Battery in Electric Vehicle Based on Extended Kalman Filter". Advanced Materials Research 953-954 (June 2014): 796–99. http://dx.doi.org/10.4028/www.scientific.net/amr.953-954.796.

Abstract
Accurate estimation of battery state of charge (SOC) is important to ensure operation of electric vehicle. Since a nonlinear feature exists in battery system and extended kalman filter algorithm performs well in solving nonlinear problems, the paper proposes an EKF-based method for estimating SOC. In order to obtain the accurate estimation of SOC, this paper is based on composite battery model that is a combination of three battery models. The parameters are identified using the least square method. Then a state equation and an output equation are identified. All experimental data are collected from operating EV in Beijing. The results of the experiment show that the relative error of estimation of state of charge is reasonable, which proves this method has good estimation performance.
38

Naga Anusha, M., Y. Swara, S. Koteswara Rao, and V. Gopi Tilak. "Pendulum state estimation using nonlinear state estimators". International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 9. http://dx.doi.org/10.14419/ijet.v7i2.7.10244.

Abstract
The Kalman filter does not guarantee convergence for nonlinear state estimation. For nonlinear state estimation problems, the present research analyzes the performance of the linearized Kalman filter and the extended Kalman filter on a simple nonlinear state estimation problem. The simple pendulum is the best example of simple nonlinear state dynamics. The performance analysis, based on the root mean square errors of the estimates, is carried out through Monte-Carlo simulation.
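To give the pendulum example some substance, the sketch below runs a compact extended Kalman filter on Euler-discretized pendulum dynamics with a noisy angle measurement; all noise levels and physical parameters are assumed, and the cited paper's exact simulation settings are not reproduced.

import numpy as np

g, L, dt = 9.81, 1.0, 0.01
Q = np.diag([1e-6, 1e-4])                 # process noise covariance
R = np.array([[0.05 ** 2]])               # measurement noise covariance (angle sensor)
H = np.array([[1.0, 0.0]])                # only the angle is measured

def f(x):                                 # Euler-discretized pendulum dynamics
    th, om = x
    return np.array([th + dt * om, om - dt * (g / L) * np.sin(th)])

def F_jac(x):                             # Jacobian of f at x
    th, _ = x
    return np.array([[1.0, dt], [-dt * (g / L) * np.cos(th), 1.0]])

rng = np.random.default_rng(8)
x_true, x_est, P = np.array([0.5, 0.0]), np.array([0.0, 0.0]), np.eye(2)
for _ in range(1000):
    x_true = f(x_true)
    z = x_true[0] + 0.05 * rng.normal()                   # noisy angle measurement
    F = F_jac(x_est)                                      # predict
    x_est, P = f(x_est), F @ P @ F.T + Q
    S = H @ P @ H.T + R                                   # update
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K * (z - x_est[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
print("final angle error (rad):", float(x_true[0] - x_est[0]))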
39

Ghorbanidehno, Hojat, Jonghyun Lee, Matthew Farthing, Tyler Hesser, Peter K. Kitanidis, and Eric F. Darve. "Novel Data Assimilation Algorithm for Nearshore Bathymetry". Journal of Atmospheric and Oceanic Technology 36, no. 4 (April 2019): 699–715. http://dx.doi.org/10.1175/jtech-d-18-0067.1.

Abstract
It can be expensive and difficult to collect direct bathymetry data for nearshore regions, especially in high-energy locations where there are temporally and spatially varying bathymetric features like sandbars. As a result, there has been increasing interest in remote assessment techniques for estimating bathymetry. Recent efforts have combined Kalman filter–based techniques with indirect video-based observations for bathymetry inversion. Here, we estimate nearshore bathymetry by utilizing observed wave celerity and wave height, which are related to bathymetry through phase-averaged wave dynamics. We present a modified compressed-state Kalman filter (CSKF) method, a fast and scalable Kalman filter method for linear and nonlinear problems with large numbers of unknowns and measurements, and apply it to two nearshore bathymetry estimation problems. To illustrate the robustness and accuracy of our method, we compare its performance with that of two ensemble-based approaches on twin bathymetry estimation problems with profiles based on surveys taken by the U.S. Army Corps of Engineer Field Research Facility (FRF) in Duck, North Carolina. We first consider an estimation problem for a temporally constant bathymetry profile. Then we estimate bathymetry as it evolves in time. Our results indicate that the CSKF method is more accurate and robust than the ensemble-based methods with the same computational cost. The superior performance is due to the optimal low-rank representation of the covariance matrices.
40

Annaswamy, A. M., C. Thanomsat, N. Mehta, and Ai-Poh Loh. "Applications of Adaptive Controllers to Systems With Nonlinear Parametrization". Journal of Dynamic Systems, Measurement, and Control 120, no. 4 (December 1, 1998): 477–87. http://dx.doi.org/10.1115/1.2801489.

Abstract
Nonlinear parametrizations occur in dynamic models of several complex engineering problems. The theory of adaptive estimation and control has been applicable, by and large, to problems where parameters appear linearly. We have recently developed an adaptive controller that is capable of estimating parameters that appear nonlinearly in dynamic systems in a stable manner. In this paper, we present this algorithm and its applicability to two problems, temperature regulation in chemical reactors and precise positioning using magnetic bearings both of which contain nonlinear parametrizations. It is shown in both problems that the proposed controller leads to a significantly better performance than those based on linear parametrizations or linearized dynamics.
41

Vapnik, V., and L. Bottou. "Local Algorithms for Pattern Recognition and Dependencies Estimation". Neural Computation 5, no. 6 (November 1993): 893–909. http://dx.doi.org/10.1162/neco.1993.5.6.893.

Abstract
In previous publications (Bottou and Vapnik 1992; Vapnik 1992) we described local learning algorithms, which result in performance improvements for real problems. We present here the theoretical framework on which these algorithms are based. First, we present a new statement of certain learning problems, namely the local risk minimization. We review the basic results of the uniform convergence theory of learning, and extend these results to local risk minimization. We also extend the structural risk minimization principle for both pattern recognition problems and regression problems. This extended induction principle is the basis for a new class of algorithms.
42

Shin, Yoonseok. "Application of Boosting Regression Trees to Preliminary Cost Estimation in Building Construction Projects". Computational Intelligence and Neuroscience 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/149702.

Abstract
Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimations, but has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to have a high performance in cost estimation domains. The BRT model has shown results similar to those of NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision making process. Consequently, the boosting approach has potential applicability in preliminary cost estimations in a building construction project.
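For a sense of scale, the sketch below trains a boosted regression tree on synthetic project-like features with scikit-learn; the 234 actual building-cost records used in the paper are not available here, so the data and hyperparameters are assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
X = rng.uniform(size=(300, 4))                     # stand-ins for early design attributes
y = 200 * X[:, 0] + 50 * X[:, 1] ** 2 + 10 * rng.normal(size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(brt.score(X_te, y_te), 3))
print("feature importances:", brt.feature_importances_.round(3))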
APA, Harvard, Vancouver, ISO, etc. citation styles
43

TOY, Ayşegül, Ayhan KAPUSUZOĞLU, and Nildağ Başak CEYLAN. "The Effect of Working Capital Management on the Performance of the Textile Firms: Evidence from Fragile Five Countries (FFCs)". Ekonomi, Politika & Finans Araştırmaları Dergisi 7, no. 4 (December 31, 2022): 814–38. http://dx.doi.org/10.30784/epfad.1205427.

Full text
Abstract
Effective working capital management can contribute to a firm's financial profitability, increase firm value, create a short-term financing source, keep operations running, and improve sustainability. This study examines the effect of working capital management on the performance (ROA and Tobin's Q) of firms operating in the textile industry in four of the Fragile Five countries (Brazil, India, Indonesia, and Turkey) between 2010 and 2020. The coefficients of the panel regression models were estimated with the Driscoll-Kraay estimator, which is robust to unobserved heterogeneity, autocorrelation, heteroscedasticity, and cross-sectional dependence. Overall, the panel data estimation results show that the effect of working capital management on financial performance differs significantly depending on the selected performance variable. These results indicate that successful and effective working capital management in the textile sector depends on taking into account differences in economic conditions, capital markets, financial market performance, and daily working habits, and on evaluating each component of working capital separately.
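A minimal sketch of how such a panel regression with Driscoll-Kraay standard errors could be estimated with the `linearmodels` package; the file name, the column names (roa, ccc, leverage, size), and the inclusion of entity effects are illustrative assumptions, not the study's exact specification.

```python
# Panel regression with Driscoll-Kraay standard errors (assumed data layout).
import pandas as pd
from linearmodels.panel import PanelOLS

# One row per firm-year, indexed by (firm, year); "textile_panel.csv" is hypothetical.
df = pd.read_csv("textile_panel.csv").set_index(["firm", "year"])

model = PanelOLS.from_formula(
    "roa ~ 1 + ccc + leverage + size + EntityEffects", data=df
)
# cov_type="kernel" yields Driscoll-Kraay standard errors, which are robust to
# heteroscedasticity, autocorrelation, and cross-sectional dependence.
result = model.fit(cov_type="kernel")
print(result.summary)
```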
APA, Harvard, Vancouver, ISO, etc. citation styles
44

Liauh, C. T., and R. B. Roemer. "Multiple Minima in Inverse Hyperthermia Temperature Estimation Problems". Journal of Biomechanical Engineering 115, no. 3 (August 1, 1993): 239–46. http://dx.doi.org/10.1115/1.2895481.

Full text
Abstract
Using one-, two-, and three-dimensional numerical simulation models, it is shown that multiple minima solutions exist for some inverse hyperthermia temperature estimation problems. This is a new observation that has important implications for all potential applications of these inverse techniques. The general conditions under which these multiple minima occur are shown to be solely due to the existence of symmetries in the bio-heat transfer model used to solve the inverse problem. General rules for determining the number of these global minimum points in the unknown parameter (perfusion) space are obtained for several geometrically symmetric (with respect to the sensor placement and the inverse-case blood perfusion model) one-, two-, and three-dimensional problem formulations with multiple perfusion regions when no model mismatch is present. As the amount of this symmetry is successively reduced, all but one of these global minima caused by symmetry become local minima. A general approach is presented for (a) detecting when the inverse algorithm has converged to a local minimum and (b) using that knowledge to direct the search algorithm toward the global minimum. A three-dimensional, random perfusion distribution example is given which illustrates the effects of the multiple minima on the performance of a state and parameter estimation algorithm. This algorithm attempts to reconstruct the entire temperature field during simulated hyperthermia treatments based on knowledge of measured temperatures from a limited number of locations.
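A toy multistart experiment (not the paper's bio-heat transfer model) showing how symmetry produces two global minima and how random restarts of a local optimizer can reveal them; the two-parameter objective below is an assumed stand-in.

```python
# A symmetric two-parameter least-squares problem has two global minima;
# restarting a local optimizer from random initial guesses reveals both.
import numpy as np
from scipy.optimize import minimize

w_true = np.array([2.0, 5.0])
data = np.array([w_true.sum(), w_true.prod()])   # "measurements" invariant to swapping w1, w2

def objective(w):
    model = np.array([w[0] + w[1], w[0] * w[1]])
    return np.sum((model - data) ** 2)

rng = np.random.default_rng(0)
global_minima = set()
for _ in range(20):
    w0 = rng.uniform(0.1, 10.0, size=2)          # random restart
    res = minimize(objective, w0)                # local (quasi-Newton) search
    if res.fun < 1e-8:                           # reached a global minimum
        global_minima.add(tuple(np.round(res.x, 3)))
    # otherwise: converged to a local minimum / stalled -> restart elsewhere

print(global_minima)   # both (2, 5) and (5, 2) typically appear
```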
APA, Harvard, Vancouver, ISO, etc. citation styles
45

Allawi, Mohammed Falah, Sinan Q. Salih, Murizah Kassim, Majeed Mattar Ramal, Abdulrahman S. Mohammed, and Zaher Mundher Yaseen. "Application of Computational Model Based Probabilistic Neural Network for Surface Water Quality Prediction". Mathematics 10, no. 21 (October 25, 2022): 3960. http://dx.doi.org/10.3390/math10213960.

Full text
Abstract
Applications of artificial intelligence (AI) models have been massively explored across engineering and science domains over the past two decades. Their capacity for modeling complex problems has motivated researchers to explore their merit in different disciplines. This study reports the use of two AI models (a probabilistic neural network and a multilayer perceptron neural network) for the estimation of two water quality indicators, namely dissolved oxygen (DO) and five-day biochemical oxygen demand (BOD5). The WQ parameter estimation was based on four input modelling scenarios. Monthly water quality data from January 2006 to December 2015 were used to build the prediction models. The proposed models were established using several physical and chemical variables, such as turbidity, calcium (Ca), pH, temperature (T), total dissolved solids (TDS), sulfate (SO4), total suspended solids (TSS), and alkalinity, as the input variables. The proposed models were evaluated with different statistical metrics, and the results showed that in some cases the estimation accuracy increases as more input variables are added. The performance of the PNN model was superior to that of the MLPNN model in estimating both DO and BOD5. The study concluded that the PNN model is a good tool for estimating the WQ parameters. The best evaluation indicators for the PNN in predicting BOD5 are R2 = 0.93, RMSE = 0.231, and MAE = 0.197; for DO they are R2 = 0.94, RMSE = 0.222, and MAE = 0.175.
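The exact PNN configuration and the water-quality records are not reproduced here; as a rough stand-in, the sketch below implements a GRNN-style kernel estimator (a close regression analogue of the PNN) on synthetic inputs, with an assumed bandwidth.

```python
# GRNN-style (kernel-weighted average) regression as a simplified illustration.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))            # e.g. scaled turbidity, pH, T, TDS (assumed)
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)   # synthetic proxy for DO
X_new = rng.normal(size=(5, 4))
print(grnn_predict(X, y, X_new))
```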
APA, Harvard, Vancouver, ISO, etc. citation styles
46

Choi, Youn-Ho, and Seok-Cheol Kee. "Monocular Depth Estimation Using a Laplacian Image Pyramid with Local Planar Guidance Layers". Sensors 23, no. 2 (January 11, 2023): 845. http://dx.doi.org/10.3390/s23020845.

Full text
Abstract
It is important to estimate exact depth from 2D images, and many studies have been conducted over a long period to solve depth estimation problems. Recently, as deep-learning-based research on estimating depth from monocular camera images has progressed, various techniques for estimating accurate depth have been investigated. However, depth estimation from 2D images has struggled to predict the boundaries between objects. In this paper, we aim to predict sophisticated depths by emphasizing the precise boundaries between objects. We propose a depth estimation network with an encoder-decoder structure using the Laplacian pyramid and the local planar guidance method. When upsampling the features learned by the encoder, the Laplacian pyramid and local planar guidance techniques are used to guide a more sophisticated object boundary and obtain a clearer depth map. We train and test our models with the KITTI and NYU Depth V2 datasets. The proposed network constructs a DNN using only convolution and uses the ConvNext networks as a backbone. The trained model achieves an absolute relative error (Abs_rel) of 0.054 and a root mean square error (RMSE) of 2.252 on the KITTI dataset, and an Abs_rel of 0.102 and an RMSE of 0.355 on the NYU Depth V2 dataset. Among state-of-the-art monocular depth estimation methods, our network ranks fifth on the KITTI Eigen split and eighth on NYU Depth V2.
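A minimal sketch of the Laplacian image pyramid ingredient using OpenCV; the encoder-decoder network and the local planar guidance layers themselves are not shown, and the input file name is hypothetical.

```python
# Build a Laplacian image pyramid: each level keeps the detail lost by pyrDown.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Return [L0, L1, ..., low-frequency residual]."""
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down)
        up = cv2.resize(up, (current.shape[1], current.shape[0]))
        pyramid.append(current - up)      # band-pass detail at this scale
        current = down
    pyramid.append(current)               # coarsest residual
    return pyramid

img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
if img is not None:
    for i, level in enumerate(laplacian_pyramid(img)):
        print(i, level.shape)
```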
APA, Harvard, Vancouver, ISO, etc. citation styles
47

Lu, Dengsheng, Qi Chen, Guangxing Wang, Emilio Moran, Mateus Batistella, Maozhen Zhang, Gaia Vaglio Laurin, and David Saah. "Aboveground Forest Biomass Estimation with Landsat and LiDAR Data and Uncertainty Analysis of the Estimates". International Journal of Forestry Research 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/436537.

Full text
Abstract
Landsat Thematic Mapper (TM) imagery has long been the dominant data source, and LiDAR has recently offered an important new structural data stream for forest biomass estimation. Forest biomass uncertainty analysis, on the other hand, has only recently received sufficient attention because of the difficulty of collecting reference data. This paper provides a brief overview of current forest biomass estimation methods using both TM and LiDAR data. A case study is then presented that demonstrates the forest biomass estimation methods and uncertainty analysis. Results indicate that Landsat TM data can provide adequate biomass estimates for secondary succession but are not suitable for mature forest biomass estimates due to data saturation problems. LiDAR can overcome TM's shortcoming and provide better biomass estimation performance, but it has not been extensively applied in practice due to data availability constraints. The uncertainty analysis indicates that various sources affect the performance of forest biomass/carbon estimation; the clear dominant sources of uncertainty are the variation of the input sample plot data and the data saturation problem related to optical sensors. A possible solution for increasing the confidence in forest biomass estimates is to integrate the strengths of multisensor data.
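For illustration only, a plot-level regression that combines simulated TM-band reflectances with a simulated LiDAR canopy-height metric as biomass predictors; the data below are synthetic, not the study's field plots, and the model choice is an assumption.

```python
# Combine optical (TM-band) and LiDAR height predictors for biomass regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 150
tm_bands = rng.uniform(0.0, 0.5, size=(n, 6))      # simulated TM reflectances
lidar_h = rng.uniform(2.0, 35.0, size=(n, 1))      # simulated canopy height (m)
biomass = 8.0 * lidar_h[:, 0] + 30.0 * tm_bands[:, 3] + rng.normal(0, 10, n)

X = np.hstack([tm_bands, lidar_h])                 # multisensor feature matrix
rf = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV R^2:", cross_val_score(rf, X, biomass, cv=5).mean().round(3))
```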
APA, Harvard, Vancouver, ISO, etc. citation styles
48

Zhang, Hongpeng, Yujie Wei, Huan Zhou, and Changqiang Huang. "Maneuver Decision-Making for Autonomous Air Combat Based on FRE-PPO". Applied Sciences 12, no. 20 (October 11, 2022): 10230. http://dx.doi.org/10.3390/app122010230.

Full text
Abstract
Maneuver decision-making is the core of autonomous air combat, and reinforcement learning is a potential and ideal approach for addressing decision-making problems. However, when reinforcement learning is used for maneuver decision-making in autonomous air combat, it often suffers from poor training efficiency and poor decision-making performance. In this paper, an air combat maneuver decision-making method based on final reward estimation and proximal policy optimization is proposed to solve these problems. First, an air combat environment based on aircraft and missile models is constructed, and an intermediate reward and a final reward are designed. Second, the final reward estimation is proposed to replace the original advantage estimation function in the surrogate objective of proximal policy optimization, improving the training performance of reinforcement learning. Third, sampling according to the final reward estimation is proposed to improve training efficiency. Finally, the proposed method is used in a self-play framework to train agents for maneuver decision-making. Simulations show that final reward estimation and sampling according to the final reward estimation are effective and efficient.
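A sketch of PPO's clipped surrogate objective with the advantage slot filled by a final-reward estimate, as the abstract describes; the batch shape, clipping epsilon, and the reward values are illustrative assumptions, not the paper's implementation.

```python
# PPO clipped surrogate where the advantage is replaced by a final-reward estimate.
import numpy as np

def clipped_surrogate(logp_new, logp_old, final_reward_est, eps=0.2):
    ratio = np.exp(logp_new - logp_old)                       # pi_new / pi_old
    unclipped = ratio * final_reward_est
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * final_reward_est
    return np.minimum(unclipped, clipped).mean()              # objective to maximize

rng = np.random.default_rng(0)
logp_old = rng.normal(-1.0, 0.3, size=64)                     # behaviour-policy log-probs
logp_new = logp_old + rng.normal(0.0, 0.05, size=64)          # current-policy log-probs
final_reward_est = rng.normal(0.0, 1.0, size=64)              # estimated final rewards
print(clipped_surrogate(logp_new, logp_old, final_reward_est))
```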
APA, Harvard, Vancouver, ISO, etc. citation styles
49

Jain, Shantanu, Justin Delano, Himanshu Sharma, and Predrag Radivojac. "Class Prior Estimation with Biased Positives and Unlabeled Examples". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4255–63. http://dx.doi.org/10.1609/aaai.v34i04.5848.

Full text
Abstract
Positive-unlabeled learning is often studied under the assumption that the labeled positive sample is drawn randomly from the true distribution of positives. In many application domains, however, certain regions in the support of the positive class-conditional distribution are over-represented while others are under-represented in the positive sample. Although this introduces problems in all aspects of positive-unlabeled learning, we begin to address this challenge by focusing on the estimation of class priors, quantities central to the estimation of posterior probabilities and the recovery of true classification performance. We start by making a set of assumptions to model the sampling bias. We then extend the identifiability theory of class priors from the unbiased to the biased setting. Finally, we derive an algorithm for estimating the class priors that relies on clustering to decompose the original problem into subproblems of unbiased positive-unlabeled learning. Our empirical investigation suggests feasibility of the correction strategy and overall good performance.
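A simplified, hedged sketch of the clustering-based decomposition idea: k-means splits the feature space, the classic Elkan-Noto prior estimate is applied inside each cluster, and the cluster-level priors are combined by unlabeled-sample weight. This is a stand-in under strong simplifying assumptions, not the authors' algorithm.

```python
# Cluster-wise class prior estimation for biased positive-unlabeled data (sketch).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def cluster_prior_estimate(X_pos, X_unl, n_clusters=3, seed=0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    km.fit(np.vstack([X_pos, X_unl]))
    pos_c, unl_c = km.predict(X_pos), km.predict(X_unl)

    priors, weights = [], []
    for c in range(n_clusters):
        Xp, Xu = X_pos[pos_c == c], X_unl[unl_c == c]
        if len(Xu) == 0:
            continue
        if len(Xp) < 5:
            # Simplification: treat clusters with (almost) no labeled positives
            # as (almost) purely negative.
            priors.append(0.0)
            weights.append(len(Xu))
            continue
        X = np.vstack([Xp, Xu])
        s = np.r_[np.ones(len(Xp)), np.zeros(len(Xu))]   # labeled vs unlabeled indicator
        clf = LogisticRegression(max_iter=1000).fit(X, s)
        c_hat = clf.predict_proba(Xp)[:, 1].mean()       # Elkan-Noto labeling frequency
        p_u = clf.predict_proba(Xu)[:, 1]
        # Posterior of being positive among unlabeled: g(1-c) / (c(1-g))
        post = p_u * (1 - c_hat) / np.maximum(c_hat * (1 - p_u), 1e-6)
        priors.append(np.clip(post, 0, 1).mean())
        weights.append(len(Xu))
    return np.average(priors, weights=weights)

rng = np.random.default_rng(0)
X_unl = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(3, 1, (200, 2))])
X_pos = rng.normal(3, 1, (150, 2))                       # positive sample
print(cluster_prior_estimate(X_pos, X_unl))              # roughly 0.4 by construction
```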
APA, Harvard, Vancouver, ISO, etc. citation styles
50

Yang, Feng, Yujuan Luo, and Litao Zheng. "Double-Layer Cubature Kalman Filter for Nonlinear Estimation". Sensors 19, no. 5 (February 26, 2019): 986. http://dx.doi.org/10.3390/s19050986.

Full text
Abstract
The cubature Kalman filter (CKF) performs poorly in strongly nonlinear systems, while the cubature particle filter has high computational complexity induced by stochastic sampling. To address these problems, a novel CKF named the double-layer cubature Kalman filter (DLCKF) is proposed. In the proposed DLCKF, the prior distribution is represented by a set of weighted deterministic sampling points, and each deterministic sampling point is updated by an inner CKF. Finally, the update mechanism of the outer CKF is used to obtain the state estimates. Simulation results show that the proposed algorithm has not only high estimation accuracy but also low computational complexity, compared with state-of-the-art filtering algorithms.
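The building block shared by any CKF (and hence by the double-layer variant) is the spherical-radial cubature transform, sketched below: it propagates a Gaussian mean and covariance through a nonlinear function using 2n deterministic, equally weighted points. The example nonlinearity is an assumption for illustration.

```python
# Spherical-radial cubature transform: propagate (mean, cov) through f with 2n points.
import numpy as np

def cubature_transform(f, mean, cov):
    n = mean.size
    S = np.linalg.cholesky(cov)                              # cov = S @ S.T
    unit = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # 2n unit cubature directions
    points = mean[:, None] + S @ unit                        # cubature points, shape (n, 2n)
    fx = np.array([f(points[:, i]) for i in range(2 * n)])   # propagated points
    new_mean = fx.mean(axis=0)                               # equal weights 1/(2n)
    diff = fx - new_mean
    new_cov = diff.T @ diff / (2 * n)
    return new_mean, new_cov

# Example: propagate a 2-D Gaussian through a mildly nonlinear map.
f = lambda x: np.array([x[0] + 0.1 * x[1] ** 2, np.sin(x[1])])
m, P = cubature_transform(f, np.zeros(2), np.eye(2))
print(m, P, sep="\n")
```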
APA, Harvard, Vancouver, ISO, etc. citation styles