Selection of scholarly literature on the topic "Performance estimation problems"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "Performance estimation problems".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract if the relevant parameters are provided in the metadata.

Journal articles on the topic "Performance estimation problems"

1

Qiu, Li, Zhiyuan Ren, and Jie Chen. "Fundamental performance limitations in estimation problems". Communications in Information and Systems 2, no. 4 (2002): 371–84. http://dx.doi.org/10.4310/cis.2002.v2.n4.a3.

2

Chen, Zhenmin, and Feng Miao. "Interval and Point Estimators for the Location Parameter of the Three-Parameter Lognormal Distribution". International Journal of Quality, Statistics, and Reliability 2012 (August 8, 2012): 1–6. http://dx.doi.org/10.1155/2012/897106.

Abstract:
The three-parameter lognormal distribution is the extension of the two-parameter lognormal distribution to meet the need of the biological, sociological, and other fields. Numerous research papers have been published for the parameter estimation problems for the lognormal distributions. The inclusion of the location parameter brings in some technical difficulties for the parameter estimation problems, especially for the interval estimation. This paper proposes a method for constructing exact confidence intervals and exact upper confidence limits for the location parameter of the three-parameter lognormal distribution. The point estimation problem is discussed as well. The performance of the point estimator is compared with the maximum likelihood estimator, which is widely used in practice. Simulation result shows that the proposed method is less biased in estimating the location parameter. The large sample size case is discussed in the paper.
3

WALTHER, B. A., and S. MORAND. "Comparative performance of species richness estimation methods". Parasitology 116, no. 4 (April 1998): 395–405. http://dx.doi.org/10.1017/s0031182097002230.

Abstract:
In most real-world contexts the sampling effort needed to attain an accurate estimate of total species richness is excessive. Therefore, methods to estimate total species richness from incomplete collections need to be developed and tested. Using real and computer-simulated parasite data sets, the performances of 9 species richness estimation methods were compared. For all data sets, each estimation method was used to calculate the projected species richness at increasing levels of sampling effort. The performance of each method was evaluated by calculating the bias and precision of its estimates against the known total species richness. Performance was evaluated with increasing sampling effort and across different model communities. For the real data sets, the Chao2 and first-order jackknife estimators performed best. For the simulated data sets, the first-order jackknife estimator performed best at low sampling effort but, with increasing sampling effort, the bootstrap estimator outperformed all other estimators. Estimator performance increased with increasing species richness, aggregation level of individuals among samples and overall population size. Overall, the Chao2 and the first-order jackknife estimation methods performed best and should be used to control for the confounding effects of sampling effort in studies of parasite species richness. Potential uses of and practical problems with species richness estimation methods are discussed.
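For illustration, the incidence-based estimators singled out here (Chao2 and the first-order jackknife) have simple closed forms; the following sketch is a generic Python illustration with hypothetical presence/absence data, not code from the paper:

```python
import numpy as np

def chao2(incidence):
    """Chao2 estimator from a samples-x-species presence/absence matrix (bias-corrected form)."""
    counts = incidence.sum(axis=0)          # number of samples each species occurs in
    s_obs = np.count_nonzero(counts)        # observed species richness
    q1 = np.sum(counts == 1)                # uniques: species found in exactly one sample
    q2 = np.sum(counts == 2)                # duplicates: species found in exactly two samples
    m = incidence.shape[0]                  # number of samples
    return s_obs + ((m - 1) / m) * q1 * (q1 - 1) / (2 * (q2 + 1))

def jackknife1(incidence):
    """First-order jackknife estimator."""
    counts = incidence.sum(axis=0)
    s_obs = np.count_nonzero(counts)
    q1 = np.sum(counts == 1)
    m = incidence.shape[0]
    return s_obs + q1 * (m - 1) / m

# toy data: 5 samples x 8 species (hypothetical)
rng = np.random.default_rng(0)
incidence = (rng.random((5, 8)) < 0.3).astype(int)
print(chao2(incidence), jackknife1(incidence))
```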
4

Liu, Shaojie, Yulong Zhang, Zhiqiang Gao, Yangquan Chen, Donghai Li, and Min Zhu. "Desired Dynamics-Based Generalized Inverse Solver for Estimation Problems". Processes 10, no. 11 (October 26, 2022): 2193. http://dx.doi.org/10.3390/pr10112193.

Abstract:
An important task for estimators is to solve the inverse. However, as the designs of different estimators for solving the inverse vary widely, it is difficult for engineers to be familiar with all of their properties and to design suitable estimators for different situations. Therefore, we propose a more structurally unified and functionally diverse estimator, called generalized inverse solver (GIS). GIS is inspired by the desired dynamics of control systems and understanding of the generalized inverse. It is similar to a closed-loop system, structurally consisting of nominal models and an error-correction mechanism (ECM). The nominal models can be model-based, semi-model-based, or even model-free, depending on prior knowledge of the system. In addition, we design the ECM of GIS based on desired dynamics parameterization by following a simple and meaningful rule, where states are directly used in the ECM to accelerate the convergence of GIS. A case study considering a rotary flexible link shows that GIS can greatly improve the noise suppression performance with lower loss of dynamic estimation performance, when compared with other common observers at the same design bandwidth. Moreover, the dynamic estimation performances of the three GIS approaches (i.e., model-based, semi-model-based, and model-free) are almost the same under the same parameters. These results demonstrate the strong robustness of GIS (although by means of the uniform design method). Finally, some control cases are studied, including a comparison with DOB and ESO, in order to illustrate their approximate equivalence to GIS.
5

Panić, Branislav, Jernej Klemenc, and Marko Nagode. "Improved Initialization of the EM Algorithm for Mixture Model Parameter Estimation". Mathematics 8, no. 3 (March 7, 2020): 373. http://dx.doi.org/10.3390/math8030373.

Abstract:
A commonly used tool for estimating the parameters of a mixture model is the Expectation–Maximization (EM) algorithm, which is an iterative procedure that can serve as a maximum-likelihood estimator. The EM algorithm has well-documented drawbacks, such as the need for good initial values and the possibility of being trapped in local optima. Nevertheless, because of its appealing properties, EM plays an important role in estimating the parameters of mixture models. To overcome these initialization problems with EM, in this paper, we propose the Rough-Enhanced-Bayes mixture estimation (REBMIX) algorithm as a more effective initialization algorithm. Three different strategies are derived for dealing with the unknown number of components in the mixture model. These strategies are thoroughly tested on artificial datasets, density–estimation datasets and image–segmentation problems and compared with state-of-the-art initialization methods for the EM. Our proposal shows promising results in terms of clustering and density-estimation performance as well as in terms of computational efficiency. All the improvements are implemented in the rebmix R package.
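As a rough illustration of the initialization sensitivity discussed above, the sketch below uses scikit-learn's GaussianMixture as a stand-in (the REBMIX algorithm itself is provided in the R package rebmix and is not reproduced here) and compares two standard initializations on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=1000, centers=4, cluster_std=1.5, random_state=0)

for init in ("random", "kmeans"):
    # n_init=1 exposes the dependence of EM on its starting point
    gmm = GaussianMixture(n_components=4, init_params=init, n_init=1,
                          max_iter=200, random_state=0)
    gmm.fit(X)
    print(f"init={init:7s}  mean log-likelihood={gmm.score(X):.4f}  EM iterations={gmm.n_iter_}")
```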
6

Guarino, Cassandra M., Mark D. Reckase, and Jeffrey M. Wooldridge. "Can Value-Added Measures of Teacher Performance Be Trusted?" Education Finance and Policy 10, no. 1 (January 2015): 117–56. http://dx.doi.org/10.1162/edfp_a_00153.

Abstract:
We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures true teacher effects in all scenarios, and the potential for misclassifying teachers as high- or low-performing can be substantial. A dynamic ordinary least squares estimator is more robust across scenarios than other estimators. Misspecifying dynamic relationships can exacerbate estimation problems.
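A dynamic OLS value-added specification of the general kind evaluated in this study regresses current achievement on lagged achievement plus teacher indicators; the following sketch uses simulated data and hypothetical variable names, not the authors' simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)
n_students, n_teachers = 2000, 50
teacher = rng.integers(0, n_teachers, n_students)
true_effect = rng.normal(0, 0.2, n_teachers)

prior_score = rng.normal(0, 1, n_students)
score = 0.6 * prior_score + true_effect[teacher] + rng.normal(0, 0.5, n_students)

# design matrix: lagged score plus teacher dummies (no intercept, so the dummies absorb it)
dummies = np.eye(n_teachers)[teacher]
X = np.column_stack([prior_score, dummies])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

lag_coef, teacher_effects = beta[0], beta[1:]
print("lagged-score coefficient:", round(lag_coef, 3))
print("correlation with true teacher effects:",
      round(np.corrcoef(teacher_effects, true_effect)[0, 1], 3))
```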
7

ODEN, J. TINSLEY, SERGE PRUDHOMME, TIM WESTERMANN, JON BASS, and MARK E. BOTKIN. "ERROR ESTIMATION OF EIGENFREQUENCIES FOR ELASTICITY AND SHELL PROBLEMS". Mathematical Models and Methods in Applied Sciences 13, no. 03 (March 2003): 323–44. http://dx.doi.org/10.1142/s0218202503002520.

Abstract:
In this paper, a method for deriving computable estimates of the approximation error in eigenvalues or eigenfrequencies of three-dimensional linear elasticity or shell problems is presented. The analysis for the error estimator follows the general approach of goal-oriented error estimation for which the error is estimated in so-called quantities of interest, here the eigenfrequencies, rather than global norms. A general theory is developed and is then applied to the linear elasticity equations. For the shell analysis, it is assumed that the shell model is not completely known and additional errors are introduced due to modeling approximations. The approach is then based on recovering three-dimensional approximations from the shell eigensolution and employing the error estimator developed for linear elasticity. The performance of the error estimator is demonstrated on several test problems.
8

Mao, Zhi Jie, Zhi Jun Yan, Hong Wei Li, and Jin Meng. "Exact Cramér–Rao Lower Bound for Interferometric Phase Estimator". Advanced Materials Research 1004-1005 (August 2014): 1419–26. http://dx.doi.org/10.4028/www.scientific.net/amr.1004-1005.1419.

Abstract:
We are concerned with the problem of interferometric phase estimation using multiple baselines. Simple closed-form expressions for computing the Cramér–Rao lower bound (CRLB) for general phase estimation problems are derived. Performance analysis of the interferometric phase estimation is carried out based on Monte Carlo simulations and CRLB calculation. We show that, by utilizing the Cramér–Rao lower bound, we are able to determine the combination of baselines that achieves the most accurate estimation performance.
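As a toy single-baseline analogue of this setup (a constant-phase complex signal in circular Gaussian noise, a model assumed here purely for illustration and not taken from the paper), the CRLB and a Monte Carlo check of the sample-phase estimator can be written compactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, A, sigma2, phi = 32, 1.0, 0.5, 0.7      # looks, amplitude, noise variance, true phase
trials = 20000

# circular complex Gaussian noise with total variance sigma2 per sample
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((trials, N))
                               + 1j * rng.standard_normal((trials, N)))
z = A * np.exp(1j * phi) + noise

phi_hat = np.angle(z.sum(axis=1))           # ML phase estimator for known amplitude
crlb = sigma2 / (2 * N * A**2)              # Fisher information I(phi) = 2 N A^2 / sigma2 (assumed model)

print("empirical variance of phi_hat:", np.var(phi_hat))
print("CRLB                        :", crlb)
```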
9

Gao, Jing, Kehan Bai, and Wenhao Gui. "Statistical Inference for the Inverted Scale Family under General Progressive Type-II Censoring". Symmetry 12, no. 5 (May 5, 2020): 731. http://dx.doi.org/10.3390/sym12050731.

Abstract:
Two estimation problems are studied based on general progressively censored samples, and distributions from the inverted scale family (ISF) are considered as prospective life distributions. One is the exact interval estimation of the unknown parameter θ, which is achieved by constructing a pivotal quantity. Through Monte Carlo simulations, the average 90% and 95% confidence intervals are obtained, and the validity of this interval estimation is illustrated with a numerical example. The other is the estimation of R = P(Y < X) in the case of the ISF. The maximum likelihood estimator (MLE) as well as an approximate maximum likelihood estimator (AMLE) is obtained, together with the corresponding symmetric asymptotic confidence intervals for R. With bootstrap methods, we also propose two asymmetric confidence intervals for R, which perform well for small samples. Furthermore, assuming the scale parameters follow independent gamma priors, the Bayesian estimator as well as the HPD credible interval of R is acquired. Finally, we evaluate the effectiveness of the proposed estimations through Monte Carlo simulations and provide an illustrative example with two real datasets.
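A nonparametric bootstrap interval for R = P(Y < X), loosely analogous to the bootstrap intervals proposed in the paper (which are parametric and built on progressively censored ISF samples), might be sketched as follows with hypothetical complete samples:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.weibull(2.0, size=60) * 1.3    # hypothetical "strength" sample
y = rng.weibull(2.0, size=60)          # hypothetical "stress" sample

def r_hat(x, y):
    # empirical estimate of P(Y < X) over all pairs
    return np.mean(y[None, :] < x[:, None])

boot = []
for _ in range(2000):
    xb = rng.choice(x, size=x.size, replace=True)
    yb = rng.choice(y, size=y.size, replace=True)
    boot.append(r_hat(xb, yb))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"R_hat = {r_hat(x, y):.3f}, 95% percentile bootstrap CI = ({lo:.3f}, {hi:.3f})")
```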
10

Ayansola, Olufemi, and Adebowale Adejumo. "On the Performance of Some Estimation Methods in Models with Heteroscedasticity and Autocorrelated Disturbances (A Monte-Carlo Approach)". Mathematical Modelling and Applications 9, no. 1 (April 2, 2024): 23–31. http://dx.doi.org/10.11648/j.mma.20240901.13.

Abstract:
The proliferation of panel data studies has been greatly motivated by the availability of data and by the greater capacity of panel data to model the complexity of human behaviour compared with a single cross-section or time series, which has led to challenging estimation methodologies. In practice, panel data are bound to exhibit autocorrelation, heteroscedasticity, or both. Because the presence of heteroscedasticity and autocorrelated errors in panel data models biases the standard errors and leads to less efficient results, this study searches for an estimator that can handle these twin problems when they coexist in panel data; robust inference in their presence therefore needs to be addressed simultaneously. A Monte Carlo simulation was designed to investigate the finite-sample properties of five estimation methods: the Between Estimator (BE), Feasible Generalized Least Squares (FGLS), the Maximum Estimator (ME), the Modified Maximum Estimator (MME), and a new Proposed Estimator (PE), applied to simulated data affected by heteroscedasticity and autocorrelated errors. Based on the root mean square error and absolute bias criteria, the Proposed Estimator is asymptotically more efficient and consistent than the other estimators in the study under these problems, across all combinations of autocorrelated remainder errors and fixed heteroscedastic individual effects. For this reason, the PE performs best among the estimators considered.

Dissertations on the topic "Performance estimation problems"

1

Vallone, Michael. "Parameter Estimation of Fundamental Technical Aircraft Information Applied to Aircraft Performance". DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/382.

Abstract:
Inverse problems can be applied to aircraft in many areas. One of the disciplines within the aerospace industry with the most openly published data is in the area of aircraft performance. Many aircraft manufacturers publish performance claims, flight manuals and Standard Aircraft Characteristics (SAC) charts without any mention of the more fundamental technical information of the drag and engine data. With accurate tools, generalized aircraft models and a few curve-fitting techniques, it is possible to evaluate vehicle performance and estimate the drag, thrust and fuel consumption (TSFC) with some accuracy. This thesis is intended to research the use of aircraft performance information to deduce these aircraft-specific drag and engine models. The proposed method incorporates models for each performance metric, modeling options for drag, thrust and TSFC, and an inverse method to match the predicted performance to the actual performance. Each of the aircraft models is parametric in nature, allowing for individual parameters to be varied to determine the optimal result. The method discussed in this work shows both the benefits and pitfalls of using performance data to deduce engine and drag characteristics. The results of this method, applied to the McDonnell Douglas DC-10 and Northrop F-5, highlight many of these benefits and pitfalls, and show varied levels of success. A groundwork has been laid to show that this concept is viable, and extension of this work to additional aircraft is possible with recommendations on how to improve this technique.
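The curve-fitting step described here can be illustrated with a parabolic drag polar, CD = CD0 + K*CL^2, fitted to performance-derived points; the data values and parameter names below are hypothetical, not taken from the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

def drag_polar(cl, cd0, k):
    """Parabolic drag polar: CD = CD0 + K * CL^2."""
    return cd0 + k * cl**2

# hypothetical (CL, CD) pairs deduced from published performance data
cl = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
cd = np.array([0.021, 0.024, 0.028, 0.034, 0.041, 0.050])

(cd0, k), cov = curve_fit(drag_polar, cl, cd, p0=[0.02, 0.05])
print(f"CD0 = {cd0:.4f}, K = {k:.4f}")
print("one-sigma parameter uncertainties:", np.sqrt(np.diag(cov)))
```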
2

Barré, Mathieu. "Worst-case analysis of efficient first-order methods". Electronic Thesis or Diss., Université Paris sciences et lettres, 2021. http://www.theses.fr/2021UPSLE064.

Abstract:
Many modern applications rely on solving optimization problems (e.g., computational biology, mechanics, finance), establishing optimization methods as crucial tools in many scientific fields. Providing guarantees on the (hopefully good) behaviors of these methods is therefore of significant interest. A standard way of analyzing optimization algorithms consists in worst-case reasoning. That is, providing guarantees on the behavior of an algorithm (e.g. its convergence speed), that are independent of the function on which the algorithm is applied and true for every function in a particular class. This thesis aims at providing worst-case analyses of a few efficient first-order optimization methods. We start by the study of Anderson acceleration methods, for which we provide new explicit worst-case bounds guaranteeing precisely when acceleration occurs. We obtained these guarantees by providing upper bounds on a variation of the classical Chebyshev optimization problem on polynomials, that we believe of independent interest. Then, we extend the Performance Estimation Problem (PEP) framework, that was originally designed for principled analyses of fixed-step algorithms, to study first-order methods with adaptive parameters. This is illustrated in particular through the worst-case analyses of the canonical gradient method with Polyak step sizes that use gradient norms and function values information, and of an accelerated version of it. The approach is also presented on other standard adaptive algorithms. Finally, the last contribution of this thesis is to further develop the PEP methodology for analyzing first-order methods relying on inexact proximal computations. Using this framework, we produce algorithms with optimized worst-case guarantees and provide (numerical and analytical) worst-case bounds for some standard algorithms in the literature
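One of the adaptive methods analysed with the PEP framework in this thesis, gradient descent with Polyak step sizes, is short enough to sketch directly; the toy quadratic objective below is only an illustration and assumes the optimal value f* is known, as the step size requires:

```python
import numpy as np

def polyak_gradient_descent(f, grad, x0, f_star, iters=50):
    """Gradient descent with the Polyak step size (f(x) - f*) / ||grad f(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        gap = f(x) - f_star
        if gap <= 0 or np.dot(g, g) == 0:
            break
        x = x - (gap / np.dot(g, g)) * g
    return x

# toy strongly convex quadratic: f(x) = 0.5 x'Ax, minimum value 0 at the origin
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x_final = polyak_gradient_descent(f, grad, x0=[5.0, 5.0], f_star=0.0)
print("final iterate:", x_final, "f =", f(x_final))
```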
3

Cioaca, Alexandru George. "A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation". Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51795.

Abstract:
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four dimensional variational (4D-Var) - data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
4

Bréhard, Thomas Le Cadre Jean-Pierre. "Estimation séquentielle et analyse de performances pour un problème de filtrage non linéaire partiellement observé application à la trajectographie par mesure d'angles". [S.l.]: [s.n.], 2005. ftp://ftp.irisa.fr/techreports/theses/2005/brehard.pdf.

5

Gaudette, Darrell. "H-Infinity Performance Limitations for Problems with Sensor Time Delays". Thesis, 2008. http://hdl.handle.net/10012/3512.

Abstract:
Motivated by ongoing research into automating radiotherapy, this thesis is concerned with linear feedback control and estimation problems where only a delayed output signal is measurable. Various discrete-time performance limitations are derived using tools from model-matching theory as well as the early $H_\infty$ literature. It is shown that there exist performance limitations for both one-degree-of-freedom control and estimation problems, but the nature of the limitations differs depending on whether the plant is stable or unstable. Some continuous-time performance limitations are also found, with more complete results in the case where the plant is unstable. Extensions of the various performance limitation to two-degree-of-freedom control are also studied.

Books on the topic "Performance estimation problems"

1

Ontario. Esquisse de cours 12e année: Sciences de l'activité physique pse4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.

2

Ontario. Esquisse de cours 12e année: Technologie de l'information en affaires btx4e cours préemploi. Vanier, Ont: CFORP, 2002.

3

Ontario. Esquisse de cours 12e année: Études informatiques ics4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.

4

Ontario. Esquisse de cours 12e année: Mathématiques de la technologie au collège mct4c cours précollégial. Vanier, Ont: CFORP, 2002.

5

Ontario. Esquisse de cours 12e année: Sciences snc4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.

6

Ontario. Esquisse de cours 12e année: English eae4e cours préemploi. Vanier, Ont: CFORP, 2002.

7

Ontario. Esquisse de cours 12e année: Le Canada et le monde: une analyse géographique cgw4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.

8

Ontario. Esquisse de cours 12e année: Environnement et gestion des ressources cgr4e cours préemploi. Vanier, Ont: CFORP, 2002.

9

Ontario. Esquisse de cours 12e année: Histoire de l'Occident et du monde chy4c cours précollégial. Vanier, Ont: CFORP, 2002.

10

Ontario. Esquisse de cours 12e année: Géographie mondiale: le milieu humain cgu4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.


Book chapters on the topic "Performance estimation problems"

1

Husmeier, Dirk. "Demonstration of the Model Performance on the Benchmark Problems". In Neural Networks for Conditional Probability Estimation, 69–85. London: Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0847-4_5.

2

Pillonetto, Gianluigi, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao, and Lennart Ljung. "Bias". In Regularized System Identification, 1–15. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95860-2_1.

Abstract:
Adopting a quadratic loss, the performance of an estimator can be measured in terms of its mean squared error which decomposes into a variance and a bias component. This introductory chapter contains two linear regression examples which describe the importance of designing estimators able to well balance these two components. The first example will deal with estimation of the means of independent Gaussians. We will review the classical least squares approach which, at first sight, could appear the most appropriate solution to the problem. Remarkably, we will instead see that this unbiased approach can be dominated by a particular biased estimator, the so-called James–Stein estimator. Within this book, this represents the first example of regularized least squares, an estimator which will play a key role in subsequent chapters. The second example will deal with a classical system identification problem: impulse response estimation. A simple numerical experiment will show how the variance of least squares can be too large, hence leading to unacceptable system reconstructions. The use of an approach, known as ridge regression, will give first simple intuitions on the usefulness of regularization in the system identification scenario.
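The James-Stein estimator used in the chapter's first example has a one-line form; a minimal numerical sketch of the bias-variance trade-off it illustrates (with hypothetical dimensions and noise level) is:

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma2, trials = 10, 1.0, 5000
theta = rng.normal(0, 1, p)                 # true means, fixed across trials

mse_ls, mse_js = 0.0, 0.0
for _ in range(trials):
    y = theta + rng.normal(0, np.sqrt(sigma2), p)             # one noisy observation per mean
    shrink = max(0.0, 1 - (p - 2) * sigma2 / np.dot(y, y))    # positive-part James-Stein shrinkage
    theta_js = shrink * y
    mse_ls += np.sum((y - theta) ** 2)
    mse_js += np.sum((theta_js - theta) ** 2)

print("least-squares risk:", mse_ls / trials)   # roughly p * sigma2
print("James-Stein risk  :", mse_js / trials)   # smaller for p >= 3
```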
3

Gnatowski, Andrzej, and Teodor Niżyński. "On Estimating LON-Based Measures in Cyclic Assignment Problem in Non-permutational Flow Shop Scheduling Problem". In Modelling and Performance Analysis of Cyclic Systems, 63–84. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27652-2_4.

4

Klingner, Marvin, and Tim Fingscheidt. "Improved DNN Robustness by Multi-task Training with an Auxiliary Self-Supervised Task". In Deep Neural Networks and Data for Automated Driving, 149–70. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01233-4_5.

Abstract:
While deep neural networks for environment perception tasks in autonomous driving systems often achieve impressive performance on clean and well-prepared images, their robustness under real conditions, i.e., on images being perturbed with noise patterns or adversarial attacks, is often subject to a significantly decreased performance. In this chapter, we address this problem for the task of semantic segmentation by proposing multi-task training with the additional task of depth estimation with the goal to improve the DNN robustness. This method has a very wide potential applicability as the additional depth estimation task can be trained in a self-supervised fashion, relying only on unlabeled image sequences during training. The final trained segmentation DNN is, however, still applicable on a single-image basis during inference without additional computational overhead compared to the single-task model. Additionally, our evaluation introduces a measure which allows for a meaningful comparison between different noise and attack types. We show the effectiveness of our approach on the Cityscapes and KITTI datasets, where our method improves the DNN performance w.r.t. the single-task baseline in terms of robustness against multiple noise and adversarial attack types, which is supplemented by an improved absolute prediction performance of the resulting DNN.
5

Cronin, Neil, Ari Lehtiö, and Jussi Talaskivi. "Research for JYU: An AI-Driven, Fully Remote Mobile Application for Functional Exercise Testing". In Communications in Computer and Information Science, 279–87. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59091-7_18.

Abstract:
As people live longer, the incidence and severity of health problems increases, placing strain on healthcare systems. There is an urgent need for resource-wise approaches to healthcare. We present a system built using open-source tools that allows health and functional capacity data to be collected remotely. The app records performance on functional tests using the phone's built-in camera and provides users with immediate feedback. Pose estimation is used to detect the user in the video. The x, y coordinates of key body landmarks are then used to compute further metrics such as joint angles and repetition durations. In a proof-of-concept study, we collected data from 13 patients who had recently undergone knee ligament or knee replacement surgery. Patients performed the sit-to-stand test twice, with an average difference in test duration of 1.12 s (range: 1.16–3.2 s). Y-coordinate locations allowed us to automatically identify repetition start and end times, while x, y coordinates were used to compute joint angles, a common rehabilitation outcome variable. Mean difference in repetition duration was 0.1 s (range: −0.4–0.4 s) between trials 1 and 2. Bland-Altman plots confirmed general test-retest consistency within participants. We present a mobile app that enables functional tests to be performed remotely and without supervision. We also demonstrate real-world feasibility, including the ability to automate the entire process, from testing to analysis and the provision of real-time feedback. This approach is scalable, and could form part of national health strategies, allowing healthcare providers to minimise the need for in-person appointments whilst yielding cost savings.
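The joint-angle computation from pose-estimation landmarks mentioned here reduces to a standard vector-angle formula; a minimal sketch with hypothetical hip, knee, and ankle pixel coordinates (not data from the study):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at landmark b formed by points a-b-c, from x, y coordinates."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# hypothetical pixel coordinates returned by a pose-estimation model
hip, knee, ankle = (320, 240), (330, 360), (325, 470)
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} degrees")
```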
6

Irnanda, Cut Riska, Isfenti Sadalia, and Nazaruddin. "Contract Analysis for Design and Build Lump Sum Price". In Proceedings of the 19th International Symposium on Management (INSYMA 2022), 1162–69. Dordrecht: Atlantis Press International BV, 2022. http://dx.doi.org/10.2991/978-94-6463-008-4_143.

Abstract:
A toll road is one of the National Strategic Projects, with complex problems and a need for high-speed performance during construction, so the construction contract usually applied to these projects is the design and build contract. Best practice for design and build combines it with a lump sum arrangement for the cost estimation aspect, so the contract type used is the Design and Build Lump Sum Price contract. This study reviews the implementation of the Design and Build Lump Sum Price contract based on the terms and conditions in Indonesian legislation and FIDIC on the Trans Sumatra Toll Project Kuala Tanjung – Inderapura Section 2, from the contractor's point of view, in order to mitigate the negative risks that arise, especially in the financial aspect. This is descriptive research with a qualitative data analysis method, comparing the implementation of the contract against the applicable law. The first step was to collect secondary data in a contractual resume, and then to carry out a study based on existing legal standards. Based on the results, the Design and Build Lump Sum Price contract is a construction work contract for completing a job within a certain period of time according to the basic design scope used as the basis for the quotation, with fixed costs as long as no intervention results in changes to the contract documents that may give rise to additional payments. In the construction of the Kuala Tanjung – Inderapura Toll Road Section 2, there was an additional scope of the contract which resulted in additional work due to instructions from the owner and other parties, as well as an intervention to change specifications due to differences in the basic design at the tender, which poses negative risks to the financial aspect for the contractor.
7

Montesinos López, Osval Antonio, Abelardo Montesinos López, and Jose Crossa. "Overfitting, Model Tuning, and Evaluation of Prediction Performance". In Multivariate Statistical Machine Learning Methods for Genomic Prediction, 109–39. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89010-0_4.

Abstract:
The overfitting phenomenon happens when a statistical machine learning model learns very well about the noise as well as the signal that is present in the training data. On the other hand, an underfitted phenomenon occurs when only a few predictors are included in the statistical machine learning model that represents the complete structure of the data pattern poorly. This problem also arises when the training data set is too small and thus an underfitted model does a poor job of fitting the training data and unsatisfactorily predicts new data points. This chapter describes the importance of the trade-off between prediction accuracy and model interpretability, as well as the difference between explanatory and predictive modeling: Explanatory modeling minimizes bias, whereas predictive modeling seeks to minimize the combination of bias and estimation variance. We assess the importance and different methods of cross-validation as well as the importance and strategies of tuning that are key to the successful use of some statistical machine learning methods. We explain the most important metrics for evaluating the prediction performance for continuous, binary, categorical, and count response variables.
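The cross-validation workflow discussed in this chapter follows a standard pattern; a minimal scikit-learn sketch (ridge regression on synthetic data, chosen purely for illustration) is:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=30, noise=10.0, random_state=0)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for alpha in (0.01, 1.0, 100.0):                       # tuning-parameter grid
    scores = cross_val_score(Ridge(alpha=alpha), X, y,
                             cv=cv, scoring="neg_mean_squared_error")
    print(f"alpha={alpha:7.2f}  cross-validated MSE={-scores.mean():10.2f}")
```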
8

ZANCHETTA, CARLO, Martina Giorio, Maria Grazia Donatiello, Federico Rossi, and Rossana Paparella. "Solar Potential and Energy Assessment Data in U-BEM Models: Interoperability Analysis Between Performance Simulation Tools and OpenBIM/GIS Platforms". In CONVR 2023 - Proceedings of the 23rd International Conference on Construction Applications of Virtual Reality, 1021–32. Florence: Firenze University Press, 2023. http://dx.doi.org/10.36253/979-12-215-0289-3.102.

Abstract:
To evaluate the energy and solar potential of the building stock and to address feasibility studies of building retrofit interventions, information standards are required to ensure proper data flow from building and urban models to simulation environments. Energy performance data are gathered from different information containers, and therefore the results of simulations need to be shared in BIM/GIS environments to better address energy policies and decision-making processes. Solar potential and energy retrofit estimations developed by means of urban models (U-BEM) are too rough to support a decision-making process, even at a feasibility stage. Conversely, strategic decisions are defined with reference to large building stocks that require a U-BEM approach. To increase the reliability of this kind of simulation, the study proposes to integrate U-BEMs with BIM-based data that are aggregated and published at the urban scale as average performance indicators of built systems. The interoperability problem is analyzed both for simulation tools that need to manage this kind of data and for openBIM/GIS platforms that need to share performance indicators and simulation results.
9

Xing, Rong, Junjie Zhang, and Honggang Lei. "Full-Permeability Analysis of Normal Amplitude and Variable Amplitude Fracture of M24 High Strength Bolts". In Advances in Frontier Research on Engineering Structures, 131–39. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8657-4_12.

Abstract:
Fatigue problems in high-strength bolts have become increasingly important in modern times, and it is essential to study bolt fatigue mechanisms through fracture analysis of bolts that have failed by fatigue. Through detailed analysis of constant-amplitude and variable-amplitude stress on M24 high-strength large hexagonal head bolts, it is concluded that there are three common failure modes among all bolt fatigue failures. Fatigue failure characteristics and the state of fracture development in the source region, transient rupture zone, and extension zone were obtained, which provides an important basis for the study of the fatigue performance and life estimation of high-strength bolted connections and also expands the application range of bolted joints in prefabricated steel structures.

Conference papers on the topic "Performance estimation problems"

1

Grönstedt, Tomas U. J. "Identifiability in Multi-Point Gas Turbine Parameter Estimation Problems". In ASME Turbo Expo 2002: Power for Land, Sea, and Air. ASMEDC, 2002. http://dx.doi.org/10.1115/gt2002-30020.

Abstract:
A method for estimating performance parameters in jet engines with limited instrumentation has been developed. The technique is applied on a nonlinear steady state performance code by making simultaneous use of a number of off-design operating points. A hybridized optimization tool using a genetic algorithm to obtain an initial estimate of the performance parameters, and a gradient method to refine this estimate, has been implemented. The method is tested on a set of simulated data that would be available during performance testing of a PW100 engine. The simulated data is generated assuming realistic noise levels. The technique has been successfully applied to the estimation of ten performance parameters using six simulated measurement signals. The determination of identifiability (the property governing whether the performance parameters of the model can be uniquely determined from the measured data) and the selection of parameters in the performance model has been based on the analysis of the system Hessian, i.e. the multidimensional second derivative of the goal function. It is shown, theoretically as well as in practice, how the process of selecting model parameters can be approached in a systematic manner when nonlinear multi-point problems are studied.
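The hybrid strategy described (a global evolutionary search that seeds a gradient-based refinement) can be sketched generically with SciPy; differential evolution stands in for the genetic algorithm, and the objective below is a placeholder, not the engine performance code:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def mismatch(params):
    """Stand-in goal function: squared mismatch between 'model' and 'measured' quantities."""
    target = np.array([1.02, 0.97, 1.01, 0.99])        # hypothetical health-parameter values
    return np.sum((np.asarray(params) - target) ** 2)

bounds = [(0.9, 1.1)] * 4

# stage 1: evolutionary global search for a rough estimate of the performance parameters
rough = differential_evolution(mismatch, bounds, seed=0, maxiter=50)

# stage 2: gradient-based refinement starting from the evolutionary result
refined = minimize(mismatch, rough.x, method="L-BFGS-B", bounds=bounds)

print("rough estimate  :", np.round(rough.x, 4))
print("refined estimate:", np.round(refined.x, 4))
```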
2

Rubbens, Anne, Nizar Bousselmi, Sébastien Colla, and Julien M. Hendrickx. "Interpolation Constraints for Computing Worst-Case Bounds in Performance Estimation Problems". In 2023 62nd IEEE Conference on Decision and Control (CDC). IEEE, 2023. http://dx.doi.org/10.1109/cdc49753.2023.10384170.

3

Colla, Sebastien, and Julien M. Hendrickx. "Automated Performance Estimation for Decentralized Optimization via Network Size Independent Problems". In 2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022. http://dx.doi.org/10.1109/cdc51059.2022.9993346.

4

Hakkaki-Fard, Ali, Hosein Molavi, and Ramin K. Rahmani. "A Novel Methodology for Combined Parameter and Function Estimation Problems". In ASME 2009 Heat Transfer Summer Conference collocated with the InterPACK09 and 3rd Energy Sustainability Conferences. ASMEDC, 2009. http://dx.doi.org/10.1115/ht2009-88400.

Abstract:
This article presents a novel methodology, which is highly efficient and simple to implement, for simultaneous retrieval of a complete set of thermal coefficients in combined parameter and function estimation problems. Moreover, the effect of correlated parameters on convergence performance is examined. The present methodology is a combination of two different methods: The Conjugate Gradient Method with Adjoint Problem (CGMAP) and Box-Kanemasu method (BKM). The methodology uses the benefit of CGMAP in handling function estimation problems and BKM for parameter estimation problems. One of the unique features about the present method is that the correlation among the separate unknowns does not behave as a limiting factor to the convergence of the problem. Numerical experiments using measurement errors are performed to verify the proposed method in solving the combined parameter and function estimation problems. The obtained results show that the combined procedure can efficiently and reliably estimate the values of the thermal coefficients.
5

Van der Linde, S. G., M. Dukalski, M. Möller, N. M. P. Neumann, F. Phillipson, and D. Rovetta. "HPC01 - Hybrid Classical-Quantum Computing in Geophysical Inverse Problems: The Case of Quantum Annealing for Residual Statics Estimation". In Sixth EAGE High Performance Computing Workshop. European Association of Geoscientists & Engineers, 2022. http://dx.doi.org/10.3997/2214-4609.2022615002.

6

Annaswamy, A. M., C. Thanomsat, N. R. Mehta, and A. P. Loh. "A New Approach to Estimation of Nonlinear Parametrization in Dynamic Systems". In ASME 1997 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1997. http://dx.doi.org/10.1115/imece1997-0398.

Abstract:
Nonlinear parametrizations occur in dynamic models of several complex engineering problems. The theory of adaptive estimation and control has been applicable, by and large, to problems where parameters appear linearly. We have recently developed an adaptive controller that is capable of estimating parameters that appear nonlinearly in dynamic systems in a stable manner. In this paper, we present this algorithm and its applicability to the problem of temperature regulation in chemical reactors. It is shown that the proposed controller leads to a significantly better performance than those based on linear parametrizations.
7

Martins, Marcella S. R., Mohamed El Yafrani, Roberto Santana, Myriam Delgado, Ricardo Luders, and Belaid Ahiod. "On the Performance of Multi-Objective Estimation of Distribution Algorithms for Combinatorial Problems". In 2018 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2018. http://dx.doi.org/10.1109/cec.2018.8477970.

8

Vasylyshyn, Volodymyr, Oleksandr Barsukov, Maksym Kasianenko, Pavlo Open'ko, and Oleksandr Kuznietsov. "Improving the Performance of DOA Estimation with Spatial Smoothing via Noise Reduction". In 2020 IEEE International Conference on Problems of Infocommunications. Science and Technology (PIC S&T). IEEE, 2020. http://dx.doi.org/10.1109/picst51311.2020.9467996.

9

Olshevska, Tatyana, Volodymir Baliar, and Roman Vasilchenko. "Estimation of DVB-S2 ACM Mode Performance for Broadband Satellite Infocommunication Applications". In 2018 International Scientific-Practical Conference Problems of Infocommunications. Science and Technology (PIC S&T). IEEE, 2018. http://dx.doi.org/10.1109/infocommst.2018.8632159.

10

Pattel, Bibin, Hoseinali Borhan, and Sohel Anwar. "An Evaluation of the Moving Horizon Estimation Algorithm for Online Estimation of Battery State of Charge and State of Health". In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-37140.

Abstract:
Moving Horizon Estimation (MHE) has emerged as a powerful technique for tackling the estimation problems of the state of dynamic systems in the presence of constraints, nonlinearities and disturbances. In this paper, the Moving Horizon Estimation approach is applied in estimating the State of Charge (SoC) and State of Health (SoH) of a battery and the results are compared against those for the traditional estimation method of Extended Kalman Filter (EKF). The comparison of the results show that MHE provides improvement in performance over EKF in terms of different state initial conditions, convergence time, and process and sensor noise variations.
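A moving horizon estimator of the kind compared here repeatedly solves a small least-squares problem over a sliding data window; the sketch below uses a heavily simplified coulomb-counting model with an assumed linear open-circuit-voltage curve and no arrival cost, so it is only an illustration of the idea, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# hypothetical battery model: coulomb counting + linear open-circuit voltage (OCV)
dt, Q, R0 = 1.0, 3600.0, 0.05      # time step (s), capacity (A*s), internal resistance (ohm)
a, b = 0.8, 3.2                    # assumed OCV(soc) = a * soc + b

rng = np.random.default_rng(0)
steps = 200
current = 1.0 + 0.5 * np.sin(np.arange(steps) / 20)      # discharge current (A)
soc_true = np.empty(steps)
soc_true[0] = 0.9
for k in range(steps - 1):
    soc_true[k + 1] = soc_true[k] - current[k] * dt / Q
voltage = a * soc_true + b - R0 * current + rng.normal(0, 0.005, steps)

def propagate(soc0, i_seq):
    """Coulomb-counting propagation of SoC over a window of currents."""
    soc = [soc0]
    for i in i_seq[:-1]:
        soc.append(soc[-1] - i * dt / Q)
    return np.array(soc)

H = 20                             # horizon length (samples)
soc_est = []
for k in range(H, steps):
    i_win, v_win = current[k - H:k], voltage[k - H:k]
    def window_cost(soc0):
        v_pred = a * propagate(soc0, i_win) + b - R0 * i_win
        return np.sum((v_pred - v_win) ** 2)
    soc0_opt = minimize_scalar(window_cost, bounds=(0.0, 1.0), method="bounded").x
    soc_est.append(propagate(soc0_opt, i_win)[-1])       # SoC estimate at the window end

print("final SoC, true vs estimated:", round(soc_true[-1], 4), round(float(soc_est[-1]), 4))
```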

Organization reports on the topic "Performance estimation problems"

1

Muñoz, Juan Sebastián. Re-estimating the Gender Gap in Colombian Academic Performance. Inter-American Development Bank, January 2014. http://dx.doi.org/10.18235/0011529.

Abstract:
This paper presents evidence of the relationship between the disparity in the academic performance of boys and girls in Colombia and the country's excessively high school dropout rates. By using the OLS and trimming for bounds techniques, and based on data derived from the PISA 2009 database, the presented findings show that the vast majority of this gender-related performance gap is explained by selection problems in the group of low-skilled and poor male students. In particular, the high dropout rate overestimates male performance means, creating a selection bias in the regular OLS estimation. In order to overcome this issue, unobservable male students are simulated and bounding procedures used. The results of this analysis suggest that low-income men are vulnerable to dropping out of school in the country, which leads to overestimating the actual performance levels of Colombian men.
2

Meidani, Hadi, and Amir Kazemi. Data-Driven Computational Fluid Dynamics Model for Predicting Drag Forces on Truck Platoons. Illinois Center for Transportation, November 2021. http://dx.doi.org/10.36501/0197-9191/21-036.

Abstract:
Fuel-consumption reduction in the truck industry is significantly beneficial to both energy economy and the environment. Although estimation of drag forces is required to quantify fuel consumption of trucks, computational fluid dynamics (CFD) to meet this need is expensive. Data-driven surrogate models are developed to mitigate this concern and are promising for capturing the dynamics of large systems such as truck platoons. In this work, we aim to develop a surrogate-based fluid dynamics model that can be used to optimize the configuration of trucks in a robust way, considering various uncertainties such as random truck geometries, variable truck speed, random wind direction, and wind magnitude. Once trained, such a surrogate-based model can be readily employed for platoon-routing problems or the study of pavement performance.
3

Nobile, F., Q. Ayoul-Guilmard, S. Ganesh, M. Nuñez, A. Kodakkal, C. Soriano, and R. Rossi. D6.5 Report on stochastic optimisation for wind engineering. Scipedia, 2022. http://dx.doi.org/10.23967/exaqute.2022.3.04.

Abstract:
This report presents the latest methods of optimisation under uncertainties investigated in the ExaQUte project, and their applications to problems related to civil and wind engineering. The measure of risk throughout the report is the conditional value at risk. First, the reference method is presented: the derivation of sensitivities of the risk measure; their accurate computation; and lastly, a practical optimisation algorithm with adaptive statistical estimation. Second, this method is directly applied to a nonlinear relaxation oscillator (FitzHugh–Nagumo model) with numerical experiments to demonstrate its performance. Third, the optimisation method is adapted to the shape optimisation of an airfoil and illustrated by a large-scale experiment on a computing cluster. Finally, the benchmark of the shape optimisation of a tall building under a turbulent flow is presented, followed by an adaptation of the optimisation method. All numerical experiments showcase the open-source software stack of the ExaQUte project for large-scale computing in a distributed environment.
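The conditional value at risk used as the risk measure throughout the report has a simple sample estimator (the mean of the worst tail of the loss distribution); a minimal sketch with hypothetical loss samples:

```python
import numpy as np

def cvar(samples, alpha=0.9):
    """Sample conditional value at risk: mean of the losses at or above the alpha-quantile."""
    samples = np.sort(np.asarray(samples))
    var = np.quantile(samples, alpha)              # value at risk (alpha-quantile)
    tail = samples[samples >= var]
    return var, tail.mean()

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=0.0, sigma=0.5, size=100000)   # hypothetical loss samples
v, c = cvar(losses, alpha=0.9)
print(f"VaR_0.9 = {v:.3f}, CVaR_0.9 = {c:.3f}")
```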
4

Carrasco, Marine, and N'golo Koné. Test for Trading Costs Effect in a Portfolio Selection Problem with Recursive Utility. CIRANO, January 2023. http://dx.doi.org/10.54932/bjce8546.

Abstract:
This paper addresses a portfolio selection problem with trading costs on the stock market. More precisely, we develop a simple GMM-based test procedure to test the significance of the trading costs effect in the economy with a flexible form of transaction costs. We also propose a two-step procedure to test overidentifying restrictions in our GMM estimation. In an empirical analysis, we apply our test procedures to the class of anomalies used in Novy-Marx and Velikov (2016). We show that transaction costs have a significant effect on investors' behavior for many anomalies. In that case, investors significantly improve the out-of-sample performance of their portfolios by accounting for trading costs.
5

Tosi, R., R. Codina, J. Principe, R. Rossi, and C. Soriano. D3.3 Report of ensemble based parallelism for turbulent flows and release of solvers. Scipedia, 2022. http://dx.doi.org/10.23967/exaqute.2022.3.06.

Abstract:
In this work we focus on reducing the wall-clock time required to compute statistical estimators of highly chaotic incompressible flows on high performance computing systems. Our approach consists of replacing a single long-term simulation by an ensemble of multiple independent realizations, which are run in parallel with different initial conditions. A failure-probability convergence criterion must be satisfied by the statistical estimator of interest to assess convergence. Its error analysis leads to the identification of two error contributions: the initialization bias and the statistical error. We propose an approach to systematically detect the burn-in time in order to minimize the initialization bias, accompanied by strategies to reduce the simulation cost. The framework is validated on two very high Reynolds number obstacle problems of wind-engineering interest in a high performance computing environment.
6

Ayoul-Guilmard, Q., F. Nobile, S. Ganesh, M. Nuñez, R. Tosi, C. Soriano, and R. Rosi. D5.5 Report on the application of multi-level Monte Carlo to wind engineering. Scipedia, 2022. http://dx.doi.org/10.23967/exaqute.2022.3.03.

Abstract:
We study the use of multi-level Monte Carlo methods for wind engineering. This report brings together methodological research on uncertainty quantification and work on target applications of the ExaQUte project in wind and civil engineering. First, a multi-level Monte Carlo for the estimation of the conditional value at risk and an adaptive algorithm are presented. Their reliability and performance are shown on the time-average of a non-linear oscillator and on the lift coefficient of an airfoil, with both preset and adaptively refined meshes. Then, we propose an adaptive multi-fidelity Monte Carlo algorithm for turbulent fluid flows where multilevel Monte Carlo methods were found to be inefficient. Its efficiency is studied and demonstrated on the benchmark problem of quantifying the uncertainty on the drag force of a tall building under random turbulent wind conditions. All numerical experiments showcase the open-source software stack of the ExaQUte project for large-scale computing in a distributed environment.
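The multi-level Monte Carlo estimator at the core of this report combines coarse and fine approximations through a telescoping sum; the sketch below uses a toy hierarchy of "levels" standing in for mesh refinements, with a shared random input providing the coupling between consecutive levels:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_level(level, w):
    """Toy level-l approximation of a quantity of interest driven by random input w.
    Higher levels have smaller bias; sharing w couples consecutive levels."""
    return 1.0 + 0.5 ** (level + 1) + 0.5 ** level * w   # 'exact' value is 1.0

def mlmc(levels, samples_per_level):
    """Telescoping MLMC estimate: E[Q_0] + sum_l E[Q_l - Q_{l-1}]."""
    w0 = rng.normal(size=samples_per_level[0])
    est = np.mean(q_level(0, w0))
    for l in range(1, levels):
        w = rng.normal(size=samples_per_level[l])        # common input couples the level pair
        est += np.mean(q_level(l, w) - q_level(l - 1, w))
    return est

print("MLMC estimate:", mlmc(levels=4, samples_per_level=[4000, 1000, 250, 60]))
```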
7

Finkelstain, Israel, Steven Buccola, and Ziv Bar-Shira. Pooling and Pricing Schemes for Marketing Agricultural Products. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568099.bard.

Der volle Inhalt der Quelle
Annotation:
In recent years there has been growing concern over the performance of Israeli and U.S. agricultural marketing organizations. In Israel, poor performance of some marketing institutions has led to radical reforms; examples are the two leading export industries, citrus and flowers. In the U.S., the growth of local market power is eliminating the competitive raw-product prices that served as the basis for farmer-cooperative payment plans. This research studies, theoretically, several aspects of the above problem and develops empirical methods to assess their relative importance. The theoretical part deals with two related aspects of the operation of processing and marketing firms. The first is the technological structure of these firms. To this end, we formalize a detailed theory that describes the production process itself and the firm's decisions. The model accounts for multiple products and product characteristics. The usefulness of the theory for measuring productivity and pricing raw materials is demonstrated. The second aspect of the processing and marketing firm that we study is unique to the agricultural sector, where many such firms are cooperatives. In such a cooperative, an efficient and fair mechanism for purchasing raw materials from members is crucial to the successful performance of the firm. We focus on: 1) pricing of raw materials; 2) comparison of quota and price regimes employed by the cooperative to regulate the quantities supplied by members, taking into consideration that the cooperative management is subject to pressure from member farmers; 3) tier pricing for raw materials to ensure efficiency and zero profits at the cooperative level, examined in both closed and open cooperatives. The empirical part focuses on: 1) the development of methodologies for estimating demand for differentiated products; 2) assessing farmers' response to component pricing; 3) measurement of potential and actual exploitation of market power by an agricultural marketing firm. The usefulness of the developed methodologies is demonstrated by several applications to agricultural sub-sectors, including the U.S. dairy industry, the Oregon wine industry, the Israeli cotton industry, and the Israeli citrus industry.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
8

Koduru, Smitha, und Jason Skow. PR-244-153719-R01 Quantification of ILI Sizing Uncertainties and Improving Correction Factors. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), August 2018. http://dx.doi.org/10.55274/r0011518.

Der volle Inhalt der Quelle
Annotation:
Operators routinely perform verification digs to assess whether an inline inspection (ILI) tool meets the performance specified by the ILI vendors. Characterizing the actual ILI tool performance using available field and ILI data is a difficult problem due to uncertainties associated with measurements and the geometric classification of features. The focus of this project is to use existing ILI and excavation data to develop better approaches for assessing ILI tool performance. For corrosion features, operators are primarily interested in quantifying magnetic flux leakage (MFL) ILI tool sizing error and its relationship to burst pressure estimates. In previously completed PRCI research, a limited MFL ILI dataset was used to determine the corrosion feature depth sizing bias and random error using principles published in API 1163 (2013). The research demonstrated the tendency for ILI predictions to be slightly lower than field measurements (i.e., under-call) for the dataset studied, and it provided a framework for characterizing this bias. The goal of this project was to expand on the previous work by increasing the number and types of feature morphologies available for analysis, and by estimating the sizing error of ILI-measured external corrosion features. New geometric classification criteria, complementing the current criteria suggested by the Pipeline Operators Forum (POF 2009), were also investigated. Lastly, correction factors based on burst pressure prediction accuracy were developed to account for the effect of adopting various feature interaction rules. This report has a related webinar (member login required).
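The basic bias/random-error split mentioned above can be illustrated with a short Python sketch; the matched depth arrays and the ±10 %WT tolerance are assumptions for illustration, not the report's actual acceptance criterion.

import numpy as np

def sizing_error_stats(ili_depths, field_depths, tolerance=10.0):
    """Bias and random error of ILI depth calls versus field measurements.

    ili_depths, field_depths: matched arrays of feature depths in percent
    wall thickness for excavated features (placeholder data). A negative
    bias indicates that the ILI tool under-calls relative to the field.
    """
    errors = np.asarray(ili_depths, dtype=float) - np.asarray(field_depths, dtype=float)
    bias = errors.mean()                                  # systematic sizing error
    random_error = errors.std(ddof=1)                     # scatter about the bias
    in_tolerance = np.mean(np.abs(errors) <= tolerance)   # fraction of calls within tolerance
    return bias, random_error, in_tolerance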
APA, Harvard, Vancouver, ISO und andere Zitierweisen
9

Gunay, Selim, Fan Hu, Khalid Mosalam, Arpit Nema, Jose Restrepo, Adam Zsarnoczay und Jack Baker. Blind Prediction of Shaking Table Tests of a New Bridge Bent Design. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, November 2020. http://dx.doi.org/10.55461/svks9397.

Der volle Inhalt der Quelle
Annotation:
Considering the importance of the transportation network and bridge structures, the associated seismic design philosophy is shifting from the basic collapse-prevention objective to maintaining functionality at the community scale in the aftermath of moderate to strong earthquakes (i.e., resiliency). In addition to performance, the associated construction philosophy is also being modernized, with the utilization of accelerated bridge construction (ABC) techniques to reduce the impacts of construction work on traffic, society, the economy, and on-site safety during construction. Recent years have seen several developments towards the design of low-damage bridges and ABC. According to the results of the conducted tests, these systems have significant potential to achieve the intended community-resiliency objectives. Taking advantage of such potential in standard design and analysis processes requires proper modeling that adequately characterizes the behavior and response of these bridge systems. To evaluate the current practices and abilities of the structural engineering community to model this type of resiliency-oriented bridge, the Pacific Earthquake Engineering Research Center (PEER) organized a blind prediction contest of a two-column bridge bent consisting of columns with enhanced response characteristics achieved by a well-balanced contribution of self-centering, rocking, and energy dissipation. The parameters of this blind prediction competition are described in this report, and the predictions submitted by the different teams are analyzed. In general, forces are predicted better than displacements. The post-tension bar forces and residual displacements are predicted with the best and least accuracy, respectively. Some of the predicted quantities have coefficient of variation (COV) values larger than 50%; however, in general, the scatter in the predictions among the different teams is not excessively large. The applied ground motions (GM) in the shaking table tests consisted of a series of naturally recorded earthquake acceleration signals, where GM1 is found to be the largest contributor to the displacement error for most of the teams, and GM7 is the largest contributor to the force (hence, acceleration) error. The large contribution of GM1 to the displacement error is due to the elastic response in GM1 and the errors stemming from the incorrect estimation of the period and damping ratio. The contribution of GM7 to the force error is due to errors in the estimation of the base-shear capacity. Several teams were able to predict forces and accelerations with only moderate bias. Displacements, however, were systematically underestimated by almost every team. This suggests that there is a general problem either in the assumptions made or in the models used to simulate the response of this type of bridge bent with enhanced response characteristics. Predictions of the best-performing teams were consistently and substantially better than average in all response quantities. The engineering community would benefit from learning the details of the approach of the best teams and the factors that caused the models of other teams to fail to produce similarly good results. Blind prediction contests provide: (1) very useful information regarding areas where current numerical models might be improved; and (2) quantitative data regarding the uncertainty of analytical models for use in performance-based earthquake engineering evaluations. Such blind prediction contests should be encouraged for other experimental research activities and are planned to be conducted annually by PEER.
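The scatter statistic quoted above can be computed per response quantity as in the following Python sketch; the team predictions and the measured value are placeholder inputs.

import numpy as np

def prediction_scatter(team_predictions, measured_value):
    """Coefficient of variation and relative bias of blind predictions.

    team_predictions: values submitted by the different teams for one
    response quantity (placeholder numbers); measured_value: the
    corresponding shaking-table result.
    """
    preds = np.asarray(team_predictions, dtype=float)
    cov = preds.std(ddof=1) / preds.mean()                        # scatter among teams
    rel_bias = (preds.mean() - measured_value) / measured_value   # bias of the team average
    return cov, rel_bias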
APA, Harvard, Vancouver, ISO und andere Zitierweisen
10

Meloche, Jean-Philippe, Jérôme Dupras, Andrew Gonzales, Justin Leroux und François Vaillancourt. Étude sur la mise en œuvre d’outils d’écofiscalité au service de la conservation et de l’adaptation aux changements climatiques dans les basses-terres du Saint-Laurent. CIRANO, Juni 2023. http://dx.doi.org/10.54932/kgdx2810.

Der volle Inhalt der Quelle
Annotation:
This report examines the contribution of ecofiscal measures to the protection of natural areas in the basses-terres du Saint-Laurent, with the aim of improving resilience to climate change and preserving the habitats of animal and plant species with precarious status. Two innovative ecofiscal measures based on the ecological footprint of land use are proposed: a tax measure and a subsidy measure. The objective of this report is to validate the feasibility of these tools and to measure their effects at the scale of Quebec. The tax base of the proposed ecofiscal measures rests on the areas of land parcels in different land-use classes. The tax and subsidy rates are set using a model of the value of ecosystem services, a choice of ecological criteria, and a method for ranking ecological-footprint classes. To evaluate the measures, a sample of parcels in the territory of the Ville de Laval is analyzed, and the results are then extrapolated to the whole of Quebec. Linking the data on the ecofiscal tools, the property assessment roll, and census data makes it possible to estimate the socioeconomic impacts of the tax measure. A survey of the Quebec population was also conducted to assess the social acceptability of the tool. Our results show the technical feasibility of the tax on the ecological footprint of land use. The proposed rates are the result of a rigorous approach based on an estimate of the value of environmental damage. For the residential sector, the tax amounts are reasonable and relatively proportional to households' ability to pay. The tax also has the intended effects, namely encouraging urban density and the addition of vegetation. For non-residential buildings, the tax burden is somewhat heavier but offers interesting potential. For the agricultural sector, the tax rates generate very high levies that risk creating negative distortions in the food market. The survey data show that a majority of the population opposes the introduction of a new tax on the ecological footprint of land use, but that those under 35 support the measure. For the subsidy measure, a review of existing programs in Quebec and elsewhere in the world allows conclusions to be drawn about the performance and relevance of such a tool. It shows that the relationship between parcel area, ecological value, and market value is problematic. The subsidy measure is more costly and less effective than the tax measure for promoting the preservation and restoration of natural areas.
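A minimal Python sketch of how such a land-use footprint tax could be computed per property, assuming hypothetical per-square-metre rates by land-use class (the report derives its rates from ecosystem-service damage estimates and an ecological ranking of classes):

def parcel_tax(areas_by_class, rates):
    """Tax owed for one property (illustrative only).

    areas_by_class: area in m2 per land-use class, e.g.
        {"impervious": 300, "lawn": 200, "wooded": 100}
    rates: hypothetical tax rates in $/m2 per class, e.g.
        {"impervious": 0.50, "lawn": 0.10, "wooded": 0.00}
    """
    return sum(rates.get(land_use, 0.0) * area
               for land_use, area in areas_by_class.items())

# example: parcel_tax({"impervious": 300, "lawn": 200, "wooded": 100},
#                     {"impervious": 0.50, "lawn": 0.10, "wooded": 0.00})
# returns 170.0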
APA, Harvard, Vancouver, ISO und andere Zitierweisen
