Academic literature on the topic 'Estimation par interval'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Estimation par interval.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Estimation par interval"

1

Magnussen, Steen, and Johannes Breidenbach. "Retrieval of among-stand variances from one observation per stand." Journal of Forest Science 66, no. 4 (April 30, 2020): 133–49. http://dx.doi.org/10.17221/141/2019-jfs.

Full text
Abstract:
Forest inventories provide predictions of stand means on a routine basis from models with auxiliary variables from remote sensing as predictors and response variables from field data. Many forest inventory sampling designs do not afford a direct estimation of the among-stand variance. As a consequence, the confidence interval for a model-based prediction of a stand mean is typically too narrow. We propose a new method to compute (from empirical regression residuals) an among-stand variance under sample designs that stratify sample selections by an auxiliary variable but otherwise do not allow a direct estimation of this variance. We test the method in simulated sampling from a complex artificial population with an age class structure. Two sampling designs are used (one-per-stratum, and quasi-systematic), neither of which recognizes stands. Among-stand estimates of variance obtained with the proposed method underestimated the actual variance by 30-50%, yet 95% confidence intervals for a stand mean achieved a coverage that was either slightly better than or on par with the coverage achieved with empirical linear best unbiased estimates obtained under less efficient two-stage designs.
APA, Harvard, Vancouver, ISO, and other styles
2

Gomez, Mayra, Roberta Cimmino, Dario Rossi, Gianluigi Zullo, Giuseppe Campanile, Gianluca Neglia, and Stefano Biffani. "The present of Italian Mediterranean buffalo: precision breeding based on multi-omics data." Acta IMEKO 12, no. 4 (December 5, 2023): 1–4. http://dx.doi.org/10.21014/actaimeko.v12i4.1692.

Full text
Abstract:
Genetic evaluation in the Italian Mediterranean Buffalo (IMB) traditionally relied on the BLUP method (best linear unbiased predictor), a mixed model system incorporating both random and fixed effects simultaneously. However, recent advancements in genome sequencing technologies have opened up the opportunity to incorporate genomic information into genetic evaluations. The ssGBLUP (single-step best linear unbiased predictor) has become the method par excellence. It replaces the traditional relationship matrix with one that combines pedigree and genomic relationships, allowing for the estimation of genetic values for non-genotyped animals. The findings of this study highlight how genomic selection enhances the precision of breeding values, facilitates greater genetic advancement and reduces the generation interval, ultimately enabling a rapid return on investment.
APA, Harvard, Vancouver, ISO, and other styles
3

Bochenina, Marina V. "Price Forecasting in the Housing Market amid Changes in the Primary Trend." Теория и практика общественного развития, no. 8 (August 30, 2023): 137–42. http://dx.doi.org/10.24158/tipor.2023.8.16.

Full text
Abstract:
The development of digital technologies contributes to the growing use of nonparametric methods. The presented study proposes a methodology for the forecast assessment of prices in the residential real estate market, taking into account the possible determination of the direction of dynamics in the anticipation period, based on the application of nonparametric Nadaraya–Watson estimation. The construction of the forecast model is based on the historically established trend of the price level in the primary or secondary housing market of Krasnodar Krai and does not take other factors into account. Particular attention is paid to the application of the Chow test to identify the point in time at which a structural shift occurred, which makes it possible to determine a period free of structural breaks for modeling the trend and thereby to determine the confidence interval of the forecast. The existing housing problem reflects the relevance of developing methods for forecasting price dynamics in the housing market, and the absence of additional factors reduces the error and increases the forecast quality.
APA, Harvard, Vancouver, ISO, and other styles
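The Nadaraya–Watson estimator mentioned in the abstract above is a kernel-weighted average of observed responses. A minimal sketch in plain Python with a Gaussian kernel; the toy data and bandwidth are illustrative assumptions, not values from the paper:

```python
import math

def nadaraya_watson(x_train, y_train, x, h):
    """Gaussian-kernel Nadaraya-Watson regression estimate at point x."""
    weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in x_train]
    s = sum(weights)
    if s == 0.0:
        raise ValueError("bandwidth too small: all weights vanished")
    return sum(w * yi for w, yi in zip(weights, y_train)) / s

# toy data: noisy linear trend
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.1, 2.9, 4.2]
print(nadaraya_watson(xs, ys, 2.0, 0.5))  # smoothed estimate near the local data (close to 2.06)
```

The bandwidth h controls the bias/variance trade-off: a small h tracks the data closely, a large h flattens the estimate toward the global mean.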
4

Krishna, Hare, Madhulika Dube, and Renu Garg. "Estimation of Stress Strength Reliability of Inverse Weibull Distribution under Progressive First Failure Censoring." Austrian Journal of Statistics 48, no. 1 (December 17, 2018): 14–37. http://dx.doi.org/10.17713/ajs.v47i4.638.

Full text
Abstract:
In this article, estimation of the stress-strength reliability $\delta=P\left(Y<X\right)$ based on progressively first-failure censored data from two independent inverse Weibull distributions with different shape and scale parameters is studied. The maximum likelihood estimator and an asymptotic confidence interval for $\delta$ are obtained. The Bayes estimator of $\delta$ under a generalized entropy loss function using non-informative and gamma informative priors is derived. Also, a highest posterior density credible interval for $\delta$ is constructed. A Markov chain Monte Carlo (MCMC) technique is used for the Bayes computation. The performance of the various estimation methods is compared by a Monte Carlo simulation study. Finally, a pair of real-life data sets is analyzed to illustrate the proposed methods of estimation.
APA, Harvard, Vancouver, ISO, and other styles
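The stress-strength reliability $\delta = P(Y < X)$ studied above can be approximated by plain Monte Carlo simulation. A minimal sketch, assuming an inverse Weibull variate is drawn via the inverse-CDF transform; the parameters below are illustrative, and this is simple simulation, not the paper's maximum likelihood or Bayes estimators:

```python
import math
import random

def rinvweibull(beta, lam, rng):
    """Draw from an inverse Weibull(shape=beta, scale=lam) via inverse-CDF sampling."""
    u = rng.random()
    return lam * (-math.log(u)) ** (-1.0 / beta)

def stress_strength_mc(bx, lx, by, ly, n=100_000, seed=1):
    """Monte Carlo estimate of delta = P(Y < X) with a 95% normal-approximation CI."""
    rng = random.Random(seed)
    hits = sum(rinvweibull(by, ly, rng) < rinvweibull(bx, lx, rng) for _ in range(n))
    p = hits / n
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, (p - half, p + half)
```

When X and Y share the same parameters, symmetry gives $\delta = 0.5$, which is a convenient sanity check for the sampler.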
5

Radović, Dunja, and Mirko Stojčić. "Predictive modeling of critical headway based on machine learning techniques." Tehnika 77, no. 3 (2022): 354–59. http://dx.doi.org/10.5937/tehnika2203354r.

Full text
Abstract:
Due to the impossibility of directly measuring the critical headway, numerous methods and procedures have been developed for its estimation. This paper uses the maximum likelihood method to estimate it at five roundabouts and, based on the obtained results and pairs of accepted and maximum rejected headways, several predictive models based on machine learning techniques were trained and tested. The main goal of the research is therefore to create a model for the prediction (classification) of the critical headway which uses pairs of accepted and maximum rejected headways as inputs, i.e. independent variables. The basic task of the model is to associate one of the previously estimated values of the critical headway with a given input pair of headways. The final predictive model is chosen from several offered alternatives based on prediction accuracy. The results of training and testing of various models based on machine learning techniques in IBM SPSS Modeler software indicate that the highest prediction accuracy is shown by the C5 decision tree model (73.266%), which was trained and tested on an extended data set obtained by data augmentation (DA).
APA, Harvard, Vancouver, ISO, and other styles
6

Queiroga, F., J. Epstein, M. L. Erpelding, L. King, M. Soudant, E. Spitz, J. F. Maillefert, et al. "AB1678 CONSTRUCTION OF A COMPOSITE SCORE FOR PATIENT SELF-REPORT OF FLARE IN OSTEOARTHRITIS: A COMPARISON OF METHODS WITH THE FLARE-OA-16 QUESTIONNAIRE." Annals of the Rheumatic Diseases 82, Suppl 1 (May 30, 2023): 2076.1–2077. http://dx.doi.org/10.1136/annrheumdis-2023-eular.1380.

Full text
Abstract:
Background: Having a score to assess the occurrence and severity of flares of knee or hip osteoarthritis (OA) to guide interventions is essential.
Objectives: To compare different methods of constructing a composite score for the Flare-OA-16 self-reported questionnaire for measuring knee and hip OA flare, defined as a cluster of symptoms of sufficient duration and intensity to require initiation, change or increase in therapy [1].
Methods: Participants with a physician diagnosis of knee and hip OA completed a validated 16-item questionnaire [2,3] assessing five dimensions of flare in OA, endorsed by OMERACT: pain, swelling, stiffness, psychological aspects, and consequences of symptoms. Three estimation methods were compared: the score obtained i) by second-order confirmatory factor analysis (CFA), weighting the factor loadings in a linear combination of the five dimensions; ii) by logistic regression, modeling the probability of having a flare according to the participant’s self-report (yes/no); and iii) by the Rasch method, using the average of the weighted scores from a Rasch model in each dimension. For the scores obtained by the three methods, the disordered items were modified, and the scores were then standardized on a scale from 0 to 10. The distribution of the scores in each model (floor effect without flare (FF) and ceiling effect with flare (CF)) was compared. The similarity between the scores was analyzed by intraclass correlation coefficient (ICC), and their performance was compared by areas under the ROC curves (AUC) with 95% confidence intervals. The intra-score test-retest reliability at 15 days was assessed by ICC.
Results: In a sample of 381 participants with complete questionnaires, 247 reported having a flare. With CFA, good fit indices (CFI=0.94; RMSEA=.08) justified the estimation of an overall score: mean=3.90 (SD=2.79), with FF 27.6% and CF 2.0%. For the logistic regression estimation, the overall score was mean=6.48 (SD=3.13), with FF 0% and CF 34.0%. With the Rasch model, the composite score was mean=4.15 (SD=2.45), with FF 18.7% and CF 0%. Similarity analysis indicated greater concordance between the CFA and Rasch scores (ICC=.99) than between the logistic regression score and the two others (ICC=.87 for each). The ROC curves indicated similar performance of the overall scores estimated by the logistic model (AUC=.88 [.85-.92]), by CFA (AUC=.86 [.82-.90]) and by the Rasch model (AUC=.86 [.82-.90]). Reproducibility was ICC=.84 [.95-.90] for the Rasch and CFA scores and ICC=.78 [.66-.86] for the logistic model.
Conclusion: This comparison of methods for constructing a global score for knee and hip OA flare explored three satisfactory alternatives. The second-order CFA confirmed the uniqueness of the flare construct measure, the logistic model had a slight superiority explained by the anchor variable used (patient-reported flare), and the Rasch model ensured that an interval scale was obtained for each dimension. The distribution of scores with the lowest combination of floor and ceiling effects was in favor of the Rasch model. The next step will be to document their respective performance in terms of sensitivity to change. A score obtained from the patient’s point of view can help increase adherence to the prescribed treatment and help physicians optimize the scheduling and delivery of medical consultations.
References: [1] Guillemin F., et al. Developing a Preliminary Definition and Domains of Flare in Knee and Hip Osteoarthritis (OA): Consensus Building of the Flare-in-OA OMERACT Group. J Rheumatol. 2019 Sep;46(9):1188–91. [2] Traore Y., et al. Development and validation of the Flare-OA questionnaire for measuring flare in knee and hip osteoarthritis. Osteoarthritis Cartilage. 2022 May;30(5):689–96. [3] Queiroga F., et al. Validation et réduction d’une échelle de mesure des poussées dans l’arthrose de la hanche et du genou par un modèle de Rasch. Rev Épidémiol Santé Publique. 2022 May;70:S70.
Acknowledgements: We acknowledge the participants of the study samples that were used in the study, without whom this research could not have been possible.
Financial support: This work was supported in part by the French PIA project “Lorraine Université d’Excellence”, reference ANR-15-IDEX-04-LUE.
Disclosure of Interests: None declared.
APA, Harvard, Vancouver, ISO, and other styles
7

Mihailovic, Zoran, Tatjana Atanasijevic, Vesna Popovic, Miroslav B. Milosevic, and Jan P. Sperhake. "Estimation of the Postmortem Interval by Analyzing Potassium in the Vitreous Humor." American Journal of Forensic Medicine and Pathology 33, no. 4 (December 2012): 400–403. http://dx.doi.org/10.1097/paf.0b013e31826627d0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Youn Ta, Marc, Amandine Carine Njeugeut Mbiafeu, Jean-Robert Kamenan Satti, Tchimou Vincent Assoma, and Jean Patrice Jourda. "Cartographie Automatique des Zones Inondées et Evaluation des Dommages dans le District d’Abidjan depuis Google Earth Engine." European Scientific Journal, ESJ 19, no. 32 (November 30, 2023): 54. http://dx.doi.org/10.19044/esj.2023.v19n32p54.

Full text
Abstract:
L'objectif de cette étude est de générer automatiquement des cartes de l'étendue des zones inondées dans le district d'Abidjan et d’évaluer les dommages causés. L’approche méthodologique a consisté à cartographier l'étendue des zones inondées en utilisant une méthode de détection des changements basée sur les données Sentinel-1 (SAR) avant et après une crue spécifique. Ensuite, les différentes classes d'enjeux (telles que les cultures, les zones habitées, les bâtiments, les routes et la densité de la population) ont été extraites à partir de diverses sources de données gratuites. Puis la superficie des enjeux affectés a été évaluée, en superposant les classes d’enjeux sur les zones inondées. De plus, une interface web a été conçue à l'aide des packages de Google Earth Engine. Cette interface web offre à l'utilisateur la possibilité de visualiser l'étendue des zones inondées et les cartes des enjeux affectés, avec une estimation statistique, pour une date donnée dans l'intervalle allant de 2015 à la date actuelle. La cartographie des zones inondées à la date du 25 juin 2020 a révélé une superficie totale de 25219,23 hectares de zones inondées soit 11,50% de la superficie totale du District d’Abidjan. Une estimation des dégâts causés par cette crue indique que 22 307,53 hectares d'enjeux ont été affectés en moyenne, ce qui représente 88,45 % des zones inondées. Cette répartition se décompose en 13 538,49 hectares (soit 53,68 %) de terres agricoles touchées et 8 769,04 hectares (soit 34,77 %) de zones urbaines touchées, impactant en moyenne 35 065 personnes. Les résultats de cette étude ont permis de constater que la partie centrale de la zone d'étude, au-dessus de la lagune, présente le plus grand potentiel de risque d'inondation en raison de la morphologie du terrain et de la vulnérabilité élevée des zones construites qui occupent la plaine inondable. 
The objective of this study is to automatically generate maps of the extent of flooded areas in the Abidjan district and assess the resulting damages. The methodological approach involved mapping the extent of flooded areas using a change detection method based on Sentinel-1 (SAR) data before and after a specific flood event. Subsequently, various classes of assets, such as crops, residential areas, buildings, roads, and population density, were extracted from various free data sources. The affected asset areas were then evaluated by overlaying the asset classes on the flooded areas. Furthermore, a web interface was designed using Google Earth Engine packages. This web interface allows users to visualize the extent of flooded areas and maps of the affected assets, along with statistical estimates, for a specific date within the interval from 2015 to the current date. Mapping of the flooded areas as of June 25, 2020, revealed a total area of 25219.23 hectares of flooded areas, representing 11.50% of the total area of the Abidjan District. An estimation of the damages caused by this flood indicates that, on average, 22307.53 hectares of assets were affected, accounting for 88.45% of the flooded areas. This distribution breaks down into 13538.49 hectares (53.68%) of affected agricultural lands and 8769.04 hectares (34.77%) of affected urban areas, impacting an average of 35,065 people. The study results revealed that the central part of the study area, located above the lagoon, presents the highest flood risk potential due to the terrain's morphology and the high vulnerability of built-up areas occupying the floodplain.
APA, Harvard, Vancouver, ISO, and other styles
9

Nikulchev, Evgeny, and Alexander Chervyakov. "Prediction Intervals: A Geometric View." Symmetry 15, no. 4 (March 23, 2023): 781. http://dx.doi.org/10.3390/sym15040781.

Full text
Abstract:
This article provides a review of approaches to the construction of prediction intervals. To increase the reliability of prediction, point predictions are replaced by intervals for many purposes. Interval prediction generates a pair of future values, namely the upper and lower bounds for each prediction point. That is, from historical data, which may include the graph of a continuous or discrete function, two functions are obtained as a prediction, i.e., the upper and lower bounds of the estimate. The prediction boundaries should then guarantee the probability that the true values lie inside the bounds found. The task of building a model from a time series is, by its very nature, ill-posed: there is an infinite set of equations whose solutions are close to the time series used for machine learning. In the interval setting, the inverse problem of dynamics allows a choice from the entire range of modeling methods, using as solutions confidence intervals, intervals of a given width, or intervals chosen by multi-criteria optimization of the criteria for evaluating interval solutions. This article considers a geometric view of prediction intervals, and a new approach is given.
APA, Harvard, Vancouver, ISO, and other styles
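As a toy illustration of the upper/lower-bound pair described above, the following sketch builds a constant-width prediction interval around an ordinary least-squares fit from the spread of historical residuals. This is a deliberately simple stand-in for the constructions the article reviews, and all data in it are made up:

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def prediction_interval(xs, ys, x_new, z=1.96):
    """Lower/upper prediction bounds at x_new from the residual spread."""
    a, b = fit_line(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s = math.sqrt(sum(r * r for r in resid) / (len(resid) - 2))  # residual std (n-2 dof)
    y_hat = a + b * x_new
    return y_hat - z * s, y_hat + z * s
```

On noiseless data the residual spread is zero and the two bounds collapse onto the point prediction; real series keep the bounds apart in proportion to the model's historical error.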
10

Gubarev, Vyacheslav, Serhiy Melnychuk, and Nikolay Salnikov. "METHOD AND ALGORITHMS FOR CALCULATING HIGH-PRECISION ORIENTATION AND MUTUAL BINDING OF COORDINATE SYSTEMS OF SPACECRAFT STAR TRACKERS CLUSTER BASED ON INACCURATE MEASUREMENTS." Journal of Automation and Information sciences 1 (January 1, 2022): 74–92. http://dx.doi.org/10.34229/1028-0979-2022-1-8.

Full text
Abstract:
The problem of increasing the accuracy of determining the orientation of a spacecraft (SC) using a system of star trackers (ST) is considered. Methods are proposed that make it possible to use a joint field of view and refine the relative position of ST to improve the accuracy of orientation determination. The use of several star trackers leads to an increase in the angle between the directions to the stars into the joint field of view, which makes it possible to reduce the condition number of the matrices used in calculating the orientation parameters. The paper develops a combinatorial method for interval estimation of the SC orientation with an arbitrary number of star trackers. To calculate the ST orientation, a linear problem of interval estimation of the orthogonal orientation matrix for a sufficiently large number of stars is solved. The orientation quaternion is determined under the condition that the corresponding orientation matrix belongs to the obtained interval estimates. The case is considered when the a priori estimate of the mutual binding of star trackers can have an error comparable to or greater than the error in measuring the angular coordinates of stars. With inaccurately specified matrices of the mutual orientation of the star trackers, the errors in the mutual orientations of the STs are added to the errors of measuring the directions to the stars, which leads to an expansion of the uncertainty intervals of the right-hand sides of the system of linear algebraic equations used to determine the orientation parameters. A method is proposed for solving the problem of refining the mutual reference of the internal coordinate systems of a pair of ST as an independent task, after which the main problem of increasing the accuracy of spacecraft orientation is solved. The developed method and algorithms for solving such a complex problem are based on interval estimates of orthogonal orientation matrices. 
For additional narrowing of the intervals, the property of orthogonality of orientation matrices is used. The numerical simulation carried out made it possible to evaluate the advantages and disadvantages of each of the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Estimation par interval"

1

Rabehi, Djahid. "Estimation par Intervalles des Systèmes Cyber-physiques." Thesis, Orléans, 2019. http://www.theses.fr/2019ORLE3038.

Full text
Abstract:
Les systèmes cyber-physiques sont des intégrations intelligentes de calculateurs, de réseaux de communications, et de processus physiques. Dans cette thèse, nous travaillons dans le contexte erreur inconnue mais bornée de borne connue, et nous nous intéressons à l'estimation d'état des systèmes dynamiques sous contraintes de communication. Nous proposons des méthodes de synthèse d'observateurs par intervalles pour des systèmes linéaires à temps continu, et dont les mesures à temps discret sont transmises à travers un réseau de communication. Les contributions de cette thèse sont les suivantes : (i) nous concevons un observateur impulsif par intervalles pour des systèmes linéaires à temps continu avec des mesures sporadiques ; (ii) nous proposons un observateur impulsif par intervalles avec gain L1 fini et échantillonnage contrôlé, puis nous développons une méthode de synthèse pour concevoir simultanément le gain d'observation et la condition de contrôle de l'échantillonnage des mesures ; (iii) en utilisant l'observateur impulsif par intervalles proposé dans (i), nous développons une stratégie d'estimation sécurisée pour des systèmes soumis à des cyber-attaques.
Cyber-physical systems are smart integrations of computation, networking, and physical processes. In this thesis, we deal with interval observers for cyber-physical systems in which the continuous-time physical systems are estimated and monitored using discrete-time data transmitted over a network. The contributions of the presented material are threefold: (i) we design an interval impulsive observer for continuous-time linear systems with sporadically available measurements; (ii) we propose a finite L1-gain event-triggered interval impulsive observer for continuous-time linear systems, for which we develop a co-design procedure to simultaneously design the observer gain and the event-triggering condition; (iii) using the interval impulsive observer, we develop a secure estimation strategy for multi-output systems under cyber-attacks.
APA, Harvard, Vancouver, ISO, and other styles
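The interval-estimation idea running through this and the following theses can be illustrated with a one-step bound propagation for a discrete-time linear system x+ = Ax + w with |w_i| <= wbar_i, splitting A into its nonnegative and nonpositive parts. This is a bare open-loop interval predictor sketch, not the impulsive observer designed in the thesis:

```python
def split_pos_neg(A):
    """Split A into A = Ap - An with Ap, An elementwise nonnegative."""
    Ap = [[max(a, 0.0) for a in row] for row in A]
    An = [[max(-a, 0.0) for a in row] for row in A]
    return Ap, An

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def interval_step(A, xlo, xhi, wbar):
    """One step of guaranteed bound propagation for x+ = A x + w, |w_i| <= wbar_i."""
    Ap, An = split_pos_neg(A)
    # For xlo <= x <= xhi (elementwise), Ap*xlo - An*xhi <= A x <= Ap*xhi - An*xlo.
    lo = [p - n - w for p, n, w in zip(matvec(Ap, xlo), matvec(An, xhi), wbar)]
    hi = [p - n + w for p, n, w in zip(matvec(Ap, xhi), matvec(An, xlo), wbar)]
    return lo, hi
```

By construction every true trajectory starting in the box [xlo, xhi] stays between the returned bounds for one step; observer designs such as those above add measurement-driven correction terms so the bounds stay tight over time.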
2

Mohammedi, Irryhl. "Contribution à l’estimation robuste par intervalle des systèmes multivariables LTI et LPV : Application aux systèmes aérospatiaux." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0142.

Full text
Abstract:
Les travaux présentés dans ce mémoire de thèse visent à développer de nouvelles approches basées sur une nouvelle classe particulière d’estimateurs d´état : les filtres dits par intervalles. Tout comme la classe des observateurs intervalles, l’objectif est d’estimer les bornes supérieures et inférieures des états d’un système, à chaque instant de temps. L’approche proposée repose sur la théorie des systèmes monotones et sur la connaissance a priori du domaine d’appartenance, supposé borné, des incertitudes de modèle et des entrées exogènes (perturbations, bruit de mesure, etc). L’élément clé de l’approche proposée repose sur l’utilisation de filtre d’ordre quelconque, sans structure a priori fixée, plutôt qu’une structure basée sur l’observateur (reposant uniquement sur une structure dynamique du système étudié). La synthèse des paramètres du filtre repose sur la résolution d’un problème d’optimisation sous contraintes de type inégalités matricielles linéaires et bilinéaires (LMI et BMI) permettant de garantir simultanément les conditions d’existence du filtre ainsi qu’un niveau de performance, soit dans un contexte énergie, soit dans un contexte amplitude ou soit dans un contexte mixte énergie/amplitude. La méthodologie de synthèse proposée est illustrée sur un exemple académique et est comparée avec d’autres méthodes existantes dans la littérature. Enfin, la méthodologie est appliquée au cas du contrôle d’attitude et d’orbite d’un satellite, sous des conditions de simulations réalistes
The work of this thesis aims at developing new approaches based on a particular new class of state estimators: the so-called interval filters. As with the class of interval observers, the objective is to estimate, in a guaranteed way, the upper and lower bounds of the states of a system at each time instant. The proposed approach is based on the theory of monotone systems and on a priori knowledge of the membership domain, assumed bounded, of the model uncertainties and exogenous inputs (disturbances, measurement noise, etc.). The key element of the proposed approach is the use of a filter of arbitrary order, with no a priori fixed structure, rather than an observer-based structure (relying only on the dynamic structure of the studied system). The synthesis of the filter parameters is based on the resolution of a constrained optimization problem with linear and bilinear matrix inequality (LMI and BMI) constraints, which simultaneously guarantees the existence conditions of the filter and a performance level, either in an energy context for LTI systems, or in an amplitude or mixed energy/amplitude context for LPV systems. The proposed synthesis methodology is illustrated on an academic example and compared with other existing methods in the literature. Finally, the methodology is applied to the attitude and orbit control of a satellite, under realistic simulation conditions.
APA, Harvard, Vancouver, ISO, and other styles
3

Filali, Rayen. "Estimation et commande robustes de culture de microalgues pour la valorisation biologique de CO2." Phd thesis, Supélec, 2012. http://tel.archives-ouvertes.fr/tel-00765421.

Full text
Abstract:
This thesis focuses on maximizing the consumption of carbon dioxide by microalgae. In the light of current environmental concerns, mainly related to large greenhouse-gas emissions and CO2 in particular, microalgae have been shown to play a very promising role in CO2 bio-fixation. With this in mind, we address the design of a robust control law guaranteeing optimal operating conditions for a culture of the microalga Chlorella vulgaris in an instrumented photobioreactor. This thesis rests on three main axes. The first concerns modeling the growth of the chosen algal species from a mathematical model capturing the influence of light and of the total inorganic carbon concentration. With control in view, the second axis is devoted to estimating the cell concentration from the real-time measurements of dissolved carbon dioxide; three types of observers are studied and compared: an extended Kalman filter, an asymptotic observer, and an interval observer. The last axis concerns the implementation of a nonlinear model predictive control law, coupled with an estimation strategy, to regulate the cell concentration around a value maximizing CO2 consumption. The performance and robustness of this control law were validated in simulation and experimentally on a laboratory-scale instrumented photobioreactor. This thesis is a preliminary study toward maximizing carbon dioxide fixation by microalgae.
APA, Harvard, Vancouver, ISO, and other styles
4

Kharkovskaia, Tatiana. "Conception d'observateurs par intervalle pour les systèmes à paramètres distribués avec incertitudes." Thesis, Ecole centrale de Lille, 2019. http://www.theses.fr/2019ECLI0019.

Full text
Abstract:
Ce travail présente de nouveaux résultats sur l'estimation d'état par intervalle pour des systèmes distribués incertains, qui sont des systèmes de dimension infinie : leur état, fonctionnel, est régi par des équations aux dérivées partielles (EDP) ou fonctionnelles (EDF). Le principe de l'observation par intervalle est d’estimer à chaque instant un ensemble de valeurs admissibles pour l'état (un intervalle), de manière cohérente avec la sortie mesurée. Les chapitres 2 et 3 se concentrent sur la conception d'observateurs par intervalle pour une EDP parabolique avec des conditions aux limites de type Dirichlet. Dans le chapitre 2, on utilise une approximation en dimension finie (éléments finis de type Galerkin), l'intervalle d’inclusion tenant compte des erreurs de l'approximation. Le chapitre 3 présente un observateur par intervalle sous la forme d'EDP sans projection de Galerkin. Dans ces deux chapitres, les estimations par intervalle obtenues sont utilisées pour concevoir un contrôleur stabilisant par retour de sortie dynamique. Le chapitre 4 envisage le cas des systèmes différentiels fonctionnels (EDF) à retards, à travers une équation différentielle de deuxième ordre avec incertitudes. La méthode proposée contient deux observateurs par intervalle consécutifs : le premier calcule à chaque instant l'intervalle pour la position non retardée grâce à de nouvelles conditions de positivité dépendantes du retard. Le deuxième observateur calcule un intervalle pour la vitesse, grâce à une estimation de dérivée. Tous les résultats obtenus sont vérifiés par des simulations numériques. En particulier, le chapitre 2 inclut des expériences sur le modèle Black – Scholes
This work presents new results on interval state estimation for uncertain distributed systems, whose state has infinite dimension and is described by partial (PDE) or functional (FDE) differential equations. An interval observer evaluates at each time instant a set of admissible values for the state (an interval), consistently with the measured output. The design is based on positive systems theory. Chapters 2 and 3 focus on interval observer design for a parabolic PDE with Dirichlet boundary conditions. The method in Chapter 2 is based on a finite-element Galerkin approximation; the interval inclusion of the state is calculated using the error estimates of the approximation. Chapter 3 presents an interval observer in the form of PDEs without Galerkin projection. In both chapters, the obtained interval estimates are applied to the design of a dynamic output-feedback stabilizing controller. Chapter 4 deals with a second-order delay differential equation with uncertainties, corresponding for instance to a mechanical system with delayed position measurements, which has the form of an FDE. The proposed method contains two consecutive interval observers. The first one estimates, at each instant of time, the interval for the delay-free position using new delay-dependent conditions on positivity. Then, the derived estimates of the position are used to design the second observer, providing an interval for the velocity. All the obtained results are supported by numerical simulations. In particular, Chapter 2 includes experiments on the Black–Scholes model.
APA, Harvard, Vancouver, ISO, and other styles
5

Zammali, Chaima. "Robust state estimation for switched systems : application to fault detection." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS124.

Full text
Abstract:
Cette thèse s’intéresse à l’estimation d’état et à la détection de défauts de systèmes linéaires à commutations. Deux approches d’estimation par intervalles sont développées. La première consiste à proposer une estimation d’état pour des systèmes linéaires à commutations à paramètres variants en temps continu et en temps discret. La deuxième approche consiste à proposer une nouvelle logique d’estimation du signal de commutations d’un système linéaire à commutations à entrée inconnue en combinant la technique par modes glissants et l’approche par intervalles. Le problème d’estimation d’état constitue une des étapes fondamentales pour traiter le problème de détection de défauts. Par conséquent, des solutions robustes pour la détection de défauts sont développées en utilisant la théorie des ensembles. Deux méthodologies ont été employées pour détecter les défauts : un observateur classique par intervalle et une nouvelle structure TNL d’observateur par intervalle. Les performances de détection de défauts sont améliorées en se basant sur un critère L∞. De plus, une stratégie robuste de détection de défauts est introduite en utilisant des techniques zonotopiques et ellipsoïdales. En se basant sur des critères d’optimisation, ces techniques sont utilisées pour fournir des seuils dynamiques pour l’évaluation du résidu et pour améliorer la précision des résultats de détection de défauts sans tenir compte de l’hypothèse de coopérativité. Les méthodes développées dans cette thèse sont illustrées par des exemples académiques et les résultats obtenus montrent leur efficacité
This thesis deals with state estimation and fault detection for a class of switched linear systems. Two interval state estimation approaches are proposed. The first one is investigated for both continuous and discrete-time linear parameter varying switched systems subject to measured polytopic parameters. The second approach is concerned with a new switching signal observer, combining sliding mode and interval techniques, for a class of switched linear systems with unknown input. State estimation remains one of the fundamental steps in dealing with fault detection. Hence, robust solutions for fault detection are considered using set-membership theory. Two interval techniques are developed to deal with fault detection for discrete-time switched systems. First, a commonly used interval observer is designed based on an L∞ criterion to obtain accurate fault detection results. Second, a new interval observer structure (TNL structure) is investigated to relax the cooperativity constraint. In addition, a robust fault detection strategy is considered using zonotopic and ellipsoidal analysis. Based on optimization criteria, the zonotopic and ellipsoidal techniques are used to provide a systematic and effective way to improve the accuracy of the residual boundaries without considering the nonnegativity assumption. The techniques developed in this thesis are illustrated using academic examples and the results show their effectiveness.
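One way interval estimates support fault detection, sketched under the assumption that a fault-free interval observer supplies output bounds [y_lo, y_hi] at each step (the function name and data are invented, and this omits the thesis's TNL and zonotopic machinery):

```python
def detect_faults(y_meas, y_lo, y_hi):
    """Flag the time steps at which the measured output leaves the interval
    predicted by a fault-free interval observer: an inconsistency between
    measurement and model is then guaranteed, signalling a fault."""
    return [k for k, (y, lo, hi) in enumerate(zip(y_meas, y_lo, y_hi))
            if not (lo <= y <= hi)]
```

Because the bounds are guaranteed in the fault-free case, every flagged step is a true inconsistency; missed faults remain possible when the fault keeps the output inside the interval.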
APA, Harvard, Vancouver, ISO, and other styles
6

Khemane, Firas. "Estimation fréquentielle par modèle non entier et approche ensembliste : application à la modélisation de la dynamique du conducteur." Thesis, Bordeaux 1, 2011. http://www.theses.fr/2010BOR14282/document.

Full text
Abstract:
Les travaux de cette thèse traitent de la modélisation de systèmes par fonctions de transfert non entières à partir de données fréquentielles incertaines et bornées. À cet effet, les définitions d'intégration et de dérivation non entières sont d'abord étendues aux intervalles. Puis des approches ensemblistes sont appliquées pour l'estimation de l'ensemble des coefficients et des ordres de dérivation sous la forme d'intervalles. Ces approches s'appliquent à l'estimation des paramètres de systèmes linéaires invariants dans le temps (LTI) certains, de systèmes LTI incertains et de systèmes linéaires à paramètres variant dans le temps (LPV). L'estimation paramétrique par approche ensembliste est particulièrement adaptée à la modélisation de la dynamique du conducteur, car les études sur un, voire plusieurs, individus montrent que les réactions recueillies ne sont jamais identiques mais varient d'une expérience à l'autre, voire d'un individu à l'autre.
This thesis deals with system identification and modeling of fractional transfer functions using bounded and uncertain frequency responses. To this end, the definitions of fractional differentiation and integration are first extended to intervals. Set-membership approaches are then applied to estimate coefficients and derivative orders as intervals. These methods are applied to estimate certain linear time-invariant (LTI) systems, uncertain LTI systems and linear parameter varying (LPV) systems. They are notably adopted to model the driver's dynamics, since most studies on one or several individuals have shown that the collected reactions are never identical and vary from one experiment to another.
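The set-membership idea of estimating parameters as intervals can be sketched on a toy linear model y = theta*u with a bounded measurement error (an illustrative assumption, not the thesis's fractional-order setting): the feasible parameter set is the intersection of the intervals implied by each data point.

```python
def feasible_parameter_set(us, ys, err):
    """Bounded-error estimation for y = theta*u: intersect the intervals
    [(y-err)/u, (y+err)/u] over all data points (u > 0 assumed)."""
    lo, hi = float("-inf"), float("inf")
    for u, y in zip(us, ys):
        assert u > 0, "positive regressors assumed in this toy example"
        lo = max(lo, (y - err) / u)   # tightest lower bound so far
        hi = min(hi, (y + err) / u)   # tightest upper bound so far
    if lo > hi:
        return None                   # no parameter is consistent with all data
    return (lo, hi)
```

An empty intersection (return value None) means the error bound or the model structure is falsified by the data, a useful diagnostic that point estimators do not provide.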
APA, Harvard, Vancouver, ISO, and other styles
7

Dinh, Ngoc Thach. "Observateur par intervalles et observateur positif." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112335/document.

Full text
Abstract:
Cette thèse est construite autour de deux types d'estimation de l'état d'un système, traités séparément. Le premier problème abordé concerne la construction d'observateurs positifs basés sur la métrique de Hilbert. Le second traite de la synthèse d'observateurs par intervalles pour différentes familles de systèmes dynamiques et la construction de lois de commande robustes qui stabilisent ces systèmes.Un système positif est un système dont les variables d'état sont toujours positives ou nulles lorsque celles-ci ont des conditions initiales qui le sont. Les systèmes positifs apparaissent souvent de façon naturelle dans des applications pratiques où les variables d'état représentent des quantités qui n'ont pas de signification si elles ont des valeurs négatives. Dans ce contexte, il parait naturel de rechercher des observateurs fournissant des estimées elles aussi positives ou nulles. Dans un premier temps, notre contribution réside dans la mise au point d'une nouvelle méthode de construction d'observateurs positifs sur l'orthant positif. L'analyse de convergence est basée sur la métrique de Hilbert. L'avantage concurrentiel de notre méthode est que la vitesse de convergence peut être contrôlée.Notre étude concernant la synthèse d'observateurs par intervalles est basée sur la théorie des systèmes dynamiques positifs. Les observateurs par intervalles constituent un type d'observateurs très particuliers. Ce sont des outils développés depuis moins de 15 ans seulement : ils trouvent leur origine dans les travaux de Gouzé et al. en 2000 et se développent très rapidement dans de nombreuses directions. Un observateur par intervalles consiste en un système dynamique auxiliaire fournissant un intervalle dans lequel se trouve l'état, en considérant que l'on connait des bornes pour la condition initiale et pour les quantités incertaines. 
Les observateurs par intervalles donnent la possibilité de considérer le cas où des perturbations importantes sont présentes et fournissent certaines informations à tout instant
This thesis presents new results in the field of state estimation based on the theory of positive systems. It is composed of two separate parts. The first one studies the problem of positive observer design for positive systems. The second one which deals with robust state estimation through the design of interval observers, is at the core of our work.We begin our thesis by proposing the design of a nonlinear positive observer for discrete-time positive time-varying linear systems based on the use of generalized polar coordinates in the positive orthant. For positive systems, a natural requirement is that the observers should provide state estimates that are also non-negative so they can be given a physical meaning at all times. The idea underlying the method is that first, the direction of the true state is correctly estimated in the projective space thanks to the Hilbert metric and then very mild assumptions on the output map allow to reconstruct the norm of the state. The convergence rate can be controlled.Later, the thesis is continued by studying the so-called interval observers for different families of dynamic systems in continuous-time, in discrete-time and also in a context "continuous-discrete" (i.e. a class of continuous-time systems with discrete-time measurements). Interval observers are dynamic extensions giving estimates of the solution of a system in the presence of various type of disturbances through two outputs giving an upper and a lower bound for the solution. Thanks to interval observers, one can construct control laws which stabilize the considered systems
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Qiaochu. "Contribution à la planification d'expériences, à l'estimation et au diagnostic actif de systèmes dynamiques non linéaires : application au domaine aéronautique." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2231/document.

Full text
Abstract:
Dans ce travail de thèse, nous nous focalisons sur le problème de l'intégration d'incertitudes à erreurs bornées pour les systèmes dynamiques, dont les entrées et les états initiaux doivent être optimaux afin de réaliser certaines fonctionnalités. Le document comporte cinq chapitres : le premier est une introduction présentant le panorama du travail. Le deuxième chapitre présente les outils de base de l'analyse par intervalles. Le chapitre 3 est dédié à l'estimation d'états et de paramètres. Nous décrivons d'abord une procédure pour résoudre un système d'équations différentielles ordinaires à l'aide de cet outil. Ainsi, une estimation des états à partir des conditions initiales peut être faite. Les systèmes différentiels considérés dépendent de paramètres qui doivent être estimés. Ce problème inverse peut être résolu via l'inversion ensembliste. L'approche par intervalles est une procédure déterministe naturelle : tous les résultats obtenus sont garantis. Néanmoins, cette approche n'est pas toujours efficace, car certaines opérations ensemblistes conduisent à des temps de calcul importants. Nous présentons quelques techniques qui, dans un contexte à erreurs bornées, permettent d'accélérer cette procédure. Celles-ci utilisent des contracteurs ciblés qui permettent ainsi une réduction de ce temps. Ces algorithmes ont été testés et ont montré leur efficacité sur plusieurs applications : des modèles pharmacocinétiques et un modèle du vol longitudinal d'avion en atmosphère au repos. Le chapitre 4 présente la recherche d'entrées optimales dans le cadre de l'analyse par intervalles, ce qui est une approche originale. Nous avons construit plusieurs critères nouveaux permettant cette recherche. Certains sont intuitifs, d'autres ont nécessité un développement théorique. Ces critères ont été utilisés pour la recherche d'états initiaux optimaux. Des comparaisons ont été faites sur plusieurs applications et l'efficacité de certains critères a été mise en évidence. Dans le chapitre 5, nous appliquons les approches présentées précédemment au diagnostic via l'estimation de paramètres. Nous avons développé un processus complet pour le diagnostic et formulé un processus pour le diagnostic actif, avec une application en aéronautique. Le dernier chapitre résume les travaux réalisés dans cette thèse et donne des perspectives de recherche. Les algorithmes proposés dans ce travail ont été développés en C++ et utilisent l'environnement du calcul ensembliste.
In this work, we study the problem of integrating bounded-error uncertainty for dynamic systems, whose inputs and initial states have to be optimized so that other operations can be carried out more easily and effectively. The document consists of six chapters. Chapter 1 is an introduction to the general subject. Chapter 2 presents the basic tools of interval analysis. Chapter 3 is dedicated to state estimation and parameter estimation. We first explain how to solve ordinary differential equations using interval analysis, which is the basic tool for the state estimation problem given the initial condition of the studied systems. We then look into the parameter estimation problem using interval analysis as well. Based on a simple hypothesis on the uncertain variables, we compute the system's parameters in a bounded-error form, treating operations on intervals as operations on sets. Guaranteed results are the advantage of interval analysis, but the large computation time is still an obstacle to its popularization in many nonlinear estimation fields. We present techniques, known as contractors in the constraint propagation field, to accelerate these time-consuming processes. At the end of the chapter, different examples serve as tests of the proposed methods. Chapter 4 presents the search for optimal inputs in the context of interval analysis, which is an original approach. We have constructed several new criteria allowing such a search. Some of them are intuitive; the others require a theoretical proof. These criteria have been used for the search for optimal initial states and better parameter estimation results. Comparisons are made on multiple applications and the efficiency of certain criteria is demonstrated. In Chapter 5, we apply the approaches proposed above to diagnosis by state estimation and parameter estimation. We have developed a complete procedure for diagnosis. The optimal input design is reconsidered in an active diagnosis context. Both state and parameter estimation are implemented on an aeronautical application from the literature. The last chapter gives a brief summary of the work, and further research directions are given in the perspectives section. All the algorithms are written in C/C++ on a Linux-based operating system.
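A contractor of the kind used to speed up these set computations can be sketched for the single constraint x + y = z on intervals (a standard forward-backward contractor; the specific toy constraint is an assumption, since the thesis applies contractors to ODE systems):

```python
def contract_sum(x, y, z):
    """Forward-backward contractor for the constraint x + y = z,
    where each variable is an interval given as a (lo, hi) pair.
    No consistent value is lost: only infeasible parts are cut off."""
    # forward step: z must lie inside x + y
    z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
    # backward steps: x must lie inside z - y, and y inside z - x
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
    return x, y, z
```

Iterating such contractors over all constraints of a problem shrinks the search boxes cheaply before any bisection, which is exactly where the computation-time savings come from.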
APA, Harvard, Vancouver, ISO, and other styles
9

Boulanger, Xavier. "Modélisation du canal de propagation Terre-Espace en bandes Ka et Q/V : synthèse de séries temporelles, variabilité statistique et estimation de risque." Thesis, Toulouse, ISAE, 2013. http://www.theses.fr/2013ESAE0009/document.

Full text
Abstract:
Les bandes de fréquences utilisées conventionnellement pour les systèmes fixes de télécommunication par satellites (bandes C et Ku, i.e. 4-15 GHz) sont congestionnées. Néanmoins, le marché des télécommunications civil et de défense accuse une demande de plus en plus importante en services multimédia haut-débit. Par conséquent, l'augmentation de la fréquence porteuse vers les bandes Ka et Q/V (20-40/50 GHz) est activement étudiée. Pour des fréquences supérieures à 5 GHz, la propagation des signaux radioélectriques souffre de l'atténuation troposphérique. Parmi les différents contributeurs à l'affaiblissement troposphérique total (atténuation, scintillation, dépolarisation, température de bruit du ciel), les précipitations jouent un rôle prépondérant. Pour compenser la détérioration des conditions de propagation, des techniques de compensation des affaiblissements (FMT : Fade Mitigation Technique) permettant d'adapter en temps réel les caractéristiques du système en fonction de l'état du canal de propagation doivent être employées. Une alternative à l'utilisation de séries temporelles expérimentales peu nombreuses est la génération de séries temporelles synthétiques d'atténuation due à la pluie et d'atténuation totale représentatives d'une liaison donnée. Le manuscrit est organisé autour de cinq articles. La première contribution est dédiée à la modélisation temporelle de l'affaiblissement troposphérique total. Le deuxième article porte sur des améliorations significatives du modèle de génération de séries temporelles d'atténuation due à la pluie recommandé par l'UIT-R. Les trois contributions suivantes constituent une analyse critique et une modélisation de la variabilité des statistiques du 1er ordre utilisées lors des tests des modèles de canal. La variance de l'estimateur statistique des distributions cumulatives complémentaires de l'atténuation due à la pluie et de l'intensité de précipitation est alors mise en évidence. Un modèle à application mondiale paramétré au moyen de données expérimentales est proposé. Celui-ci permet, d'une part, d'estimer les intervalles de confiance associés aux mesures de propagation et, d'autre part, de quantifier le risque en termes de disponibilité annuelle associée à la prédiction d'une marge de propagation donnée. Cette approche est étendue aux variabilités des statistiques jointes. Elle permet alors une évaluation statistique de l'impact des techniques de diversité de site sur les performances systèmes, tant à micro-échelle (quelques km) qu'à macro-échelle (quelques centaines de km).
Nowadays, the C and Ku bands used for fixed SATCOM systems are totally congested. However, the demand of end users for high data rate multimedia services is increasing. Consequently, the use of higher frequency bands (Ka: 20 GHz and Q/V: 40/50 GHz) is under investigation. For frequencies higher than 5 GHz, radiowave propagation is strongly affected by tropospheric attenuation. Among the different contributors, rain is the most significant. To compensate for the deterioration of the propagation channel, Fade Mitigation Techniques (FMT) are used. The lack of experimental data needed to optimize the real-time control loops of FMT leads to the use of rain attenuation and total attenuation time series synthesizers. The manuscript is a compilation of five articles. The first contribution is dedicated to the temporal modelling of total impairments. The second article aims at providing significant improvements on the rain attenuation time series synthesizer recommended by ITU-R. The last three contributions are a critical analysis and a modelling of the variability observed on the 1st order statistics used to validate propagation channel models. The variance of the statistical estimator of the complementary cumulative distribution functions of rainfall rate and rain attenuation is highlighted. A worldwide model parameterized in compliance with propagation measurements is proposed. It allows the confidence intervals to be estimated and the risk on a required availability associated with a given propagation margin prediction to be quantified. This approach is extended to the variability of joint statistics. It allows the impact of site diversity techniques on system performances at small scale (a few km) and large scale (a few hundred km) to be evaluated.
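The estimator-variance issue highlighted here can be illustrated with a normal-approximation confidence interval on one point of an empirical CCDF, i.e. an exceedance probability (a sketch assuming i.i.d. samples, unlike real correlated propagation time series; all numbers are invented):

```python
import math

def exceedance_ci(samples, threshold, z=1.96):
    """Estimate P(X > threshold) from samples, with a normal-approximation
    95% confidence interval reflecting the variance of the CCDF estimator."""
    n = len(samples)
    p = sum(1 for s in samples if s > threshold) / n   # empirical CCDF value
    half = z * math.sqrt(p * (1 - p) / n)              # binomial std. error
    return max(0.0, p - half), p, min(1.0, p + half)
```

The interval width shrinks as 1/sqrt(n), which is why short measurement campaigns give propagation statistics with large uncertainty, the point the thesis models in detail.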
APA, Harvard, Vancouver, ISO, and other styles
10

Dandach, Hoda. "Prédiction de l'espace navigable par l'approche ensembliste pour un véhicule routier." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP1892/document.

Full text
Abstract:
Les travaux de cette thèse portent sur le calcul d'un espace d'état navigable d'un véhicule routier, ainsi que sur l'observation et l'estimation de son état, à l'aide des méthodes ensemblistes par intervalles. Dans la première partie de la thèse, nous nous intéressons aux problèmes d'estimation d'état relevant de la dynamique du véhicule. Classiquement, l'estimation se fait en utilisant le filtrage de Kalman pour des problèmes d'estimation linéaires ou le filtrage de Kalman étendu pour les cas non linéaires. Ces filtres supposent que les erreurs sur le modèle et sur les mesures sont blanches et gaussiennes. D'autre part, les filtres particulaires (PF), aussi connus comme méthodes de Monte-Carlo séquentielles, constituent souvent une alternative aux filtres de Kalman étendus. Par contre, les performances des filtres PF dépendent surtout du nombre de particules utilisées pour l'estimation, et sont souvent affectées par les bruits de mesures aberrants. Ainsi, l'objectif principal de cette partie du travail est d'utiliser une des méthodes à erreurs bornées, le filtrage par boîtes particulaires (Box Particle Filter, BPF), pour répondre à ces problèmes. Cette méthode généralise le filtrage particulaire à l'aide de boîtes remplaçant les particules. À l'aide de l'analyse par intervalles, l'estimation de certaines variables fortement reliées à la dynamique du véhicule, comme le transfert de charge latérale, le roulis et la vitesse de roulis, est donnée, à chaque instant, sous forme d'un intervalle contenant la vraie valeur simulée. Dans la deuxième partie de la thèse, une nouvelle formalisation du problème de calcul de l'espace navigable de l'état d'un véhicule routier est présentée. Un algorithme de résolution est construit, basé sur le principe de l'inversion ensembliste par intervalles et sur la satisfaction des contraintes. Nous cherchons à caractériser l'ensemble des valeurs de la vitesse longitudinale et de la dérive au centre de gravité qui correspondent à un comportement stable du véhicule : pas de renversement ni dérapage. Pour décrire le risque de renversement, nous avons utilisé l'indicateur de transfert de charge latéral (LTR). Pour décrire le risque de dérapage, nous avons utilisé les dérives des roues. Toutes les variables sont liées géométriquement au vecteur d'état choisi. En utilisant ces relations, l'inversion ensembliste par intervalles est appliquée afin de trouver l'espace navigable de l'état tel que ces deux risques soient évités. L'algorithme SIVIA est implémenté, approximant ainsi cet espace. Une vitesse maximale autorisée au véhicule est déduite. Elle est associée à un angle de braquage donné sur une trajectoire connue.
In this thesis, we aim to characterize a vehicle's stable state domain, as well as to estimate the vehicle state, using interval methods. In the first part of this thesis, we are interested in intelligent vehicle state estimation. The Bayesian approach is one of the most popular approaches to estimation. It is based on computing the probability density function of the state conditioned on the available measurements, which is neither evident nor simple in general. Among the Bayesian approaches, we know the Kalman filter (KF) in its three forms (linear, nonlinear and unscented). All Kalman filters assume unimodal Gaussian state and measurement distributions. As an alternative, the Particle Filter (PF) is a sequential Monte Carlo Bayesian estimator. Contrary to the Kalman filter, the PF is supposed to give more information about the posterior even when it has a multimodal shape or when the noise follows a non-Gaussian distribution. However, the PF is very sensitive to imprecision due to bias or noise, and its efficiency and accuracy depend mainly on the number of propagated particles, which can easily and significantly increase as a result of this imprecision. In this part, we introduce the interval framework to deal with the problems of non-white biased measurements and bounded errors. We use the Box Particle Filter (BPF), an estimator based simultaneously on interval analysis and on the particle approach. We aim to estimate some immeasurable states of the vehicle dynamics using the bounded-error Box Particle algorithm, like the roll angle and the lateral load transfer, which are two dynamic states of the vehicle. The BPF gives a guaranteed estimation of the state vector: the box containing the estimate is guaranteed to contain the real value of the estimated variable as well. In the second part of this thesis, we aim to compute a vehicle stable state domain. An algorithm based on the set inversion principle and constraint satisfaction is used. Considering the longitudinal velocity and the side slip angle at the vehicle centre of gravity, we characterize the set of these two state variables that corresponds to a stable behaviour: neither roll-over nor sliding. Concerning the roll-over risk, we use the lateral transfer ratio (LTR) as a risk indicator. Concerning the sliding risk, we use the wheels' side slip angles. All these variables are related geometrically to the longitudinal velocity and the side slip angle at the centre of gravity. Using these constraints, the set inversion principle is applied in order to define the set of the state variables where the two mentioned risks are avoided. The SIVIA algorithm is implemented. Knowing the vehicle trajectory, a maximal allowed velocity on every part of this trajectory is deduced.
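Set inversion via interval analysis (SIVIA), the algorithm named in this entry, can be sketched in one dimension (a simplification assuming a monotone function, so the endpoint values bound the image of an interval; the thesis applies it to two-dimensional vehicle-dynamics constraints):

```python
def sivia_1d(f, y_lo, y_hi, box, eps=1e-3):
    """1-D SIVIA sketch: classify subintervals of `box` for the constraint
    y_lo <= f(x) <= y_hi, assuming f monotone so that its range on [a, b]
    is [min(f(a), f(b)), max(f(a), f(b))]."""
    inside, boundary = [], []
    stack = [box]
    while stack:
        a, b = stack.pop()
        fa, fb = f(a), f(b)
        lo, hi = min(fa, fb), max(fa, fb)
        if lo >= y_lo and hi <= y_hi:
            inside.append((a, b))        # box entirely feasible
        elif hi < y_lo or lo > y_hi:
            pass                         # box entirely infeasible: discard
        elif b - a < eps:
            boundary.append((a, b))      # undecided thin box
        else:
            m = 0.5 * (a + b)
            stack += [(a, m), (m, b)]    # bisect and recurse
    return inside, boundary
```

The union of `inside` boxes is an inner approximation of the feasible set, and adding the `boundary` boxes gives an outer approximation; the gap between the two shrinks with `eps`.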
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Estimation par interval"

1

Castillo-Santiago, Miguel Ángel, Edith Mondragón-Vázquez, and Roberto Domínguez-Vera. "Sample Data for Thematic Accuracy Assessment in QGIS." In Land Use Cover Datasets and Validation Tools, 85–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90998-7_6.

Full text
Abstract:
We present an approach that is widely used in the field of remote sensing for the validation of single LUC maps. Unlike other chapters in this book, where maps are validated by comparison with other maps with better resolution and/or quality, this approach requires a ground sample dataset, i.e. a set of sites where LUC can be observed in the field or interpreted from high-resolution imagery. Map error is assessed using techniques based on statistical sampling. In general terms, in this approach, the accuracy of single LUC maps is assessed by comparing the thematic map against the reference data and measuring the agreement between the two. When assessing thematic accuracy, three stages can be identified: the design of the sample, the design of the response, and the estimation and analysis protocols. Sample design refers to the protocols used to define the characteristics of the sampling sites, including sample size and distribution, which can be random or systematic. Response design involves establishing the characteristics of the reference data, such as the size of the spatial assessment units, the sources from which the reference data will be obtained, and the criteria for assigning labels to spatial units. Finally, the estimation and analysis protocols include the procedures applied to the reference data to calculate accuracy indices, such as user's and producer's accuracy, the estimated areas covered by each category and their respective confidence intervals. This chapter has two sections in which we present a couple of exercises relating to sampling and response design; the sample size will be calculated, the distribution of sampling sites will be obtained using a stratified random scheme, and finally, a set of reference data will be obtained by photointerpretation at the sampling sites (spatial units). The accuracy statistics will be calculated later in Sect. 5 in chapter "Metrics Based on a Cross-Tabulation Matrix to Validate Land Use Cover Maps" as part of the cross-tabulation exercises. The exercises in this chapter use fine-scale LUC maps obtained for the municipality of Marqués de Comillas in Chiapas, Mexico.
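The estimation protocols mentioned (user's and producer's accuracy from a sample-based confusion matrix) can be sketched as follows; the row/column convention and the counts in the test are assumptions for illustration:

```python
def accuracy_metrics(conf):
    """Accuracy indices from a confusion matrix where conf[i][j] is the
    number of samples mapped as class i and observed as class j."""
    n = len(conf)
    total = sum(sum(row) for row in conf)
    overall = sum(conf[i][i] for i in range(n)) / total
    # user's accuracy: correct / row total (commission errors)
    users = [conf[i][i] / sum(conf[i]) for i in range(n)]
    # producer's accuracy: correct / column total (omission errors)
    producers = [conf[i][i] / sum(conf[j][i] for j in range(n))
                 for i in range(n)]
    return overall, users, producers
```

With stratified random sampling, these proportions would additionally be area-weighted per stratum before computing confidence intervals, a refinement omitted from this sketch.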
APA, Harvard, Vancouver, ISO, and other styles
2

Rankenhohn, Florian, Tido Strauß, and Paul Wermter. "Dianchi Shallow Lake Management." In Terrestrial Environmental Sciences, 69–102. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80234-9_3.

Full text
Abstract:
Lake Dianchi in the Chinese province of Yunnan is a shallow lake that has suffered from algae blooms for years due to high pollution. We conducted a thorough survey of the water quality of the northern part of the lake, called Caohai. This study was intended as the basis for the system understanding of the shallow lake of Caohai. The study consisted of two steps. First we collected available environmental, hydrological and pollution data from Kunming authorities and other sources. It was possible to parameterise a lake model based on this preliminary data set, which supported first estimations of management scenarios. But these first and quick answers came with considerable vagueness, and relevant monitoring data, such as P release from lake-internal sediment, was still missing. Because data uncertainty causes model uncertainty, and model uncertainty causes planning and management uncertainties, we recommended and conducted a thorough sediment and river pollution monitoring campaign in 2017. Examination of the sediment phosphorus release and additional measurements of N and P were crucial for the improvement of the shallow lake model of Caohai. In May 2018 we presented and discussed the results of the StoLaM shallow lake model of Caohai and the outcomes of a set of management scenarios. The StoLaM shallow lake model for Caohai used in SINOWATER indicates that sediment dredging could contribute to the control of algae by limitation of phosphorus, but sediment management can only produce sustainable effects if the overall nutrient input, and especially the phosphorus input from the inflows, is reduced significantly.
APA, Harvard, Vancouver, ISO, and other styles
3

Edge, M. D. "Bayesian estimation and inference." In Statistical Thinking from Scratch, 186–203. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198827627.003.0012.

Full text
Abstract:
Bayesian methods allow researchers to combine precise descriptions of prior beliefs with new data in a principled way. The main object of interest in Bayesian statistics is the posterior distribution, which describes the uncertainty associated with parameters given prior beliefs about them and the observed data. The posterior can be difficult to compute mathematically, but computational methods can give arbitrarily good approximations in most cases. Bayesian point and interval estimates are features of the posterior, such as measures of its central tendency or intervals into which the parameter falls with specified probability. Bayesian hypothesis testing is complicated and controversial, but one relevant tool is the Bayes factor, which compares the probability of observing the data under a pair of distinct hypotheses.
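A worked sketch of the Bayesian interval estimate described in this abstract, using the conjugate Beta-binomial case on a discrete grid so that no special functions are needed (the data, prior and grid size are invented for illustration, and are not taken from Edge's chapter):

```python
import math

def beta_binomial_credible_interval(k, n, a=1.0, b=1.0, level=0.95, grid=10001):
    """Central credible interval for a binomial proportion with k successes
    in n trials under a Beta(a, b) prior, computed on a discrete grid."""
    thetas = [i / (grid - 1) for i in range(grid)]
    def logpost(t):
        # unnormalised log of the Beta(a+k, b+n-k) posterior density
        if t in (0.0, 1.0):
            return float("-inf")
        return (a + k - 1) * math.log(t) + (b + n - k - 1) * math.log(1 - t)
    w = [math.exp(logpost(t)) for t in thetas]
    total = sum(w)
    cdf, lo, hi = 0.0, None, None
    tail = (1 - level) / 2
    for t, wi in zip(thetas, w):      # walk the posterior CDF on the grid
        cdf += wi / total
        if lo is None and cdf >= tail:
            lo = t
        if hi is None and cdf >= 1 - tail:
            hi = t
    return lo, hi
```

The returned interval contains the parameter with 95% posterior probability, which is the direct probability statement that distinguishes credible intervals from frequentist confidence intervals.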
APA, Harvard, Vancouver, ISO, and other styles
4

Prabhakar, C. J. "Analysis of Face Space for Recognition using Interval-Valued Subspace Technique." In Cross-Disciplinary Applications of Artificial Intelligence and Pattern Recognition, 108–27. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-61350-429-1.ch007.

Full text
Abstract:
The major contribution of the research work presented in this chapter is the development of an effective face recognition algorithm using analysis of the face space in an interval-valued subspace. The analysis of face images is used for various purposes such as facial expression classification, gender determination, age estimation, emotion assessment, face recognition, et cetera. The face image analysis research community has developed many techniques for face recognition; one of the successful families of techniques is based on subspace analysis. In the first part of the chapter, the authors discuss the earliest face recognition techniques, which are considered milestones in the roadmap of subspace-based face recognition techniques. The second part presents one of the efficient interval-valued subspace techniques, namely symbolic Kernel Fisher Discriminant analysis (symbolic KFD), in which interval-type features are extracted, in contrast to classical subspace-based techniques where single-valued features are used for face representation and recognition.
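A toy illustration of what interval-valued features buy over single-valued ones: each feature is a (lo, hi) range capturing within-class variation, and classification compares midpoints and half-widths (an invented simplification with made-up prototypes, not the chapter's symbolic KFD, which works in a kernel Fisher subspace):

```python
def interval_distance(u, v):
    """Distance between two interval-valued feature vectors, each a list of
    (lo, hi) pairs: sum of midpoint and half-width differences."""
    d = 0.0
    for (al, ah), (bl, bh) in zip(u, v):
        d += abs((al + ah) / 2 - (bl + bh) / 2)   # midpoint mismatch
        d += abs((ah - al) / 2 - (bh - bl) / 2)   # spread mismatch
    return d

def nearest_class(query, prototypes):
    """Assign the query to the class whose interval prototype is closest."""
    return min(prototypes, key=lambda c: interval_distance(query, prototypes[c]))
```

The half-width term lets the classifier exploit how variable each feature is within a class, information that a single-valued (point) representation discards.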
APA, Harvard, Vancouver, ISO, and other styles
5

Edge, M. D. "Semiparametric estimation and inference." In Statistical Thinking from Scratch, 139–64. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198827627.003.0010.

Full text
Abstract:
Nonparametric and semiparametric statistical methods assume models whose properties cannot be described by a finite number of parameters. For example, a linear regression model that assumes that the disturbances are independent draws from an unknown distribution is semiparametric—it includes the intercept and slope as regression parameters but has a nonparametric part, the unknown distribution of the disturbances. Nonparametric and semiparametric methods focus on the empirical distribution function, which, assuming that the data are really independent observations from the same distribution, is a consistent estimator of the true cumulative distribution function. In this chapter, with plug-in estimation and the method of moments, functionals or parameters are estimated by treating the empirical distribution function as if it were the true cumulative distribution function. Such estimators are consistent. To understand the variation of point estimates, bootstrapping is used to resample from the empirical distribution function. For hypothesis testing, one can either use a bootstrap-based confidence interval or conduct a permutation test, which can be designed to test null hypotheses of independence or exchangeability. Resampling methods—including bootstrapping and permutation testing—are flexible and easy to implement with a little programming expertise.
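The percentile bootstrap described in this abstract can be sketched in a few lines (resampling assumes i.i.d. data; the fixed seed and the data in the test are invented, for reproducibility of the illustration):

```python
import random

def bootstrap_ci_mean(data, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for the mean: resample the
    empirical distribution with replacement, compute the mean of each
    resample, and take quantiles of the resulting bootstrap distribution."""
    rng = random.Random(seed)
    n = len(data)
    means = sorted(sum(rng.choice(data) for _ in range(n)) / n
                   for _ in range(n_boot))
    alpha = (1 - level) / 2
    return means[int(alpha * n_boot)], means[int((1 - alpha) * n_boot) - 1]
```

Sampling from the empirical distribution function stands in for sampling from the unknown true distribution, which is exactly the plug-in principle the chapter builds on.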
APA, Harvard, Vancouver, ISO, and other styles
6

Macedo, Maria Clara Tomé, Renato Nogueira Sousa Neto, Afonso Martins Estevam, Marcelo de Oliveira Sabino, and Isana Mara Aragão Frota. "O MICROBIOMA HUMANO E SUA RELAÇÃO COM O INTERVALO POSTMORTEM." In Avanços e Técnicas em Ciências Forenses, 29–36. Editora Humanize, 2024. http://dx.doi.org/10.29327/5403595.1-3.

Full text
Abstract:
Introduction: Microbiology has attracted growing interest from researchers as a means of criminal investigation and evidence recognition. The microorganisms studied in this field, such as fungi and bacteria, are directly related to crime scenes and play an important role in forensic study, making it possible to establish a relationship with the postmortem interval (PMI) based on the bacterial communities formed on the cadaver. Understanding how the microbial communities of the human body evolve after an individual's death can offer valuable insight for a more precise estimate of the postmortem interval. Objectives: To identify, through scientific evidence, the influence of the human microbiome on the estimation of the postmortem interval. Methodology: The study was based on an integrative literature review; data were collected from scientific sources such as MEDLINE, SciELO and Google Acadêmico, using articles from the last 8 years and the Portuguese descriptors “Microbioma humano'', “Microbiologia Forense'' and “Intervalo pós-morte'', and the English descriptors “Forensic microbiology'', “Postmortem'' and “Human microbiome''. Results and discussion: The articles reviewed showed that analysis of the human microbiome at different stages and under different local conditions, such as the cadaver's natural microbiome, its stage of decomposition, anaerobic bacteria favored by oxygen deprivation during the initial stage, and environmental, seasonal and sex-related factors, directly influences the establishment of the postmortem interval.
Final considerations: The complexity of postmortem microbial processes is thus evident: microbial communities already present in the human body drive putrefaction and the emergence of new bacteria after death, modulated by diverse factors that vary with the specifics of each criminal case. Nevertheless, the potential of the human microbiome as a complementary tool for estimating the postmortem interval is promising, offering new perspectives for solving criminal cases.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhao, Zhiwei, Yingguang Li, Yee Mey Goh, Changqing Liu, and Peter Kinnell. "A Data-Driven Method for Predicting Deformation of Machined Parts Using Sparse Monitored Deformation Data." In Advances in Transdisciplinary Engineering. IOS Press, 2021. http://dx.doi.org/10.3233/atde210068.

Full text
Abstract:
In the aircraft industry, where high-precision geometric control is vital, unexpected component deformation due to the release of internal residual stress can limit geometric accuracy and presents process control challenges. Prediction of component deformation is necessary so that a corrective control strategy can be defined. However, existing prediction methods, which are mainly based on the prediction or measurement of residual stress, are limited, and accurate deformation prediction remains a research challenge. To address this issue, this paper presents a data-driven method for deformation prediction based on in-process monitored deformation data. Deformation, which is caused by an unbalanced internal residual stress field, can be accurately monitored during machining via an instrumented fixture device. The state of the internal stress field within the part is first estimated using the part deformation data collected during machining, and then the deformation caused by a subsequent machining process is predicted. Deep learning is used to establish the estimating module and the predicting module. The estimating module infers the unobservable residual stress field as vectors from sparse deformation data. The inferred vector is then used to predict deformation in the predicting module. The proposed method provides an effective way to predict deformation during the machining of monolithic components, as demonstrated experimentally.
APA, Harvard, Vancouver, ISO, and other styles
8

Bukhari, Rahat, and Rahat Abdul Rehman. "Comparative Study of Conventional Techniques and Functional Nanomaterials for PMI." In Modeling and Simulation of Functional Nanomaterials for Forensic Investigation, 131–41. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-8325-1.ch007.

Full text
Abstract:
The most vital part of a medico-legal investigation is the estimation of the period since death. The procedures now in use for determining this time period depend on a wide range of variables that are frequently out of the forensic examiner's control. Therefore, using nanosensors to analyse changes in biological components has provided a special benefit to precisely estimate the PMI. The evaluation of post-mortem intervals using novel functional nanomaterials techniques is compared in detail to conventional methods in this chapter. It also discusses the difficulties in making an accurate assessment due to changing environmental conditions as well as the benefits of using nanomaterials.
APA, Harvard, Vancouver, ISO, and other styles
9

Brown, Philip J. "Multiple regression and calibration." In Measurement, Regression, and Calibration, 39–50. Oxford University PressOxford, 1994. http://dx.doi.org/10.1093/oso/9780198522454.003.0003.

Full text
Abstract:
Abstract The previous chapter looked at the relationship between two variables, the response Y and the explanatory variable x, and with n observations developed model fitting by least-squares. With a response and explanatory variable pair, it emphasized both the prediction of the future Y, denoted Z, for given x, and estimation and confidence intervals for an unknown x for given Z. In this chapter these ideas are extended to models where more than one variable is available to explain Y. This chapter concentrates on least-squares methods, whereas the following chapter looks at less-standard shrinkage methods. These are capable of dealing sensibly with both non-singular and singular specifications of the linear model.
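The calibration problem sketched above (recovering an unknown x from an observed response Z) can be illustrated with a minimal least-squares example. The numbers and the simple inverse point estimate are illustrative only, not the book's own treatment:

```python
import statistics

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    xbar, ybar = statistics.mean(x), statistics.mean(y)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    return ybar - b * xbar, b

x = [1.0, 2.0, 3.0, 4.0, 5.0]    # known calibration settings
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # observed responses, roughly y = 2x
a, b = fit_line(x, y)

z = 7.0                          # a new response Z with unknown x
x_hat = (z - a) / b              # classical (inverse) point estimate of x
print(f"a={a:.3f}, b={b:.3f}, x_hat={x_hat:.3f}")
```

Confidence intervals for the unknown x then follow from the uncertainty in a, b, and Z, which is exactly the machinery the chapter develops for the multivariable case.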
APA, Harvard, Vancouver, ISO, and other styles
10

Borchers, D. L., and K. P. Burnham. "General formulation for distance sampling." In Advanced Distance Sampling, 6–30. Oxford University PressOxford, 2004. http://dx.doi.org/10.1093/oso/9780198507833.003.0002.

Full text
Abstract:
Abstract Full likelihood functions for distance sampling methods involve assuming probability models for the encounter rate or animal distribution, for the detection process, for cluster size (if animals are detected in clusters), and possibly for other data. Conventional distance sampling (CDS) methods avoid assuming a probability model for encounter rate by using an empirical estimator of its variance and getting confidence intervals assuming density D to be log-normally distributed. Similarly, a point estimator and sampling variance of mean cluster size, E[s] can be obtained in a regression frame-work, so no probability model π(s) for cluster size s in the population need be assumed. Probability models and likelihood inference are conventionally used only for the distance part of the data.
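The log-normal confidence interval mentioned above can be sketched as follows; this is the standard construction from a density estimate and its coefficient of variation, with illustrative numbers rather than values from the chapter:

```python
import math

def lognormal_ci(d_hat, cv, z=1.96):
    """Confidence interval for a density estimate D-hat assumed
    log-normally distributed, given its coefficient of variation cv.
    The multiplier C gives the interval (D/C, D*C)."""
    c = math.exp(z * math.sqrt(math.log(1.0 + cv * cv)))
    return d_hat / c, d_hat * c

lo, hi = lognormal_ci(d_hat=2.5, cv=0.20)  # e.g. 2.5 animals per km^2
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```

Note the interval is multiplicatively symmetric about the estimate (lo * hi = D-hat squared), which keeps the lower bound positive, unlike a normal-theory interval.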
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Estimation par interval"

1

Munene, Isaac. "Estimating On-condition Direct Maintenance Cost (DMC)." In Vertical Flight Society 74th Annual Forum & Technology Display, 1–7. The Vertical Flight Society, 2018. http://dx.doi.org/10.4050/f-0074-2018-12849.

Full text
Abstract:
Aftermarket support has become a key component of cost and competitiveness in the rotorcraft industry. Both the operator and the rotorcraft manufacturer play a role in aftermarket support. Many rotorcraft original equipment manufacturers (OEMs) offer fixed-price maintenance service programs as part of their aftermarket support. Since product support may last well over two decades, the desire for low direct operating cost (DOC) and lower life cycle cost (LCC) has become a more visible consideration in the rotorcraft design phase. Direct maintenance cost (DMC) forms a significant part of the DOC and LCC. A subset of DMC, on-condition maintenance cost, is a category with unspecified maintenance intervals and presents one of the more challenging estimating efforts, particularly on a new rotorcraft program with no history. The approach used for estimating maintenance costs can strongly influence decision making within the OEM while also educating the customer on better maintenance philosophy and planning. Incorrectly minimizing or excluding the effect of on-condition cost within the DMC estimate could have a profound impact on operators and on the service organizations of the OEM. This paper presents a high-level discussion of potential refinements to the Helicopter Association International Economic Committee's Guide for the Presentation of Helicopter Operating Cost Estimates 2010, focusing on estimating the on-condition direct maintenance cost for airframe manufacturers.
APA, Harvard, Vancouver, ISO, and other styles
2

Nitta, Yoshifuru, and Yudai Yamasaki. "Evaluation of Effective Active Site on Pd Methane Oxidation Catalyst in Exhaust Gas of Lean Burn Gas Engine." In ASME 2019 Internal Combustion Engine Division Fall Technical Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/icef2019-7152.

Full text
Abstract:
Abstract Lean-burn gas engines have recently attracted attention in the maritime industry because they can reduce NOx, SOx and CO2 emissions. However, since methane (CH4) is the main component of natural gas, slipped methane, the unburned methane emitted from lean-burn gas engines, likely contributes to global warming. It is thus important to make progress on exhaust aftertreatment technologies for lean-burn gas engines. A palladium (Pd) catalyst for CH4 oxidation is expected to provide a countermeasure for slipped methane, because it can activate at lower exhaust gas temperatures. However, deactivation at higher water (H2O) concentrations must be overcome, because H2O inhibits CH4 oxidation. This study investigates the effects of exhaust gas temperature and gas composition on active Pd catalyst sites to clarify CH4 oxidation performance in the exhaust gas of lean-burn gas engines. The authors developed a method of estimating effective active sites for the Pd catalyst at various exhaust gas temperatures. The estimation method is based on the assumption that the active sites used for the CH4 oxidation process are shared with the active sites used for carbon monoxide (CO) oxidation: molecules of CO chemisorbed on the active sites of the Pd catalyst indicate the effective active sites for the CH4 oxidation process. To clarify the effects of exhaust gas temperature and composition on active Pd catalyst sites, the authors developed an experimental system for the new estimation method. This paper introduces experimental results and verifications of the new method, showing that the chemisorbed CO volume on a Pd/Al2O3 catalyst increases with increasing Pd loading at 250–450 °C, simulating a typical exhaust gas temperature range of lean-burn gas engines. The results provide part of the criteria for the application of Pd catalysts to the reduction of slipped methane in the exhaust gas of lean-burn gas engines.
APA, Harvard, Vancouver, ISO, and other styles
3

Lewander, Magnus, Per Tunestål, and Bengt Johansson. "Cylinder Individual Efficiency Estimation for Online Fuel Consumption Optimization." In ASME 2010 Internal Combustion Engine Division Fall Technical Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/icef2010-35113.

Full text
Abstract:
Engine efficiency is often controlled in an indirect way through combustion timing control. This requires a priori knowledge of where to phase the combustion for different operating points and conditions. With cylinder individual efficiency estimation, control strategies aiming directly at fuel consumption optimization can be developed. This paper presents a method to estimate indicated efficiency using the cylinder pressure trace as input. The proposed method is based on a heat release calculation that takes heat losses into account implicitly using an estimated, CAD resolved polytropic exponent. Experimental results from a multi-cylinder engine show that with this approach, the estimated efficiency error is within 5% for all operating points tested. The final part of the paper is a discussion of how to use the efficiency estimation for feedback control. Different control concepts are presented as well as suggestions on how to handle the non-linear connection between combustion timing and indicated efficiency.
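The flavor of such a pressure-trace-based estimate can be sketched as below. This is a generic heat-release and indicated-work calculation under the polytropic assumption described in the abstract, not the authors' implementation, and the traces are synthetic:

```python
def heat_release(p, v, n_poly):
    """Apparent heat release [J] from cylinder pressure p [Pa] and volume
    v [m^3] traces:  dQ = n/(n-1) * p dV + 1/(n-1) * V dp.
    Heat losses enter implicitly through the polytropic exponent n,
    which may be crank-angle resolved (one value per sample)."""
    q = 0.0
    for i in range(1, len(p)):
        n = n_poly[i]
        dv, dp = v[i] - v[i - 1], p[i] - p[i - 1]
        pm, vm = 0.5 * (p[i] + p[i - 1]), 0.5 * (v[i] + v[i - 1])
        q += n / (n - 1.0) * pm * dv + 1.0 / (n - 1.0) * vm * dp
    return q

def indicated_efficiency(p, v, m_fuel, lhv):
    """Indicated efficiency: cycle work (trapezoidal integral of p dV)
    divided by fuel energy m_fuel * LHV."""
    work = sum(0.5 * (p[i] + p[i - 1]) * (v[i] - v[i - 1])
               for i in range(1, len(p)))
    return work / (m_fuel * lhv)

# Sanity check: a pure polytropic compression (p * v^n = const) releases no heat.
n = 1.35
v_trace = [1.0e-3 - i * 5e-7 for i in range(1001)]        # 1.0 -> 0.5 litre
p_trace = [1.0e5 * (1.0e-3 / vv) ** n for vv in v_trace]  # p*v^n held constant
q = heat_release(p_trace, v_trace, [n] * len(p_trace))
print(f"apparent heat release: {q:.4f} J")
```

On a real trace, the cumulative heat release and the p-V work per cycle would be computed per cylinder, giving the cylinder-individual efficiency signal used for feedback.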
APA, Harvard, Vancouver, ISO, and other styles
4

Kang, Myoungcheol, Stephen Ogaji, Pericles Pilidis, and Changduk Kong. "An Approach to Maintenance Cost Estimation for Aircraft Engines." In ASME Turbo Expo 2008: Power for Land, Sea, and Air. ASMEDC, 2008. http://dx.doi.org/10.1115/gt2008-50564.

Full text
Abstract:
This study presents a detailed analysis of aircraft engine maintenance cost based on the relationships between engine performance and geometric parameters. Estimating engine maintenance cost is very complicated and has traditionally depended upon empirical work based on know-how and a huge database to develop an industrial cost estimation program. Engine maintenance costs are basically influenced by the shop visit rate (SVR), the workscope of each shop visit, the shop visit pattern, and the man-hours and materials used in each shop visit. To estimate these values for a specific engine model, there is a need to develop empirical correlations of engine maintenance. For this study, an engine performance and maintenance database was created. The engine performance data for each major component were simulated at static sea-level conditions by an engine performance code, TURBOMATCH, developed at Cranfield University, U.K., and geometric data were collected from the open literature. Engine maintenance cost data for some current engines were also collected and used. Trend-line equations based on the database were developed for the estimation of shop-visit interval, workscope, man-hours, material cost and life-limited part cost. Comparisons of the results between the trend equations and the original data were carried out. The results show that this approach can give a more reasonable and detailed estimation of engine maintenance cost than older empirical methods.
APA, Harvard, Vancouver, ISO, and other styles
5

Nitta, Yoshifuru, and Yudai Yamasaki. "Effect of Support Materials on Pd Methane Oxidation Catalyst Using Dynamic Estimation Method." In ASME 2020 Internal Combustion Engine Division Fall Technical Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/icef2020-2930.

Full text
Abstract:
Abstract In the maritime industry, lean-burn gas engines are expected to reduce emissions such as NOx, SOx and CO2. On the other hand, slipped methane, the unburned methane (CH4) emitted from lean-burn gas engines, raises concern about its impact on global warming. It is therefore important to make progress on exhaust aftertreatment technologies for lean-burn gas engines. As a countermeasure for slipped methane, a palladium (Pd) catalyst for CH4 oxidation is expected to provide one of the most feasible methods, because it can activate at lower temperatures. However, recent studies have shown that reversible adsorption of water vapor (H2O) inhibits CH4 oxidation on the catalyst and deactivates its CH4 oxidation capacity. CH4 oxidation performance is known to be influenced by the active sites on the Pd catalyst; however, methods for measuring active sites on a Pd catalyst under exhaust gas conditions have not been available. The authors thus proposed a dynamic estimation method for the quantity of effective active sites on a Pd catalyst at exhaust gas temperatures, using the water-gas shift reaction between saturated chemisorbed CO and pulse-induced H2O. The previous study clarified the relationship between adsorbed CO volume and Pd loading at gas engine exhaust gas temperatures and revealed the effects of flow conditions on the estimation of adsorbed CO volume. However, to improve CH4 oxidation performance on Pd catalysts under exhaust gas conditions, it is important to clarify the effects of support materials on active sites. This paper introduces experimental results on the estimation of adsorbed CO volume on Pd catalysts with different support materials, obtained using the dynamic evaluation method. The experimental results show that the Pd/Al2O3 catalyst exhibits a higher chemisorbed CO volume than the Pd/SiO2 and Pd/Al2O3-SiO2 catalysts at 250–450 °C.
These results can provide part of the criteria for the application of Pd catalysts for reducing slipped methane in the exhaust gas of lean-burn gas engines.
APA, Harvard, Vancouver, ISO, and other styles
6

Barclift, Michael, Andrew Armstrong, Timothy W. Simpson, and Sanjay B. Joshi. "CAD-Integrated Cost Estimation and Build Orientation Optimization to Support Design for Metal Additive Manufacturing." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-68376.

Full text
Abstract:
Cost estimation techniques for Additive Manufacturing (AM) have limited synchronization with the metadata of 3D CAD models. This paper proposes a method for estimating AM build costs through a commercial 3D solid modeling program. Using an application programming interface (API), part volume and surface data is queried from the CAD model and used to generate internal and external support structures as solid-body features. The queried data along with manipulation of the part’s build orientation allows users to estimate build time, feedstock requirements, and optimize parts for AM production while they are being designed in a CAD program. A case study is presented with a macro programmed using the SolidWorks API with costing for a metal 3D-printed automotive component. Results reveal that an imprecise support angle can under-predict support volume by 34% and build time by 20%. Orientation and insufficient build volume packing can increase powder depreciation costs by nearly twice the material costs.
APA, Harvard, Vancouver, ISO, and other styles
7

Worthingham, Robert, Tom Morrison, and Guy Desjardins. "Comparison of Estimates From a Growth Model 5 Years After the Previous Inspection." In 2000 3rd International Pipeline Conference. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/ipc2000-208.

Full text
Abstract:
A corrosion growth modelling procedure using repeated inline inspection data has been employed as part of the maintenance program planning for a pipeline in the Alberta portion of the TransCanada system. The methodology of matching corrosion features between the different in-line inspections, and estimating their severity at a future date, is shown to be an excellent proactive cost saving methodology. Throughout this paper estimated 80% confidence intervals for tool measurement error, total prediction error and growth methodology error are given. In this abstract the values have been rounded. For maximum penetration, for the features reported on three inspections, the confidence interval for total prediction error varies from ±12% to ±17%, and for the growth methodology from ±8% to ±10% of the wall thickness (for the 1998 and 1999 dig programs respectively). For features reported on two inspections the confidence interval varies from ±19% to ±22% for total prediction error (1998 and 1999 digs respectively), and is about ±17% for the growth methodology (for both dig programs). The estimated confidence interval for prediction error in failure pressure is about ±560 kPa for the 1998 dig program. For the 1999 dig program a good estimate of the confidence interval for total prediction error could not be obtained. Assuming the failure pressure data obtained from field measurements were perfect, the estimate of the maximum confidence interval was ±850 kPa. For the laser profile measurement field tool, compared to an ultrasonic pencil probe, the confidence interval for penetration is less than ±2% of the wall thickness. The true confidence interval values in some cases are expected to be smaller than reported above for several reasons discussed in this paper.
APA, Harvard, Vancouver, ISO, and other styles
8

Levon, Taylor, Kit Clemons, Ben Zapp, and Tim Foltz. "A Multi-Disciplinary Approach for Well Spacing and Treatment Design Optimization in the Midland Basin Using Lateral Pore Pressure Estimation and Depletion Modeling." In SPE Hydraulic Fracturing Technology Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/204195-ms.

Full text
Abstract:
Abstract With a recent trend in increased infill well development in the Midland basin and other unconventional plays, it has been shown that depletion has a significant impact on hydraulic fracture propagation. This is largely because production drawdown causes in-situ stress changes, resulting in asymmetric fracture growth toward the depleted regions. In turn, this can have a negative impact on production capacity. For the initial part of this study, an infill child well was drilled and completed adjacent to a parent well that had been producing for two years. Due to drilling difficulties, the child well was steered to a new target zone located 125 feet above the original target. However, relative to the original target, treatment data from the new zone indicated abnormal treatment responses leading to a study to evaluate the source of these variations and subsequent mitigation. The initial study was conducted using a pore pressure estimation derived from drill bit geomechanics data to investigate depletion effects on the infill child well. The pore pressure results were compared to the child well treatment responses and bottom hole pressure measurements in the parent well. Following the initial study, additional hydraulic fracture modeling studies were conducted on a separate pad to investigate depletion around the infill wells, determine optimal well spacing for future wells given the level of depletion, and optimize treatment designs for future wells in similar depletion scenarios. A depletion model workflow was implemented based on integrating hydraulic fracture modeling and reservoir analytics for future infill pad development. The geomechanical properties were calibrated by DFIT results and pressure matching of the parent well treatments for the in-situ virgin conditions. Parent well fracture geometries were used in an RTA for an analytical approach of estimating drainage area of the parent wells. 
These were then applied to a depletion profile in the hydraulic fracture model for well spacing analysis and treatment design sensitivities. Results of the initial study indicated that stages in the new, higher interval had higher breakdown pressures than those in the lower interval. Additionally, the child well drilled in the lower interval had normal breakdown pressures in line with the parent well treatments. This suggests that the treatment differences between the wells were ultimately due to depletion by the offset parent well. Based on the modeling efforts, optimal infill well spacing was determined from the on-production time of the parent wells. The optimal treatment designs were also determined under the same conditions to minimize offset frac hits and unnecessary completion costs. This case study presents a multi-disciplinary approach to well spacing and treatment optimization, integrating a novel method of estimating pore pressure with depletion modeling workflows in an inventive way to understand depletion effects on future development.
APA, Harvard, Vancouver, ISO, and other styles
9

Angerer, Juergen, B. Heinzow, D. O. Reimann, W. Knorz, and G. Lehnert. "Waste incineration: estimation of the workers' internal exposure to PCB, PAH chlorophenols and other relevant agents." In Environmental Sensing '92, edited by Tuan Vo-Dinh and Karl Cammann. SPIE, 1993. http://dx.doi.org/10.1117/12.140277.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Toledo, A. Ruiz, A. Ramos Fernandez, and D. K. Anthony. "A comparison of GA objective functions for estimating internal properties of piezoelectric transducers used in medical echo-graphic imaging." In 2010 Pan American Health Care Exchanges (PAHCE 2010). IEEE, 2010. http://dx.doi.org/10.1109/pahce.2010.5474572.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Estimation par interval"

1

Rana, Arnav, Sanjay Tiku, and Aaron Dinovitzer. PR-214-203806-R01 Improve Dent-Cracking Assessment Methods. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), June 2022. http://dx.doi.org/10.55274/r0012227.

Full text
Abstract:
This work was funded in part, under the Department of Transportation, Pipeline and Hazardous Materials Safety Administration. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Pipeline and Hazardous Materials Safety Administration, the Department of Transportation, or the U.S. Government. This project builds on mechanical damage (MD) assessment and management tools, developed on behalf of Pipeline Research Council International (PRCI), Interstate Natural Gas Association of America (INGAA), Canadian Energy Pipeline Association (CEPA), American Petroleum Institute (API), other research organizations and individual pipeline operators and were included in API RP 1183. These include dent shape, restraint condition and interacting feature characterization; operational maximum and cyclic internal pressure characterization, screening tools defining non-injurious dent shapes based on pipe size and operating condition, failure pressure and fatigue assessment tools for dents with/without interacting features (e.g., corrosion, welds, gouges) in the restrained and unrestrained condition, and direction on available remedial action and repair techniques. In completing this development, areas for improvement were identified. The current project enhances previously developed tools being adopted in an industry recommended practice (API RP 1183) for pipeline MD integrity assessment and management considering: - Enhancement of indentation crack formation strain estimation, - Understanding the role of ILI measurement accuracy on dent integrity assessment, and - Quantification of assessment method conservatism to support safety factor definition. 
Safety factors (modeling bias), defined and evaluated in the present study for different fatigue life estimation approaches, refer to the conservatism inherent in the different fatigue life models and are represented as the ratio of experimental lives to predicted lives.
APA, Harvard, Vancouver, ISO, and other styles
2

Mas’ud, Abdulsalam, Sani Damamisau Mohammed, and Yusuf Abdu Gimba. Digitalisation and Subnational Tax Administration in Nigeria. Institute of Development Studies, August 2023. http://dx.doi.org/10.19088/ictd.2023.031.

Full text
Abstract:
Recently, there has been an expansion in the deployment of digital systems and digital IDs among taxing authorities. However, little is known about the extent to which such technologies are being adopted, or about whether the data from them is being used strategically to improve tax administration. Even less is known about this in the context of subnational tax administration, although this could be very relevant in some contexts, such as Nigeria. This study investigates the extent of the adoption and strategic usage of data from e-tax systems and digital IDs among state internal revenue services (SIRSs) in Nigeria. Data was collected through qualitative interviews conducted within the SIRSs – one from each of the country’s six geopolitical zones, and within the Federal Inland Revenue Service (FIRS). The qualitative data from the interviews was evaluated using thematic analysis. The findings revealed that there is scope for improvement in the adoption and usage of data from e-tax systems and digital IDs among the SIRSs. It was also found that the extent of adoption and strategic data usage from e-tax systems by SIRSs likely improves states’ per capita internally generated revenue (IGR), but similar insights on the impact of digital IDs have not been obtained. Lastly, it was found that there are some lessons SIRSs could learn from FIRS in terms of strategic use of data from e-tax systems and digital IDs. Specifically, SIRSs need to integrate an audit risk engine and machine learning for performing analytics into their e-tax systems, and also automate the estimation of annual credits for withholding tax suffered, tax refunds and penalties, as well as tax audit management including case selection, allocation of auditors and generating audit reports. Some policy recommendations are offered that are consistent with these findings.
APA, Harvard, Vancouver, ISO, and other styles
3

Ningthoujam, J., J. K. Clark, T. R. Carter, and H. A. J. Russell. Investigating borehole-density, sonic, and neutron logs for mapping regional porosity variation in the Silurian Lockport Group and Salina Group A-1 Carbonate Unit, Ontario. Natural Resources Canada/CMSS/Information Management, 2024. http://dx.doi.org/10.4095/332336.

Full text
Abstract:
The Oil, Gas and Salt Resources Library (OGSRL) is a repository for data from wells licenced under the Oil, Gas and Salt Resources Act for Ontario. It has approximately 50,000 porosity and permeability drill core analyses on bedrock cores. It also has in analogue format, geophysical logs (e.g., gamma ray, gamma-gamma density, neutron, sonic) from approximately 20,000 wells. A significant challenge for geotechnical and hydrogeological studies of the region is the accessibility of digital data on porosity and permeability. Recent work completed on approximately 12,000 core analyses for the Silurian Lockport Group and Salina Group A-1 Carbonate Unit are geographically concentrated within productive oil and gas pools. An opportunity therefore exists to expand the bedrock porosity characterization for southern Ontario by using geophysical logs collected in open-hole bedrock wells that are more geographically dispersed. As part of this study, hard copy files of analog geophysical logs are converted to digital data (LAS format), followed by quality assessment and quality control (QAQC) to obtain meaningful results. From the digitized geophysical data, density, neutron, and sonic logs are selected to mathematically derive porosity values that are then compared with the corresponding measured core porosity values for the same depth interval to determine the reliability of the respective log types. In this study, a strong positive correlation (R²=0.589) is observed between porosity computed from a density log (density log porosity) and the corresponding core porosity. Conversely, sonic log porosity and neutron porosity show weak (R2 = 0.1738) and very weak (R2 = 0.0574) positive correlation with the corresponding core porosity data. 
This finding can be attributed to different factors (e.g., the condition of the borehole walls and fluids, the type and limitations of the technology at different points in time, knowledge of formation variability for calculations), and as such requires more investigation. The density log measures the bulk density of the formation (solid and fluid phases), and as such the derived porosity values indicate total porosity i.e., interparticle (primary) pore spaces, and vugs and fractures (secondary) pore spaces. The sonic log measures the interval transit time of a compressional soundwave travelling through the formation. High quality first arrival waveforms usually correspond to a route in the borehole wall free of fractures and vugs, which ultimately result in the derived porosity reflecting only primary porosity. As molds, vugs and fractures contribute significantly to the total porosity of the Lockport Group and Salina A-1 Carbonate strata, sonic porosity may not reflect true bulk formation porosity. The neutron porosity log measures the hydrogen index in a formation as a proxy for porosity, however, the current limitations of neutron logging tool fail to account for formation-related complexities including: the gas effect, the chloride effect and the shale effect that can lead to over- or underestimation of formation porosity. As a result, the density log appears to be the most reliable geophysical log in the OGSRL archives for total porosity estimation in the Lockport Group and Salina A-1 Carbonate Unit. Nonetheless, sonic porosity can be combined with density porosity to determine secondary porosity, whereas a combination of density and neutron porosity logs can be used to identify gas-bearing zones.
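The density-log porosity derivation referred to above follows the standard petrophysical relation between bulk, matrix, and fluid densities. A minimal sketch, in which the matrix and fluid densities are assumed, context-dependent values rather than figures from the report:

```python
def density_porosity(rho_bulk, rho_matrix=2.71, rho_fluid=1.0):
    """Total porosity from a gamma-gamma density log:
    phi = (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid).
    Defaults assume a limestone matrix (2.71 g/cm^3) and fresh-water
    pore fluid (1.0 g/cm^3); both must be chosen for the formation."""
    return (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)

phi = density_porosity(rho_bulk=2.55)  # a measured bulk density reading
print(f"density-log porosity: {phi:.3f}")
```

Because the bulk density responds to all pore space, this phi is a total porosity, including the vugs and fractures that the sonic log tends to miss, which is consistent with the correlation results reported above.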
APA, Harvard, Vancouver, ISO, and other styles
4

Galili, Naftali, Roger P. Rohrbach, Itzhak Shmulevich, Yoram Fuchs, and Giora Zauberman. Non-Destructive Quality Sensing of High-Value Agricultural Commodities Through Response Analysis. United States Department of Agriculture, October 1994. http://dx.doi.org/10.32747/1994.7570549.bard.

Full text
Abstract:
The objectives of this project were to develop nondestructive methods for detection of internal properties and firmness of fruits and vegetables. One method was based on a soft piezoelectric film transducer, developed at the Technion, for analysis of fruit response to low-energy excitation. The second method was a dot-matrix piezoelectric transducer from North Carolina State University, developed for contact-pressure analysis of fruit during impact. Two research teams, one in Israel and the other in North Carolina, coordinated their research effort according to the specific objectives of the project, to develop and apply the two complementary methods for quality control of agricultural commodities. In Israel: An improved firmness testing system was developed and tested with tropical fruits. The new system included an instrumented fruit-bed of three flexible piezoelectric sensors and miniature electromagnetic hammers, which served as fruit support and low-energy excitation device, respectively. Resonant frequencies were detected for determination of a firmness index. Two new acoustic parameters were developed for evaluation of fruit firmness and maturity: a damping ratio and a centroid of the frequency response. Experiments were performed with avocado and mango fruits. The internal damping ratio, which may indicate fruit ripeness, increased monotonically with time, while resonant frequencies and firmness indices decreased with time. Fruit samples were tested daily by a destructive penetration test. A fairly high correlation was found in tropical fruits between the penetration force and the new acoustic parameters; a lower correlation was found between the penetration force and the conventional firmness index. Improved table-top firmness testing units, Firmalon, with a data-logging system and on-line data analysis capacity, were built. The new device was used for the full-scale experiments in the next two years, ahead of the original program and BARD timetable.
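The three acoustic parameters named above (resonant frequency, damping ratio, and centroid of the frequency response) can be illustrated on a synthetic impulse response. The sampling rate, resonance, and damping values below are arbitrary assumptions, and the half-power bandwidth estimator is a standard textbook method, not necessarily the project's own algorithm:

```python
import numpy as np

fs = 4000.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
f0, zeta = 120.0, 0.03           # synthetic "fruit" resonance and damping ratio
x = np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

f_res = freqs[np.argmax(spec)]                    # resonant frequency
centroid = np.sum(freqs * spec) / np.sum(spec)    # centroid of the frequency response

# Half-power (-3 dB) bandwidth estimate of damping: zeta ~ (f2 - f1) / (2 * f_res)
half = spec.max() / np.sqrt(2)
band = freqs[spec >= half]
zeta_est = (band.max() - band.min()) / (2 * f_res)
print(f_res, round(centroid, 1), round(zeta_est, 4))
```

A riper (softer) fruit would show a lower `f_res` and a larger `zeta_est`, matching the trends the abstract reports.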
Close cooperation was initiated with local industry for development of both off-line and on-line sorting and quality control of additional agricultural commodities. Firmalon units were produced and operated in major packing houses in Israel, Belgium and Washington State, on mango, avocado, apples, pears, tomatoes, melons and some other fruits, to gain field experience with the new method. The accumulated experimental data from all these activities is still being analyzed to improve firmness sorting criteria and shelf-life prediction curves for the different fruits. The test program in commercial CA storage facilities in Washington State included seven apple varieties (Fuji, Braeburn, Gala, Granny Smith, Jonagold, Red Delicious and Golden Delicious) and the D'Anjou pear variety. FI master curves could be developed for the Braeburn, Gala, Granny Smith and Jonagold apples. These fruits showed a steady ripening process during the test period. Yet, more work should be conducted to reduce scattering of the data and to determine the confidence limits of the method. The nearly constant FI in Red Delicious and the fluctuations of FI in the Fuji apples should be re-examined. Three sets of experiments were performed with Flandria tomatoes. Despite the complex structure of the tomatoes, the acoustic method could be used for firmness evaluation and to follow the ripening evolution with time. Close agreement was achieved between the auction expert evaluation and that of the nondestructive acoustic test, where a firmness index of 4.0 or more indicated grade-A tomatoes. More work is being performed to refine the sorting algorithm and to develop a general ripening scale for automatic grading of tomatoes for the fresh fruit market. Galia melons were tested in Israel under simulated export conditions. It was concluded that the Firmalon is capable of detecting the ripening of melons nondestructively and of sorting the defective fruits out of the export shipment.
The cooperation with local industry resulted in the development of an automatic on-line prototype of the acoustic sensor that may be incorporated into the export quality control system for melons. More interesting is the development of the remote firmness sensing method for sealed CA cool-rooms, where most of the full-year fruit yield is stored for off-season consumption. Hundreds of ripening monitor systems have been installed in major fruit storage facilities and are now being evaluated by the users. If successful, the new method may cause a major change in long-term fruit storage technology. Further uses of the acoustic test method have been considered: monitoring fruit maturity and harvest time; testing fruit samples, or each individual fruit, on entering the storage facility, packing house and auction; and in the supermarket. This approach may result in a full line of equipment for nondestructive quality control of fruits and vegetables, from the orchard or the greenhouse, through the entire sorting, grading and storage process, up to the consumer's table. The developed technology offers a tool to determine the maturity of fruits nondestructively by monitoring their acoustic response to a mechanical impulse while still on the tree. A special device was built and preliminarily tested on mango fruit. Further development is needed to produce a portable, hand-operated sensing device for this purpose. In North Carolina: An analysis method based on an Auto-Regressive (AR) model was developed for detecting the first resonance of fruit from their response to a mechanical impulse. The algorithm included a routine that detects the first resonant frequency from as many sensors as possible. Experiments on Red Delicious apples were performed and their firmness was determined. The AR method allowed the detection of the first resonance and could be fast enough to be utilized in a real-time sorting machine.
Yet, further study is needed to improve the search algorithm of the methods. An impact contact-pressure measurement system and a Neural Network (NN) identification method were developed to investigate the relationships between surface pressure distributions on selected fruits and their respective internal textural qualities. A piezoelectric dot-matrix pressure transducer was developed for the purpose of acquiring time-sampled pressure profiles during impact. The acquired data were transferred to a personal computer, and an accurate visualization of the animated data was presented. A preliminary test with 10 apples was performed. Measurements were made with the contact-pressure transducer in two different positions. Complementary measurements were made on the same apples using the Firmalon and Magness-Taylor (MT) testers. A three-layer neural network was designed. Two-thirds of the contact-pressure data were used as training input data, with the corresponding MT data as training target data; the remaining data were used as NN checking data. Six samples randomly chosen from the ten measured samples, together with their corresponding Firmalon values, were used as the NN training and target data, respectively. The remaining four samples' data were input to the NN. The NN results were consistent with the firmness tester values, so if more training data were obtained, the output should be more accurate. In addition, the firmness tester values were not consistent with the MT firmness tester values. The NN method developed in this study appears to be a useful tool to emulate the MT firmness test results without destroying the apple samples. To obtain a more accurate estimation of MT firmness, a much larger training data set is required. When the larger sensitive area of the pressure sensor being developed in this project becomes available, the entire contact 'shape' will provide additional information and the neural network results will be more accurate.
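The AR-based first-resonance detection described above can be sketched with a Yule-Walker fit followed by inspection of the model's poles. This is only one common way to realize the idea; the model order, sampling rate, and synthetic impulse response are assumptions for illustration, not details from the report:

```python
import numpy as np

def ar_resonance(x, fs, order=4):
    """Estimate the dominant resonant frequency by fitting an AR model
    (Yule-Walker equations) and reading the least-damped pole's angle."""
    x = x - np.mean(x)
    # Sample autocorrelation at lags 0..order
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    # Toeplitz normal equations R a = r[1..order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1 : order + 1])          # AR coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))     # characteristic roots
    dominant = poles[np.argmax(np.abs(poles))]        # least-damped mode
    return abs(np.angle(dominant)) * fs / (2.0 * np.pi)

fs = 4000.0
t = np.arange(0, 0.25, 1 / fs)
x = np.sin(2 * np.pi * 110 * t) * np.exp(-8 * t)      # synthetic impulse response
print(round(ar_resonance(x, fs), 1))
```

Because the pole computation needs only a few autocorrelation lags and a small linear solve, an approach like this is cheap enough for the real-time sorting context the abstract mentions.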
It has been shown that the impact information can be utilized in the determination of internal quality factors of fruit. Until now,
APA, Harvard, Vancouver, ISO, and other styles
