Academic literature on the topic 'Calibration sets'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Calibration sets.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Calibration sets"

1. Ozdemir, Durmus, Matt Mosley, and Ron Williams. "Hybrid Calibration Models: An Alternative to Calibration Transfer." Applied Spectroscopy 52, no. 4 (April 1998): 599–603. http://dx.doi.org/10.1366/0003702981943932.

Abstract:
A new procedure for calibrating multiple instruments is presented in which spectra from each are used simultaneously during the construction of multivariate calibration models. The application of partial least-squares (PLS) and genetic regression (GR) to the problem of generating these hybrid calibrations is presented. Spectra of ternary mixtures of methylene chloride, ethyl acetate, and methanol were collected on a dispersive and a Fourier transform spectrometer. Calibration models were generated by using differing numbers of spectra from each instrument simultaneously in the calibration and prediction sets, and then validated by using a set of spectra from each instrument separately. Calibration models were found that perform well on both instruments, even when only a single spectrum from the second instrument was used during the calibration process. As a benchmark, comparison with PLS showed that GR is more effective than PLS in building these hybrid calibration models.
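
For readers who want a concrete starting point, here is a minimal sketch of the pooled-spectra idea using scikit-learn's PLS on synthetic two-instrument data. All concentrations, spectra, and the instrument-B gain and offset are invented, and the paper's genetic-regression variant is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-ins for spectra of ternary mixtures on two instruments;
# instrument B differs from A by a gain and a baseline offset.
n_a, n_b, n_wl = 30, 3, 200
conc = rng.uniform(0, 1, size=(n_a + n_b, 3))
conc /= conc.sum(axis=1, keepdims=True)           # mole fractions sum to 1
pure = rng.random((3, n_wl))                      # pure-component "spectra"
X = conc @ pure + 0.01 * rng.standard_normal((n_a + n_b, n_wl))
X[n_a:] = 1.05 * X[n_a:] + 0.02                   # instrument-B response

# Hybrid calibration: pool spectra from both instruments in a single model,
# here with only three instrument-B spectra in the calibration set.
pls = PLSRegression(n_components=5)
pls.fit(X, conc)

# Validate on fresh instrument-B-like spectra.
conc_val = rng.uniform(0, 1, size=(20, 3))
conc_val /= conc_val.sum(axis=1, keepdims=True)
X_val = 1.05 * (conc_val @ pure) + 0.02 + 0.01 * rng.standard_normal((20, n_wl))
print("RMSEP:", mean_squared_error(conc_val, pls.predict(X_val)) ** 0.5)
```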

2. Zhang, Bao Long, Shao Jing Zhang, Wei Qi Ding, and Hui Shuang Shi. "Fisheye Lens Distortion Calibration Based on the Lens Characteristic Curves." Applied Mechanics and Materials 519-520 (February 2014): 636–39. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.636.

Abstract:
The fisheye lens is an ultra-wide-angle lens that produces large distortion. To cover a wide field of light, barrel distortion is deliberately added to the optical system. In some cases, however, this distortion is not acceptable and must be calibrated out. Most traditional distortion calibration methods rely on a planar calibration target. This paper discusses the design of fisheye lenses, which makes the formation of the distortion clear, so that a simple and effective calibration method can be understood. Unlike common camera calibration methods, the proposed method uses the lens's characteristic curve directly and thus avoids the errors that arise during the calibration test. Multiple sets of experimental verifications show that the method is effective and feasible.
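
A common characteristic-curve assumption for fisheye optics is the equidistant projection r = f·θ, versus r = f·tan θ for a rectilinear lens; the sketch below undistorts image points under that assumed model. The focal length and image centre are invented, and this is not the specific curve used in the paper.

```python
import numpy as np

# Equidistant fisheye model as a stand-in for a lens characteristic curve:
# image radius r = f * theta, whereas a rectilinear lens gives r = f * tan(theta).
f = 400.0                                  # focal length in pixels (assumed)

def undistort(points, centre):
    """Map fisheye image points to rectilinear (pinhole) coordinates."""
    d = points - centre
    r = np.hypot(d[:, 0], d[:, 1])
    theta = r / f                          # invert r = f * theta
    scale = np.where(r > 0, f * np.tan(theta) / r, 1.0)
    return centre + d * scale

pts = np.array([[620.0, 400.0], [900.0, 700.0]])
print(undistort(pts, centre=np.array([640.0, 480.0])))
```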

3. Alsam, Ali, and Graham Finlayson. "Metamer sets without spectral calibration." Journal of the Optical Society of America A 24, no. 9 (2007): 2505. http://dx.doi.org/10.1364/josaa.24.002505.

4. Fearn, Tom. "Flat Calibration Sets: At What Price?" NIR news 18, no. 8 (December 2007): 16–17. http://dx.doi.org/10.1255/nirn.1055.

5. Alarid-Escudero, Fernando, Richard F. MacLehose, Yadira Peralta, Karen M. Kuntz, and Eva A. Enns. "Nonidentifiability in Model Calibration and Implications for Medical Decision Making." Medical Decision Making 38, no. 7 (September 24, 2018): 810–21. http://dx.doi.org/10.1177/0272989x18792283.

Abstract:
Background. Calibration is the process of estimating parameters of a mathematical model by matching model outputs to calibration targets. In the presence of nonidentifiability, multiple parameter sets solve the calibration problem, which may have important implications for decision making. We evaluate the implications of nonidentifiability on the optimal strategy and provide methods to check for nonidentifiability. Methods. We illustrate nonidentifiability by calibrating a 3-state Markov model of cancer relative survival (RS). We performed 2 different calibration exercises: 1) only including RS as a calibration target and 2) adding the ratio between the 2 nondeath states over time as an additional target. We used the Nelder-Mead (NM) algorithm to identify parameter sets that best matched the calibration targets. We used collinearity and likelihood profile analyses to check for nonidentifiability. We then estimated the benefit of a hypothetical treatment in terms of life expectancy gains using different, but equally good-fitting, parameter sets. We also applied collinearity analysis to a realistic model of the natural history of colorectal cancer. Results. When only RS is used as the calibration target, 2 different parameter sets yield similar maximum likelihood values. The high collinearity index and the bimodal likelihood profile on both parameters demonstrated the presence of nonidentifiability. These different, equally good-fitting parameter sets produce different estimates of the treatment effectiveness (0.67 v. 0.31 years), which could influence the optimal decision. By incorporating the additional target, the model becomes identifiable with a collinearity index of 3.5 and a unimodal likelihood profile. Conclusions. In the presence of nonidentifiability, equally likely parameter estimates might yield different conclusions. Checking for the existence of nonidentifiability and its implications should be incorporated into standard model calibration procedures.
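
The collinearity check mentioned in the abstract can be computed directly from a sensitivity (Jacobian) matrix. The sketch below follows the standard definition of Brun et al. (normalise the columns, then take one over the square root of the smallest eigenvalue of the normalised cross-product); the Jacobian of two nearly redundant exponential parameters is a toy example.

```python
import numpy as np

def collinearity_index(jacobian):
    """Collinearity index: large values flag nonidentifiable parameter sets."""
    S = jacobian / np.linalg.norm(jacobian, axis=0)   # unit-length columns
    eigvals = np.linalg.eigvalsh(S.T @ S)
    return 1.0 / np.sqrt(eigvals.min())

# Toy Jacobian of model outputs w.r.t. two nearly redundant parameters.
t = np.linspace(0.1, 5.0, 50)
J = np.column_stack([np.exp(-t), np.exp(-1.05 * t)])
print(collinearity_index(J))   # large value -> practically nonidentifiable
```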

6. Dinku, Tufa, and Emmanouil N. Anagnostou. "Investigating Seasonal PR–TMI Calibration Differences." Journal of Atmospheric and Oceanic Technology 25, no. 7 (July 1, 2008): 1228–37. http://dx.doi.org/10.1175/2007jtecha977.1.

Abstract:
Seasonal differences in the calibration of overland passive microwave rain retrieval are investigated using Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and precipitation radar (PR). Four geographic regions from southern Africa, South Asia, the Amazon basin, and the southeastern United States are selected. Three seasons are compared for each region. Two scenarios of algorithm calibration are considered. In the first, the parameter sets are derived by calibrating the TMI algorithm with PR in each season. In the second scenario, common parameter sets are derived from the combined dataset of all three seasons. The parameter sets from both scenarios are then applied to the validation dataset of each season to determine the effect of seasonal calibration. Furthermore, calibration parameters from one season are also applied to another season, and results are compared against those derived using the season’s own parameters. Appreciable seasonal differences are observed for the U.S. region, while there are no significant differences between using individual seasonal calibration and the all-season calibration for the other regions. However, using one season’s parameter set to retrieve rainfall for another season is associated with increased uncertainty. It is also shown that the performance of the retrieval varies by season.

7. Peiris, K. H. S., G. G. Dull, R. G. Leffler, and S. J. Kays. "Near-infrared Spectrometric Method for Nondestructive Determination of Soluble Solids Content of Peaches." Journal of the American Society for Horticultural Science 123, no. 5 (September 1998): 898–905. http://dx.doi.org/10.21273/jashs.123.5.898.

Abstract:
A nondestructive method for measuring the soluble solids (SS) content of peaches [Prunus persica (L.) Batsch] was developed using near-infrared (NIR) spectrometry. NIR transmittance in the 800 to 1050 nm region was measured for four cultivars of peaches (`Blake', `Encore', `Red Haven', and `Winblo'), over a period of three seasons (1993 through 1995). Each fruit was scanned on both halves keeping the suture away from the incident light beam. Soluble solids contents of flesh samples taken from corresponding scanned areas were determined using a refractometer. Multiple linear regression models using two wavelengths were developed with second derivative spectral data and laboratory measurements of SS content. Multiple correlation coefficients (R) for individual cultivar calibrations within a single season ranged from 0.76 to 0.98 with standard error of calibration (SEC) values from 0.35% to 1.22%. Selected spectra and corresponding SS data in individual cultivar calibration data sets were combined to create season and cultivar calibration data sets to cover the entire range of SS contents within the season or within the cultivar. These combined calibrations resulted in R values of 0.92 to 0.97 with SEC values ranging from 0.37% to 0.79%. Simple correlations of validations (r) ranged from 0.20 to 0.94 and the standard error of prediction (SEP) ranged from 0.49% to 1.63% while the bias varied from -0.01% to -2.62%. Lower r values and higher SEP and bias values resulted when individual cultivar calibrations were used to predict SS levels in other cultivar validation data sets. Cultivar calibrations, season calibrations and the overall calibration predicted SS content of all validation data sets with a smaller bias and SEP and with higher r values. These results indicate that NIR spectrometry is suitable for rapid nondestructive determination of SS in peaches. Feasible applications of the method include packinghouse sorting of peaches for sweetness and parent and progeny fruit quality assessment in peach breeding programs. Using this technique fruit may be sorted into two or three sweetness classes. The technique may also potentially be extended to other fruit.
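
The pretreatment-plus-two-wavelength pipeline is straightforward to mock up: a Savitzky-Golay second derivative followed by a two-wavelength linear fit. All wavelengths, band shapes and soluble solids values below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)

# Stand-in NIR transmittance spectra over 800-1050 nm, with soluble solids
# (SS) encoded in the amplitude of one synthetic absorption band.
wl = np.arange(800, 1051)                  # 1 nm steps
n = 60
ss = rng.uniform(8, 16, n)                 # soluble solids, %
spectra = (np.exp(-((wl - 905) / 30.0) ** 2)[None, :] * ss[:, None]
           + np.exp(-((wl - 980) / 40.0) ** 2)[None, :] * 5.0
           + 0.05 * rng.standard_normal((n, wl.size)))

# Second-derivative pretreatment, then a two-wavelength multiple linear model.
d2 = savgol_filter(spectra, window_length=15, polyorder=2, deriv=2, axis=1)
i1, i2 = np.argmin(np.abs(wl - 905)), np.argmin(np.abs(wl - 980))
A = np.column_stack([d2[:, i1], d2[:, i2], np.ones(n)])
coef, *_ = np.linalg.lstsq(A, ss, rcond=None)
pred = A @ coef
print("SEC: %.3f" % np.std(ss - pred, ddof=3))
```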

8. Park, Byung-Seo, Woosuk Kim, Jin-Kyum Kim, Eui Seok Hwang, Dong-Wook Kim, and Young-Ho Seo. "3D Static Point Cloud Registration by Estimating Temporal Human Pose at Multiview." Sensors 22, no. 3 (January 31, 2022): 1097. http://dx.doi.org/10.3390/s22031097.

Abstract:
This paper proposes a new technique for performing 3D static-point cloud registration after calibrating a multi-view RGB-D camera using a 3D (three-dimensional) joint set. Consistent feature points are required to calibrate a multi-view camera, and accurate feature points are necessary to obtain high-accuracy calibration results. In general, a special tool, such as a chessboard, is used to calibrate a multi-view camera. This paper instead uses the joints of a human skeleton as feature points, so that calibration can be performed efficiently without special tools. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D joint set obtained through pose estimation as feature points. Since the human body information captured by each camera may be incomplete, the joint set predicted from its images may also be incomplete. After efficiently integrating a plurality of incomplete joint sets into one joint set, the multi-view cameras can be calibrated using the combined joint set to obtain extrinsic matrices. To increase the accuracy of calibration, multiple joint sets are used for optimization through temporal iteration. We show through experiments that it is possible to calibrate a multi-view camera using a large number of incomplete joint sets.
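
Once corresponding joint positions are available in two camera frames, the extrinsic rotation and translation can be recovered by rigid (Kabsch/Procrustes) alignment. The sketch below shows only that step; the paper's merging of incomplete joint sets and its temporal optimisation are omitted.

```python
import numpy as np

def rigid_align(P, Q):
    """Return R, t such that Q ~= R @ P + t for 3xN corresponding points."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(2)
P = rng.random((3, 17))                      # 17 joints seen by camera A
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true *= np.sign(np.linalg.det(R_true))     # force a proper rotation
Q = R_true @ P + np.array([[0.5], [0.0], [1.0]])   # same joints in camera B
R, t = rigid_align(P, Q)
print(np.allclose(R @ P + t, Q, atol=1e-8))  # extrinsics recovered
```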

9. Wetherill, G. Z., and I. Murray. "The spread of the calibration set in near-infrared reflectance spectroscopy." Journal of Agricultural Science 109, no. 3 (December 1987): 539–44. http://dx.doi.org/10.1017/s0021859600081752.

Abstract:
Frequently in near-infrared reflectance spectroscopy, a calibration is developed using very restricted data sets, e.g. material from one season, a small area or of a limited type; consequently, the predictions may have limited validity. This paper describes the use of both restricted and wide calibration sets for the prediction of crude protein in grass, silage and hay. Results show that predictions from the wider calibration sets are often as good as or better than predictions from restricted calibration sets. Therefore the use of wide calibration sets should be considered much more frequently in near-infrared reflectance spectroscopy.
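
The restricted-versus-wide contrast is easy to reproduce on simulated data: the sketch below compares two equally sized calibration sets, one drawn from a narrow slice of the response range and one spanning the full spread, on held-out samples outside the narrow range. The model, data and sizes are all invented.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)

# Simulated "samples": 50 predictor variables and one response.
n, p = 400, 50
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) * 0.3 + 0.5 * rng.standard_normal(n)

order = np.argsort(y)
narrow = order[:100]          # calibration set from the low end only
wide = order[::4]             # same size (100), spanning the full range
test = order[201::2]          # held-out samples outside the narrow range

for name, idx in [("narrow", narrow), ("wide", wide)]:
    model = Ridge(alpha=1.0).fit(X[idx], y[idx])
    rmse = mean_squared_error(y[test], model.predict(X[test])) ** 0.5
    print(name, round(rmse, 3))
```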

10. Moody, Jordan N., Reid Redden, Faron A. Pfeiffer, Ronald Pope, and John W. Walker. "PSV-15 Using near infrared reflectance spectroscopy to predict lab scoured yield in Rambouillet sheep." Journal of Animal Science 99, Supplement_3 (October 8, 2021): 303. http://dx.doi.org/10.1093/jas/skab235.558.

Abstract:
Lab scoured yield (LSY) is a major indicator of wool quality. LSY is used for the valuation of wool in commercial settings and can be used by growers as selection criteria for breeding stock. Current laboratory methods for LSY are costly and labor intensive. Evaluation of fleece core samples using Near-Infrared Reflectance Spectroscopy (NIR) may present an efficient, cost-effective alternative to predict LSY. Lamb and yearling fleece core samples from flocks originating from Texas were scanned on a FOSS 6500 spectrometer. Constituent data were obtained from the Bill Sims Wool and Mohair Laboratory using ASTM methodology. LSY ranged from 48–68%. Spectral data were pretreated with a 14 nm moving average and Savitzky-Golay 2nd derivative. Eight outlier spectra were removed. Samples were parsed from the center of the distribution to minimize the Dunn effect, creating calibration (n = 108) and test (n = 41) sets. Calibrations were executed using a partial least squares regression on spectra from 1100 to 2492 nm. Test set calibration statistics for LSY were: r2=0.64, RMSE=3.39, and slope=0.91. Independent validation statistics for LSY using spectra for different years were: r2=0.33, RMSE=3.69, and slope=0.29. RMSE for independent validation and lab methods on side samples are similar. Between-flock independent validations were less promising. Accuracy of laboratory methods for estimating yield is 2 percentage units. NIRS calibrations can be improved by developing calibration sets with a uniform distribution, which can be difficult within flocks because of the small number of fleeces in the tails of the distribution. These data demonstrate that when calibration and test sets are developed such that test samples are drawn from the calibration population, NIR is a reliable predictor of LSY. However, when test samples are drawn from populations dissimilar to the calibration set, the reliability of NIR predictions is greatly reduced.
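
One standard way to build the more uniform calibration sets recommended above is Kennard-Stone (farthest-point) selection; a minimal sketch on toy spectra follows, with the 108/41 split sizes borrowed from the abstract purely for flavour.

```python
import numpy as np

def kennard_stone(X, k):
    """Pick k samples that cover the feature space uniformly."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise sq. dist.
    picked = list(np.unravel_index(d.argmax(), d.shape))  # two farthest points
    while len(picked) < k:
        rest = [i for i in range(len(X)) if i not in picked]
        # farthest-point criterion: maximise distance to the nearest pick
        nearest = d[np.ix_(rest, picked)].min(axis=1)
        picked.append(rest[int(nearest.argmax())])
    return picked

rng = np.random.default_rng(4)
X = rng.random((149, 20))          # e.g. 149 pretreated fleece spectra
cal = kennard_stone(X, 108)        # calibration set, sized as in the abstract
test = [i for i in range(len(X)) if i not in cal]
print(len(cal), len(test))
```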

Dissertations / Theses on the topic "Calibration sets"

1. Prateepasen, Asa. "Tool wear monitoring in turning using fused data sets of calibrated acoustic emission and vibration." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/5415.

Abstract:
The main aim of this research is to develop an on-line tool wear condition monitoring intelligent system for single-point turning operations. This is to provide accurate and reliable information on the different states of tool wear. Calibrated acoustic emission and vibration techniques were implemented to monitor the progress of wear on carbide tool tips. Previous research has shown that acoustic emission (AE) is sensitive to tool wear. However, AE, as a monitoring technique, is still not widely adopted by industry. This is because it is as yet impossible to achieve repeatable measurements of AE. The variability is due to inconsistent coupling of the sensor with structures and the fact that the tool structure may have different geometry and material properties. Calibration is therefore required so that the extent of variability becomes quantifiable, and hence accounted for or removed altogether. Proper calibration needs a well-defined and repeatable AE source. In this research, various artificial sources were reviewed in order to assess their suitability as an AE calibration source for the single-point machining process. Two artificial sources were selected for studying in detail: an air jet and a pulsed laser; the former produces continuous-type AE and the latter burst-type AE. Since the air jet source has a power spectrum resembling closely the AE produced from single-point machining and since it is readily available in a machine shop, not to mention its relative safety compared to laser, an air-jet source is a more appealing choice. The calibration procedure involves setting up an air jet at a fixed stand-off distance from the top rake of the tool tip, applying in sequence a set of increasing pressures and measuring the corresponding AE. It was found that the root-mean-square value of the AE obtained is linearly proportional to the pressure applied. Thus, irrespective of the layout of the sensor and AE source in a tool structure, AE can be expressed in terms of the common currency of 'pressure' using the calibration curve produced for that particular layout. Tool wear stages can then be defined in terms of the 'pressure' levels. In order to improve the robustness of the monitoring system, in addition to AE, vibration information is also used. In this case, the acceleration at the tool tip in the tangential and feed directions is measured. The coherence function between these two signals is then computed. The coherence is a function of the vibration frequency and has a value ranging from 0 to 1, corresponding to no correlation and full correlation respectively between the two acceleration signals. The coherence function method is an attempt to provide a solution which is relatively insensitive to the dynamics and the process variables except tool wear. Three features were identified as sensitive to tool wear: AErms, and the coherence function of the acceleration at the natural frequency of the tool holder (2.5–5.5 kHz) and at the high-frequency end (18–25 kHz), respectively. A belief network, based on Bayes' rule, was created providing fusion of data from AE and vibration for tool wear classification. The conditional probabilities required for the belief network to operate were established from examples. These examples were presented to the belief network as a file of cases. The file contains the three features mentioned earlier, together with cutting conditions and the tool wear states. Half of the data in this file was used for training while the other half was used for testing the network. The performance of the network gave an overall classification error rate of 1.6% with the WD acoustic emission sensor and an error rate of 4.9% with the R30 acoustic emission sensor.
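
Of the features above, the coherence function is the easiest to reproduce with SciPy. The sketch below estimates it between two synthetic acceleration signals that share a band-limited component; the sample rate, resonance band and noise levels are invented.

```python
import numpy as np
from scipy.signal import butter, coherence, filtfilt

rng = np.random.default_rng(5)

fs = 50_000                                   # sample rate, Hz (assumed)
n = fs                                        # one second of data

# Shared band-limited component around the tool holder's natural frequency,
# plus independent noise on each accelerometer channel.
b, a = butter(4, [2500, 5500], btype="band", fs=fs)
common = filtfilt(b, a, rng.standard_normal(n))
acc_tangential = common + 0.5 * rng.standard_normal(n)
acc_feed = 0.7 * common + 0.5 * rng.standard_normal(n)

f, Cxy = coherence(acc_tangential, acc_feed, fs=fs, nperseg=2048)
band = (f >= 2500) & (f <= 5500)              # natural-frequency band
print("mean coherence, 2.5-5.5 kHz band:", round(float(Cxy[band].mean()), 2))
```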

2. Söhl, Jakob. "Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16732.

Abstract:
Central limit theorems and confidence sets are studied in two different but related nonparametric inverse problems, namely in the calibration of an exponential Lévy model and in the deconvolution model. In the first set-up, an asset is modeled by an exponential of a Lévy process, option prices are observed and the characteristic triplet of the Lévy process is estimated. We show that the estimators are almost surely well-defined. To this end, we prove an upper bound for hitting probabilities of Gaussian random fields and apply this to a Gaussian process related to the estimation method for Lévy models. We prove joint asymptotic normality for estimators of the volatility, the drift, the intensity and for pointwise estimators of the jump density. Based on these results, we construct confidence intervals and sets for the estimators. We show that the confidence intervals perform well in simulations and apply them to option data of the German DAX index. In the deconvolution model, we observe independent, identically distributed random variables with additive errors and we estimate linear functionals of the density of the random variables. We consider deconvolution models with ordinary smooth errors. Then the ill-posedness of the problem is given by the polynomial decay rate with which the characteristic function of the errors decays. We prove a uniform central limit theorem for the estimators of translation classes of linear functionals, which includes the estimation of the distribution function as a special case. Our results hold in situations, for which a square-root-n-rate can be obtained, more precisely, if the Sobolev smoothness of the functionals is larger than the ill-posedness of the problem.

3. FURLANETTO, GIULIA. "Quantitative reconstructions of climatic series in mountain environment based on paleoecological and ecological data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241319.

Abstract:
Montane vegetation is known to be particularly sensitive to climate changes. The strong elevational climatic gradient that characterises mountain areas results in a steep ecological gradient, with several ecotones occurring in a small area. Pollen sequences investigated in or shortly above the modern timberline ecotone are ideal archives for analyzing the relationships between climate and ecosystems. Quantitative reconstruction of past climate conditions from fossil pollen records requires understanding modern pollen representation along climatic and ecological gradients. The aims of this PhD research are the development of modern pollen-vegetation-climate elevational transects; the processing and validation of models for evaluating pollen-climate relationships along elevational gradients; the search for new high-resolution natural archives that provide proxy data for reconstructing paleoenvironmental and palaeoclimatic changes during the Holocene; the application of these models to pollen-stratigraphical data; and the comparison of the results with different proxy-based reconstructions. The importance of local elevational transects of modern pollen samples with site-specific temperatures as a tool for paleoclimate reconstructions in the Alps was demonstrated. Two elevational transects (La Thuile Valley and Upper Brembana Valley) were developed to derive consistent local pollen-climate correlations, to find sensitive pollen taxa useful for paleoclimate reconstructions, and to estimate the effects of local parameters (elevational lapse rate, climate, uphill pollen transport and human impact); they were also used as test sets to evaluate pollen-climate models based on calibration sets extracted from the European Modern Pollen Database. Relationships among modern pollen assemblages, vegetation and climate along an elevational gradient in the Upper Brembana Valley were investigated: modern pollen assemblages (pollen traps and moss samples), vegetation, elevation and climate were recorded at 16 sampling sites placed along an elevational gradient stretching from 1240 m asl to 2390 m asl. The results of the CCA show generally good agreement with previous studies, which identified elevation as the main gradient in the variation of modern pollen and vegetation assemblages in mountain areas. The stratigraphic study of paleoecological and sedimentary proxies in the Armentarga peat bog allowed the vegetation and climate history of the last 10 ka to be reconstructed in a high-elevation, oceanic district of the Italian Alps. Quantitative reconstructions of Tjuly and Pann were obtained and validated by applying numerical transfer functions built on an extensive pollen-climate calibration dataset. The palaeobotanical record of the Armentarga peat bog has shown this elevational vegetation arrangement to be primarily driven by a Middle to Late Holocene precipitation increase, substantially independent from the millennial sequence of thermal anomalies already known from other high-elevation Alpine proxies (i.e. glaciers, timberline, chironomids, speleothems). Changes in annual precipitation occurred in three main steps during the Holocene, starting with a moderately humid early Holocene marked by the early occurrence of Alnus viridis dwarf forests and followed by a first step of precipitation increase starting at 6.2 ka cal BP. A prominent step occurred at the Middle to Late Holocene transition, dated between 4.7 and 3.9 ka at the Armentarga site, which led to present values typical of oceanic mountain climates (Pann 1700-1850 mm); it was probably accompanied by increased snowfall and runoff, and had a major impact on timberline depression and grassland expansion.

4. Paudel, Danda Pani. "Local and global methods for registering 2D image sets and 3D point clouds." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS077/document.

Abstract:
In this thesis, we study the problem of registering 2D image sets and 3D point clouds under three different acquisition set-ups. The first set-up assumes that the image sets are captured using 2D cameras that are fully calibrated and coupled, or rigidly attached, with a 3D sensor. In this context, the point cloud from the 3D sensor is registered directly to the asynchronously acquired 2D images. In the second set-up, the 2D cameras are internally calibrated but uncoupled from the 3D sensor, allowing them to move independently with respect to each other. The registration for this set-up is performed using a Structure-from-Motion reconstruction emanating from images and planar patches representing the point cloud. The proposed registration method is globally optimal and robust to outliers. It is based on the theory of Sum-of-Squares polynomials and a Branch-and-Bound algorithm. The third set-up consists of uncoupled and uncalibrated 2D cameras. The image sets from these cameras are registered to the point cloud in a globally optimal manner using a Branch-and-Prune algorithm. Our method is based on a Linear Matrix Inequality framework that establishes direct relationships between 2D image measurements and 3D scene voxels.

5. Söhl, Jakob [Verfasser], Markus [Akademischer Betreuer] Reiß, Vladimir [Akademischer Betreuer] Spokoiny, and Richard [Akademischer Betreuer] Nickl. "Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution / Jakob Söhl. Gutachter: Markus Reiß ; Vladimir Spokoiny ; Richard Nickl." Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://d-nb.info/103457258X/34.

6. DEAK, SZABOLCS. "Essays on fiscal policy: calibration, estimation and policy analysis." Doctoral thesis, Università Bocconi, 2011. https://hdl.handle.net/11565/4054119.

7. Fery, Natacha [Verfasser]. "Nearshore wind-wave modelling in semi-enclosed seas : cross calibration and application / Natacha Fery." Kiel : Universitätsbibliothek Kiel, 2017. http://d-nb.info/1139253069/34.

8. Zhou, Alexandre. "Etude théorique et numérique de problèmes non linéaires au sens de McKean en finance." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1128/document.

Abstract:
This thesis is dedicated to the theoretical and numerical study of two problems which are nonlinear in the sense of McKean in finance. In the first part, we study the calibration of a local and stochastic volatility model taking into account the prices of European vanilla options observed in the market. This problem can be rewritten as a stochastic differential equation (SDE) nonlinear in the sense of McKean, due to the presence in the diffusion coefficient of a conditional expectation of the stochastic volatility factor computed w.r.t. the solution to the SDE. We obtain existence in the particular case where the stochastic volatility factor is a jump process with a finite number of states. Moreover, we obtain weak convergence at order 1 for the Euler scheme discretizing in time the SDE nonlinear in the sense of McKean for general stochastic volatility factors. In the industry, Guyon and Henry-Labordère proposed in [JGPHL] an efficient calibration procedure which consists in approximating the conditional expectation using a kernel estimator such as the Nadaraya-Watson one. We also introduce a numerical half-step scheme and study the associated particle system that we compare with the algorithm presented in [JGPHL]. In the second part of the thesis, we tackle the pricing of derivatives with initial margin requirements, a recent problem that appeared along with new regulation since the 2008 financial crisis. This problem can be modelled by an anticipative backward stochastic differential equation (BSDE) with dependence in the law of the solution in the driver. We show that the equation is well posed and propose an approximation of its solution by standard linear BSDEs when the liquidation duration in case of default is small. Finally, we show that the computation of the solutions to the standard BSDEs can be improved thanks to the multilevel Monte Carlo technique introduced by Giles in [G].
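
The Nadaraya-Watson device mentioned above amounts to a kernel-weighted average of the volatility factor across particles. A minimal sketch with a Gaussian kernel follows; the particle values and bandwidth are invented and no SDE is simulated.

```python
import numpy as np

rng = np.random.default_rng(6)

def nadaraya_watson(s_query, s_particles, v_particles, h):
    """Kernel estimate of E[V | S = s] from particle pairs (S_i, V_i)."""
    w = np.exp(-0.5 * ((s_query[:, None] - s_particles[None, :]) / h) ** 2)
    return (w * v_particles).sum(axis=1) / w.sum(axis=1)

# Toy particles standing in for (spot, stochastic volatility factor) pairs.
S = rng.lognormal(mean=0.0, sigma=0.2, size=5000)
V = 0.04 + 0.02 * np.log(S) + 0.01 * rng.standard_normal(5000)

grid = np.linspace(0.7, 1.4, 8)
print(nadaraya_watson(grid, S, V, h=0.05).round(4))
```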

9. GURNY, Martin. "Default probabilities in credit risk management: estimation, model calibration, and backtesting." Doctoral thesis, Università degli studi di Bergamo, 2015. http://hdl.handle.net/10446/61848.

Abstract:
This doctoral thesis is devoted to estimation and examination of default probabilities (PDs) within credit risk management and comprises three various studies. In the first study, we introduce a structural credit risk model based on stable non-Gaussian processes in order to overcome distributional drawbacks of the classical Merton model. Following the Moody’s KMV estimation methodology, we conduct an empirical comparison between the results obtained from the Merton model and the stable Paretian one. Our results suggest that PDs are generally underestimated by the Merton model and that the stable Lévy model is substantially more sensitive to the periods of financial crises. The second study is devoted to examination of the performance of static and multi-period credit-scoring models for determining PDs of financial institutions. Using an extensive sample of U.S. commercial banks provided by the FFIEC, we focus on evaluating the performance of the considered scoring techniques. We find that our models provide a high predictive accuracy in distinguishing between default and non-default financial institutions. Despite the difficulty of predicting defaults in the financial sector, the proposed models perform very well also in comparison to results on scoring techniques for the corporate sector. Finally, in the third study, we examine the relationship between distress risk and returns of U.S. renewable energy companies. Using the Expected Default Frequency (EDF) measure obtained from Moody’s KMV, we demonstrate that there is a positive cross-sectional relationship between returns and evidence for a distress risk premium in this sector. The positively priced distress premium is also confirmed by investigating returns corrected for common Fama and French and Carhart risk factors. We further show that raw and risk-adjusted returns of value-weighted portfolios that take a long position in the 20% most distressed stocks and a short position in the 20% safest stocks generally outperforms the S&P 500 index throughout our sample period.
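
The classical Merton probability of default that the first study generalises reduces to a distance-to-default formula. A textbook sketch follows, with asset value and volatility taken as given rather than backed out from equity data as in the Moody's KMV methodology.

```python
import numpy as np
from scipy.stats import norm

def merton_pd(V, D, mu, sigma, T=1.0):
    """Merton model: P(assets fall below debt at horizon T)."""
    dd = (np.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(-dd)          # default probability = N(-distance)

# Firm with assets 120, debt 100, asset drift 5%, asset volatility 25%.
print(f"1-year PD: {merton_pd(120.0, 100.0, 0.05, 0.25):.4%}")
```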

10. Fiorin, Lucio. "Essays on Quantization in Financial Mathematics." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427145.

Abstract:
This thesis is devoted to the study of some applications of quantization to Financial Mathematics, especially to option pricing and calibration of financial data. Quantization is a technique that comes originally from numerical probability, and consists in approximating random variables and stochastic processes taking infinitely many values, with a discrete version of them, in order to simplify the quadrature algorithms for the computation of expected values. The purpose of this thesis is to show the great flexibility that quantization can have in the area of numerical probability and option pricing. In the literature, often there are ad hoc methods for a particular type of model or derivative, but no general framework seems to exist. Finite difference methods are heavily affected by the curse of dimensionality, while Monte Carlo methods need intense computational effort in order to have good precision, and are not designed for calibration purposes. Quantization can give an alternative methodology for a broad class of models and derivatives. The aim of the thesis is twofold: first, the extension of the literature about quantization to a broad class of models, namely local and stochastic volatility models, affine, pure jumps and polynomial processes, is an interesting theoretical exercise in itself. In fact, every time we deal with a different model we have to take into consideration the properties of the process and therefore the quantization algorithm must be adapted. Second, it is important to consider the computational results of the new types of quantization introduced. Indeed, the algorithms that we have developed turn out to be fast and numerically stable, and these aspects are very relevant, as we can overcome some of the issues present in the literature for other types of approach. The first line of research deals with a technique called Recursive Marginal Quantization. Introduced in Pagès and Sagna (2015), this methodology exploits the conditional distribution of the Euler scheme of a one dimensional stochastic differential equation in order to construct a step-by-step approximation of the process. In this thesis we deal with the generalization of this technique to systems of stochastic differential equations, in particular to the case of stochastic volatility models. The Recursive Marginal Quantization of multidimensional stochastic processes allows us to price European and path dependent options, in particular American options, and to perform calibration on financial data, thus providing an alternative to, and sometimes an improvement on, the usual Monte Carlo techniques. The second line of research takes a different perspective on quantization. Instead of using discretization schemes in order to compute the distribution of a stochastic process, we exploit the properties of the characteristic function and of the moment generating function for a broad class of processes. We consider the price process at maturity as a random variable, and we focus on the quantization of the stochastic variable, instead of focusing on the quantization of the whole stochastic process. This gives a faster and more precise technology for the pricing of options, and allows the quantization of a huge set of models for which the Recursive Marginal Quantization cannot be applied or is not numerically competitive.
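
At its core, quantization of a terminal price replaces expectations by finite sums over an optimised grid. The sketch below runs a plain one-dimensional Lloyd iteration on simulated lognormal prices and prices a call from the grid; this is far simpler than the recursive marginal quantization developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

def lloyd(samples, n_points, iters=60):
    """1D optimal-quantization grid and weights via Lloyd's fixed point."""
    grid = np.quantile(samples, (np.arange(n_points) + 0.5) / n_points)
    for _ in range(iters):
        edges = (grid[1:] + grid[:-1]) / 2          # Voronoi cell boundaries
        cells = np.searchsorted(edges, samples)     # nearest grid point
        for j in range(n_points):                   # centroid update
            sel = samples[cells == j]
            if sel.size:
                grid[j] = sel.mean()
    edges = (grid[1:] + grid[:-1]) / 2
    probs = np.bincount(np.searchsorted(edges, samples),
                        minlength=n_points) / samples.size
    return grid, probs

# Terminal price of a driftless lognormal model, quantised on 10 points.
S_T = 100.0 * np.exp(-0.02 + 0.2 * rng.standard_normal(100_000))
grid, probs = lloyd(S_T, 10)
print("quantised ATM call:", (probs * np.maximum(grid - 100.0, 0)).sum())
```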

Books on the topic "Calibration sets"

1. Ragin, Charles C. Measurement Versus Calibration: A Set-Theoretic Approach. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0008.

Abstract:
This article is organized around the distinction between 'measurement' and 'calibration'. Its main message is that fuzzy sets, unlike conventional variables, must be calibrated. It also argues that fuzzy sets provide a middle path between quantitative and qualitative measurement. It explores common measurement practices in quantitative and qualitative social research, and demonstrates that fuzzy sets resonate with both the measurement concerns of qualitative researchers, where the goal often is to distinguish between relevant and irrelevant variation, and the measurement concerns of quantitative researchers, where the goal is the precise placement of cases relative to each other. Current practices in quantitative social science undercut serious attention to calibration. Set-theoretic analysis without careful calibration of set membership is an exercise in futility.
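
Ragin's direct method of calibration maps an interval-scale variable onto fuzzy-set membership through three substantive anchors and the log-odds metric (roughly +3 at the full-membership anchor, 0 at the crossover, -3 at the full-non-membership anchor). A minimal sketch with hypothetical income anchors:

```python
import numpy as np

def calibrate(x, full_non, crossover, full_in):
    """Direct-method calibration: anchors -> log odds -> membership in [0, 1]."""
    dev = np.asarray(x, dtype=float) - crossover
    scale = np.where(dev >= 0, 3.0 / (full_in - crossover),
                               -3.0 / (full_non - crossover))
    logodds = dev * scale
    return 1.0 / (1.0 + np.exp(-logodds))

# Example: calibrating income per capita (all anchor values hypothetical).
income = np.array([2_500, 8_000, 20_000, 40_000])
print(calibrate(income, full_non=2_500, crossover=10_000, full_in=30_000).round(3))
```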

2. Post-Launch Calibration of Satellite Sensors (ISPRS Book Series, ISSN 1572-3348). Taylor & Francis, 2004.

3. Wright, A. G. Collection and counting efficiency. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199565092.003.0010.

Abstract:
Standards laboratories can provide a photocathode calibration for quantum efficiency, as a function of wavelength, but their measurements are performed with the photomultiplier operating as a photodiode. Each photoelectron released makes a contribution to the photocathode current but, if it is lost or fails to create secondary electrons at d1, it makes no contribution to anode current. This is the basis of collection efficiency, F. The anode detection efficiency, ε‎, allied to F, refers to the counting efficiency of output pulses. The standard method for determining F involves photocurrent, anode current, count rate, and the use of highly attenuating filters; F may also be measured using methods based on single-electron responses (SERs), shot noise, or the SER at the first dynode.
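
The standard method described above boils down to comparing the photoelectron rate implied by the cathode current, after a known attenuation, with the anode count rate actually registered. A back-of-envelope sketch with invented numbers:

```python
# All numbers below are invented for illustration.
E_CHARGE = 1.602e-19       # elementary charge, C

i_k = 2.0e-9               # cathode photocurrent without filters, A
attenuation = 1.0e-6       # combined transmission of the stacked filters
counted = 9.5e3            # anode pulses per second with filters in place

expected = i_k * attenuation / E_CHARGE      # photoelectrons per second
print(f"F * eps ~ {counted / expected:.2f}")  # joint collection x counting eff.
```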

Book chapters on the topic "Calibration sets"

1. El-Melegy, Moumen T., and Nagi H. Al-Ashwal. "Lens Distortion Calibration Using Level Sets." In Lecture Notes in Computer Science, 356–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11567646_30.

2. Olsen, Wendy. "Calibration of Fuzzy Sets, Calibration of Measurement: A Realist Synthesis." In Systematic Mixed-Methods Research for Social Scientists, 157–74. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93148-3_7.

3. Duan, Huixian, Lin Mei, Jun Wang, Lei Song, and Na Liu. "Properties of Central Catadioptric Circle Images and Camera Calibration." In Rough Sets and Knowledge Technology, 229–39. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11740-9_22.

4. Hwang, Yongho, and Hyunki Hong. "Calibration of Omnidirectional Camera by Considering Inlier Distribution." In Rough Sets and Current Trends in Computing, 815–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11908029_84.

5. Yin, Ye, Zhitao Zhang, Deying Ke, and Chun Zhu. "An Automatic Virtual Calibration of RF-Based Indoor Positioning with Granular Analysis." In Rough Sets and Knowledge Technology, 557–68. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11740-9_51.

6. Johansson, Ulf, Ernst Ahlberg, Henrik Boström, Lars Carlsson, Henrik Linusson, and Cecilia Sönströd. "Handling Small Calibration Sets in Mondrian Inductive Conformal Regressors." In Statistical Learning and Data Sciences, 271–80. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17091-6_22.

7. Rio, Simon, Alain Charcosset, Tristan Mary-Huard, Laurence Moreau, and Renaud Rincent. "Building a Calibration Set for Genomic Prediction, Characteristics to Be Considered, and Optimization Approaches." In Methods in Molecular Biology, 77–112. New York, NY: Springer US, 2022. http://dx.doi.org/10.1007/978-1-0716-2205-6_3.

Abstract:
The efficiency of genomic selection strongly depends on the prediction accuracy of the genetic merit of candidates. Numerous papers have shown that the composition of the calibration set is a key contributor to prediction accuracy. A poorly defined calibration set can result in low accuracies, whereas an optimized one can considerably increase accuracy compared to random sampling, for the same size. Alternatively, optimizing the calibration set can be a way of decreasing the costs of phenotyping by enabling similar levels of accuracy compared to random sampling but with fewer phenotypic units. We present here the different factors that have to be considered when designing a calibration set, and review the different criteria proposed in the literature. We classified these criteria into two groups: model-free criteria based on relatedness, and criteria derived from the linear mixed model. We introduce criteria targeting specific prediction objectives including the prediction of highly diverse panels, biparental families, or hybrids. We also review different ways of updating the calibration set, and different procedures for optimizing phenotyping experimental designs.
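
As a taste of the model-free, relatedness-based criteria the chapter reviews, the sketch below greedily selects a calibration set whose members are close to the prediction candidates while penalising redundancy within the set. The marker data and the simple score are invented; this is not the chapter's CDmean criterion.

```python
import numpy as np

rng = np.random.default_rng(8)

M = rng.choice([-1.0, 0.0, 1.0], size=(300, 1000))   # marker genotypes
M -= M.mean(axis=0)
G = M @ M.T / M.shape[1]                             # genomic relationship

candidates = np.arange(200, 300)     # genotypes to be predicted
pool = list(range(200))              # genotypes available for phenotyping
picked = []
for _ in range(50):                  # calibration set of size 50
    score = [G[i, candidates].mean()
             - (G[i, picked].mean() if picked else 0.0)   # redundancy penalty
             for i in pool]
    picked.append(pool.pop(int(np.argmax(score))))
print(sorted(picked)[:10])           # first few selected genotypes
```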

8. Engler, Camila, Carlos Marcelo Pais, Silvina Saavedra, Emanuel Juarez, and Hugo Leonardo Rufiner. "Prediction of the Impact of the End of year Festivities on the Local Epidemiology of COVID-19 Using Agent-Based Simulation with Hidden Markov Models." In Computational Science and Its Applications – ICCSA 2022, 61–75. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10522-7_5.

Abstract:
Towards the end of 2020, as people changed their usual behavior due to end-of-year festivities, increasing the frequency of meetings and the number of people who attended them, the dynamics of the local COVID-19 epidemic changed. Since the beginning of this pandemic, we have been developing, calibrating and validating a local agent-based model (AbcSim) that can predict the evolution of intensive care unit occupancy and deaths from data contained in the state electronic medical records, together with sociological, climatic, health and geographic information from public sources. In addition, daily symptomatic and asymptomatic cases and other epidemiological variables of interest, disaggregated by age group, can be forecast. Through a set of Hidden Markov Models, AbcSim reproduces the transmission of the virus associated with the movements and activities of people in this city, considering the behavioral changes typical of local holidays. The calibration and validation were performed on official data from the city of La Rioja, Argentina. The results demonstrate the usefulness of these models for predicting possible outbreaks, so that decision-makers can implement the policies needed to avoid the collapse of the health system.
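
The Markov backbone of such an agent-based model can be sketched compactly: each agent carries a disease-state chain whose infection probability is scaled up during a festivity window. All states, rates, seeding and the holiday window below are invented and far simpler than AbcSim's hidden Markov structure.

```python
import numpy as np

rng = np.random.default_rng(0)

STATES = ["S", "E", "I", "R"]       # susceptible/exposed/infectious/removed

def transition_matrix(p_inf):
    return np.array([
        [1 - p_inf, p_inf, 0.0, 0.0],   # S -> E with probability p_inf
        [0.0,       0.7,   0.3, 0.0],   # E incubates ~3 days
        [0.0,       0.0,   0.9, 0.1],   # I clears in ~10 days
        [0.0,       0.0,   0.0, 1.0],   # R is absorbing
    ])

def simulate(n_agents=10_000, days=60, base_p=0.02,
             holidays=range(28, 36), boost=2.5):
    state = np.zeros(n_agents, dtype=int)
    state[rng.choice(n_agents, 20, replace=False)] = 2   # seed infections
    daily_infectious = []
    for day in range(days):
        contact = base_p * (boost if day in holidays else 1.0)
        p_inf = contact * np.mean(state == 2)            # mass-action mixing
        cum = np.cumsum(transition_matrix(p_inf)[state], axis=1)
        u = rng.random(n_agents)
        state = np.minimum((u[:, None] > cum).sum(axis=1), 3)
        daily_infectious.append(int(np.sum(state == 2)))
    return daily_infectious

print(simulate()[::7])   # weekly snapshots of the infectious count
```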

9. Yeo, Rachel Hui-Min, Minyue Zhang, and Aung Phyo Wai Aung. "Evaluating the Effectiveness of Audio, Visual and Behavioural Calibrations on EEG-Based Relaxation Training." In IRC-SET 2020, 449–60. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9472-4_38.

10. Kleinfeller, Nikolai, Christopher M. Gehb, Maximilian Schaeffner, Christian Adams, and Tobias Melz. "Assessment of Model Uncertainty in the Prediction of the Vibroacoustic Behavior of a Rectangular Plate by Means of Bayesian Inference." In Lecture Notes in Mechanical Engineering, 264–77. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77256-7_21.

Abstract:
Designing the vibroacoustic properties of thin-walled structures is of particularly high practical relevance in the design of vehicle structures. The vibroacoustic properties of thin-walled structures, e.g., vehicle bodies, are usually designed using finite element models. Additional development effort, e.g., experimental tests, arises if the quality of the model predictions is limited due to inherent model uncertainty. Model uncertainty of finite element models usually occurs in the modeling process due to simplifications of the geometry or boundary conditions. The latter highly affect the vibroacoustic properties of a thin-walled structure. The stiffness of the boundary condition is often assumed to be infinite or zero in the finite element model, which can lead to a discrepancy between the measured and the calculated vibroacoustic behavior. This paper compares two different boundary condition assumptions for the finite element (FE) model of a simply supported rectangular plate in their capability to predict the vibroacoustic behavior. The two boundary conditions are of increasing complexity in assuming the stiffness. In a first step, a probabilistic model parameter calibration via Bayesian inference for the boundary-condition-related parameters of the two FE models is performed. For this purpose, a test stand for simply supported rectangular plates is set up and the experimental data is obtained by measuring the vibrations of the test specimen by means of scanning laser Doppler vibrometry. In a second step, the model uncertainty of the two finite element models is identified. For this purpose, the prediction error of the vibroacoustic behavior is calculated. The prediction error describes the discrepancy between the experimental and the numerical data. Based on the distribution of the prediction error, which is determined from the results of the probabilistic model calibration, the model uncertainty is assessed and the model that most adequately predicts the vibroacoustic behavior is identified.
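
Probabilistic calibration of a boundary-stiffness parameter can be illustrated with a plain Metropolis sampler. The toy stiffness-to-frequency model, noise level and flat prior below are invented stand-ins for the paper's finite element models.

```python
import numpy as np

rng = np.random.default_rng(9)

def model(theta):
    """Toy map from boundary stiffness to a first natural frequency, Hz."""
    return 60.0 + 15.0 * np.log(theta)

theta_true, sigma = 4.0, 0.5
data = model(theta_true) + sigma * rng.standard_normal(20)   # "measurements"

def log_post(theta):
    if theta <= 0:
        return -np.inf                       # flat prior on theta > 0
    return -0.5 * np.sum((data - model(theta)) ** 2) / sigma**2

chain, theta = [], 1.0
lp = log_post(theta)
for _ in range(20_000):                      # random-walk Metropolis
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

post = np.array(chain[5_000:])               # discard burn-in
print(f"theta: {post.mean():.2f} +/- {post.std():.2f}")
```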
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Calibration sets"

1

Friedlander, B. "Array self calibration using multiple data sets." In 2016 50th Asilomar Conference on Signals, Systems and Computers. IEEE, 2016. http://dx.doi.org/10.1109/acssc.2016.7869129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Draxler, Karel, Jan Hlavacek, and Renata Styblikova. "Calibration of Instrument Current Transformer Test Sets." In 2019 International Conference on Applied Electronics (AE). IEEE, 2019. http://dx.doi.org/10.23919/ae.2019.8866993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Eppeldauer, George P. "Uniform calibration of night vision goggles and test sets." In Optics/Photonics in Security and Defence, edited by David A. Huckridge and Reinhard R. Ebert. SPIE, 2007. http://dx.doi.org/10.1117/12.737814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lao, Lingling, Prakash Murali, Margaret Martonosi, and Dan Browne. "Designing Calibration and Expressivity-Efficient Instruction Sets for Quantum Computing." In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2021. http://dx.doi.org/10.1109/isca52012.2021.00071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mohns, E., A. Mortara, H. Cayci, E. Houtzager, S. Fricke, M. Agustoni, and B. Ayhan. "Calibration of Commercial Test Sets for Non-Conventional Instrument Transformers." In 2017 IEEE International Workshop on Applied Measurements for Power Systems (AMPS). IEEE, 2017. http://dx.doi.org/10.1109/amps.2017.8078324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Agustoni, Marco, and Alessandro Mortara. "A calibration setup for IEC 61850-9-2 test sets." In 2016 Conference on Precision Electromagnetic Measurements (CPEM 2016). IEEE, 2016. http://dx.doi.org/10.1109/cpem.2016.7540644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Abtahi, Mansour, Hodjat Pendar, Aria Alasty, and Gholamreza Vossoughi. "Calibration of Hexaglide Parallel Manipulators Using Only Input Joint Variables." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-66162.

Full text
Abstract:
In the application of parallel robots, it is necessary to calibrate the geometric parameters to improve the positioning accuracy for accurate task performance. Traditionally, system calibration requires measuring a number of robot poses using an external measuring device; this process is often time-consuming, expensive and ill-suited to on-line calibration. In this paper, a methodical way of self-calibrating the Hexaglide parallel robot is introduced. The method requires only measurements of the input joint variables in several sets of configurations in which, within each set, the center of the end-effector is fixed but the orientations differ. Simulations give an idea of the number of points that must be measured, the number of orientations at each point, and the effect of noise on the calibration accuracy.
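The self-calibration idea can be sketched as a nonlinear least-squares problem over the geometric parameters, with residuals between predicted and measured joint variables across several orientations at a fixed center. The `joint_vars` function below is an invented stand-in for the Hexaglide inverse kinematics, and the poses are treated as known for brevity, which sidesteps the identifiability questions the paper actually addresses.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the inverse kinematics: six joint variables as a
# function of six geometric parameters g and a pose (center x, angle theta).
def joint_vars(g, pose):
    x, theta = pose
    return g * (1.0 + 0.2 * np.sin(theta + np.arange(6))) + x

g_true = np.linspace(1.0, 1.5, 6)                       # "unknown" geometry
poses = [(0.5, t) for t in np.linspace(0.0, np.pi, 8)]  # fixed center, varied orientation
measured = [joint_vars(g_true, p) for p in poses]       # measured input joint variables

def residuals(g):
    return np.concatenate([joint_vars(g, p) - m
                           for p, m in zip(poses, measured)])

sol = least_squares(residuals, x0=np.ones(6))
print("recovered geometric parameters:", np.round(sol.x, 3))
```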
APA, Harvard, Vancouver, ISO, and other styles
8

Teillet, P. M., Brian L. Markham, and Richard R. Irish. "Landsat radiometric cross-calibration: extended analysis of tandem image data sets." In Remote Sensing, edited by Roland Meynart, Steven P. Neeck, and Haruhisa Shimoda. SPIE, 2005. http://dx.doi.org/10.1117/12.626324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Abtahi, Mansour, Hodjat Pendar, Aria Alasty, and Gholamreza Vossoughi. "Kinematic Calibration of the Hexaglide Parallel Robot Using a Simple Measurement System." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-66133.

Full text
Abstract:
Because of errors in the geometric parameters of parallel robots, it is necessary to calibrate them to improve the positioning accuracy for accurate task performance. Traditionally, system calibration requires measuring a number of robot poses using an external measuring device; this process is often time-consuming, expensive and ill-suited to on-line calibration. In this paper, a methodical way of self-calibrating the Hexaglide parallel robot is introduced. The method requires only measurements of the input joint variables and of the positioning errors relative to the desired position, in several sets of configurations in which, within each set, the desired position is fixed but the orientations of the moving platform differ. Because the measurements are relative, the method can be carried out with a simple measurement device. Simulations give an idea of the number of desired points, the number of orientations at each point, and the effect of noise on the calibration accuracy.
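The relative-measurement variant can be sketched in the same least-squares form, except the residuals now compare predicted and measured deviations from the shared desired position. Again, `forward_kinematics` is an invented placeholder rather than the Hexaglide model.

```python
import numpy as np
from scipy.optimize import least_squares

# Invented 3-parameter "robot": the realized position depends on the
# geometric parameters g and the commanded joint variables.
def forward_kinematics(g, joints):
    return g * (1.0 + 0.1 * np.tanh(joints))

g_true = np.array([0.10, -0.05, 0.20])
desired = np.zeros(3)                                  # shared desired position
joint_sets = [np.array([a, -a, 2 * a]) for a in np.linspace(0.1, 1.0, 6)]
measured_err = [forward_kinematics(g_true, j) - desired for j in joint_sets]

def residuals(g):
    # Predicted deviation from the desired position vs. the measured one
    return np.concatenate([
        (forward_kinematics(g, j) - desired) - e
        for j, e in zip(joint_sets, measured_err)
    ])

sol = least_squares(residuals, x0=np.zeros(3))
print("recovered parameters:", np.round(sol.x, 3))     # approaches g_true
```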
APA, Harvard, Vancouver, ISO, and other styles
10

Fang, Yaling, Zhongke Shi, and Jinliang Cao. "Calibration of an interrupted traffic flow system using NGSIM trajectory data sets." In 2014 11th World Congress on Intelligent Control and Automation (WCICA). IEEE, 2014. http://dx.doi.org/10.1109/wcica.2014.7053542.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Calibration sets"

1

Teillet, P. M., G. Fedosejevs, R. R. Irish, J. Barker, and B. L. Markham. Landsat-7 ETM+ and Landsat-5 TM Cross-Calibration Based on Tandem Data Sets. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2000. http://dx.doi.org/10.4095/219729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Crow, H. L., K. D. Brewer, T. J. Cartwright, S. Gaines, D. Heagle, A. J. M. Pugin, and H. A. J. Russell. New core and downhole geophysical data sets from the Bells Corners Borehole Calibration Facility Ottawa, Ontario. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328837.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Teillet, P. M., J. Barker, B. L. Markham, R. R. Irish, G. Fedosejevs, and J. C. Storey. Radiometric cross-calibration of the Landsat-7 ETM+ and Landsat-5 TM sensors based on tandem data sets. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2001. http://dx.doi.org/10.4095/219720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Brown, K. A., and J. W. Glenn. Calibration of Target SECs Based on Single Beam Transports. Office of Scientific and Technical Information (OSTI), May 1996. http://dx.doi.org/10.2172/1132423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baral, Aniruddha, Jeffrey Roesler, M. Ley, Shinhyu Kang, Loren Emerson, Zane Lloyd, Braden Boyd, and Marllon Cook. High-volume Fly Ash Concrete for Pavements Findings: Volume 1. Illinois Center for Transportation, September 2021. http://dx.doi.org/10.36501/0197-9191/21-030.

Full text
Abstract:
High-volume fly ash concrete (HVFAC) has improved durability and sustainability properties at a lower cost than conventional concrete, but its early-age properties, such as strength gain, setting time, and air entrainment, can present challenges for application to concrete pavements. This research report supports the implementation of HVFAC for pavement applications by providing guidelines for HVFAC mix design, testing protocols, and new tools for better quality control of HVFAC properties. Calorimeter tests were performed to evaluate the effects of fly ash sources, cement–fly ash interactions, chemical admixtures, and limestone replacement on the setting times and hydration reaction of HVFAC. To better target the initial air-entraining agent dosage for HVFAC, a calibration curve was developed between the air-entraining dosage required to achieve 6% air content and the fly ash foam index test. Further, a digital foam index test was developed to make this test more consistent across labs and operators. For more rapid prediction of hardened HVFAC properties, such as compressive strength, resistivity, and diffusion coefficient, an oxide-based particle model was developed. An HVFAC field test section was also constructed to demonstrate the implementation of a noncontact ultrasonic device for determining the final set time and the ideal time to initiate saw cutting. Additionally, a maturity method was successfully implemented that estimates the in-place compressive strength of HVFAC through wireless thermal sensors. An HVFAC mix design procedure using the tools developed in this project, such as the calorimeter test, foam index test, and particle-based model, is proposed to assist engineers in implementing HVFAC pavements.
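The maturity method mentioned above rests on a standard temperature-time relation (the Nurse-Saul index of ASTM C1074, M = Σ (T − T0) Δt). The sketch below computes the index from logged temperatures and converts it to strength with a logarithmic relation; the datum temperature and the coefficients a and b are illustrative and would in practice be fitted to the specific HVFAC mix.

```python
import numpy as np

T0 = -10.0          # datum temperature, deg C (a common choice)

def maturity(temps_c: np.ndarray, dt_hours: float) -> float:
    """Nurse-Saul temperature-time factor in degC-hours."""
    return float(np.sum(np.maximum(temps_c - T0, 0.0)) * dt_hours)

def strength_mpa(m: float, a: float = -25.0, b: float = 8.0) -> float:
    """Hypothetical logarithmic strength-maturity relation (coefficients invented)."""
    return a + b * np.log(m)

# Hourly in-place temperatures from a wireless thermal sensor (synthetic)
temps = 20.0 + 5.0 * np.sin(np.linspace(0, 6 * np.pi, 72))
m = maturity(temps, dt_hours=1.0)
print(f"maturity = {m:.0f} degC-h, est. strength = {strength_mpa(m):.1f} MPa")
```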
APA, Harvard, Vancouver, ISO, and other styles
6

Hodul, M., H. P. White, and A. Knudby. A report on water quality monitoring in Quesnel Lake, British Columbia, subsequent to the Mount Polley tailings dam spill, using optical satellite imagery. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330556.

Full text
Abstract:
In the early morning of 4 August 2014, a tailings dam near Quesnel, BC burst, spilling approximately 25 million m3 of runoff containing heavy metal elements into nearby Quesnel Lake (Byrne et al. 2018). The runoff slurry, which included lead, arsenic, selenium, and vanadium, spilled through Hazeltine Creek, scouring its banks and picking up till and forest cover on the way, and ultimately ended up in Quesnel Lake, whose water level rose by 1.5 m as a result. While the introduction of heavy metals into Quesnel Lake was of environmental concern, the additional till and forest cover scoured from the banks of Hazeltine Creek has also been of concern for salmon spawning grounds. Immediate repercussions of the spill included damage to sensitive environments along the banks and on the lake bed, the closing of the seasonal salmon fishery in the lake, and a change in the microbial composition of the lake bed (Hatam et al. 2019). In addition, there appears to be a seasonal resuspension of the tailings sediment due to thermal cycling of the water and surface winds (Hamilton et al. 2020). While the water quality of Quesnel Lake continues to be monitored for the tailings sediments, primarily by members of the Quesnel River Research Centre, the sample-and-test methods used, while highly accurate, are expensive to undertake and not spatially exhaustive. The use of remote sensing techniques, though not as accurate as lab testing, allows for the relatively fast creation of expansive water quality maps using sensors mounted on boats, planes, and satellites (Ritchie et al. 2003). The most common method for the remote sensing of surface water quality is a physics-based semianalytical model, developed by Lee et al. (1998) and commonly referred to as a Radiative Transfer Model (RTM), which simulates light passing through a water column with a given set of Inherent Optical Properties (IOPs). The RTM forward-models a wide range of water-leaving spectral signatures based on IOPs determined by a mix of water constituents, including natural materials and pollutants. Remote sensing imagery is then used to invert the model by finding the modelled water spectrum that most closely resembles the one seen in the imagery (Brando et al. 2009). This project set out to develop an RTM water quality model to monitor the water quality in Quesnel Lake, allowing the entire surface of the lake to be mapped at once, in an effort to determine the timing and extent of resuspension events, as well as to investigate greening events reported by locals. The project intended to use a combination of multispectral imagery (Landsat-8 and Sentinel-2) and hyperspectral imagery (DESIS), combined with field calibration/validation of the resulting models. The project began in the autumn before the COVID pandemic, with plans to undertake a comprehensive fieldwork campaign to gather model calibration data in the summer of 2020. Since a province-wide travel shutdown and social distancing procedures made it difficult to carry out water quality surveying in a small boat, an insufficient amount of fieldwork was conducted to suit the needs of the project. The project has thus been put on hold, and the primary researcher has moved to a different project.
This document stands as a report on all of the work conducted up to April 2021, intended largely as an instructional document for researchers who may wish to continue the work once fieldwork may freely and safely resume. This research was undertaken at the University of Ottawa, with supporting funding provided by the Earth Observations for Cumulative Effects (EO4CE) Program Work Package 10b: Site Monitoring and Remediation, Canada Centre for Remote Sensing, through the Natural Resources Canada Research Affiliate Program (RAP).
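To make the inversion idea concrete, here is a minimal sketch: a toy forward model stands in for the Lee et al. (1998) RTM, and a grid search picks the constituent concentrations whose modelled spectrum best matches an observed pixel. The `toy_rrs` function, the wavelength grid, and all concentrations are invented for illustration.

```python
import numpy as np

wavelengths = np.linspace(440, 700, 27)   # nm

def toy_rrs(chl: float, tsm: float) -> np.ndarray:
    """Illustrative reflectance: chlorophyll absorbs blue, sediment brightens."""
    absorption = 0.06 * chl * np.exp(-((wavelengths - 440) / 60.0) ** 2)
    backscatter = 0.002 + 0.01 * tsm
    return backscatter / (backscatter + 0.05 + absorption)

# Synthetic "observed" pixel spectrum with a little sensor noise
observed = toy_rrs(3.0, 12.0) + np.random.default_rng(2).normal(0, 1e-4, 27)

# Lookup-table inversion: grid search over candidate concentrations
chl_grid = np.linspace(0.1, 10, 50)
tsm_grid = np.linspace(0.1, 30, 50)
best = min(((float(np.sum((toy_rrs(c, t) - observed) ** 2)), c, t)
            for c in chl_grid for t in tsm_grid), key=lambda r: r[0])
print(f"best fit: chl = {best[1]:.2f} mg/m3, TSM = {best[2]:.2f} g/m3")
```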
APA, Harvard, Vancouver, ISO, and other styles
7

Lieth, J. Heiner, Michael Raviv, and David W. Burger. Effects of root zone temperature, oxygen concentration, and moisture content on actual vs. potential growth of greenhouse crops. United States Department of Agriculture, January 2006. http://dx.doi.org/10.32747/2006.7586547.bard.

Full text
Abstract:
Soilless crop production in protected cultivation requires optimization of many environmental and plant variables. Variables of the root zone (rhizosphere) have always been difficult to characterize but have been studied extensively. In soilless production, the opportunity exists to optimize these variables in relation to crop production. The project objectives were to model the relationship between biomass production and the rhizosphere variables temperature, dissolved oxygen concentration, and water availability, by characterizing potential growth and how it translates into actual growth. As part of this, we sought to improve our understanding of root growth and rhizosphere processes by generating data on the effect of rhizosphere water status, temperature, and dissolved oxygen on root growth; by modeling potential and actual growth; and by developing and calibrating models for various physical and chemical properties of soilless production systems. In particular, we used calorimetry to identify the potential growth of plants in relation to these rhizosphere variables. While we carried out experimental work on various crops, our main model system for the mathematical modeling was greenhouse cut-flower rose production in soilless cultivation. In support of this, our objective was the development of a rose crop model. Specific to this project, we sought to create submodels for the rhizosphere processes and integrate them into the rose crop simulation model we had begun developing before the start of this project. We also sought to verify and validate these models and, where feasible, create tools that growers could use for production management. We made significant progress with regard to the use of microcalorimetry. At both locations (Israel and US) we demonstrated that the specific growth rates for root and flower stem biomass production were sensitive to dissolved oxygen. Our work also showed that it is possible to identify optimal potential growth scenarios: for greenhouse-grown rose, the optimal root zone temperature for potential growth is around 17 C (substantially lower than is common in commercial greenhouses), while the growth potential for flower production was indifferent to root zone temperatures across a range as wide as 17-26 C. We had several setbacks that highlighted the need to determine when microcalorimetric measurements reflect instantaneous plant responses to the environment and when they reflect plant acclimation. One outcome of this research is our determination that irrigation technology in soilless production systems needs to explicitly include optimization of oxygen in the root zone: simply structuring the root zone to be "well aerated" is a minimum requirement, not an optimum. Our future work will focus on implementing direct control over dissolved oxygen in the root zone of soilless production systems.
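As a small illustration of locating an optimal root-zone temperature from calorimetric growth-rate data, one can fit a quadratic response curve and take its vertex. The data points below are synthetic, placed near the ~17 C optimum reported above.

```python
import numpy as np

# Synthetic specific-growth-rate measurements vs. root-zone temperature
temps = np.array([11, 14, 17, 20, 23, 26], dtype=float)     # deg C
rates = np.array([0.42, 0.55, 0.61, 0.58, 0.49, 0.37])      # hypothetical units

a, b, c = np.polyfit(temps, rates, deg=2)   # quadratic response curve
t_opt = -b / (2 * a)                        # vertex of the fitted parabola
print(f"estimated optimal root-zone temperature: {t_opt:.1f} C")
```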
APA, Harvard, Vancouver, ISO, and other styles