Academic literature on the topic 'Calibration transfer method'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Calibration transfer method.'


Journal articles on the topic "Calibration transfer method"

1

Blanco, Marcelo, Jordi Coello, Hortensia Iturriaga, Santiago Maspoch, and Esther Rovira. "Wavelength Calibration Transfer between Diode Array UV-Visible Spectrophotometers." Applied Spectroscopy 49, no. 5 (May 1995): 593–97. http://dx.doi.org/10.1366/0003702953964084.

Abstract:
The need to obtain expeditious results in control analyses of complex mixtures has turned multivariate calibration procedures into major choices for routine analyses. The inherent complexity of the calibration process and the practical need for analyses to be carried out as near the manufacturing line as possible occasionally entail calibrating with a different instrument from that subsequently employed for the analytical measurements proper. This paper exposes the problems potentially arising in transferring calibrations between diode array UV-Vis spectrophotometers. Basically, such problems originate in wavelength differences between spectrophotometers, even if they meet the manufacturer's specifications and the pharmacopoeia recommendations. We developed a straightforward method for harmonizing instrumental responses on the basis of reference wavelengths corresponding to zero values in the first-derivative spectra for potassium dichromate and benzoic acid standards. The method was applied to the analysis of binary mixtures of theophylline and doxylamine by multiple linear and partial least-squares regression with the use of one spectrophotometer for calibration and four others for analyses.
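The wavelength-matching idea described here (reference wavelengths taken at zero crossings of a first-derivative spectrum) can be sketched in a few lines. This is an illustrative reconstruction on synthetic data, not the authors' code; the parabolic "spectra" merely stand in for the potassium dichromate and benzoic acid standards.

```python
# Hypothetical sketch: locate the zero crossing of a first-derivative
# spectrum on two instruments and use the offset to align one wavelength
# axis onto the other.

def first_derivative(wavelengths, absorbances):
    """Central-difference first derivative of a spectrum."""
    deriv = []
    for i in range(1, len(absorbances) - 1):
        d = (absorbances[i + 1] - absorbances[i - 1]) / (
            wavelengths[i + 1] - wavelengths[i - 1])
        deriv.append((wavelengths[i], d))
    return deriv

def zero_crossing(deriv):
    """Linearly interpolate the wavelength where the derivative changes sign."""
    for (w0, d0), (w1, d1) in zip(deriv, deriv[1:]):
        if d0 == 0:
            return w0
        if d0 * d1 < 0:
            return w0 - d0 * (w1 - w0) / (d1 - d0)
    return None

# Synthetic peak: the derivative is zero at the absorbance maximum.
wl = [350 + i for i in range(11)]            # nm
master = [-(w - 355) ** 2 for w in wl]       # peak at 355 nm
slave  = [-(w - 356) ** 2 for w in wl]       # same peak read 1 nm high

shift = zero_crossing(first_derivative(wl, slave)) - \
        zero_crossing(first_derivative(wl, master))
corrected_axis = [w - shift for w in wl]     # align the slave axis to the master
```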
2

Guenard, Robert D., Christine M. Wehlburg, Randy J. Pell, and David M. Haaland. "Importance of Prediction Outlier Diagnostics in Determining a Successful Inter-Vendor Multivariate Calibration Model Transfer." Applied Spectroscopy 61, no. 7 (July 2007): 747–54. http://dx.doi.org/10.1366/000370207781393280.

Abstract:
This paper reports on the transfer of calibration models between Fourier transform near-infrared (FT-NIR) instruments from four different manufacturers. The piecewise direct standardization (PDS) method is compared with the new hybrid calibration method known as prediction augmented classical least squares/partial least squares (PACLS/PLS). The success of a calibration transfer experiment is judged by prediction error and by the number of samples that are flagged as outliers that would not have been flagged as such if a complete recalibration were performed. Prediction results must be acceptable and the outlier diagnostics capabilities must be preserved for the transfer to be deemed successful. Previous studies have measured the success of a calibration transfer method by comparing only the prediction performance (e.g., the root mean square error of prediction, RMSEP). However, our study emphasizes the need to consider outlier detection performance as well. As our study illustrates, the RMSEP values for a calibration transfer can be within acceptable range; however, statistical analysis of the spectral residuals can show that differences in outlier performance can vary significantly between competing transfer methods. There was no statistically significant difference in the prediction error between the PDS and PACLS/PLS methods when the same subset sample selection method was used for both methods. However, the PACLS/PLS method was better at preserving the outlier detection capabilities and therefore was judged to have performed better than the PDS algorithm when transferring calibrations with the use of a subset of samples to define the transfer function. The method of sample subset selection was found to make a significant difference in the calibration transfer results using the PDS algorithm, while the transfer results were less sensitive to subset selection when the PACLS/PLS method was used.
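As a toy illustration of the paper's central point (prediction error alone can look acceptable while residual diagnostics still flag samples), one might compute RMSEP alongside a residual-based outlier flag as below. The 3× threshold and all numbers are invented for illustration, not taken from the study.

```python
# Illustrative sketch (not the authors' code) of the two success criteria
# the paper distinguishes: prediction error (RMSEP) and spectral-residual
# outlier flagging.
import math

def rmsep(predicted, reference):
    """Root mean square error of prediction."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

def flag_outliers(residual_ssq, calibration_mean_ssq, factor=3.0):
    """Flag samples whose spectral residual greatly exceeds calibration levels."""
    return [i for i, s in enumerate(residual_ssq)
            if s > factor * calibration_mean_ssq]

pred = [1.02, 0.98, 1.10, 0.95]
ref  = [1.00, 1.00, 1.00, 1.00]
print(round(rmsep(pred, ref), 4))                 # RMSEP can look acceptable...
print(flag_outliers([0.1, 0.2, 2.5, 0.15], 0.2))  # ...while residuals flag a sample
```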
3

Zarzhetskaya, Natalya, and Vladimir Litovchenko. "COAXIAL CONTACT DEVICE AND METHOD OF CALIBRATION." Interexpo GEO-Siberia 9 (2019): 77–86. http://dx.doi.org/10.33764/2618-981x-2019-9-77-86.

Abstract:
A coaxial contact device for connection to the microwave transition analyzer is proposed as coaxial measures for calibrating the analyzer, and the investigated strip line junctions for measuring their S-parameters. In addition, a method for calibrating this device with one or two calculated coaxial-to-stripline calibrators, providing the transfer of the results of calibration of the analyzer by coaxial measures to the measurement of S-parameters of the strip line junctions is presented.
4

Bateman, Vesta I., William B. Leisher, Fred A. Brown, and Neil T. Davie. "Calibration of a Hopkinson Bar with a Transfer Standard." Shock and Vibration 1, no. 2 (1993): 145–52. http://dx.doi.org/10.1155/1993/354290.

Abstract:
A program requirement for field test temperatures that are beyond the test accelerometer operational limits of −30° F and +150° F required the calibration of accelerometers at high shock levels and at the temperature extremes of −50° F and +160° F. The purposes of these calibrations were to insure that the accelerometers operated at the field test temperatures and to provide an accelerometer sensitivity at each test temperature. Because there is no National Institute of Standards and Technology traceable calibration capability at shock levels of 5,000–15,000 g for the temperature extremes of −50° F and +160° F, a method for calibrating and certifying the Hopkinson bar with a transfer standard was developed. Time domain and frequency domain results are given that characterize the Hopkinson bar. The National Institute of Standards and Technology traceable accuracy for the standard accelerometer in shock is ±5%. The Hopkinson bar has been certified with an uncertainty of 6%.
5

Borell, G. J., and T. E. Diller. "A Convection Calibration Method for Local Heat Flux Gages." Journal of Heat Transfer 109, no. 1 (February 1, 1987): 83–89. http://dx.doi.org/10.1115/1.3248073.

Abstract:
An apparatus for calibrating local heat flux gages in convective air flows is described. Heat transfer from a “hot” gage to a “cold” fluid was measured using a guarded hot-plate technique. The system was used to calibrate Gardon-type circular foil heat flux gages of 1/8 in. and 1/16 in. outer diameters. The results indicate that the calibration curves are nonlinear, which is different from the linear calibration obtained using the standard radiation technique. The degree of nonlinearity matches the analysis which accounts for the effect of the temperature distribution in the gage foil. The effect of this temperature distribution can be neglected in the standard radiation calibration but is often significant in convection applications. These results emphasize the importance of calibrating heat flux gages in thermal environments similar to those in which they will be used.
6

Wehlburg, Christine M., David M. Haaland, and David K. Melgaard. "New Hybrid Algorithm for Transferring Multivariate Quantitative Calibrations of Intra-Vendor Near-Infrared Spectrometers." Applied Spectroscopy 56, no. 7 (July 2002): 877–86. http://dx.doi.org/10.1366/000370202760171554.

Abstract:
A new prediction-augmented classical least-squares/partial least-squares (PACLS/PLS) hybrid algorithm is ideally suited for use in transferring multivariate calibrations between spectrometers. Spectral variations such as instrument response differences can be explicitly incorporated into the algorithm through the use of subset sample spectra collected on both spectrometers. Two current calibration transfer methods, subset recalibration and piecewise direct standardization (PDS), also utilize subset sample spectra to facilitate transfer of calibration. The three methods were applied to the transfer of quantitative multivariate calibration models for near-infrared (NIR) data of organic samples containing chlorobenzene, heptane, and toluene between a primary and three secondary spectrometers that were all the same model, called intra-vendor transfer of calibration. The hybrid PACLS/PLS method outperformed subset recalibration and provided predictions equivalent to PDS with additive background correction on the two secondary spectrometers whose instrument drift appeared to be dominated by simple linear baseline variations. One of the secondary spectrometers had complex instrument drift that was captured by repeatedly measuring the spectrum of a single repeat sample. In calculating a transfer function to correct prediction spectra, PDS assumes no instrumental drift on the secondary spectrometer. Therefore, PDS was unable to directly accommodate both the subset samples and the use of a single repeat sample to transfer and maintain a calibration on that secondary instrument. In order to implement the transfer of calibration with PDS in the presence of complex instrument drift, recalibrated PLS models that included the repeat spectra from the secondary spectrometer were used to predict the spectra transformed by PDS. 
The importance of correcting for drift on the secondary spectrometer during calibration transfer was illustrated by the improvements in prediction for all three methods vs. using only the instrument response differences derived from the subset sample spectra. When the effects of instrument drift were complex on the secondary spectrometer, the PACLS/PLS hybrid algorithm outperformed both PDS and subset recalibration. Through the explicit incorporation of spectral variations, due to instrument response differences and drift on the secondary spectrometer, the PACLS/PLS algorithm was successful at intra-vendor transfer of calibrations between NIR spectrometers.
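A minimal sketch of the piecewise direct standardization (PDS) idea discussed in this and the preceding entries, reduced to a window width of one channel for brevity: each master channel is regressed on the corresponding slave channel over the transfer subset. Real PDS regresses on a multi-channel window via a pseudoinverse, and the PACLS/PLS hybrid is not shown; all data values here are invented.

```python
# Simplified per-channel standardization (window width 1), a special case
# of PDS; not the algorithm as published.

def fit_channel_map(slave_subset, master_subset):
    """Per-channel slope/intercept mapping slave responses to the master."""
    n_samples = len(slave_subset)
    n_chan = len(slave_subset[0])
    maps = []
    for j in range(n_chan):
        xs = [row[j] for row in slave_subset]
        ys = [row[j] for row in master_subset]
        mx, my = sum(xs) / n_samples, sum(ys) / n_samples
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        slope = cov / var
        maps.append((slope, my - slope * mx))
    return maps

def standardize(spectrum, maps):
    """Transform a slave spectrum so the master model can predict from it."""
    return [a * x + b for x, (a, b) in zip(spectrum, maps)]

# Transfer subset measured on both instruments (rows: samples, cols: channels).
master = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
slave  = [[1.1, 1.8], [2.1, 3.8], [3.1, 5.8]]   # offset slave responses
maps = fit_channel_map(slave, master)
print(standardize([2.1, 3.8], maps))  # ≈ the master reading [2.0, 4.0]
```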
7

Piróg, Przemysław, and Mariusz Górecki. "AC/DC transfer standards calibration in Central Military Calibration Laboratory." Bulletin of the Military University of Technology 66, no. 4 (December 31, 2017): 217–28. http://dx.doi.org/10.5604/01.3001.0010.8333.

Abstract:
The article discusses the method used in the Central Military Calibration Laboratory to calibrate the Fluke 5790 AC/DC transfer standard against the reference transfer standard Fluke 792A. It presents the measurement equation and the uncertainty budget, showing the contribution of individual uncertainty components to the measurement uncertainty. Metrological traceability has been evaluated by comparing calibration results with the results in the last Fluke certificate of calibration.
Keywords: AC/DC converters, AC/DC difference, thermal voltage converters (TVCs), AC voltage measurement.
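An uncertainty budget like the one this article presents combines component uncertainties into a combined standard uncertainty. Assuming uncorrelated components, the GUM-style root-sum-of-squares combination looks like this; the component values are invented for illustration, not taken from the article.

```python
# Sketch of combining an uncertainty budget (uncorrelated components)
# into a combined standard uncertainty, GUM-style.
import math

def combined_uncertainty(components):
    """Root-sum-of-squares of standard uncertainty contributions."""
    return math.sqrt(sum(u ** 2 for u in components))

budget = [3e-6, 4e-6]   # e.g. reference standard, repeatability (V/V), invented
print(combined_uncertainty(budget))  # ≈ 5e-06
```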
8

Chen, Aobei, Dapeng Li, Dezhi Zheng, Zhongxiang Li, and Rui Na. "A Fast Calibration Method for Sensors of Atmospheric Detection System." Applied Sciences 12, no. 22 (November 18, 2022): 11733. http://dx.doi.org/10.3390/app122211733.

Abstract:
To meet the needs of a large number of high-altitude meteorological detections, we need to perform fast, high-precision, and high-reliability calibrations of the sensors in the atmospheric detection system (ADS). However, using the traditional method to calibrate the sensor with high precision often takes a lot of time and increases the cost of workforce and material resources. Therefore, a method for realizing fast sensor calibration under the current system hardware conditions is required. A physical field model of Tube–Air–ADS is proposed for the first time, and the transfer function is obtained by combining the system identification, which provides the possibility for dynamic analysis of the calibration system. A Multi-Criteria Adaptive (MCA) PID controller design method is proposed, which provides a new idea for the parameter design of the controller. It controls the amplitude and switching frequency of the controller’s output signal, ensuring the safe and stable operation of the calibration system. Combined with the hardware parameters of the system, we propose the Variable Precision Steady-State Discrimination (VPSSD) method, which can further shorten the calibration time. Comparing and analyzing the current simulation results under Matlab/Simulink, the proposed MCA method, compared with other PID controller design methods, ensures the stable operation of the calibration system. At the same time, compared with the original system, the calibration time is shortened to 47.7%. Combined with the VPSSD method, the calibration time further shortens to 38.7 s.
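The paper's MCA design method itself is not reproduced here, but the controller it tunes is a standard discrete PID loop, which can be sketched as follows. The gains and the first-order plant are arbitrary illustrative choices, not values from the paper.

```python
# A plain discrete PID loop, shown only to illustrate the kind of
# controller whose parameters a design method like MCA would tune.

def pid_step(error, state, kp, ki, kd, dt):
    """One PID update; state carries the integral and the previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Drive a simple first-order plant toward a setpoint of 1.0.
setpoint, value, state, dt = 1.0, 0.0, (0.0, 0.0), 0.1
for _ in range(200):
    u, state = pid_step(setpoint - value, state, kp=2.0, ki=1.0, kd=0.05, dt=dt)
    value += dt * (u - value)   # Euler step of a first-order response
print(round(value, 3))  # settles near the setpoint
```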
9

Drennen, Jim. "Calibration Transfer: A Critical Component of Analytical Method Validation." NIR news 14, no. 5 (October 2003): 14–15. http://dx.doi.org/10.1255/nirn.736.

10

Li, Xue-Ying, Guo-xing Ren, Ping-Ping Fan, Yan Liu, Zhong-Liang Sun, Guang-Li Hou, and Mei-Rong Lv. "Study on the Calibration Transfer of Soil Nutrient Concentration from the Hyperspectral Camera to the Normal Spectrometer." Journal of Spectroscopy 2020 (April 27, 2020): 1–10. http://dx.doi.org/10.1155/2020/8137142.

Abstract:
The calibration transfer between instruments is mainly aimed at the calibration transfer between normal spectrometers. There are few studies on the calibration transfer of soil nutrient concentration from a hyperspectral camera to a normal spectrometer. In this paper, 164 soil samples from three regions in Qingdao, China, were collected. The spectral data of normal spectrometer and hyperspectral camera and the concentration of total carbon and nitrogen were obtained. And then, the models of soil total carbon and nitrogen content were established by using the spectral data of a normal spectrometer. The hyperspectral data were transferred by a variety of methods, such as single conventional calibration transfer algorithm, combination of multiple calibration transfer algorithms, and calibration transfer algorithm after spectral pretreatment. The transferred hyperspectral data were predicted by the total carbon and total nitrogen concentration model established by using a normal spectrometer. The absolute coefficients Rt2 and root mean square error of prediction (RMSEP) were used to evaluate the prediction performance after calibration transfer. After trying many calibration transfer methods, the prediction performance of calibration transfer by the Repfile-PDS and Repfile-SNV methods was the best. In the calibration transfer of the Repfile-PDS method, when the number of PDS windows was 27 and the number of standard data was 40, the Rt2 and the RMSEP of TC concentration were 0.627 and 2.351. When the number of PDS windows was 25 and the number of standard data was 100, the Rt2 and the RMSEP of TN concentration were 0.666 and 0.297. In the calibration transfer of the Repfile-SNV method, when the number of TC and TN standard data was 120, the Rt2 was the largest, 0.701 and 0.722, respectively, and the RMSEP was 2.880 and 0.399, respectively. 
After the hyperspectral data were calibration transferred by the above algorithms, they could be predicted by the soil TC and TN concentration model established by using a normal spectrometer, and better prediction results can be obtained. The solution of the calibration transfer of soil nutrient concentration from the hyperspectral camera to the normal spectrometer provides a powerful basis for rapid prediction of a large number of image information data collected by using a hyperspectral camera. It greatly reduces the workload and promotes the application of hyperspectral camera in quantitative analysis and rapid measurement technology.
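Of the pretreatments combined with transfer in this study, standard normal variate (SNV) is the simplest to sketch: each spectrum is centred and scaled by its own mean and standard deviation, removing baseline offsets and multiplicative scatter differences before transfer. An illustrative implementation, not the authors' code:

```python
# Standard normal variate (SNV) pretreatment of a single spectrum.
import math

def snv(spectrum):
    """Centre a spectrum and scale it to unit standard deviation."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in spectrum) / (n - 1))
    return [(x - mean) / std for x in spectrum]

raw = [0.20, 0.40, 0.60, 0.80]
print(snv(raw))  # zero mean and unit spread, whatever the original offset/scale
```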

Dissertations / Theses on the topic "Calibration transfer method"

1

Mentré-Le Sant, Véronique. "Amélioration des méthodes de mesure du flux par la technique des températures superficielles." Paris 6, 1988. http://www.theses.fr/1988PA066416.

Abstract:
The problem of the variation of the thermal characteristics of materials with temperature, and of the effects of model geometry, in the determination of convective heat fluxes in a hypersonic wind tunnel. Presentation of the technique used for thermal calibration and of various numerical data-reduction methods adapted to the test conditions.
2

Cibere, Joseph John. "Calibration transfer methods for feedforward neural network based instruments." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ63857.pdf.

3

Košťál, Josef. "Posouzení tepelně-mechanické únavy výfukového potrubí." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-418196.

Abstract:
This master's thesis assesses the thermo-mechanical fatigue of an exhaust manifold. A literature review first analyses the phenomenon of thermo-mechanical fatigue, presenting the main damage mechanisms and approaches to modelling them, and discussing the specific behaviour of materials exposed to thermo-mechanical loading. An overview of suitable material models and fatigue-life models is given, together with an algorithm for predicting the thermo-mechanical fatigue of a component. This theoretical basis is then applied to the practical case of an exhaust manifold subjected to thermo-mechanical loading. Two temperature-dependent elasto-plastic material models were calibrated and validated against experimental data. A discretized finite-element model of the exhaust-manifold assembly was created, and the thermal boundary conditions were prescribed on the basis of steady-state conjugate heat-transfer computations. The weakly coupled thermo-mechanical problem was solved by the finite-element method for both material models. An uncoupled fatigue-model paradigm, suitable for low-cycle fatigue, was adopted, so fatigue life was evaluated in post-processing using two fatigue-life models: an energy-based model and a strain-based model. The resulting life predictions were compared with respect to the material and fatigue-life models used. Finally, the conclusions of the thesis, areas of further research, and possible improvements to the approaches used are discussed.
4

Pawlik, Andreas H., Alireza Rahmati, Joop Schaye, Myoungwon Jeon, and Claudio Dalla Vecchia. "The Aurora radiation-hydrodynamical simulations of reionization: calibration and first results." Oxford University Press, 2017. http://hdl.handle.net/10150/623851.

Abstract:
We introduce a new suite of radiation-hydrodynamical simulations of galaxy formation and reionization called Aurora. The Aurora simulations make use of a spatially adaptive radiative transfer technique that lets us accurately capture the small-scale structure in the gas at the resolution of the hydrodynamics, in cosmological volumes. In addition to ionizing radiation, Aurora includes galactic winds driven by star formation and the enrichment of the universe with metals synthesized in the stars. Our reference simulation uses 2 × 512³ dark matter and gas particles in a box of size 25 h⁻¹ comoving Mpc with a force softening scale of at most 0.28 h⁻¹ kpc. It is accompanied by simulations in larger and smaller boxes and at higher and lower resolution, employing up to 2 × 1024³ particles, to investigate numerical convergence. All simulations are calibrated to yield simulated star formation rate functions in close agreement with observational constraints at redshift z = 7 and to achieve reionization at z ≈ 8.3, which is consistent with the observed optical depth to reionization. We focus on the design and calibration of the simulations and present some first results. The median stellar metallicities of low-mass galaxies at z = 6 are consistent with the metallicities of dwarf galaxies in the Local Group, which are believed to have formed most of their stars at high redshifts. After reionization, the mean photoionization rate decreases systematically with increasing resolution. This coincides with a systematic increase in the abundance of neutral hydrogen absorbers in the intergalactic medium.
5

Spencer, Benjamin. "On-line C-arm intrinsic calibration by means of an accurate method of line detection using the radon transform." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAS044/document.

Abstract:
Mobile isocentric x-ray C-arm systems are an imaging tool used during a variety of interventional and image-guided procedures. Three-dimensional images can be produced from multiple projection images of a patient or object as the C-arm rotates around the isocenter, provided the C-arm geometry is known. Due to gravity effects and mechanical instabilities, the C-arm source and detector geometry undergo significant non-ideal and possibly non-reproducible deformation, which requires a process of geometric calibration. This research investigates the use of the projection of the slightly closed x-ray tube collimator edges in the image field of view to provide the online intrinsic calibration of C-arm systems. A method of thick straight-edge detection has been developed which outperforms the commonly used Canny filter edge-detection technique in both simulation and real-data investigations. This edge-detection technique has exhibited excellent precision in detecting the edge angles and positions (phi, s) in the presence of simulated C-arm deformation and image noise: phi_RMS = +/- 0.0045 degrees and s_RMS = +/- 1.67 pixels. Following this, the C-arm intrinsic calibration, by means of accurate edge detection, has been evaluated in the framework of 3D image reconstruction.
6

Villefranque, Najda. "Les effets radiatifs 3D des nuages de couche limite : de leur simulation explicite à leur paramétrisation." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30191.

Abstract:
Radiation is a key process in the evolution of the atmosphere. Electromagnetic waves emitted by warm bodies like the sun interact with many components of the Earth system; for example, they can be scattered and absorbed by microscopic droplets in clouds. At global scales, these radiative processes drive the energy budgets of the surface and of the atmosphere. The impact of fractionated, mostly non-precipitating liquid boundary-layer clouds (cumulus) on solar radiation has been studied for many years and is known to be important both for numerical weather prediction and for the evolution of the Earth's climate, yet our understanding of these complex, multi-scale interactions remains limited. This thesis investigates the link between cloud macrophysical characteristics and their impact on solar radiation, in particular their so-called 3D radiative effects (obtained from the difference between 3D computations and 1D computations in which horizontal transport is neglected). An existing parameterization of the 3D radiative effects of clouds for large-scale models is evaluated against reference models and observations. To this end, high-resolution simulations of four idealized convective boundary-layer cases are performed with the French large-eddy model Méso-NH, along with perturbed simulations to assess the impact of resolution, domain size, advection scheme, and the parameterizations of turbulence and microphysics on cloud-population characteristics. To simulate offline radiative transfer in these 3D cloudy fields, Monte Carlo tools inspired by the computer-graphics community are implemented as a collection of generic modules composing an open-source library, and used to build Monte Carlo codes whose computing time is insensitive to the complexity of the cloud scenes. 3D radiative transfer is solved in all the simulated cloud fields, and the link between the characteristics of the cloud populations and their radiative effects is analysed.
Special attention is dedicated to the 3D effects of clouds by performing 1D (under the independent-column approximation) and 3D Monte Carlo simulations, hence isolating the contribution of horizontal transport to the radiative fluxes at the surface and at the top of the atmosphere. These 3D effects are quantified as a function of the solar zenith angle and broken down into direct and diffuse components. It appears that the 3D bias on surface fluxes is the result of biases of opposite signs on the direct and diffuse fluxes, which do not compensate each other at all solar angles. The difference between the 1D and 3D total (respectively, direct) cloud radiative effect, integrated horizontally and over a diurnal cycle, can reach -13 W/m² (respectively, -45 W/m²) for sun paths corresponding to high latitudes (3D effects act to cool the surface).
7

Andersson, Hjalmar. "Inverse Uncertainty Quantification using deterministic sampling : An intercomparison between different IUQ methods." Thesis, Uppsala universitet, Tillämpad kärnfysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447070.

Abstract:
In this thesis, two novel methods for Inverse Uncertainty Quantification are benchmarked against the more established methods of Monte Carlo sampling of output parameters(MC) and Maximum Likelihood Estimation (MLE). Inverse Uncertainty Quantification (IUQ) is the process of how to best estimate the values of the input parameters in a simulation, and the uncertainty of said estimation, given a measurement of the output parameters. The two new methods are Deterministic Sampling (DS) and Weight Fixing (WF). Deterministic sampling uses a set of sampled points such that the set of points has the same statistic as the output. For each point, the corresponding point of the input is found to be able to calculate the statistics of the input. Weight fixing uses random samples from the rough region around the input to create a linear problem that involves finding the right weights so that the output has the right statistic. The benchmarking between the four methods shows that both DS and WF are comparably accurate to both MC and MLE in most cases tested in this thesis. It was also found that both DS and WF uses approximately the same amount of function calls as MLE and all three methods use a lot fewer function calls to the simulation than MC. It was discovered that WF is not always able to find a solution. This is probably because the methods used for WF are not the optimal method for what they are supposed to do. Finding more optimal methods for WF is something that could be investigated further.
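The deterministic-sampling idea the thesis describes (a small point set chosen to carry the target statistics exactly, instead of many random draws) can be illustrated with the simplest possible ensemble. This two-point construction is a generic textbook example, not the thesis's actual sampling scheme.

```python
# A tiny ensemble chosen to reproduce a target mean and variance exactly,
# illustrating deterministic sampling versus Monte Carlo draws.
import math

def two_point_ensemble(mean, std):
    """Two sigma points carrying exactly the requested mean and std."""
    return [mean - std, mean + std]

def ensemble_stats(points):
    """Mean and (population) standard deviation of an ensemble."""
    n = len(points)
    m = sum(points) / n
    var = sum((p - m) ** 2 for p in points) / n
    return m, math.sqrt(var)

pts = two_point_ensemble(5.0, 0.3)
print(ensemble_stats(pts))  # ≈ (5.0, 0.3), with only two samples
```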
APA, Harvard, Vancouver, ISO, and other styles
8

Laref, Rachid. "Étude d’un système à base de microcapteurs de gaz pour le suivi et la cartographie de la pollution atmosphérique." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0048.

Full text
Abstract:
The objective of this thesis is to design a low-cost gas multi-sensor device for monitoring polluting gases in the outdoor environment, and to develop automatic calibration and data analysis methods suited to its operation. This device aims to contribute to the densification of the current air-pollution monitoring network, allowing better spatial and temporal resolution. First, we selected electrochemical micro-sensors and metal-oxide micro-sensors capable of detecting and quantifying low concentrations of nitrogen dioxide and ozone in the atmosphere. We characterized these micro-sensors in the laboratory in terms of sensitivity and reproducibility, and tested them in the field (near a motorway) following the gas micro-sensor evaluation protocol proposed by the European Joint Research Centre (JRC). The individual analysis of the data obtained from each sensor highlighted the need to fuse the responses of all the sensors, together with air temperature and humidity values, using multivariate classification and quantification methods. The performance of several classification methods commonly used in the field of gas multi-sensor devices was studied; their comparison led us to choose Support Vector Machine (SVM) regression for its precision and robustness. The hyper-parameters of this method were optimized with the Generalized Pattern Search algorithm, chosen for its simplicity and reliability. The proposed calibration method enables our device to quantify the two target atmospheric pollutants within the precision range recommended by the European directives for "indicative" measurements.
We then addressed the problem of sensor drift, which makes periodic recalibration necessary. We proposed a standardization method based on SVM regression to reduce the cost and effort of a full calibration. The same standardization method was successfully used for calibration transfer between several identical systems, avoiding individual calibrations. This research work was disseminated through two articles in JCR-referenced journals and two communications at IEEE international conferences.
The objective of this thesis is to design a low-cost gas sensor array for outdoor monitoring of gas pollutants and to develop suitable standardization and data analysis methods. The device aims to contribute to the densification of the current atmospheric pollution-monitoring network for better spatial and temporal resolution. First, we selected electrochemical and metal-oxide gas sensors capable of detecting and quantifying low concentrations of nitrogen dioxide and ozone in ambient air near a highway. We characterized these micro-sensors in the laboratory using a diffusion controller system and tested them in the field, following procedures inspired by the evaluation protocol for low-cost gas sensors proposed by the European Joint Research Centre (JRC). The individual analysis of the data from each sensor highlighted the need to merge the responses of all the sensors, together with the temperature and humidity of the atmosphere, using multivariate classification methods. The performance of several classification methods generally used in the field of multi-sensor gas systems was studied; this comparison led us to choose Support Vector Machine (SVM) regression for its precision and robustness. We also conducted a study on the optimization of the SVM hyper-parameters: the Generalized Pattern Search (GPS) algorithm was chosen among other algorithms for its simplicity and reliability. We then highlighted the problem of sensor drift over time and the need for periodic calibration, and proposed a standardization method based on SVM regression, thereby reducing the cost and effort of a full recalibration. This standardization method was also used for calibration transfer between several identical systems in order to avoid individual calibration of each system. The research work elaborated in this PhD was published in two journal papers and two IEEE conference papers.
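The standardization step described above — mapping readings from one sensor system onto the calibration scale of another using a few paired "transfer" samples — can be sketched as follows. The thesis uses SVM regression; this sketch substitutes a simple univariate linear fit, and all readings are hypothetical.

```python
# Simplified sketch of calibration transfer between two gas-sensor systems:
# fit a mapping from slave-system readings to master-equivalent readings on
# a few samples measured on both systems, then apply it to new readings.
# (The thesis uses SVM regression; a linear fit stands in here.)

def fit_linear(x, y):
    """Ordinary least squares fit y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Paired transfer samples: the same gas concentrations read on both systems.
master = [10.0, 20.0, 30.0, 40.0]   # calibrated reference system
slave  = [12.0, 22.0, 32.0, 42.0]   # identical twin system with an offset

a, b = fit_linear(slave, master)

def standardize(reading):
    """Map a slave-system reading onto the master calibration scale."""
    return a * reading + b

print(standardize(27.0))  # 25.0: slave reading corrected onto master scale
```

The benefit, as in the thesis, is that only the few transfer samples need to be measured on the second system instead of repeating a full calibration.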
APA, Harvard, Vancouver, ISO, and other styles
9

Sonono, Masimba Energy. "Applications of conic finance on the South African financial markets /| by Masimba Energy Sonono." Thesis, North-West University, 2012. http://hdl.handle.net/10394/9206.

Full text
Abstract:
Conic finance is a brand-new quantitative finance theory. This thesis concerns applications of conic finance to the South African financial markets. Conic finance gives a new perspective on the way people should perceive financial markets. Particularly in incomplete markets, where prices are non-unique and residual risk is rampant, conic finance plays a crucial role in providing prices that are acceptable at a given stress level. The theory assumes that price depends on the direction of trade, so there are two prices: one for buying from the market, called the ask price, and one for selling to the market, called the bid price. The bid-ask spread reflects the substantial cost of the unhedgeable risk present in the market. The hypothesis considered in this thesis is whether conic finance can reduce the residual risk. Conic finance models bid-ask prices of cash flows by applying the theory of acceptability indices to cash flows. The theory of acceptability combines elements of arbitrage pricing theory and expected utility theory: the set of arbitrage opportunities is extended to the set of all opportunities that a wide range of market participants are prepared to accept. The preferences of the market participants are captured by utility functions, which lead to the concepts of acceptance sets and the associated coherent risk measures. The acceptance sets (market preferences) are modeled using sets of probability measures. The set accepted by all market participants is the intersection of all these sets, which is convex; its size is characterized by an index of acceptability. This index allows one to speak of cash flows acceptable at a level, known as the stress level. The relevant set of probability measures that can value the cash flows properly is found through the use of distortion functions.
In the first chapter, we introduce the theory of conic finance and build a foundation that leads to the problem and objectives of the thesis. In chapter two, we build on this foundation and explain in depth the theory of acceptability indices and coherent risk measures; a brief discussion of coherent risk measures is included, since the theory of acceptability indices builds on them, and some new acceptability indices are introduced. In chapter three, the focus shifts to mathematical tools for financial applications. The chapter can be seen as a prerequisite, as it bridges the gap from mathematical tools in complete markets to incomplete markets, the setting that conic finance theory seeks to exploit; it ends with models used for continuous-time modeling and simulation of stochastic processes. In chapter four, attention is focused on the numerical methods relevant to the thesis: obtaining parameters using the maximum likelihood method, calibrating the parameters to market prices, option pricing by Fourier transform methods, and finally the bid-ask formulas relevant to the thesis. Most of the numerical implementations were carried out in Matlab. Chapter five gives an introduction to the world of option trading strategies, with illustrations and explanations of the possible scenarios at the expiration date for the different strategies. Chapter six is the apex of the thesis, where results from possible real-market scenarios are presented and discussed. Only numerical results are reported in the thesis; empirical experiments could not be done owing to the limited availability of real market data. The findings from the numerical experiments showed that the spreads from conic finance are reduced.
This results in reduced residual risk and a lower cost of entering into the trading strategies. The thesis ends with formal discussions of the findings and some possible directions for further research in chapter seven.
Thesis (MSc (Risk Analysis))--North-West University, Potchefstroom Campus, 2013.
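The bid-ask machinery the abstract describes — prices as distorted expectations at a stress level — can be sketched numerically. The MINVAR distortion below is one standard distortion function from the conic finance literature; the cash-flow outcomes, probabilities, and stress level are hypothetical, and this is an illustrative sketch rather than the thesis's implementation.

```python
# Sketch of conic-finance bid/ask prices for a discrete cash flow using the
# MINVAR distortion Psi(u) = 1 - (1 - u)**(1 + gamma), where gamma is the
# stress level. bid = distorted expectation of X; ask = -bid(-X).

def psi(u, gamma):
    return 1.0 - (1.0 - u) ** (1.0 + gamma)

def bid(outcomes, probs, gamma):
    """Distorted expectation: reweights low outcomes up (conservative buyer)."""
    pairs = sorted(zip(outcomes, probs))        # ascending outcomes
    total, cdf_prev = 0.0, 0.0
    for x, p in pairs:
        cdf = cdf_prev + p
        total += x * (psi(cdf, gamma) - psi(cdf_prev, gamma))
        cdf_prev = cdf
    return total

def ask(outcomes, probs, gamma):
    return -bid([-x for x in outcomes], probs, gamma)

X = [80.0, 100.0, 120.0]      # hypothetical cash-flow values
P = [0.25, 0.50, 0.25]        # their probabilities
g = 0.5                       # stress level

print(bid(X, P, g), ask(X, P, g))  # bid < E[X] = 100 < ask
```

At stress level zero the distortion is the identity and both prices collapse to the expected value; raising gamma widens the spread, which is the quantity the thesis studies.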
APA, Harvard, Vancouver, ISO, and other styles
10

Silva, Arnaldo Peixoto da. "Acoplamento de técnicas espectrométricas com métodos quimiométricos de classificação e calibração multivariada em alimentos." Universidade do Estado do Rio de Janeiro, 2011. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=3556.

Full text
Abstract:
This research work describes three studies on the use of chemometric methods for the classification and characterization of edible vegetable oils and their quality parameters, using Fourier-transform mid-infrared molecular absorption spectrometry and near-infrared spectrometry, and for monitoring the quality and oxidative stability of yogurt using molecular fluorescence spectrometry. The first and second studies address the classification and characterization of quality parameters of edible vegetable oils using Fourier-transform mid-infrared (FT-MIR) and near-infrared (NIR) spectrometry. The Kennard-Stone algorithm was used to select the validation set after principal component analysis (PCA). Discrimination among canola, sunflower, corn, and soybean oils was investigated using SVM-DA, SIMCA, and PLS-DA. Prediction of the quality parameters refractive index and relative density of the oils was investigated using the multivariate calibration methods partial least squares (PLS), iPLS, and SVM on the FT-MIR and NIR data. Several types of preprocessing (first derivative, multiplicative scatter correction (MSC), mean centering, orthogonal signal correction (OSC), and standard normal variate (SNV)) were evaluated, using the root mean square errors of cross validation (RMSECV) and of prediction (RMSEP) as evaluation parameters. The methodology developed for determining refractive index and relative density and for classifying the vegetable oils is fast and direct.
The third study evaluates the oxidative stability and quality of yogurt stored at 4 °C, either exposed to direct light or kept in the dark, using parallel factor analysis (PARAFAC) of the luminescence of three fluorophores present in yogurt, at least one of which is strongly related to the storage conditions. The fluorescence signals were identified from the emission and excitation spectra of the pure fluorescent substances, suggested to be vitamin A, tryptophan, and riboflavin. Regression models based on the PARAFAC scores for riboflavin were built using the scores obtained on the first day as the dependent variable and the scores obtained during storage as the independent variable. The decay of the analytical curve over the course of the experiment was evident; the riboflavin content can therefore be considered a good indicator of yogurt stability. It can thus be concluded that fluorescence spectroscopy combined with chemometric methods provides a fast way to monitor the oxidative stability and quality of yogurt.
This research work describes three studies of chemometric methods for the classification and characterization of edible oils and their quality parameters by Fourier-transform mid-infrared and near-infrared spectroscopy, and for monitoring the oxidative stability and quality of yogurt by fluorescence spectroscopy. The first and second studies aimed at the classification and characterization of edible oils and their quality parameters using Fourier-transform mid-infrared (FT-MIR) and near-infrared (NIR) spectroscopy measurements, respectively. The Kennard-Stone algorithm was used to select the training set after principal component analysis (PCA) was applied. The discrimination of canola oils from sunflower, corn, and soybean oils was investigated using SVM-DA, SIMCA, and PLS-DA. The quality parameters refractive index and relative density were predicted using partial least squares (PLS), iPLS, and LS-SVM multivariate calibration of the FT-MIR and NIR data. Several preprocessing alternatives (first derivative, multiplicative scatter correction, mean centering, orthogonal signal correction, and standard normal variate) were evaluated using the root mean square errors of cross validation (RMSECV) and prediction (RMSEP) as control parameters. The methodology developed is proposed for the direct determination of relative density and refractive index in edible oils and their classification, requiring only a few minutes per sample without any previous treatment. The third study evaluated the oxidative stability and quality of yogurt stored at 4 °C in the light or in the dark, combining parallel factor (PARAFAC) analysis with fluorescence spectroscopy. PARAFAC analysis of the fluorescence landscapes revealed three fluorophores present in the yogurt, at least one of which was strongly related to the storage conditions.
The fluorescence signal was resolved into excitation and emission profiles of the pure fluorescent compounds, suggested to be vitamin A, tryptophan, and riboflavin. Regression models based on the PARAFAC scores for riboflavin were built using the scores obtained on the first day as the dependent variable and the scores obtained during storage as the independent variable. It was clearly demonstrated that the slope of the analytical curve became smaller over the course of the experiment. Therefore, the riboflavin level could be considered a good indicator of yogurt stability. It is concluded that fluorescence spectroscopy in combination with chemometrics has potential as a fast method for monitoring the oxidative stability and quality of yogurt.
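The score-regression step — regressing a storage day's riboflavin scores on the day-0 scores and watching the slope decay — can be sketched as follows. The scores are synthetic stand-ins for PARAFAC riboflavin scores, and the uniform decay factors are assumptions for illustration.

```python
# Sketch of the score-regression step: PARAFAC riboflavin scores from a
# storage day are regressed on the day-0 scores; a shrinking slope tracks
# photodegradation. In practice the scores come from a PARAFAC decomposition
# of the fluorescence landscapes; here they are synthetic.

def slope(x, y):
    """Least-squares slope of y ~ a*x (through the origin)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

day0 = [1.0, 2.0, 3.0, 4.0]           # riboflavin scores on day 0

# Hypothetical degradation: scores shrink uniformly during light storage.
day7  = [0.8 * s for s in day0]
day14 = [0.6 * s for s in day0]

print(slope(day0, day7))    # 0.8
print(slope(day0, day14))   # 0.6  -> decaying slope indicates degradation
```

A slope near 1 would indicate a stable product, so plotting the slope against storage time gives the stability indicator the study describes.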
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Calibration transfer method"

1

Urbain, Gabriel, Alexander Vandesompele, Francis Wyffels, and Joni Dambre. "Calibration Method to Improve Transfer from Simulation to Quadruped Robots." In From Animals to Animats 15, 102–13. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97628-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Godfrey, Thomas A., and Gary N. Proulx. "A Heat Transfer Analysis and Alternative Method for Calibration of Copper Slug Calorimeters." In Performance of Protective Clothing and Equipment: 10th Volume, Risk Reduction Through Research and Testing, 42–62. 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959: ASTM International, 2016. http://dx.doi.org/10.1520/stp159320160002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Yue. "Calibration Transfer Methods." In Chemometric Methods in Analytical Spectroscopy Technology, 451–501. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1625-0_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pattanayak, S., V. K. Smith, and G. Van Houtven. "Improving the Practice of Benefits Transfer: A Preference Calibration Approach." In Environmental Value Transfer: Issues and Methods, 241–60. Dordrecht: Springer Netherlands, 2007. http://dx.doi.org/10.1007/1-4020-5405-x_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ayad, Sami M. M. E., Carlos R. P. Belchior, and José R. Sodré. "Methods in S.I. Engine Modelling: Auto-calibration of Combustion and Heat Transfer Models, and Exergy Analysis." In Energy, Environment, and Sustainability, 267–98. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-8618-4_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

van den Berg, Frans, and Åsmund Rinnan. "Calibration Transfer Methods." In Infrared Spectroscopy for Food Quality Analysis and Control, 105–18. Elsevier, 2009. http://dx.doi.org/10.1016/b978-0-12-374136-3.00005-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Haaland, David M. "Multivariate Calibration Methods Applied to Quantitative FT-IR Analyses." In Practical Fourier Transform Infrared Spectroscopy, 395–468. Elsevier, 1990. http://dx.doi.org/10.1016/b978-0-12-254125-4.50013-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gardner, J. L. "Appendix B. Uncertainty Example: Spectral Irradiance Transfer With Absolute Calibration By Reference To Illuminance." In Experimental Methods in the Physical Sciences, 547–57. Elsevier, 2005. http://dx.doi.org/10.1016/s1079-4042(05)41012-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tygesen, U. T., E. J. Cross, P. Gardner, K. Worden, B. A. Qadri, and T. J. Rogers. "Digital Transformation by the Implementation of the True Digital Twin Concept and Big Data Technology for Structural Integrity Management." In Ageing and Life Extension of Offshore Facilities, 143–57. ASME, 2022. http://dx.doi.org/10.1115/1.885789_ch9.

Full text
Abstract:
For the re-assessment of 'traditional' industry asset management methods, the key technology is Structural Health Monitoring (SHM) combined with recent developments in novel Big Data technologies. These technologies support the digital transformation of the industry, with the purpose of cost reduction and an increase in the structural safety level. Today's state-of-the-art methods encompass novel advanced analysis methods ranging from linear and nonlinear system identification, virtual sensing, Bayesian FEM updating, load calibration, and the quantification and propagation of uncertainties, to predictive maintenance. Challenges approachable with the new methods cover structural re-assessment analysis, Risk- and Reliability-Based Inspection Planning (RBI), and new ground-breaking methods for damage detection, many of which exploit recent advances in Machine Learning and AI and the concept of the 'True' Digital Twin. In this paper, a selection of the new disruptive technologies is presented along with a summary of the limitations of current approaches, leading to suggestions as to where tomorrow's new methods will emerge. New frameworks are suggested as the way forward for future R&D activities, based on an Ontological Approach founded on a shared communication purpose and the systemising/standardisation of the methods for performing SHM. The Ontology Approach can be embedded in, or made compatible with, organising (and decision-supporting) frameworks based on Population-based SHM methods and extended Probabilistic Risk Analysis. The new ideas also offer the potential benefit of gaining information/learning from a large pool of structures (the population) over time and, by transfer learning, transferring missing information to individual structures where less (or no) specific data are available.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Calibration transfer method"

1

Dong, Yazhou, Shi-wei Dong, Ying Wang, and Liming Gong. "Calibration method of retrodirective antenna array for microwave power transmission." In 2013 IEEE Wireless Power Transfer Conference (WPTC). IEEE, 2013. http://dx.doi.org/10.1109/wpt.2013.6556876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chan, T. L., K. Jambunathan, T. P. Leung, and Shirley A. Ashforth-Frost. "A SURFACE TEMPERATURE CALIBRATION METHOD FOR THERMOCHROMIC LIQUID CRYSTALS USING TRUE-COLOUR IMAGE PROCESSING." In International Heat Transfer Conference 10. Connecticut: Begellhouse, 1994. http://dx.doi.org/10.1615/ihtc10.3000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Jinlei, Nanzhe Ding, Jie Qian, and Wenlong Jin. "Research on Calibration Transfer Method of Portable Near Infrared Instrument." In 2020 IEEE 9th Joint International Information Technology and Artificial Intelligence Conference (ITAIC). IEEE, 2020. http://dx.doi.org/10.1109/itaic49862.2020.9338806.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mao, Jiafa, Linlin Yu, Hui Yu, Yahong Hu, and Weiguo Sheng. "A Transfer Learning Method with Multi-feature Calibration for Building Identification." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ding, Jianxin, Susu Ma, Wenze Yuan, Jinwen Liu, Chao Jia, and Xiaohai Cui. "A New Method for Measuring Calibration Factors of Microwave Power Transfer Standard." In 2021 IEEE MTT-S International Wireless Symposium (IWS). IEEE, 2021. http://dx.doi.org/10.1109/iws52775.2021.9499531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Chuanrong, Lingling Ma, Xinfang Yuan, Ning Wang, Lingli Tang, Shi Qiu, and Jianjian Li. "A color calibration method for spectral image based on radiative transfer mechanism." In SPIE Defense + Security, edited by Miguel Velez-Reyes and Fred A. Kruse. SPIE, 2014. http://dx.doi.org/10.1117/12.2050297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Yinyuan, Jay I. Frankel, and Majid Keyhani. "A Nonlinear, Rescaling Based Inverse Heat Conduction Calibration Method and Optimal Regularization Parameter Strategy." In 11th AIAA/ASME Joint Thermophysics and Heat Transfer Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2014. http://dx.doi.org/10.2514/6.2014-2386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhiguo, Han, Liang Faguo, Luan Peng, Li Suoyin, and Li Jingqiang. "Research on Calibration Uncertainty Evaluation Method of Transfer Standards for Calibrating the On-Wafer Load-Pull System." In 2013 Third International Conference on Instrumentation, Measurement, Computer, Communication and Control (IMCCC). IEEE, 2013. http://dx.doi.org/10.1109/imccc.2013.82.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kerbaol, V. "A global ERS-1/2 SAR wave mode imagettes calibration method." In Seventh International Conference on Electronic Engineering in Oceanography - Technology Transfer from Research to Industry. IEE, 1997. http://dx.doi.org/10.1049/cp:19970690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ahrend, Ulf, Angelika Hartmann, and Juergen Koehler. "Measurements of Local Heat Transfer Coefficients in Heat Exchangers With Inclined Flat Tubes by Means of the Ammonia Absorption Method." In 2010 14th International Heat Transfer Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/ihtc14-23092.

Full text
Abstract:
For high-efficiency compact heat exchangers, one needs detailed knowledge of the distribution of the local heat transfer. For a profound assessment of heat-enhancing mechanisms such as secondary flow structures, which are often found at rather small scales, it is necessary to perform heat transfer measurements with high spatial resolution. A technique that satisfies this need is the ammonia absorption method (AAM), which is based on the analogy between heat and mass transfer. The paper presented here describes a new calibration approach for the AAM, carried out through the use of a well-established heat transfer correlation for the hydrodynamic and thermal entry region in parallel-plate channels. This calibration approach is applied to heat transfer measurements in compact heat exchangers with inclined flat tubes and plane fins at Re_dh = 3000. The heat transfer performance is compared to that of fin-and-tube heat exchangers with round tubes. It is found that the novel devices show consistently higher global Nusselt numbers than comparable round-tube heat exchangers.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Calibration transfer method"

1

Anderson, Gerald L., and Kalman Peleg. Precision Cropping by Remotely Sensed Prototype Plots and Calibration in the Complex Domain. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7585193.bard.

Full text
Abstract:
This research report describes a methodology whereby multi-spectral and hyperspectral remote-sensing imagery is used to derive predicted field maps of selected plant growth attributes required for precision cropping. A major task in precision cropping is to establish areas of the field that differ from the rest of the field and share a common characteristic. Yield distribution maps can be prepared by yield monitors, which are available for some harvester types. Other field attributes of interest in precision cropping, e.g. soil properties, leaf nitrate, and biomass, are obtained by manual sampling of the field in a grid pattern. Maps of the various field attributes are then prepared from these samples by the inverse-distance interpolation method or by kriging. An improved interpolation method was developed, based on minimizing the overall curvature of the resulting map. Such maps are the ground-truth reference used for training the algorithm that generates the predicted field maps from remote sensing imagery. Both the reference and the predicted maps are stratified into "prototype plots", e.g. 15x15 blocks of 2 m pixels, giving a block size of 30x30 m. This averaging reduces the datasets to a manageable size and significantly improves the typically poor repeatability of remote sensing imaging systems. In the first two years of the project we used the Normalized Difference Vegetation Index (NDVI) to generate predicted yield maps of sugar beets and corn. The NDVI was computed from image cubes of three spectral bands, generated by an optically filtered three-camera video imaging system. A two-dimensional FFT-based regression model Y = f(X) was used, wherein Y was the reference map and X = NDVI was the predictor. The FFT regression method applies the wavelet-based, pixel-block, and image-rotation transforms to the reference and remote images prior to the fast Fourier transform (FFT) regression with the "phase lock" option.
A complex-domain map Yfft is derived by least-squares minimization between the amplitude matrices of X and Y via the 2D FFT. For one-time predictions, the phase matrix of Y is combined with the amplitude matrix of Yfft, whereby an improved predicted map Yplock is formed. Usually, the residuals of Yplock versus Y are about half the values of Yfft versus Y. For long-term predictions, the phase matrix of a "field mask" is combined with the amplitude matrices of the reference image Y and the predicted image Yfft. The field mask is a binary image of a pre-selected region of interest in X and Y. The resultant maps Ypref and Ypred are modified versions of Y and Yfft, respectively. The residuals of Ypred versus Ypref are even lower than those of Yplock versus Y. The maps Ypref and Ypred represent a close consensus of two independent imaging methods that "view" the same target. In the last two years of the project our remote sensing capability was expanded by the addition of a CASI II airborne hyperspectral imaging system and an ASD hyperspectral radiometer. Unfortunately, the cross-noise and poor-repeatability problems we had with multi-spectral imaging were exacerbated in hyperspectral imaging. We were able to overcome this problem by over-flying each field twice in rapid succession and developing the Repeatability Index (RI). The RI quantifies the repeatability of each spectral band in the hyperspectral image cube, making it possible to select the bands of higher repeatability for inclusion in the prediction model while excluding bands of low repeatability. Further segregation of high- and low-repeatability bands takes place in the prediction-model algorithm, which is based on a combination of a Genetic Algorithm and Partial Least Squares (PLS-GA).
In summary, a modus operandi was developed for deriving important plant-growth-attribute maps (yield, leaf nitrate, biomass, and sugar percentage in beets) from remote sensing imagery with sufficient accuracy for precision cropping applications. This achievement is remarkable, given the inherently high cross-noise between the reference and remote imagery as well as the highly non-repeatable nature of remote sensing systems. The above methodologies may be readily adopted by commercial companies that specialize in providing remotely sensed data to farmers.
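The NDVI that drives the yield-prediction models above is a simple per-pixel band ratio, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch with hypothetical reflectance values:

```python
# Sketch of the NDVI computation underlying the yield-prediction models:
# NDVI = (NIR - Red) / (NIR + Red), per pixel, from the red and near-infrared
# band reflectances. The reflectance values below are hypothetical.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Band averages over a single "prototype plot" (e.g. a 15x15 pixel block).
dense_canopy = ndvi(nir=0.50, red=0.08)   # healthy vegetation -> NDVI near 1
bare_soil    = ndvi(nir=0.25, red=0.20)   # sparse cover -> NDVI near 0

print(round(dense_canopy, 3))  # 0.724
print(round(bare_soil, 3))     # 0.111
```

Averaging the bands over each prototype-plot block before computing NDVI, as the report does, is what suppresses the pixel-level noise of the imaging system.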
APA, Harvard, Vancouver, ISO, and other styles
2

Hodul, M., H. P. White, and A. Knudby. A report on water quality monitoring in Quesnel Lake, British Columbia, subsequent to the Mount Polley tailings dam spill, using optical satellite imagery. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330556.

Full text
Abstract:
In the early morning on the 4th of August 2014, a tailings dam near Quesnel, BC burst, spilling approximately 25 million m3 of runoff containing heavy metal elements into nearby Quesnel Lake (Byrne et al. 2018). The runoff slurry, which included lead, arsenic, selenium, and vanadium spilled through Hazeltine Creek, scouring its banks and picking up till and forest cover on the way, and ultimately ended up in Quesnel Lake, whose water level rose by 1.5 m as a result. While the introduction of heavy metals into Quesnel Lake was of environmental concern, the additional till and forest cover scoured from the banks of Hazeltine Creek added to the lake has also been of concern to salmon spawning grounds. Immediate repercussions of the spill involved the damage of sensitive environments along the banks and on the lake bed, the closing of the seasonal salmon fishery in the lake, and a change in the microbial composition of the lake bed (Hatam et al. 2019). In addition, there appears to be a seasonal resuspension of the tailings sediment due to thermal cycling of the water and surface winds (Hamilton et al. 2020). While the water quality of Quesnel Lake continues to be monitored for the tailings sediments, primarily by members at the Quesnel River Research Centre, the sample-and-test methods of water quality testing used, while highly accurate, are expensive to undertake, and not spatially exhaustive. The use of remote sensing techniques, though not as accurate as lab testing, allows for the relatively fast creation of expansive water quality maps using sensors mounted on boats, planes, and satellites (Ritchie et al. 2003). The most common method for the remote sensing of surface water quality is through the use of a physics-based semianalytical model which simulates light passing through a water column with a given set of Inherent Optical Properties (IOPs), developed by Lee et al. (1998) and commonly referred to as a Radiative Transfer Model (RTM). 
The RTM forward-models a wide range of water-leaving spectral signatures based on IOPs determined by a mix of water constituents, including natural materials and pollutants. Remote sensing imagery is then used to invert the model by finding the modelled water spectrum which most closely resembles that seen in the imagery (Brando et al 2009). This project set out to develop an RTM water quality model to monitor the water quality in Quesnel Lake, allowing for the entire surface of the lake to be mapped at once, in an effort to easily determine the timing and extent of resuspension events, as well as potentially investigate greening events reported by locals. The project intended to use a combination of multispectral imagery (Landsat-8 and Sentinel-2), as well as hyperspectral imagery (DESIS), combined with field calibration/validation of the resulting models. The project began in the Autumn before the COVID pandemic, with plans to undertake a comprehensive fieldwork campaign to gather model calibration data in the summer of 2020. Since a province-wide travel shutdown and social distancing procedures made it difficult to carry out water quality surveying in a small boat, an insufficient amount of fieldwork was conducted to suit the needs of the project. Thus, the project has been put on hold, and the primary researcher has moved to a different project. This document stands as a report on all of the work conducted up to April 2021, intended largely as an instructional document for researchers who may wish to continue the work once fieldwork may freely and safely resume. This research was undertaken at the University of Ottawa, with supporting funding provided by the Earth Observations for Cumulative Effects (EO4CE) Program Work Package 10b: Site Monitoring and Remediation, Canada Centre for Remote Sensing, through the Natural Resources Canada Research Affiliate Program (RAP).
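The inversion step described in the abstract — forward-modelling candidate water-leaving spectra and picking the one closest to the observed pixel spectrum — can be sketched with a hypothetical lookup table standing in for the actual radiative transfer model. All spectra and concentrations below are invented for illustration.

```python
# Sketch of RTM inversion by spectrum matching: forward-modelled spectra for
# candidate constituent concentrations are compared to the observed pixel
# spectrum, and the best least-squares fit wins. The lookup table is a
# hypothetical stand-in for the real forward model.

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Forward-modelled 4-band spectra for candidate sediment concentrations.
lookup = {
    0.0:  [0.02, 0.05, 0.04, 0.01],  # clear water
    5.0:  [0.04, 0.08, 0.09, 0.05],  # moderate sediment
    20.0: [0.09, 0.14, 0.18, 0.12],  # heavy resuspension
}

def invert(observed):
    """Return the concentration whose modelled spectrum best fits the pixel."""
    return min(lookup, key=lambda c: sq_dist(lookup[c], observed))

pixel = [0.05, 0.09, 0.08, 0.05]     # an observed water-leaving spectrum
print(invert(pixel))                 # 5.0
```

In a real retrieval the candidate set spans combinations of several constituents (and many more bands), but the matching principle is the same.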
APA, Harvard, Vancouver, ISO, and other styles