Dissertations / Theses on the topic 'Boundary estimation'



Consult the top 50 dissertations / theses for your research on the topic 'Boundary estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Mu, Yingfei. "Boundary Estimation." Diss., North Dakota State University, 2015. http://hdl.handle.net/10365/25195.

Full text
Abstract:
The existing statistical methods do not provide a satisfactory solution to determining the spatial pattern in spatially referenced data, which is often required by research in many areas, including geology, agriculture, forestry, marine science and epidemiology, for identifying the source of the unusual environmental factors associated with a certain phenomenon. This work provides a novel algorithm which can be used to delineate the boundary of an area of hot spots accurately and efficiently. Our algorithm, first of all, does not assume any pre-specified geometric shape for the change-curve. Secondly, the computational complexity of our novel algorithm for change-curve detection is of the order O(n^2), which is much smaller than the 2^O(n^2) required by the CUSP algorithm proposed in Müller & Song [8] and by Carlstein's [2] estimators. Furthermore, our novel algorithm yields a consistent estimate of the change-curve as well as of the underlying distribution mean of the observations in the regions. We also study the hypothesis test of the existence of the change-curve under independence of the spatially referenced data. We then provide simulation studies as well as a real case study to compare our algorithm with a popular boundary estimation method: the spatial scan statistic.
APA, Harvard, Vancouver, ISO, and other styles
2

Pasha, Hasan G. "Estimation of Static Stiffnesses from Free Boundary Dynamic (FRF) Measurements." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1416569956.

Full text
3

Yang, Fan. "Estimation of the Probability a Brownian Bridge Crosses a Concave Boundary." [Greenville, N.C.] : East Carolina University, 2010. http://hdl.handle.net/10342/2798.

Full text
4

Wright, Andrew D. "Acoustic boundary condition estimation in a near-scale gas-turbine combustor." Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-08222008-063430/.

Full text
5

Zhang, Hanzhong. "A moving boundary problem in a distributed parameter system with application to diode modeling." Access restricted to users with UT Austin EID, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3037035.

Full text
6

Monokrousos, Antonios. "Optimisation and control of boundary layer flows." Licentiate thesis, Stockholm : Skolan för teknikvetenskap, Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10652.

Full text
7

Van der Kamp, Derek. "Ceilometer observations of Vancouver's urban boundary layer : validation and mixed-layer height estimation." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1599.

Full text
Abstract:
A mini-lidar system, Vaisala's CL31 ceilometer, was installed within a suburban area of Vancouver, BC, for the purpose of making continuous observations of the boundary layer over a period of 11 months. Initial validation of the ceilometer for use in boundary layer observations was undertaken. This involved the comparison of ceilometer data with eight months of ground-level particulate matter measurements, as well as with 16 vertical profiles of particulate matter and meteorological data. Once a variety of persistent noise structures within the data were accounted for, it was found that the ceilometer data showed good agreement with the particulate matter data, suggesting its usefulness for assessing air quality throughout the bottom 1 km of the atmosphere. Additionally, two algorithms were developed to estimate the height of the convective boundary layer, or the mixed-layer height, from the ceilometer data. One involved fitting an ideal profile to the measured data, while the other involved locating a minimum gradient in the backscatter profile. The performance of these two techniques was assessed and compared, and it was found that the ideal-profile method was the more robust of the two. Finally, mixed-layer heights were estimated for fair-weather, convectively active days. In order to isolate such conditions, an automatic flagging algorithm was developed; however, additional manual assessment was needed to avoid unsuitable conditions. Mixed-layer heights were estimated for 19 days over an 11-month period. The estimates presented here were found to agree with previous observations. Daily maximum mixed-layer heights ranged from 650 m in July to 350 m in December, indicating that the height of the convective boundary layer within Vancouver is significantly suppressed due to the city's coastal location.
8

Ruda, Harald. "Estimation of the parameters of a boundary contour system using psychophysical hyperacuity experiments." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12843.

Full text
Abstract:
Dissertation (Ph.D.)--Boston University
Visual hyperacuity enables observers to make accurate judgments of the relative positions of stimuli when the differences are smaller than the size of a single cone in the fovea. Because hyperacuity can serve as a gauge for precisely measuring characteristics of the visual system, it can provide stringent tests for models of the visual system. A variant of the Boundary Contour System (BCS) model is here used to clarify previously unexplained psychophysical hyperacuity results involving contrast polarity, stimulus separation, and sinusoidal masking gratings. Two-dot alignment thresholds were studied by Levi & Waugh (1996) by varying the gap between the dots, with same and opposite contrast polarity with respect to the background, and also with and without band-limited sinusoidal grating masks of different orientations. They found that when the gap between the dots is small (6 arcmin), different patterns of misalignment thresholds are obtained for the same and different contrast polarity conditions. However, when the gap is large (24 arcmin), the same pattern of thresholds was obtained irrespective of contrast polarity. The simulations presented here replicate these findings, producing the same pattern of results when varying the gap between the dots, with same and opposite contrast polarity with respect to the background, and also with and without sinusoidal grating masks of different orientations. The vision model used (BCS) is able to produce these patterns because of its inherent processing using contrast insensitivity, spatial and oriented competition, and long-range completion layers. A novel aspect of the model is the use of sampled field processing, which simplifies the model's equations. Modified Hebbian learning and a neural decision module are proposed as mechanisms that link the vision model's outputs to a decision criterion. All model parts have plausible neurobiological correlates. 
In addition, psychophysical hyperacuity experiments served to map the limits of inhibitory spatial interactions. The results show that inhibition occurs even when only half of the split flanking line of Badcock & Westheimer (1985b) is used, suggesting that subthreshold activity in units representing the line extends beyond the end of the line. Furthermore, strong inhibition was observed with a flanking illusory line grating.
9

Freeman, Matthew A. "Multi-area power system state estimation utilizing boundary measurements and phasor measurement units (PMUs)." Thesis, Texas A&M University, 2006. http://hdl.handle.net/1969.1/4178.

Full text
Abstract:
The objective of this thesis is to prove the validity of a multi-area state estimator and investigate the advantages it provides over a serial state estimator. This is done utilizing the IEEE 118 Bus Test System as a sample system. This thesis investigates the benefits that stem from utilizing a multi-area state estimator instead of a serial state estimator. These benefits are largely in the form of increased accuracy and decreased processing time. First, the theory behind power system state estimation is explained for a simple serial estimator. Then the thesis shows how conventional measurements and newer, more accurate PMU measurements work within the framework of weighted least squares estimation. Next, the multi-area state estimator is examined closely and the additional measurements provided by PMUs are used to increase accuracy and computational efficiency. Finally, the multi-area state estimator is tested for accuracy, its ability to detect bad data, and computation time.
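The weighted least squares machinery referred to above can be illustrated on a toy linear (DC) model, where PMU measurements simply enter as extra measurement rows with much larger weights (inverse error variances). This is a minimal sketch with an invented 3-bus example, not the thesis's estimator or the IEEE 118 Bus Test System.

```python
import numpy as np

def wls_estimate(H, z, weights):
    """Weighted least squares: minimize (z - H x)^T W (z - H x)."""
    W = np.diag(weights)
    # Normal equations: (H^T W H) x = H^T W z
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Invented 3-bus DC example: the states are two voltage angles (slack bus fixed).
H = np.array([[1.0,  0.0],   # conventional measurement of angle 1
              [0.0,  1.0],   # conventional measurement of angle 2
              [1.0, -1.0],   # flow measurement between buses 1 and 2
              [1.0,  0.0]])  # PMU measurement of angle 1
x_true = np.array([0.10, -0.05])
z = H @ x_true                               # noise-free measurements for illustration
weights = np.array([1.0, 1.0, 1.0, 100.0])   # PMU row weighted for its smaller variance
x_hat = wls_estimate(H, z, weights)
print(x_hat)  # recovers [0.10, -0.05] up to roundoff, since z is noise-free
```

In this formulation the "more accurate PMU measurements" need no special treatment: they are just additional rows of H whose weights are much larger than those of the conventional measurements.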
10

Sun, Walter. "Learning the dynamics of deformable objects and recursive boundary estimation using curve evolution techniques." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34978.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 161-176).
The primary objective of this thesis is to develop robust algorithms for the incorporation of statistical information in the problem of estimating object boundaries in image data. We propose two primary algorithms, one which jointly estimates the underlying field and boundary in a static image and another which performs image segmentation across a temporal sequence. Some motivating applications come from the earth sciences and medical imaging. In particular, we examine the problems of oceanic front and sea surface temperature estimation in oceanography, soil boundary and moisture estimation in hydrology, and left ventricle boundary estimation across a cardiac cycle in medical imaging. To accomplish joint estimation in a static image, we introduce a variational technique that incorporates the spatial statistics of the underlying field to segment the boundary and estimate the field on either side of the boundary. For image segmentation across a sequence of frames, we propose a method for learning the dynamics of a deformable boundary that uses these learned dynamics to recursively estimate the boundary in each frame over time. In the recursive estimation algorithm, we extend the traditional particle filtering approach by applying sample-based methods to a complex shape space.
We find a low-dimensional representation for this shape space to make the learning of the dynamics tractable and then incorporate curve evolution into the state estimates to recursively estimate the boundaries. Experimental results are obtained on cardiac magnetic resonance images, sea surface temperature data, and soil moisture maps. Although we focus on these application areas, the underlying mathematical principles posed in the thesis are general enough that they can be applied to other applications as well. We analyze the algorithms on data of differing quality, with both high and low SNR data and also full and sparse observations.
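The "traditional particle filtering approach" that this thesis extends can be sketched for a scalar state. The toy random-walk model and all numbers below are invented for illustration; the actual method filters in a learned low-dimensional shape space with curve evolution, which this sketch does not attempt.

```python
import numpy as np

def particle_filter(y_seq, n_particles=1000, q=0.1, r=0.5, rng=None):
    """Bootstrap particle filter for a scalar random-walk state with Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    particles = rng.standard_normal(n_particles)   # samples from an initial prior
    estimates = []
    for y in y_seq:
        particles = particles + q * rng.standard_normal(n_particles)  # predict step
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)                 # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)          # resample step
        particles = particles[idx]
        estimates.append(particles.mean())         # posterior-mean estimate
    return np.array(estimates)

# Track a slowly drifting true state from noisy observations.
rng = np.random.default_rng(1)
truth = np.cumsum(0.05 * rng.standard_normal(100)) + 2.0
obs = truth + 0.5 * rng.standard_normal(100)
est = particle_filter(obs, rng=np.random.default_rng(2))
print(np.mean(np.abs(est - truth)))  # typically well below the 0.5 observation noise
```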
11

Åkervik, Espen. "Global stability and feedback control of boundary layer flows." Doctoral thesis, KTH, Linné Flow Center, FLOW, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9547.

Full text
Abstract:
In this thesis the stability of generic boundary layer flows is studied from a global viewpoint using optimization methods. Global eigenmodes of the incompressible linearized Navier-Stokes equations are computed using the Krylov subspace Arnoldi method. These modes serve as a tool both to study asymptotic stability and as a reduced basis to study transient growth. Transient growth is also studied using adjoint iterations. The knowledge obtained from the stability analysis is used to devise systematic feedback control in the Linear Quadratic Gaussian framework. The dynamics is assumed to be described by the linearized Navier-Stokes equations. Actuators and sensors are designed and a Kalman filtering technique is used to reconstruct the unknown flow state from noisy measurements. This reconstructed flow state is used to determine the control feedback which is applied to the Navier-Stokes equations through properly designed actuators. Since the control and estimation gains are obtained through an optimization process, and the Navier-Stokes equations typically form a very high-dimensional system when discretized, there is an interest in reducing the complexity of the equations. A standard method to construct a reduced order model is to perform a Galerkin projection of the full equations onto the subspace spanned by a suitable set of vectors, such as global eigenmodes and balanced truncation modes.
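The estimation step described above, reconstructing an unknown state from noisy measurements with a Kalman filter, can be sketched on a small linear system. The matrices below are invented for illustration; the thesis applies the same predict/update recursion to the high-dimensional discretized linearized Navier-Stokes equations.

```python
import numpy as np

def kalman_filter(A, C, Q, R, y_seq, x0, P0):
    """Standard discrete-time Kalman filter; returns the state estimates."""
    x, P = x0, P0
    estimates = []
    for y in y_seq:
        # Predict
        x = A @ x
        P = A @ P @ A.T + Q
        # Update with the new measurement
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return estimates

# Toy stable 2-state system observed through one noisy sensor.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x_true = np.array([1.0, -1.0])
ys = []
for _ in range(50):
    x_true = A @ x_true
    ys.append(C @ x_true + 0.1 * rng.standard_normal(1))
est = kalman_filter(A, C, Q, R, ys, np.zeros(2), np.eye(2))
print(est[-1], x_true)  # the estimate tracks the decaying true state
```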
12

Saeed, Umar. "Atmospheric-boundary-layer height retrieval using microwave radiometer and lidar sensors : algorithms and error estimation." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/396606.

Full text
Abstract:
The Atmospheric Boundary Layer Height (ABLH) is an important parameter in weather forecasting, meteorology, avionics, and air-quality and dispersion models. Local development of the Atmospheric Boundary Layer (ABL) over the full diurnal cycle is a function of several parameters which, among others, include the geographical location of the place, its topography, the time of year, and day and night conditions. There are several remote sensing instruments and methods to retrieve the ABLH; however, none of these can fully measure ABL development under all atmospheric conditions. This Ph.D. thesis deals with estimation of the ABLH over the full diurnal cycle, which includes the daytime mixing layer, the nocturnal stable boundary layer, and the morning/evening transition boundary layer, by using ground-based microwave-radiometer (MWR) and ceilometer (lidar principle) remote-sensing instruments as well as related signal processing techniques. ABLH estimates from Doppler lidar and radiosondes are used as references. A further aim of this thesis is to combine data from these two instruments, thus exploiting their individual strengths and overcoming their limitations. In this context, this thesis has been structured around three main goals: First, a synergetic method for estimation of the Mixing Layer Height (MLH) is presented. Towards this end, uncertainties in the MLH derived from backscattered ceilometer signals and MWR-retrieved potential temperature profiles are analysed and compared. While the Extended Kalman Filter (EKF) is used as an adaptive filter to process backscattered lidar signals from the ceilometer, the parcel method is used with the MWR-retrieved potential temperature profile. Finally, the two methods are combined into a new methodology for synergetic MLH retrieval. Second, methods for the estimation of the nocturnal Stable-Boundary-Layer Height (SBLH) from ceilometer and MWR data, in stand-alone and in synergetic fashion, are investigated.
The SBLH from ceilometer backscattered lidar signals is retrieved by using Minimum Variance Regions (MVRs) as signatures of aerosol stratification in the SBL. For the MWR, idealized physical models from the literature are used to estimate the SBLH. Next, a synergetic SBLH retrieval method is developed, which combines measurement data from both instruments. Finally, a preliminary study on the feasibility of Large Eddy Simulation (LES) as a tool for understanding the ABL is presented. To this end, LES-simulated lidar backscatter and potential temperature profiles are compared against instrumental measurements. In addition, a new method for direct retrieval of the MLH from LES-simulated brightness temperature measurements is presented, hence, alleviating the need for physical temperature retrieval first. The impact of retrieval errors on MLH estimates is also investigated. The techniques developed in this Ph.D. have been tested in the HOPE measurement campaign (Jülich, Germany), where different test cases under different atmospheric conditions have been considered.
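The parcel method mentioned above admits a very small sketch: the mixing-layer height is taken as the lowest level at which the potential temperature profile exceeds its surface (parcel) value. The idealized profile and heights below are invented for illustration, not MWR retrievals.

```python
import numpy as np

def parcel_method_mlh(heights, theta, theta_surface):
    """Return the first height where theta rises above the surface parcel value."""
    for z, t in zip(heights, theta):
        if t > theta_surface:
            return z
    return None  # the parcel is never stopped within the profile

heights = np.arange(0.0, 2000.0, 100.0)              # m
theta = np.where(heights < 800.0, 300.0,             # well-mixed layer: constant theta
                 300.0 + 0.005 * (heights - 800.0))  # stable layer above: theta increases
mlh = parcel_method_mlh(heights, theta, theta_surface=300.0)
print(mlh)  # 900.0: the first level where theta exceeds the 300 K surface value
```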
13

Fox, Colin. "Conductance imaging : estimation of isotropic conductance perturbations from low-frequency boundary measurements in circular geometries." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303010.

Full text
14

Sehlstedt, Niklas. "Hybrid methods for inverse force estimation in structural dynamics." Doctoral thesis, KTH, Vehicle Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3528.

Full text
15

Strydom, Willem Jacobus. "Recovery based error estimation for the Method of Moments." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96881.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: The Method of Moments (MoM) is routinely used for the numerical solution of electromagnetic surface integral equations. Solution errors are inherent to any numerical computational method, and error estimators can be effectively employed to reduce and control these errors. In this thesis, gradient recovery techniques of the Finite Element Method (FEM) are formulated within the MoM context, in order to recover a higher-order charge of a Rao-Wilton-Glisson (RWG) MoM solution. Furthermore, a new recovery procedure, based specifically on the properties of the RWG basis functions, is introduced by the author. These recovered charge distributions are used for a posteriori error estimation of the charge. It was found that the newly proposed charge recovery method has the highest accuracy of the considered recovery methods, and is the most suited for applications within recovery based error estimation. In addition to charge recovery, the possibility of recovery procedures for the MoM solution current is also investigated. A technique is explored whereby a recovered charge is used to find a higher-order divergent current representation. Two newly developed methods for the subsequent recovery of the solenoidal current component, as contained in the RWG solution current, are also introduced by the author. A posteriori error estimation of the MoM current is accomplished through the use of the recovered current distributions. A mixed second-order recovered current, based on a vector recovery procedure, was found to produce the most accurate results. The error estimation techniques developed in this thesis could be incorporated into an adaptive solver scheme to optimise the solution accuracy relative to the computational cost.
16

McBride, Marvin B. "Estimation of stratocumulus-topped boundary layer depth using sea surface and remotely sensed cloud-top temperatures." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA380378.

Full text
Abstract:
Thesis (M.S. in Meteorology and Physical Oceanography)--Naval Postgraduate School, June 2000.
Thesis advisors: Philip A. Durkee and Carlyle H. Wash. June 2000. Includes bibliographical references (p. 79-80). Also available in print.
17

Al-Romaih, Jamal S. "Stage-discharge assessment in meandering channels : conveyance estimation and boundary shear stress distribution in meandering channels." Thesis, University of Bradford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.680818.

Full text
18

Wang, Xiuquan. "Parameter Estimation in the Advection Diffusion Reaction Model With Mean Occupancy Time and Boundary Flux Approaches." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/976.

Full text
Abstract:
In this dissertation, we examine an advection-diffusion model for insects inhabiting a spatially heterogeneous environment and moving toward a more favorable environment. We first study the effects of adding a term describing drift or advection toward a favorable environment to diffusion models for population dynamics. The diffusion model is a basic linear two-dimensional diffusion equation describing local dispersal of species. The mathematical advection terms are taken to be Fickian and describe directed movement of the population toward the favorable environment. For this model, the landscape is composed of one homogeneous habitat patch embedded in a spatially heterogeneous environment, and the boundary of the habitat inhabited by the population acts as a lethal edge. We also derive the mean occupancy time and the boundary flux of the habitat patch. The diffusion rate and advection parameters of the advection-diffusion model are estimated based on mean occupancy time and boundary flux. We then introduce two methods for the identification of these coefficients in the model, as well as the capture rate. These two new methods have some advantages over other methods of estimating those parameters, including reduced computational cost and ease of use in the field. We further examine the statistical properties of the new methods through simulation, and discuss how mean occupancy time and boundary flux could be estimated in field experiments.
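For intuition, the mean occupancy time of a patch with lethal (absorbing) edges can be computed for the simplest 1-D pure-diffusion analogue of such a model, where it solves D T''(x) = -1 with T = 0 at both edges and has the closed form T(x) = x(L - x)/(2D). This is a sketch of the concept only, not the dissertation's 2-D advection-diffusion computation; the finite-difference solve below is checked against the closed form.

```python
import numpy as np

def mean_occupancy_time(D, L, n):
    """Solve D T'' = -1 with T = 0 at both edges on an n-point interior grid."""
    h = L / (n + 1)
    # Tridiagonal second-difference operator scaled by D / h^2
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) * D / h**2
    rhs = -np.ones(n)
    x = h * np.arange(1, n + 1)
    return x, np.linalg.solve(A, rhs)

D, L = 0.5, 10.0                       # invented diffusion rate and patch length
x, T = mean_occupancy_time(D, L, 99)
T_exact = x * (L - x) / (2 * D)        # closed-form mean occupancy time
print(np.max(np.abs(T - T_exact)))     # near machine precision: the scheme is exact for quadratics
```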
19

Zhou, Mi. "Advancing Assessments on Aerosol Radiative Effect by Measurement-based Direct Effect Estimation and through Developing an Explicit Climatological Convective Boundary Layer Model." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14091.

Full text
Abstract:
The first part of the thesis assesses the aerosol direct radiative effect (ADRE) with a focus on ground-based AERONET and satellite MODIS measurements. The AERONET aerosol climatology is used, in conjunction with surface albedo and cloud products from MODIS, to calculate the ADRE and its normalized form (NADRE) for distinct aerosol regimes. The NADRE is defined as the ADRE normalized by the optical depth at 550 nm and is mainly determined by internal aerosol optical properties and geographical parameters. These terms are evaluated for cloud-free and cloudy conditions and for all-mode and fine-mode aerosols. We find that the NADRE of fine-mode aerosol is larger at the TOA but smaller at the surface in comparison to that of all-mode aerosol. The cloudy-sky TOA ADRE is sensitive to the relative location of the aerosol and cloud layers. The high-resolution MODIS land surface albedo is also applied to study the clear-sky ADRE over North Africa and the Arabian Peninsula for summer 2001. The TOA ADRE shows high spatial variability, with close similarity to that of the surface albedo. The second part of the thesis develops a 2-D conceptual model for a climatological convective boundary layer over land as a persistent and distinct component in climate models, where the convective-scale motion is explicitly described by fluid dynamics and thermodynamics while the smaller-scale effect is parameterized for a neutral stratification. Our conceptual model reasonably reproduces the essential statistics of a convective boundary layer in comparison to large eddy simulations. The major difference is that our model produces a better organized and more constrained spatial distribution with coherent convective cells. The simulations for a climatological convective boundary layer are conducted for a prescribed constant and homogeneous surface heat flux and a specified cooling term representing the background large-scale thermal balance.
The results show 2-D coherent structures of convective cells with characteristic scales comparable to the PBL height; downward velocity maxima that are 70-80% of the accompanying upward maxima; vertical profiles with constant potential temperature and linearly decreasing heat flux; and a square-root increase in velocity magnitude with increasing surface heat flux.
APA, Harvard, Vancouver, ISO, and other styles
20

Brännvall, Tobias. "Source Term Estimation in the Atmospheric Boundary Layer : Using the adjoint of the Reynolds Averaged Scalar Transport equation." Thesis, Umeå universitet, Institutionen för fysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-103671.

Full text
Abstract:
This work evaluates whether Reynolds-averaged Computational Fluid Dynamics can be used to locate the source of a measured gas from real field measurements. The method uses the adjoint of the Reynolds-Averaged Scalar Transport equation, explained and derived herein. Since the inverse model is only as good as the forward model, forward runs are made first to evaluate the turbulence model. Reynolds-Averaged Navier-Stokes is solved in a domain containing 4 cubes in a 2x2 grid, generating a velocity field for the domain. The turbulence model is a union of two modifications to the standard two-equation k-ε model, intended both to capture blunt-body turbulence and to model the atmospheric boundary layer. This velocity field is then inserted into the Reynolds-Averaged Scalar Transport equation, and the simulation is compared to data from the Environmental Flow (EnFlo) wind tunnel in Surrey. Finally, the adjoint scalar transport equation is solved, both for synthetic data generated in the forward run and for the data from EnFlo. The turbulent Schmidt number was found to play a major role in capturing the dispersed gas; three values were tested: the standard 0.7, the unconventional 0.3, and a height-dependent Schmidt number. The widely accepted value of 0.7 did not capture the dispersion at all and gave a large model error, so the adjoint scalar transport was solved for 0.3 and for the height-dependent Schmidt number. The interaction between the measurements, the real source strength (which is not used in the adjoint equation, but is needed to find the source) and the location of the source is intricate indeed: overestimation and underestimation in the forward model may cancel out, allowing the correct source, with the correct strength, to be found. It is concluded that Reynolds-averaged computational fluid dynamics may prove useful in source term estimation.
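The adjoint relationship exploited in this thesis can be illustrated on a toy discrete forward model: for a linear forward operator, one adjoint (transposed-system) solve per sensor yields the sensitivity of that sensor to a source in every candidate cell, and a source can then be located by matching those sensitivities to the measurements. A minimal 1-D sketch (the discretization, parameter values and helper names are invented for illustration; this is not the thesis's RANS solver):

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (toy-sized systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def transport_matrix(n, u=1.0, kappa=0.5, dx=1.0):
    """Steady 1-D advection-diffusion with upwind advection and zero
    boundary concentrations: u*dc/dx - kappa*d2c/dx2 = s."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = u / dx + 2.0 * kappa / dx ** 2
        if i > 0:
            A[i][i - 1] = -u / dx - kappa / dx ** 2
        if i < n - 1:
            A[i][i + 1] = -kappa / dx ** 2
    return A

n = 12
A = transport_matrix(n)
truth = [0.0] * n
truth[3] = 1.0                        # synthetic unit source in cell 3
c = solve(A, truth)                   # forward run
sensors = [6, 9]                      # two downstream "measurement" cells
obs = [c[i] for i in sensors]
# One adjoint solve (transposed system) per sensor gives the sensitivity
# of that sensor to a source placed in every cell.
At = [[A[j][i] for j in range(n)] for i in range(n)]
G = []
for i in sensors:
    e = [0.0] * n
    e[i] = 1.0
    G.append(solve(At, e))

def misfit(j):
    """Squared mismatch between the sensitivities of cell j and the data."""
    return sum((G[k][j] - obs[k]) ** 2 for k in range(len(sensors)))

best = min(range(n), key=misfit)      # recovers cell 3
```

Here `best` recovers the synthetic source cell because each adjoint solve reads off one row of the inverse transport operator, so the columns act as fingerprints of candidate source locations.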
APA, Harvard, Vancouver, ISO, and other styles
21

Schlöder, Johannes P. "Numerische Methoden zur Behandlung hochdimensionaler Aufgaben der Parameteridentifizierung." Bonn : [s.n.], 1987. http://catalog.hathitrust.org/api/volumes/oclc/18814825.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Rau, Christian. "Curve Estimation and Signal Discrimination in Spatial Problems." The Australian National University. School of Mathematical Sciences, 2003. http://thesis.anu.edu.au./public/adt-ANU20031215.163519.

Full text
Abstract:
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first of these is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally parametric nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a 'tracking' approach. These topics are discussed in Chapters 2 and 3, with pertaining background material given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability; the construction can be likened to only a few existing approaches, and may thus be considered our main contribution. Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics surrounding edge detection which originated in the fields of computer vision and signal processing; these connections are exploited to obtain greater robustness of the likelihood estimator, for instance in the presence of sharp corners. Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step.
A case study for remotely-sensed navigation radar data exemplifies the methodology of Chapter 6.
APA, Harvard, Vancouver, ISO, and other styles
23

Balsubramanian, Ravishankar. "Error estimation and grid adaptation for functional outputs using discrete-adjoint sensitivity analysis." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-10032002-113749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Gahard, Claude F. "An estimation of the ability to forecast boundary layer mixing height and wind parameters through forecast verification over Fort Ord." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03sep%5FGahard.pdf.

Full text
Abstract:
Thesis (M.S. in Meteorology and Physical Oceanography)--Naval Postgraduate School, September 2003.
Thesis advisor(s): Wendell A. Nuss, David S. Brown. Includes bibliographical references (p. 65-66). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
25

Boonvisut, Pasu. "Active Exploration of Deformable Object Boundary Constraints and Material Parameters Through Robotic Manipulation Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1369078402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Manomaiphiboon, Kasemsan. "Estimation of Emission Strength and Air Pollutant Concentrations by Lagrangian Particle Modeling." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5141.

Full text
Abstract:
A Lagrangian particle model was applied to estimating emission strength and air pollutant concentrations, specifically for the short-range dispersion of an air pollutant in the atmospheric boundary layer. The model performance was evaluated with experimental data. The model was then used as the platform for a parametric uncertainty analysis, in which the effects of uncertainties in five model parameters (Monin-Obukhov length, friction velocity, roughness height, mixing height, and the universal constant of the random component) on mean ground-level concentrations were examined under slightly and moderately stable conditions. The analysis was performed in a probabilistic framework using Monte Carlo simulations with Latin hypercube sampling and linear regression modeling. In addition, four studies related to Lagrangian particle modeling are included: an alternative technique for formulating joint probability density functions of velocity for atmospheric turbulence based on the Koehler-Symanowski technique; an analysis of local increments in a multidimensional single-particle Lagrangian particle model using the algebra of Itô integrals and the Wagner-Platen formula; an analogy between the diffusion limit of Lagrangian particle models and the classical theory of turbulent diffusion; and an evaluation of some proposed forms of the Lagrangian velocity autocorrelation of turbulence.
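The uncertainty analysis described above combines Monte Carlo simulation with Latin hypercube sampling. The stratified draw itself is compact; a minimal sketch follows (parameter names and ranges are illustrative placeholders, not values from the thesis):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube design: each parameter's range is cut into n_samples
    equal strata, one uniform point is drawn per stratum, and the strata are
    shuffled independently per parameter to break correlation."""
    rng = random.Random(seed)
    samples = [{} for _ in range(n_samples)]
    for name, (lo, hi) in bounds.items():
        points = [lo + (hi - lo) * (i + rng.random()) / n_samples
                  for i in range(n_samples)]
        rng.shuffle(points)
        for sample, p in zip(samples, points):
            sample[name] = p
    return samples

# Illustrative parameter ranges only (not values from the thesis):
bounds = {"monin_obukhov_length": (10.0, 200.0),
          "friction_velocity": (0.1, 0.6),
          "roughness_height": (0.01, 0.5)}
design = latin_hypercube(20, bounds)
```

Unlike simple random sampling, every marginal range is guaranteed to be covered evenly, which is why far fewer model runs suffice for the regression step.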
APA, Harvard, Vancouver, ISO, and other styles
27

Lange, Vega Diego. "Lidar and S-band radar profiling of the atmosphere : adaptive processing for boundary-layer monitoring, optical-parameter error estimation, and application cases." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/279246.

Full text
Abstract:
This Ph.D. thesis addresses remote sensing of the atmosphere by means of lidar and S-band clear-air weather radar, and the related signal processing. Active remote sensing with these instruments offers unprecedented spatial and temporal resolution for vertical atmospheric profiling and for the retrieval of key optical and physical atmospheric products in an increasingly stringent environmental regulatory framework. The first goal of this Ph.D. concerns the estimation of error bounds in the inversion of the profile of the atmospheric backscatter coefficient from elastic lidar signals (i.e., without wavelength shift in reception when interacting with atmospheric scatterers) by means of the two-component inversion algorithm (the so-called Klett-Fernald-Sasano algorithm). This objective departs from previous work at the Remote Sensing Lab (RSLab) of the Universitat Politècnica de Catalunya (UPC) and derives first-order error-propagated bounds (approximate) and total-increment bounds (exact). As a distinctive feature with respect to the state of the art, the error bounds merge both systematic (i.e., user-calibration inputs) and random (finite signal-to-noise ratio, SNR) error sources into a single body with an explicit mathematical form. The second goal, central to this Ph.D., tackles retrieval of the Atmospheric Boundary Layer Height (ABLH) from elastic lidar and S-band Frequency-Modulated Continuous-Wave (FMCW) radar observations by using adaptive techniques based on the Extended Kalman Filter (EKF). The filter is based on morphological modelling of the Mixing-Layer-to-Free-Troposphere transition and continuous estimation of the noise covariance information. In the lidar-EKF realization, the proposed technique is shown to outperform classic ABLH estimators such as those based on derivative techniques, thresholded decision, or the variance centroid method. The EKF formulation is applied to both ceilometer and UPC lidar records in high- and low-SNR scenes.
The lidar-EKF approach is re-formulated and successfully extended to S-band radar scenes (Bragg scattering) in the presence of interferent noise sources (Rayleigh scattering from, e.g., insects and birds). In this context, the FMCW feature enables range-resolved operation. EKF-lidar and EKF-radar ABLH estimates are cross-examined against field campaign results. Finally, the third goal deals with exploitation of the existing UPC lidar station. In a first, introductory part, a modified algorithm for enhancing the dynamic range of elastic lidar channels by 'gluing' analog and photon-counting data records is formulated. In a second part, two case examples (including application of the gluing algorithm) are presented to illustrate the capabilities of the UPC lidar in networked atmospheric observation of two recent volcanic eruption events as part of EARLINET (European Aerosol Research Lidar Network). The latter is part of the GALION (Global Atmosphere Watch Atmospheric Lidar Observation Network)-GEOSS (Global Earth Observation System of Systems) framework.
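The EKF idea behind the lidar-based ABLH retrieval can be sketched with a one-state filter: model the backscatter transition between the mixing layer and the free troposphere as an erf-shaped profile centred on the unknown height, and update that height gate by gate. In the sketch below, all profile parameters, noise levels and the random-walk dynamics are invented for illustration and are not the thesis's formulation; it tracks a synthetically growing layer:

```python
import math
import random

W, AMP, BASE = 50.0, 1.0, 0.2        # transition width (m), contrast, offset

def profile(z, h):
    """Idealized backscatter: high in the mixing layer, low in the free
    troposphere, erf-smoothed transition centred at height h."""
    return BASE + AMP * 0.5 * (1.0 - math.erf((z - h) / (math.sqrt(2) * W)))

def ekf_ablh(scans, z, h0=500.0, p0=1e4, q=1000.0, r=4e-4):
    """One-state EKF: random-walk dynamics for the layer height h, followed
    by a sequential scalar measurement update over each scan's range gates."""
    h, p = h0, p0
    track = []
    for scan in scans:
        p += q                                    # time update (random walk)
        for zj, bj in zip(z, scan):               # gate-by-gate updates
            u = (zj - h) / (math.sqrt(2) * W)
            pred = BASE + AMP * 0.5 * (1.0 - math.erf(u))
            H = AMP * math.exp(-u * u) / (math.sqrt(2.0 * math.pi) * W)
            S = H * H * p + r                     # innovation variance
            K = p * H / S                         # Kalman gain
            h += K * (bj - pred)
            p *= 1.0 - K * H
        track.append(h)
    return track

# Synthetic morning growth of the layer, observed on 20 noisy range gates.
rng = random.Random(1)
z = [100.0 * k for k in range(1, 21)]
truth = [600.0 + 30.0 * t for t in range(20)]
scans = [[profile(zj, ht) + rng.gauss(0.0, 0.02) for zj in z] for ht in truth]
est = ekf_ablh(scans, z)
```

Because only the gates near the transition carry a non-negligible Jacobian `H`, the filter effectively locks onto the erf inflection and follows it as the layer deepens, which is the morphological-modelling idea the thesis develops far more rigorously.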
APA, Harvard, Vancouver, ISO, and other styles
28

Pettersson, Nicklas. "Multiple Kernel Imputation : A Locally Balanced Real Donor Method." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-89435.

Full text
Abstract:
We present an algorithm for imputation of incomplete datasets based on Bayesian exchangeability through Pólya sampling. Each (donee) unit with a missing value is imputed multiple times by observed (real) values on units from a donor pool. The donor pools are constructed using auxiliary variables. Several features from kernel estimation are used to counteract imbalances that are due to sparse and bounded data. Three balancing features can be used with only a single continuous auxiliary variable, but an additional fourth feature requires multiple continuous auxiliary variables. They mainly contribute by reducing nonresponse bias. We examine how the donor pool size, that is, the number of potential donors within the pool, should be determined. External information is shown to be easily incorporated into the imputation algorithm. Our simulation studies show that with a study variable which can be seen as a function of one or two continuous auxiliaries plus residual noise, the method performs as well or almost as well as competing methods when the function is linear, but usually much better when the function is nonlinear.

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 1: In press. Paper 3: Submitted. Paper 4: Submitted.
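The real-donor mechanism described in the abstract, imputing each donee several times from a pool of observed values on nearby units, can be sketched as follows. Pool construction here is by simple nearest-neighbour distance on one auxiliary variable; all names and sizes are illustrative, and the kernel-based balancing and Pólya-sampling features of the actual method are omitted:

```python
import random

def real_donor_impute(x, y, m=5, pool_size=5, seed=0):
    """Multiple imputation with real donors only: for each unit with missing
    y, form a pool of the pool_size respondents closest in the auxiliary x,
    then draw m imputed values from that pool (with replacement)."""
    rng = random.Random(seed)
    donors = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    completed = [list(y) for _ in range(m)]       # m completed datasets
    for i, yi in enumerate(y):
        if yi is not None:
            continue
        pool = sorted(donors, key=lambda d: abs(d[0] - x[i]))[:pool_size]
        for k in range(m):
            completed[k][i] = rng.choice(pool)[1]
    return completed

# Toy data: y roughly linear in one auxiliary x, three values missing.
rng = random.Random(2)
x = [rng.uniform(0.0, 10.0) for _ in range(40)]
y = [2.0 * xi + rng.gauss(0.0, 0.5) for xi in x]
for i in (3, 17, 31):
    y[i] = None
imputed_sets = real_donor_impute(x, y, m=5, pool_size=4)
```

Because every imputed value is an actually observed value, the completed datasets never contain implausible synthetic values, which is the appeal of real-donor (hot-deck) methods.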

APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Qi. "Estimation of Refractivity Conditions in the Marine Atmospheric Boundary Layer from Range and Height Measurement of X-band EM Propagation and Inverse Solutions." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1565885420888906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

King, David R. "A bayesian solution for the law of categorical judgment with category boundary variability and examination of robustness to model violations." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/52960.

Full text
Abstract:
Previous solutions for the Law of Categorical Judgment with category boundary variability have either constrained the standard deviations of the category boundaries in some way or have violated the assumptions of the scaling model. In the current work, a fully Bayesian Markov chain Monte Carlo solution for the Law of Categorical Judgment is given that estimates all model parameters (i.e. scale values, category boundaries, and the associated standard deviations). The importance of measuring category boundary standard deviations is discussed in the context of previous research in signal detection theory, which gives evidence of interindividual variability in how respondents perceive category boundaries and even intraindividual variability in how a respondent perceives category boundaries across trials. Although the measurement of category boundary standard deviations appears to be important for describing the way respondents perceive category boundaries on the latent scale, the inclusion of category boundary standard deviations in the scaling model exposes an inconsistency between the model and the rating method. Namely, with category boundary variability, the scaling model suggests that a respondent could experience disordinal category boundaries on a given trial. However, the idea that a respondent actually experiences disordinal category boundaries seems unlikely. The discrepancy between the assumptions of the scaling model and the way responses are made at the individual level indicates that the assumptions of the model will likely not be met. Therefore, the current work examined how well model parameters could be estimated when the assumptions of the model were violated in various ways as a consequence of disordinal category boundary perceptions.
A parameter recovery study examined the effect of model violations on estimation accuracy by comparing estimates obtained from three response processes that violated the assumptions of the model with estimates obtained from a novel response process that did not violate the assumptions of the model. Results suggest all parameters in the Law of Categorical Judgment can be estimated reasonably well when these particular model violations occur, albeit to a lesser degree of accuracy than when the assumptions of the model are met.
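The generative model at issue, the Law of Categorical Judgment with boundary variability, is easy to state as a simulation: on each trial both the percept and every category boundary are drawn afresh, and the response is the count of realized boundaries below the percept. A minimal sketch (all parameter values are illustrative):

```python
import random

def rate(scale_value, boundary_means, boundary_sds, stim_sd, rng):
    """One trial: the percept and every category boundary are drawn afresh;
    the response is the number of realized boundaries below the percept."""
    percept = rng.gauss(scale_value, stim_sd)
    bounds = [rng.gauss(m, s) for m, s in zip(boundary_means, boundary_sds)]
    return sum(b < percept for b in bounds)       # category index 0..K

# Illustrative setup: 4 response categories via 3 variable boundaries.
rng = random.Random(0)
boundary_means, boundary_sds = [-1.0, 0.0, 1.0], [0.3, 0.3, 0.3]
responses = [rate(0.5, boundary_means, boundary_sds, 1.0, rng)
             for _ in range(5000)]
```

Note that nothing in this draw prevents the realized boundaries from coming out disordinal (out of order) on a given trial, which is precisely the inconsistency between model and rating method that the parameter recovery study probes.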
APA, Harvard, Vancouver, ISO, and other styles
31

Nadal, Soriano Enrique. "Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/35620.

Full text
Abstract:
More and more challenging designs are required every day in today's industries. The traditional trial-and-error procedure commonly used for mechanical part design is no longer adequate, since it slows down the design process and yields suboptimal designs. For structural components, one alternative consists in using shape optimization processes, which provide optimal solutions; however, these techniques entail a high computational effort and require extremely efficient and robust Finite Element (FE) programs. FE software companies are aware that their current commercial products must improve in this sense and devote considerable resources to improving their codes. In this work we propose the Cartesian Grid Finite Element Method (cgFEM) as a tool for efficient and robust numerical analysis. The cgFEM methodology developed in this thesis uses the synergy of a variety of techniques to achieve this purpose; the two main ingredients are the use of Cartesian FE grids independent of the geometry of the component to be analyzed and an efficient hierarchical data structure. These two features give the cgFEM technology what it needs to increase its efficiency with respect to commercial FE codes. As indicated in [1, 2], in order to guarantee the convergence of a structural shape optimization process, the error of each geometry analyzed must be controlled. The cgFEM code therefore also incorporates appropriate error estimators, specifically adapted to the cgFEM framework to further increase its efficiency. This work introduces a solution recovery technique, denoted SPR-CD, that in combination with the Zienkiewicz-Zhu error estimator [3] provides very accurate error measures of the FE solution.
Additionally, we have developed error estimators and numerical bounds in Quantities of Interest based on the SPR-CD technique to allow for efficient control of the quality of the numerical solution. Regarding error estimation, we also present three new upper error bounding techniques for the error in energy norm of the FE solution, based on recovery processes. Furthermore, this work presents an error estimation procedure to control the quality of the recovered stress field provided by the SPR-CD technique. Since the recovered stress field is commonly more accurate and has a higher convergence rate than the FE solution, we propose to substitute the recovered solution for the raw FE solution to decrease the computational cost of the numerical analysis. All these improvements are reflected in the numerical examples of structural shape optimization problems presented in this thesis. These numerical analyses clearly show the improved behavior of the cgFEM technology over the classical FE implementations commonly used in industry.
Nadal Soriano, E. (2014). Cartesian grid FEM (cgFEM): High performance h-adaptive FE analysis with efficient error control. Application to structural shape optimization [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/35620
APA, Harvard, Vancouver, ISO, and other styles
32

Hou, Chuanchuan. "Vibration-based damage identification with enhanced frequency dataset and a cracked beam element model." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20434.

Full text
Abstract:
Damage identification is an important topic in structural assessment and structural health monitoring (SHM). Vibration-based identification techniques use modal data to identify the existence, location and severity of possible damages in structures, often via a numerical model updating procedure. Among other factors influencing the practicality and reliability of a damage identification approach, two are of primary interest to this study. The first one concerns the amount and quality of modal data that can be used as ‘response’ data for the model updating. It is generally recognised that natural frequencies can be measured with relatively high accuracy; however, their number is limited. Mode shapes, on the other hand, are susceptible to larger measurement errors. Seeking additional modal frequency data is therefore of significant value. The second one concerns the errors at the numerical (finite element) model level, particularly in the representation of the effect of damage on the dynamic properties of the structure. An inadequate damage model can lead to inaccurate and even false damage identification. The first part of the thesis is devoted to enhancing the modal dataset by extracting the so called ‘artificial boundary condition’ (ABC) frequencies in a real measurement environment. The ABC frequencies correspond to the natural frequencies of the structure with a perturbed boundary condition, but can be generated without the need of actually altering the physical support condition. A comprehensive experimental study on the extraction of such frequencies has been conducted. The test specimens included steel beams of relatively flexible nature, as well as thick and stiffer beams made from metal material and reinforced concrete, to cover the typical variation of the dynamic characteristics of real-life structures in a laboratory condition. The extracted ABC frequencies are subsequently applied in the damage identification in beams. 
Results demonstrate that it is possible to extract the first few ABC frequencies from modal testing in different beam settings for a variety of ABCs incorporating one or two virtual pin supports. The inclusion of ABC frequencies enables satisfactory identification of structural damage without the need to involve mode shape information. The second part of the thesis is devoted to developing a robust model updating and damage identification approach for beam cracks, with a special focus on thick beams, which present a more challenging problem in terms of the effect of a crack than slender beams. The priority task has been to establish a crack model which comprehensively describes the effect of a crack, so as to reduce the modelling errors. A cracked Timoshenko beam element model is introduced for explicit beam crack identification. The cracked beam element model is formulated by incorporating an additional flexibility due to a crack using fracture mechanics principles. Complex effects in cracked thick beams, including shear deformation and coupling between transverse and longitudinal vibrations, are represented in the model. The accuracy of the cracked beam element model for predicting modal data of cracked thick beams is first verified against numerically simulated examples. The consistency of predictions across different modes is examined in comparison with the conventional stiffness reduction approach. Upon satisfactory verification, a tailored model updating procedure incorporating an adaptive discretisation approach is developed for the implementation of the cracked beam element model for crack identification. The updating procedure is robust in that it has no restriction on the location, severity and number of cracks to be identified. Example updating results demonstrate that satisfactory identification can be achieved for practically any configuration of cracks in a beam.
Experimental study with five solid beam specimens is then carried out to further verify the developed cracked beam element model. Both forward verification and crack damage identification with the tested beams show a similar level of accuracy to that with the numerically simulated examples. The cracked beam element model can be extended to crack identification in beams with complex cross sections; to do so, the additional flexibility matrix for the specific cross-section type needs to be re-formulated. In the present study this is done for box sections. The stress intensity factors (SIF) for a box section, as required for the establishment of the additional flexibility matrix, are formulated with an empirical approach combining FE simulation, parametric analysis and regression analysis. The extended cracked beam element model is verified against both FE-simulated and experimentally measured modal data. The model is subsequently incorporated in the crack identification for box beams. The successful extension of the cracked beam element model to box beams paves the way for similar extension to the crack identification of other section types in real-life engineering applications.
APA, Harvard, Vancouver, ISO, and other styles
33

M, M. Galib Asadullah. "Robust wireless communications under co-channel interference and jamming." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22571.

Full text
Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Gordon L. Stuber; Committee Member: Alfred D. Andrew; Committee Member: John A. Buck; Committee Member: Steven W. McLaughlin; Committee Member: Ye (Geoffrey) Li.
APA, Harvard, Vancouver, ISO, and other styles
34

Dekou, Tiomajou Raoul Florent. "Organisation à grandes échelles de la turbulence de paroi." Thesis, Ecole centrale de Lille, 2016. http://www.theses.fr/2016ECLI0006/document.

Full text
Abstract:
Ce travail porte sur l’étude des structures cohérentes dans une couche limite de plaque plane à hauts nombres de Reynolds (Reθ=9830 et Reθ=19660). L'estimation Stochastique Linéaire est utilisée pour reconstruire un champ de vitesses résolu en espace et en temps à partir des mesures aux fils chauds à 30 kHz et des mesures PIV à 4 Hz. Une base de données DNS d’un écoulement de canal turbulent a été utilisée pour valider la procédure de reconstruction. Le champ de vitesse reconstruit est comparé à un champ de référence extrait de la DNS avec la visualisation des contours du champ de vitesse, l’analyse spectrale, les corrélations, etc. Par ailleurs, une analyse statistique est réalisée sur le champ PIV original, les données issues des fils chauds et celles reconstruites. Les résultats obtenus sont comparés à ceux de Carlier et Stanislas (2005) qui servent de référence. Des algorithmes ont été développés pour extraire les structures cohérentes du champ reconstruit. Les régions de vitesse uniforme sont caractérisées avec leur diamètre hydraulique, leur durée de vie et leur contribution aux tenseurs de Reynolds. Pour les tourbillons, on leur associe un rayon, une circulation et une vorticité en plus de leur durée de vie et leur nombre calculé à une distance fixe de la paroi. L’organisation spatiale des structures est étudiée avec la corrélation de leurs fonctions indicatives. Le modèle simplifié à grandes échelles qui en découle est comparé à ceux qui existent dans la littérature.
This study lies in the context of the investigation of large-scale coherent structures (uniform momentum regions and vortical motions) in a near-wall turbulent boundary layer at high Reynolds numbers (Reθ=9830 and Reθ=19660). With a Linear Stochastic Estimation procedure based on correlation computations, a full time-resolved three-component field is reconstructed at high frequency from stereo-PIV at 4 Hz and hot-wire data at 30 kHz. A DNS database of turbulent channel flow was used to validate the reconstruction method. To evaluate the quality of the method, reconstructed velocity fluctuations are compared to reference ones extracted directly from the DNS database. Furthermore, a statistical analysis is performed on HWA, PIV and LSE velocity fields. The results obtained are compared to those from Carlier and Stanislas (2005). Algorithms were developed to extract coherent structures from the reconstructed field. Uniform momentum regions are characterized by their mean hydraulic diameter in the YZ plane, their lifetime and their contribution to Reynolds stresses. The vortical motions are characterized by their position, radius, circulation and vorticity, in addition to their lifetime and their number computed at a fixed position from the wall. The spatial organization of the structures was investigated through a correlation of their respective indicative functions in the spanwise direction. The simplified large-scale model that emerged is compared to those in the literature.
APA, Harvard, Vancouver, ISO, and other styles
35

Gunson, James Reginald. "Estimating open-ocean boundary conditions : sensitivity studies." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/53024.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 1995.
Includes bibliographical references (leaves 145-149).
by James Reginald Gunson.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
36

Pierson, William E. "Using boundary methods for estimating class separability /." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487951907958628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Saeed, Usman. "Adaptive numerical techniques for the solution of electromagnetic integral equations." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41173.

Full text
Abstract:
Various error estimation and adaptive refinement techniques for the solution of electromagnetic integral equations were developed. Residual-based error estimators and h-refinement implementations were carried out for the Method of Moments (MoM) solution of electromagnetic integral equations for a number of different problems. Due to the high computational cost associated with the MoM, a cheaper solution technique known as the Locally-Corrected Nyström (LCN) method was explored. Several explicit and implicit techniques for error estimation in the LCN solution of electromagnetic integral equations were proposed and implemented for different geometries to successfully identify high-error regions. A simple p-refinement algorithm was developed and implemented for a number of prototype problems using the proposed estimators. Numerical error was found to be significantly reduced in the high-error regions after refinement. A simple computational cost analysis was also presented for the proposed error estimation schemes. Various cost-accuracy trade-offs and problem-specific limitations of the different techniques for error estimation were discussed. Finally, the important problem of slope mismatch between the global error rates of the solution and the residual was identified, and a few methods to compensate for that mismatch using scale factors based on matrix norms were developed.
APA, Harvard, Vancouver, ISO, and other styles
38

Lovón, Oscar Bayardo Ramos. "Formulação h-adaptativa do método dos elementos de contorno para elasticidade bidimensional com ênfase na propagação da fratura." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-25092006-092346/.

Full text
Abstract:
Neste trabalho desenvolveu-se uma formulação adaptativa do método de elementos de contorno (MEC) para a análise de problemas de fratura elástica linear. Foi utilizado o método da colocação para a formulação das equações integrais de deslocamento e de tensão. Para a discretização das equações integrais foram utilizados elementos lineares que possibilitaram a obtenção das expressões exatas das integrais (integração analítica) sobre elementos de contorno e fratura. Para a montagem do sistema de equações algébricas foram utilizadas apenas equações de deslocamento, apenas equações de forças de superfície, ou as duas escritas para nós opostos da fratura levando, portanto ao método dos elementos de contorno dual usualmente empregado na análise de fratura. Para o processo de crescimento da trinca foi desenvolvido um procedimento especial objetivando a correta determinação da direção de crescimento da trinca. Os fatores de intensidade de tensão são calculados por meio da conhecida técnica de correlação de deslocamentos a qual relaciona os deslocamentos atuantes nas faces da fissura. Após a determinação dos fatores de intensidade de tensão é utilizada a teoria da máxima tensão circunferencial para a determinação do ângulo de propagação. O modelo adaptativo empregado é do tipo h onde apenas a sub-divisão dos elementos é feita com base em erros estimados. O erro a ser considerado foi estimado a partir de normas onde se consideraram: a variação aproximada dos deslocamentos, a variação das forças de superfície e a variação da energia de deformação do sistema, calculada com a sua integração sobre o contorno. São apresentados exemplos numéricos para demonstrar a eficiência dos procedimentos propostos.
In this work, an adaptive formulation of the boundary element method is developed to analyze linear elastic fracture problems. The collocation point method was used to formulate the integral equations for the displacements and stresses (or tractions). To discretize the integral equations, linear elements were used, allowing the exact expressions of the integrals over boundary and fracture elements to be obtained. To construct the linear system of equations, only displacement equations, only traction equations, or both written for opposite nodes of the fracture were used, leading to the dual boundary element formulation usually employed in fracture analyses. For the crack growth process, a special procedure was developed aiming at the correct determination of the direction of crack growth. The stress intensity factors, used to calculate the crack growth angle, are computed through the displacement correlation technique, which relates the displacements acting on the faces of the crack. The adaptive model employed is of the h-type, where only the subdivision of the elements is done, based on error estimates. The error estimates considered in this work are based on the following norms: displacement, traction and strain energy variations, the last computed from integration over the boundary. Numerical examples are presented to demonstrate the efficiency of the proposed procedures.
APA, Harvard, Vancouver, ISO, and other styles
39

Libengue, Dobele-kpoka Francial Giscard Baudin. "Méthode non-paramétrique des noyaux associés mixtes et applications." Thesis, Besançon, 2013. http://www.theses.fr/2013BESA2007/document.

Full text
Abstract:
Nous présentons dans cette thèse, l'approche non-paramétrique par noyaux associés mixtes, pour les densités à supports partiellement continus et discrets. Nous commençons par rappeler d'abord les notions essentielles d'estimation par noyaux continus (classiques) et noyaux associés discrets. Nous donnons la définition et les caractéristiques des estimateurs à noyaux continus (classiques) puis discrets. Nous rappelons aussi les différentes techniques de choix de paramètres de lissage et nous revisitons les problèmes de supports ainsi qu'une résolution des effets de bord dans le cas discret. Ensuite, nous détaillons la nouvelle méthode d'estimation de densités par les noyaux associés continus, lesquels englobent les noyaux continus (classiques). Nous définissons les noyaux associés continus et nous proposons la méthode mode-dispersion pour leur construction puis nous illustrons ceci sur les noyaux associés non-classiques de la littérature à savoir bêta et sa version étendue, gamma et son inverse, gaussien inverse et sa réciproque, le noyau de Pareto ainsi que le noyau lognormal. Nous examinons par la suite les propriétés des estimateurs qui en sont issus, plus précisément le biais, la variance et les erreurs quadratiques moyennes ponctuelles et intégrées. Puis, nous proposons un algorithme de réduction de biais que nous illustrons sur ces mêmes noyaux associés non-classiques. Des études par simulations sont faites sur trois types d’estimateurs à noyaux lognormaux. Par ailleurs, nous étudions les comportements asymptotiques des estimateurs de densité à noyaux associés continus. Nous montrons d'abord les consistances faibles et fortes ainsi que la normalité asymptotique ponctuelle. Ensuite nous présentons les résultats des consistances faibles et fortes globales en utilisant les normes uniformes et L1. Nous illustrons ceci sur trois types d’estimateurs à noyaux lognormaux. Par la suite, nous étudions les propriétés minimax des estimateurs à noyaux associés continus.
Nous décrivons d'abord le modèle puis nous donnons les hypothèses techniques avec lesquelles nous travaillons. Nous présentons ensuite nos résultats minimax tout en les appliquant sur les noyaux associés non-classiques bêta, gamma et lognormal. Enfin, nous combinons les noyaux associés continus et discrets pour définir les noyaux associés mixtes. De là, les outils d'unification d'analyses discrètes et continues sont utilisés, pour montrer les différentes propriétés des estimateurs à noyaux associés mixtes. Une application sur un modèle de mélange des lois normales et de Poisson tronquées est aussi donnée. Tout au long de ce travail, nous choisissons le paramètre de lissage uniquement avec la méthode de validation croisée par les moindres carrés.
In this thesis we present the non-parametric approach using mixed associated kernels for densities whose supports are partially continuous and discrete. We start by recalling the essential concepts of classical continuous and discrete kernel density estimators, giving their definitions and characteristics. We also recall the various techniques for choosing smoothing parameters, and revisit the problems of support as well as a resolution of edge effects in the discrete case. Then, we describe a new method of continuous associated kernels for estimating densities with bounded support, which includes the classical continuous kernel method. We define the continuous associated kernels and propose the mode-dispersion method for their construction. Moreover, we illustrate this on the non-classical associated kernels of the literature, namely beta and its extended version, gamma and its inverse, inverse Gaussian and its reciprocal, the Pareto kernel and the lognormal kernel. We subsequently examine the properties of the resulting estimators, specifically the bias, variance, and pointwise and integrated mean squared errors. Then, we propose a bias-reduction algorithm which we illustrate on these same non-classical associated kernels. Simulation studies are performed on three types of lognormal kernel estimators. We also study the asymptotic behavior of continuous associated kernel density estimators. We first show the pointwise weak and strong consistencies as well as asymptotic normality. Then, we present the results of the global weak and strong consistencies using uniform and L1 norms, illustrated on three types of lognormal kernel estimators. Subsequently, we study the minimax properties of continuous associated kernel estimators. We first describe the model and give the technical assumptions under which we work.
Then we present our minimax results, applying them to some non-classical associated kernels, more precisely the beta, gamma and lognormal kernel estimators. Finally, we combine continuous and discrete associated kernels to define the mixed associated kernels. Using tools that unify discrete and continuous analysis, we show the different properties of the mixed associated kernel estimators. Throughout this work, the smoothing parameter is chosen solely by the least-squares cross-validation method.
APA, Harvard, Vancouver, ISO, and other styles
40

Jung, Hyun Ju. "The generation and flow of knowledge in technology development." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50296.

Full text
Abstract:
Scholars in strategy, economics, and sociology of science and technology have studied technology development as a source of firms’ economic gains as well as institutional changes. Drawing on the extant research of technology and innovation strategy, I investigate the problem of knowledge generation and flows in technology development. Specifically, I explore how firms generate novel technology and develop technological breakthroughs; how knowledge flows between firms affect interfirm cooperation in a knowledge network; and how science and technology programs impact the institutions of knowledge production. In Essay 1 (Chapter 2), I examine the antecedents of knowledge recombination and technological breakthroughs. Conceptualizing a firm’s exploration as a combinatory search of prior new-recombination (an original technology component), I investigate the impacts of prior new-recombination and search boundary (local vs. boundary-spanning) on the characteristics of focal invention. In particular, I theorize and juxtapose the contrasting effects of the boundary of technological search of prior new-recombination on the propensities that the focal invention generates new recombination and becomes a technological breakthrough. Specifically, I hypothesize that, when the technological search involves new recombination in prior inventions, 1) the likelihood of generating new recombination in the focal invention is greatest for a boundary spanning search, smallest for a local search, and intermediate for a hybrid search (which involves both types of search); but 2) the likelihood for the focal invention to become a technological breakthrough is greatest for a local search, smallest for a boundary spanning search, and intermediate for a hybrid search. I find supporting evidence from the analysis of U.S. nanotechnology patents granted between 1980 and 2006. 
The purpose of Essay 2 (Chapter 3) is to determine the effect of knowledge flows on the formation of interfirm cooperation. By distinguishing codified knowledge flows from tacit knowledge flows, this paper demonstrates that antecedents of interfirm cooperation lie in codified knowledge flows that precede interfirm cooperation. Two properties of asymmetry in directional codified knowledge flows, intensity and uncertainty, underpin this paper’s arguments and empirical tests. The main finding in this study is that intense codified knowledge flows weaken the formation of interfirm cooperation. By mapping dyadic firms to a center and a periphery firm within a knowledge network, I theorize that the uncertainty of directional codified knowledge flows induces the center and the periphery firms to pursue interfirm cooperation differently. The results show that while uncertainty caused by distant technology components in knowledge flows hinders a center firm from pursuing interfirm cooperation, uncertainty stimulates a periphery firm to pursue interfirm cooperation. A statistical analysis performed on a sample of enterprise software firms between 1992 and 2009 supports the hypotheses of this paper. In Essay 3 (Chapter 4), I examine how the National Nanotechnology Initiative (NNI), one of the most recent U.S. government science and technology (S&T) programs, launched in 2000, impacts the nature of university research in nanotechnology. I characterize the NNI as a policy intervention that targets the commercialization of technology and a focused research direction to promote national economic growth. As such, I expect that the NNI has brought about unintended consequences in terms of the direction of university-industry knowledge flows and the characteristics of university research output in nanotechnology. Using a difference-in-differences analysis of the U.S. nanotechnology patents filed between 1996 and 2007, I find that, for the U.S.
universities, the NNI has increased knowledge inflows from the industry, diminished the branching-out to novel technologies, reduced the research scope, and decreased the likelihood of technological breakthroughs, as compared to other U.S. and non-U.S. research institutions. The findings suggest that, at least in the case of the NNI, targeted S&T programs of the government may increase the efficiency of university research, but potentially do so at a considerable price.
APA, Harvard, Vancouver, ISO, and other styles
41

Šimek, Milan. "Výběr referenčních uzlů pro bezkotevní lokalizační techniky v bezdrátových senzorových sítích." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-233511.

Full text
Abstract:
This dissertation proposes a new anchor-free localization algorithm for computing node positions in wireless sensor networks. Previous studies have shown that existing anchor-free localization algorithms operating in parallel mode achieve small localization errors; their drawback, however, is that when assembling the set of reference nodes they consume far more energy than algorithms operating in incremental mode. Parallel localization algorithms determine position using reference nodes located on opposite boundaries of the wireless network. The new localization algorithm, termed BRL (Boundary Recognition aided Localization), is based on the idea of detecting, in a decentralized way, the nodes lying on the network boundary and selecting the required number of reference nodes only from this set. The proposed approach considerably reduces the energy consumed during the reference-node selection process in the sensor field. A further contribution to reducing energy demands while preserving a low localization error is the use of multilateration with three, or alternatively four, reference points. Several of the constituent algorithms were simulated, and their functionality was verified experimentally in a real sensor network. The proposed BRL algorithm was compared with several known localization algorithms in terms of localization error and the number of processed packets. The simulation results showed that the proposed algorithm is an effective solution for accurate yet low-energy localization of nodes in wireless sensor networks.
APA, Harvard, Vancouver, ISO, and other styles
42

Botha, Christoffel Philippus. "The near field boundary of dewatering systems : estimating individual yields for wells operating under gravity flow." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/416903/.

Full text
Abstract:
The subject of this thesis is the estimation of individual well yields, which is a fundamental part of a multi-well abstraction system design. A literature review of the subject shows that the current best practice for estimating individual well yields has several shortcomings and that further research on the topic is required for individual wells operating under gravity flow. The proposal by Sichardt (1927) for estimating the hydraulic gradient at entry into wells is reviewed and his suggestions are compared to the findings in the field. Pumping test data from eight individual abstraction wells, operating under gravity flow in aquifers having a range of permeability values, are presented. The permeability of the aquifer and the implied hydraulic entry gradient into the well were calculated from the data. The findings also show that Sichardt’s (1927) formula provides reasonable results for permeabilities in the range 1 × 10⁻⁵ up to 2.15 × 10⁻³ m/s. For permeability values below 1 × 10⁻⁵ m/s, Sichardt’s (1927) formula still gives a reasonable estimate, but the results need to be used with caution.
APA, Harvard, Vancouver, ISO, and other styles
43

Wittebol, Laura. "Refinement and verification of the nocturnal boundary layer budget method for estimating greenhouse gas emissions from Eastern Canadian agricultural farms." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66706.

Full text
Abstract:
Measuring greenhouse gas (GHG) emissions directly at the farm scale is most relevant to the agricultural sector and has the potential to eliminate some of the uncertainty arising from scaling up from plot or field studies or down from regional or national levels. The stable nighttime atmosphere acts as a chamber within which sequentially-measured GHG concentration profiles determine the flux of GHGs. With the overall goal of refining the nocturnal boundary layer (NBL) budget method to obtain reliable flux estimates at a scale representative of the typical eastern Canadian farm (approximately 1 km²), fluxes of CO2, N2O, and CH4 were measured at two agricultural farms in Eastern Canada. Field sites in 1998 and 2002 were located on an experimental farm adjacent to a suburb southwest of the city of Ottawa, ON, a relatively flat area with corn, hay, and soy as the dominant crops. The field site in 2003 was located in the rural community of Coteau-du-Lac, QC, about 20 km southwest of the island of Montreal, a fairly flat area bordered by the St. Lawrence River to the south, consisting mainly of corn and hay with a mixture of soy and vegetable crops. Good agreement was obtained between the overall mean NBL budget-measured CO2 flux at both sites, near-in-time windy-night eddy covariance data and previously published results. The mean NBL-measured N2O flux from all wind directions and farming management was of the same order of magnitude as, but slightly higher than, previously published baseline N2O emissions from agroecosystems. Methane flux results were judged to be invalid as they were extremely sensitive to wind direction change. Spatial sampling of CO2, N2O, and CH4 around the two sites confirmed that [CH4] distribution was particularly sensitive to the nature of the emission source, field conditions, and wind direction. Optimal NBL conditions for measuring GHG fluxes, present approximately 60% of the t
Les don nées sur les émissions des gaz à effet de serre (GES) obtenues au niveau des fermes entières agricoles sont pertinentes au secteur agricole et ont le potentiel d'éliminer une partie de l'incertitude qui se produit quant à l'extrapolation du niveau de la parcelle jusqu'au niveau du champ. La couche limite nocturne (CLN) agit comme une chambre virtuelle dans laquelle on fait plusieurs ascensions pour déterminer les fluxes de GES. Dans le but géneral de raffiner la méthode du budget de la CLN afin d'obtenir de plus fiables estimées au niveau de la ferme typique (environ 1 kilomètre carré), les fluxes de CO2, N2O, et CH4 ont été mesurés sur deux fermes agricoles dans l'est du Canada. En 1998 et 2002, les sites d'étude se trouvaient sur une ferme près d'une banlieue au sud-ouest d'Ottawa (Ontario), où le terrain est relativement plat et les principales cultures sont le maïs, le foin et le soya. En 2003, le site d'étude se situait dans la communauté rurale de Coteau-du-Lac (Québec), environ 20 km au sud-ouest de Montréal. Bordé par le fleuve St-Laurent au sud, ce terrain est plat et on y cultive surtout le maïs, le foin et un mélange de soya et de légumes. Le flux moyen de CO2 mesuré aux deux sites par la méthode du budget de la CLN correspondait bien avec celui mesuré par la technique de la covariance des fluctuations et aussi avec ce qui est rapporté dans la littérature. Considérant toutes les directions de vent et toutes les pratiques agricoles, la moyenne des flux de N2O mesurés par la technique de NBL était du même ordre de grandeur, quoiqu'un peu plus élevée, que ce qui est rapporté dans la littérature pour les émissions de base de N2O des écosystèmes agricoles. Les résultats pour le CH4 ont été jugés non-valides car l'échantillonage concurrente des trois gaz aux alentours des deux sites a confirmé que le CH4 était particulièrement sensible à la
APA, Harvard, Vancouver, ISO, and other styles
44

Conti, Tatiana de. "Avaliação do método do balanço da camada limite para a estimativa de fluxos turbulentos noturnos." Universidade Federal de Santa Maria, 2006. http://repositorio.ufsm.br/handle/1/9203.

Full text
Abstract:
This work verifies the performance of the boundary layer budgeting method as an alternative for measuring surface turbulent fluxes, under conditions that allow validation of the measurements against data taken from a tower equipped with sensors and processed through the eddy correlation method. A review of the knowledge generated to date reveals the limitations of the eddy correlation method and the promise of boundary layer budgeting for measuring surface turbulent fluxes. This work describes its theoretical underpinnings, regarding the energy budget in the atmosphere and the role of turbulence in transport phenomena and energy exchanges. These concepts are interpreted physically and described in the form of governing equations and the hypotheses used to analyze the behavior of such phenomena. Eddy correlation is described as a method that directly measures the sensible and latent heat fluxes from the data picked up by the sensors. The boundary layer budgeting method, in turn, is described from the general formulation of scalar accumulation, on which the hypotheses of eliminating horizontal advection and any sources or sinks in the atmosphere are imposed. The context of the measurements is then described: the project, the sites, the campaigns, and the days and times at which the measurements took place. The instruments used for data acquisition are also described, as well as the processing of the raw data. The results obtained reveal significant differences in the sensible heat flux estimates, which increase as the night progresses, and smaller differences in the latent heat flux, which remain practically constant in time.
Este trabalho trata de verificar o desempenho do método de balanço da camada limite como alternativa para a medição dos fluxos turbulentos superficiais em condições que permitem a validação das medidas com dados tomados a partir de uma torre equipada com sensores, os quais foram processados com o método de correlação de vórtices. É feita uma pesquisa do conhecimento gerado até o presente, que revela as limitações da sistemática de medição por correlação de vórtices e as promessas do método de balanço em relação à medição dos fluxos turbulentos. É descrita neste trabalho a fundamentação teórica necessária, a respeito do balanço energético na atmosfera e do papel da turbulência nos fenômenos de transporte e de troca de energia. Tais conceitos são interpretados fisicamente e descritos na forma de equações governantes e hipóteses utilizadas para a análise do comportamento de tais fenômenos. A correlação de vórtices é descrita como um método que mede diretamente os fluxos de calor sensível e de calor latente em função dos dados recolhidos pelos sensores. O método de balanço da camada limite, por sua vez é descrito a partir da formulação geral de acumulação de escalares, sobre a qual são tomadas as hipóteses de eliminação da advecção horizontal e de quaisquer fontes ou sumidouros na atmosfera. O contexto das medições é descrito a seguir, a partir da apresentação do projeto, dos locais, das campanhas e do dia e dos horários em que as medições foram realizadas. A instrumentação utilizada para a aquisição dos dados é também descrita, assim como a sistemática de processamento dos dados brutos. Os resultados obtidos revelam diferenças significativas na estimativa do fluxo de calor sensível, que aumentam com o avanço da noite e diferenças menores no fluxo de calor latente, que permanecem praticamente constantes no tempo.
APA, Harvard, Vancouver, ISO, and other styles
45

Dinh, Van Duong. "Strichartz estimates and the nonlinear Schrödinger-type equations." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30247/document.

Full text
Abstract:
Cette thèse est consacrée à l'étude des aspects linéaires et non-linéaires des équations de type Schrödinger \[ i\partial_t u + |\nabla|^\sigma u = F, \quad |\nabla| = \sqrt{-\Delta}, \quad \sigma \in (0, \infty). \] Quand $\sigma = 2$, il s'agit de l'équation de Schrödinger bien connue dans de nombreux contextes physiques tels que la mécanique quantique, l'optique non-linéaire, la théorie des champs quantiques et la théorie de Hartree-Fock. Quand $\sigma \in (0,2) \backslash \{1\}$, c'est l'équation de Schrödinger fractionnaire, qui a été découverte par Laskin (voir par exemple \cite{Laskin2000} et \cite{Laskin2002}) en lien avec l'extension de l'intégrale de Feynman, des chemins quantiques de type brownien à ceux de Lévy. Cette équation apparaît également dans des modèles de vagues (voir par exemple \cite{IonescuPusateri} et \cite{Nguyen}). Quand $\sigma = 1$, c'est l'équation des demi-ondes qui apparaît dans des modèles de vagues (voir \cite{IonescuPusateri}) et dans l'effondrement gravitationnel (voir \cite{ElgartSchlein}, \cite{FrohlichLenzmann}). Quand $\sigma = 4$, c'est l'équation de Schrödinger du quatrième ordre ou biharmonique introduite par Karpman \cite{Karpman} et par Karpman-Shagalov \cite{KarpmanShagalov} pour prendre en compte le rôle de la dispersion du quatrième ordre dans la propagation d'un faisceau laser intense dans un milieu massif avec non-linéarité de Kerr. Cette thèse est divisée en deux parties. La première partie étudie les estimations de Strichartz pour des équations de type Schrödinger sur des variétés comprenant l'espace plat euclidien, les variétés compactes sans bord et les variétés asymptotiquement euclidiennes. Ces estimations de Strichartz sont utiles pour l'étude des équations dispersives non-linéaires à régularité basse.
La seconde partie concerne l'étude des aspects non-linéaires tels que les caractères localement puis globalement bien posés sous l'espace d'énergie, ainsi que l'explosion de solutions peu régulières pour des équations non-linéaires de type Schrödinger. [...]
This dissertation is devoted to the study of linear and nonlinear aspects of the Schrödinger-type equations \[ i\partial_t u + |\nabla|^\sigma u = F, \quad |\nabla| = \sqrt{-\Delta}, \quad \sigma \in (0, \infty). \] When $\sigma = 2$, it is the well-known Schrödinger equation arising in many physical contexts such as quantum mechanics, nonlinear optics, quantum field theory and Hartree-Fock theory. When $\sigma \in (0,2) \backslash \{1\}$, it is the fractional Schrödinger equation, which was discovered by Laskin (see e.g. \cite{Laskin2000} and \cite{Laskin2002}) owing to the extension of the Feynman path integral from Brownian-like to Lévy-like quantum mechanical paths. This equation also appears in water wave models (see e.g. \cite{IonescuPusateri} and \cite{Nguyen}). When $\sigma = 1$, it is the half-wave equation, which arises in water wave models (see \cite{IonescuPusateri}) and in gravitational collapse (see \cite{ElgartSchlein}, \cite{FrohlichLenzmann}). When $\sigma = 4$, it is the fourth-order or biharmonic Schrödinger equation introduced by Karpman \cite{Karpman} and by Karpman-Shagalov \cite{KarpmanShagalov} to take into account the role of the small fourth-order dispersion term in the propagation of an intense laser beam in a bulk medium with Kerr nonlinearity. This thesis is divided into two parts. The first part studies Strichartz estimates for Schrödinger-type equations on manifolds including the flat Euclidean space, compact manifolds without boundary and asymptotically Euclidean manifolds. These Strichartz estimates are known to be useful in the study of nonlinear dispersive equations at low regularity. The second part concerns the study of nonlinear aspects such as local well-posedness, global well-posedness below the energy space, and blowup of rough solutions for nonlinear Schrödinger-type equations. [...]
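Since $|\nabla| = \sqrt{-\Delta}$, the special cases listed in the abstract can be written out explicitly:

```latex
% Specializations of  i\partial_t u + |\nabla|^\sigma u = F:
% \sigma = 2:  |\nabla|^2 = -\Delta, the classical Schrödinger equation
i\partial_t u - \Delta u = F,
% \sigma = 1:  the half-wave equation
i\partial_t u + |\nabla| u = F,
% \sigma = 4:  |\nabla|^4 = \Delta^2, the fourth-order (biharmonic) equation
i\partial_t u + \Delta^2 u = F.
```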
APA, Harvard, Vancouver, ISO, and other styles
46

Petrovic, Milena. "Effects of the Object’s Mass and Distance on the Location of Preferred Critical Boundary, Discomfort, and Muscle Activation during a Seated Reaching Task." Miami University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=miami1343567265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Somé, Sobom Matthieu. "Estimations non paramétriques par noyaux associés multivariés et applications." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2030/document.

Full text
Abstract:
Dans ce travail, l'approche non-paramétrique par noyaux associés mixtes multivariés est présentée pour les fonctions de densités, de masse de probabilité et de régressions à supports partiellement ou totalement discrets et continus. Pour cela, quelques aspects essentiels des notions d'estimation par noyaux continus (dits classiques) multivariés et par noyaux associés univariés (discrets et continus) sont d'abord rappelés. Les problèmes de supports sont alors révisés ainsi qu'une résolution des effets de bords dans les cas des noyaux associés univariés. Le noyau associé multivarié est ensuite défini et une méthode de leur construction dite mode-dispersion multivarié est proposée. Il s'ensuit une illustration dans le cas continu utilisant le noyau bêta bivarié avec ou sans structure de corrélation de type Sarmanov. Les propriétés des estimateurs telles que les biais, les variances et les erreurs quadratiques moyennes sont également étudiées. Un algorithme de réduction du biais est alors proposé et illustré sur ce même noyau avec structure de corrélation. Des études par simulations et applications avec le noyau bêta bivarié avec structure de corrélation sont aussi présentées. Trois formes de matrices des fenêtres, à savoir, pleine, Scott et diagonale, y sont utilisées puis leurs performances relatives sont discutées. De plus, des noyaux associés multiples ont été efficaces dans le cadre de l'analyse discriminante. Pour cela, on a utilisé les noyaux univariés binomial, catégoriel, triangulaire discret, gamma et bêta. Par la suite, les noyaux associés avec ou sans structure de corrélation ont été étudiés dans le cadre de la régression multiple. En plus des noyaux univariés ci-dessus, les noyaux bivariés avec ou sans structure de corrélation ont été aussi pris en compte. Les études par simulations montrent l'importance et les bonnes performances du choix des noyaux associés multivariés à matrice de lissage pleine ou diagonale. 
Puis, les noyaux associés continus et discrets sont combinés pour définir les noyaux associés mixtes univariés. Les travaux ont aussi donné lieu à la création d'un package R pour l'estimation de fonctions univariés de densités, de masse de probabilité et de régression. Plusieurs méthodes de sélections de fenêtres optimales y sont implémentées avec une interface facile d'utilisation. Tout au long de ce travail, la sélection des matrices de lissage se fait généralement par validation croisée et parfois par les méthodes bayésiennes. Enfin, des compléments sur les constantes de normalisations des estimateurs à noyaux associés des fonctions de densité et de masse de probabilité sont présentés
This work is about a nonparametric approach using multivariate mixed associated kernels for estimating densities, probability mass functions and regressions whose supports are partially or totally discrete and continuous. Some key aspects of kernel estimation using multivariate continuous (classical) kernels and univariate (discrete and continuous) associated kernels are recalled. Support problems are revisited, as well as a treatment of boundary effects for univariate associated kernels. The multivariate associated kernel is then defined, and a construction by the multivariate mode-dispersion method is provided. This leads to an illustration on the bivariate beta kernel with Sarmanov's correlation structure in the continuous case. Properties of these estimators, such as bias, variance and mean squared error, are studied. An algorithm for reducing the bias is proposed and illustrated on this bivariate beta kernel. Simulation studies and applications are then performed with the bivariate beta kernel. Three types of bandwidth matrices, namely full, Scott and diagonal, are used. Furthermore, appropriate multiple associated kernels are used in a practical discriminant analysis task: the binomial, categorical, discrete triangular, gamma and beta kernels. Thereafter, associated kernels with or without correlation structure are used in multiple regression. In addition to the previous univariate associated kernels, bivariate beta kernels with or without correlation structure are taken into account. Simulation studies show the good performance of multivariate associated kernels with full or diagonal bandwidth matrices. Then, discrete and continuous associated kernels are combined to define mixed univariate associated kernels. Using the tools of unification of discrete and continuous analysis, the properties of the mixed associated kernel estimators are shown. 
This is followed by an R package, created for the univariate case, for density, probability mass function and regression estimation. Several smoothing parameter selection methods are implemented via an easy-to-use interface. Throughout this work, bandwidth matrix selection is generally done by cross-validation and sometimes by Bayesian methods. Finally, additional results on the normalizing constants of associated kernel estimators of densities and probability mass functions are presented.
APA, Harvard, Vancouver, ISO, and other styles
48

Lovato, Rodrigo Gularte, and Rodrigo Gularte Lovato. "Estimativa da constante de Kolmogorov (C0) para a camada limite estável usando simulação dos grandes turbilhões." Universidade Federal de Pelotas, 2007. http://repositorio.ufpel.edu.br/handle/ri/2182.

Full text
Abstract:
In this work, we estimate the Kolmogorov constant, C0, by determining a set of Lagrangian trajectories in a Stable Boundary Layer simulated with a Large-Eddy Simulation model. From the set of trajectories, the form of the Lagrangian velocity structure function in the inertial subrange is evaluated, allowing the determination of C0. The value found for C0 is 3.70.
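In the inertial subrange the second-order Lagrangian velocity structure function grows linearly with the time lag, D(tau) = C0 * eps * tau, so C0 can be read off as a slope once the mean dissipation rate is known. A minimal sketch on synthetic trajectories (the random-walk velocities, dissipation rate and lag range are illustrative stand-ins for the LES trajectories used in the thesis):

```python
import numpy as np

def structure_function(vel, lags):
    """Second-order Lagrangian velocity structure function D(tau),
    averaged over all trajectories (rows) and all starting times."""
    return np.array([np.mean((vel[:, lag:] - vel[:, :-lag]) ** 2)
                     for lag in lags])

rng = np.random.default_rng(1)
dt = 0.1          # trajectory time step, s
eps = 1.0e-3      # mean dissipation rate, m^2 s^-3 (illustrative)
C0_true = 3.7     # value to recover

# Synthetic Lagrangian velocities: a random walk whose increments have
# variance C0*eps*dt, so that D(tau) = C0*eps*tau holds by construction
increments = rng.normal(0.0, np.sqrt(C0_true * eps * dt), size=(200, 2000))
vel = np.cumsum(increments, axis=1)

lags = np.arange(1, 20)
D = structure_function(vel, lags)

# C0 is the slope of D(tau) versus eps*tau in the inertial subrange
C0_est = np.polyfit(eps * lags * dt, D, 1)[0]
print(round(C0_est, 2))
```

With real LES trajectories the linear range must first be identified, since D(tau) departs from C0*eps*tau outside the inertial subrange.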
APA, Harvard, Vancouver, ISO, and other styles
49

Júnior, Rosiberto Salustiano da Silva. "Sensibilidade na estimativa da concentração de poluentes fotoquímicos com a aplicação de diferentes parametrizações de camada limite planetária utilizando o modelo de qualidade do ar WRF/Chem." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/14/14133/tde-23012010-102058/.

Full text
Abstract:
Neste trabalho foi avaliado o impacto na simulação dos poluentes fotoquímicos do uso de diferentes parametrizações da Camada Limite Planetária (CLP) para a Região Metropolitana de São Paulo, utilizando o modelo de mesoescala não-hidrostático Weather Research and Forecasting (WRF) com o acoplamento dos módulos químicos (WRF/Chem). Para análise da acurácia dos resultados obtidos com o modelo, foram utilizadas as medições realizadas pelas estações automáticas da CETESB em superfície, e a validação do perfil vertical da concentração de O3 com base nos dados do experimento de lançamento de ozoniosondas realizado dentro do contexto de um projeto de políticas públicas (recursos FAPESP) realizado durante o período de estudo. A motivação para a realização do presente trabalho foi a grande dificuldade de representação da estrutura da CLP em especial durante o período noturno sobre áreas urbanas. O estudo contou com a utilização inédita do modelo on-line para simulação da composição química da atmosfera da RMSP, e por fim analisou o impacto do acoplamento com o modelo urbano Urban Canopy Model (UCM), com o modelo WRF-Chem. A área de estudo é a Região Metropolitana de São Paulo e adjacências, e o período estudado foi de 28/10 a 01/11/2006. As variações anuais da temperatura do ar, velocidade do vento e radiação solar para o ano de 2006, mostraram que o mês de outubro foi marcado por grande disponibilidade de radiação solar, altos valores de temperatura do ar e ventos intensos. As simulações com casos idealizados mostraram que o esquema numérico de CLP, YSU (Yonsei University) representou melhor a variação da Camada Limite na área urbana, o que influenciou diretamente na dispersão dos poluentes. 
Já para as simulações de casos reais os resultados mostraram uma boa correlação entre simulado e observado para as concentrações superficiais de O3 e CO e uma grande dificuldade do modelo em representar o perfil vertical da concentração de O3, quando utilizada a versão 2.1 do modelo WRF/Chem. Problema este parcialmente resolvido (podendo melhorar ainda mais) quando é utilizada uma versão mais recente do modelo WRF-Chem, como o caso da versão 3, acoplada com o modelo urbano UCM. Os experimentos realizados para estimar o impacto do tipo do uso do solo sobre a variação da altura da CLP e conseqüente relação com a composição química da atmosfera, mostraram que a altura da CLP é mais alta sobre áreas urbanas em comparação com áreas rurais, portanto influenciando de forma efetiva na dispersão dos poluentes. No presente trabalho foi mostrado o transporte de poluentes existente entre as regiões metropolitanas de São Paulo, Campinas e Sorocaba, devido principalmente a topografia que atua canalizando o vento na direção de Campinas e Sorocaba. A proximidade com as fontes de emissão mostrou ser um fator importante para o nível de concentração de O3, pois à medida que se afasta das fontes de emissão tanto vertical quanto horizontalmente a concentração de O3 aumenta. E devido a sua relativa proximidade com o Oceano Atlântico, a RMSP é beneficiada com a formação da Brisa Marítima que auxilia na dispersão dos poluentes emitidos e formados na RMSP.
This work evaluated the sensitivity of simulated photochemical pollutant concentrations to different Planetary Boundary Layer (PBL) parameterizations over the Metropolitan Area of São Paulo (MASP), using the Weather Research and Forecasting (WRF) non-hydrostatic mesoscale model coupled with chemical modules (WRF/Chem). To assess the accuracy of the model results, surface measurements from CETESB's automatic air quality stations were used, and the simulated O3 vertical profile was validated against ozonesonde experiments conducted during the study period. The motivation for this study was the great difficulty of representing the PBL structure, especially at night over urban areas. The study is a first application of an on-line model to simulate the chemical composition of the atmosphere of the MASP, and it also analyzes the impact of coupling the Urban Canopy Model (UCM) to WRF/Chem. The study area is the Metropolitan Area of São Paulo and surroundings, and the period was 10/28-11/01/2006. The annual variations of air temperature, wind speed and solar radiation for 2006 showed that October was marked by wide availability of solar radiation, high air temperatures and intense winds. The idealized-case simulations showed that the PBL heights produced by the numerical schemes rank as YSU, MYJ, BOULAC and UCM, which directly influences pollutant dispersion, with the lowest CO concentrations for YSU and the highest for UCM; for O3, the YSU scheme shows lower concentrations but, unlike CO, the other schemes give equal values. For the real-case simulations, the results showed a good correlation between simulated and observed surface concentrations of O3 and CO, and a great difficulty of the model in representing the vertical profile of the O3 concentration when version 2.1 of the WRF/Chem model is used. 
This problem was partially solved (and can be further improved) with the latest version, version 3, coupled with the Urban Canopy Model. The experiments performed to estimate the impact of land use type on the variation of the PBL height, and its consequent relationship with the chemical composition of the atmosphere, showed that the PBL is higher over urban areas than over rural areas, thus effectively influencing pollutant dispersion; a lower PBL height was associated with higher concentrations due to weaker dispersion of pollutants. This work demonstrated the transport of pollutants between the metropolitan regions of São Paulo-Campinas and São Paulo-Sorocaba, due to the topography, which channels the wind in the direction of Campinas. Proximity to emission sources proved to be important for the O3 concentration level: moving away from the emission sources, both vertically and horizontally, the O3 concentration increases. Owing to the proximity of the Atlantic Ocean, the formation of the sea breeze helps disperse the pollutants emitted and formed in the MASP.
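Model evaluation of the kind reported above is usually summarized with a mean bias, an RMSE and a correlation coefficient between simulated and observed series. A minimal sketch (the hourly O3 values below are made-up, not CETESB observations):

```python
import numpy as np

# Simple evaluation of simulated vs. observed surface concentrations
# (both series are made-up hourly O3 values in ppb, not CETESB data)
obs = np.array([12.0, 30.0, 55.0, 80.0, 95.0, 70.0, 40.0, 20.0])
sim = np.array([15.0, 28.0, 60.0, 85.0, 90.0, 75.0, 35.0, 25.0])

mean_bias = np.mean(sim - obs)                # systematic over/underestimate
rmse = np.sqrt(np.mean((sim - obs) ** 2))     # typical error magnitude
r = np.corrcoef(sim, obs)[0, 1]               # Pearson correlation

print(round(mean_bias, 3), round(rmse, 2), round(r, 3))
```

A high correlation with a nonzero bias, as in this toy case, is exactly the pattern described for the surface O3 and CO comparisons.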
APA, Harvard, Vancouver, ISO, and other styles
50

Brondani, Daiane de Vargas. "ESTIMATIVA DA PARTIÇÃO DE ENERGIA NA SUPERFÍCIE A PARTIR DE SÉRIES METEOROLÓGICAS DE TEMPERATURA E UMIDADE ESPECÍFICA DO AR." Universidade Federal de Santa Maria, 2015. http://repositorio.ufsm.br/handle/1/10285.

Full text
Abstract:
Fundação de Amparo a Pesquisa no Estado do Rio Grande do Sul
The developed method aims to estimate the energy partition at the surface, using the Bowen ratio method, and the height of the convective boundary layer on the monthly scale. The basic hypothesis is that the average temporal evolution of air temperature and specific humidity is controlled solely by the convergence of the surface fluxes of sensible and latent heat. This assumption is valid at the monthly scale and in mid-latitude regions away from the coast. Thus, it is assumed that the advective terms of the balance equation of these quantities in the convective boundary layer have opposite signs in prefrontal and post-frontal situations. Therefore, over a time scale longer than the typical scale of the passage of synoptic systems, the hypothesis of cancellation of these terms can be tested. In this study, the method is applied to the region of Santa Maria, where the conditions that allow neglecting the advective terms at the monthly scale are assumed to hold. Although the model is simplified, it was possible to estimate the surface fluxes of latent and sensible heat from simple meteorological data, such as hourly observations of air temperature and specific humidity and a morning vertical profile of these quantities; the method performed well in most cases, failing more often in the winter months. In general, the method consistently but slightly overestimated the latent heat fluxes, while the sensible heat fluxes were slightly underestimated in winter and overestimated in the remaining months. This pattern caused a larger deviation in the energy partition in the winter months. The method can also be used to provide the time evolution of the convective boundary layer height, a variable of great interest for pollutant dispersion studies that is in many cases difficult to estimate.
O método desenvolvido é proposto com as finalidades de estimar a partição de energia na superfície pelo método da razão de Bowen e a altura da camada limite convectiva na escala mensal. A hipótese básica é de que a evolução temporal média das variáveis temperatura e umidade específica do ar é controlada unicamente pela convergência dos fluxos superficiais de calor sensível e calor latente. Essa suposição é válida para escala mensal e em regiões de latitudes médias afastadas da costa. Dessa forma, presume-se que os termos advectivos, da equação de balanço dessas quantidades na camada limite convectiva, nas situações pré-frontais e pós-frontais apresentam sinal oposto. Assim, utilizando-se para uma escala temporal mais longa que a escala típica da passagem dos sistemas sinóticos, a hipótese de cancelamento dos termos pode ser testada. No presente trabalho, o método é aplicado para a região de Santa Maria, onde é assumido que as condições que permitem desprezar os termos advectivos em escala mensal são válidas. Ainda que o modelo apresentado esteja simplificado, foi possível estimar os fluxos superficiais de calor latente e sensível a partir de dados meteorológicos simples, como observações horárias de temperatura do ar e umidade específica e um perfil vertical dessas quantidades no período da manhã; o método se mostrou eficiente em boa parte dos casos, sendo mais falho nos meses de inverno. De modo geral, o método superestimou consistentemente, mas de maneira leve, os fluxos de calor latente, enquanto os fluxos de calor sensível foram levemente subestimados no inverno e superestimados nos demais meses. Esse padrão causou um maior desvio na partição de energia nos meses de inverno. O método também pode ser utilizado para fornecer a evolução da altura da camada limite convectiva. Esta é uma variável de grande interesse para estudos de dispersão de poluentes e, em muitos casos, de difícil estimativa.
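The Bowen-ratio partition used above splits the available energy Rn - G between sensible and latent heat from near-surface differences of temperature and specific humidity: beta = (cp/Lv) * (dT/dq), H = beta/(1+beta) * (Rn - G), LE = (Rn - G)/(1+beta). A minimal sketch (all values illustrative, not from this thesis):

```python
# Bowen-ratio partition of the available energy at the surface.
# From the surface energy balance Rn - G = H + LE and
# beta = H / LE = (cp / Lv) * (dT / dq).
cp = 1005.0        # specific heat of air, J kg^-1 K^-1
Lv = 2.45e6        # latent heat of vaporization, J kg^-1

dT = 1.2           # K, temperature difference between two levels (illustrative)
dq = 8.0e-4        # kg kg^-1, specific-humidity difference (illustrative)
available = 400.0  # W m^-2, net radiation minus ground heat flux (illustrative)

beta = (cp / Lv) * (dT / dq)
H = available * beta / (1.0 + beta)    # sensible heat flux, W m^-2
LE = available / (1.0 + beta)          # latent heat flux, W m^-2

print(round(beta, 3), round(H, 1), round(LE, 1))
```

By construction H + LE recovers the available energy exactly, which is why the method only needs gradients of temperature and humidity rather than direct flux measurements.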
APA, Harvard, Vancouver, ISO, and other styles