Dissertations / Theses on the topic 'Model transfer approach'

Consult the top 50 dissertations / theses for your research on the topic 'Model transfer approach.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Ramström, Eva. "Mass transfer and slag-metal reaction in ladle refining : a CFD approach." Licentiate thesis, Stockholm : KTH, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11864.

Full text
Abstract:

In order to optimise the ladle treatment, mass transfer modelling of aluminium addition and homogenisation time was carried out. It was stressed that incorporating slag-metal reactions into the mass transfer modelling would strongly enhance the reliability of the CFD calculations and the amount of information that can be extracted from them.

In the present work, a thermodynamic model taking all the involved slag-metal reactions into consideration was incorporated into a 2-D fluid flow model of an argon-stirred ladle. Both thermodynamic constraints and mass balance were considered. The activities of the oxide components in the slag phase were described using the thermodynamic model by Björkvall, and the liquid metal was described using the dilute solution model. Desulphurization was simulated using the sulphide capacity model developed by the KTH group. A 2-D fluid flow model considering the slag, steel and argon phases was adopted.

The model predictions were compared with industrial data and the agreement was found to be quite satisfactory. These promising results encourage extending the approach to 3-D CFD simulations.
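The coupling idea can be sketched compactly: at each CFD time step, every slag-metal interface cell is relaxed toward local thermodynamic equilibrium while conserving mass. The Python sketch below is a minimal illustration, not the thesis code; the sulphur distribution ratio L_S, which in practice would come from the sulphide capacity model, is a hypothetical constant here.

    def equilibrate_sulphur(w_s_metal, w_s_slag, m_metal, m_slag,
                            L_S=500.0, relax=0.1):
        # Relax one slag-metal interface cell toward the equilibrium sulphur
        # partition while conserving total sulphur mass. w_*: mass fractions,
        # m_*: cell masses (kg). L_S is the slag/metal distribution ratio,
        # here a hypothetical constant in place of a sulphide capacity model.
        total_s = w_s_metal * m_metal + w_s_slag * m_slag   # conserved
        w_metal_eq = total_s / (m_metal + L_S * m_slag)     # from w_slag/w_metal = L_S
        w_slag_eq = L_S * w_metal_eq
        # Partial relaxation mimics finite-rate mass transfer within one step.
        w_s_metal += relax * (w_metal_eq - w_s_metal)
        w_s_slag += relax * (w_slag_eq - w_s_slag)
        return w_s_metal, w_s_slag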

 

APA, Harvard, Vancouver, ISO, and other styles
2

Jeong, Kideog. "OBJECT MATCHING IN DISJOINT CAMERAS USING A COLOR TRANSFER APPROACH." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/434.

Full text
Abstract:
Object appearance models are a consequence of illumination, viewing direction, camera intrinsics, and other conditions that are specific to a particular camera. As a result, a model acquired in one view is often inappropriate for use in other viewpoints. In this work we treat this appearance model distortion between two non-overlapping cameras as one in which some unknown color transfer function warps a known appearance model from one view to another. We demonstrate how to recover this function in the case where the distortion function is approximated as general affine and object appearance is represented as a mixture of Gaussians. Appearance models are brought into correspondence by searching for a bijection function that best minimizes an entropic metric for model dissimilarity. These correspondences lead to a solution for the transfer function that brings the parameters of the models into alignment in the UV chromaticity plane. Finally, a set of these transfer functions acquired from a collection of object pairs are generalized to a single camera-pair-specific transfer function via robust fitting. We demonstrate the method in the context of a video surveillance network and show that recognition of subjects in disjoint views can be significantly improved using the new color transfer approach.
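The final fitting step can be illustrated with a short sketch (not the thesis implementation). It assumes the Gaussian correspondences have already been found by the entropic matching step, and fits the general affine transfer in the UV chromaticity plane by least squares:

    import numpy as np

    def fit_affine_transfer(src_means, dst_means):
        # Least-squares affine map (A, b) with dst ~= A @ src + b, fitted
        # from matched Gaussian means in the UV chromaticity plane.
        N = src_means.shape[0]
        X = np.hstack([src_means, np.ones((N, 1))])   # (N, 3)
        P, *_ = np.linalg.lstsq(X, dst_means, rcond=None)
        A, b = P[:2].T, P[2]
        return A, b

    # Warp a source model's means: (A @ src_means.T).T + b. In practice a
    # robust fit over many object pairs would replace the plain least
    # squares, as in the generalization step described above.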
APA, Harvard, Vancouver, ISO, and other styles
3

Lakhanpal, Chetan. "Mathematical modelling of applied heat transfer in temperature sensitive packaging systems. Design, development and validation of a heat transfer model using lumped system approach that predicts the performance of cold chain packaging systems under dynamically changing environmental thermal conditions." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/5776.

Full text
Abstract:
Development of temperature-controlled packaging (TCP) systems involves significant lead time and cost as a result of the large number of tests carried out to understand system performance under different internal and external conditions. This MPhil project aims at solving this problem through the development of a transient, spreadsheet-based model using a lumped-system approach that predicts the performance of packaging systems under a wide range of internal configurations and dynamically changing environmental thermal conditions. Experimental tests are conducted with the aim of validating the predictive model. Testing includes monitoring system temperature in a wide range of internal configurations and external thermal environments. A good comparison is seen between experimental and model-predicted results; increasing the mass of the chilled phase change material (PCM) in a system reduces the damping in product performance, thereby reducing the product fluctuations, i.e. the amplitude of the product performance curve. Results show that the thermal mathematical model predicts duration to failure within an accuracy of ±15% for all conditions considered.
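The lumped-system idea can be sketched in a few lines. The following is a toy illustration of the approach, not the thesis spreadsheet model; every parameter value (capacitance, resistances, latent heat, ambient profile) is invented for the example:

    import numpy as np

    def simulate_package(T0=5.0, T_melt=0.0, latent=250e3, m_pcm=0.5,
                         C_prod=4000.0, R_amb=2.0, R_pcm=1.5,
                         T_amb=25.0, dt=10.0, t_end=48 * 3600):
        # Explicit lumped-system march: one product node exchanging heat
        # with the ambient (resistance R_amb, K/W) and with a PCM node
        # pinned at T_melt until its latent heat budget H is spent.
        T, H = T0, latent * m_pcm          # product temp (C), latent budget (J)
        out = []
        for k in range(int(t_end / dt)):
            q_amb = (T_amb - T) / R_amb                     # W from ambient
            q_pcm = (T - T_melt) / R_pcm if H > 0 else 0.0  # W into the PCM
            T += dt * (q_amb - q_pcm) / C_prod
            H -= dt * q_pcm            # melt the PCM; branch off when spent
            out.append((k * dt / 3600.0, T))
        return np.array(out)

    hist = simulate_package()
    print(f"product at 12 h: {hist[int(12 * 360), 1]:.1f} C")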
APA, Harvard, Vancouver, ISO, and other styles
4

Martínez, Ballester Santiago. "NUMERICAL MODEL FOR MICROCHANNEL CONDENSERS AND GAS COOLERS WITH AN IMPROVED AIR-SIDE APPROACH." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/17453.

Full text
Abstract:
This thesis was carried out at the Instituto de Ingeniería Energética of the Universitat Politècnica de València and during a stay at the National Institute of Standards and Technology (NIST). The main objective of the thesis is to develop a high-accuracy model for microchannel heat exchangers (MCHX) that remains useful, in terms of computational cost, for design tasks. In the author's opinion, existing models suffer from certain drawbacks when applied to some recent heat exchanger designs, such as MCHXs with either serpentine or parallel tubes. The first stage of the thesis therefore identifies the phenomena that have the greatest impact on the accuracy of an MCHX model. In addition, the degree of validity of several classical simplifications and approaches was assessed. To this end, the high-accuracy Fin2D model was developed as a tool to carry out this investigation. The Fin2D model is a useful tool for analysing the phenomena involved, but it requires a large computational cost and is therefore not useful for design work. For this reason, building on the insights gained with Fin2D, a new model, Fin1Dx3, was developed. This model retains only the most important phenomena, preserving almost the same accuracy as Fin2D while reducing the computation time by an order of magnitude. A novel discretization and a unique numerical scheme are introduced for modelling air-side heat transfer. This new approach makes it possible to model the relevant phenomena consistently, with greater accuracy and far fewer simplifications than the current models in the literature, while achieving a reasonable computational cost for the stated objective. The thesis includes the experimental validation of this model for both a condenser and a gas cooler.
Martínez Ballester, S. (2012). NUMERICAL MODEL FOR MICROCHANNEL CONDENSERS AND GAS COOLERS WITH AN IMPROVED AIR-SIDE APPROACH [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17453
APA, Harvard, Vancouver, ISO, and other styles
5

Singh, Ogesh. "Regulatory T cell diversity analysis and a gene transfer approach to cellular immunotherapy in a murine model of type one diabetes." Thesis, Royal Veterinary College (University of London), 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Young, Cindy L. "A satellite and ash transport model aided approach to assess the radiative impacts of volcanic aerosol in the Arctic." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53404.

Full text
Abstract:
The Arctic radiation climate is influenced substantially by anthropogenic and natural aerosols. There have been numerous studies devoted to understanding the radiative impacts of anthropogenic aerosols (e.g. those responsible for producing the Arctic haze phenomenon) and natural aerosols (e.g. dust and smoke) on the Arctic environment, but volcanic aerosols have received less attention. Volcanic eruptions occur frequently in the Arctic and have the capacity to be long duration, high intensity events, expelling large amounts of aerosol-sized ash and gases, which form aerosols once in the atmosphere. Additionally, volcanic eruptions deposit ash, which can alter the surface reflectivity, and remain to influence the radiation balance long after the eruptive plume has passed over and dissipated. The goal of this dissertation is to quantify the radiative effects of volcanic aerosols in the Arctic caused by volcanic plumes and deposits onto ice and snow covered surfaces. The shortwave, longwave, and net direct aerosol radiative forcing efficiencies and atmospheric heating/cooling rates caused by volcanic aerosol from the 2009 eruption of Mt. Redoubt were determined by performing radiative transfer modeling constrained by NASA A-Train satellite data. The optical properties of volcanic aerosol were calculated by introducing a compositionally resolved microphysical model developed for both ash and sulfates. Two compositions of volcanic aerosol were considered in order to examine a fresh, ash rich plume and an older, ash poor plume. The results indicate that environmental conditions, such as surface albedo and solar zenith angle, can influence the sign and the magnitude of the radiative forcing at the top of the atmosphere and at the surface. Environmental conditions can also influence the magnitude of the forcing in the aerosol layer. For instance, a fresh, thin plume with a high solar zenith angle over snow cools the surface and warms the top of the atmosphere, but the opposite effect is seen by the same layer over ocean. The layer over snow also warms more than the same plume over seawater. It was found that plume aging can alter the magnitude of the radiative forcing. For example, an aged plume over snow at a high solar zenith angle would warm the top of the atmosphere and layer by less than the fresh plume, while the aged plume cools the surface more. These results were compared with those reported for other aerosols typical to the Arctic environment (smoke from wildfires, Arctic haze, and dust) to demonstrate the importance of volcanic aerosols. It is found that the radiative impacts of volcanic aerosol plumes are comparable to those of other aerosol types, and those compositions rich in volcanic ash can have greater impacts than other aerosol types. Volcanic ash deposited onto ice and snow in the Arctic has the potential to perturb the regional radiation balance by altering the surface reflectivity. The areal extent and loading of ash deposits from the 2009 eruption of Mt. Redoubt were assessed using an Eulerian volcanic ash transport and dispersion model, Fall3D, combined with satellite and deposit observations. Because observations are often limited in remote Arctic regions, we devised a novel method for modeling ash deposit loading fields for the entire eruption based on best-fit parameters of a well-studied eruptive event. The model results were validated against NASA A-train satellite data and field measurements reported by the Alaska Volcano Observatory. 
Overall, good to moderate agreement was found. A total cumulative deposit area of 3.7 × 10^6 km^2 was produced, and loadings ranged from ~7000 ± 3000 g m^-2 near the vent to <0.1 ± 0.002 g m^-2 on the outskirts of the deposits. Ash loading histories for total deposits showed that fallout ranged from ~5–17 hours. The deposit loading results suggest that ash from short-duration events can produce regionally significant deposits hundreds of kilometers from the volcano, with the potential of significantly modifying albedo over wide regions of ice- and snow-covered terrain. The solar broadband albedo change, surface radiative forcing, and snowmelt rates associated with the ash deposited from the 2009 eruption of Mt. Redoubt were calculated using the loadings from Fall3D and the snow, ice, and aerosol radiative models. The optical properties of ash were calculated from Mie theory, based on size information recovered from the Fall3D model. Two snow grain sizes were used in order to simulate a young and an old snowpack. Deposited ash sizes agree well with field measurements. Only aerosol-sized ash in the deposits was considered for radiative modeling, because larger particles are minor in abundance and confined to areas very close to the vent. The results show concentrations of ash in snow ranging from ~6.9 × 10^4 to 1 × 10^8 ppb, with higher values closer to the vent and the lowest at the edge of the deposits, and integrated solar albedo reductions of ~0–59% for new snow and ~0–85% for old snow. These albedo reductions are much larger than those typical for black carbon, but on the same order of magnitude as those reported for volcanic deposits in Antarctica. The daily mean surface shortwave forcings associated with ash deposits on snow ranged from 0–96 W m^-2 from the outermost deposits to the vent. There were no significantly accelerated snowmelts calculated for the outskirts of the deposits. However, for areas of higher ash loadings/concentrations, daily melt rates are significantly higher (~220–320%) because of volcanic ash deposits.
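The surface forcing numbers follow from simple flux bookkeeping: darkening the surface converts part of the downwelling shortwave flux from reflected to absorbed. A minimal illustration, with made-up inputs rather than the thesis values:

    def deposit_surface_forcing(S_down, albedo_clean, albedo_ash):
        # Surface shortwave forcing (W/m^2) from an albedo change: the
        # extra absorbed flux is S_down * (albedo_clean - albedo_ash).
        return S_down * (albedo_clean - albedo_ash)

    # Illustrative numbers (not the thesis inputs): 200 W/m^2 daily-mean
    # insolation, fresh snow darkened from 0.85 to 0.45 by an ash layer.
    print(deposit_surface_forcing(200.0, 0.85, 0.45))   # -> 80.0 W/m^2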
APA, Harvard, Vancouver, ISO, and other styles
7

Fallatah, Basem Abdullrahman. "Systems Approach: Concept Proposal to Develop Saudi Arabia Low-Complexity-Defense-Spare-Parts Manufacturing Industries, Utilizing Technology Transfer and Business Incubator." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1544620225738681.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Skoglund, Emil. "A NUMERICAL MODEL OF HEAT- AND MASS TRANSFER IN POLYMER ELECTROLYTE FUEL CELLS : A two-dimensional 1+1D approach to solve the steady-state temperature- and mass- distributions." Thesis, Mälardalens högskola, Framtidens energi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55223.

Full text
Abstract:
Methods of solving the steady-state characteristics of a node matrix equation system over a polymer electrolyte fuel cell (PEFC) were evaluated. The most suitable method, referred to as the semi-implicit method, was set up in a MATLAB program. The model covers heat transfer due to thermal diffusion throughout the layers and due to thermal advection and diffusion in the gas channels. The included mass transport processes cover only transport of water vapor and consist of the same diffusion/advection schematics as the heat transfer processes. The mass transport processes are hence Fickian diffusion throughout all the layers and diffusion and advection in the gas channels. Data regarding all the relevant properties of the layer materials were gathered to simulate these heat- and mass-transfer processes. Comparing the simulated temperature profiles obtained with the model to the temperature profiles of a previous work's model showed that the characteristics and behavior of the temperature profile are realistic. There were, however, differences between the results, but due to the number of unknown parameters in the previous work's model it was not possible to draw conclusions regarding the accuracy of the model by comparing the results. Comparing the simulated water concentration profiles of the model with measured values showed that the model produced concentration characteristics that for the most part aligned well with the measurement data. The part of the fuel cell where the concentration profile did not match the measured data was the cathode-side gas diffusion layer (GDL). This comparison was, however, performed under the assumption that relative humidity corresponds to liquid water concentration, and that this liquid water concentration is in the same range as the measured data. Because of this assumption it was not possible to determine the accuracy of the model.
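The semi-implicit idea the abstract refers to can be sketched for the through-plane direction of one such 1D node chain: diffusion is treated implicitly for stability, while sources are treated explicitly. The Python sketch below (the thesis used MATLAB) is a generic illustration of the scheme, not the thesis program, and all parameter values are invented:

    import numpy as np

    def semi_implicit_step(T, k, rho_c, dx, dt, q_src):
        # One backward-Euler (implicit) diffusion step with an explicit
        # source term, for a single 1D node chain with zero-flux ends.
        n = len(T)
        alpha = dt * k / (rho_c * dx**2)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 1.0 + 2.0 * alpha
            if i > 0:
                A[i, i - 1] = -alpha
            if i < n - 1:
                A[i, i + 1] = -alpha
        A[0, 0] = A[-1, -1] = 1.0 + alpha        # insulated boundaries
        return np.linalg.solve(A, T + dt * q_src / rho_c)

    # Illustrative parameters (not the thesis values): a 10-node layer stack.
    T = np.full(10, 350.0)                        # K
    q = np.zeros(10); q[5] = 5e6                  # W/m^3 source in one node
    T = semi_implicit_step(T, k=0.5, rho_c=2e6, dx=1e-4, dt=1e-3, q_src=q)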
APA, Harvard, Vancouver, ISO, and other styles
9

Mannschatz, Theresa. "Site evaluation approach for reforestations based on SVAT water balance modeling considering data scarcity and uncertainty analysis of model input parameters from geophysical data." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-175309.

Full text
Abstract:
Extensive deforestation, particularly in the (sub)tropics, has led to intense soil degradation and erosion, with a concomitant reduction in soil fertility. Reforestation or plantations on those degraded sites may provide effective measures to mitigate further soil degradation and erosion, and can lead to improved soil quality. However, a change in land use from, e.g., grassland to forest may have a crucial impact on the water balance. This may affect water availability even under humid tropical climate conditions, where water is normally not a limiting factor. In this context, it should also be considered that, according to climate change projections, rainfall may decrease in some of these regions. To mitigate problems related to climate change (e.g. increases in erosion and drought), reforestations are often carried out. Unfortunately, those measures are seldom completely successful, because the environmental conditions and the plant-specific requirements are not appropriately taken into account. This is often due to data scarcity and limited financial resources in tropical regions. For this reason, innovative approaches are required that are able to measure environmental conditions quasi-continuously in a cost-effective manner. Simultaneously, reforestation measures should be accompanied by monitoring in order to evaluate reforestation success and to mitigate, or at least reduce, potential problems associated with reforestation (e.g. water scarcity). To avoid reforestation failure and negative implications for ecosystem services, it is crucial to gain insights into the water balance of the actual ecosystem and the potential changes resulting from reforestation. The identification and prediction of water balance changes as a result of reforestation under climate change requires consideration of the complex feedback system of processes in the soil-vegetation-atmosphere continuum. Models that account for this feedback system are Soil-Vegetation-Atmosphere-Transfer (SVAT) models. For the aforementioned reasons, this study targeted two main objectives: (i) to develop and test a method combination for site evaluation under data scarcity (i.e. study requirements) (Part I), and (ii) to investigate the consequences of the prediction uncertainty of SVAT model input parameters, which were derived using geophysical methods, on SVAT modeling (Part II). A water balance modeling approach was set at the center of the site evaluation approach. This study used the one-dimensional CoupModel, which is a SVAT model. CoupModel requires detailed spatial soil information for (i) model parameterization, (ii) upscaling of model results and accounting for local- to regional-scale soil heterogeneity, and (iii) monitoring of changes in soil properties and plant characteristics over time. Since traditional approaches to soil and vegetation sampling and monitoring are time-consuming and expensive (and therefore often limited to point information), geophysical methods were used to overcome this spatial limitation. For this reason, vis-NIR spectroscopy (visible to near-infrared wavelength range) was applied for the measurement of soil properties (physical and chemical), and remote sensing was used to derive vegetation characteristics (i.e. leaf area index (LAI)). Since the estimated soil properties (mainly texture) could be used to parameterize a SVAT model, this study investigated the whole processing chain and the related prediction uncertainty of soil texture and LAI, and their impact on CoupModel water balance prediction uncertainty.
A greenhouse experiment with bamboo plants was carried out to determine the plant-physiological characteristics needed for CoupModel parameterization. Geoelectrics was used to investigate soil layering, with the intent of determining site-representative soil profiles for model parameterization. Soil structure was investigated using image analysis techniques that allow the quantitative assessment and comparison of structural features. In order to meet the requirements of the selected study approach, the developed methodology was applied and tested for a site in NE Brazil (which has low data availability), with a bamboo plantation as the test site and a secondary forest as the reference site. Nevertheless, the objective of the thesis was not the concrete modeling of the case-study site, but rather the evaluation of the suitability of the selected methods for evaluating sites for reforestation and monitoring their influence on the water balance as well as on soil properties. The results (Part III) highlight that one needs to be aware of the measurement uncertainty related to SVAT model input parameters: the uncertainty of input parameters such as soil texture and leaf area index meaningfully influences the simulated water balance output. Furthermore, this work indicates that vis-NIR spectroscopy is a fast and cost-efficient method for soil measurement, mapping, and monitoring of soil physical (texture) and chemical (N, TOC, TIC, TC) properties, where the quality of soil prediction depends on the instrument (e.g. sensor resolution), the sample properties (i.e. chemistry), and the site characteristics (i.e. climate). Additionally, the sensitivity of the CoupModel to texture prediction uncertainty, in terms of surface runoff, transpiration, evaporation, evapotranspiration, and soil water content, also depends on site conditions (i.e. climate and soil type). For this reason, it is recommended that a SVAT model sensitivity analysis be carried out prior to field spectroscopic measurements, to account for site-specific climate and soil conditions. Nevertheless, mapping the soil properties estimated via spectroscopy using kriging resulted in poor interpolation results (i.e. weak variograms), as a consequence of the summation of uncertainty arising along the chain from field measurement to mapping (i.e. spectroscopic soil prediction, kriging error) and site-specific 'small-scale' heterogeneity. The selected soil evaluation methods (vis-NIR spectroscopy, structure comparison using image analysis, traditional laboratory analysis) showed that there are significant differences between the bamboo soil and the adjacent secondary forest soil established on the same soil type (Vertisol). Reflecting on the major study results, it can be stated that the selected method combination is a step toward a more detailed and efficient way of evaluating the suitability of a specific site for reforestation. The results of this study provide insights into where and when, during soil and vegetation measurements, a high measurement accuracy is required to minimize uncertainties in SVAT modeling.
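The central uncertainty-propagation idea can be illustrated with a Monte Carlo sketch. This is a deliberately simplified stand-in: a toy bucket water balance replaces CoupModel, and the pedotransfer rule mapping texture to hydraulic parameters is invented for the example; only the workflow (sample texture within its spectroscopic prediction error, rerun the water balance, inspect the spread) mirrors the study:

    import numpy as np
    rng = np.random.default_rng(0)

    def pedotransfer(clay, sand):
        # Hypothetical linear pedotransfer rule (illustrative only): field
        # capacity and wilting point as functions of texture in percent.
        fc = 0.25 + 0.004 * clay - 0.001 * sand
        wp = 0.05 + 0.003 * clay
        return fc, wp

    def bucket_aet(rain, pet, fc, wp, depth=1000.0):
        # Toy daily bucket water balance (stand-in for CoupModel): returns
        # total actual evapotranspiration over the simulated year (mm).
        s, aet = fc * depth, 0.0
        for p, e in zip(rain, pet):
            s = min(s + p, fc * depth)                  # rain fills the bucket
            frac = max(0.0, (s - wp * depth) / ((fc - wp) * depth))
            a = e * frac                                # water-limited ET
            s -= a
            aet += a
        return aet

    rain = rng.gamma(0.3, 10.0, 365)        # synthetic daily rain (mm)
    pet = np.full(365, 4.0)                 # constant potential ET (mm/day)
    # Propagate a clay prediction error (5 % s.d., illustrative value).
    aets = [bucket_aet(rain, pet, *pedotransfer(rng.normal(30.0, 5.0), 40.0))
            for _ in range(200)]
    print(f"AET = {np.mean(aets):.0f} +/- {np.std(aets):.0f} mm/yr")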
APA, Harvard, Vancouver, ISO, and other styles
10

D'Ascia-Berger, Valerie. "Stratégie d'implantation d'une échelle d'évaluation du risque de constipation : approche éducative et collaborative." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM3081.

Full text
Abstract:
This study focuses on the co-construction of a strategy aiming to implement, in nursing practice, a rating scale to assess the risk of constipation in hospitalised patients (ARCoPH). It is based on the humanistic model of nursing (Girard and Cara, 2011) and on the social constructivist approach to learning (Vygotsky, 1997). The research design uses a collaborative approach (Desgagné, 1997). The objectives are to co-construct a strategy to implement this new scale and to evaluate the impact of this approach on the continuing professional development (CPD) of the nurses who participated in the study and on the clinical reasoning of their peers. Using a collaborative approach, a group of five nurses developed, during group analysis sessions (Van Campenhoudt et al., 2005), practical insights to implement the ARCoPH scale. The impact on their CPD was determined through a group interview and a questionnaire. The effect of this approach on the clinical reasoning of the teams was established using a before-and-after survey based on the observation of patient intake interviews and on an assessment of the nurses' ability to identify patients at risk of constipation. This collaborative approach led to the professional development of the participating nurses, specifically to the improvement of their reflective skills. The co-construction of this implementation strategy for the ARCoPH scale can be associated with the transfer of learning model as defined by Fixsen et al. (2005) and Graham et al. (2006), and can thus help close the gaps between theory and practice.
APA, Harvard, Vancouver, ISO, and other styles
11

Goomanee, Salvish. "Rigorous Approach to Quantum Integrable Models at Finite Temperature." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN039/document.

Full text
Abstract:
This thesis develops a rigorous framework allowing one to prove exact representations for various observables in the XXZ Heisenberg spin-1/2 chain at finite temperature. Previously it had been argued in the literature that the per-site free energy or the correlation lengths admit integral representations whose integrands are expressed in terms of solutions of non-linear integral equations. The derivations of such representations relied on various conjectures, such as the existence of a real, non-degenerate, maximal-in-modulus eigenvalue of the quantum transfer matrix, the interchangeability of the infinite-volume and infinite-Trotter-number limits, the existence and uniqueness of the solutions to the auxiliary non-linear integral equations, and finally the identification of the quantum transfer matrix's eigenvalues with solutions to the non-linear integral equation. We rigorously prove all these conjectures in the high-temperature regime. Our analysis also allows us to prove that, for temperatures high enough, a certain subset of sub-dominant eigenvalues of the quantum transfer matrix may be described in terms of solutions to a spin-1 chain of finite length.
APA, Harvard, Vancouver, ISO, and other styles
12

Brogna, Gianluigi. "Probabilistic Bayesian approaches to model the global vibro-acoustic performance of vehicles." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI082.

Full text
Abstract:
In the automotive domain, although already quite elaborate, the current approaches to predict and analyse the vibro-acoustic behaviour of a vehicle are still far from the complexity of the real system. Among other limitations, design specifications are still essentially based on extreme loading conditions, useful when verifying mechanical strength, but not representative of actual vehicle usage, which is instead important when addressing vibro-acoustic performance. As a consequence, one main aim here is to build a prediction model able to take into account the loading scenarios representative of actual vehicle usage, as well as the car's structural uncertainty (due, for instance, to production dispersion). The proposed model shall cover the low- and mid-frequency domain. To this aim, four main steps are proposed in this work: (1) the definition of a model for a general vehicle system, pertinent to the vibro-acoustic responses of interest; (2) the estimation of the whole set of loads applied to this system in a large range of operating conditions; (3) the statistical analysis and modelling of these loads as a function of the vehicle operating conditions; (4) the analysis of the application of the modelled loads to non-parametric stochastic transfer functions, representative of the vehicle's structural uncertainty. To achieve these steps, ad hoc Bayesian algorithms have been developed and applied to a large industrial database. The Bayesian framework is considered particularly valuable here since it allows prior knowledge, namely from automotive experts, to be taken into account, and since it easily enables uncertainty propagation between the layers of the probabilistic model. Finally, this work shows that the proposed algorithms, more than simply yielding a model of the vibro-acoustic response of a vehicle, are also useful for gaining deep insight into the dominant physical mechanisms at the origin of the response of interest.
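Step (2), estimating operational loads from measured responses, is generically a linear-Gaussian inverse problem in a Bayesian setting. The sketch below illustrates that single step only; it is not the thesis algorithm, and the noise and prior scales are free parameters of the example:

    import numpy as np

    def bayesian_force_estimate(H, y, sigma_n, sigma_f):
        # Posterior mean of f for y = H f + noise, noise ~ N(0, sigma_n^2 I),
        # prior f ~ N(0, sigma_f^2 I): the classical linear-Gaussian
        # (Tikhonov-like) solution, evaluated per frequency line.
        n = H.shape[1]
        A = H.conj().T @ H / sigma_n**2 + np.eye(n) / sigma_f**2
        return np.linalg.solve(A, H.conj().T @ y / sigma_n**2)

    # Illustrative use: 20 measured responses, 5 unknown loads at one frequency.
    rng = np.random.default_rng(1)
    H = rng.standard_normal((20, 5)) + 1j * rng.standard_normal((20, 5))
    f_true = rng.standard_normal(5)
    y = H @ f_true + 0.05 * rng.standard_normal(20)
    print(np.round(bayesian_force_estimate(H, y, 0.05, 1.0).real, 2))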
APA, Harvard, Vancouver, ISO, and other styles
13

Eng, Catherine. "Développement de méthodes de fouille de données basées sur les modèles de Markov cachés du second ordre pour l'identification d'hétérogénéités dans les génomes bactériens." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10041/document.

Full text
Abstract:
Second-order Hidden Markov Models (HMM2) are stochastic processes that have proved highly efficient in exploring bacterial genome sequences. Different types of HMM2 (M1M2, M2M2, M2M0) combined with combinatorial methods were developed in a new approach to discriminate genomic regions without a priori knowledge of their genetic content. This approach was applied to two bacterial models in order to validate its achievements: Streptomyces coelicolor and Streptococcus thermophilus. These bacterial species exhibit distinct genomic traits (base composition, global genome size) in relation to their ecological niche: soil for S. coelicolor and dairy products for S. thermophilus. In S. coelicolor, a first HMM2 architecture allowed the detection of short discrete DNA heterogeneities (5-16 nucleotides in size), mostly localized in intergenic regions. The application of the method to a biologically known gene set, the SigR regulon (involved in the oxidative stress response), proved its efficiency in identifying bacterial promoters. S. coelicolor shows a complex regulatory network (up to 12% of the genes may be involved in gene regulation) with more than 60 sigma factors involved in the initiation of transcription. A classification method coupled to a search algorithm (R'MES) was developed to automatically extract box1-spacer-box2 composite DNA motifs, a structure corresponding to the typical bacterial promoter -35/-10 boxes. Among the 814 DNA motifs described for the whole S. coelicolor genome, those of sigma factors (B, WhiG) could be retrieved from the crude data. We could show that this method can be generalized, applying it successfully in a preliminary attempt to the genome of Bacillus subtilis.
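A second-order chain is usually implemented by the standard pair-state expansion: a chain over pairs (s_{t-1}, s_t) is first order, so ordinary dynamic programming applies. The sketch below illustrates that reduction for Viterbi decoding; it is a generic textbook construction, not the thesis software:

    import numpy as np

    def hmm2_viterbi(obs, logA2, logB, logpi2):
        # Viterbi decoding for a second-order HMM via pair-state expansion.
        # logA2[i, j, k] = log P(s_t = k | s_{t-2} = i, s_{t-1} = j)
        # logB[k, o]     = log P(obs = o | state = k)
        # logpi2[i, j]   = log P(s_0 = i, s_1 = j)
        delta = logpi2 + logB[:, obs[0]][:, None] + logB[:, obs[1]][None, :]
        back = []
        for o in obs[2:]:
            cand = delta[:, :, None] + logA2     # cand[i, j, k]
            back.append(cand.argmax(axis=0))     # best i for each new pair (j, k)
            delta = cand.max(axis=0) + logB[:, o][None, :]
        j, k = np.unravel_index(delta.argmax(), delta.shape)
        path = [j, k]
        for bp in reversed(back):
            i = bp[j, k]
            path.insert(0, i)
            j, k = i, j
        return [int(s) for s in path]

    # Shapes for S states and O symbols: logA2 (S,S,S), logB (S,O),
    # logpi2 (S,S); obs is a list of integer symbol indices, length >= 2.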
APA, Harvard, Vancouver, ISO, and other styles
14

Gump, Brandon Adam. "Automated Transforms of Software Models: A Design Pattern Approach." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1260287805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Rhee, Seung Hyong. "Optimal flow control and bandwidth allocation in multiservice networks : decentralized approaches /." Digital version accessible at http://wwwlib.umi.com/cr/utexas/main, 1999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Söderdahl, Fabian. "A Cross-Validation Approach to Knowledge Transfer for SVM Models in the Learning Using Privileged Information Paradigm." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385378.

Full text
Abstract:
The learning using privileged information paradigm has allowed support vector machine models to incorporate privileged information, i.e. variables available in the training set but not in the test set, to improve predictive ability. The subsequent introduction of the knowledge transfer method has enabled practical application of support vector machine models utilizing privileged information. This thesis describes a modified knowledge transfer method inspired by cross-validation which, unlike the current standard knowledge transfer method, does not estimate the knowledge transfer function and the approximated privileged features used in the support vector machines on the same observations. The modified method, robust knowledge transfer, is described and evaluated against the standard knowledge transfer method, and is shown to improve the predictive performance of the support vector machines for both binary classification and regression.
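The cross-fitting idea can be sketched directly. This is a minimal illustration of the concept described in the abstract, not the thesis code: the transfer regressors and the SVM kernels/hyperparameters are scikit-learn defaults, chosen only for brevity:

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.svm import SVC, SVR

    def robust_knowledge_transfer(X, X_priv, y, n_splits=5):
        # Cross-fitted knowledge transfer: the function mapping regular
        # features to privileged ones is never trained and applied on the
        # same observations.
        X_priv_hat = np.zeros_like(X_priv, dtype=float)
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
        for train, test in kf.split(X):
            for j in range(X_priv.shape[1]):   # one regressor per privileged feature
                t = SVR().fit(X[train], X_priv[train, j])
                X_priv_hat[test, j] = t.predict(X[test])
        # Transfer functions refit on all data, for use on unseen rows.
        transfer = [SVR().fit(X, X_priv[:, j]) for j in range(X_priv.shape[1])]
        clf = SVC().fit(np.hstack([X, X_priv_hat]), y)
        return clf, transfer

    # At test time, privileged features are approximated from X alone:
    #   X_aug = np.hstack([X_new, np.column_stack(
    #       [t.predict(X_new) for t in transfer])])
    #   y_hat = clf.predict(X_aug)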
APA, Harvard, Vancouver, ISO, and other styles
17

Neuser, Hannah. "Source Language of Lexical Transfer in Multilingual Learners : A Mixed Methods Approach." Doctoral thesis, Stockholms universitet, Engelska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-142050.

Full text
Abstract:
The study reported in this thesis investigates the source language of lexical transfer in multilingual learners using a mixed methods approach. Previous research has shown that the source language of crosslinguistic influence can be related to factors such as proficiency, recency/exposure, psychotypology, the L2 status, and item-specific transferability. The present study employed a mixed methods approach in order to best serve the particularities of each of the five factors under investigation. Multinomial logistic regression was employed to test the predictive power of the first four factors, thereby addressing the issue of confounding variables found in previous studies. A more exploratory qualitative analysis was used to investigate item-specific transferability, owing to the lack of prior empirical studies focusing on this aspect. Both oral and written data were collected, offering an analysis of modal differences in direct comparison. The results show a significant effect of proficiency and exposure, but inconsistent patterns for psychotypology. Most importantly, in this study of lexical transfer, a significant L1 status effect was found, rather than an L2 status effect. In addition, the statistical model predicted the source language of transfer better in the spoken than in the written mode. Finally, learners were found to assess, as well as actively improve, an item's transferability in relation to target language norms and constraints. All of these findings contribute to our understanding of lexical organization, activation, and access in the multilingual mind.
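For readers unfamiliar with the statistical core of this design, a minimal sketch of such a multinomial regression set-up follows; all column names and values are invented for illustration and do not come from the thesis data:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Each row is one observed transfer instance; the outcome is the
    # source language of the transferred item.
    df = pd.DataFrame({
        "proficiency":    [3, 1, 2, 3, 1, 2],     # proficiency in candidate source
        "exposure":       [5, 2, 4, 1, 3, 2],     # recency/exposure score
        "psychotypology": [0.8, 0.3, 0.6, 0.2, 0.9, 0.4],
        "source_lang":    ["L1", "L2", "L1", "L3", "L1", "L2"],
    })
    X, y = df.drop(columns="source_lang"), df["source_lang"]
    # With the default lbfgs solver, scikit-learn fits a multinomial model
    # when the outcome has more than two classes.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(dict(zip(model.classes_, model.predict_proba(X.iloc[:1])[0].round(2))))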
APA, Harvard, Vancouver, ISO, and other styles
18

Kim, Eui-Jong. "Development of numerical models of vertical ground heat exchangers and experimental verification : domain decomposition and state model reduction approach." Thesis, Lyon, INSA, 2011. http://www.theses.fr/2011ISAL0026/document.

Full text
Abstract:
Ground-source heat pump systems with vertical ground heat exchangers (GHE) are gaining popularity worldwide for their higher coefficients of performance and lower CO2 emissions. However, the higher initial cost of installing the borehole GHEs is a main obstacle to spread the systems. To reduce the required total GHE length and efficiently operate the systems, various systems such as hybrid ones (e.g. solar heat injection) have recently been introduced. Accurate prediction of heat transfer in and around boreholes of such systems is crucial to avoid costly overdesigns or catastrophic failures of undersized systems as it is for typical GCHP systems. However, unlike the traditional sizing methods, it is increasingly required to take into account detailed borehole configuration and transient effects (e.g. short circuit effects between U-tubes). Many of the existing GHE models have been reviewed. Some of these models have serious limitations when it comes to transient heat transfer, particularly in the borehole itself. Accordingly, the objective of this thesis is to develop a model that is capable to accurately predict thermal behaviors of the GHEs. A precise response to input variations even in a short time-step is also expected in the model. The model also has to account for a correct temperature and flux distribution between the U-tubes and inside the borehole that seems to be important in the solar heat injection case. Considering these effects in 3D with a detailed mesh used for describing the borehole configurations is normally time-consuming. This thesis attempts to alleviate the calculation time using state model reduction techniques that use fewer modes for a fast calculation but predict similar results. Domain decomposition is also envisaged to sub-structure the domain and vary the time-step sizes. Since the decomposed domains should be coupled one another spatially as well as temporally, new coupling methods are proposed and validated particularly in the FEM. For the simulation purpose, a hybrid model (HM) is developed that combines a numerical solution, the same one as the 3D-RM but only for the borehole, and well-known analytical ones for a fast calculation. An experimental facility used for validation of the model has been built and is described. A comparison with the experimental results shows that the relatively fast transients occurring in the borehole are well predicted not only for the outlet fluid temperature but also for the grout temperatures at different depths even in very short time-steps. Even though the current version of 3D-RM is experimentally validated, it is still worth optimizing the model in terms of the computational time. Further simulations with the 3D-RM are expected to be carried out to estimate the performance of new hybrid systems and propose its appropriate sizing with correspondent thermal impacts on the ground. Finally, the development of the model 3D-RM can be an initiation to accurately model various types of GHE within an acceptable calculation time
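A minimal sketch of the state-model reduction idea the abstract describes: keep only a few dominant eigenmodes of a large linear thermal state model. The 100-node diffusion chain below is illustrative, not the thesis's mesh or reduction scheme.

```python
import numpy as np

def reduce_state_model(A, B, C, n_modes):
    """Truncate a linear thermal model dx/dt = A x + B u, y = C x
    to its n_modes slowest eigenmodes (illustrative modal reduction)."""
    lam, V = np.linalg.eig(A)            # eigen-decomposition of the state matrix
    keep = np.argsort(np.abs(lam.real))[:n_modes]  # slowest-decaying modes first
    Vk = V[:, keep]
    Wk = np.linalg.pinv(V)[keep, :]      # matching rows of the inverse eigenbasis
    Ar = np.diag(lam[keep])              # reduced (diagonal) state matrix
    return Ar, Wk @ B, C @ Vk

# Example: a 100-node diffusion chain reduced to 6 modes
n = 100
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
B = np.zeros((n, 1)); B[0, 0] = 1.0     # heat input at one end
C = np.zeros((1, n)); C[0, -1] = 1.0    # temperature read at the other end
Ar, Br, Cr = reduce_state_model(A, B, C, n_modes=6)
```

The reduced triple (Ar, Br, Cr) can then be time-stepped at a small fraction of the full model's cost, which is the point of the approach.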
APA, Harvard, Vancouver, ISO, and other styles
19

Baggu, Gnanesh. "Efficient Approach for Order Selection of Projection-Based Model Order Reduction." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37967.

Full text
Abstract:
The present thrust in the electronics industry towards integrating multiple functions on a single chip while operating at very high frequencies has highlighted the need for efficient Electronic Design Automation (EDA) tools to shorten the design cycle and capture market windows. However, the increasing complexity of modern circuit design has made simulation a computationally cumbersome task. The notion of model order reduction has emerged as an effective tool to address this difficulty. Typically, there are numerous approaches and several issues involved in implementing model-order reduction techniques. Among the most important of these issues is the problem of determining a suitable order (or size) for the reduced system. An optimal order would be the minimal order that enables the reduced system to capture the behavior of the original (more complex and larger) system up to a user-defined frequency. The contribution presented in this thesis is a new approach for determining the order of the reduced system. The proposed approach is based on approximating the impulse response of the original system in the time domain. The core methodology for obtaining that approximation is numerically inverting the Laplace-domain representation of the impulse response from the complex domain (s-domain) into the time domain. The main advantage of the proposed approach is that it allows the order selection algorithm to operate directly on the time-domain form of the impulse response. It is well known that numerically generating the impulse response directly in the time domain is very difficult, if not impossible, since it requires driving the original network with the Dirac delta function, which is a mathematical abstraction rather than a concrete waveform that can be implemented on a digital computer. This difficulty is avoided in the proposed approach, since it uses the Laplace-domain image of the impulse response to obtain its time-domain representation. The numerical simulations presented in the thesis demonstrate that using the time-domain waveform of the impulse response, computed with the proposed approach and properly filtered with a Butterworth filter, guides the order selection algorithm to select a smaller order, i.e., the reduced system becomes more compact in size. "Smaller" or "more compact" here refers to the comparison with existing techniques, which seek to generate time-domain approximations of the impulse response by driving the original network with a pulse-shaped function (e.g., a Gaussian pulse).
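The central step described here, recovering a time-domain impulse response from its Laplace-domain image, can be sketched with a standard numerical inversion scheme such as Gaver-Stehfest. The transfer function below is a stand-in for illustration, not a circuit model from the thesis.

```python
import math

def gaver_stehfest(F, t, N=12):
    """Numerically invert a Laplace-domain function F(s) at time t > 0
    using the Gaver-Stehfest algorithm (N must be even)."""
    a = math.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * math.factorial(2 * j)
                   / (math.factorial(N // 2 - j) * math.factorial(j)
                      * math.factorial(j - 1) * math.factorial(k - j)
                      * math.factorial(2 * j - k)))
        total += (-1) ** (k + N // 2) * Vk * F(k * a)
    return a * total

# Stand-in system: F(s) = 1/(s+1), whose impulse response is exp(-t)
h = [gaver_stehfest(lambda s: 1.0 / (s + 1.0), t) for t in (0.5, 1.0, 2.0)]
print(h)  # close to [0.6065, 0.3679, 0.1353]
```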
APA, Harvard, Vancouver, ISO, and other styles
20

Wright, Janice Kathleen. "A GIS approach to implementing and improving benefit transfer models for the valuation of rural recreational resources." Thesis, University of East Anglia, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268587.

Full text
Abstract:
Organisations managing recreational sites commonly need to understand the factors influencing the public's visitation choices and their impact on the value of their sites. This need is particularly pertinent given society's increasing reliance on cost-benefit analysis for project appraisal. Whilst on-site visitor surveys can provide information on preferences and values, the potential to transfer findings to predict visitor numbers and values at unsurveyed sites provides an attractive policy option. Indeed, the demand for these benefit transfer methodologies is increasing as more Government emphasis is placed on evaluating the economic potential of rural outdoor recreation. This research concerns the development of benefit transfer models to estimate visitor numbers from outset zones to British Waterways and Forestry Commission sites. Employing a GIS, the research uses multilevel statistical modelling techniques to quantify the impacts of proximity to competing recreation sites, resource accessibility and quality, and the characteristics of visiting populations. The models are constructed using visitor survey data and applied to unsurveyed sites, testing their use in benefit transfer. Methods are also developed that allow their output to be used to estimate the non-market value of the recreational opportunities afforded by the resources. The findings reveal robust relationships determining visit patterns, with travel times from outset zones being a consistent predictor of visitor numbers. A range of other indicators were also significant, including socio-demographic measures, site characteristics and substitute availability values. Nevertheless, when individual sites were compared, considerable variability was detected in the strength and direction of these relationships. The methodology developed explicitly addresses the frequently ignored spatial dimension of benefit transfer; here the GIS provides the functionality to produce a range of measures of the underlying determinants of recreational visits. Although further refinements are needed, the future for spatial benefit transfer models appears promising.
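The multilevel structure described, visits from outset zones nested within sites, maps naturally onto a mixed-effects regression. A minimal sketch using statsmodels follows; the file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey table: one row per outset-zone x site pair,
# with columns visits, travel_time, substitutes, site
df = pd.read_csv("visits.csv")

# Random intercept per site captures between-site variability in visit rates,
# while travel time and substitute availability enter as fixed effects
model = smf.mixedlm("visits ~ travel_time + substitutes", df, groups=df["site"])
result = model.fit()
print(result.summary())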
APA, Harvard, Vancouver, ISO, and other styles
21

Hamilton, Steven. "A Time-Dependent Slice Balance Method for High-Fidelity Radiation Transport Computations." Thesis, Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14608.

Full text
Abstract:
A general finite difference discretization of the time-dependent radiation transport equation is developed around the framework of an existing steady-state, three-dimensional radiation transport solver based on the slice-balance approach. Three related algorithms are outlined within the general finite difference scheme: an explicit, an implicit, and a semi-implicit approach. The three algorithms are analyzed with respect to the discretization of each element of the phase space in the transport solver. The explicit method, despite its small computational cost per time step, is found to be unsuitable for many purposes due to its inability to accurately handle rapidly varying solutions. The semi-implicit method is shown to produce results nearly as reliable as the fully implicit solver while requiring significantly less computational effort.
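The trade-off reported among the explicit, implicit, and semi-implicit schemes can be illustrated on a scalar decay equation with the theta-method, of which all three are special cases (theta = 0, 1, and 1/2). This is an analogy for the time discretization only, not the slice-balance transport solver itself.

```python
def theta_step(u, lam, dt, theta):
    """One theta-method step for du/dt = -lam*u:
    theta=0 explicit, theta=1 fully implicit, theta=0.5 semi-implicit."""
    return u * (1.0 - (1.0 - theta) * lam * dt) / (1.0 + theta * lam * dt)

lam, dt, steps = 50.0, 0.1, 10   # stiff case: lam*dt = 5 exceeds the explicit limit
u = {0.0: 1.0, 0.5: 1.0, 1.0: 1.0}
for _ in range(steps):
    u = {th: theta_step(v, lam, dt, th) for th, v in u.items()}
print(u)  # explicit value blows up; implicit and semi-implicit decay
```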
APA, Harvard, Vancouver, ISO, and other styles
22

Leonardo, Barros Silva Bruno. "Sec-MoSC Translation Framework: An approach to transform business process models into executable process considering security requirements." Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/2837.

Full text
Abstract:
The emergence of Service-Oriented Computing (SOC) as a new programming paradigm brought many desirable characteristics, such as enabling programming at scale, easier integration between systems of different companies and scalability, but above all a focus on higher-level business logic. While this focus on business logic is more productive and crucial for business users who lack technical knowledge (and indeed do not need it), there is still a semantic gap between what the user describes in the business logic and what is actually executed on the machine. In this context, we present the Sec-MoSC Translation Framework, an approach proposed to bridge this gap, responsible for translating high-level business process models into executable processes. Two main contributions arise from this work: an easier and more reusable way of creating new translation artifacts, and the incorporation of non-functional requirements (such as security) into the model-to-execution translation process, specifically with an implementation for the two most widely used languages for modeling and executing business processes, respectively BPMN and BPEL.
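A toy sketch of the model-to-execution translation idea, assuming a trivial in-memory BPMN-like representation and emitting BPEL-like activities; the framework's actual translation artifacts and security annotations are far richer than this.

```python
# Toy translation table: BPMN element kind -> BPEL activity emitter
def emit_invoke(task):
    return f'<invoke name="{task["name"]}" operation="{task["op"]}"/>'

def emit_sequence(children):
    return "<sequence>" + "".join(children) + "</sequence>"

TRANSLATORS = {"serviceTask": emit_invoke}

def translate(process):
    """Translate a flat BPMN-like process model into a BPEL sequence."""
    return emit_sequence(TRANSLATORS[t["kind"]](t) for t in process)

bpmn = [{"kind": "serviceTask", "name": "CheckCredit", "op": "check"},
        {"kind": "serviceTask", "name": "Ship", "op": "ship"}]
print(translate(bpmn))
```

New element kinds (gateways, security annotations, and so on) would be supported by registering further emitters in the translation table, which is the reusability idea the abstract points to.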
APA, Harvard, Vancouver, ISO, and other styles
23

Lauvergne, Muriel. "Réservation de connexions avec reroutage pour les réseaux ATM : une approche hybride par programmation par contraintes." Nantes, 2002. http://www.theses.fr/2002NANT2004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Villeger, David. "Restitution d'énergie élastique et locomotion (REEL) : une approche adimensionnelle." Toulouse 3, 2014. http://thesesups.ups-tlse.fr/4068/.

Full text
Abstract:
The aim of this work is to develop a dimensionless approach to human locomotion, specifically walking and running. In other words, the main goal of this PhD thesis is to induce locomotor similarity between humans of different sizes. These similarities are the same ones that physicists look for when designing a prototype from a scale model; throughout this thesis, the approach allows a small human to be considered a reduced model of a tall one. The approach lies at the crossroads of physics, modeling and biomechanics. Applying dimensional analysis to simple locomotion models highlights the value of the dimensionless Froude number (dimensionless speed) and Strouhal number (dimensionless frequency) for studying human locomotion. These simple models reduce the human body to its mass concentrated at the center of mass, oscillating at the end of a massless spring; they include an elastic component and describe the transfers occurring at the center of mass between kinetic, gravitational potential and elastic potential energies. The ratio of these energies is called Modela; it has two variants, one for walking and one for running, and depends on Froude and Strouhal. First, experimental conditions prescribing displacement speed (from Froude) and step frequency (from Strouhal), both scaled to each subject's anthropometry, generated locomotor similarity between different-sized subjects for walking and running. These results reveal the interest of the dimensionless approach by showing that, once expressed independently of anthropometry, the subjects' dimensionless behaviors are the same. Using this approach to compare locomotion within the human species is valuable for studying behaviors that deviate from a standard; it may also highlight organizations of movement common to several species. Second, the comparison between the simple model and a complex model of the human body is investigated. On the one hand, the simple model includes an elastic component but describes only the center of mass; on the other hand, the human body can be modeled as a set of articulated body segments. A link is made between the global movement of the center of mass and the coordination of the poly-articulated segments during movement, particularly regarding energy transfers. Bringing the two models together explains how a subject can behave like a bouncing mass during walking and running, and how future experiments could investigate human elasticity and energy-saving mechanisms.
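The similarity conditions described, equal Froude and Strouhal numbers across subjects of different sizes, translate directly into speed and step-frequency prescriptions. A minimal sketch follows, assuming leg length L (in metres) as the characteristic length, with Fr = v^2/(g L) and St = f L / v.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def similar_conditions(L, froude, strouhal):
    """Speed and step frequency giving a subject of leg length L (m)
    the prescribed dimensionless Froude and Strouhal numbers."""
    v = math.sqrt(froude * G * L)   # from Fr = v^2 / (g L)
    f = strouhal * v / L            # from St = f L / v
    return v, f

# Two different-sized subjects moving at the same (Fr, St) are dynamically similar
for L in (0.80, 1.00):
    print(L, similar_conditions(L, froude=0.25, strouhal=0.45))
```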
APA, Harvard, Vancouver, ISO, and other styles
25

He, Ping. "NOVEL EXPERIMENTAL APPROACHES AND THEORETICAL MODELS FOR IMPROVING SENSITIVITY AND INFORMATION CONTENT OF NMR AND MRI SPECTROSCOPY." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/dissertations/757.

Full text
Abstract:
The ongoing effort to improve the sensitivity and information content of NMR spectroscopy and MRI has important implications in scientific research and medical diagnostics. In this dissertation, a variety of approaches have been investigated and expanded on in an effort to contribute to this field. First, cryptophanes are cage-shaped molecules that have previously been used to encapsulate molecules of interest for a number of potential applications, including gas sensing and biosensing. In one set of studies, encapsulation of molecular hydrogen gas (H2) showed different behavior compared with other small organic molecules in C111 (until now, the smallest cryptophane). The transient, non-covalent binding was studied by variable-temperature NMR at different fields up to 950 MHz. A mathematical model that considers multiple-H2 binding was developed to better understand the physics of the binding process, with predictions compared to experimental data (and rationalized in light of quantum chemical calculations on possible H2@C111 complexes). To our knowledge, C111 is the only system to reversibly trap multiple H2 gas molecules non-covalently under mild conditions. In a second series of studies, the interaction of laser-polarized xenon and a water-soluble cryptophane was studied. Despite the low concentration of xenon in aqueous solution, it was possible to achieve polarization transfer from xenon to cryptophane spins via the SPINOE (spin-polarization-induced nuclear Overhauser effect). The SPINOE enhancements, along with the 129Xe NMR spectra, provide information about the interactions in the Xe-cryptophane complex (variants of which are now used in so-called xenon biosensors). This was our first in-house successful application of hyperpolarized xenon as a signal source for the spins of other molecules, leading the way to a number of ongoing studies. Although the absolute NMR enhancements obtained via the SPINOE were small, much larger enhancements were studied in a technique that uses para-hydrogen (pH2), a spin isomer of normal molecular hydrogen, as the source of spin order. As with the xenon and H2-binding experiments, pH2 must be delivered as a gas to a sealed sample prior to performing the NMR experiments. Parahydrogen-induced polarization (PHIP) is an emerging approach for enhancing sensitivity in NMR experiments and may play an important role in MRI studies. Within this field, the very recent phenomenon of signal amplification by reversible exchange (SABRE) was investigated: the reproducibility of this recent discovery was examined, new conclusions about the mechanism of the technique are delineated, and NMR signal enhancements of nearly 400-fold are reported. Moreover, a new water-soluble NHC-iridium catalyst was synthesized and investigated in SABRE-related studies. We also report the first studies of SABRE enhancement in biologically tolerable solvents, opening a door to the development of SABRE-hyperpolarized metabolic contrast agents for subsecond molecular imaging in the body. Although much of the above work was motivated by the desire to improve NMR/MRI sensitivity, other efforts concerned the other side of the equation: improving NMR/MRI information content. The next section concerns our efforts to investigate the use of Variable-Angle (VA) NMR to study composite liquid crystal (LC) media composed of stretched polyacrylamide gels (SAG) and embedded bacteriophage Pf1 particles. This in situ combination exploits the apparent interference between the different solute-aligning properties of the two LC components, yielding composite media with alignment properties that can differ in a tunable manner from those obtained with each medium alone. Characterization of the alignment of both large and small molecules provides more insight into the nature of the solute alignment that these composite phases introduce, with the goal of developing this approach as a new technique for studying molecular structure and dynamics via the dipolar and quadrupolar couplings that are restored in liquid-crystalline media. Finally, the use of SPIONs, superparamagnetic iron oxide nanoparticles, as contrast agents is a relatively new approach to enhancing information content in MRI studies; this is particularly true for SPIONs that have been surface-functionalized to achieve an environment-sensitive MR response. Novel surface-functionalized SPIONs were investigated by examining their effect on nuclear spin relaxation in aqueous environments simulating bodily tissues. More specifically, the pH- and ionic-strength-dependent properties of selected dendron-functionalized and polymer-functionalized SPIONs were examined. Of particular interest to this dissertation is how environment-mediated transient clustering of the SPIONs gives rise to changes in so-called transverse (homogeneous) spin relaxation rates, as measured by following the decay of MR signals detected after the application of a series of radio-frequency (RF) pulses. In order to better understand these effects in the context of the SPIONs' behavior, a mathematical model is under development, with predictions compared against experimental data. Aspects of the model are also compared to transmission electron microscopy (TEM) and dynamic light scattering (DLS) measurements.
APA, Harvard, Vancouver, ISO, and other styles
26

Mahdavi, Mostafa. "Study of flow and heat transfer features of nanofluids using multiphase models : eulerian multiphase and discrete Lagrangian approaches." Thesis, University of Pretoria, 2016. http://hdl.handle.net/2263/61309.

Full text
Abstract:
Choosing correct boundary conditions and flow field characteristics, and employing the right thermophysical properties, can strongly affect the simulation of convective heat transfer using nanofluids. Nanofluids have shown higher heat transfer performance in comparison with conventional heat transfer fluids. The suspension of nanoparticles in a nanofluid creates a larger interaction-surface-to-volume ratio, so the particles can be distributed uniformly to bring about the most effective enhancement of heat transfer without causing a considerable pressure drop. These advantages make nanofluids desirable heat transfer fluids in the cooling and heating industries, and their thermal effects in both forced and free convection flows have attracted great research interest over the last decade. The main goal of this study is to investigate the interaction mechanisms occurring between the nanoparticles and the base fluid. These mechanisms can be explained through theoretical and numerical methods following two common approaches to particle-fluid interaction: Eulerian-Eulerian and Eulerian-Lagrangian, whose dominant concepts are slip velocity and interaction forces, respectively. The mixture multiphase model, as part of the Eulerian-Eulerian approach, deals with slip mechanisms and, to some extent, with mass diffusion from the nanoparticle phase to the fluid phase. Slip velocity can be induced by a pressure gradient, buoyancy, virtual mass, and attraction and repulsion between particles, while some of the diffusion processes can be driven by temperature and concentration gradients. The discrete phase model (DPM) is part of the Eulerian-Lagrangian approach. The interactions between the solid and liquid phases are represented as forces such as drag, the pressure gradient force, the virtual mass force, gravity, electrostatic forces, and thermophoretic and Brownian forces. The energy transfer from a particle to the continuous phase is introduced through both convective and conductive terms on the particle surface. Both approaches were studied for laminar and turbulent forced convection as well as for natural convection in a cavity; the cases included horizontal and vertical pipes and a rectangular cavity. An experimental study of the cavity flow was conducted for comparison with the simulation results, and the forced convection results were evaluated against data from the literature. Alumina and zinc oxide nanoparticles of different sizes were used in the cavity experiments and in the corresponding simulations. All the equations, slip mechanisms and forces were implemented in ANSYS-Fluent through user-defined functions. The comparison showed good agreement between experiments and numerical results: the Nusselt number and pressure drop, as the heat transfer and flow features of the nanofluid, were found to lie within the accuracy range of the experimental measurements. The findings of the two approaches differed somewhat, especially regarding the concentration distribution: the mixture model predicted a more uniform distribution in the domain than the DPM. Due to its Lagrangian frame, the simulation time of the DPM was much longer. The method proposed in this research could also be a useful tool for other areas of particulate systems.
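A stripped-down sketch of the Lagrangian (DPM) force balance listed in the abstract, keeping only Stokes drag and a fluctuation-dissipation Brownian force; the thesis implements the full force set (thermophoresis, pressure gradient, virtual mass, electrostatics) through ANSYS-Fluent user-defined functions, so all values here are illustrative.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def track_particle(u_fluid, d_p=50e-9, rho_p=3970.0, mu=1e-3, T=300.0,
                   dt=1e-10, steps=2000, seed=0):
    """Integrate m dv/dt = F_drag + F_brownian + m g for one nanoparticle.
    A Langevin-style sketch: Stokes drag plus a random force whose scale
    follows the fluctuation-dissipation relation."""
    rng = np.random.default_rng(seed)
    m = rho_p * np.pi * d_p**3 / 6.0            # particle mass, kg
    zeta = 3.0 * np.pi * mu * d_p               # Stokes drag coefficient
    sigma = np.sqrt(2.0 * zeta * KB * T / dt)   # Brownian force scale
    g = np.array([0.0, -9.81, 0.0])
    v = np.zeros(3)
    for _ in range(steps):
        force = zeta * (u_fluid - v) + sigma * rng.standard_normal(3)
        v = v + dt * (force / m + g)            # explicit Euler step
    return v

print(track_particle(u_fluid=np.array([0.1, 0.0, 0.0])))
```

With these parameters the drag relaxation time is well resolved by the time step; the particle velocity relaxes toward the fluid velocity while fluctuating thermally, which is the basic DPM picture.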
APA, Harvard, Vancouver, ISO, and other styles
27

Burkhard, Remo Aslak. "Knowledge visualization : the use of complementary visual representations for the transfer of knowledge : a model, a framework, and four new approaches." [S.l.] : [s.n.], 2005. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Marzo, i. Lázaro Josep Lluís. "Enhanced convolution approach for CAC in ATM networks, an analytical study and implementation." Doctoral thesis, Universitat de Girona, 1997. http://hdl.handle.net/10803/7715.

Full text
Abstract:
The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks.
In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using the PC is compared with QoS parameters in buffer-less environments, where only the cell loss ratio (CLR) is relevant. The convolution algorithm is a good solution for CAC in ATM networks with small buffers: if the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the network performance guarantees to be retained.
Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments confining the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay cannot be dismissed for long-distance and interactive communications, so small buffers must be used in order to minimise delay.
Under the previous premises, the convolution approach is the most accurate method used in bandwidth allocation, giving sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations.
To overcome these drawbacks, a new method of evaluation is analysed: the Enhanced Convolution Approach (ECA). In the ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage, especially in complex scenarios.
Sorting is the dominant cost factor for the formula-based convolution, whereas the evaluation itself dominates the cost of the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj), and an expression for evaluating CLRj is presented.
We can conclude that, by combining the ECA method with cut-off mechanisms, utilisation of the ECA in real-time CAC environments as a single-level scheme is always possible.
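The class-based convolution idea can be sketched numerically: each class of identical on-off sources contributes a per-class load distribution, the partial states are convolved across classes, and the congestion probability is read off the tail of the aggregate distribution. Source counts, activity probabilities, and rates below are illustrative.

```python
import numpy as np
from scipy.stats import binom

def congestion_probability(classes, capacity):
    """classes: list of (n_sources, p_on, peak_rate) tuples in integer
    bandwidth units. Convolve the per-class load distributions and
    return P(total load > capacity)."""
    dist = np.array([1.0])                      # P(total load = 0) initially
    for n, p, rate in classes:
        pmf_class = np.zeros(n * rate + 1)
        # k active sources of this class contribute k*rate units of load
        pmf_class[::rate] = binom.pmf(np.arange(n + 1), n, p)
        dist = np.convolve(dist, pmf_class)     # multi-convolution of partial states
    return dist[capacity + 1:].sum()

# Two traffic classes sharing a 30-unit link
print(congestion_probability([(20, 0.3, 1), (5, 0.4, 4)], capacity=30))
```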
APA, Harvard, Vancouver, ISO, and other styles
29

Soderquist, Daniel Robert. "Analysis of Distortion Transfer and Generation through a Fan and a Compressor Using Full-annulus Unsteady RANS and Harmonic Balance Approaches." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7401.

Full text
Abstract:
Understanding distortion transfer and generation through fan and compressor blade rows can assist blade design and performance prediction. Using full-annulus unsteady RANS simulations, the effects of distortion as it passes through the rotor of a transonic fan are analyzed at five radial locations (10%, 30%, 50%, 70%, and 90% span). The inlet distortion profile is a 90-degree sector with a 15% total pressure deficit. Fourier distortion descriptors are used to quantitatively describe distortion transfer and generation. Results are presented and compared for three operating points (near-stall, design, and choke) and are used to explain the relationship between inlet total pressure distortion, pressure-induced swirl, total pressure distortion transfer, total temperature distortion generation, and circumferential rotor work variation. It is shown that very large changes in pressure-induced swirl and in distortion transfer and generation occur between near-stall and design, but only small changes are seen between design and choke; the greatest changes occur near the tip. Local power variations are shown to correlate with total pressure distortion transfer and total temperature distortion generation. It can be difficult to predict the transfer of distortion through a fan or compressor because traditional experimental and computational methods are very expensive and time-consuming. The Harmonic Balance approach is a promising alternative that uses Fourier techniques to represent fluid flow solutions and can provide unsteady solutions much more quickly than traditional unsteady solvers. Relatively little work has been done to assess how much Fourier information is necessary to calculate a sufficiently accurate solution with the Harmonic Balance solver. A study is performed to analyze the effects of varying the amount of modal content used in Harmonic Balance simulations. Inlet distortion profiles of varying magnitude are used in order to analyze trends and provide insight into the distortion flow physics for various inlet conditions. The geometry is a single-stage axial compressor consisting of an inlet guide vane followed by the NASA Stage 37 rotor. It is shown that simulations with greater magnitudes of distortion require more modal content to achieve sufficiently accurate results. Harmonic Balance simulations are shown to have significantly lower computational costs than simulations with a conventional unsteady solver.
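Fourier distortion descriptors amount to the low-order circumferential harmonics of the inlet profile. A minimal sketch with a synthetic 90-degree, 15%-deficit total pressure profile follows; the magnitudes and phases of harmonics 1 through 5 play the role of the descriptors.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
p_ref = 101325.0
# 90-degree sector with a 15% total pressure deficit
pt = np.where(theta < np.pi / 2.0, 0.85 * p_ref, p_ref)

# Circumferential Fourier coefficients of the normalized profile
coeffs = np.fft.rfft(pt / pt.mean() - 1.0) / pt.size
magnitude = 2.0 * np.abs(coeffs[1:6])   # descriptors: harmonics 1..5
phase = np.angle(coeffs[1:6])
print(magnitude, phase)
```

Tracking how these magnitudes and phases change from the inlet plane to the rotor exit is what "distortion transfer" quantifies; the appearance of harmonics in total temperature downstream quantifies "distortion generation".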
APA, Harvard, Vancouver, ISO, and other styles
30

Hamilton, Erin Kinzel. "Multiscale and meta-analytic approaches to inference in clinical healthcare data." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47600.

Full text
Abstract:
The field of medicine is regularly faced with the challenge of utilizing information that is complicated or difficult to characterize, and physicians often must use their best judgment in reaching decisions or recommendations for treatment in the clinical setting. The goal of this thesis is to use innovative statistical tools to tackle three specific challenges of this nature from current healthcare applications. The first aim focuses on developing a novel approach to meta-analysis when combining binary data from multiple studies of paired design, particularly in cases of high heterogeneity between studies. The challenge lies in properly accounting for heterogeneity when dealing with a low or moderate number of studies and a rarely occurring outcome. The proposed approach uses a Rasch model to translate data from multiple paired studies into a unified structure that allows variability associated with both pair effects and study effects to be handled properly. Analysis is then performed using a Bayesian hierarchical structure, which accounts for heterogeneity directly within the variances of the separate generating distributions for each model parameter. This approach is applied to the debated topic within the dental community of the comparative effectiveness of materials used for pit-and-fissure sealants. The second and third aims both have applications in early detection of breast cancer. The interpretation of a mammogram is often difficult, since signs of early disease can be minuscule and the appearance of even normal tissue can be highly variable and complex; physicians often have to weigh many pieces of the whole picture when assessing next steps. The final two aims therefore focus on improving the interpretation of findings in mammograms to aid early cancer detection. When dealing with high-frequency and irregular data, as seen in most medical images, the behaviors of these complex structures are often difficult or impossible to quantify by standard modeling techniques. A commonly occurring phenomenon in high-frequency data, however, is regular scaling. The second aim of this thesis is to develop and evaluate a wavelet-based scaling estimator that reduces the information in a mammogram to an informative, low-dimensional quantification of its innate scaling behavior, optimized for classifying the tissue as cancerous or non-cancerous. The specific demands on this estimator are that it be robust with respect to distributional assumptions on the data and to outlier levels in the frequency-domain representation of the data. The final aim focuses on enhancing the visualization of microcalcifications that are too small to capture well on screening mammograms. Using scale-mixing discrete wavelet transform methods, the detail information contained in a very small and coarse image is used to impute scaled details at finer levels. These "informed" finer details are then used to produce an image of much higher resolution than the original, improving the visualization of the object. The goal is also to produce a confidence area for the true location of the shape's borders, allowing for more accurate feature assessment. Through more accurate assessment of these very small shapes, physicians may be more confident in deciding next steps.
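The wavelet-based scaling estimator described reduces a signal (a 1-D stand-in here for brevity, where the thesis works on mammogram images) to the slope of log2 detail-coefficient energy across scales. The sketch below uses PyWavelets with an ordinary least-squares fit, whereas the thesis develops a robust variant of this estimator.

```python
import numpy as np
import pywt

def scaling_exponent(signal, wavelet="db4", levels=6):
    """Estimate a scaling exponent from the slope of log2 mean detail
    energy versus decomposition level (level 1 = finest)."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    details = coeffs[1:][::-1]               # reorder: finest ... coarsest
    log_energy = [np.log2(np.mean(d ** 2)) for d in details]
    slope, _ = np.polyfit(np.arange(1, levels + 1), log_energy, 1)
    return slope                             # ~ 2H + 1 for fractional Brownian motion

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(4096))  # random walk: H ~ 0.5, slope ~ 2
print(scaling_exponent(walk))
```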
APA, Harvard, Vancouver, ISO, and other styles
31

Erbas, Cihan. "Validation of remotely-sensed soil moisture observations for bare soil at 1.4 GHz a quantitative approach through radiative transfer models to characterize abrupt transitions caused by a ponding event in an agricultural field, modifications to the radiative transfer models, and a mobile ground-based system /." [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3371777.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Bisson, Anne. "Influence de l'organisation spatiale et de la pression d'herbivorie sur les transferts de fertilité et la productivité des systèmes agro-sylvo-pastoraux : approche écologique de questions agronomiques par l'utilisation de modèles mathématiques." Thesis, Montpellier, SupAgro, 2018. http://www.theses.fr/2018NSAM0052.

Full text
Abstract:
The sustainability of agro-ecosystem functioning and the management of the associated ecosystem services is one of the major challenges of agronomic and environmental sciences. West African agro-sylvo-pastoral systems (WA-ASPS), long studied by the scientific community, offer a relevant case study. Traditionally, the fertility of these agro-ecosystems relies on a very high rate of nutrient recycling within the agro-ecosystem, maintained by both fallowing and livestock-induced nutrient transfers. Socio-economic and demographic pressures are driving major changes in the spatial and temporal organization of WA-ASPS and in the associated agricultural practices, including those related to livestock. In this thesis, we are interested in the impact of these changes on crop and animal production at the scale of the agro-ecosystem. We chose to study ASPS by developing and analyzing mathematical models built on the ecological concept of meta-ecosystems. In each of the three models proposed, we represented the ASPS as simply as possible, including the key biogeochemical mechanisms (plant growth, mineralization, leaching, deposition...) and the agricultural practices of interest. The aim was both to understand how these mechanisms interact depending on the practices and to identify emergent properties at the scale of the agro-ecosystem. Each model was developed to study the effect of a limited number of agricultural practices bearing on the organization of the spatial components or on the connectivity between them. In the first part of this work, we studied the influence of the structure of WA-ASPS on their agricultural production. In the model, four interconnected subsystems are represented: the compound ring, the bush ring, the savanna and the dwellings. The year is decomposed into a dry season and a rainy season, the dynamics of the system differing between the two. With this model, we studied the influence of three driving forces on crop production: (1) the rotation duration and the duration of fallows within rotations, (2) the proportion of the agro-ecosystem surface allocated to the different cropland areas (compound/bush) and (3) the presence or absence of livestock in the agro-ecosystem. The results highlight the ecosystem services provided by the savanna, the role of livestock as a "nutrient pump" from rangeland to cropland, and the interactions between livestock effects and fallow effects on nutrient fluxes. In the second part, we used tools from control theory to take into account the variability of agricultural practices over time. We showed that by varying the herbivory pressure appropriately over time, an additional gain in production is possible, compared with a constant herbivory pressure, for the same amount of nutrients transferred from rangelands to croplands. In the last part of this work, multi-criteria optimization of agro-ecosystem functioning makes it possible to address the complexity of the objectives of WA-ASPS as production systems and to take risk management into account. Our results highlight that trade-offs between crop and animal production are linked to the choice of crops; they also show that external sources of nutrients can increase production, but that their efficiency decreases as their quantity increases.
At the interface between ecology and agronomy, and by using tools from other fields, this modeling work offers new perspectives for optimizing crop production and fertility management in ASPS.
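The "nutrient pump" mechanism can be caricatured as a two-compartment nutrient balance in which livestock move a fraction of rangeland uptake onto cropland; the rates below are illustrative placeholders, not the thesis's calibrated meta-ecosystem model.

```python
from scipy.integrate import solve_ivp

def asp_system(t, y, transfer=0.3, deposition=1.0, leach=0.05, uptake=0.4):
    """y = [N_rangeland, N_cropland]: soil nutrient stocks.
    Livestock graze the rangeland and deposit a fraction of the
    intake on cropland as manure, acting as a nutrient pump."""
    n_range, n_crop = y
    grazed = uptake * n_range
    dn_range = deposition - leach * n_range - grazed
    dn_crop = deposition - leach * n_crop + transfer * grazed - uptake * n_crop
    return [dn_range, dn_crop]

sol = solve_ivp(asp_system, (0.0, 50.0), [10.0, 10.0])
print(sol.y[:, -1])  # near-steady stocks with the livestock transfer active
```

Setting transfer to zero in this caricature removes the pump and lowers the cropland stock, which is the qualitative effect the thesis quantifies.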
APA, Harvard, Vancouver, ISO, and other styles
33

Aubras, Farid. "Contribution à l’étude de l’influence des régimes bi-phasiques sur les performances des électrolyseurs de type PEM basse pression : approche numérique, analytique et expérimentale." Thesis, La Réunion, 2018. http://www.theses.fr/2018LARE0011/document.

Full text
Abstract:
Based on the proton conduction of polymer electrolyte membrane (PEM) technology, PEM water electrolysis (PEMWE) offers an interesting solution for efficient hydrogen production, and could help compensate for the intermittency of renewable energies (notably solar and wind) by converting the electrical energy produced into chemical energy (hydrogen). During the electrolysis of water in a PEMWE, the anodic side is where water is split into oxygen, protons and electrons. The aim of this study is to identify the link between the two-phase water/oxygen flow on the anode side and cell performance under low-pressure conditions, and to determine the operational and intrinsic parameters that affect PEMWE performance. Three approaches were developed: analytical, numerical and experimental, the first two validated against the experimental data. Regarding the experimental approach, electrochemical impedance measurements and polarization curves were obtained on two low-pressure PEM electrolyser cells: the ITW Power cell of the Electrochemical Innovation Lab (UCL) and the reversible Q-URFC cell of the LE2P laboratory. For the numerical model, we developed a two-dimensional stationary PEMWE model that takes into account the electrochemical reactions, mass transfer (bubbly flow), heat transfer and charge balance through the Membrane Electrode Assembly (MEA). In order to capture the changing electrical behavior, the model combines two scales of description: the microscale within the anodic active layer and the MEA scale. Water management at both scales is strongly linked to the slug flow or bubbly flow regime, so the water content close to the active surface areas depends on the two-phase flow regime. The simulation results demonstrate that the transition from bubbly to slug flow in the channel is associated with an improvement in mass transport, a reduction of the ohmic resistance and an enhancement of PEMWE efficiency. Regarding the analytical model, we developed a one-dimensional stationary isothermal PEMWE model accounting for the electrochemical reactions, mass transfer and charge balance through the MEA. The analytical approach yields closed-form expressions for the activation overpotential, the ohmic losses and the bubble overpotential, which dominate at low, medium and high current density respectively. This approach quantifies the total overpotential of the cell as a function of operational and intrinsic dimensionless numbers. As a perspective, the analytical model could be embedded in a control loop and used for on-line diagnostics of PEM electrolysers.
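The decomposition provided by the analytical model, with activation, ohmic, and bubble-induced overpotentials dominating at low, medium, and high current density respectively, can be sketched as a cell-voltage model; all coefficients below are placeholders, not fitted values from the thesis.

```python
import numpy as np

def cell_voltage(i, E_rev=1.23, b=0.05, i0=1e-3, r_ohm=0.15,
                 k_bubble=0.08, i_c=1.5):
    """PEM electrolyser polarization sketch: reversible voltage plus
    Tafel activation, ohmic drop, and a bubble overpotential that grows
    as the anode channels approach slug flow (i near i_c)."""
    eta_act = b * np.log(i / i0)                                     # low i
    eta_ohm = r_ohm * i                                              # medium i
    eta_bub = -k_bubble * np.log(1.0 - np.clip(i / i_c, 0.0, 0.99))  # high i
    return E_rev + eta_act + eta_ohm + eta_bub

i = np.linspace(0.01, 1.4, 8)   # current density, A/cm^2
print(np.round(cell_voltage(i), 3))
```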
APA, Harvard, Vancouver, ISO, and other styles
34

Ko, Kyungduk. "Bayesian wavelet approaches for parameter estimation and change point detection in long memory processes." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/2804.

Full text
Abstract:
The main goal of this research is to estimate the model parameters and to detect multiple change points in the long-memory parameter of Gaussian ARFIMA(p, d, q) processes. Our approach is Bayesian, and inference is done in the wavelet domain. Long-memory processes have been widely used in many scientific fields such as economics, finance and computer science, and wavelets have a strong connection with these processes: their ability to simultaneously localize a process in the time and scale domains allows many dense variance-covariance matrices of such processes to be represented in sparse form. A wavelet-based Bayesian estimation procedure for the parameters of a Gaussian ARFIMA(p, d, q) process is proposed. This entails calculating the exact variance-covariance matrix of the given ARFIMA(p, d, q) process and transforming it into the wavelet domain using the two-dimensional discrete wavelet transform (DWT2). The Metropolis algorithm is used to sample the model parameters from the posterior distributions. Simulations with different values of the parameters and of the sample size are performed, and a real-data application to the U.S. GNP data is reported. Detection and estimation of multiple change points in the long-memory parameter are also investigated, using reversible jump MCMC for posterior inference. Performance is evaluated on simulated data and on the Nile River dataset.
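A skeletal random-walk Metropolis update for the long-memory parameter d, constrained to the stationary range (-0.5, 0.5); the log-posterior here is a placeholder standing in for the wavelet-domain likelihood the thesis constructs.

```python
import numpy as np

def metropolis_d(log_post, d0=0.25, step=0.05, n_iter=5000, seed=0):
    """Random-walk Metropolis sampler for the ARFIMA long-memory
    parameter d, restricted to the stationary range (-0.5, 0.5)."""
    rng = np.random.default_rng(seed)
    d, lp = d0, log_post(d0)
    samples = []
    for _ in range(n_iter):
        prop = d + step * rng.standard_normal()
        if -0.5 < prop < 0.5:
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
                d, lp = prop, lp_prop
        samples.append(d)
    return np.array(samples)

# Placeholder posterior: pretend the wavelet likelihood peaks at d = 0.3
draws = metropolis_d(lambda d: -0.5 * ((d - 0.3) / 0.05) ** 2)
print(draws.mean(), draws.std())
```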
APA, Harvard, Vancouver, ISO, and other styles
35

Kadio, Kadio Eric. "Education, justice sociale et développement en Afrique de l'Ouest : une analyse multidimensionnelle de l'articulation des référentiels internationaux aux stratégies nationales." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0537.

Full text
Abstract:
From the 1980s to the 2000s, the quality of education in sub-Saharan Africa gradually declined under multiple influences. Already characterized by low levels of internal efficiency, enrolment and learning outcomes, themselves marked by regional disparities, gender gaps and unequal access, the transformations of the education sector were accentuated by the growth of the school-age population. To deal with this situation, governments adopted a curriculum reform around the year 2000 through the Skills-Based Approach (SBA). Although attached to issues of social justice and learning quality, the implementation of the SBA has rarely been subject to rigorous evaluation in the economic literature. On this basis, this thesis sets out to analyze its transfer and impact by comparing the Ivorian and Senegalese experiences. To achieve this goal, the work relies on mixed methods. Chapters 1 and 2 successively identify the characteristics and particularities of each system, and then the determinants and main objectives of the reform. Chapter 3 analyzes its transfer, articulation and effectiveness in each educational system, while Chapter 4 assesses its impact on internal efficiency and learning outcomes through a multilevel model. Comparing the results from the two methods, we observe that the SBA does not explain the improvement in internal efficiency, which is rather the consequence of a revision of the inter-cycle transition rules introduced under the universal education policy. Concerning learning quality, the econometric analysis corroborates the qualitative assessment of the transfer and suggests a new approach to the quality of the educational product: it insists on paying particular attention to the way educational policy is conceived and disseminated.
APA, Harvard, Vancouver, ISO, and other styles
36

Ouhsaine, Lahoucine. "Modélisation et simulation de l’intégration des systèmes combinés PV-thermiques aux bâtiments basée sur une approche d’ordre réduit en représentation d’état." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0259.

Full text
Abstract:
This thesis develops a practical modelling and simulation approach for combined photovoltaic/thermal (PV/T) solar systems, based on a reduced-order state-space model. Building-integrated solar systems (thermal, electrical and combined) have received growing attention in recent decades, and raising their performance calls for numerical methods that support design while containing cost. Conventional CFD methods, well established in aerodynamics and air-flow studies, can predict the thermal and electrical behaviour of combined PV/T systems, but applying them directly to such mixed-energy problems can exceed memory capacity or demand prohibitive computation times. An alternative is to develop methods adapted to the physical problem at hand, treating its multi-physics character while keeping the data size reasonable and the computation time low. The modelling methodology reduces the dimensions of the governing equations: exploiting the symmetry of the system, the geometry is subdivided into control zones, each represented by a mean value governed by the dimensionless Biot (Bi) and Fourier (Fo) numbers. In dynamic operation the model provides the key output parameters, notably the electrical and thermal efficiencies and the circulation power of the heat-transfer fluid. The advantage of the proposed approach lies in the simplicity of the resulting model: a single algebraic system in state-space form gathers all the physical elements of the system in dynamic operation (time-varying boundary conditions), combining the fundamental variable, temperature, with both the control and the design parameters. The reduced-order model can moreover be embedded in the real-time operation of building-integrated PV/T (BIPV/T) systems to support the regulation and management of the energy flows involved. The model was validated by comparing numerical results with experiments: four configurations were studied in a linear framework, and the results show acceptable agreement, assessed in terms of the uncertainty between the model outputs and the experimental case. The nonlinear case was also addressed: since few published studies highlight nonlinear phenomena in complex BIPV/T systems, bilinear models were developed with the same strategy to capture the thermal behaviour of BIPV/T systems as faithfully as possible. A parametric study, expressed in dimensionless numbers, was then carried out to assess the sensitivity of the parameters with respect to energy performance. Parametric optimisation alone remains limited, however, because it treats the problem as single-objective, whereas the system exhibits a combined, multi-physics behaviour of contradictory nature; a multi-objective optimisation with three objective functions was therefore introduced, using the NSGA-II genetic algorithm. The originality of the method is to run the algorithm in dynamic operation so as to select the most suitable system design. The results can contribute to improving the design of BIPV/T systems and optimising their operation
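As an illustration of the general form of such a reduced-order model, here is a sketch of a two-node lumped thermal system written in state space, dT/dt = A·T + B·u, integrated over one day; the coefficients are arbitrary placeholders, not values from the thesis.

    # Two-node lumped thermal model in state-space form, the general shape
    # of a reduced-order PV/T model. Coefficients are arbitrary placeholders.
    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[-0.020,  0.015],    # PV node: losses + coupling to fluid
                  [ 0.010, -0.012]])   # fluid node
    B = np.array([[8.0e-4, 0.0],       # inputs: irradiance (W/m2), inlet T (C)
                  [0.0,    2.0e-3]])

    def rhs(t, T):
        irradiance = 800.0 * max(np.sin(np.pi * t / 43200.0), 0.0)  # day cycle
        u = np.array([irradiance, 20.0])
        return A @ T + B @ u

    sol = solve_ivp(rhs, (0.0, 86400.0), y0=[20.0, 20.0], max_step=60.0)
    print("peak PV-node temperature:", sol.y[0].max())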
APA, Harvard, Vancouver, ISO, and other styles
37

Hackenberg, Manuela [author], Gerhard H. [academic supervisor] [reviewer] Müller, and Karel N. [reviewer] van Dalsen. "A Coupled Integral Transform Method - Finite Element Method Approach to Model the Soil-Structure-Interaction / Manuela Hackenberg ; reviewers: Karel N. van Dalsen, Gerhard H. Müller ; supervisor: Gerhard H. Müller." München : Universitätsbibliothek der TU München, 2016. http://d-nb.info/1122286082/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lochon, Hippolyte. "Modélisation et simulation d'écoulements transitoires eau-vapeur en approche bifluide." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4726/document.

Full text
Abstract:
This thesis deals with the modelling and computation of transient steam-water two-phase flows. Liquid water flows are involved in many industrial facilities, and a second phase may appear in some transient situations. Modelling these flows can prove delicate because two strongly interacting physical phenomena, phase change and pressure-wave propagation, must both be taken into account. A statistical two-fluid approach, assuming no equilibrium between the phases, is used, leading to convection-source models similar to the Baer-Nunziato model. Different closure laws for these models are compared against steam-water transient experiments, including water hammers and the fast depressurisation of liquid water following a pipe break. The models are computed with a fractional-step method. A new convection scheme, robust, efficient and able to handle any equation of state, is used in the first step of the method. The second step is dedicated to the treatment of the source terms and requires several implicit schemes. Particular attention is paid to verifying every scheme involved, through convergence studies on test cases with analytical solutions. Building on existing work on the fast depressurisation of liquid water in a homogeneous approach, a new formulation of the mass transfer is also proposed. The models are validated through numerous comparisons between computational and experimental results
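The fractional-step (operator-splitting) pattern at the heart of the method can be shown on a toy scalar equation: an explicit convection step followed by an exact relaxation step standing in for the stiff source terms. This sketches the splitting idea only, not the thesis's two-fluid system.

    # Toy fractional-step method: upwind convection, then stiff relaxation.
    import numpy as np

    nx, L, a, tau, q_eq = 200, 1.0, 1.0, 1e-3, 0.5
    dx = L / nx
    dt = 0.5 * dx / a                  # CFL-limited convection time step
    q = np.where(np.linspace(0.0, L, nx) < 0.5, 1.0, 0.0)

    for _ in range(200):
        # Step 1: explicit upwind convection, dq/dt + a*dq/dx = 0
        q[1:] -= a * dt / dx * (q[1:] - q[:-1])
        # Step 2: source terms, dq/dt = (q_eq - q)/tau, integrated exactly
        q = q_eq + (q - q_eq) * np.exp(-dt / tau)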
APA, Harvard, Vancouver, ISO, and other styles
39

Smith, Quentin D. "AN EVOLUTIONARY APPROACH TO A COMMUNICATIONS INFRASTRUCTURE FOR INTEGRATED VOICE, VIDEO AND HIGH SPEED DATA FROM RANGE TO DESKTOP USING ATM." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608864.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
As technology progresses, we are faced with ever-increasing volumes and rates of raw and processed telemetry data, along with digitized high-resolution video and the less demanding areas of video conferencing, voice communications and general LAN-based data communications. The distribution of all this data has traditionally been accomplished by solutions designed for each particular data type. With the advent of Asynchronous Transfer Mode, or ATM, a single technology now exists for providing an integrated solution to distributing these diverse data types: an integrated set of switches, transmission equipment and fiber optics can provide multi-session connection speeds of 622 Megabits per second. ATM allows for the integration of many of the most widely used and emerging low-, medium- and high-speed communications standards, including SONET, FDDI, Broadband ISDN, Cell Relay, DS-3, Token Ring and Ethernet LANs. However, ATM is also very well suited to handling unique data formats and speeds, as is often the case with telemetry data. Additionally, ATM is the only data communications technology in recent times to be embraced by both the computer and telecommunications industries. Thus, ATM is a single solution for connectivity within a test center, across a test range, or between ranges. ATM can be implemented in an evolutionary manner as needs develop, so the rate of capital investment can be gradual and older technologies can be replaced slowly as they become the communications bottlenecks; the success of this evolution, however, requires some planning now. This paper provides an overview of ATM and its application to test ranges and telemetry distribution. A road map is laid out which can guide the evolutionary changeover from today's technologies to a full ATM communications infrastructure. Special applications, such as the support of high-performance multimedia workstations, are presented.
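The fixed cell size is what makes this integration possible: every ATM cell is 53 bytes, a 5-byte header plus a 48-byte payload, regardless of the traffic it carries. A sketch of segmenting an arbitrary byte stream into cells follows; the header is simplified to a bare VPI/VCI pair rather than the full bit-level layout.

    # Segment a byte stream into 53-byte ATM cells (5-byte header + 48-byte
    # payload). Header fields are simplified for illustration.
    CELL_PAYLOAD = 48

    def segment(data: bytes, vpi: int, vci: int):
        header = vpi.to_bytes(1, "big") + vci.to_bytes(2, "big") + b"\x00\x00"
        for i in range(0, len(data), CELL_PAYLOAD):
            chunk = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
            yield header + chunk       # 5 + 48 = 53 bytes per cell

    cells = list(segment(b"telemetry frame " * 10, vpi=1, vci=42))
    print(len(cells), "cells of", len(cells[0]), "bytes")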
APA, Harvard, Vancouver, ISO, and other styles
40

Khoueiry, Nicole. "Study of granular platforms behaviour over soft subgrade reinforced by geosynthetics : Experimental and numerical approaches." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI027.

Full text
Abstract:
Geosynthetics have been used since the 1970s to reinforce granular platforms resting on soft subgrades in unpaved-road applications. The complexity of the mechanisms involved and the diversity of reinforcement products mean that these reinforced platforms still need investigation. A laboratory test allowing full-scale platforms to be tested was therefore developed: an unpaved granular platform resting on a soft subgrade was reproduced, with an installation protocol designed to ensure the subgrade's homogeneity and the repeatability of the tests, and dedicated instrumentation to collect as much data as possible on load transfer and on the behaviour of the geogrids. Three geogrid types were tested: one extruded geogrid and two knitted geogrids of different stiffness. After numerous feasibility trials, ten tests were carried out under cyclic loading on a circular plate, with a maximum load of 40 kN corresponding to a maximum applied pressure of 560 kPa, applied for 10,000 cycles at a frequency of 0.77 Hz. The platform, built of 600 mm of artificial subgrade supporting a base course, was placed in a test pit 1.8 m wide, 1.9 m long and 1.1 m high, and two base-course thicknesses, 350 and 220 mm, were tested; quality-control tests were performed before each test to verify the homogeneity and properties of the soil layers. Once the protocol had been validated under plate loading, trafficking tests were performed with the Simulator Accelerator of Traffic (SAT), a machine designed and built specifically for this application that applies a moving wheel load over an effective length of 2 m at a velocity of 4 km/h. For these tests the platform was placed in the test pit lengthened to 5 m and subjected to two solicitations, plate loading and traffic loading, each area being instrumented. The tests supported several observations on the behaviour of granular platforms, the soft subgrade and the efficiency of the reinforcement, and showed in particular that traffic loading is far more damaging than plate loading. In parallel, a numerical model based on the finite-difference method was developed with FLAC 3D; it simulates the circular plate-load test in the same platform configuration under monotonic loading, and its predictions were compared with the first monotonic load applied to the rigid plate experimentally
APA, Harvard, Vancouver, ISO, and other styles
41

Viana, do Espírito Santo Ilísio. "Inspection automatisée d’assemblages mécaniques aéronautiques par vision artificielle : une approche exploitant le modèle CAO." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2016. http://www.theses.fr/2016EMAC0022/document.

Full text
Abstract:
This work deals with the automated inspection of aeronautical mechanical assemblies using computer vision: the goal is to decide whether an assembly has been put together correctly, i.e. whether it complies with the specifications. The work was conducted within two industrial projects: CAAMVis, in which the inspection sensor is a dual stereoscopic head carried by a robot, and Lynx©, in which the inspection sensor is a single Pan/Tilt/Zoom camera (monocular vision). The two projects share the objective of exploiting the CAD model of the assembly (which provides the desired reference state) as much as possible in an inspection task based on the analysis of the 2D images provided by the sensor. The proposed method compares a 2D image acquired by the sensor (the "real image") with a synthetic 2D image generated from the CAD model. The real and synthetic images are segmented and then decomposed into sets of 2D primitives. These primitives are matched using concepts from graph theory, notably a bipartite graph that guarantees the uniqueness constraint required in such a matching process. The matching result is used to decide whether the assembly is conforming or not. The proposed approach was validated on both simulation data and real data acquired within the above-mentioned projects
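The uniqueness constraint in the matching step can be illustrated with a standard assignment solver: primitives from the real and synthetic images are paired one-to-one by minimising a cost matrix. The sketch below uses a plain geometric cost and the Hungarian algorithm, not the thesis's entropic dissimilarity or its exact graph formulation.

    # One-to-one matching of 2D primitives under a uniqueness constraint.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    real = np.array([[10, 12], [40, 38], [70, 75]])    # centroids (pixels)
    synth = np.array([[11, 11], [72, 74], [39, 40]])   # from the CAD render

    cost = np.linalg.norm(real[:, None, :] - synth[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)           # Hungarian algorithm
    for r, c in zip(rows, cols):
        verdict = "conforming" if cost[r, c] < 5.0 else "non-conforming"
        print(f"real primitive {r} <-> synthetic {c}: {verdict}")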
APA, Harvard, Vancouver, ISO, and other styles
42

El, Farissi Anass. "Prédiction de la durée d'utilisation des ouvrages en béton armé par une approche performantielle dans le cas de la corrosion induite par la carbonatation ou l'attaque des ions chlorure." Thesis, La Rochelle, 2020. http://www.theses.fr/2020LAROS025.

Full text
Abstract:
Steel reinforcement corrosion is the major cause of failure in reinforced concrete structures. This electrochemical process is triggered either by the presence of a sufficient quantity of chloride ions at the reinforcement or by carbonation of the concrete cover (the action of CO2). This thesis aims to develop engineering models, usable within a performance-based approach, for predicting the service life of reinforced concrete structures subject to chloride- or carbonation-induced corrosion, covering both the initiation and the propagation of corrosion. Three models are developed: a chloride-ingress model, a carbonation model and a corrosion model, which together estimate the corrosion initiation time and the propagation time. The models take into account factors related to the material (i.e. durability indicators), execution, environment and geometry. Their development rests on the exploitation of several literature databases on ageing structures and concrete test specimens (BHP-2000, Perfdub, etc.). These data made it possible to improve the predictive capacity of existing models (chloride ingress) and to develop new ones (carbonation and corrosion models)
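The classic starting point for such chloride-ingress models is Fick's second law, whose constant-surface-concentration solution is C(x, t) = Cs·erfc(x / (2·sqrt(D·t))); corrosion initiation is then the time at which the concentration at the rebar cover reaches a critical threshold. The thesis refines this picture considerably; the numbers below are purely illustrative.

    # Corrosion initiation time from the textbook erfc ingress profile.
    from math import erfc, sqrt

    Cs, D = 0.5, 1e-12         # surface content (% wt), diffusivity (m^2/s)
    cover, Ccrit = 0.05, 0.06  # cover depth (m), critical content (% wt)
    YEAR = 3600.0 * 24 * 365

    def C(x, t):
        return Cs * erfc(x / (2.0 * sqrt(D * t)))

    t = YEAR
    while C(cover, t) < Ccrit:   # step forward until the threshold is hit
        t += YEAR
    print("initiation after ~", round(t / YEAR), "years")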
APA, Harvard, Vancouver, ISO, and other styles
43

Yacoub, Aznam. "Une approche de vérification formelle et de simulation pour les systèmes à événements : application à PROMELA." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4373/document.

Full text
Abstract:
Nowadays, building reliable software and systems is increasingly difficult. New technologies involve ever more interactions between complex components, whose analysis and understanding have become arduous. To overcome this problem, the fields of verification and validation have made significant progress with the emergence of new methods, split into two broad families: formal verification and simulation. Long considered opposites, these two families have recently been the subject of work attempting to bring them together. In this context, this thesis proposes a new approach for integrating discrete-event simulation into formal methods. The objective is to improve existing model-checking tools by combining them with simulation, enabling them to detect errors they could not previously find, in particular on timed systems. This approach led to the development of a new formal language, DEv-PROMELA. This language, built from PROMELA and the DEVS formalism, is halfway between a verifiable formal specification language and a simulation formalism. By combining traditional model checking with discrete-event simulation on models expressed in this language, it becomes possible to detect and understand malfunctions that neither model checking alone nor simulation alone would have revealed. This result is illustrated through the various examples studied in this work
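The discrete-event half of such a combination reduces, at its core, to a time-ordered event queue processed in timestamp order. A minimal kernel of that kind is sketched below; it illustrates the simulation mechanism only and is in no way the DEVS formalism or DEv-PROMELA itself.

    # Minimal discrete-event simulation kernel: a priority queue of events.
    import heapq

    events = []  # (time, sequence number, action) triples

    def schedule(t, seq, action):
        heapq.heappush(events, (t, seq, action))

    def run(until):
        while events and events[0][0] <= until:
            t, _, action = heapq.heappop(events)
            action(t)

    schedule(1.0, 0, lambda t: print(f"{t:.1f}: request sent"))
    schedule(3.5, 1, lambda t: print(f"{t:.1f}: timeout fires"))  # timed behaviour
    run(until=10.0)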
APA, Harvard, Vancouver, ISO, and other styles
44

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, a main research focus in the automotive industry is finding solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of tracking performance is often done in staged traffic scenarios, where additional sensors mounted on the vehicles are used to obtain their true positions and velocities; the difficulty of evaluating tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing instead of filtering, to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects; a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states, obtained when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving tracking performance with non-causal algorithms.
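The causal/non-causal contrast is easy to reproduce on a toy problem: a forward Kalman filter over a recorded sequence, followed by a Rauch-Tung-Striebel backward pass. The sketch below uses a 1-D constant-velocity model with position-only measurements; it illustrates the smoothing idea, not the thesis's multi-sensor setup.

    # Forward Kalman filter + RTS smoother on a constant-velocity toy model.
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
    H = np.array([[1.0, 0.0]])              # position-only measurements
    Q, R = 0.01 * np.eye(2), np.array([[0.5]])

    rng = np.random.default_rng(0)
    truth = np.cumsum(np.full(100, dt))                   # unit velocity
    zs = truth + rng.normal(0.0, np.sqrt(R[0, 0]), 100)

    x, P = np.zeros(2), np.eye(2)
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    for z in zs:                            # causal (online) pass
        xp, Pp = F @ x, F @ P @ F.T + Q     # predict
        xs_p.append(xp); Ps_p.append(Pp)
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (z - H @ xp)           # update
        P = (np.eye(2) - K @ H) @ Pp
        xs_f.append(x); Ps_f.append(P)

    xs_s = [xs_f[-1]]                       # non-causal (smoothing) pass
    for k in range(len(zs) - 2, -1, -1):
        G = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s.insert(0, xs_f[k] + G @ (xs_s[0] - xs_p[k + 1]))

    err_f = np.sqrt(np.mean((np.array(xs_f)[:, 0] - truth) ** 2))
    err_s = np.sqrt(np.mean((np.array(xs_s)[:, 0] - truth) ** 2))
    print(f"filter RMSE {err_f:.3f} vs smoother RMSE {err_s:.3f}")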
APA, Harvard, Vancouver, ISO, and other styles
45

Mallangi, Siva Sai Reddy. "Low-Power Policies Based on DVFS for the MUSEIC v2 System-on-Chip." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229443.

Full text
Abstract:
Multifunctional wearable health-monitoring devices are quite prominent these days. Usually these devices are battery-operated and consequently limited by their battery life (from a few hours to a few weeks depending on the application). Of late, it was realized that these devices, which currently operate at a fixed voltage and frequency, are capable of operating at multiple voltages and frequencies. By switching to lower voltages and frequencies when the power requirements allow, they can achieve substantial energy savings. Dynamic Voltage and Frequency Scaling (DVFS) techniques have proven handy in this situation for an efficient trade-off between energy and timely behavior. Within imec, wearable devices make use of the in-house developed MUSEIC v2 (Multi Sensor Integrated Circuit version 2.0). This system is optimized for efficient and accurate collection, processing, and transfer of data from multiple (health) sensors, but has limited means of controlling the voltage and frequency dynamically. In this thesis we explore how traditional DVFS techniques can be applied to the MUSEIC v2. Experiments were conducted to find the optimal power modes for efficient operation and to scale the supply voltage and frequency up and down; a transition analysis was also carried out to account for the overhead incurred when switching voltage and frequency. Real-time and non-real-time benchmarks were implemented based on these techniques, and their performance results were obtained and analyzed. In this process, several state-of-the-art scheduling algorithms and scaling techniques were reviewed to identify a suitable technique. Using our proposed scaling technique, we achieved an average power reduction of 86.95% compared with the conventional operation of the MUSEIC v2 chip's processor at a fixed voltage and frequency. Techniques including light-sleep and deep-sleep modes were also studied and implemented, testing the system's capability to accommodate Dynamic Power Management (DPM) techniques that can achieve greater benefits. A novel approach for implementing the deep-sleep mechanism was also proposed and found to obtain up to 71.54% power savings compared with a traditional implementation of deep-sleep mode.
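The leverage DVFS offers follows from the dynamic-power relation P ≈ C·V²·f: lowering voltage and frequency together cuts power superlinearly. A back-of-envelope sketch with hypothetical operating points (not measured MUSEIC v2 figures) follows.

    # Dynamic CMOS power, P = C_eff * V^2 * f, at two hypothetical points.
    def dynamic_power(c_eff, v, f):
        return c_eff * v * v * f

    high = dynamic_power(1e-9, 1.2, 100e6)   # 1.2 V @ 100 MHz
    low = dynamic_power(1e-9, 0.8, 25e6)     # 0.8 V @ 25 MHz
    print(f"estimated saving: {100 * (1 - low / high):.1f} %")  # ~88.9 %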
APA, Harvard, Vancouver, ISO, and other styles
46

Krusevac, Zarko. "Model-based approach to reliable information transfer over time-varying communication channels." PhD thesis, 2007. http://hdl.handle.net/1885/148225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Shuo-Hong (王碩鴻). "An Approach to Transfer an EPC based Business Process into Hierarchical Function Model." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/9j58zu.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Industrial Management
Academic year 95 (2006–07)
As business processes grow more complicated, offering an effective and efficient process modeling and analysis tool becomes an important issue. Different classes of managers and users also need different breadth and depth of process detail: a staff member who manages the order process needs the detailed order procedure, whereas a policymaker cares about the general shape of the process, which helps him make decisions quickly. At present, most process modeling tools apply hierarchical modeling concepts to construct a process, starting from a general idea and expanding it step by step into a detailed workflow; this builds up a realizable, specific workflow from simple to complicated. This research proposes an approach that lets users simplify a workflow in their own way: a user can hierarchically simplify the workflow step by step and then transform the complicated workflow into a function tree model. Using the function tree model, people can grasp the analyzed process quickly and clearly. To reach this goal, the EPC Tool software is used to detect structural errors in an EPC workflow, and rules are proposed to correct them. Using the concept of corresponding control elements, a workflow is divided into blocks; the final step locates these blocks and transforms the workflow into a function tree model.
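The target structure of such a transformation can be pictured as a tree whose inner nodes are control blocks and whose leaves are functions. A toy sketch follows; the node labels and layout are invented for illustration and do not reproduce the EPC Tool's data model.

    # Toy function tree: inner nodes are control blocks, leaves are functions.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        label: str
        children: list = field(default_factory=list)

    order_process = Node("Process order", [
        Node("SEQ", [Node("Check stock"),
                     Node("AND-split", [Node("Pick items"),
                                        Node("Print invoice")]),
                     Node("Ship order")])])

    def show(node, depth=0):
        print("  " * depth + node.label)
        for child in node.children:
            show(child, depth + 1)

    show(order_process)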
APA, Harvard, Vancouver, ISO, and other styles
48

Wafa, Zeina. "A latent-segmentation based approach to investigating the spatial transferability of activity-travel models." Thesis, 2014. http://hdl.handle.net/2152/28098.

Full text
Abstract:
Spatial transferability of travel demand models has been an issue of considerable interest, particularly for small and medium-sized planning areas that often lack the resources and staff time to collect large-scale travel survey data and estimate model components native to the region. With the advent of more sophisticated microsimulation-based activity-travel demand models, interest in spatial transferability has surged, as smaller metropolitan planning organizations seek to take advantage of emerging modeling methods within the limited resources they can marshal. Traditional approaches to identifying geographical contexts that may borrow and transfer models between one another involve the a priori, exogenous identification of a set of variables used to characterize the similarity between regions; this ad hoc procedure presents considerable challenges, as it is difficult to identify the most appropriate criteria in advance. To address this issue, this thesis proposes a latent-segmentation approach in which the most appropriate criteria for identifying areas with similar profiles are determined endogenously within the model estimation phase, customized for every model type. The end products are a set of optimal similarity measures linking regions to one another and a fully transferred model, segmented to account for heterogeneity in the population. The methodology is demonstrated, and its efficacy established, through a case study that uses the National Household Travel Survey (NHTS) dataset for information on the weekday activities that unemployed individuals in 9 regions of California and Florida engage in. A multiple discrete-continuous extreme value (MDCEV) model is developed that captures both the discrete nature of activity selection and the continuous nature of activity participation. The estimated model is then applied to the Austin–San Marcos MSA, a context withheld from the original estimation, in order to assess its performance, and the segmented model is compared with models transferred from regions similar to the local region in only one dimension. The methodology is found to offer a robust mechanism for identifying latent segments and establishing criteria for transferring models between areas.
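The latent-segmentation idea can be shown in miniature: each region receives segment-membership probabilities from a multinomial logit over its descriptors, and segment-specific models are weighted by those probabilities during estimation. The descriptors and coefficients below are invented; the actual joint MDCEV estimation is not reproduced.

    # Segment membership probabilities via a multinomial logit (illustrative).
    import numpy as np

    regions = np.array([[0.7, 3.2],   # hypothetical [transit share, density]
                        [0.2, 0.8],
                        [0.5, 2.1]])
    gamma = np.array([[0.0, 0.0],     # segment 1 (base)
                      [1.5, -0.4]])   # segment 2

    util = regions @ gamma.T                              # regions x segments
    probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
    print(np.round(probs, 3))         # membership weights used in estimation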
APA, Harvard, Vancouver, ISO, and other styles
49

Dudalev, Mikhail. "Computational approach to scaling and criticality in planar Ising models." PhD thesis, 2012. http://hdl.handle.net/1885/9860.

Full text
Abstract:
In this thesis, we study the critical behaviour of the two-dimensional Ising model on the regular lattices. Using the numerical solution of the model on the square, triangular and honeycomb lattices we compute the universal scaling function, which turns out to be identical on each of the lattices, in addition to being identical to the scaling function of the Ising Field Theory, computed previously by Fonseca and Zamolodchikov. To cope with the lattice contributions we carefully examined series expansions of the lattice free energy derivatives. We included the non-scaling regular part of the free energy as well as non-linear Aharony-Fisher scaling fields, which all have non-universal expansions. Using as many of the previously known exact results as possible, we were able to fit the unknown coefficients of the scaling function expansion and obtain some non-universal coefficients. In contrast to the IFT approach of Fonseca and Zamolodchikov, all coefficients were obtained independently from separate datasets, without using dispersion relations. These results show that the Scaling and Universality hypotheses, with the help of the Aharony-Fisher corrections, hold on the lattice to very high precision and so there should be no doubt of their validity. For all numerical computations we used the Corner Transfer Matrix Renormalisation Group (CTMRG) algorithm, introduced by Nishino and Okunishi. The algorithm combines Baxter's variational approach (which gives Corner Transfer Matrix (CTM) equations), and White's Density Matrix Renormalisation Group (DMRG) method to solve the CTM equations efficiently. It was shown that given sufficient distance from the critical point, the algorithmic precision is exceptionally good and is unlikely to be exceeded with any other general algorithm using the same amount of numerical computations. While performing tests we also confirmed several critical parameters of the three-state Ising and Blume-Capel models, although no extra precision was gained, compared to previous results from other methods. In addition to the results presented here, we produced an efficient and reusable implementation of the CTMRG algorithm, which after minor modifications could be used for a variety of lattice models, such as the Kashiwara-Miwa and the chiral Potts models.
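CTMRG itself is too long to sketch here, but the transfer-matrix machinery it builds on fits in a few lines: for a zero-field Ising strip of N spins with periodic boundary conditions, the largest eigenvalue of the row-to-row transfer matrix gives the free energy per site, which should approach Onsager's bulk value (about -2.1097 at the critical coupling, with J = 1) as N grows.

    # Row-to-row transfer matrix for an N-spin Ising strip (J = 1, h = 0).
    import numpy as np
    from itertools import product

    N = 8                       # strip width: the matrix is 2**N x 2**N
    beta = 0.4406868            # near the square-lattice critical coupling

    def row_energy(a, b):
        vert = sum(a[i] * b[i] for i in range(N))
        horiz = 0.5 * sum(a[i] * a[(i + 1) % N] + b[i] * b[(i + 1) % N]
                          for i in range(N))    # split so T is symmetric
        return -(vert + horiz)

    states = list(product((-1, 1), repeat=N))
    T = np.array([[np.exp(-beta * row_energy(a, b)) for b in states]
                  for a in states])
    lam = np.linalg.eigvalsh(T).max()   # real symmetric by construction
    print("free energy per site:", -np.log(lam) / (beta * N))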
APA, Harvard, Vancouver, ISO, and other styles
50

Dorigo, Wouter [Verfasser]. "Retrieving canopy variables by radiative transfer model inversion : a regional approach for imaging spectrometer data / Wouter Dorigo." 2008. http://d-nb.info/988116782/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles