Theses on the topic « Large-Scales »

To see the other types of publications on this topic, follow this link: Large-Scales.



Consult the 50 best theses for your research on the topic « Large-Scales ».


You can also download the full text of the scholarly publication in PDF format and consult its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Drinkwater, Michael John. « Quasar clustering on large scales ». Thesis, University of Cambridge, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.330222.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
2

Frith, William James. « The clustering of galaxies on large scales ». Thesis, Durham University, 2005. http://etheses.dur.ac.uk/2390/.

Texte intégral
Résumé :
We investigate the local large-scale structure of the Universe, addressing various possible issues confronting the ΛCDM paradigm. Primarily, we investigate the clustering statistics of the newly-completed 2 Micron All Sky Survey (2MASS), the largest all-sky galaxy survey to date. The 2MASS galaxy number counts over the ≈4000 deg² APM survey area are found to be low compared to predictions but are in good agreement with previous optical results. Surprisingly, the number counts over almost the entire sky (|b| > 20°, ≈27000 deg²) are also deficient compared to our predictions. These results do not appear to be significantly affected by systematic errors. Assuming a ΛCDM cosmology, the observed deficiencies in the APM survey area and for |b| > 20° represent ≈2.5σ and ≈4.0σ fluctuations in the local galaxy distribution respectively. These results are therefore potentially at odds with the form of clustering expected on large scales. We examine the form of galaxy clustering on scales of up to 1000 h⁻¹ Mpc using the 2MASS angular power spectrum. We find a 3σ excess over mock ΛCDM results; however, this is not enough to account for the observed number counts mentioned above. We determine the implied cosmological constraints; the 2MASS galaxy angular power spectrum is, in fact, in strong support of ΛCDM, with a measured power spectrum shape of Γ_eff = 0.14 ± 0.02. In addition, we determine a K_s-band galaxy bias of b_K = 1.39 ± 0.12. We determine high-order correlation functions of the 2MASS galaxy sample to extremely large scales (up to 1000 h⁻¹ Mpc). The results are in strong support of Gaussian initial conditions and hierarchical clustering; we reject strong primordial non-Gaussianity at the ≈2.5σ confidence level. Unlike all previous such analyses, our results are relatively robust to the removal of large superclusters from the sample. We also measure a K_s-band quadratic galaxy bias of c_2 = 0.57 ± 0.33. This result differs significantly from previous negative constraints; we discuss a possible explanation for this apparent discrepancy. Finally, we examine the extent of possible Sunyaev-Zeldovich contamination in the first-year Wilkinson Microwave Anisotropy Probe (WMAP) data using various foreground galaxy cluster catalogues. We find evidence suggesting that the associated temperature decrements extend to > 1° scales. Such a result would indicate a much higher baryon density than the concordance value; in addition, CMB power spectrum fits and the associated cosmological constraints would also be compromised.
Styles APA, Harvard, Vancouver, ISO, etc.
3

Carmona, Loaiza Juan Manuel. « AGN fuelling : bridging large and small scales ». Doctoral thesis, SISSA, 2015. http://hdl.handle.net/20.500.11767/3887.

Texte intégral
Résumé :
One of the biggest challenges in understanding the fuelling of supermassive black holes in active galactic nuclei (AGN) lies not in accounting for the source of fuel, as a galaxy can comfortably supply the required mass budget, but in its actual delivery. While a clear picture has been developed for the large scale (~ kpc) down to the intermediate one (~ 100 pc), and for the smallest scales (~ 0.1 pc) where an accretion disc likely forms, a bridge that has proven difficult to build is that between ~ 100 pc and ~ 0.1 pc. It is feared that gas at these scales might still retain enough angular momentum and settle into a larger-scale disc with very low or no inflow to form or replenish the inner accretion disc (on ~ 0.01 pc scales). In this Thesis, we present numerical simulations in which a rotating gaseous shell flows towards an SMBH because of its lack of rotational support. As the inflow proceeds, gas from the shell impacts an already present nuclear (~ 10 pc) disc. The cancellation of angular momentum and redistribution of gas, due to the misalignment between the angular momentum of the shell and that of the disc, is studied in this scenario. The underlying hypothesis is that even if transport of angular momentum at these scales may be inefficient, the interaction of an inflow with a nuclear disc would still provide a mechanism to bring mass inwards because of the cancellation of angular momentum. We quantify the amount of gas such a cancellation would bring to the central parsec under different circumstances: co- and counter-rotation between the disc and the shell, and the presence or absence of an initial turbulent kick; we also discuss the impact of self-gravity in our simulations. The scenario we study is highly idealized and designed to capture the specific outcomes produced by the proposed mechanism. We find that angular momentum cancellation and redistribution via hydrodynamical shocks leads to sub-pc inflows enhanced by more than 2-3 orders of magnitude. In all of our simulations, the gas inflow rate across the inner parsec is higher than in the absence of the interaction. Gas mixing changes the orientation of the nuclear disc as the interaction proceeds, until warped discs or nested misaligned rings form as relic structures. The amount of inflow depends mainly on the spin orientation of the shell relative to the disc, while the relic warped disc structure depends mostly on the turbulent kick given to the gaseous shell in the initial conditions. The main conclusion of this Thesis is that the actual cancellation of angular momentum within galactic nuclei can have a significant impact on the feeding of supermassive black holes. Such cancellation by inflow-disc interactions would leave warped 10-20 pc discs as remnants.
Styles APA, Harvard, Vancouver, ISO, etc.
4

Feldman, Richard. « Toward a theory of abundance at large spatial scales ». Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104707.

Texte intégral
Résumé :
Fundamentally, ecology is the study of the diversity, distribution, and abundance of organisms. Recent advances in technology coupled with expanding research goals have led to studies of how the first two of these properties vary over large spatial scales. There have been relatively few cases documenting large-scale spatial variation in abundance and very little theoretical development explaining such variation. Yet a general pattern exists: a species is abundant in very few places and rare in most places in its range. Current theory suggests that such a pattern of abundance reflects underlying spatial variation in the environment. In this thesis, I used observational, experimental, theoretical, and statistical approaches to test which type of environmental variation, and which interplay of environmental variation and interspecific competition, generates spatial variation in abundance. For two species of hummingbirds, I found that different environmental factors were related to abundance than to occupancy. Interspecific competition altered spatial variation in abundance in different ways depending on the niche differences among competing species. Interspecific competition also mediated the effect of the environment on abundance by influencing the relative costs and benefits of different hummingbird foraging strategies. I also found that abundance data can be used to predict species' responses to climate change because statistical models minimize the noise inherent in abundance datasets. Despite my findings, a theory of abundance is still in its infancy. It is not known whether there is generality in the number and identity of large-scale environmental gradients that affect abundance. Similarly, more work needs to be done connecting the small-scale interplay between environment, species traits, behaviour, and competition to a broader geographic context. There are also dispersal-based and non-niche-based approaches to spatial variation in abundance that need to be reconciled with current theory. In this way, a more general theory relating macroevolutionary dynamics to macroecological patterns can be developed.
L'écologie est l'étude de la diversité, des distributions et des abondances des organismes vivants. Les avancées technologiques récentes couplées à une expansion des objets de recherche ont permis à une étude approfondie de la variation de ces deux premières propriétés sur de très grandes échelles spatiales. Les variations en abondance sont, quant à elles, peu documentées aux grandes échelles spatiales et les développements théoriques correspondant restent limités. Il existe pourtant un pattern prévalent : une espèce donnée est généralement abondante dans une partie extrêmement réduite de sa zone géographique et rare partout ailleurs. Cette observation est aujourd'hui communément expliquée par une variation environnementale sous-jacente. Cette thèse s'appuie sur des approches à la fois empiriques et expérimentales, statistiques et théoriques pour tester le type de variation environnementale ainsi que les interactions entre environnement et compétition interspécifique pouvant générer les variations spatiales en abondances observées. Il est montré que présence-absence et abondance sont affectées par des facteurs environnementaux distincts. Il apparaît en outre que l'effet de la compétition interspécifique dépend des différences de niches entre espèces et module l'impact de l'environnement sur l'abondance en modifiant des coûts et bénéfices relatifs des différentes stratégies d'acquisition des ressources. Finalement, la possibilité de prédire les réponses aux changements climatiques grâce aux données d'abondance et à des modèles statistiques minimisant le bruit inhérent à ce type de données est démontrée. Pour autant, une véritable théorie des distributions d'abondance reste à développer. Le nombre, et a fortiori l'identité, des gradients environnements affectant les abondances à grande échelle spatiale sont encore mal connus. Un effort de recherche considérable est ainsi nécessaire pour améliorer la compréhension du lien entre phénomènes locaux, dont l'interaction entre environnement, traits, comportement et compétition, et patterns à grandes échelles. Par ailleurs, l'unification entre approches basées sur la dispersion, négligeant les différences de niches, avec la théorie actuelle doit encore être accomplie pour qu'une véritable théorie générale des dynamiques macro-évolutive et patterns macro-écologiques puisse voir le jour.
Styles APA, Harvard, Vancouver, ISO, etc.
5

Javanmardi, Behnam [Verfasser]. « Cosmological Investigations On Large And Small Scales / Behnam Javanmardi ». Bonn : Universitäts- und Landesbibliothek Bonn, 2017. http://d-nb.info/1130704599/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
6

Robinson, Mark. « Accessing large length and time scales with density functional theory ». Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609128.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
7

Pujol, Vallribera Arnau. « Cosmology with galaxy surveys : how galaxies trace mass at large scales ». Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/385515.

Texte intégral
Résumé :
Els cartografiats galàctics són una eina important per la cosmologia. No obstant això, la majoria de la matèria està en forma de matèria fosca, que no interacciona amb la llum. Per tant, les galàxies que observem des dels nostres telescopis són una petita fracció de la matèria total de l'univers. Per això és necessari entendre la connexió entre galàxies i matèria fosca per tal d'inferir la distribució de tota la matèria de l'univers a partir dels cartografiats galàctics. Les simulacions són una eina important per a predir la formació i evolució de les estructures de matèria fosca i galàxies. Les simulacions permeten estudiar l'impacte de diferents cosmologies i models de formació de galàxies en les estructures a gran escala finals que formen les galàxies i la matèria. A gran escala, les fluctuacions de densitat de galàxies a gran escala són proporcionals a les fluctuacions de matèria per un factor anomenat bias galàctic. Aquest factor permet inferir la distribució de matèria total a partir de la distribució de galàxies, i per tant el coneixement del bias galàctic té un impacte molt important en els nostres estudis cosmològics. Aquesta tesi doctoral està focalitzada en l'estudi del bias galàctic i el bias d'halos a grans escales. Hi ha diferents tècniques per a estudiar el bias galàctic, en aquesta tesi ens focalitzem en dues d'elles. La primera tècnica utilitza l'anomendat Halo Occupation Distribution (HOD), que assumeix que les galàxies poblen halos de matèria fosca només segons la massa dels halos. No obstant això, aquesta hipòtesi no sempre és suficientment precisa. Utilitzem la simulació Millennium per a estudiar el bias galàctic i d'halos, la dependència en la massa del bias d'halos i els seus efectes en les prediccions del bias galàctic. Trobem que l'ocupació de galàxies en halos no depèn només de la seva massa, i assumir això causa un error en la predicció del bias galàctic. També estudiem la dependència del bias d'halos en l'ambient, i mostrem que l'ambient restringeix molt més el bias que la massa. Quan un conjunt de galàxies és seleccionat per propietats que estan correlacionades amb l'ambient, l'assumpció de que el bias d'halos només depèn de la massa falla. Mostrem que en aquests casos utilitzant la dependència en l'ambient del bias d'halos produeix una predicció del bias galàctic molt més bona. Una altra tècnica per estudiar el bias galàctic és utilitzant Weak gravitational lensing per mesurar directament la massa en observacions. Weak lensing és el camp que estudia les distorsions lleus en les imatges de les galàxies degut a la deflexió de la llum produïda per la distribució de matèria del davant de la galàxia. Aquestes distorsions permeten inferir la distribució a gran escala de la matèria total. Desenvolupem i estudiem un nou mètode per mesurar el bias galàctic a partir de la combinació dels mapes de weak lensing i el camp de distribució de galàxies. El mètode consisteix en reconstruïr el mapa de weak lensing a partir de la distribució de les galàxies de davant del mapa. El bias és mesurat a partir de les correlacions entre el mapa de weak lensing reconstruït i el real. Testegem diferents sistemàtics del mètode i estudiem en quins règims el mètode és consistent amb altres mètodes per mesurar el bias lineal. Trobem que podem mesurar el bias galàctic utilitzant aquesta tècnica. Aquest mètode és un bon complement d'altres mètodes per mesurar el bias galàctic, perquè utilitza assumpcions diferents. 
Juntes, les diferents tècniques per mesurar el bias galàctic permetran restringir millor el bias galàctic i la cosmologia en els futurs cartografiats galàctics.
Galaxy surveys are an important tool for cosmology. The distribution of galaxies allows us to study the formation of structures and their evolution, which are needed ingredients to study the evolution and content of the Universe. However, most of the matter is made of dark matter, which gravitates but does not interact with light. Hence, the galaxies that we observe from our telescopes only represent a small fraction of the total mass of the Universe. Because of this, we need to understand the connection between galaxies and dark matter in order to infer the total mass distribution of the Universe from galaxy surveys. Simulations are an important tool to predict the structure formation and evolution of dark matter and galaxy formation. Simulations allow us to study the impact of different cosmologies and galaxy formation models on the final large scale structures that galaxies and matter form. Simulations are also useful to calibrate our tools before applying them to real surveys. At large scales, galaxies trace the matter distribution. In particular, the galaxy density fluctuations at large scales are proportional to the underlying matter fluctuations by a factor that is called galaxy bias. This factor allows us to infer the total matter distribution from the distribution of galaxies, and hence knowledge of galaxy bias has a very important impact on our cosmological studies. This PhD thesis is focused on the study of galaxy and halo bias at large scales. There are several techniques to study galaxy bias, here we focus on two of them. The first technique is the Halo Occupation Distribution (HOD) model, which assumes that galaxies populate dark matter haloes depending only on the halo mass. With this hypothesis and a halo bias model, we can relate galaxy clustering with matter clustering and halo occupation. However, this hypothesis is not always accurate enough. We use the Millennium Simulation to study galaxy and halo bias, the halo mass dependence of halo bias, and its effects on galaxy bias prediction. We find that the halo occupation of galaxies does not only depend on mass, and assuming so causes an error in the galaxy bias predictions. We also study the environmental dependence of halo bias, and we show that environment constrains bias much more strongly than mass does. When a galaxy sample is selected by properties that are correlated with environment, the assumption that halo bias only depends on mass fails. We show that in these cases using the environmental dependence of halo bias produces a much better prediction of galaxy bias. Another technique to study galaxy bias is by using weak gravitational lensing to directly measure mass. Weak lensing is the field that studies the weak image distortions of galaxies due to the light deflections produced by the presence of a foreground mass distribution. These distortions can be used to infer the total mass (baryonic and dark) distribution at large scales. We develop and study a new method to measure bias from the combination of weak lensing and galaxy density fields. The method consists of reconstructing the weak lensing maps from the distribution of the foreground galaxies. Bias is then measured from the correlations between the reconstructed and real weak lensing fields. We test the different systematics of the method and the regimes where this method is consistent with other methods to measure linear bias. We find that we can measure galaxy bias using this technique. This method is a good complement to other methods to measure bias because it uses different assumptions. Together, the different techniques will allow us to better constrain bias and cosmology in future surveys.
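For context, the linear galaxy bias discussed in this abstract is conventionally defined through the large-scale relation between galaxy and matter fluctuations (a standard definition, not a result specific to this thesis):

```latex
\delta_g(\mathbf{x}) \simeq b\,\delta_m(\mathbf{x}),
\qquad
\xi_{gg}(r) \simeq b^{2}\,\xi_{mm}(r),
\qquad
P_{gg}(k) \simeq b^{2}\,P_{mm}(k),
```

where \delta denotes the density contrast, \xi the two-point correlation function and P(k) the power spectrum. Because weak lensing responds to the total matter field, cross-correlating lensing maps with galaxy counts isolates b under assumptions different from those of halo-occupation modelling, which is why the two approaches are complementary.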
Styles APA, Harvard, Vancouver, ISO, etc.
8

Vanneste, Sylvain. « Constraints on primordial gravitational waves from the large scales CMB data ». Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS314/document.

Texte intégral
Résumé :
Cette thèse s’articule autour du développement d'outils d’analyse des modes B du fond diffus cosmologique (CMB) dans le but d'estimer l’amplitude des ondes gravitationnelles primordiales produites durant la période inflationnaire.Nous nous intéressons plus précisément aux grandes échelles angulaires, pour lesquelles le signal attendu des modes B primordiaux est dominant. Ces échelles étant particulièrement contaminées par des émissions polarisées galactiques, nous avons étudié et développé des méthodes permettant de réduire ces contaminations et de caractériser les résidus. Ces outils peuvent être utilisés pour analyser les données des satellites tels que Planck ou LiteBIRD. Afin de quantifier l’amplitude des modes B, nous avons développé et caractérisé un estimateur de spectre en puissance des anisotropies du CMB. Celui-ci s’exécute dans l'espace des pixels et permet de croiser des cartes mesurées par différent détecteurs. La méthode est optimale, et minimise les fuites de variance des modes E vers les modes B.Nous avons appliqué les méthodes de nettoyage et d’estimation de spectre aux cartes de données et de simulations en polarisation fournies publiquement par Planck. Nos contraintes sur la comportement spectral de la poussière et du rayonnement synchrotron galactique sont en accord avec les analyses précédentes. Enfin, nous avons pu déduire une limite supérieure sur l’amplitude des ondes gravitationnelles primordiales
This thesis focuses on the development of analysis tools for the primordial B modes of the Cosmic Microwave Background (CMB). Our goal is to extract the amplitude of the primordial gravitational waves produced during the inflationary period. Specifically, we are interested in the large angular scales, for which the primordial B-mode signal is expected to be dominant. Since these scales are particularly contaminated by polarised galactic emissions, we have studied and developed approaches to reduce those contaminations and to characterise their residuals. Those methods are applicable to satellite missions such as Planck or LiteBIRD. In order to estimate the B-mode amplitude, we developed and characterised a CMB anisotropies power spectrum estimator. The algorithm is pixel-based and allows maps measured by different detectors to be cross-correlated. The method is optimal and minimises the E-to-B variance leakage. We applied the cleaning and spectrum-estimation approaches to the polarisation data and simulation maps publicly provided by Planck. The constraints that we deduce are in agreement with past analyses. Ultimately, we derive an upper limit on the primordial gravitational-wave amplitude.
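As background to the upper limit mentioned above (standard conventions, not results of this thesis), the amplitude of primordial gravitational waves is usually reported through the tensor-to-scalar ratio evaluated at a pivot scale k_0:

```latex
r \equiv \frac{A_t(k_0)}{A_s(k_0)},
\qquad
C_\ell^{BB,\,\mathrm{prim}} \propto r ,
```

so that the primordial B-mode power spectrum scales linearly with r, and an upper limit on the measured large-scale BB power translates directly into an upper limit on r.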
Styles APA, Harvard, Vancouver, ISO, etc.
9

Monroe, Emy M. « Population Genetics and Phylogeography of Two Large-River Freshwater Mussel Species at Large and Small Spatial Scales ». Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1218129323.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
10

Mohammed, Abdulwasey. « Scaling up of peatland methane emission hotspots from small to large scales ». Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/15772.

Texte intégral
Résumé :
Methane is an important greenhouse gas that is relatively long-lived in the atmosphere, and wetlands are a major natural source of atmospheric methane. Methane emissions from wetlands are variable across both space and time at scales ranging from meters to continents, and a comprehensive accounting of wetland methane efflux is critical for quantifying the atmospheric methane balance. Major uncertainties in quantifying methane efflux arise when measuring and modelling its physical and biological determinants, including water table depth, microtopography, soil temperature, the distribution of aerenchymous vegetation, and the distribution of mosses. Further complications arise from the nonlinear interaction between flux and its drivers in highly heterogeneous wetland landscapes. A possible solution for quantifying wetland methane efflux at multiple spatial scales ('upscaling') is repeated observation using remote sensing technology to acquire information about the land surface across time, space, and spectra. These scaling issues must be resolved in order to progress in our understanding of the contribution of peatlands to the global atmospheric methane budget. In this thesis, data collected from multiple aircraft- and satellite-based remote sensing platforms were investigated to characterize the fine-scale spatial heterogeneity of a peatland in southwestern Scotland, for the purpose of developing techniques for quantifying ('upscaling') methane efflux at multiple spatial scales. Seasonal expansion and contraction of the lakes and pools was simulated with the LiDAR data, giving an indication of likely increases or decreases in methane emissions. Concepts from information theory applied to the different data sets also revealed the relative loss of some features of the peatland surface and the relative gain of others, and found a natural application in reducing bias in multi-scale spatial classification as well as in quantifying the length scales at which surface features important for methane fluxes are lost. Results from the wavelet analysis demonstrated that fine-scale heterogeneity, and hence the pattern of the peatland surface, is preserved up to a certain length scale. Variogram techniques were also tested to determine sample size, range and orientation in the data set. All of the above has implications for estimating the methane budget of the peatland landscape and could reduce the bias in the overall flux estimates. All the methods used can also be applied to contrasting sites.
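The variogram technique referred to above is a standard geostatistical tool; its usual definition and empirical estimator are given here for context (the exact implementation in the thesis may differ):

```latex
\gamma(h) = \tfrac{1}{2}\,\mathrm{E}\!\left[\left(Z(x+h)-Z(x)\right)^{2}\right],
\qquad
\hat{\gamma}(h) = \frac{1}{2N(h)} \sum_{(i,j)\,:\,\lVert x_i - x_j\rVert \approx h} \left(z_i - z_j\right)^{2},
```

where Z is the surface variable (for example elevation), h the lag distance and N(h) the number of point pairs at that lag; the range of the fitted variogram gives the length scale beyond which surface values are effectively uncorrelated.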
Styles APA, Harvard, Vancouver, ISO, etc.
11

Chippendale, Aaron Paul. « Detecting cosmological reionization on large scales through the 21 cm HI line ». Thesis, The University of Sydney, 2009. http://hdl.handle.net/2123/6256.

Texte intégral
Résumé :
This thesis presents the development of new techniques for measuring the mean redshifted 21 cm line of neutral hydrogen during reionization. This is called the 21 cm cosmological reionization monopole. Successful observations could identify the nature of the first stars and test theories of galaxy and large-scale structure formation. The goal was to specify, construct and calibrate a portable radio telescope to measure the 21 cm monopole in the frequency range 114 MHz to 228 MHz, which corresponds to the redshift range 11.5 > z > 5.2. The chosen approach combined a frequency independent antenna with a digital correlation spectrometer to form a correlation radiometer. The system was calibrated against injected noise and against a modelled galactic foreground. Components were specified for calibration of the sky spectrum to 1 mK/MHz relative accuracy. Comparing simulated and measured spectra showed that bandpass calibration is limited to 11 K, that is 1% of the foreground emission, due to larger than expected frequency dependence of the antenna pattern. Overall calibration, including additive contributions from the system and the radio foreground, is limited to 60 K. This is 160 times larger than the maximum possible monopole amplitude at redshift eight. Future work will refine and extend the system known as the Cosmological Reionization Experiment Mark I (CoRE Mk I).
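As a quick check of the frequency-redshift mapping quoted in this abstract, the observed frequency of the redshifted 21 cm line follows from its rest frequency of about 1420.4 MHz (a minimal illustrative sketch, not code from the thesis):

```python
# Redshifted 21 cm line: nu_obs = nu_rest / (1 + z)
NU_REST_MHZ = 1420.405751  # rest frequency of the neutral-hydrogen hyperfine transition

def redshift_21cm(nu_obs_mhz: float) -> float:
    """Redshift at which the 21 cm line is observed at frequency nu_obs_mhz (MHz)."""
    return NU_REST_MHZ / nu_obs_mhz - 1.0

for nu in (114.0, 228.0):
    print(f"{nu:6.1f} MHz -> z = {redshift_21cm(nu):.1f}")
# Prints z = 11.5 for 114 MHz and z = 5.2 for 228 MHz,
# matching the redshift range quoted in the abstract.
```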
Styles APA, Harvard, Vancouver, ISO, etc.
12

Chippendale, Aaron Paul. « Detecting cosmological reionization on large scales through the 21 cm HI line ». University of Sydney, 2009. http://hdl.handle.net/2123/6256.

Texte intégral
Résumé :
Doctor of Philosophy (PhD)
This thesis presents the development of new techniques for measuring the mean redshifted 21 cm line of neutral hydrogen during reionization. This is called the 21 cm cosmological reionization monopole. Successful observations could identify the nature of the first stars and test theories of galaxy and large-scale structure formation. The goal was to specify, construct and calibrate a portable radio telescope to measure the 21 cm monopole in the frequency range 114 MHz to 228 MHz, which corresponds to the redshift range 11.5 > z > 5.2. The chosen approach combined a frequency independent antenna with a digital correlation spectrometer to form a correlation radiometer. The system was calibrated against injected noise and against a modelled galactic foreground. Components were specified for calibration of the sky spectrum to 1 mK/MHz relative accuracy. Comparing simulated and measured spectra showed that bandpass calibration is limited to 11 K, that is 1% of the foreground emission, due to larger than expected frequency dependence of the antenna pattern. Overall calibration, including additive contributions from the system and the radio foreground, is limited to 60 K. This is 160 times larger than the maximum possible monopole amplitude at redshift eight. Future work will refine and extend the system known as the Cosmological Reionization Experiment Mark I (CoRE Mk I).
Styles APA, Harvard, Vancouver, ISO, etc.
13

Folesky, Jonas [Verfasser]. « Rupture Propagation Imaging Across Scales : from Large Earthquakes to Microseismic Events / Jonas Folesky ». Berlin : Freie Universität Berlin, 2019. http://d-nb.info/1176635158/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
14

Folesky, Jonas T. [Verfasser]. « Rupture Propagation Imaging Across Scales : from Large Earthquakes to Microseismic Events / Jonas Folesky ». Berlin : Freie Universität Berlin, 2019. http://d-nb.info/1176635158/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
15

Sanderson, Philip John 1974. « Experimental verification of the simplified scaling laws for bubbling fluidized beds at large scales ». Monash University, Dept. of Chemical Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/7891.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
16

Turner, Sara M. « Understanding river herring movement patterns at small and large spatial scales through geochemical markers ». Thesis, State University of New York Col. of Environmental Science & Forestry, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3628508.

Texte intégral
Résumé :

Environmentally-derived elemental and isotopic ratios in the otoliths of anadromous river herring (alewife, Alosa pseudoharengus, and blueback herring, A. aestivalis) were used to distinguish among groups of fish at spatial scales ranging from sub-populations within a watershed to populations from throughout the species' ranges. These ratios were also used to understand early life migrations and habitat use within and among populations. Sub-populations within the Hudson River, NY were accurately distinguished (> 95%), and populations from the Hudson River and Long Island, NY were distinct from each other and outgroups at varying distances, but accurate classification was dependent on the inclusion of oxygen isotopic ratios. Populations from Maine to Florida showed strong separation based on otolith signatures excluding (∼ 70%) and including (> 90%) oxygen isotopes. Reclassification accuracies improved for both models by including genetic results in a hierarchical assignment model. Though all natural tags were effective for stock discrimination, the accuracy varied depending on the markers included; while inclusion of oxygen isotopes resulted in the highest reclassification rates, accurate application requires intensive sampling because of high interannual variability. Genetic markers reduce the effects of interannual variation because they are generally stable over generations.

Variations in otolith chemistry across an otolith (i.e. the fish's life history) can provide information about movements among habitats, especially along salinity gradients. Juvenile alewife within the Hudson River, NY (a large watershed) moved among multiple freshwater habitats, and trends varied widely among individuals, while in the Peconic River, NY (a small, coastal watershed) three distinct movement patterns were observed. Retrospective analysis of Hudson River adult otoliths showed that multiple nursery habitats contribute to the spawning stock. Throughout the coast, retrospective analysis of adult otoliths showed that juveniles used fresh waters, estuaries, or a combination of both as nursery habitats; migratory behavior varied among populations and was correlated with the latitude of the watershed, the watershed area, the amount of accessible river kilometers, and the percentage of the watershed in urban use.

Styles APA, Harvard, Vancouver, ISO, etc.
17

Chen, Chih-Chieh. « Transient mountain waves in an evolving synoptic-scale flow and their interaction with large scales / ». Thesis, Connect to this title online ; UW restricted, 2005. http://hdl.handle.net/1773/10078.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
18

Gallienne, Christopher Paul. « The development of novel techniques for characterisation of marine zooplankton over very large spatial scales ». Thesis, University of Plymouth, 1997. http://hdl.handle.net/10026.1/1727.

Texte intégral
Résumé :
Marine zooplankton play an important role in the transfer of CO2 from the atmosphere/ocean system to deeper waters and the sediments. They also provide food for much of the world's fish stocks, and in some nutrient-depleted areas of the ocean they sustain phytoplankton growth by recycling nutrients. They therefore have a profound effect on the carbon cycle and upon life in the oceans. There is a perceived lack of information about the global distributions of zooplankton needed to validate ecosystem dynamics models, and the traditional methods of survey are inadequate to provide this information. There is a need to develop new technologies for the large-scale survey of zooplankton, which should provide data either suitable for quick and easy subsequent processing or, better still, processed in real time. New technologies for large-scale zooplankton survey fall into three main categories: acoustic, optical and video. No single method is capable of providing continuous real-time data at the level of detail required. A combination of two of the new technologies (optical and video) has the potential to provide broad-scale data on abundance, size and species distributions of zooplankton routinely, reliably, rapidly and economically. Such a combined method has been developed in this study. The optical plankton counter (OPC) is a fairly well established instrument in marine and freshwater zooplankton survey. A novel application of the benchtop version of this instrument (OPC-1L) for real-time data gathering at sea over ocean-basin scales has been developed in this study. A new automated video zooplankton analyser (ViZA) has been designed and developed to operate together with the OPC-1L. The two devices are eventually to be deployed in tandem on the Undulating Oceanographic Recorder (UOR) for large-scale ocean survey of zooplankton. During the initial development of the system, the two devices are used in benchtop flow-through mode using the ship's uncontaminated sea water supply. The devices have been deployed on four major oceanographic cruises in the North and South Atlantic, covering almost 40,000 km of transect. Used in benchtop mode, it has been shown that the OPC can simply and reliably survey thousands of kilometres of ocean surface waters for zooplankton abundance and size distribution in the size range 250 µm to 11.314 mm in real time. The ViZA system can add the dimension of shape to the OPC size data, and provide supporting data on size distributions and abundance. The sampling rate in oligotrophic waters and image-quality problems are the two main limitations to current ViZA performance which must be addressed, but where sufficient abundance exists and good-quality images are obtained, the initial version of the ViZA system is shown to be able reliably to classify zooplankton into six major groups. The four deployments have shown that data on zooplankton distributions on oceanic scales can be obtained without the delays and prohibitive costs associated with sample analysis for traditional sampling methods. The results of these deployments are presented, together with an assessment of the performance of the system and proposals for improvements to meet the requirements specified before a full in-situ system is deployed.
Styles APA, Harvard, Vancouver, ISO, etc.
19

Scukins, Arturs. « Bridging large and small scales of water models using hybrid Molecular Dynamics/Fluctuating Hydrodynamics framework ». Thesis, Aston University, 2014. http://publications.aston.ac.uk/24546/.

Texte intégral
Résumé :
This thesis presents a two-dimensional water model investigation and the development of a multiscale method for the modelling of large systems, such as a virus in water or a peptide immersed in solvent. We have implemented a two-dimensional 'Mercedes Benz' (MB) or BN2D water model using Molecular Dynamics. We have studied the dependence of its dynamical and structural properties on the model's parameters. For the first time we derived formulas to calculate thermodynamic properties of the MB model in the microcanonical (NVE) ensemble. We also derived equations of motion in the isothermal-isobaric (NPT) ensemble. We have analysed the rotational degree of freedom of the model in both ensembles. We have developed and implemented a self-consistent multiscale method, which is able to couple micro- and macroscales. This multiscale method assumes that matter consists of two phases, one related to the microscale and the other to the macroscale. We simulate the macroscale using Landau-Lifshitz fluctuating hydrodynamics, while we describe the microscale using Molecular Dynamics. We have demonstrated that communication between the disparate scales is possible without the introduction of a fictitious interface or of approximations which reduce the accuracy of the information exchange between the scales. We have investigated the control parameters which were introduced to control the contribution of each phase to the behaviour of the matter. We have shown that microscales inherit dynamical properties of the macroscales and vice versa, depending on the concentration of each phase. We have shown that the radial distribution function is not altered and that velocity autocorrelation functions are gradually transformed from the Molecular Dynamics to the Fluctuating Hydrodynamics description when the phase balance is changed. In this work we test our multiscale method for the liquid argon, BN2D and SPC/E water models. For the SPC/E water model we investigate microscale fluctuations which are computed using an advanced mapping technique of the small scales to the large scales, developed by Voulgarakis et al.
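The velocity autocorrelation functions mentioned above are, in essence, time-lagged averages over particles and time origins; a minimal sketch of such a calculation is given below for context (illustrative only, with synthetic data; the array shapes are assumptions, not the thesis code):

```python
import numpy as np

def velocity_autocorrelation(v: np.ndarray) -> np.ndarray:
    """Normalised VACF from velocities v of shape (n_steps, n_particles, n_dim)."""
    n_steps = v.shape[0]
    vacf = np.zeros(n_steps)
    for lag in range(n_steps):
        # v(t) . v(t + lag), averaged over all time origins and particles
        dots = np.sum(v[: n_steps - lag] * v[lag:], axis=-1)
        vacf[lag] = dots.mean()
    return vacf / vacf[0]

# Example with synthetic velocities: 1000 steps, 64 particles in two dimensions,
# mimicking the planar geometry of the BN2D model.
rng = np.random.default_rng(0)
velocities = rng.normal(size=(1000, 64, 2))
c_vv = velocity_autocorrelation(velocities)
```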
Styles APA, Harvard, Vancouver, ISO, etc.
20

Helmert, Jürgen. « Determination of characteristic turbulence length scales from large-eddy simulation of the convective planetary boundary layer ». Doctoral thesis, Universitätsbibliothek Leipzig, 2004. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-37561.

Texte intégral
Résumé :
Turbulent exchange processes in the atmospheric boundary layer play a key role in the vertical transport of momentum, energy and matter in the Earth's atmosphere. In meso-scale and global-scale atmospheric models, however, turbulent exchange processes are sub-grid scale and must be parameterised using suitable closure approaches. Here, the specification of the characteristic turbulence length scale as a function of the stability state of the atmosphere plays a decisive role. Currently used approaches, which are based on the turbulent mixing length for neutral stratification together with dimensionless stability functions, show deficits above all in the upper part of the convective boundary layer and in the entrainment zone, where strong vertical gradients occur. In the present work, high-resolution three-dimensional large-eddy simulations of the dry and moist boundary layer were carried out for a wide range of instability conditions. First and second moments of the atmospheric flow variables were computed from the simulated hydrodynamic and thermodynamic fields and discussed. The spectral properties of turbulent fluctuations of the flow variables, the spatio-temporal behaviour of coherent structures and characteristic turbulence length scales were derived. The characteristic turbulence length scales were verified by comparison with results of earlier numerical simulations, with turbulence measurements in the atmospheric boundary layer and with laboratory experiments. With the help of non-linear data modelling, easy-to-use approximations of the characteristic turbulence length scales were derived and their statistical significance discussed. Using these approximations, an existing parameterisation model was revised and verified by means of large-eddy simulations. Furthermore, the influence of the turbulent mixing length on the forecasting of meso-scale fields was investigated. For this purpose, a sensitivity study was carried out with the Lokal-Modell of the German Weather Service (Deutscher Wetterdienst). The simulation results were verified using satellite data and analysis data from 4D data assimilation.
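For context, closures of the kind discussed here usually write the turbulent mixing length as a neutral-limit length modified by a dimensionless stability function; one commonly used form (given purely for illustration, not necessarily the formulation revised in this thesis) is the Blackadar expression:

```latex
\ell_0(z) = \frac{\kappa z}{1 + \kappa z / \lambda},
\qquad
\ell = \ell_0 \, F(\mathrm{Ri}),
```

where \kappa \approx 0.4 is the von Kármán constant, \lambda an asymptotic length scale, and F(Ri) a stability function of the gradient Richardson number; the length scales diagnosed from the large-eddy simulations serve to constrain such formulations.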
Styles APA, Harvard, Vancouver, ISO, etc.
21

Pond, Jarrad W. T. « Perturbation analysis of fluctuations in the universe on large scales, including decaying solutions and rotational velocities ». Honors in the Major Thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1309.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
22

Hoult, Crispin. « The requirements, implementation and use of a generic foundation dataset for large-scales spatial data management ». Thesis, University of Newcastle Upon Tyne, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295514.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
23

Tao, Ze. « Crestal fault reactivation on rising salt diapirs : an integrated analysis from large to small scales of observation ». Thesis, Cardiff University, 2018. http://orca.cf.ac.uk/116170/.

Texte intégral
Résumé :
The modes in which faults can propagate and grow through subsurface rocks and strata are key to the establishment of fluid paths in sedimentary basins; faults are potential conduits for fluid in some regions, while at the same time they are associated with fault-related traps in others. The classical fault propagation models addressed in the published literature have so far considered isolated, linkage (lateral-tip linkage and dip-linkage), constant-length, and coherent models. However, the propagation histories of faults in regions dominated by salt tectonics are scarcely documented; moreover, the existing fault propagation models have rarely been critically assessed when applied to crestal faults, particularly because of the limited resolution of imaged strata in most publications and the relatively small size of crestal faults (length < 2.3 km, maximum throw < 50 m). With the increasing use of high-resolution seismic data in recent decades, it is now possible to undertake research into the evolution of both crestal faults and fluid-flow paths in regions dominated by salt tectonics. In parallel, the uniqueness of crestal faults in terms of their scale has raised important questions about how data resolution and scale variance influence fault analyses, and the current fault propagation models, when these are based on seismic and outcrop information. This research uses high-resolution seismic data from the Espírito Santo Basin, offshore SE Brazil, to investigate the growth histories of crestal faults, fluid flow in an area of significant salt tectonics, and how crestal faults are associated with traps in supra-salt successions. To answer the question, in a second stage, of how scale variance can influence the analysis of faults' propagation histories, data from Somerset (Bristol Channel) and the Ierapetra Basin (Crete) were collected in the field to broaden the database in this thesis from the larger, rift-basin scale to the seismic and sub-seismic scales. Segment linkage is predominant in areas where crestal faults grow. The interpreted crestal faults in SE Brazil propagated vertically and horizontally. Horizontal propagation was often hindered by natural barriers such as an accommodation zone (Chapter 4) or oblique transfer zones (Chapter 5), against which faults terminate. Vertical propagation stopped when a fault met the sea floor or when vertical propagation was accommodated by blind faults or larger (adjacent) faults showing relatively large displacements. Hence, this thesis shows that the propagation of crestal faults does not follow a 'coherent growth model'. Rather, the geometry and propagation history of discrete fault segments are not comparable. In SE Brazil, large fault segments propagated to link with non-reactivated small fault segments on the crest of the salt ridge, and can show later 'blind' propagation towards the surface. In terms of how scale variance can potentially (negatively) influence fault growth models interpreted on seismic data and in the field, a new quantitative method and two new parameters (sampling interval and module error) are introduced in this thesis for faults of multiple scales, from a few meters to tens of kilometers. The sampling interval has a significant influence on the interpretation of fault growth histories.
By changing one's sampling interval: 1) the interpretation of fault geometries is significantly changed; 2) maximum fault-throw values are underestimated; 3) fault segments are under-represented; 4) the geometry of fault linkage zones is changed; 5) the width of fault linkage zones is underestimated; and 6) fault interaction zones are lost. Using the SE Brazil seismic data, the accuracy of Throw-Distance plots was shown to be quantitatively lost when sampling intervals were larger than 37.5 m (every 3 shot-points) for the 'unique' crestal fault families in this thesis. However, this thesis demonstrates that the sampling intervals adopted by interpreters should differ depending on the resolution of the seismic data used and the total length of the investigated structures. A practical sampling-interval/fault-length ratio is therefore proposed in this work to address the caveats behind using variable (and indiscriminate) sampling intervals when analysing faults. Supra-salt sequences capable of promoting episodic fluid flow in regions of salt tectonics are of vital economic importance. Following on from the two latter themes (crestal faulting and fault scaling), the thesis addresses, in a third stage, the episodic fluid flow documented in the Espírito Santo Basin. The results of this section are proposed as a case study for supra-salt sequences. In detail, seal failure is systematically recorded in the study area and is interpreted to have contributed to most of the supra-salt fluid-flow events investigated in SE Brazil. Six types of traps are therefore widely identified in supra-salt successions of the Espírito Santo Basin, all forming examples of trapping geometries in sedimentary basins associated with salt tectonics. Regardless of a thermogenic or diagenetic origin for the fluid off Espírito Santo, the results in this thesis demonstrate important (and focused) fluid flow above salt giants when at least two critical conditions are observed: 1) a certain thickness of overburden strata is deposited on top of the salt structures, and 2) highly developed (i.e. large) crestal fault systems are generated over these same salt structures. It is therefore postulated that, if the overburden strata are thinner than a certain value, or the pressure imposed by growing salt increases significantly, active salt intrusion occurring together with fluid flow will replace more focused fluid-flow features in salt giants.
Styles APA, Harvard, Vancouver, ISO, etc.
24

Smith, Russell Julian. « Streaming motions of Abell clusters : new evidence for a high-amplitude bulk flow on very large scales ». Thesis, Durham University, 1998. http://etheses.dur.ac.uk/4826/.

Texte intégral
Résumé :
Streaming motions of galaxies and clusters provide the only method for probing the distribution of mass, as opposed to light, on scales of 20-100 h⁻¹ Mpc. This thesis presents a new survey of the local peculiar velocity field, based upon Fundamental Plane (FP) distances for an all-sky sample of 56 clusters to cz = 12000 km s⁻¹. Central velocity dispersions have been determined from new spectroscopic data for 429 galaxies. From new R-band imaging data the FP photometric parameters (effective diameter and effective surface brightness) have been measured for 324 galaxies. The new spectroscopic and photometric data have been carefully combined with an extensive body of measurements compiled from the literature, to yield a closely homogeneous catalogue of FP data for 725 early-type galaxies. Fitting the inverse FP relation to the merged catalogue yields distance estimates with a scatter of 22% per galaxy, resulting in cluster distance errors of 2-13%. The distances are consistent, on a cluster-by-cluster basis, with those determined from Tully-Fisher studies and from earlier FP determinations. The distances are marginally inconsistent with distance estimates based on brightest cluster galaxies, but this disagreement can be traced to a few highly discrepant clusters. The resulting peculiar velocity field is dominated by a bulk streaming component, with an amplitude of 810 ± 180 km s⁻¹ (directed towards l = 260°, b = -5°), a result which is robust against a range of potential systematic effects. The flow direction is ~35° from the CMB dipole and ~15° from the X-ray cluster dipole direction. Two prominent superclusters (the Shapley Concentration and the Horologium-Reticulum Supercluster) may contribute significantly to the generation of this flow. More locally, there is no far-side infall into the 'Great Attractor' (GA), apparently due to the opposing pull of the Shapley Concentration. A simple model of the flow in this direction suggests that the GA region generates no more than ~60% of the Local Group's motion in this direction. Contrary to some previous studies, the Perseus-Pisces supercluster is found to exhibit no net streaming motion. On small scales the velocity field is extremely quiet, with an rms cluster peculiar velocity of < 270 km s⁻¹ in the frame defined by the bulk flow. The results of this survey suggest that very distant mass concentrations contribute significantly to the local peculiar velocity field. This result is difficult to accommodate within currently popular cosmological models, which have too little large-scale power to generate the observed flow. The results may instead favour models with excess fluctuation power on 60-150 h⁻¹ Mpc scales.
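For context, the cluster peculiar velocities analysed in this survey follow from comparing redshift distances with the Fundamental Plane distance estimates (the standard low-redshift relation, quoted here for illustration):

```latex
v_{\mathrm{pec}} \approx c\,z_{\mathrm{CMB}} - H_0\, d_{\mathrm{FP}},
```

where d_FP is the FP distance of a cluster and z_CMB its redshift in the CMB frame; the bulk flow is then the best-fitting dipole of the set of cluster peculiar velocities.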
Styles APA, Harvard, Vancouver, ISO, etc.
25

Zieser, Britta [Verfasser], et Matthias [Akademischer Betreuer] Bartelmann. « Probing the matter distribution on intermediate and large scales with weak light deflections / Britta Zieser ; Betreuer : Matthias Bartelmann ». Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180500024/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
26

Ayromlou, Mohammadreza [Verfasser], et Guinevere [Akademischer Betreuer] Kauffmann. « Physical processes that determine the clustering of different types of galaxies on large scales / Mohammadreza Ayromlou ; Betreuer : Guinevere Kauffmann ». München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2021. http://d-nb.info/1241963819/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
27

Golinkoff, Jordan Seth. « Estimation and modeling of forest attributes across large spatial scales using BiomeBGC, high-resolution imagery, LiDAR data, and inventory data ». Thesis, University of Montana, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3568103.

Texte intégral
Résumé :

The accurate estimation of forest attributes at many different spatial scales is a critical problem. Forest landowners may be interested in estimating timber volume, forest biomass, and forest structure to determine their forest's condition and value. Counties and states may be interested to learn about their forests to develop sustainable management plans and policies related to forests, wildlife, and climate change. Countries and consortiums of countries need information about their forests to set global and national targets to deal with issues of climate change and deforestation as well as to set national targets and understand the state of their forest at a given point in time.

This dissertation approaches these questions from two perspectives. The first perspective uses the process model Biome-BGC paired with inventory and remote sensing data to make inferences about a current forest state given known climate and site variables. Using a model of this type, future climate data can be used to make predictions about future forest states as well. An example of this work applied to a forest in northern California is presented. The second perspective of estimating forest attributes uses high resolution aerial imagery paired with light detection and ranging (LiDAR) remote sensing data to develop statistical estimates of forest structure. Two approaches within this perspective are presented: a pixel based approach and an object based approach. Both approaches can serve as the platform on which models (either empirical growth and yield models or process models) can be run to generate inferences about future forest state and current forest biogeochemical cycling.

Styles APA, Harvard, Vancouver, ISO, etc.
28

Rodgers, Erin V. « Scales of Resilience : Community Stability, Population Dynamics, and Molecular Ecology of Brook Trout in a Riverscape after a Large Flood ». Antioch University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=antioch1422195420.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
29

Kröniger, Konstantin [Verfasser], et M. [Akademischer Betreuer] Mauder. « Surface-atmosphere interactions of heterogeneous surfaces on multiple scales by means of large-eddy simulations and analytical approaches / Konstantin Kröniger ; Betreuer : M. Mauder ». Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1176022369/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

Vercelloni, Julie. « Quantifying the state of populations and effects of disturbances at large spatio-temporal scales : The case of coral populations in the Great Barrier Reef ». Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/87812/1/Julie_Vercelloni_Thesis.pdf.

Texte intégral
Résumé :
This project was a step forward in applying statistical methods and models to provide new insights for more informed decision-making at large spatial scales. The model has been designed to address complicated effects of ecological processes that govern the state of populations and uncertainties inherent in large spatio-temporal datasets. Specifically, the thesis contributes to better understanding and management of the Great Barrier Reef.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Savino, Sandro. « A solution to the problem of the cartographic generalization of Italian geographical databases at large-medium scales : approach definition, process design and operators implementation ». Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421671.

Texte intégral
Résumé :
During the many years in which the generalization of cartographic data has been studied, many advances have been achieved. As some national mapping agencies in Europe and elsewhere in the world begin to introduce automated processes into their production lines, the original dream of a completely automated system that could perform generalization is getting closer, even though it has not been reached yet. The aim of this dissertation is to investigate whether it is possible to design and implement a working generalization process for the Italian large-medium scale geographical databases. In this thesis we argue that the models, the approaches and the algorithms developed so far provide a robust and sound basis for the problem of automated cartographic generalization, but that to build an effective generalization process it is necessary to deal with all the small details deriving from the actual implementation of the process on defined scales and data models of input and output. We propose that this goal can be reached by capitalizing on the research results achieved so far and customizing the process to the data models and scales treated. This is the approach at the basis of this research work: the design of the cartographic generalization process and the algorithms implemented, whether developed from scratch or derived from previous works, have all been customized to solve a well-defined problem: they expect input data that comply with a consistent data model and are tailored to obtain the results at a defined scale and data model. This thesis explains how this approach has been put into practice in the frame of the CARGEN project, which aims at the development of a complete cartographic process to generalize the Italian medium-scale geographical databases at the 1:25000 and 1:50000 scales from the official Italian large-scale geographical database at the 1:5000 scale. This thesis focuses on the generalization to the 1:25000 scale, describing the approach that has been adopted and the overall process that has been designed, and provides details on the most important operators implemented for the generalization at that scale.
Questa tesi di dottorato sviluppa la problematica della generalizzazione cartografica applicata ai database geografici italiani alla alta e media scala. Il lavoro di ricerca si è sviluppato all'interno del progetto CARGEN, un progetto di ricerca tra l'Università di Padova e la Regione Veneto, con la collaborazione dell'IGMI per lo sviluppo di una procedura automatica di generalizzazione del database DB25 IGMI in scala 1:25000 a partire dal database regionale GeoDBR in scala 1:5000. Il lavoro di tesi affronta tutte le tematiche relative al processo di generalizzazione, partendo dalla generalizzazione del modello fino alla descrizione degli algoritmi di generalizzazione delle geometrie.
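By way of illustration only (the thesis builds custom operators tailored to the CARGEN data models, which are not reproduced here), the short Python sketch below shows the kind of elementary geometry-generalization operator such a process relies on: a Douglas-Peucker line simplification using the shapely library. The function name and tolerance value are hypothetical.

# Illustrative sketch only: Douglas-Peucker simplification as a generic example
# of a cartographic generalization operator; not the thesis' own operators.
from shapely.geometry import LineString

def simplify_centreline(coords, tolerance=10.0):
    """Simplify a polyline (e.g. a road centreline) for display at a smaller scale."""
    line = LineString(coords)
    # preserve_topology avoids introducing self-intersections during simplification
    return list(line.simplify(tolerance, preserve_topology=True).coords)

if __name__ == "__main__":
    wiggly = [(0, 0), (1, 0.2), (2, -0.1), (3, 0.3), (10, 0)]
    print(simplify_centreline(wiggly, tolerance=0.5))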
Styles APA, Harvard, Vancouver, ISO, etc.
32

Graham, Tabitha. « INVESTIGATION OF MEDIA INGREDIENTS AND WATER SOURCES FOR ALGAE CO2 CAPTURE AT DIFFERENT SCALES TO DEMONSTRATE THE CORRELATIONS BETWEEN LAB-SCALE AND LARGE-SCALE GROWTH ». UKnowledge, 2013. http://uknowledge.uky.edu/bae_etds/16.

Texte intégral
Résumé :
As energy use increases globally, so do the associated environmental burdens, and carbon dioxide has been widely identified as a culprit of climate change. The University of Kentucky and Duke Energy Power have partnered to test carbon capture technology in a large-scale project. To this end, the objective of this thesis is to investigate potential water media sources and nutrient sources at different volume scales for algae cultivation, to help create a more environmentally viable and economically feasible solution. This work conducts a life cycle assessment of water media sources and of the effects of the inputs and outputs needed for each medium. The up-scaling objective of the research is to identify which parameters vary as a result of up-scaling and how to maintain a culture at the large scale that is standardized to the lab-scale culture.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Boudehane, Abdelhak. « Structured-joint factor estimation for high-order and large-scale tensors ». Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG085.

Texte intégral
Résumé :
Les données et les signaux multidimensionnels occupent une place importante dans les applications modernes. La décomposition tensorielle est un outil mathématique puissant permettant de modéliser les données et les signaux multidimensionnels, tout en préservant les relations interdimensionnelles. Le modèle Canonique Polyadique (CP), un modèle de décomposition tensorielle largement utilisé, est unique à des indéterminations d'échelle et de permutation près. Cette propriété facilite l'interprétation physique, ce qui a encouragé l'intégration du modèle CP dans divers contextes. Le défi auquel est confrontée la modélisation tensorielle est la complexité de calcul et l'espace mémoire requis. Les tenseurs d'ordre élevé représentent un problème délicat, car la complexité de calcul et l'espace mémoire requis augmentent de façon exponentielle en fonction de l'ordre. Les tenseurs de grandes tailles (lorsque le nombre de variables selon une ou plusieurs dimensions du tenseur est important) ajoutent un fardeau supplémentaire. La théorie des réseaux de tenseurs (Tensor Networks - TN) est une piste prometteuse, permettant de réduire les problèmes d'ordre élevé en un ensemble de problèmes d'ordre réduit. En particulier, le modèle Tensor-Train (TT), l'un des modèles TN, est un terrain intéressant pour la réduction de la dimensionnalité. Cependant, représenter un modèle CP par une représentation TT est extrêmement coûteux dans le cas des tenseurs de grande taille, car il nécessite la matricisation complète du tenseur, ce qui peut dépasser la capacité mémoire. Dans cette thèse, nous étudions la réduction de la dimensionnalité dans le contexte de la décomposition tensorielle sous contrainte de sparsité et la décomposition couplée d'ordre élevé. Sur la base des résultats du schéma JIRAFE (Joint dImensionality Reduction And Factor rEtrieval), nous utilisons la flexibilité du modèle TT pour intégrer les contraintes physiques et les connaissances préalables sur les facteurs, dans le but de réduire le temps de calcul. Pour les problèmes de grandes tailles, nous proposons un schéma permettant de paralléliser et de randomiser les différentes étapes, i.e., la réduction de dimensionnalité et l'estimation des facteurs du modèle CP. Nous proposons également une stratégie basée sur la grille de tenseur, permettant un traitement entièrement parallèle pour le cas des très grandes tailles et de la décomposition tensorielle dynamique
Multidimensional data sets and signals occupy an important place in recent application fields. Tensor decomposition represents a powerful mathematical tool for modeling multidimensional data and signals without losing the interdimensional relations. The Canonical Polyadic (CP) model, a widely used tensor decomposition model, is unique up to scale and permutation indeterminacies. This property facilitates the physical interpretation, which has led to the integration of the CP model in various contexts. The main challenge facing tensor modeling is the computational complexity and the memory requirements. High-order tensors represent an important issue, since the computational complexity and the required memory space increase exponentially with respect to the order. Another issue is the size of the tensor in the case of large-scale problems, which adds a further burden on complexity and memory. Tensor Networks (TN) theory is a promising framework, allowing high-order problems to be reduced into a set of lower-order problems. In particular, the Tensor-Train (TT) model, one of the TN models, is an interesting ground for dimensionality reduction. However, representing a CP tensor using a TT model is extremely expensive in the case of large-scale tensors, since it requires the full matricization of the tensor, which may exceed the memory capacity. In this thesis, we study dimensionality reduction in the context of sparse coding and high-order coupled tensor decomposition. Based on the results of the Joint dImensionality Reduction And Factor rEtrieval (JIRAFE) scheme, we use the flexibility of the TT model to integrate the physics-driven constraints and the prior knowledge on the factors, with the aim of reducing the computation time. For large-scale problems, we propose a scheme allowing the different steps, i.e., the dimensionality reduction and the factor estimation, to be parallelized and randomized. We also propose a grid-based strategy, allowing fully parallel processing for the case of very large scales and dynamic tensor decomposition
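To make the CP model concrete, here is a minimal alternating-least-squares (ALS) sketch for a third-order tensor in Python/NumPy. It only illustrates the decomposition the abstract refers to; the TT-based JIRAFE schemes, constraints and randomization developed in the thesis are not reproduced, and all function names and parameters below are our own.

# Minimal CP-ALS sketch for a third-order tensor (illustration only).
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding of a third-order tensor (C-order reshape).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker (Khatri-Rao) product.
    return np.einsum('ir,jr->ijr', A, B).reshape(A.shape[0] * B.shape[0], -1)

def cp_als(T, rank, n_iter=200, seed=0):
    # Rank-R CP model T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r], fitted by alternating least squares.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((s, 3)) for s in (10, 11, 12))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)        # exact rank-3 tensor
    A, B, C = cp_als(T, rank=3)
    T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))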
Styles APA, Harvard, Vancouver, ISO, etc.
34

Bhowmik, Avit Kumar [Verfasser], Ralf B. [Akademischer Betreuer] Schäfer et Thomas [Gutachter] Horvath. « Human and Ecological Impacts of Freshwater Degradation on Large Scales. Development and Integration of Spatial Models with Ecological Models for Spatial-ecological Analyses / Avit Kumar Bhowmik. Betreuer : Ralf B. Schäfer. Gutachter : Thomas Horvath ». Landau : Universität Koblenz-Landau, Campus Landau, 2015. http://d-nb.info/1107775760/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
35

Padié, Sophie. « Réponse des cervidés à la chasse : stratégies d’utilisation de l’espace à multiples échelles et conséquences sur la végétation ». Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20185.

Texte intégral
Résumé :
La chasse, comme la prédation naturelle, induit des réponses comportementales par les individus chassés qui cherchent ainsi à éviter ou à réduire le risque. Il est en particulier fréquent d'observer un changement dans leur utilisation de l'espace, mais l'articulation et les déterminants des réponses aux différentes échelles spatiales restent mal compris. De même, s'il a été suggéré que ces modifications comportementales pouvaient affecter en cascade la végétation, cela reste à tester. Pour combler ces lacunes, j'ai (1) étudié, dans un paysage agricole du sud de la France, une population chassée de chevreuils et leur utilisation des milieux ouverts risqués et des couverts boisés, au cours de périodes de risque contrasté ; (2) testé, sur une population canadienne de cerfs à queue noire dépourvue de prédateurs et exempte de chasse, l'influence d'une chasse expérimentale sur le comportement des animaux et sur la végétation. J'ai montré que les chevreuils répondaient à une augmentation du risque à plusieurs échelles spatiales. Ils réduisaient leur utilisation des habitats risqués, et dans certains cas se rapprochaient des couverts, de jour, ces deux réponses étant couplées au niveau individuel. Le gradient paysager d'ouverture du milieu contraignait cependant les niveaux de réponses observées et les stratégies individuelles. Au Canada, j'ai observé un évitement de la zone chassée par les cerfs les plus sensibles à la présence humaine, corrélé à une diminution de l'abroutissement pour deux des quatre espèces de plantes étudiées. J'ai intégré ces résultats dans une discussion sur l'utilisation de la chasse pour gérer les populations d'herbivores et leurs impacts sur la végétation
Hunting, similarly to natural predation, induces behavioural responses in hunted individuals, which aim at avoiding or reducing risk. In particular, changes in space use are frequently observed, but the articulation and determinants of these changes at multiple spatial scales are still poorly understood. Also, although it has been suggested that these changes might cascade onto the vegetation, this remains to be tested. To fill these gaps, I (1) studied a hunted roe deer population living in an agricultural landscape in southern France where roe deer can find open risky habitats and woody cover; and (2) tested black-tailed deer behavioural responses to an experimental hunt in a predator- and hunting-free population in the Haida Gwaii archipelago (BC, Canada). I also investigated the possible cascading effects on the vegetation. I showed that roe deer responded to increased hunting pressure at multiple scales, reducing their use of the risky habitats and, in specific situations, their distance to the nearest cover. During daytime those two responses were coupled at the individual level. Generally, landscape openness constrained individual responses and strategies. In the hunting-for-fear experiment conducted on Haida Gwaii, I found that only the deer less tolerant to human disturbance avoided the hunting area. However, a simultaneous reduction in browsing pressure was found on two of the four plant species monitored. I integrated these results into a general discussion on the possible role of hunting as a tool to manage abundant deer populations and their impacts on the vegetation
Styles APA, Harvard, Vancouver, ISO, etc.
36

Lambie-Hanson, Christopher. « Covering Matrices, Squares, Scales, and Stationary Reflection ». Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/368.

Texte intégral
Résumé :
In this thesis, we present a number of results in set theory, particularly in the areas of forcing, large cardinals, and combinatorial set theory. Chapter 2 concerns covering matrices, combinatorial structures introduced by Viale in his proof that the Singular Cardinals Hypothesis follows from the Proper Forcing Axiom. In the course of this proof and subsequent work with Sharon, Viale isolated two reflection principles, CP and S, which can hold of covering matrices. We investigate covering matrices for which CP and S fail and prove some results about the connections between such covering matrices and various square principles. In Chapter 3, motivated by the results of Chapter 2, we introduce a number of square principles intermediate between the classical □_κ and □(κ⁺). We provide a detailed picture of the implications and independence results which exist between these principles when κ is regular. In Chapter 4, we address three questions raised by Cummings and Foreman regarding a model of Gitik and Sharon. We first analyze the PCF-theoretic structure of the Gitik-Sharon model, determining the extent of good and bad scales. We then classify the bad points of the bad scales existing in both the Gitik-Sharon model and various other models containing bad scales. Finally, we investigate the ideal of subsets of singular cardinals of countable cofinality carrying good scales. In Chapter 5, we prove that, assuming large cardinals, it is consistent that there are many singular cardinals μ such that every stationary subset of μ⁺ reflects but there are stationary subsets of μ⁺ that do not reflect at ordinals of arbitrarily high cofinality. This answers a question raised by Todd Eisworth and is joint work with James Cummings. In Chapter 6, we extend a result of Gitik, Kanovei, and Koepke regarding intermediate models of Prikry-generic forcing extensions to Radin generic forcing extensions. Specifically, we characterize intermediate models of forcing extensions by Radin forcing at a large cardinal κ using measure sequences of length less than κ. In the final brief chapter, we prove some results about iterations of ω₁-Cohen forcing with ω₁-support, answering a question of Justin Moore.
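For readers unfamiliar with the notation, the intermediate principles of Chapter 3 refine Jensen's classical square principle. Its standard definition is given below in LaTeX as general background; it is not quoted from the thesis.

% Jensen's classical square principle (standard background, not from the thesis).
\[
\square_{\kappa} \iff \exists \langle C_{\alpha} \mid \alpha \in \operatorname{Lim}(\kappa^{+}) \rangle
\text{ such that for every } \alpha:\;
C_{\alpha} \text{ is club in } \alpha,\;
\operatorname{otp}(C_{\alpha}) \le \kappa,\;
\text{and } \beta \in \operatorname{Lim}(C_{\alpha}) \Rightarrow C_{\beta} = C_{\alpha} \cap \beta.
\]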
Styles APA, Harvard, Vancouver, ISO, etc.
37

Zech, Alraune [Verfasser], Sabine [Akademischer Betreuer] Attinger et Olaf [Akademischer Betreuer] Kolditz. « Impact of Aquifer Heterogeneity on Subsurface Flow and Salt Transport at Different Scales : from a method to determine parameters of heterogeneous permeability at local scale to a large-scale model for the sedimentary basin of Thuringia / Alraune Zech. Gutachter : Sabine Attinger ; Olaf Kolditz ». Jena : Thüringer Universitäts- und Landesbibliothek Jena, 2014. http://d-nb.info/1048047229/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

Bowman, Adam Shoresworth. « The Born-Oppenheimer Approximation for Triatomic Molecules with Large Angular Momentum in Two Dimensions ». Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/36249.

Texte intégral
Résumé :
We study the Born-Oppenheimer approximation for a symmetric linear triatomic molecule in two space dimensions. We compute energy levels up to errors of order ε^5, uniformly for three bounded vibrational quantum numbers n1, n2, and n3, and for nuclear angular momentum quantum numbers ℓ ≤ k ε^(-3/4) with k > 0. Here the small parameter ε is the fourth root of the ratio of the electron mass to an average nuclear mass.
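In display form, and assuming the standard notation ℓ for the nuclear angular momentum quantum number and an average nuclear mass written here as M̄ (both notational choices are ours, not quoted from the thesis), the quantities described above read:

% Notation (epsilon, the error order, and the range of l) restated from the abstract; symbol names are ours.
\[
\epsilon = \left( \frac{m_{e}}{\bar{M}} \right)^{1/4}, \qquad
E_{\text{exact}} - E_{\text{computed}} = O\!\left(\epsilon^{5}\right), \qquad
\ell \le k\,\epsilon^{-3/4} \quad (k > 0).
\]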
Master of Science
Styles APA, Harvard, Vancouver, ISO, etc.
39

Carter, Benjamin. « Water-wave propagation through very large floating structures ». Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/12031.

Texte intégral
Résumé :
Proposed designs for Very Large Floating Structures motivate us to understand water-wave propagation through arrays of hundreds, or possibly thousands, of floating structures. The water-wave problems we study are each formulated under the usual conditions of linear wave theory. We study the frequency-domain problem of water-wave propagation through a periodically arranged array of structures, which is solved using a variety of methods. In the first instance we solve the problem for a periodically arranged infinite array using the method of matched asymptotic expansions for both shallow and deep water; the structures are assumed to be small relative to the wavelength and the array periodicity, and may be fixed or float freely. We then solve the same infinite array problem using a numerical approach, namely the Rayleigh-Ritz method, for fixed cylinders in water of finite depth and deep water. No limiting assumptions on the size of the structures relative to other length scales need to be made using this method. Whilst we aren't afforded the luxury of explicit approximations to the solutions, we are able to compute diagrams that can be used to aid an investigation into negative refraction. Finally we solve the water-wave problem for a so-called strip array (that is, an array that extends to infinity in one horizontal direction, but is finite in the other), which allows us to consider the transmission and reflection properties of a water wave incident on the structures. The problem is solved using the method of multiple scales, under the assumption that the evolution of waves in a horizontal direction occurs on a slower scale than the other time scales that are present, and the method of matched asymptotic expansions using the same assumptions as for the infinite array case.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Muñoz, Blanc Carlos. « Simulación físico-matemática de las turbulencias en los incendios de edificación. Propuesta de una nueva metodología de análisis relativa a la verificación cualitativa de las turbulencias simuladas ». Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/145388.

Texte intégral
Résumé :
En la Unión Europea, y más concretamente en España, el análisis prestacional de cualquier edificio frente a la acción del fuego es aún un hecho aislado y poco habitual, a pesar de las ventajas que el mismo comporta. No obstante, incluso en aquellos países donde hace años se estudia el comportamiento estructural en situación accidental de incendio en base a los métodos prestacionales, como es el caso de Estados Unidos, el campo científico de las simulaciones computacionales basadas en la Dinámica de fluidos y en la Termodinámica está aún en lo que podríamos denominar, haciendo un símil con el crecimiento del ser humano, la fase adolescente. Mejorar en la medida de lo posible los criterios relativos a la caracterización de un fenómeno tan importante durante el desarrollo de un fuego como es la turbulencia y disponer de una nueva metodología de análisis relativa a la verificación cualitativa de la misma permitirá avanzar con seguridad a la sociedad a medida que estas simulaciones computacionales en edificación se extiendan al terreno profesional
In the European Union, and more specifically in Spain, the performance-based analysis of buildings under fire remains rare and unusual, despite the advantages it may involve. However, even in the countries (such as the United States) where structural behavior under accidental fire situations has been studied with performance-based methods for many years, the scientific field of computational simulations based on Fluid Dynamics and Thermodynamics remains in what could be called, by analogy with human growth, its teenage years. Improving the criteria used to characterize the phenomenon of turbulence, so important during the development of a fire, and supplying a new analysis methodology focused on its qualitative verification will improve society's safety as these computational simulations are extended to the professional field.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Papin, Morgane. « Apport de la bioacoustique pour le suivi d’une espèce discrète : le Loup gris (Canis lupus) ». Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0258/document.

Texte intégral
Résumé :
Le nombre croissant de travaux réalisés ces dernières années a montré que la bioacoustique est particulièrement intéressante pour le suivi d’espèces discrètes. L’émergence de dispositifs d’enregistrement autonomes, associée à de nouvelles méthodes d’analyse, ont récemment participé à l’accroissement des études dans ce domaine. Au cours des 30 dernières années, le Loup gris (Canis lupus), mammifère carnivore aux mœurs discrètes connu pour ses hurlements de longue portée, a fait l’objet de nombreuses études acoustiques. Ces dernières visaient notamment à améliorer son suivi, qui s’avère complexe du fait des grandes capacités de déplacement des loups, de l’étendue de leurs territoires et de la diversité des milieux dans lesquels ils vivent. Cependant, la bioacoustique passive a jusqu’alors très peu été exploitée pour le suivi du Loup. C’est dans ce contexte que la présente thèse s’est organisée autour de trois axes de recherche. Les deux premiers axes portent sur l’apport de la bioacoustique passive pour le suivi du Loup gris en milieu naturel. En combinant des analyses acoustiques, statistiques et cartographiques, le premier objectif a été d’élaborer une méthode pour l’échantillonnage spatial de vastes zones d’étude, afin d’y détecter des hurlements de loups à l’aide de réseaux d’enregistreurs autonomes. Ce même dispositif a ensuite permis, dans un second temps, de tester la possibilité de localiser les loups grâce à leurs hurlements. Les expérimentations conduites en milieu de moyenne montagne (Massif des Vosges) et de plaine (Côtes de Meuse), sur deux zones d’étude de 30 km² et avec un réseau de 20 enregistreurs autonomes, ont permis de démontrer l’intérêt de la bioacoustique passive pour le suivi du Loup gris. En effet, près de 70% des émissions sonores (son synthétique aux propriétés similaires à celles de hurlements de loups) ont été détectés par au moins un enregistreur autonome en milieu de moyenne montagne et plus de 80% en milieu de plaine, pour des distances enregistreurs– source sonore atteignant respectivement plus de 2.7 km et plus de 3.5 km. Grâce à un modèle statistique et à un Système d’Information Géographique, la probabilité de détection des hurlements a pu être cartographiée sur les deux zones. En moyenne montagne, elle était forte à très forte (>0.5) sur 5.72 km² de la zone d’étude, contre 21.43 km² en milieu de plaine. Les sites d’émission ont été localisés avec une précision moyenne de 315 ± 617 (SD) m, réduite à 167 ± 308 (SD) m après l’application d’un seuil d’erreur temporelle défini d’après la distribution des données. Le troisième axe de travail porte quant à lui sur l’application d’indices de diversité acoustique pour estimer le nombre d’individus participant à un chorus et ainsi contribuer au suivi de l’effectif des meutes. Les valeurs obtenues pour les six indices (H, Ht, Hf, AR, M et ACI) étaient corrélées avec le nombre de loups hurlant dans les chorus artificiels testés. De bonnes prédictions de l’effectif ont été obtenues sur des chorus réels avec l’un de ces indices (ACI). L’influence de plusieurs biais sur la précision des prédictions de chacun des six indices a ensuite pu être étudiée, montrant que trois d’entre eux y étaient relativement peu sensibles (Hf, AR et ACI). 
Finalement, les résultats obtenus avec les enregistreurs autonomes montrent le potentiel des méthodes acoustiques passives pour la détection de la présence de loups mais aussi pour les localiser avec une bonne précision, dans des milieux contrastés et à de larges échelles spatiale et temporelle. L’utilisation des indices de diversité acoustique ouvre également de nouvelles perspectives pour l’estimation de l’effectif des meutes. Prometteuses, l’ensemble des méthodes émergeant de ce travail nécessite à présent quelques investigations complémentaires avant d’envisager une application concrète pour le suivi du Loup gris dans son milieu naturel
The growing number of studies carried out in recent years has shown that bioacoustics is particularly interesting for the monitoring of secretive species. The emergence of autonomous recording devices, combined with new methods of analysis, has recently contributed to the increase of studies in this field. Over the last 30 years, many bioacoustic studies have been developed for the Grey wolf (Canis lupus), a secretive large carnivore known for its howls spreading over distances of up to several kilometers. This research notably aimed at improving its monitoring, which is complex because of the strong dispersal capacities of wolves over long distances, the large extent of their territories and the various natural contexts in which they live. In this context, this PhD thesis was organized around three research axes. The first two axes focused on the contribution of passive bioacoustics to Grey wolf monitoring in the field. By combining acoustic, statistical and cartographic analyses, the first objective was to develop a spatial sampling method adapted to large study areas for the detection of wolf howls using autonomous recorders. Then, the same protocol was used to investigate the possibility of localizing wolves thanks to their howls. Field experiments, conducted in mid-mountain (Massif des Vosges) and lowland (Côtes de Meuse) environments, in two study areas of 30 km² and with an array of 20 autonomous recorders, demonstrated the high potential of passive bioacoustics for Grey wolf monitoring. Indeed, nearly 70% of broadcasts (a synthetic sound with acoustic properties similar to howls) were detected by at least one autonomous recorder in the mid-mountain environment and more than 80% in the lowland environment, for source-recorder distances of up to 2.7 km and 3.5 km respectively. By using a statistical model and a Geographic Information System, the detection probability of wolf howls was modeled in both study areas. In the mid-mountain environment, this detection probability was high or very high (greater than 0.5) over 5.72 km² of the study area, compared with 21.43 km² in the lowland environment. The broadcast sites were localized with an overall mean accuracy of 315 ± 617 (SD) m, reduced to 167 ± 308 (SD) m after setting a temporal error threshold defined from the data distribution. The third axis focused on the application of acoustic diversity indices to estimate the number of howling wolves in choruses and thus to contribute to pack size monitoring. Index values of the six indices (H, Ht, Hf, AR, M, and ACI) were positively correlated with the number of howling wolves in the artificial choruses tested. Interesting size predictions based on real choruses were obtained with one of the indices (ACI). The effects of several biases on the reference values of the acoustic indices were then explored, showing that three of them were relatively insensitive (Hf, AR and ACI). Finally, the results obtained with autonomous recorders confirm the real potential of passive acoustic methods for detecting the presence of wolves but also for localizing individuals with high precision, in contrasting natural environments, at large spatial and temporal scales. The use of acoustic diversity indices also opens new perspectives for estimating pack sizes. All of the promising methods emerging from this thesis now require further investigation before considering a concrete application for monitoring the Grey wolf in its natural environment
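As an aside, one of the indices named above, the Acoustic Complexity Index (ACI), has a simple generic formulation (after Pieretti et al., 2011): the summed frame-to-frame intensity variation in each frequency band, normalised by the total intensity in that band. The Python sketch below implements only this generic form, not the thesis' implementation or parameter choices, and the toy signal is invented.

# Generic ACI sketch (illustration only; not the thesis' implementation).
import numpy as np
from scipy.signal import spectrogram

def acoustic_complexity_index(signal, fs, nperseg=512):
    # Spectrogram magnitude per (frequency, time) cell.
    f, t, S = spectrogram(signal, fs=fs, nperseg=nperseg)
    dI = np.abs(np.diff(S, axis=1))                     # frame-to-frame intensity variation
    aci_per_band = dI.sum(axis=1) / (S.sum(axis=1) + 1e-12)
    return aci_per_band.sum()

if __name__ == "__main__":
    fs = 16000
    t = np.arange(5 * fs) / fs
    toy_chorus = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t)) + 0.2 * np.random.randn(t.size)
    print("ACI:", acoustic_complexity_index(toy_chorus, fs))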
Styles APA, Harvard, Vancouver, ISO, etc.
42

Wibking, Benjamin Douglas. « Cosmic structure formation on small scales : From non-linear galaxy clustering to the interstellar medium ». The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1561556033289855.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
43

Sarpa, Elena. « Velocity and density fields on cosmological scales : modelisation via non-linear reconstruction techniques and application to wide spectroscopic surveys ». Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0524.

Texte intégral
Résumé :
Un nouvel algorithme de reconstruction entièrement non-linéaire, basé sur le principe de moindre action, extension de la Fast Action Minimization method (Nusser & Branchini, 2000), conçu pour des applications aux sondages spectroscopiques de galaxies, est présenté. Sa capacité à reconstruire le champ de vitesse à partir du champ de densité observé est testée sur des catalogues de halos de matière noire : les trajectoires de 10^6 halos sont tracées en arrière dans le temps, bien au-delà de l'approximation lagrangienne du premier ordre. Les vitesses propres sont modélisées avec succès à la fois dans l'espace réel et dans l'espace du redshift. Le nouvel algorithme est utilisé pour déterminer avec une plus grande précision l'échelle des Oscillations Acoustiques de Baryons (BAO) à partir de la fonction de corrélation à deux points. Des tests sur des catalogues de halos de matière montrent comment le nouvel algorithme récupère avec succès les BAO dans l'espace réel et du redshift, également pour des catalogues synthétiques (mocks) exceptionnels où la signature des BAO est à la mauvaise échelle ou absente. Cette technique se révèle plus puissante que l'approximation linéaire, en fournissant une mesure non biaisée de l'échelle BAO. L'algorithme est ensuite testé sur des mocks de galaxies à faible redshift spécifiquement conçus pour correspondre aux fonctions de corrélation des galaxies lumineuses rouges du catalogue SDSS-DR12. Enfin, l'application à l'analyse des vides cosmiques est présentée, montrant la grande potentialité d'une modélisation non-linéaire du champ de vitesse pour restaurer l'isotropie intrinsèque des vides
A new fully non-linear reconstruction algorithm, based on the least-action principle and extending the Fast Action Minimisation method of Nusser & Branchini (2000), is presented, intended for applications with the next-generation massive spectroscopic surveys. Its capability of recovering the velocity field starting from the observed density field is tested on dark-matter halo catalogues, tracing the trajectories of up to 10^6 haloes backward in time. Both in real and redshift space it successfully recovers the peculiar velocities. The new algorithm is first employed for the accurate recovery of the Baryonic Acoustic Oscillations (BAO) scale in two-point correlation functions. Tests on dark-matter halo catalogues show how the new algorithm successfully recovers the BAO feature in real and redshift space, also for anomalous samples showing a misplaced or absent BAO signature. A comparison with first-order Lagrangian reconstruction is presented, showing that this technique outperforms the linear approximation in recovering an unbiased measurement of the BAO scale. A second version of the algorithm, accounting for the survey geometry and the bias of tracers, is finally tested on low-redshift galaxy samples extracted from mocks specifically designed to match the SDSS-DR12 LRG clustering. The analysis of the anisotropic clustering indicates that non-linear reconstruction is a fundamental tool to break the degeneracy between redshift-space distortions and the Alcock-Paczynski effect. Finally, the application to cosmic void analysis is introduced, showing the great potential of a non-linear modelling of the velocity field in restoring the intrinsic isotropy of voids
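For context, the BAO feature mentioned above is measured in the two-point correlation function, and a standard way to estimate that function from a galaxy or halo catalogue is the Landy-Szalay estimator, sketched below in Python with SciPy. This is generic textbook machinery rather than the thesis' reconstruction algorithm, and the catalogue sizes and binning are arbitrary.

# Landy-Szalay two-point correlation estimator (generic illustration).
import numpy as np
from scipy.spatial import cKDTree

def pair_counts(a, b, r_edges):
    # Cumulative neighbour counts differenced into radial bins (ordered pairs).
    cum = cKDTree(a).count_neighbors(cKDTree(b), r_edges)
    return np.diff(cum).astype(float)

def landy_szalay(data, randoms, r_edges):
    # xi(r) = (DD - 2 DR + RR) / RR, each pair count normalised by its total number of pairs.
    nd, nr = len(data), len(randoms)
    dd = pair_counts(data, data, r_edges) / (nd * (nd - 1))
    rr = pair_counts(randoms, randoms, r_edges) / (nr * (nr - 1))
    dr = pair_counts(data, randoms, r_edges) / (nd * nr)
    return (dd - 2.0 * dr + rr) / rr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    box = 100.0
    data = rng.uniform(0.0, box, size=(2000, 3))       # toy, unclustered catalogue
    randoms = rng.uniform(0.0, box, size=(8000, 3))
    r_edges = np.linspace(1.0, 20.0, 11)
    print(landy_szalay(data, randoms, r_edges))         # should scatter around zero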
Styles APA, Harvard, Vancouver, ISO, etc.
44

Clerjaud, Lilian. « Méthode d’hétérodynage pour la caractérisation de propriétés thermophysiques par thermographie infrarouge dans une large gamme spatiale et temporelle ». Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14040/document.

Texte intégral
Résumé :
De nos jours, l’apport de la miniaturisation a permis d’innombrables progrès scientifiques et techniques : de la microélectronique à la microfluidique et dernièrement les nanotechnologies. Autant de domaines où les enjeux économiques de suivi de qualité ou d’optimisation de production peuvent nécessiter une étape de caractérisation des propriétés intrinsèques des constituants. Parmi ces propriétés, les données thermophysiques permettent notamment de définir la capacité à stocker ou diffuser la chaleur (conductivité, effusivité ou diffusivité thermique par exemple). Une manière d’estimer ces propriétés passe par la connaissance du champ de température. Aux échelles microscopiques, seules des mesures de température sans contact sont adaptées. Les travaux de cette thèse rentrent dans cette catégorie en présentant une méthode de caractérisation de propriétés thermophysiques aux échelles microscopiques par le biais de la thermographie infrarouge. En prenant exemple sur les méthodes hétérodynes développées pour la thermoréflectance, nous avons mis au point un stroboscope électronique dédié à la thermographie infrarouge et permettant de suivre des excitations thermiques locales et périodiques de fréquences caractéristiques de l’ordre du kilohertz avec une fréquence d’acquisition caméra de 25 Hz. En couplant cette méthode, que nous qualifierons de méthode d’Hétérodynage, avec une observation microscopique, nous pouvons ainsi observer des phénomènes de diffusion longitudinale localisés à la surface d’échantillons diffusifs tels que les métaux, impossibles à obtenir avec les applications standard de thermographie infrarouge. À partir de ces données expérimentales, nous montrons sur deux échantillons la manière de remonter à des valeurs de diffusivité dans le plan et dans l'épaisseur. De ces résultats, nous discuterons des limitations des estimations, notamment dues à l'effet de filtre passe-bas du temps d'intégration de la caméra, prépondérant lorsque l'excitation devient haute fréquence, ou à la présence d'une couche émissive (dépôt de spray de peinture noire pour augmenter le contraste thermique) qui peut empêcher la propagation des ondes thermiques de la source au sein du matériau à caractériser dès que la fréquence d'excitation dépasse un seuil dépendant des propriétés thermiques du bicouche étudié. D'une autre manière, nous montrerons que des estimations de diffusivité thermique dans le plan ou transverse sont également possibles par une méthode d'hétérodynage en flash périodique. À titre d'applications futures, nous présenterons une première approche académique de modèle de diffusion avec transport sur un disque tournant pour de futures applications d'écoulement en gouttes pour la microfluidique, une extension des estimations de diffusivité dans le plan pour obtenir des cartographies en scannant la zone étudiée et des résultats d'hétérodynage en régime périodique transitoire qui pourraient s'assimiler à une réponse de température en échelon
Nowadays, the contribution of miniaturization has led to countless advances in science and technology: microelectronics, microfluidics, nanotechnologies... All areas where the economics of quality monitoring and the optimization of production may require a step of characterizing the intrinsic properties of the constituents. Among these properties, the thermophysical data define the ability to store or diffuse heat (thermal conductivity, effusivity or diffusivity, for example). One way to estimate these properties requires knowledge of the temperature field. At the microscale, non-contact temperature measurement is well suited. The work of this thesis falls into this category by offering a method to characterize thermophysical properties at microscopic scales by means of infrared thermography. Building on the heterodyne methods developed for thermoreflectance, an electronic stroboscope has been developed. This method is dedicated to infrared thermography and allows local, periodic thermal excitations with characteristic frequencies of the order of one kilohertz to be followed with a camera frame rate of 25 Hz. By coupling this heterodyne method with a microscope lens, it is possible to observe longitudinal and transverse thermal diffusion phenomena localized at the surface of diffusive samples such as metals, which are impossible to obtain with standard infrared thermography. From the experimental data, values of in-plane and transverse thermal diffusivity are obtained on two samples. Based on these results, the limitations of the estimations are discussed, such as the low-pass filter effect of the integration time of the infrared camera, which becomes dominant at high excitation frequencies, or the presence of an emissive thin layer on the surface of the sample (a dark spray coating used to enhance the thermal contrast), which can stop the propagation of the thermal waves from the source into the material to be characterized as soon as the excitation frequency exceeds a threshold depending on the thermal properties of the studied bilayer. In another direction, first results show that estimations of in-plane or transverse thermal diffusivity are also possible with a heterodyne method based on repeated flashes. For future applications, a first academic approach of a thermal diffusion model with transport on a rotating disk (for future droplet-flow applications in microfluidics), an extension of the in-plane thermal diffusivity estimation to obtain maps by scanning the sample area, and a few heterodyne results in a transient periodic regime, which can be assimilated to a step response, are shown
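To illustrate only the underlying principle (this is not the thesis' stroboscopic electronics or processing chain), the Python sketch below performs a basic digital lock-in demodulation of a single pixel's temperature trace, recovering the amplitude and phase of the component at a chosen modulation frequency. The 25 Hz frame rate matches the abstract, while the modulation frequency and noise level are arbitrary.

# Basic digital lock-in demodulation of one pixel trace (illustration only).
import numpy as np

def lock_in(trace, fs, f_ref):
    # Amplitude and phase of the component of `trace` at the reference frequency f_ref.
    t = np.arange(len(trace)) / fs
    x = 2.0 * np.mean(trace * np.cos(2 * np.pi * f_ref * t))   # in-phase component
    y = 2.0 * np.mean(trace * np.sin(2 * np.pi * f_ref * t))   # quadrature component
    return np.hypot(x, y), np.arctan2(-y, x)

if __name__ == "__main__":
    fs = 25.0                                   # camera frame rate (Hz), as in the abstract
    f_mod = 1.2                                 # arbitrary demodulated frequency (Hz)
    t = np.arange(0.0, 40.0, 1.0 / fs)          # 40 s of frames (48 full modulation cycles)
    trace = 0.5 * np.cos(2 * np.pi * f_mod * t + 0.3) + 0.05 * np.random.randn(t.size)
    amp, phase = lock_in(trace, fs, f_mod)
    print(f"amplitude ~ {amp:.3f}, phase ~ {phase:.3f} rad")    # expect about 0.5 and 0.3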
Styles APA, Harvard, Vancouver, ISO, etc.
45

Oruganti, Surya Kaundinya. « Stochastic models on residual scales in LES of sprays in diesel-like conditions : spray formation, turbulent dispersion and evaporation of droplets ». Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEC042.

Texte intégral
Résumé :
Dans le cadre de la simulation à grandes échelles (LES), la thèse aborde la simulation des sprays dans les conditions d’un moteur à injection directe. La vitesse de l’injection des sprays dans ces conditions est très élevée. En conséquence, des structures énergétiques intermittentes aux petites échelles turbulentes peuvent se manifester dans l’écoulement produit. C’est pourquoi l’accent est mis sur la simulation stochastique des effets turbulents aux échelles non-résolues par LES dans les conditions d’un moteur à injection directe. L’impact de ces effets sur l’atomisation primaire et secondaire, la dispersion des gouttelettes et leur vaporisation représente l’élément essentiel de cette thèse. Dans le but de modéliser ces effets d’intermittence aux échelles non-résolues, deux différentes approches ont été proposées récemment dans la littérature. Dans la thèse, l’accent est mis sur leur application et une éventuelle amélioration pour les conditions d’un moteur à injection directe. La première approche est LES-SSAM (Stochastic Subgrid Acceleration Model). Contrairement aux LES classiques, la LES-SSAM modélise l’accélération turbulente non résolue par le forçage de sous-maille des équations de Navier-Stokes. Ce forçage représente un processus stochastique de type Ornstein-Uhlenbeck construit de telle façon que les propriétés stochastiques de l’accélération, observées par les expériences et les simulations directes, sont représentées. Une telle LES-SSAM, où l’expression de la norme de l’accélération de sous-maille est modifiée, a été appliquée et testée pour la modélisation de l’écoulement interne de l’injecteur d’une simple configuration. Les résultats ont démontré l’efficacité de cette approche malgré la résolution grossière du maillage. Une autre application de LES-SSAM, dans la thèse, concerne sa combinaison avec la méthode VoF pour la simulation de l’écoulement à l’interface au voisinage de l’injecteur. Ici aussi, l’efficacité de cette combinaison a été démontrée en comparaison avec l’expérience et les méthodes numériques actuellement employées pour la simulation de l’atomisation primaire. La deuxième approche abordée dans la majeure partie de la thèse, et qui vise aussi à représenter les effets de l’intermittence aux échelles non-résolues, se base sur la formulation stochastique de la dynamique des gouttes en pulvérisation et en vaporisation, tout en couplage two-way avec l’écoulement turbulent. Les travaux contribuent à la vérification et l’amélioration de cette formulation stochastique. Ainsi le modèle stochastique d’atomisation secondaire est contrôlé par le processus stochastique log-normal pour la dissipation visqueuse. La même variable est la variable-clé pour le modèle de dispersion de gouttes, ces dernières étant soit inférieures soit supérieures à l’échelle de Kolmogorov. La dernière situation a été décrite par la modification de l’équation de mouvement d’une goutte. Enfin, un nouveau modèle stochastique de vaporisation des gouttes, dont le mélange turbulent fait partie du modèle, a été proposé et testé. Tous ces modèles stochastiques ont été implantés dans le code OpenFoam puis testés en comparaison avec d’autres modèles et avec les données expérimentales présentées par le réseau Engine Combustion Network (ECN). L’avantage de l’application de ces modèles sur les maillages à la résolution grossière a été clairement démontré
This thesis is concerned with the Large Eddy Simulation (LES) of fuel sprays in direct-injection engines. Given the high injection velocities of sprays, the resulting turbulent flow may be characterized by energetic intermittent structures at small spatial scales. Therefore, the emphasis in this thesis is put on the stochastic simulation of turbulent effects on unresolved scales under engine-relevant conditions. The impact of these effects on spray primary and secondary atomization, and on droplet dispersion and evaporation, represents the main focus of this thesis. The further assessment and modification of two recently developed approaches was the main objective of this work. The first is the LES-SSAM (stochastic subgrid acceleration model) approach, in which the Navier-Stokes equations are forced on residual scales. This forcing is given by an Ornstein-Uhlenbeck stochastic process constructed to reproduce the stochastic properties of the subgrid acceleration known from experiments and DNS. In the framework of this approach, with the expression of the acceleration norm modified for wall-bounded conditions, the first step concerned the simulation of the nozzle internal flow on a coarse grid. The results showed the efficiency of this approach. Another step in this part was to combine LES-SSAM with the interface-tracking VoF method in the simulation of the near-field of the spray. The assessment of this approach, in comparison with measurements and with alternative approaches known from the literature, demonstrated the potential of such a combination of the two methods. The second approach in this thesis, which also targets intermittency effects on residual scales, concerned the stochastic modeling of the secondary breakup, dispersion and evaporation of droplets, introducing two-way coupling between the droplets and a highly turbulent flow. Here, the assessment and further development of stochastic droplet models represent the main contribution of this thesis. The model of the secondary breakup is controlled by a stochastic log-normal process for the viscous dissipation rate. The same stochastic variable is the key variable for the dispersion model of droplets below and above the Kolmogorov scale. The droplet equation of motion for the latter case was modified, giving a significant role to the simulation of the stochastic direction of the droplet acceleration. Finally, a new stochastic model of turbulent evaporation, in which a stochastic mixing process is part of the evaporation model, is also presented in this thesis. The different stochastic models outlined above are assessed in comparison with state-of-the-art models available in the literature and with the experiments of the Engine Combustion Network (ECN). The results have shown that the stochastic models give a good representation of both macroscopic and microscopic spray characteristics on relatively coarse grids
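Because the abstract names an Ornstein-Uhlenbeck process as the stochastic subgrid forcing, the minimal Euler-Maruyama sketch below shows how such a process is simulated. The relaxation time and amplitude are arbitrary placeholders, and none of the actual LES-SSAM norm/orientation modelling or its wall modification is reproduced here.

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck process (illustration only).
import numpy as np

def ornstein_uhlenbeck(n_steps, dt, tau, sigma, x0=0.0, seed=0):
    # Integrates dX = -(X / tau) dt + sigma * sqrt(2 / tau) dW,
    # whose stationary distribution has zero mean and standard deviation sigma.
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for n in range(1, n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[n] = x[n - 1] - (x[n - 1] / tau) * dt + sigma * np.sqrt(2.0 / tau) * dw
    return x

if __name__ == "__main__":
    a = ornstein_uhlenbeck(n_steps=200_000, dt=1e-4, tau=0.05, sigma=1.0)
    print("mean ~", a.mean(), " std ~", a.std())   # expect roughly 0 and 1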
Styles APA, Harvard, Vancouver, ISO, etc.
46

Seltz, Andréa. « Application of deep learning to turbulent combustion modeling of real jet fuel for the numerical prediction of particulate emissions Direct mapping from LES resolved scales to filtered-flame generated manifolds using convolutional neural networks Solving the population balance equation for non-inertial particles dynamics using probability density function and neural networks : application to a sooting flame ». Thesis, Normandie, 2020. http://www.theses.fr/2020NORMIR08.

Texte intégral
Résumé :
Face à l'urgence climatique, l’efficacité énergétique et la réduction des émissions polluantes sont devenues une priorité pour l'industrie aéronautique. La précision de la modélisation des phénomènes physicochimiques joue un rôle critique dans la qualité de la prédiction des émissions de suie et des gaz à effet de serre par les chambres de combustion. Dans ce contexte, des méthodes d’apprentissage profond sont utilisées pour construire des modélisations avancées des émissions de particules. Une méthode automatisée de réduction et d’optimisation de la cinétique chimique d’un combustible aéronautique réel est dans un premier temps appliquée à la simulation aux grandes échelles pour la prédiction des émissions de monoxyde de carbone. Ensuite, des réseaux de neurones sont entraînés pour simuler le comportement dynamique des suies dans la chambre de combustion et prédire la distribution de taille des particules émises
With the climate change emergency, pollutant and fuel consumption reductions are now a priority for the aircraft industry. In combustion chambers, chemistry and soot modeling are critical to correctly quantify engine soot particle and greenhouse gas emissions. This thesis aimed at improving numerical tools for aircraft pollutant prediction, in terms of computational cost and prediction level, for high-fidelity engine simulations. This was achieved by enhancing chemistry reduction tools, allowing CO emissions of aircraft engines to be predicted at a cost affordable for industry. Next, a novel closure model for unresolved terms in the LES filtered transport equations is developed, based on neural networks (NN), to propose better flame modeling. Then, an original soot model for high-fidelity engine simulations, also based on NN, is presented. This new model is applied to a one-dimensional premixed sooting flame, and finally to the LES of an industrial combustion chamber with comparison to measured soot
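As a generic illustration of an NN-based closure (nothing here reproduces the convolutional architectures, inputs or training data of the thesis), the short sketch below fits a small scikit-learn multilayer perceptron to regress a synthetic "unresolved" term from stand-in resolved quantities; the variable names and the synthetic target are invented for the example.

# Toy NN regression of an unresolved term from resolved inputs (illustration only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 3))          # stand-ins for resolved LES inputs (invented)
y = np.tanh(3.0 * X[:, 0]) * X[:, 1] + 0.1 * X[:, 2] ** 2   # synthetic "unresolved" target

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:4000], y[:4000])            # train on the first 4000 samples
print("held-out R^2:", model.score(X[4000:], y[4000:]))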
Styles APA, Harvard, Vancouver, ISO, etc.
47

Meng, Xiangyi. « Understanding classical and quantum information at large scales ». Thesis, 2020. https://hdl.handle.net/2144/42063.

Texte intégral
Résumé :
This dissertation contributes to a better understanding of concepts, theories, and applications of classical and quantum information in various large-scale systems. The dissertation is structured in two parts. The first half is concerned with classical systems. First, we study a basic yet never fully appreciated property of many real-world complex networks, the scale-free (SF) property, which was often considered to be represented only by the degree distribution. We define, however, a new fundamental quantity, the degree-degree distance, which can better represent the SF property by showing statistically more significant power laws and better explain the evolution of real-world networks, e.g., Wikipedia webpages. Second, we study the brain tractography of a healthy subject using diffusion-weighted magnetic resonance imaging (dMRI) data and find that the dependence of the dMRI signal on the interpulse time can decode smaller-than-resolution brain structure and might unravel how information is transmitted. This finding is confirmed by Monte Carlo simulation of water-molecule diffusion, which lets us understand how to optimally measure the thickness of axon sheets in the brain. The second half is concerned with quantum systems. Our first work is to understand how to establish long-distance entanglement transmission in a quantum network where each link has non-zero concurrence, a measure of bipartite entanglement. We introduce a fundamental statistical theory, concurrence percolation theory (ConPT), and find the existence of an entanglement transmission threshold predicted by ConPT that is lower than the known classical-percolation-based results, a "quantum advantage" that is more general and efficient than expected. ConPT also shows a percolation-like universal critical behavior derived by finite-size analysis. Our second work is to study the continuous-time quantum walk as an open system that strongly interacts with the environment, where non-Markovianity may significantly speed up the dynamics. We confirm this speed-up by first introducing a general multi-scale perturbation method that works on integro-differential equations and then building the Hamiltonian on regular networks, e.g., star or complete graphs, which can be mapped to an error-correction algorithm scheme of practical significance. Our third work explores the possible use of entanglement entropy (EE) in machine-learning fields. We introduce a new long-short-term-memory-based recurrent neural network architecture using tensorization techniques to forecast chaotic time series, the learnability of which is determined not only by the number of free parameters but also by the tensorization complexity, recognized as how EE scales.
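As a point of reference for the scale-free property discussed above (the dissertation's own degree-degree distance is not implemented here), the conventional diagnostic is a power-law fit to the degree distribution. The sketch below estimates the exponent for a Barabasi-Albert graph with the discrete maximum-likelihood approximation of Clauset, Shalizi and Newman (2009); the graph size and lower cutoff are arbitrary.

# Conventional power-law exponent estimate for a degree distribution (illustration only).
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(n=20_000, m=3, seed=1)          # toy scale-free network
degrees = np.array([d for _, d in G.degree()])

k_min = 3                                                    # arbitrary lower cutoff
k = degrees[degrees >= k_min]
alpha = 1.0 + len(k) / np.sum(np.log(k / (k_min - 0.5)))     # discrete MLE approximation
print(f"estimated power-law exponent: {alpha:.2f} (BA model: about 3)")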
Styles APA, Harvard, Vancouver, ISO, etc.
48

Charney, Noah. « Movin' & groovin' salamanders : Conservation implications of large scales and quirky sex ». 2011. https://scholarworks.umass.edu/dissertations/AAI3461997.

Texte intégral
Résumé :
Mole salamanders (Ambystoma) and woodfrogs (Lithobates sylvaticus) are abundant in New England and depend on ephemeral wetlands for breeding. Their aquatic habitats have been well studied and are protected by several local and regional regulations. State endangered species laws also protect marbled salamanders (A. opacum), Jefferson salamanders (A. jeffersonianum), and blue-spotted salamanders (A. laterale). However, these amphibians spend most of their adult lives in terrestrial habitats that remain poorly protected and elusive to researchers. In chapter 1, I developed a novel technique using passive integrated transponders for tracking small animals. I used this technique to track marbled salamanders walking up to 200 m from their breeding pond during post-breeding migrations. In Chapter 2, I examined the importance of multiple habitat variables for controlling the distributions of woodfrogs and spotted salamanders at 455 ponds in western Massachusetts. Based on a variable-comparison technique I developed, the best predictor for either species of amphibian was the amount of forest in the surrounding landscape. Both species were found more frequently in upland forests where the ponds are least protected by state and federal wetland regulations. In chapter 3, I used my data from chapter 2 and three other similar data sets to conduct an analysis of spatial scale and to parameterize a recently published resistant kernel model. The complex model parameterized by an expert panel did significantly worse than the null model. The distributions of both amphibians were best predicted by measuring the landscape at very large scales (over 1000 m). The most effective scales for conservation may be largest for organisms of intermediate dispersal capability. In chapter 4, I explored the evolution and genetics of the Jefferson/blue-spotted/unisexual salamander complex. I framed research into the fascinating unisexual reproductive system with a model that relates nuclear genome replacement, positive selection on hybrids, and biogeography of the species complex. I parameterized this model using genetic data taken from salamanders spanning Massachusetts and an individual-based breeding simulation. If paternal genomes are transmitted to offspring with the frequencies reported from laboratory experiments, then my model suggests that there must be strong selection favoring unisexuals with hybrid nuclei.
Styles APA, Harvard, Vancouver, ISO, etc.
49

Charney, Noah D. « Movin' & Groovin' Salamanders : Conservation Implications of Large Scales and Quirky Sex ». 2011. https://scholarworks.umass.edu/open_access_dissertations/373.

Texte intégral
Résumé :
Mole salamanders (Ambystoma) and woodfrogs (Lithobates sylvaticus) are abundant in New England and depend on ephemeral wetlands for breeding. Their aquatic habitats have been well studied and are protected by several local and regional regulations. State endangered species laws also protect marbled salamanders (A. opacum), Jefferson salamanders (A. jeffersonianum), and blue-spotted salamanders (A. laterale). However, these amphibians spend most of their adult lives in terrestrial habitats that remain poorly protected and elusive to researchers. In chapter 1, I developed a novel technique using passive integrated transponders for tracking small animals. I used this technique to track marbled salamanders walking up to 200 m from their breeding pond during post-breeding migrations. In Chapter 2, I examined the importance of multiple habitat variables for predicting the distributions of woodfrogs and spotted salamanders at 455 ponds in western Massachusetts. Based on a variable-comparison technique I developed, the best predictor for either species of amphibian was the amount of forest in the surrounding landscape. Both species were found more frequently in upland forests where the ponds are least protected by state and federal wetland regulations. In chapter 3, I used my data from chapter 2 and three other similar data sets to conduct an analysis of spatial scale and to parameterize a recently published resistant kernel model. The complex model parameterized by an expert panel did significantly worse than the null model. The distributions of both amphibians were best predicted by measuring the landscape at very large scales (over 1000 m). The most effective scales for conservation may be largest for organisms of intermediate dispersal capability. In chapter 4, I explored the evolution and genetics of the Jefferson/blue-spotted/unisexual salamander complex. I framed research into the fascinating unisexual reproductive system with a model that relates nuclear genome replacement, positive selection on hybrids, and biogeography of the species complex. I parameterized this model using genetic data taken from salamanders spanning Massachusetts and an individual-based breeding simulation. If paternal genomes are transmitted to offspring with the frequencies reported from laboratory experiments, then my model suggests that there must be strong selection favoring unisexuals with hybrid nuclei.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Tuttle, Samuel Everett. « Interrelationships between soil moisture and precipitation at large scales, inferred from satellite observations ». Thesis, 2015. https://hdl.handle.net/2144/14060.

Texte intégral
Résumé :
Soil moisture influences the water and energy cycles of terrestrial environments, and thus plays an important climatic role. However, the behavior of soil moisture at large scales, including its impact on atmospheric processes such as precipitation, is not well characterized. Satellite remote sensing allows for indirect observation of large-scale soil moisture, but validation of these data is complicated by the difference in scales between remote sensing footprints and direct ground-based measurements. To address this problem, a method, based on information theory (specifically, mutual information), was developed to determine the useful information content of satellite soil moisture records using precipitation observations. This method was applied to three soil moisture datasets derived from Advanced Microwave Scanning Radiometer for EOS (AMSR-E) measurements over the contiguous U.S., allowing for spatial identification of the algorithm with the least inferred error. Ancillary measures of biomass and topography revealed a strong dependence between algorithm performance and confounding surface properties. Next, statistical causal identification methods (i.e. Granger causality) were used to examine the link between AMSR-E soil moisture and the occurrence of next day precipitation, accounting for long term variability and autocorrelation in precipitation. The probability of precipitation occurrence was modeled using a probit regression framework, and soil moisture was added to the model in order to test for statistical significance and sign. A contrasting pattern of positive feedback in the western U.S. and negative feedback in the east was found, implying a possible amplification of drought and flood conditions in the west and damping in the east. Finally, observations and simulations were used to demonstrate the pitfalls of determining causality between soil moisture and precipitation. It is shown that ignoring long term variability and precipitation autocorrelation can result in artificial positive correlation between soil moisture and precipitation, unless explicitly accounted for in the analysis. In total, this dissertation evaluates large-scale soil moisture measurements, outlines important factors that can cloud the determination of land surface-atmosphere hydrologic feedback, and examines the causal linkage between soil moisture and precipitation at large scales.
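To make the probit step concrete, the hedged Python sketch below (synthetic data; the variable names and the simple persistence and seasonal controls are ours, not the dissertation's actual setup) tests whether antecedent soil moisture adds explanatory power for next-day precipitation occurrence using statsmodels.

# Toy probit test of a soil-moisture effect on next-day precipitation occurrence.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 3000
prev_precip = rng.binomial(1, 0.3, n)                  # yesterday's occurrence (persistence control)
season = np.sin(2 * np.pi * np.arange(n) / 365.0)      # crude seasonal-cycle control
soil_moisture = 0.2 * prev_precip + rng.normal(0.3, 0.1, n)

# Synthetic "truth" with a positive soil-moisture feedback on next-day rain occurrence.
linear_predictor = -0.8 + 0.6 * prev_precip + 0.4 * season + 2.0 * (soil_moisture - 0.3)
rain_next = rng.binomial(1, norm.cdf(linear_predictor))

X = sm.add_constant(np.column_stack([prev_precip, season, soil_moisture]))
fit = sm.Probit(rain_next, X).fit(disp=False)
print("soil-moisture coefficient:", round(fit.params[3], 3), "p-value:", fit.pvalues[3])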
Styles APA, Harvard, Vancouver, ISO, etc.