
Dissertations on the topic "Dynamic experimental data"

Format your source in APA, MLA, Chicago, Harvard, and other styles


Explore the top 47 dissertations for research on the topic "Dynamic experimental data".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its online abstract, provided these are available in the publication's metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Urgueira, Antonio Paulo Vale. "Dynamic analysis of coupled structures using experimental data." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46590.

2

Henning, Gustav. "Visualization of neural data : Dynamic representation and analysis of accumulated experimental data." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166770.

Abstract:
The scientific method is an integral part of the investigation and exploration of hypotheses. Although procedures may vary from one field to the next, most have common identifiable stages. Today, there is no lack of tools that illustrate data in different graphical mediums. This thesis focuses instead on the type of tools that researchers use to investigate their hypotheses' validity. When a sufficient amount of data is gathered, it can be presented for analysis in meaningful ways to illustrate patterns or abnormalities that would otherwise go unnoticed when viewed only as raw numbers. However useful static visualization of data can be when presented in a scientific paper, researchers are often overwhelmed by the number of plots and graphs that can be made using only a sliver of data. This thesis therefore introduces software whose purpose is to demonstrate the needs of researchers in analyzing data from repeated experiments, in order to speed up the process of recognizing variations between them.
3

Kennedy, Stephen James. "Transforming big data into knowledge : experimental techniques in dynamic visualization." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/73818.

Abstract:
Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 122-124).
Information visualizations, especially those utilizing web-based platforms, are becoming an increasingly common medium for exchanging ideas. This emergent class of tools enabling web-based, interactive platforms for visualizing data should be considered by urban planners and designers as an opportunity to create new modes of disseminating and communicating information. This thesis provides an overview of new visualization tools: how they are being developed and combined, their applications, and their potential future uses. It also explores the implications of such tools in contexts where technology is not yet mainstream. This is illustrated through a case study of using mobile phones to gather data on the bus system in Dhaka, Bangladesh. The research draws from literature written on critical cartography, visualization, and visual culture in urban design and planning. The work first develops a best practices workflow of existing and emerging visualization tools and platforms. It then constructs prototypes of products for various audiences that illustrate the outputs of data collected on Dhaka's bus system, showcasing the potentials of these new tools and platforms.
4

Davis, Roosevelt. "Determining dynamic properties of elastic coupling using experimental data and finite element analysis." Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-07242003-163336.

5

Jain, Jhilmil. "User experience design and experimental evaluation of extensible and dynamic viewers for data structures." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2006%20Fall/Dissertations/JAIN_JHILMIL_3.pdf.

6

Jang, Seunghee Shelly. "Parameter estimation of stochastic nonlinear dynamic processes using multiple experimental data sets : with biological applications." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7294.

Abstract:
The dynamic behavior of many chemical and biological processes is defined by a set of nonlinear differential equations that constitute a model. These models typically contain parameters that need to be estimated using experimental data. A number of factors such as sampling intervals, number of measurements and noise level characterize the quality of data, and have a direct effect on the quality of estimated parameters. The quality of experimental data is rather poor in many processes due to instrument limitations or other physical and economical constraints. Traditional parameter estimation methods either yield inaccurate results or are not applicable when applied to such data. Despite this, it is common practice to apply them to a merged data set obtained by pooling together data from multiple experiments. Considering the difficulties in maintaining consistent experimental conditions, straightforward integration of multiple data sets will not provide the best estimates of parameters. In this thesis, a new approach to estimate parameters of nonlinear dynamic models using multiple experimental data sets is proposed. The approach uses Bayesian inference, and sequentially updates the prior probability distribution of the parameters for systematic integration of multiple data sets. An expression for the posterior probability distribution of the parameters conditional on all experimental data sets is derived. This expression is often analytically intractable; therefore two instances of a numerical approximation method, Markov Chain Monte Carlo (MCMC) - the Metropolis-Hastings (MH) algorithm and the Gibbs sampler (GS) - are implemented. The two algorithms form inner and outer levels of iterations, where the MH algorithm is used at the inner level to estimate conditional probability distributions of individual parameters, which are used at the outer level in conjunction with the GS to estimate the joint probability distribution of the parameters. The proposed method is applied to three nonlinear biological processes to estimate the probability distribution of parameters from a small number of irregular samples. The approximated probability distribution provides a straightforward tool to calculate confidence intervals of parameter estimates and is robust to the initial guess of parameter values. Correlation among model parameters, the quality of each model, and the approach taken to mitigate the high cost of MCMC sampling are discussed.
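To make the nested sampling scheme described in this abstract concrete, the sketch below shows a Metropolis-Hastings step nested inside a Gibbs-style sweep over parameters, with the likelihood pooled over multiple data sets. This is a minimal illustration under assumed ingredients (a two-parameter exponential-decay model, Gaussian noise, flat positive priors), not the author's implementation:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical model x(t) = x0 * exp(-k*t) and two noisy data sets
    t = np.linspace(0.0, 5.0, 8)
    datasets = [2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.05, t.size)
                for _ in range(2)]

    def log_post(theta):
        """Log-posterior pooled over ALL data sets (flat positive priors)."""
        x0, k = theta
        if x0 <= 0 or k <= 0:
            return -np.inf
        pred = x0 * np.exp(-k * t)
        return sum(-0.5 * np.sum((d - pred) ** 2) / 0.05 ** 2 for d in datasets)

    theta = np.array([1.0, 1.0])            # initial guess for (x0, k)
    chain = []
    for sweep in range(5000):               # outer Gibbs-style sweep
        for i in range(theta.size):         # one coordinate at a time
            prop = theta.copy()
            prop[i] += rng.normal(0, 0.05)  # inner Metropolis-Hastings step
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop
        chain.append(theta.copy())

    samples = np.array(chain[1000:])        # discard burn-in
    print("posterior mean (x0, k):", samples.mean(axis=0))
    print("95% interval for k:", np.percentile(samples[:, 1], [2.5, 97.5]))

The thesis's sequential updating would instead use the posterior from one data set as the prior for the next; here the data sets are simply pooled in the likelihood for brevity.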
7

Culotta, Vittorio G. "Theory versus experiment of the rotordynamic and leakage characteristics of smooth annular bushing oil seals." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1512.

Abstract:
This thesis provides a comparison of experimental rotordynamic coefficients for laminar, smooth-bushing oil seals with theoretical predictions from XLLubeGT and XLAnSeal. The experimental results come from a new test rig developed at the Turbomachinery Laboratory at Texas A&M University. The two software programs were developed to predict the static and dynamic characteristics of seals. XLLubeGT is a Reynolds-equation-based program, while XLAnSeal is based on a bulk-flow Navier-Stokes model that includes temporal and convective acceleration terms. XLAnSeal was used to predict the added-mass terms of the seals, since XLLubeGT assumes those terms to be zero or negligible. The input data for the two seal codes were the actual measured conditions from the test rig; the input parameters included inlet inertia effects and thermal gradients along the seal. Both XLLubeGT and XLAnSeal can analyze straight-bore seals with different inlet and outlet clearances - essentially a tapered seal - but seal expansion caused by the radial differential pressure across the seal bushing was not included. Theoretical and experimentally determined dynamic characteristics include stiffness, damping, inertia terms and the whirl frequency ratio (WFR). Seal static characteristics are also reported; they include leakage, shaft centerline loci and Reynolds numbers. Test conditions include three shaft speeds (4000, 7000 and 10,000 rpm), three test pressures (21, 45 and 69 bar [300, 650 and 1000 psi]) and multiple eccentricities from 0.0 to 0.7. The results for the dynamic characteristics show good correlation of the experimental data with the theoretical values up to an eccentricity of about 0.5. At higher eccentricities, the theory generally under-predicts the dynamic characteristics; inertia terms are greatly under-predicted. The results for the static characteristics also show good correlation with the experimental data, but they too tend to be under-predicted at higher eccentricities.
8

Williams, Vincent Troy. "An Experimental Study of Distance Sensitivity Oracles." Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3697.

Abstract:
The paper "A Nearly Optimal Oracle for Avoiding Failed Vertices and Edges" by Aaron Bernstein and David Karger lays out a nearly optimal algorithm for finding the shortest distances and paths between vertices with any given single failure in constant time without reconstructing the oracle. Using their paper as a guideline, we have implemented their algorithm in C++ and recorded each step in this thesis. Each step has its own pseudo-code and its own analysis to prove that the entire oracle construction stays within the running time and total space bounds stated by the authors. The efficiency of the algorithm is compared against the total running time and total space needed by the brute-force method. Using multiple test cases with an increasing number of vertices and edges, we have experimentally validated that their algorithm holds true to their statements of space, running time, and query time.
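For context, the brute-force baseline mentioned above amounts to rerunning a full shortest-path computation for every (source, failed-vertex) query, which is exactly the per-query cost the oracle is designed to avoid. A minimal sketch of that baseline on a hypothetical toy graph (this is not the thesis's C++ implementation):

    import heapq

    def dijkstra_avoiding(adj, source, failed=None):
        """Shortest distances from source, skipping a failed vertex entirely."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == failed or d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                if v != failed and d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    # Each "distance avoiding f" query costs one full Dijkstra run here.
    adj = {"s": [("a", 1), ("b", 4)], "a": [("t", 5)], "b": [("t", 1)]}
    print(dijkstra_avoiding(adj, "s")["t"])              # 5, via b
    print(dijkstra_avoiding(adj, "s", failed="b")["t"])  # 6, forced via a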
9

Fillius, James B. "An experimental study of steady state high heat flux removal using spray cooling." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FFillius.pdf.

10

Šteffek, Libor. "Simulace energetické náročnosti a reálné užívání budov." Doctoral thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-433629.

Abstract:
This dissertation thesis focuses primarily on the experimental measurement of the energy consumption of a given energy-passive family house, together with theoretical research in the field of energy calculations using computer simulations. The results of quasi-stationary and dynamic simulations, with varying computational and real-time climate data, are compared with the experimental measurements. Using the dynamic calculation model, which was validated against the actually measured data, the relationship between the architectural design and the energy performance of the building was analyzed. The influence of selected operating modes on heat consumption for heating, cooling, ventilation, and interior overheating is examined. The mutual interaction of several input parameters across variant solutions provides the basis for optimization of the whole design.
11

Rosenblum, Michael. "Phase synchronization of chaotic systems : from theory to experimental applications." Thesis, Universität Potsdam, 2003. http://opus.kobv.de/ubp/volltexte/2005/70/.

Abstract:
In a classical context, synchronization means adjustment of the rhythms of self-sustained periodic oscillators due to their weak interaction. The history of synchronization goes back to the 17th century, when the famous Dutch scientist Christiaan Huygens reported on his observation of synchronization of pendulum clocks: when two such clocks were put on a common support, their pendula moved in perfect agreement. In rigorous terms, it means that due to the coupling the clocks started to oscillate with identical frequencies and tightly related phases. Being probably the oldest scientifically studied nonlinear effect, synchronization was understood only in the 1920s, when E. V. Appleton and B. Van der Pol systematically - theoretically and experimentally - studied synchronization of triode generators. Since then the theory has been well developed and has found many applications.
Nowadays it is well known that certain systems, even rather simple ones, can exhibit chaotic behaviour. This means that their rhythms are irregular and cannot be characterized by a single frequency. However, as is shown in this Habilitation work, one can extend the notion of phase to this class of systems as well and observe their synchronization, i.e., agreement of their (still irregular!) rhythms: due to very weak interaction, relations between the phases and average frequencies appear. This effect, called phase synchronization, was later confirmed in laboratory experiments of other scientific groups.
Understanding the synchronization of irregular oscillators allowed us to address an important problem of data analysis: how to reveal weak interaction between systems if we cannot influence them but can only passively observe them, measuring some signals. This situation is very often encountered in biology, where synchronization phenomena appear at every level - from cells to macroscopic physiological systems - in normal states as well as in severe pathologies. With our methods we found that the cardiovascular and respiratory systems in humans can adjust their rhythms, and that the strength of their interaction increases with maturation. Next, we used our algorithms to analyse the brain activity of Parkinsonian patients. The results of this collaborative work with neuroscientists show that different brain areas synchronize just before the onset of pathological tremor. Moreover, we succeeded in localizing the brain areas responsible for tremor generation.
12

Lindholm, Brian Eric. "Reconciliation of a Rayleigh-Ritz beam model with experimental data." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06102009-063201/.

13

Radou, Gaël. "Helicase functional dynamics from low-resolution experimental data and simulation." Thesis, University of Leeds, 2015. http://etheses.whiterose.ac.uk/11694/.

Abstract:
The biological function of large macromolecular assemblies depends on their structure and their dynamics over a broad range of timescales; for this reason, their investigation poses significant challenges to conventional experimental techniques. A promising experimental technique is hydrogen-deuterium exchange detected by mass spectrometry (HDX-MS). I begin by presenting a new computational method for quantitative interpretation of deuterium exchange kinetics. The method is tested on the hexameric viral helicase φ12 P4, which pumps RNA into a virus capsid at the expense of ATP hydrolysis. Molecular dynamics simulations accurately predict the exchange kinetics of most peptide fragments and provide a residue-level interpretation of the low-resolution experimental results. This approach is also a powerful tool to probe mechanisms that cannot be observed by X-ray crystallography, or that occur over timescales longer than those that can realistically be simulated, such as the opening of the hexameric ring. Once validated, the method is applied to a homologous system, the packaging motor φ8 P4, for which the RNA loading and translocation mechanisms remain elusive. Quantitative interpretation of HDX-MS data, together with Förster resonance energy transfer (FRET) and computational observations, suggests that the C-terminal domain of the motor plays a crucial role. A new translocation model of φ8 P4 is proposed, in which the affinity between the motor and RNA is modulated by the C-termini. In the final results chapter, the amount of structural information carried by HDX-MS data is quantitatively analysed. The impact of averaging the exchange over peptide fragments on the information content is investigated, and the complementarity between HDX-MS data and data obtained from other techniques (such as NMR, FRET or SAXS) is examined.
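The forward prediction at the heart of such methods can be written compactly: in the commonly used model, the deuterium uptake of a peptide is a sum over its residues of 1 - exp(-k_int * t / P), where k_int is the intrinsic exchange rate and P a protection factor estimated from simulation. A small sketch with made-up numbers (assumptions for illustration, not values from the thesis):

    import numpy as np

    def peptide_uptake(t, k_int, P):
        """Predicted deuterium uptake of one peptide at exposure times t:
        per-residue exchange 1 - exp(-k_int*t/P), summed over residues."""
        rates = np.asarray(k_int, dtype=float) / np.asarray(P, dtype=float)
        return np.sum(1.0 - np.exp(-np.outer(np.asarray(t, float), rates)), axis=1)

    # Hypothetical 4-residue fragment: intrinsic rates (1/s), protection factors
    k_int = [10.0, 5.0, 8.0, 2.0]
    P = [1e2, 1e4, 1e1, 1e3]
    times = [10.0, 60.0, 600.0, 3600.0]      # exposure times in seconds
    print(peptide_uptake(times, k_int, P))   # rises toward 4 exchanged residues

Comparing such per-fragment predictions, computed from simulation-derived protection factors, against measured HDX-MS uptake curves is what enables the residue-level interpretation described above.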
14

Corus, Mathieu. "Amélioration de méthodes de modification structurale par utilisation de techniques d'expansion et de réduction de modèle." PhD thesis, Ecole Centrale Paris, 2003. http://tel.archives-ouvertes.fr/tel-00011298.

Abstract:
Structural modification methods are the set of techniques used to estimate the influence of an arbitrary modification on the dynamic behavior of a structure. In the general case, the behavior model of the target structure, like that of the modification, may be either numerical or experimental. In this work, only the particular case of coupling an experimental model of the structure with a numerical model of the modification is treated.
The fundamental concepts used in this thesis are then presented. The relations of structural dynamics for discrete problems are recalled, together with the principles of modal synthesis, dynamic substructuring and model reduction, as well as the notion of interface modes. The classical formulations of structural modification methods are then detailed in order to illustrate their limitations and restrictions.
An original formulation is then proposed that accounts for incompatibilities between the measurements and the DOFs of the structure/modification interface, and that regularizes the construction of a coupled behavior model. This first contribution of the thesis relies on data expansion and model reduction techniques. Indicators are also constructed to estimate the consistency of the resulting prediction. These developments are applied to a numerical demonstrator, and the results are compared with the predictions of the classical methods. The methodology associated with this new formulation is then presented in detail.
The influence of the various factors involved in the construction of the coupled model and in the quality of the prediction is then analyzed in detail. This analysis yields a non-exhaustive list of precautions to take when implementing the proposed method, from the practical execution of the experimental modal analysis to the interpretation of the first results.
Finally, several applications are presented. A first, academic structure demonstrates the feasibility of the method. A second study, carried out on an industrial case, illustrates the potential time savings by comparing the prediction with the results of a study based on an updated finite element model of the structure. A third study illustrates the application of the method to a typical case: the modal analysis of the target structure makes it possible to understand the problem, a modification is designed, manufactured and installed, and the prediction is then compared with the modal analysis results of the modified structure. The last application shows the limits of the methodology: a multi-objective study of an industrial structure over a wide frequency band opens the way to future work and shows the nature of the difficulties to be overcome.
15

Deng, Jiantao. "Adaptation of A TruckSim Model to Experimental Heavy Truck Hard Braking Data." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259633762.

16

Martinez-Flores, Rene. "Damage assessment potential of a novel system identification technique - experimental verification." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1028%5F1%5Fm.pdf&type=application/pdf.

17

Farkas, I., A. Aszodi, J. Elter, J. Klepac, J. Remis, S. Kliem, T. Höhne, T. Toppila, and I. Boros. "The European project FLOMIX-R: Description of the experimental and numerical studies of flow distribution in the reactor primary circuit (Final report on WP 3)." Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-28628.

Abstract:
The flow distribution in the primary circuit of a pressurized water reactor was studied with experiments and Computational Fluid Dynamics (CFD) simulations. The main focus was on the flow field and mixing in the downcomer of the pressure vessel: how different factors, such as the orientation of the operating loops, the total loop flow rate and the asymmetry of the loop flow rates, affect the outcome. In addition to the flow-field studies, the overall applicability of CFD methods for primary-circuit thermal-hydraulic analysis was evaluated on the basis of CFD simulations of the mixing experiments at the ROCOM (Rossendorf Coolant Mixing Model) test facility and the mixing experiments at the Paks NPP. The experimental part of the work in work package 3 included a series of steady-state mixing experiments at the ROCOM test facility and the publication of results of the Paks VVER-440 NPP thermal mixing experiments. The ROCOM test facility models a four-loop KONVOI-type reactor. In the steady-state mixing experiments the velocity field in the downcomer was measured using laser Doppler anemometry, and the concentration of the tracer solution fed from one loop was measured in the downcomer and at the core inlet plane. The varied parameters were the number and orientation of the operating loops, the total flow rate and the (asymmetric) flow rates of individual loops. The Paks NPP thermal mixing experiments took place during commissioning tests of replaced steam generator safety valves in 1987-1989. It was assumed that in the reactor vessels of the six-loop Paks VVER-440 NPP the mixing of the coolant is not ideal. For a realistic determination of the active core inlet temperature field during transients and accidents associated with different levels of temperature asymmetry, a set of mixing factors was determined; based on data from the online core monitoring system and a separate mathematical model, the mixing factors for loop flows at the core inlet were derived. In the numerical simulation part of work package 3, the detailed measurements of the ROCOM tests were used for the validation of CFD methods for primary-circuit studies. The selected steady-state mixing experiments were simulated with the CFD codes CFX-4, CFX-5 and FLUENT, and the velocity field in the downcomer and the mixing of the scalar were compared between CFD simulations and experiments. The CFD simulations of a full-scale PWR included the simulation of a Paks VVER-440 mixing experiment and the simulation of the Loviisa VVER-440 downcomer flow field. In the simulations of the Paks experiments, the experimental and simulated concentration fields at the core inlet were compared, and conclusions were drawn concerning the results overall and the VVER-440-specific geometry modelling aspects, such as how to model the perforated elliptic bottom plate and what effect the cold-leg bends have on the flow entering the downcomer. With the Loviisa simulations, a qualitative comparison was made against the original commissioning experiments, but the emphasis was on CFD method validation and testing. The overall conclusion concerning CFD modelling of the flow field and mixing in the PWR primary circuit is that current computational capacity and physical models, also in commercial codes, are beginning to be sufficient for simulations that give reliable and useful results for many real primary-circuit applications. However, CFD methods are easy to misuse, and the general as well as the nuclear-power-specific modelling guidelines should be followed when CFD simulations are performed.
18

Farkas, I., A. Aszodi, J. Elter, J. Klepac, J. Remis, S. Kliem, T. Höhne, T. Toppila, and I. Boros. "The European project FLOMIX-R: Description of the experimental and numerical studies of flow distribution in the reactor primary circuit (Final report on WP 3)." Forschungszentrum Rossendorf, 2005. https://hzdr.qucosa.de/id/qucosa%3A21689.

Abstract:
The flow distribution in the primary circuit of a pressurized water reactor was studied with experiments and Computational Fluid Dynamics (CFD) simulations. The main focus was on the flow field and mixing in the downcomer of the pressure vessel: how different factors, such as the orientation of the operating loops, the total loop flow rate and the asymmetry of the loop flow rates, affect the outcome. In addition to the flow-field studies, the overall applicability of CFD methods for primary-circuit thermal-hydraulic analysis was evaluated on the basis of CFD simulations of the mixing experiments at the ROCOM (Rossendorf Coolant Mixing Model) test facility and the mixing experiments at the Paks NPP. The experimental part of the work in work package 3 included a series of steady-state mixing experiments at the ROCOM test facility and the publication of results of the Paks VVER-440 NPP thermal mixing experiments. The ROCOM test facility models a four-loop KONVOI-type reactor. In the steady-state mixing experiments the velocity field in the downcomer was measured using laser Doppler anemometry, and the concentration of the tracer solution fed from one loop was measured in the downcomer and at the core inlet plane. The varied parameters were the number and orientation of the operating loops, the total flow rate and the (asymmetric) flow rates of individual loops. The Paks NPP thermal mixing experiments took place during commissioning tests of replaced steam generator safety valves in 1987-1989. It was assumed that in the reactor vessels of the six-loop Paks VVER-440 NPP the mixing of the coolant is not ideal. For a realistic determination of the active core inlet temperature field during transients and accidents associated with different levels of temperature asymmetry, a set of mixing factors was determined; based on data from the online core monitoring system and a separate mathematical model, the mixing factors for loop flows at the core inlet were derived. In the numerical simulation part of work package 3, the detailed measurements of the ROCOM tests were used for the validation of CFD methods for primary-circuit studies. The selected steady-state mixing experiments were simulated with the CFD codes CFX-4, CFX-5 and FLUENT, and the velocity field in the downcomer and the mixing of the scalar were compared between CFD simulations and experiments. The CFD simulations of a full-scale PWR included the simulation of a Paks VVER-440 mixing experiment and the simulation of the Loviisa VVER-440 downcomer flow field. In the simulations of the Paks experiments, the experimental and simulated concentration fields at the core inlet were compared, and conclusions were drawn concerning the results overall and the VVER-440-specific geometry modelling aspects, such as how to model the perforated elliptic bottom plate and what effect the cold-leg bends have on the flow entering the downcomer. With the Loviisa simulations, a qualitative comparison was made against the original commissioning experiments, but the emphasis was on CFD method validation and testing. The overall conclusion concerning CFD modelling of the flow field and mixing in the PWR primary circuit is that current computational capacity and physical models, also in commercial codes, are beginning to be sufficient for simulations that give reliable and useful results for many real primary-circuit applications. However, CFD methods are easy to misuse, and the general as well as the nuclear-power-specific modelling guidelines should be followed when CFD simulations are performed.
19

Luo, Huiyang. "Experimental and analytical investigation of dynamic compressive behavior of intact and damaged ceramics." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1087%5F1%5Fm.pdf&type=application/pdf.

20

Fleischfresser, Natália Argene Lovate Pereira 1985. "Correlations for the prediction of the head curve of centrifugal pumps based on experimental data = Correlações para a predição da curva de altura de bombas centrífugas baseadas em dados experimentais." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265777.

Abstract:
Advisor: Antonio Carlos Bannwart
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências
Abstract: The hydraulic performance of centrifugal pumps depends on several hydraulic dimensions of the pump, but most of them are not easily accessible. Therefore, the pump's hydraulic performance always has to be informed by the pump manufacturer. Furthermore, in order to protect their intellectual property, manufacturers rarely share more detailed information about the pump hydraulics with the public. As a consequence, pump users and researchers don't have access to all the data they possibly need. In the literature, there are several proposed models to predict the hydraulic performance of centrifugal pumps. However, even the best CFD simulations can't predict pump performance accurately. In this work, a simple approach (model) available in the literature, based on fluid dynamics principles, that predicts a pump's hydraulic performance with only a few accessible hydraulic dimensions, is validated against a large set of experimental data. Eighty tests of different types of pumps, covering a large range of specific speeds, were considered. From this analysis, correlations among the coefficients of the model equation and the main hydraulic data of the pumps were proposed. Afterwards, several shut-off head prediction methods available in the literature were analyzed in order to define the one that best predicts the shut-off head for the given test data. Finally, for each pump type, the best combination of correlations and shut-off head prediction method was selected to reduce the error in the prediction of the whole head curve. Given all the assumptions and simplifications, the objective of this work is to present a method applicable to several pump types that easily provides a prediction of the whole head curve with reasonable error.
Master's degree
Petroleum Exploitation
Master in Petroleum Sciences and Engineering
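The type of model being validated can be illustrated generically: a pump head curve is often represented as a low-order polynomial in flow rate, the fitted coefficients are then correlated with the pump's main hydraulic data, and the value at zero flow is the shut-off head. A minimal least-squares sketch with invented test points (not the dissertation's correlations):

    import numpy as np

    # Hypothetical test points: flow rate Q (m^3/h) vs measured head H (m)
    Q = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
    H = np.array([52.0, 50.5, 46.0, 38.5, 27.0])

    # Fit H(Q) = c0 + c1*Q + c2*Q^2 by least squares
    c2, c1, c0 = np.polyfit(Q, H, 2)
    print(f"shut-off head H(0): {c0:.1f} m")
    print(f"predicted head at Q = 50 m^3/h: {c0 + c1*50 + c2*50**2:.1f} m")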
21

Higgins, K. "Comparison of engineering correlations for predicting heat transfer in zero-pressure-gradient compressible boundary layers with CFD and experimental data." Fishermans Bend, Victoria : Defence Science and Technology Organisation, 2008. http://hdl.handle.net/1947/9653.

22

Blasch, Kyle William. "Streamflow timing and estimation of infiltration rates in an ephemeral stream channel using variably saturated heat and fluid transport methods." Diss., The University of Arizona, 2003. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_e9791_2003_253_sip1_w.pdf&type=application/pdf.

23

Lance, Blake. "Experimental Validation Data for CFD of Steady and Transient Mixed Convection on a Vertical Flat Plate." DigitalCommons@USU, 2015. http://digitalcommons.usu.edu/etd/4546.

Abstract:
Simulations are becoming increasingly popular in science and engineering. One type of simulation is Computational Fluid Dynamics (CFD), which is used when closed-form solutions are impractical. The field of Verification & Validation emerged from the need to assess simulation accuracy, since simulations often contain approximations and calibrations. Validation involves the comparison of experimental data with simulation outputs and is the focus of this work; errors in simulation predictions may be assessed in this way. Validation requires highly detailed data, a thorough description to accompany these data, and careful treatment of uncertainties. The purpose of this work is to provide highly complete validation data for assessing the accuracy of CFD simulations. This aim is fundamentally different from that of the typical discovery experiments common in research: the physics measured here are not necessarily original, but they were measured with modern, high-fidelity methods. Data were tabulated in an online database for direct use in Reynolds-Averaged Navier-Stokes simulations, with detailed instrumentation and documentation to make the data more useful for validation. This work fills the validation data gap for steady and transient mixed convection. The physics in this study involve mixed convection on a vertical flat plate, a condition in which both forced and natural convection influence fluid momentum and heat transfer. Flow was forced over a vertical flat plate in a facility built for validation experiments, and thermal and velocity data were acquired for steady and transient flow conditions. The steady case included both buoyancy-aided and buoyancy-opposed mixed convection, while the transient case was for buoyancy-opposed flow. The transient was a ramp-down flow transient, and results were ensemble-averaged for improved statistics. Uncertainty quantification was performed on all results, including bias and random sources. An independent method of measuring heat flux was devised to assess the accuracy of the commercial heat flux sensors used in the heated wall: it measured the convective heat flux from the temperature gradient in the air very near the plate surface, and its accuracy was assessed through error estimation and uncertainty quantification.
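The ensemble-averaging step mentioned in the abstract has a simple generic form: the transient is repeated many times, the realizations are aligned on the trigger, and the mean plus a random-uncertainty band of the mean is reported at each time instant. A sketch with synthetic data (not measurements from the facility):

    import numpy as np

    rng = np.random.default_rng(1)
    n_runs, n_t = 40, 200
    t = np.linspace(0.0, 10.0, n_t)

    # Synthetic ramp-down transient repeated 40 times with run-to-run noise
    runs = np.array([np.exp(-t / 3.0) + rng.normal(0, 0.02, n_t)
                     for _ in range(n_runs)])

    mean = runs.mean(axis=0)
    # Random uncertainty of the ensemble mean (coverage factor 2, ~95%)
    u_rand = 2.0 * runs.std(axis=0, ddof=1) / np.sqrt(n_runs)
    i = n_t // 2
    print(f"value at t = {t[i]:.2f} s: {mean[i]:.3f} +/- {u_rand[i]:.3f}")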
24

Nelson, Graham Martin. "Development of an Experimentally-Validated Compact Model of a Server Rack." Thesis, Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19726.

Abstract:
A simplified computational fluid dynamics and heat transfer (CFD-HT) model of an electronics enclosure was developed. The compact model was based on a server simulator, which dissipates a variable amount of heat at an adjustable air flow rate. Even though a server simulator does not accurately represent the geometry of an actual electronics enclosure, modeling such a unit raises many of the same issues as modeling actual enclosures. Even at the server-simulator level, a disparity in length scales prevents detailed modeling of intricate components, most notably grilles, fins, and fans. Therefore, a compact model for each of these components was developed. Fan performance curves were determined experimentally for varying fan rotational speeds. In addition, component pressure-drop characteristics were found experimentally for grilles and fin banks, and these empirical relationships were applied to the model as well. To determine the validity of the simplifications employed in the model, experimental outlet temperature and velocity measurements were taken and compared with those provided by the CFD-HT simulations.
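The way an experimentally measured fan curve and empirical pressure-drop relations combine in such a compact model can be shown in a few lines: the operating point is the flow rate at which the fan's pressure rise equals the system's pressure drop. A sketch with hypothetical curves (the actual model embedded these relations in a CFD-HT solver):

    # Hypothetical fan curve: pressure rise (Pa) vs volumetric flow (m^3/s)
    def fan_dp(q):
        return 60.0 * (1.0 - (q / 0.25) ** 2)

    # Hypothetical system resistance from grille/fin-bank loss coefficients
    def system_dp(q):
        return 950.0 * q ** 2

    # Operating point: root of fan_dp(q) - system_dp(q), found by bisection
    lo, hi = 1e-6, 0.25
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if fan_dp(mid) > system_dp(mid):
            lo = mid
        else:
            hi = mid
    print(f"operating point: q = {lo:.4f} m^3/s, dp = {system_dp(lo):.1f} Pa")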
25

Carlà, Michele. "Transverse beam dynamics studies from turn-by-turn beam position monitor data in the ALBA storage ring." Doctoral thesis, Universitat Autònoma de Barcelona, 2017. http://hdl.handle.net/10803/457886.

Abstract:
ALBA is a third-generation light source, commissioned in 2011, serving a national and international scientific and industrial community. It provides synchrotron radiation up to hard x-rays as a tool for multiple laboratories (beamlines) covering a wide range of physical, chemical, and biological experiments. In order to achieve the required radiation flux and small divergence, the electron storage ring employs an optimized design in which strong magnets are combined in a rather complex lattice to properly shape the characteristics of the electron beam. However, the lattice can have several errors, which detrimentally affect electron beam characteristics such as size, divergence, or lifetime. Unavoidable lattice errors can be due to manufacturing mechanical tolerances, magnet hysteresis, thermal variations and/or mechanical misalignments. The delicate "magnetic equilibrium" required to operate such light sources could hardly be met without a tool to measure and correct the actual magnetic lattice. For this purpose, beam-based methods, in which the stored beam serves as a probe to inspect the lattice, have been developed. Among the various techniques, turn-by-turn measurements allow a lattice error model to be assessed by sampling, turn after turn, the transverse motion of the beam. The main purpose of this PhD work is to implement the turn-by-turn technique at ALBA for the first time, in order to establish the capabilities of the measurements in the context of linear and non-linear lattice errors. A first set of experiments was dedicated to the characterization of the linear lattice, showing a level of agreement (beta-beat < 2%) comparable to what is observed with other methods based on the closed-orbit technique. Further tests to establish the ultimate sensitivity to small optics-function variations were performed by manipulating single lattice elements and measuring the resulting optics variations. The turn-by-turn technique has been applied to the characterization of coupling in the ALBA light source. The ability of turn-by-turn measurements to correctly localize a single source of coupling was challenged by introducing a controlled coupling source into the storage ring lattice: a high degree of precision was observed in localizing the error source, and only a 10% disagreement between measurements and theoretical predictions of the coupling source strength was observed. A similar test was also carried out for the sextupole families, using a resistive shunt to change the excitation current of a single element; the ability to localize the sextupolar error position in the lattice was successfully demonstrated. The experiments showed how turn-by-turn acquisitions stand out for their sensitivity, enabling the detection of very small variations of the optics functions. This made it possible to apply the turn-by-turn technique, for the first time in a light source, to the measurement of localized transverse impedance sources. The experiment, carried out in the ALBA storage ring, led to the characterization of the individual defocusing effects produced by different transverse impedance sources, including elements like the scraper, the injection zone, an in-vacuum undulator and the standard vacuum beam pipe. The good agreement between the measurements and the transverse impedance model, based on analytical calculation of the resistive wall and GdfidL simulation of the geometrical impedance, confirmed that the turn-by-turn technique is a valid diagnostic tool for very sensitive and non-intrusive optics measurements. Furthermore, it has been shown that smaller impedance sources can still be properly characterized by manipulating the machine optics so as to magnify the induced defocusing kick. This method has been used to characterize impedances as small as that of the ALBA in-vacuum undulators (IVUs).
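A basic building block of such turn-by-turn analysis is extracting the betatron tune from the spectrum of a BPM signal recorded after transversely exciting the beam. A minimal sketch with a synthetic signal (real analyses refine the peak with interpolation or NAFF-type algorithms; all numbers here are invented):

    import numpy as np

    rng = np.random.default_rng(2)
    n_turns, true_tune = 1024, 0.163

    # Synthetic turn-by-turn BPM reading: betatron oscillation plus noise
    turns = np.arange(n_turns)
    x = 0.5 * np.cos(2 * np.pi * true_tune * turns + 0.3)
    x += rng.normal(0, 0.05, n_turns)

    # Fractional tune = frequency of the largest FFT peak (excluding DC)
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(n_turns, d=1.0)
    tune = freqs[np.argmax(spec[1:]) + 1]
    print(f"measured tune: {tune:.4f} (true value: {true_tune})")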
26

Shields, Shawn. "Dynamic thermal response of the data center to cooling loss during facility power failure." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29725.

Abstract:
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Yogendra K. Joshi; Committee Member: Mostafa Ghiaasiaan; Committee Member: Sheldon Jeter. Part of the SMARTech Electronic Thesis and Dissertation Collection.
27

Hemström, B., P. Mühlbauer, J. A. Lycklama à Nijeholt, I. Farkas, I. Boros, A. Aszodi, M. Scheuerer, et al. "The European project FLOMIX-R: Fluid mixing and flow distribution in the reactor circuit - Final summary report." Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-28619.

Abstract:
The project was aimed at describing the mixing phenomena relevant for safety analysis, particularly in steam line break and boron dilution scenarios, as well as mixing phenomena of interest for economical operation and structural integrity. Measurement data from a set of mixing experiments, gained by using advanced measurement techniques with enhanced resolution in time and space, help to improve the basic understanding of turbulent mixing and provide data for Computational Fluid Dynamics (CFD) code validation. Slug mixing tests simulating the start-up of the first main circulation pump were performed with two 1:5 scaled facilities: the Rossendorf coolant mixing model ROCOM and the VATTENFALL test facility, modelling a German Konvoi-type and a Westinghouse-type three-loop PWR, respectively. Additional data on slug mixing in a VVER-1000 type reactor, gained at a 1:5 scaled metal mock-up at EDO Gidropress, are provided. Experimental results on mixing of fluids with density differences, obtained at ROCOM and the FORTUM PTS test facility, are made available. Concerning mixing phenomena of interest for operational issues and thermal fatigue, flow distribution data available from commissioning tests (Sizewell-B for PWRs, Loviisa and Paks for VVERs) are used together with the data from the ROCOM facility as a basis for the flow distribution studies. The test matrix on flow distribution and steady-state mixing performed at ROCOM comprises experiments with various combinations of running pumps and various mass flow rates in the working loops. Computational fluid dynamics calculations were accomplished for selected experiments with two different CFD codes (CFX-5, FLUENT). Best practice guidelines (BPG) were applied in all CFD work when choosing the computational grid, time step, turbulence models, modelling of internal geometry, boundary conditions, numerical schemes and convergence criteria. The BPG contain a set of systematic procedures for quantifying and reducing numerical errors; knowledge of these numerical errors is a prerequisite for the proper judgement of model errors. A strategy of code validation based on the BPG and a matrix of CFD code validation calculations were elaborated. Besides the benchmark cases, additional experiments were calculated by new partners and observers who joined the project later. Based on the "best practice solutions", conclusions on the applicability of CFD for turbulent mixing problems in PWRs were drawn and recommendations on CFD modelling were given. The high importance of proper grid generation was outlined. In general, second-order discretization schemes should be used to minimise numerical diffusion; first-order schemes can produce physically wrong results. With optimised "production meshes" reasonable results were obtained, but due to the complex geometry of the flow domains, no fully grid-independent solutions were achieved. Therefore, with respect to turbulence models, no final conclusions can be given. However, first-order turbulence models like k-epsilon or SST k-omega are suitable for momentum-driven slug mixing; for buoyancy-driven mixing (PTS scenarios), Reynolds stress models provide better results.
28

Hemström, B., P. Mühlbauer, J. A. Lycklama à Nijeholt, I. Farkas, I. Boros, A. Aszodi, M. Scheuerer, et al. "The European project FLOMIX-R: Fluid mixing and flow distribution in the reactor circuit - Final summary report." Forschungszentrum Rossendorf, 2005. https://hzdr.qucosa.de/id/qucosa%3A21688.

Abstract:
The project was aimed at describing the mixing phenomena relevant for safety analysis, particularly in steam line break and boron dilution scenarios, as well as mixing phenomena of interest for economical operation and structural integrity. Measurement data from a set of mixing experiments, gained by using advanced measurement techniques with enhanced resolution in time and space, help to improve the basic understanding of turbulent mixing and provide data for Computational Fluid Dynamics (CFD) code validation. Slug mixing tests simulating the start-up of the first main circulation pump were performed with two 1:5 scaled facilities: the Rossendorf coolant mixing model ROCOM and the VATTENFALL test facility, modelling a German Konvoi-type and a Westinghouse-type three-loop PWR, respectively. Additional data on slug mixing in a VVER-1000 type reactor, gained at a 1:5 scaled metal mock-up at EDO Gidropress, are provided. Experimental results on mixing of fluids with density differences, obtained at ROCOM and the FORTUM PTS test facility, are made available. Concerning mixing phenomena of interest for operational issues and thermal fatigue, flow distribution data available from commissioning tests (Sizewell-B for PWRs, Loviisa and Paks for VVERs) are used together with the data from the ROCOM facility as a basis for the flow distribution studies. The test matrix on flow distribution and steady-state mixing performed at ROCOM comprises experiments with various combinations of running pumps and various mass flow rates in the working loops. Computational fluid dynamics calculations were accomplished for selected experiments with two different CFD codes (CFX-5, FLUENT). Best practice guidelines (BPG) were applied in all CFD work when choosing the computational grid, time step, turbulence models, modelling of internal geometry, boundary conditions, numerical schemes and convergence criteria. The BPG contain a set of systematic procedures for quantifying and reducing numerical errors; knowledge of these numerical errors is a prerequisite for the proper judgement of model errors. A strategy of code validation based on the BPG and a matrix of CFD code validation calculations were elaborated. Besides the benchmark cases, additional experiments were calculated by new partners and observers who joined the project later. Based on the "best practice solutions", conclusions on the applicability of CFD for turbulent mixing problems in PWRs were drawn and recommendations on CFD modelling were given. The high importance of proper grid generation was outlined. In general, second-order discretization schemes should be used to minimise numerical diffusion; first-order schemes can produce physically wrong results. With optimised "production meshes" reasonable results were obtained, but due to the complex geometry of the flow domains, no fully grid-independent solutions were achieved. Therefore, with respect to turbulence models, no final conclusions can be given. However, first-order turbulence models like k-epsilon or SST k-omega are suitable for momentum-driven slug mixing; for buoyancy-driven mixing (PTS scenarios), Reynolds stress models provide better results.
29

Park, Jangho. "Efficient Global Optimization of Multidisciplinary System using Variable Fidelity Analysis and Dynamic Sampling Method." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91911.

Full text of the source
Abstract:
Work in this dissertation is motivated by reducing the design cost at the early design stage while maintaining high design accuracy throughout all design stages. It presents four key design methods to improve the performance of Efficient Global Optimization for multidisciplinary problems. First, a fidelity-calibration method is developed and applied to lower-fidelity samples. Function values analyzed by lower-fidelity analysis methods are updated to have accuracy equivalent to that of the highest-fidelity samples, and these calibrated data sets are used to construct a variable-fidelity Kriging model. For the design of experiments (DOE), a dynamic sampling method is developed that filters and infills data based on mathematical criteria on the model accuracy. In the sample-infilling process, multi-objective optimization for exploitation and exploration of the design space is carried out. To indicate the fidelity of the function analysis for additional samples in the variable-fidelity Kriging model, a dynamic fidelity indicator based on the overlapping coefficient is proposed. For multidisciplinary design problems, where multiple physics are tightly coupled with different coupling strengths, a multi-response Kriging model is introduced that utilizes iterative Maximum Likelihood Estimation (iMLE). Through the iMLE process, the large number of hyper-parameters in multi-response Kriging can be calculated with great accuracy and improved numerical stability. The optimization methods developed in the study are validated with analytic functions and show considerable performance improvement. Subsequently, three practical design optimization problems are solved: the NACA0012 airfoil, the multi-element NLR 7301 airfoil, and the all-moving wingtip control surface of a tailless aircraft. The results are compared with those of existing methods, and it is concluded that these methods guarantee equivalent design accuracy at significantly reduced computational cost.
Doctor of Philosophy
In recent years, as the cost of aircraft design has grown rapidly and the aviation industry has become interested in saving design time and cost, an accurate design result during the early design stages is particularly important for reducing the overall life-cycle cost. The purpose of this work is to reduce the design cost at the early design stage while achieving design accuracy as high as that of the detailed design. A method of efficient global optimization (EGO) with variable-fidelity analysis and multidisciplinary design is proposed. Using variable-fidelity analysis for the function evaluation, high-fidelity function evaluations can be replaced by low-fidelity analyses of equivalent accuracy, which leads to considerable cost reduction. As the aircraft system has sub-disciplines coupled by multiple physics, including aerodynamics, structures, and thermodynamics, the accuracy of an individual discipline affects that of all the others, and thus the design accuracy during the early design stages. Four distinctive design methods are developed and implemented into the standard Efficient Global Optimization (EGO) framework: 1) variable-fidelity analysis based on error approximation and calibration of low-fidelity samples, 2) dynamic sampling criteria for both filtering and infilling samples, 3) a dynamic fidelity indicator (DFI) for selecting the analysis fidelity for infilled samples, and 4) a multi-response Kriging model with iterative Maximum Likelihood Estimation (iMLE). The methods are validated with analytic functions, and an improvement in cost efficiency through the overall design process is observed, while maintaining the design accuracy, by comparison with existing design methods. For practical applications, the methods are applied to the design optimization of an airfoil and of a complete aircraft configuration, respectively. The design results are compared with those of existing methods, and it is found that the method yields designs of accuracy equivalent to or higher than a high-fidelity-analysis-alone design, at a cost reduced by orders of magnitude.
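For readers unfamiliar with the EGO framework this dissertation builds on, the following minimal sketch shows a plain single-fidelity EGO loop with a Kriging (Gaussian process) surrogate and expected-improvement infill. The variable-fidelity calibration, dynamic sampling and iMLE extensions described above are not reproduced, and the toy objective is an assumption of this example:

```python
# Minimal single-fidelity EGO sketch: Kriging surrogate + expected
# improvement infill. Objective, bounds and budget are invented.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                        # stand-in for an expensive analysis
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(5, 1))           # initial design of experiments
y = f(X).ravel()

for it in range(15):                         # EGO infill loop
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = np.linspace(0, 3, 601).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    imp = y.min() - mu                       # improvement over best sample
    z = imp / np.maximum(sd, 1e-12)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)    # expected improvement
    x_new = cand[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```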
30

Igue, Roberto T. "Experimental Investigation of a lift augmented ground effect platform." Wright-Patterson AFB, OH : Air Force Institute of Technology, 2005. http://handle.dtic.mil/100.2/ADA440437.

Full text of the source
31

Hosseini, SayedMohammad. "A Statistical Approach to Modeling Wheel-Rail Contact Dynamics." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/101864.

Full text of the source
Abstract:
The wheel-rail contact mechanics and dynamics that are of great importance to the railroad industry are evaluated by applying statistical methods to the large volume of data collected on the state-of-the-art VT-FRA Roller Rig. The intent is to use statistical principles to highlight the relative importance, for the longitudinal and lateral tractions, of various factors that exist in practice, and to develop parametric models that can be used for predicting traction in conditions beyond those tested on the rig. The experiment-based models are intended to be an alternative to the classical traction-creepage models that have been available for decades. Various experiments are conducted in different settings on the VT-FRA Roller Rig at the Center for Vehicle Systems and Safety at Virginia Tech to study the relationship between the traction forces and the wheel-rail contact variables. The experimental data is used to develop parametric and non-parametric statistical models that efficiently capture this relationship. The study starts with single regression models and investigates the main effects of wheel load, creepage, and the angle of attack on the longitudinal and lateral traction forces. The assumptions of the classical linear regression model are carefully assessed and, in the case of non-linearities, different transformations are applied to the explanatory variables to find the closest functional form that captures the relationship between the response and the explanatory variables. The analysis is then extended to multiple models in which interaction among the explanatory variables is evaluated using model selection approaches. The developed models are then compared with their non-parametric counterparts, such as support vector regression, in terms of "goodness of fit," out-of-sample performance, and the distribution of predictions.
Master of Science
The interaction between the wheel and rail plays an important role in the dynamic behavior of railway vehicles. The wheel-rail contact has been extensively studied through analytical models, and measuring the contact forces is among the most important outcomes of such models. However, these models typically fall short when it comes to addressing the practical problems at hand. With the development of a high-precision test rig—called the VT-FRA Roller Rig, at the Center for Vehicle Systems and Safety (CVeSS)—there is an increased opportunity to tackle the same problems from an entirely different perspective, i.e. through statistical modeling of experimental data. Various experiments are conducted in different settings that represent railroad operating conditions on the VT-FRA Roller Rig, in order to study the relationship between wheel-rail traction and the variables affecting such forces. The experimental data is used to develop parametric and non-parametric statistical models that efficiently capture this relationship. The study starts with single regression models and investigates the main effects of wheel load, creepage, and the angle of attack on the longitudinal and lateral traction forces. The analysis is then extended to multiple models, and the existence of interactions among the explanatory variables is examined using model selection approaches. The developed models are then compared with their non-parametric counterparts, such as support vector regression, in terms of "goodness of fit," out-of-sample performance, and the distribution of the predictions. The study develops regression models that are able to accurately explain the relationship between traction forces, wheel load, creepage, and the angle of attack.
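A hedged sketch of the modelling comparison described above, using synthetic stand-in data rather than VT-FRA Roller Rig measurements (scikit-learn is an assumed tool here, not necessarily the author's), might look like this:

```python
# Illustrative comparison: parametric regression with interactions vs.
# support vector regression on synthetic traction-like data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
load = rng.uniform(5, 20, n)          # wheel load [kN]
creep = rng.uniform(0, 2, n)          # creepage [%]
aoa = rng.uniform(-1, 1, n)           # angle of attack [deg]
X = np.column_stack([load, creep, aoa])
# invented saturating traction curve plus measurement noise
y = load * np.tanh(1.5 * creep) * (1 - 0.2 * np.abs(aoa)) + rng.normal(0, 0.3, n)

linear = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                       LinearRegression())        # main effects + interactions
svr = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.1))

for name, model in [("poly regression", linear), ("SVR", svr)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean out-of-sample R^2 = {r2.mean():.3f}")
```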
32

Phalippou, Pierre. "Data-enabled, reduced-order simulation of dynamical systems with application to vehicle crash simulation." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2544.

Full text of the source
Abstract:
Vehicle manufacturers face numerous constraints regarding security, comfort, weight, and fuel consumption when designing new automotive vehicles. Numerical simulation, and more precisely the finite element method, is extensively used in the process to achieve cost reduction while making it possible to test the impact of multiple design parameters on the system behavior. Nevertheless, industrial finite element models of automotive crash are growing prohibitively expensive in computation time, slowing down the design process. Numerous model order reduction methods have been developed in order to speed up simulations by making use of the humongous amount of collected data and the repetitiveness of computations with slight changes to the design variables. This thesis aims to apply model order reduction methods to finite element simulations of automotive crash, involving material and geometric nonlinearity as well as contact. This project, conducted in close collaboration between the car manufacturer PSA group, the software developer Altair Engineering, and the computational mechanics group at the UTC-CNRS joint laboratory Roberval, targets intrusive methods necessitating modifications in the finite element solver source code. Amongst the promising methods reviewed in the state of the art, our work focuses on projection-based reduced order modeling and hyper-reduction. A reduced basis of global shape functions is built upon relevant data in the offline training phase and used online to reduce the model size, enabling the hyper-reduction of internal forces and computational speed-up. The contributions of this thesis concern three aspects of the training phase. Developments of the incremental singular value decomposition allow for a more flexible and faster reduced-basis training. The proposed sparse criterion enables the formation of basis-function subsets with enhanced performance and a better correlation between the offline training-phase approximation error and the actual error in the online reduction phase. An innovative formulation of the hyper-reduction optimization problem involving mixed-integer programming, added constraints on polynomial integration, and volume conservation is suggested for the benchmarking of future heuristic methods. All implementations are performed in the industrial explicit finite element solver Altair Radioss with complete access to the source code. To the author's best knowledge, the implementation of such a method in an industrial explicit finite element solver represents a significant differentiator of the present contribution. Implementation-specific difficulties relevant to this aspect of the thesis are discussed. Finally, recommendations, as well as perspectives and further developments required for the robust application of the method to industrial numerical simulations of vehicle crash, are given.
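The projection idea at the core of such reduced order modeling can be sketched in a few lines: build a reduced basis from a truncated SVD of solution snapshots and Galerkin-project a linear system onto it. The incremental SVD, sparse selection criterion and hyper-reduction developments of the thesis are deliberately omitted, and the toy operator below is an assumption of this example:

```python
# POD/Galerkin sketch: snapshot SVD -> reduced basis -> projected solve.
import numpy as np

rng = np.random.default_rng(2)
n, n_snap, k = 400, 60, 3
A = np.diag(np.linspace(1.0, 5.0, n))            # toy linear "stiffness" operator
B = rng.standard_normal((n, 3))                  # 3-parameter load family
S = np.linalg.solve(A, B @ rng.standard_normal((3, n_snap)))  # snapshot solutions

U, s, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :k]                                     # POD reduced basis (n x k)

b = B @ rng.standard_normal(3)                   # new load case, same family
u_rom = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)   # Galerkin-projected solve
u_ref = np.linalg.solve(A, b)
print("relative ROM error:",
      np.linalg.norm(u_rom - u_ref) / np.linalg.norm(u_ref))
```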
33

Robertson, Francis. "An experimental investigation of the drag on idealised rigid, emergent vegetation and other obstacles in turbulent free-surface flows." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/an-experimental-investigation-of-the-drag-on-idealised-rigid-emergent-vegetation-and-other-obstacles-in-turbulent-freesurface-flows(07165357-67da-461d-a6a2-ed4970e2cb0c).html.

Full text of the source
Abstract:
Vegetation is commonly modelled as emergent arrays of rigid, circular cylinders. However, the drag coefficient (CD) of real stems or trunks is closer to that of cylinders with a square cross-section. In this thesis, vegetation has been idealised as square cylinders in laboratory experiments with a turbulence intensity of the order of 10% which is similar to that of typical river flows. These cylinders may also represent other obstacles such as architectural structures. This research has determined CD of an isolated cylinder and cylinder pairs as a function of position as well as the average drag coefficient (CDv) of larger arrays. A strain gauge was used to measure CD whilst CDv was computed with a momentum balance which was validated by strain gauge measurements for a regularly spaced array. The velocity and turbulence intensity surrounding a pair of cylinders arranged one behind the other with respect to mean flow (in tandem) were also measured with an Acoustic Doppler Velocimeter. The isolated cylinder CD was found to be 2.11 in close agreement with other researchers. Under fixed flow conditions CD for a cylinder in a pair was found to be as low as -0.40 and as high as 3.46 depending on their relative positioning. For arrays, CDv was influenced more by the distribution of cylinders than the flow conditions over the range of conditions tested. Mean values of CDv for each array were found to be between 1.52 and 3.06. This new insight therefore suggests that CDv for vegetation in bulk may actually be much higher than the typical value of 1 which is often assumed to apply in practice. If little other information is available, a crude estimate of CDv = 2 would be reasonable for many practical applications. The validity of a 2D realizable k-epsilon turbulence model for predicting the flow around square cylinders was evaluated. The model was successful in predicting CD for an isolated cylinder. In this regard the model performed as well as Large Eddy Simulations by other authors with a significant increase in computational efficiency. However, the numerical model underestimates CD of downstream cylinders in tandem pairs and overestimates velocities in their wake. This suggests it may be necessary to expand the model to three-dimensions when attempting to simulate the flow around two or more bluff obstacles with sharp edges.
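For orientation, the drag coefficient referred to throughout is the standard normalisation C_D = F / (0.5 ρ U² A). A minimal sketch with illustrative numbers (not the thesis measurements):

```python
# Back-of-envelope drag coefficient for a square cylinder in water.
rho = 1000.0        # water density [kg/m^3]
U = 0.4             # depth-averaged approach velocity [m/s]
d, h = 0.025, 0.15  # square-cylinder width and submerged height [m]
F = 0.633           # drag force from the strain gauge [N] (invented)

A = d * h                           # frontal area
CD = F / (0.5 * rho * U**2 * A)
print(f"C_D = {CD:.2f}")            # ~2.11, close to the isolated-cylinder value
```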
34

Pham, Hong Son. "Investigation of the supercritical CO2 cycle : mapping of the thermodynamic potential for different applications; further understanding of the physical processes, in particular through simulations and analysis of experimental data." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4338.

Full text of the source
Abstract:
This study first evaluates the thermodynamic performance of the supercritical CO2 (sc-CO2) cycle over a large range of heat source temperatures, with a focus on nuclear applications; a thermal efficiency of 45.7% is reported for a Sodium-cooled Fast Reactor. Second, CFD simulations have been performed on a small-scale sc-CO2 compressor, and the results compare favourably with the experimental data, providing elements of qualification. Simulation results on a real-scale compressor have then revealed some particularities of compressing a real fluid, providing feedback for the component design. In addition, a reliable performance-maps approach has been proposed for the sc-CO2 compressor and validated using the CFD results. Finally, an investigation of bubble collapse in liquid CO2 near the critical point has disclosed the likely absence of detrimental effects. As such, risks of cavitation damage should be low, favoring compressor operation in the liquid region for cycle efficiency improvement.
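A hedged sketch of a real-gas compressor state calculation near the CO2 critical point, of the kind underlying such performance maps; the CoolProp property library is an assumption of this example and is not named in the thesis:

```python
# sc-CO2 compressor outlet state with real-gas properties and an
# isentropic efficiency. Requires the CoolProp package; numbers invented.
from CoolProp.CoolProp import PropsSI

fluid = "CO2"
T1, P1 = 305.15, 7.7e6        # inlet near the critical point [K, Pa]
P2, eta_is = 20.0e6, 0.85     # outlet pressure, isentropic efficiency

h1 = PropsSI("H", "T", T1, "P", P1, fluid)
s1 = PropsSI("S", "T", T1, "P", P1, fluid)
h2s = PropsSI("H", "P", P2, "S", s1, fluid)   # isentropic outlet enthalpy
h2 = h1 + (h2s - h1) / eta_is                 # actual outlet enthalpy
T2 = PropsSI("T", "P", P2, "H", h2, fluid)

print(f"specific work = {(h2 - h1) / 1e3:.1f} kJ/kg, outlet T = {T2:.1f} K")
```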
35

Katzenbach, Michael. "Individual Approaches in Rich Learning Situations Material-based Learning with Pinboards." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-80328.

Full text of the source
Abstract:
Active approaches provide chances for individual, comprehension-oriented learning and can facilitate the acquisition of general mathematical competencies. Using the example of pinboards, which were developed for different areas of the secondary level, workshop participants experience, discuss and further develop learning tasks that can be used for free activities, for material-based concept formation, for coping with heterogeneity, for intelligent exercises, as a tool for the presentation of students' work and as a basis for games. The material also allows some continuous movements and can thus prepare an insightful usage of dynamic geometry programs. The central part of the workshop is a work-sharing group activity with learning tasks for grades 5 to 8. The workshop will close with a discussion of general aspects of material-based learning.
36

Roy, Pamphile. "Uncertainty quantification in high dimensional problems." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0038.

Full text of the source
Abstract:
Uncertainties are predominant in the world that we know. Referring to a single nominal value is therefore too restrictive, especially when it comes to complex systems. Understanding the nature and the impact of these uncertainties has become an important aspect of engineering work. From a societal point of view, uncertainties play a role in decision-making: in its Better Regulation Guideline, the European Commission now advises that impact assessments take uncertainties into account. In order to understand uncertainties, the mathematical field of uncertainty quantification (UQ) has been formed. UQ encompasses a large palette of statistical tools and seeks to link a set of input perturbations on a system (the design of experiments) to a quantity of interest. The purpose of this work is to propose improvements on various methodological aspects of uncertainty quantification applied to costly numerical simulations. This is achieved by using existing methods with a multi-strategy approach but also by creating new methods. In this context, novel sampling and resampling approaches have been developed to better capture the variability of the physical phenomenon when dealing with a high number of perturbed inputs. These reduce the number of simulations required to describe the system. Moreover, novel methods are proposed to visualize uncertainties when dealing with either a high-dimensional input parameter space or a high-dimensional quantity of interest. The developed methods can be used in various fields like hydraulic modelling and aerodynamic modelling. Their capabilities are demonstrated in realistic systems using well-established computational fluid dynamics tools. Lastly, they are not limited to numerical experiments and can equally be used on real experimental setups.
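As a hedged illustration of the space-filling sampling that this kind of UQ work relies on, the following compares a crude Monte Carlo design with Latin hypercube and scrambled Sobol' designs by their discrepancy (lower means more uniform); scipy.stats.qmc is an assumed tool here, not necessarily the thesis code:

```python
# Comparing designs of experiments by centered L2 discrepancy.
import numpy as np
from scipy.stats import qmc

d, n = 4, 128
rng = np.random.default_rng(3)

designs = {
    "Monte Carlo": rng.random((n, d)),
    "Latin hypercube": qmc.LatinHypercube(d=d, seed=3).random(n),
    "scrambled Sobol'": qmc.Sobol(d=d, scramble=True, seed=3).random_base2(m=7),
}
for name, x in designs.items():
    print(f"{name:16s} discrepancy = {qmc.discrepancy(x):.5f}")
```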
37

Mallangi, Siva Sai Reddy. "Low-Power Policies Based on DVFS for the MUSEIC v2 System-on-Chip." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229443.

Full text of the source
Abstract:
Multifunctional health-monitoring wearable devices are quite prominent these days. Usually these devices are battery-operated and consequently limited by their battery life (from a few hours to a few weeks depending on the application). Of late, it was realized that these devices, which currently operate at a fixed voltage and frequency, are capable of operating at multiple voltages and frequencies. By switching these voltages and frequencies to lower values based upon power requirements, these devices can achieve tremendous benefits in the form of energy savings. Dynamic Voltage and Frequency Scaling (DVFS) techniques have proven to be handy in this situation for an efficient trade-off between energy and timely behavior. Within imec, wearable devices make use of the indigenously developed MUSEIC v2 (Multi Sensor Integrated Circuit version 2.0). This system is optimized for efficient and accurate collection, processing, and transfer of data from multiple (health) sensors. MUSEIC v2 has limited means of controlling the voltage and frequency dynamically. In this thesis we explore how traditional DVFS techniques can be applied to the MUSEIC v2. Experiments were conducted to find the optimum power modes for efficient operation and to scale the supply voltage and frequency up and down. Considering the overhead incurred when switching voltage and frequency, a transition analysis was also done. Real-time and non-real-time benchmarks were implemented based on these techniques and their performance results were obtained and analyzed. In this process, several state-of-the-art scheduling algorithms and scaling techniques were reviewed to identify a suitable technique. Using our proposed scaling-technique implementation, we achieved 86.95% power reduction on average, in contrast to the conventional way of operating the MUSEIC v2 chip's processor at a fixed voltage and frequency. Techniques that include light-sleep and deep-sleep modes were also studied and implemented, which tested the system's capability to accommodate Dynamic Power Management (DPM) techniques that can achieve greater benefits. A novel approach for implementing the deep-sleep mechanism was also proposed and found to obtain up to 71.54% power savings compared to a traditional implementation of deep-sleep mode.
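A toy sketch of the DVFS idea described above: choose the lowest voltage/frequency operating point that still meets a task deadline, exploiting the roughly f·V² scaling of dynamic power. The operating points are invented, not MUSEIC v2 values:

```python
# Deadline-driven DVFS policy sketch; dynamic power ~ C * f * V^2.
OPERATING_POINTS = [          # (frequency MHz, core voltage V), low to high
    (10, 0.6), (20, 0.7), (40, 0.8), (80, 1.0), (160, 1.2),
]

def pick_operating_point(cycles, deadline_s):
    """Lowest-power point whose frequency finishes `cycles` in time."""
    for f_mhz, volts in OPERATING_POINTS:
        if cycles / (f_mhz * 1e6) <= deadline_s:
            return f_mhz, volts
    return OPERATING_POINTS[-1]                  # best effort

def relative_dynamic_power(f_mhz, volts):
    return f_mhz * 1e6 * volts**2

f, v = pick_operating_point(cycles=1_200_000, deadline_s=0.05)
base = relative_dynamic_power(*OPERATING_POINTS[-1])
saved = 1 - relative_dynamic_power(f, v) / base
print(f"run at {f} MHz / {v} V, ~{100 * saved:.0f}% dynamic power saved")
```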
38

Liu, Bo-Jin, and 劉柏進. "Research of Dynamic Modeling of a Small Methanol Burner System Using Experimental Data." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/38881959201743305688.

Full text of the source
Abstract:
Master's thesis
National United University
Master's Program, Department of Electrical Engineering
Academic year 103 (ROC calendar)
Rather than supplying fuel cells from hydrogen storage canisters, fuel cell system designs increasingly rely on methanol steam reforming to produce hydrogen. The efficiency of hydrogen production depends on the distributed temperature characteristics within the burner. This thesis presents a dynamic system model of a small burner suitable for a binary supported-catalyst configuration. The entire burner system is divided into five separate subsystems that match the distributed temperature characteristics of the burner. Equations are fitted to the experimental data with the Curve Fitting Tool of MATLAB and serve as the reference functions for the optimization of the system model. The optimization is implemented with the lsqnonlin function of MATLAB, and the resulting mathematical models are verified in SIMULINK. Dynamic simulation equations for the five burner subsystems are implemented, each exhibiting its own output temperature response. The feasibility of the proposed models is confirmed with simulation and experimental results.
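A Python analogue of the described MATLAB workflow (Curve Fitting Tool plus lsqnonlin), fitting an assumed first-order temperature-response model of one subsystem to synthetic step-response data:

```python
# Nonlinear least-squares fit of a subsystem temperature response,
# scipy.optimize.least_squares playing the role of MATLAB's lsqnonlin.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 300, 61)                       # time [s]
true = 25 + 180 * (1 - np.exp(-t / 60))           # "measured" step response
T_meas = true + np.random.default_rng(4).normal(0, 2, t.size)

def model(p, t):
    T0, dT, tau = p                               # ambient, rise, time constant
    return T0 + dT * (1 - np.exp(-t / tau))

residual = lambda p: model(p, t) - T_meas
fit = least_squares(residual, x0=[20.0, 150.0, 30.0])
print("T0, dT, tau =", np.round(fit.x, 2))
```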
39

Chowdhury, Rezaul Alam. "Algorithms and data structures for cache-efficient computation: theory and experimental evaluation." Thesis, 2007. http://hdl.handle.net/2152/3170.

Full text of the source
40

Chen, Chun-Ming, and 陳俊名. "Experimental Validation on the Dynamic Characteristics of a Small-scaled Hydrodynamic Bearing Used for Data-storage Devices." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/28444037229473291364.

Full text of the source
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Mechanical Engineering
Academic year 92 (ROC calendar)
The objective of this study is to design and construct an experimental system for evaluating the dynamic performance of the small-scale hydrodynamic bearings often used in data-storage devices. To this end, novel design ideas are first proposed: the use of a high-stiffness air bearing and the suspension of the test bearing by a specially designed fixture frame, accompanied by strict manufacturing-precision requirements on the dimensions and geometric characteristics of the system components. Capacitor-type sensors, including two custom-made ones, are then used to measure the motions of the spindle and the bearing. Finally, a calculation algorithm for deriving the performance parameters of the test bearing is proposed, intended primarily to estimate the rotor stiffness coefficients, load capacity, and eccentricity ratio, and to determine whether the test bearing meets the dynamic performance expected by the original design.
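A hedged sketch of the kind of post-processing described, estimating a stiffness coefficient from load/displacement pairs and reporting the eccentricity ratio; all numbers are illustrative assumptions, not measurements from this system:

```python
# Stiffness from quasi-static load/displacement data; eccentricity
# ratio = displacement / radial clearance. Invented values.
import numpy as np

clearance = 3.0e-6                               # radial clearance [m]
load = np.array([0.5, 1.0, 1.5, 2.0])            # applied load [N]
disp = np.array([0.4, 0.8, 1.3, 1.7]) * 1e-6     # journal displacement [m]

k = np.polyfit(disp, load, 1)[0]                 # slope = stiffness [N/m]
eps = disp.max() / clearance                     # eccentricity ratio at max load
print(f"stiffness ~ {k / 1e6:.2f} MN/m, eccentricity ratio ~ {eps:.2f}")
```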
41

FRASCA, FRANCESCA. "An effective strategy for preventive conservation in historic buildings coupling dynamic simulation and experimental data of indoor climate." Doctoral thesis, 2019. http://hdl.handle.net/11573/1261291.

Full text of the source
Abstract:
Preventive conservation consists of all activities that mitigate the degradation of cultural heritage. Among these activities, the study of environmental conditions is crucial for assessing degradation processes as well as for managing and preserving cultural heritage. The ageing of an object and the alteration of its chemical-physical properties are activated and controlled, directly and indirectly, by the microclimate and its fluctuations. Any departure from the microclimate, especially the relative humidity (RH), that has promoted the conservation of an object until now (the historical climate) might be harmful to its future preservation. For this reason, conservation scientists focus on methodologies able to reduce, predict and prevent degradation. Combining experimental and modelling approaches in studies of indoor climate proves effective (a) to diagnose the key factors that determine the microclimate and (b) to predict its dynamic behaviour if boundary conditions change. However, the efficacy of building dynamic simulation strongly depends on the accuracy of the building model, which should reproduce both short- and long-term fluctuations of the indoor climate variables, especially those of RH, which is complex to simulate due to its dependence on many factors. Consequently, dynamic simulation can be effective only when the relative humidity is accurately measured, analysed and modelled. This thesis addresses a very timely topic in preventive conservation by providing a strategy for the control and management of the indoor climate within historic buildings which house permanent collections. To achieve this purpose, the research focused on combining experimental and dynamic simulation studies. Particular attention was paid to moisture modelling as well as to moisture-induced damage in hygroscopic materials. Four main reasons prompted this research: (1) providing an objective assessment of the quality of indoor climate measurements; (2) developing a damage function specific to mechanical degradation; (3) extending the features of a commercial building dynamic simulation software with a one-dimensional heat and moisture transfer model; (4) easing the set-up of the building model using hourly climate variables instead of energy data. Issues (3) and (4) were needed to use dynamic simulation as a diagnostic tool; issue (2) was needed to extend its use from a diagnostic to a predictive tool. The methodology proposed by this research consists of three steps: (i) microclimate monitoring and its characterization for conservation risk assessment based on a dose-response model; (ii) creation of a building model and its calibration; (iii) use of the calibrated building and dose-response models to predict the microclimate evolution after a new strategy of microclimate control. The specific purposes were achieved using different case studies, and the whole strategy was successfully exploited in the case of the Archaeological Museum of Priverno, which might be defined as the pilot case study. The combination of indoor climate measurements with dynamic simulation has proved to be a powerful and flexible tool for assessing a climate-control solution within historic buildings. The proposed approach is completely non-invasive, non-destructive and zero-cost in terms of materials (excluding the cost of the microclimate monitoring itself). Indeed, the conservative quality of the exhibition spaces after modification of the indoor climate is assessed directly in the simulation environment. In this way, the outcomes can advantageously support decision-making for better control and management of the exhibition environment.
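Step (ii) of the methodology can be sketched as a parameter-calibration loop; the one-line "building model" below is a stand-in assumption, not the commercial simulation software extended in the thesis:

```python
# Calibrate an uncertain model parameter (an air-change-like rate) so
# that simulated indoor RH matches a monitored series. Toy model.
import numpy as np

rng = np.random.default_rng(5)
rh_out = 60 + 15 * np.sin(np.linspace(0, 12 * np.pi, 24 * 365))  # outdoor RH

def simulate_rh(ach, rh_outdoor, rh0=55.0):
    """Toy first-order mixing model: indoor RH relaxes to outdoor RH."""
    rh = np.empty_like(rh_outdoor)
    rh[0] = rh0
    for i in range(1, rh.size):
        rh[i] = rh[i - 1] + ach * (rh_outdoor[i] - rh[i - 1])
    return rh

rh_meas = simulate_rh(0.08, rh_out) + rng.normal(0, 1.0, rh_out.size)

grid = np.linspace(0.01, 0.3, 59)
rmse = [np.sqrt(np.mean((simulate_rh(a, rh_out) - rh_meas) ** 2)) for a in grid]
best = grid[int(np.argmin(rmse))]
print(f"calibrated mixing parameter ~ {best:.3f} (true value 0.080)")
```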
42

Monchetti, Silvia. "On the role of uncertainties in the seismic risk assessment of historic masonry towers." Doctoral thesis, 2018. http://hdl.handle.net/2158/1130895.

Full text of the source
Abstract:
The research activity developed in the present work achieved both technical and scientific results on the seismic risk assessment of masonry towers. It provided semi-empirical formulations for main-frequency estimation based on a wide database of historic masonry towers, and a novel framework for seismic risk assessment starting from Bayesian model updating with dynamic experimental data. The recent Italian earthquakes have dramatically highlighted the vulnerability of historic masonry constructions and the need to improve knowledge of their seismic response. Because of this, the challenging issue of obtaining a reliable structural model has become increasingly relevant to the scientific community, promoting the use of sophisticated analysis tools and the need to set up numerical models. However, the significant lack of knowledge on historical structures (e.g., material properties, geometry, construction techniques, boundary conditions) still makes their numerical modelling difficult. The knowledge of the seismic response of this structural typology, taking uncertain parameters into account, represents a critical issue and the main scientific question that this work aimed to answer. The framework proposed in this research started with the finite element (FE) model updating of masonry towers through a Bayesian approach; the prior distribution of relevant uncertain model parameters was converted into the posterior one by using experimental dynamic data. Nonlinear static analyses were employed to estimate the uncertainties in the seismic response of masonry towers. Taking into account both the modelling and the measurement uncertainties, fragility curves were defined for different tower damage levels. Eventually, the seismic hazard was considered, thus leading to the seismic risk assessment of masonry towers. The idea of this work started from the necessity of obtaining a better awareness of the uncertainties involved in the seismic capacity prediction of historic constructions, and their effect on the overall reliability. Although the proposed results are based on the application to a real case study, the procedure may be used for other similar structures and may represent an effective and alternative tool for the seismic risk quantification of historic masonry constructions.
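A minimal sketch of the Bayesian model updating step, with a random-walk Metropolis sampler updating a masonry elastic modulus from one identified frequency, assuming f ∝ √E for the tower model; prior, likelihood and numbers are assumptions of this example, not the case-study values:

```python
# Random-walk Metropolis update of an elastic modulus E [GPa] from a
# measured natural frequency; Gaussian prior and likelihood assumed.
import numpy as np

rng = np.random.default_rng(6)
f_meas, sigma_f = 1.20, 0.03          # identified frequency [Hz] and noise
E0, f0 = 2.0, 1.00                    # reference modulus -> reference frequency

model_f = lambda E: f0 * np.sqrt(E / E0)
log_prior = lambda E: -0.5 * ((E - 2.5) / 1.0) ** 2 if E > 0 else -np.inf
log_like = lambda E: -0.5 * ((model_f(E) - f_meas) / sigma_f) ** 2
log_post = lambda E: log_prior(E) + log_like(E)

E, chain = 2.5, []
for _ in range(20000):
    E_prop = E + rng.normal(0, 0.1)   # symmetric proposal
    if np.log(rng.random()) < log_post(E_prop) - log_post(E):
        E = E_prop
    chain.append(E)

post = np.array(chain[5000:])         # discard burn-in
print(f"posterior E = {post.mean():.2f} +/- {post.std():.2f} GPa")
```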
43

Wang, Wei. "Exploration of Tikhonov regularization for the fusion of experimental data and computational fluid dynamics." Thesis, 1999. http://hdl.handle.net/1911/17309.

Full text of the source
Abstract:
A method is developed to fuse Computational Fluid Dynamics (CFD) simulations and experimental data through the use of Tikhonov regularization. Inviscid-Viscous Interaction and Thin-Layer Navier-Stokes equation models are used to provide CFD solutions for the flow past NACA 0012 and RAE 2822 airfoils, respectively. The velocity profile within the boundary layer and the pressure coefficient on the surface of the airfoil are merged with the corresponding experimental data. A finite element approach is applied to accomplish the numerical solution of the Tikhonov regularization method. By using an over- or under-relaxation technique, relatively few iterations are needed to achieve convergence of the fusion method. The results demonstrate that a priori CFD solutions of low fidelity can be improved by the experimental data at less computational cost than more sophisticated CFD models. Alternatively, sparse and scattered experimental data are efficiently processed by utilizing CFD models as regularization. The limitations of the Tikhonov regularization method have been examined. The results show that the fusion method has significant advantages over a nonlinear least-squares polynomial approach for interpolating and extrapolating experimental data.
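The fusion idea can be sketched as the regularized least-squares problem min_x ||Ax − b||² + λ||L(x − x_cfd)||², with sparse measurements in b and a low-fidelity CFD profile x_cfd acting as the prior; everything below is a toy stand-in, not the finite element formulation of the thesis:

```python
# Tikhonov-regularized fusion of sparse "measurements" with a crude
# CFD-like boundary-layer profile, solved via the normal equations.
import numpy as np

n = 100
y = np.linspace(0, 1, n)
x_cfd = y ** (1 / 5)                       # crude power-law CFD profile
x_true = np.tanh(4 * y) / np.tanh(4)       # stand-in for reality

idx = np.arange(5, n, 12)                  # sparse measurement locations
A = np.eye(n)[idx]                         # sampling operator
b = x_true[idx] + np.random.default_rng(7).normal(0, 0.01, idx.size)

L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # smoothness operator
lam = 1e-2
lhs = A.T @ A + lam * L.T @ L              # normal equations of the problem
rhs = A.T @ b + lam * L.T @ L @ x_cfd
x_fused = np.linalg.solve(lhs, rhs)

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("CFD-only error:", err(x_cfd), " fused error:", err(x_fused))
```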
44

Badza, Aleksandar. "A Quantitative Evaluation of Lagrangian Coherent Structure Detection Methods Based on Computational and Experimental Limitations." Thesis, 2022. https://hdl.handle.net/2440/135608.

Full text of the source
Abstract:
Lagrangian coherent structures are used in fluid mechanics and the analysis of dynamic systems to visualise the most influential flow structures present within a velocity system over a finite period of time. Over the last two decades, a wide variety of methods have been conceptualised for the numerical detection of various forms of these structures within different flows. These include continuous curves of maximal particle repulsion which act as flow barriers, two-dimensional objects such as jets or eddies formed from more robust flow behaviour, or larger partitions which remain separated from the rest of the domain over an entire flow interval. While some studies which focus on comparing the basic functionality of groups of these methods have been undertaken, the impact of certain computational factors such as the uncertainty of velocity data or the available resolution of said data on the resultant structures generated from these methods has seldom been investigated. In this Thesis, we address both of these issues by performing a systematic analysis of eight of these Lagrangian coherent structure detection methods using a variety of velocity systems including analytically defined flows (such as the Double Gyre, a non-autonomous Stuart vortex system and the Bickley jet), computational fluid dynamics velocity data (corresponding to flows which each contain two layers of Kelvin-Helmholtz instability) and an oceanographic velocity data set representing the Gulf Stream. The methods we consider here are the finite time Lyapunov exponent (a measure of the exponential stretching rate of flow trajectories), variational Lagrangian coherent structures (geodesic solutions of variational problems related to flow stretching), Lagrangian averaged vorticity deviation (an objective measure of the vorticity of a flow trajectory against that of the entire domain), stochastic sensitivity (the expected uncertainty of a Lagrangian flow trajectory), the transfer operator (a probabilistic method which seeks density distributions that remain coherent), the dynamic Laplace operator (an extension of the transfer operator method which explicitly includes diffusivity), fuzzy c-means clustering (grouping together collections of flow trajectories based on their consistent proximity) and coherent structure colouring (identifying coherent flow objects from how similarly groups of flow trajectories evolve as a flow advances). We compare the types of Lagrangian coherent structure each method is able to produce, and test how these methods react to the addition of stochastic noise to the velocity data which represents a flow. From our results, methods which detect two-dimensional coherent flow structures rather than the boundaries which separate them, such as coherent structure colouring, Lagrangian averaged vorticity deviation, stochastic sensitivity, the transfer operator and the dynamic Laplace operator, are less sensitive to velocity uncertainty and give a more thorough picture of the most influential flow behaviour observable. We also perform a detailed analysis of the impact of spatial resolution in comparison to the size of coherent structures for each of the methods, both qualitatively by visually comparing the coherent structures produced and quantitatively using the absolute errors of various LCS quantities against a "reference case" produced from the best velocity data resolution available.
Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2022
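Of the eight methods compared in the abstract above, the finite time Lyapunov exponent is the most widely used; a minimal sketch on the standard Double Gyre (textbook parameters, not necessarily those of the thesis) is:

```python
# FTLE sketch: advect a particle grid to get the flow map, then take the
# largest eigenvalue of the Cauchy-Green tensor via finite differences.
import numpy as np
from scipy.integrate import solve_ivp

A, eps, om, T = 0.1, 0.25, 2 * np.pi / 10, 15.0   # Double Gyre parameters

def vel(t, xy):
    x, y = xy
    a = eps * np.sin(om * t)
    b = 1.0 - 2.0 * a
    f = a * x**2 + b * x
    return [-np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y),
            np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * (2 * a * x + b)]

def flow_map(x0, y0):
    sol = solve_ivp(vel, (0.0, T), [x0, y0], rtol=1e-6, atol=1e-8)
    return sol.y[:, -1]

nx, ny = 40, 20
xs, ys = np.linspace(0, 2, nx), np.linspace(0, 1, ny)
F = np.array([[flow_map(x, y) for x in xs] for y in ys])   # (ny, nx, 2)

dFxdy, dFxdx = np.gradient(F[..., 0], ys, xs)
dFydy, dFydx = np.gradient(F[..., 1], ys, xs)
ftle = np.zeros((ny, nx))
for j in range(ny):
    for i in range(nx):
        J = np.array([[dFxdx[j, i], dFxdy[j, i]],
                      [dFydx[j, i], dFydy[j, i]]])
        ftle[j, i] = np.log(np.linalg.eigvalsh(J.T @ J)[-1]) / (2 * T)
print("max FTLE:", ftle.max())
```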
45

Vijaykumar, Anand. "Numerical Simulation of the Flow Field in 3D Eccentric Annular and 2D Centered Labyrinth Seals for Comparison with Experimental LDA Data." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8701.

Full text of the source
Abstract:
The flow field in an annular seal is simulated for synchronous circular whirl orbits with a 60 Hz whirl frequency and a clearance/radius ratio of 0.0154 using the Fluent Computational Fluid Dynamics (CFD) code. Fluent's Moving Reference Frame (MRF) model is used to render the flow quasi-steady by transforming to a rotating frame. The computed flow fields for velocity, pressure and shear stress are compared with the experimental data of Winslow, Thames and Cusano, and the CFD predictions are found to be in good agreement with the experimental results. The present CFD methodology can be extended to other whirl frequencies and clearances. The dynamic wall pressure distributions in an annular seal for non-circular whirl orbits were obtained using CFD. The simulations were performed using a time-dependent solver utilizing Fluent's Dynamic Mesh model and User Defined Functions (UDFs). The wall pressure distributions obtained from the simulations are compared with the data of Cusano. The CFD simulations overpredicted the pressure field when compared to the experimental results; however, the general trends in the pressure contours are similar. The flow fields for varying rotor eccentricities are also studied by performing coordinate transformations and rendering the flow quasi-steady at set eccentricities using Fluent's MRF model. The computed velocity and pressure fields are compared with the time-dependent solution obtained using Fluent's Dynamic Mesh model and UDFs for the same eccentricity. Good agreement in the velocity fields is obtained; however, the pressure fields require further investigation. 2D labyrinth seal simulations were performed for comparison with experimental LDA data from Johnson. The velocity fields match the experimental LDA data to a fair extent; however, the Fluent simulations underpredicted the secondary recirculation zones in the Labyrinth Backward Swirl (LBS) case.
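The quasi-steady trick behind the MRF approach is the change to a frame rotating with the whirl, in which the orbiting-seal flow becomes steady: u_rel = u − Ω × r. A trivial sketch with illustrative values:

```python
# Velocity transformation to a whirl-synchronous rotating frame.
import numpy as np

omega = np.array([0.0, 0.0, 2 * np.pi * 60])   # 60 Hz whirl about z [rad/s]
r = np.array([0.05, 0.0, 0.0])                 # point on the rotor surface [m]
u_abs = np.array([0.0, 22.0, 4.0])             # absolute velocity there [m/s]

u_rel = u_abs - np.cross(omega, r)             # velocity in the rotating frame
print("relative velocity [m/s]:", u_rel)
```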
46

Marshall, Lucianne M. "Progression of marine phytoplankton blooms and environmental dynamics from sea-ice coverage to open waters in the coastal Arctic: comparing experimental data with continuous cabled observations." Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/10131.

Full text of the source
Abstract:
In this thesis, I present a unique temporal study of phytoplankton, nutrient and environmental dynamics focussed on the transitional period between sea-ice cover and open waters in a coastal inlet of the Canadian Arctic during 2016. I also compared the 2016 experimental data with continuous observations made by the Ocean Networks Canada (ONC) underwater observatory. Surface seawater sampling was conducted in Cambridge Bay with high temporal resolution from June 16 to August 3, to measure phytoplankton carbon and nitrate utilisation, silica production, phytoplankton biomass, phytoplankton taxonomy and dissolved nutrients. Throughout the study period, nitrate concentrations averaged 0.67 ± 0.08 µmol L-1, and chlorophyll a and primary production were low at 0.11 ± 0.005 µg L-1 and 0.25 ± 0.02 µmol C L-1 d-1, respectively. The presence of sea-ice reduced physical mixing, which resulted in low surface nitrate concentrations. Phytoplankton assemblages, production rates and biomass were dominated by small flagellated cells (<5 µm) until late July, yet increases in temperature and nitrate later in the season enabled larger Chaetoceros spp. diatoms to bloom. The Chaetoceros bloom coincided with a peak in silica production (0.429 µmol Si L-1 d-1), which was otherwise low but variable. The time series was divided into three phases based on changes in environmental conditions, and these phases were used to evaluate changes in biological dynamics. Phase I was characterised by sea-ice, low nitrate, and increasing phytoplankton biomass and primary production. Phase II was a transitional period with calm water conditions and a drop in phytoplankton biomass; however, an increase in the mean nitrate concentration enabled more consistent carbon fixation. Phase III had greater environmental variability driven by mixing events. The mixing of the water column in Phase III enabled larger Chaetoceros spp. to become prevalent in the surface waters, contributing increasingly to the biomass and carbon utilisation. Overall, the nutrient concentrations, levels of biomass and production rates in Cambridge Bay were more reflective of those from oligotrophic regions. When comparing experimental data with observations made by the ONC observatory, a strong relationship between carbon utilisation and apparent oxygen utilisation became evident. This finding suggests that long-term in situ observations can potentially be used to monitor biological rates in the Arctic. The temporal resolution of this field study adds a seasonal perspective to our understanding of Arctic ecosystems, complements studies with greater spatial and interannual coverage, and can contribute to future numerical modelling of Arctic change. Furthermore, this study provides a first-time comparison between experimentally measured phytoplankton production and cabled observations in the Arctic.
Graduate
2019-09-07
47

Gottschall, Julia [Verfasser]. "Modelling the variability of complex systems by means of Langevin processes : on the application of a dynamical approach to experimental data / von Julia Gottschall." 2009. http://d-nb.info/996615105/34.

Full text of the source