Theses on the topic "Ellipsoidal analysis"

To see other types of publications on this topic, follow the link: Ellipsoidal analysis.

Create an accurate reference in APA, MLA, Chicago, Harvard, and various other styles


Consult the 21 best theses for your research on the topic "Ellipsoidal analysis."

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Kharechko, Andriy. "Linear and ellipsoidal pattern separation: theoretical aspects and experimental analysis." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/195011/.

Abstract:
This thesis deals with a pattern classification problem, which geometrically implies data separation in some Euclidean feature space. The task is to infer a classifier (a separating surface) from a set or sequence of observations; this classifier is later used to discern observations of different types. In this work, the classification problem is viewed from the perspective of optimization theory: we suggest an optimization problem for the learning model and adapt optimization algorithms to solve the learning problem. The aim of this research is twofold, so the thesis splits into two self-contained parts, since it deals with two different types of classifiers, each in a different learning setting. The first part deals with linear classification in the online learning setting and includes analysis of existing polynomial-time algorithms: the ellipsoid algorithm and the perceptron rescaling algorithm. We establish that they are based on different types of the same space dilation technique, and derive a parametric version of the latter algorithm, which improves its complexity bound and exploits extra information about the problem. We also translate some results from information-based complexity theory to the optimization model to suggest tight lower bounds on the learning complexity of this family of problems. To conclude this study, we experimentally test both algorithms on the positive semidefinite constraint satisfaction problem. Numerical results confirm our conjectures on the behaviour of the algorithms as the dimension of the problem grows. In the second part, we shift our focus from linear to ellipsoidal classifiers, which form a subset of second-order decision surfaces, and tackle a pattern separation problem with two concentric ellipsoids, where the inner ellipsoid encloses one class (normally the class of interest, if there is one) and the outer excludes inputs of the other class(es). The classification problem leads to a semidefinite program, which allows us to harness efficient interior-point algorithms for solving it. This part includes analysis of the maximal separation ratio algorithm
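
For a concrete picture of the first part, the classical central-cut ellipsoid method can be used as an online linear separator. The sketch below is a toy version under assumed data and parameter choices, not the thesis's implementation:

```python
# Toy sketch of the central-cut ellipsoid method used as an online linear
# separator: find w with y_i * <x_i, w> > 0 for all i. Data, iteration budget
# and the initial ball are illustrative assumptions.
import numpy as np

def ellipsoid_separator(X, y, max_iter=500):
    n = X.shape[1]
    z = np.zeros(n)          # ellipsoid centre = current guess for w
    P = 1e4 * np.eye(n)      # shape matrix of the localizing ellipsoid
    for _ in range(max_iter):
        viol = np.flatnonzero(y * (X @ z) <= 0)
        if viol.size == 0:
            return z         # z separates the data
        a = y[viol[0]] * X[viol[0]]       # violated constraint: a @ w > 0
        Pa = P @ a
        z = z + Pa / ((n + 1) * np.sqrt(a @ Pa))   # shift centre into kept half-space
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pa, Pa) / (a @ Pa))
    return None              # no separator found within the budget

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]))   # labels from a planted separator
print(ellipsoid_separator(X, y))
```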
2

Poltera, Carina M. "Numerical analysis of spline generated surface Laplacian for ellipsoidal head geometry." Virtual Press, 2007. http://liblink.bsu.edu/uhtbin/catkey/1371849.

Abstract:
Electroencephalography (EEG) is a valuable tool for clinical and cognitive applications. EEG allows for measuring and imaging of scalp potentials emitted by brain activity and allows researchers to draw conclusions about underlying brain activity and function. However, EEG is limited by poor spatial resolution due to various factors. One reason is the fact that EEG electrodes are separated from current sources in the brain by cerebrospinal fluid (CSF), the skull, and the scalp. Unfortunately, the conductivities of these tissues are not yet well known, which limits the spatial resolution of EEG. Based on prior research, the spatial resolution of EEG can be improved via various mathematical techniques that provide increased accuracy of the representation of scalp potentials. One such method is the surface Laplacian, which has been shown to be a direct approach to improving EEG spatial resolution. Yet this approach depends on a geometric head model, and much work has been done assuming the human head to be spherical. In this project, we develop a mathematical model for ellipsoidal head geometry based on the surface Laplacian calculations by Law [1]. The ellipsoidal head model is a more realistic approximation of the human head shape and can therefore improve the accuracy of EEG imaging calculations. We construct a computational program that uses the ellipsoidal head geometry in the hope of providing more accurate data fits than the spherical head models. We also demonstrate that the spline surface Laplacian calculations do indeed increase the spatial resolution, affording a greater impact to the clinical and cognitive study community involving EEG.
Department of Physics and Astronomy
3

Ansell, Seth. "A study of ellipsoidal variance as a function of mean CIELAB values in a textile data set." Online version of thesis, 1995. http://hdl.handle.net/1850/12232.

4

Zammali, Chaima. "Robust state estimation for switched systems: application to fault detection." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS124.

Abstract:
This thesis deals with state estimation and fault detection for a class of switched linear systems. Two interval state estimation approaches are proposed. The first is developed for both continuous- and discrete-time linear parameter-varying switched systems subject to measured polytopic parameters. The second concerns a new switching signal observer, combining sliding-mode and interval techniques, for a class of switched linear systems with unknown input. Since state estimation is one of the fundamental steps in fault detection, robust fault detection solutions are then developed using set-membership theory. Two interval techniques are designed for fault detection in discrete-time switched systems: a commonly used interval observer, designed with an L∞ criterion to improve fault detection accuracy, and a new interval observer structure (the TNL structure) that relaxes the cooperativity constraint. In addition, a robust fault detection strategy is developed using zonotopic and ellipsoidal analysis. Based on optimization criteria, the zonotopic and ellipsoidal techniques provide dynamic thresholds for residual evaluation and a systematic, effective way to improve the accuracy of fault detection results without the nonnegativity (cooperativity) assumption. The techniques developed in this thesis are illustrated on academic examples, and the results show their effectiveness.
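
The interval machinery behind such observers rests on elementwise bounds propagated through the dynamics. A minimal sketch, assuming a discrete-time linear system x⁺ = Ax + w with a bounded disturbance (toy matrices, not the thesis's TNL or switched-system designs):

```python
# One-step interval propagation for x+ = A x + w, with x in [xlo, xhi]
# and w in [wlo, whi]; A is split as A = Ap - An with Ap, An >= 0.
import numpy as np

def interval_step(A, xlo, xhi, wlo, whi):
    Ap = np.maximum(A, 0.0)
    An = Ap - A
    lo = Ap @ xlo - An @ xhi + wlo   # tight elementwise lower bound
    hi = Ap @ xhi - An @ xlo + whi   # tight elementwise upper bound
    return lo, hi

A = np.array([[0.9, 0.2], [-0.1, 0.7]])
xlo, xhi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
wlo, whi = np.array([-0.05, -0.05]), np.array([0.05, 0.05])
for k in range(3):
    xlo, xhi = interval_step(A, xlo, xhi, wlo, whi)
    print(k, xlo.round(3), xhi.round(3))
```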
5

Loukkas, Nassim. "Synthèse d'observateurs ensemblistes pour l'estimation d'état basées sur la caractérisation explicite des bornes d'erreur d'estimation" [Design of set-membership observers for state estimation based on the explicit characterization of estimation error bounds]. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT040/document.

Abstract:
In this work, we propose two new set-membership approaches to state estimation based on an explicit characterization of the estimation error bounds. Both can be seen as the combination of a punctual observer with a set-membership characterization of the observation error. The objective is to reduce the complexity of the online implementation, reduce the online computation time, and improve the accuracy of the estimated state enclosure. The first approach is a set-membership observer based on ellipsoidal invariant sets for linear discrete-time systems and for linear parameter-varying systems. It provides a deterministic state interval built as the sum of the estimated system state and its corresponding estimation error bounds; an important feature is that it does not require propagating sets over time. The second approach is an interval version of the Luenberger state observer for uncertain discrete-time linear systems, based on interval computation and invariant sets. Here, the set-membership state estimation problem is treated as a punctual state estimation problem coupled with an interval characterization of the estimation error.
6

Hellwig, Michael. "Analysis of mutation strength adaptation within evolution strategies on the ellipsoid model and methods for the treatment of fitness noise." Ulm: Universität Ulm, 2017. http://d-nb.info/1126579572/34.

7

Picasso, Bruno. "Stabilization of quantized linear systems: analysis, synthesis, performance and complexity." Doctoral thesis, Scuola Normale Superiore, 2008. http://hdl.handle.net/11384/85705.

8

Filho, Sylvio Celso Tartari. "Modelagem e otimização de um robô de arquitetura paralela para aplicações industriais" [Modeling and optimization of a parallel-architecture robot for industrial applications]. Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-07122006-151723/.

Abstract:
This work studies parallel-architecture robots, focusing on their modeling and optimization. No physical prototype was built, although the virtual models may help those willing to build one in the future. After searching for an application that could benefit from the use of a parallel robot, a second search was made for a suitable architecture among those already reported in the literature. Once the architecture was selected, the kinematic and dynamic analyses followed, with emphasis on the inverse kinematics and inverse dynamics, the latter using the Newton-Euler formulation. A virtual simulator was developed in the MATLAB 6.5 environment; its main purpose is to demonstrate that the methods applied are correct and efficient, and it offers several capabilities such as linear and circular interpolation and the use of multiple coordinate frames. An algorithm was then added to compute the machine's workspace from user requirements regarding the manipulator pose, taking the imposed constraints into account; this can be the workspace associated with a particular posture or the total orientation workspace. Algorithms that measure the manipulator's performance with respect to the uniformity and utilization of actuator forces were also incorporated into the simulator, which can display the force ellipsoid along any movement executed by the moving platform. For the optimization, part of the previously built toolset was used to arrive at a machine model that respects minimum constraints on the size and shape of its workspace while maintaining the best possible performance inside this volume.
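
The force ellipsoid mentioned above follows from the manipulator Jacobian: with joint torques bounded by ||τ|| ≤ 1 and τ = JᵀF, the reachable end-effector forces satisfy Fᵀ(JJᵀ)F ≤ 1. A minimal sketch, assuming a hypothetical square Jacobian (illustrative numbers only, not from the thesis):

```python
# Principal axes of the end-effector force ellipsoid from a Jacobian J:
# the set {F : F^T (J J^T) F <= 1} has semi-axis 1/s_i along u_i, where
# J = U diag(s) V^T is the singular value decomposition.
import numpy as np

def force_ellipsoid(J):
    U, s, _ = np.linalg.svd(J)
    return U, 1.0 / s            # direction u_i admits force magnitude 1/s_i

J = np.array([[0.8, 0.1, -0.3],
              [0.2, 1.1,  0.4],
              [0.0, -0.2, 0.9]])  # hypothetical nonsingular 3x3 Jacobian
U, axes = force_ellipsoid(J)
for u, r in zip(U.T, axes):
    print("axis", u.round(3), "semi-axis length", round(float(r), 3))
```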
9

Liu, Ming-yi (劉明益). "Synthesis and Optical Analysis of Ellipsoidal Hematite (α-Fe2O3) Colloidal Particles." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/90064517751926768957.

Abstract:
Master's thesis, National Chung Cheng University, Institute of Chemical Engineering (ROC academic year 98).
The dispersion properties and structural features of colloidal particles can be characterized by light scattering techniques, yet previous studies of ellipsoidal colloidal particles lack complete light scattering characterizations. In this study, we describe the synthesis and optical analysis of ellipsoidal colloidal particles made of hematite. The ellipsoidal hematite particles were prepared by a forced hydrolysis method and disperse well in water after the addition of the TMAH surfactant. The dynamics and structure of bare hematite particles and TMAH-stabilized hematite particles were investigated by dynamic light scattering (DLS), depolarized dynamic light scattering (DDLS), and static light scattering (SLS). The dispersity of the two types of colloidal particles can be characterized by DLS, giving the effective radius of the particles. Compared with SEM results, the size and shape of the particles in the solution state can be reasonably obtained by DDLS and SLS. Overall, we found that SLS analysis using the ellipsoid form factor best describes the size and shape of the synthesized colloidal particles.
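
For reference, the standard orientation-averaged form factor of an ellipsoid of revolution with semi-axes (a, b, b) underlies such SLS fits; the sketch below uses hypothetical particle dimensions, not the thesis's fitted values:

```python
# Orientation-averaged form factor of an ellipsoid of revolution:
# P(q) = int_0^{pi/2} [3 j1(q r)/(q r)]^2 sin(alpha) d(alpha),
# with r(alpha) = sqrt(a^2 cos^2(alpha) + b^2 sin^2(alpha)).
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

def ellipsoid_form_factor(q, a, b):
    def integrand(alpha):
        r = np.sqrt((a * np.cos(alpha))**2 + (b * np.sin(alpha))**2)
        x = q * r
        return (3.0 * spherical_jn(1, x) / x)**2 * np.sin(alpha)
    val, _ = quad(integrand, 0.0, np.pi / 2)   # sin(alpha) weight integrates to 1
    return val

for q in (0.005, 0.01, 0.02, 0.05):           # q in nm^-1; a = 150 nm, b = 40 nm
    print(q, ellipsoid_form_factor(q, 150.0, 40.0))
```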
10

Chen, Chin-Chih (陳勁志). "Skew Ray Tracing and Sensitivity Analysis of Variable Focus Ellipsoidal Optical Boundary Surfaces." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/18229330907469238360.

Abstract:
Master's thesis, Cheng Shiu University, Institute of Electronic Engineering (ROC academic year 102).
One of the most popular mathematical tools in the fields of robotics, mechanisms and computer graphics is the 4x4 homogeneous transformation matrix. Our group’s previous application of the homogeneous transformation matrix to optical systems containing flat and spherical boundaries is extended in this study to optical systems containing variable focus ellipsoidal surfaces for: (1) Skew ray tracing to determine the paths of reflected/refracted skew rays; (2) Sensitivity analysis for direct mathematical expression of the differential changes of incident points and reflected/refracted vectors with respect to changes in incident light sources and boundary geometric parameters; (3) Mathematical design of variable focus ellipsoidal optical boundary surfaces for a wider range of applications than presently considered. The presented methodology is highly suited to digital implementation, allowing direct and rapid analytical statement of ray path, chief ray, marginal rays and merit functions.
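
The boundary-crossing step that such ray tracing iterates can be written in generic vector form; the sketch below uses the standard Snell/reflection formulas with unit vectors, not the thesis's 4x4 homogeneous-matrix formulation:

```python
# Vector reflection/refraction at a boundary point (unit vectors assumed;
# the normal n opposes the incident direction d; eta = n1 / n2).
import numpy as np

def reflect(d, n):
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, eta):
    cos_i = -np.dot(n, d)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

d = np.array([0.0, -np.sin(np.pi / 6), -np.cos(np.pi / 6)])  # 30 deg incidence
n = np.array([0.0, 0.0, 1.0])                                # surface normal
print(reflect(d, n), refract(d, n, 1.0 / 1.5))
```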
11

Ghobadi, Far Khosro. « Analysis of time-variable gravity signal from GRACE data ». Thesis, 2020. http://hdl.handle.net/1959.13/1413263.

Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
The Gravity Recovery and Climate Experiment (GRACE) satellite mission revolutionized our understanding of mass redistribution in the Earth system from 2002 to 2017 by measuring time-variable gravity field with unprecedented accuracy. The conventional data products of GRACE are global monthly-mean snapshots of Level-2 (L2) time-variable gravity, and Level-3 or mascon surface mass change. The global monthly fields are obtained from the fundamental measurements of inter-satellite ranging acquired by the K-band ranging (KBR) system. Relying exclusively on the monthly data confines the application of GRACE to geophysical processes that are mainly characterized by seasonal and inter-annual variations such as terrestrial water, ice and ocean mass change. The primary aim of this thesis is to show that direct analysis of inter-satellite ranging data opens the way for detecting new geophysical mass changes at time-scales of significantly less than one month, such as tsunamis. By pushing the limit of GRACE, this thesis brings new opportunities to study new areas of the Earth system mass change. To study the gravitational effect of regional mass changes using GRACE, we first develop a transfer function based on correlation-admittance spectral analysis for accurate estimation of line-of-sight gravity difference (LGD) from inter-satellite range-acceleration. The correlation spectrum between LGD and range-acceleration shows near-unity correlation for frequencies above 1 mHz or 5 cycles-per-revolution (CPR), and the admittance spectrum quantifies the LGD response to range-acceleration at the correlated frequency band. As the first application, we employ the GRACE LGD observations to quantify surface water storage change and calibrate the stream flow velocity of runoff routing models in large river basins. Our results show that the optimal stream flow velocity for the Amazon and Siberian basins is ~0.3 m/s, while surface water in the Congo and Parana basins is better simulated with a velocity larger than 2.0 m/s. Consequently, surface water change explains as much as half of total water storage anomaly in the Amazon, while its contribution in Congo and Parana basins is almost negligible at the monthly temporal resolution [Ghobadi-Far et al., 2018, JGR Solid Earth]. Secondly, we examine the gravitational effect of tsunami-induced transient ocean mass change at 500 km altitude and its observation using GRACE. By upward continuing the gravitational effect of tsunami wave field to satellite altitude and comparison with GRACE LGD, we show that GRACE satellites have detected the tsunamis triggered by the great 2004 Sumatra, 2010 Maule, and 2011 Tohoku earthquakes. GRACE provides an independent source of information useful to discriminate among various seismic source models. This study in particular points to the potential of GRACE Follow-On to deliver low-latency gravimetric data for monitoring transient mass change due to extreme events such as tsunamis and hurricanes [Ghobadi-Far et al., 2019, under review, J. Geodesy]. Regional-scale co- and post-seismic gravity changes caused by great earthquakes are now routinely observed by GRACE L2 time-variable gravity data. Earthquakes also excite global-scale transient gravity changes at certain frequencies associated with Earth’s free oscillations which could last up to several days. In this study, we examine the global transient gravity changes excited by Earth’s free oscillations using the GRACE inter-satellite ranging data. 
By extending the Kaula orbit perturbation theory, we show that excited frequencies in GRACE KBR data are described by a linear combination of eigenfrequencies of the normal modes, Earth’s rotation rate, and satellite angular velocity. Wavelet analysis of the actual KBR residuals in December 2004 reveals the existence of a significant transient signal after the 2004 Sumatra earthquake with a frequency of ~0.022 mHz, which could be potentially related to the largest excitation due to the “football” mode. However, GRACE accelerometer noise seems to affect the reliability of the obtained results [Ghobadi-Far et al., 2019, JGR Solid Earth]. As the final contribution in this thesis, we put forward a rigorous theory for determining improved surface mass change from GRACE L2 data. The L2 time-variable gravity data are conventionally converted into surface mass change on the spherical Earth. Considering the accuracy of the current L2 data, we show that such simplistic spherical geometry is no longer tenable. We derive a unique one-to-one spectral relationship between the ellipsoidal harmonic coefficients of geopotential and surface mass. In conjunction with our ellipsoidal formulation, the linear transformation between spherical and ellipsoidal geopotential coefficients enables us to determine mass change on the ellipsoid from GRACE L2 data. Using the L2 data to degree 60, we show that the ellipsoidal approach determines mass change rate better than the spherical method by 3 – 4 cm/yr, equivalent to 10 – 15 % increase of total signal, in Greenland and West Antarctica. Our study emphasizes the importance of the ellipsoidal approach for quantifying mass change at polar regions from GRACE and GRACE Follow-On L2 data [Ghobadi-Far et al., 2019, Geophy. J. Int.].
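
For context, the conventional spherical relation that the final chapter generalizes is the classical formula of Wahr et al. (1998) converting geopotential coefficient changes (ΔC_lm, ΔS_lm) into surface mass change (standard notation, quoted here as background):

\[
\Delta\sigma(\theta,\lambda) = \frac{a\,\rho_{\mathrm{ave}}}{3} \sum_{l=0}^{\infty} \sum_{m=0}^{l} \frac{2l+1}{1+k_{l}}\, \tilde{P}_{lm}(\cos\theta)\left[\Delta C_{lm}\cos m\lambda + \Delta S_{lm}\sin m\lambda\right],
\]

where a is the Earth's mean radius, ρ_ave its mean density, k_l the load Love numbers, and P̃_lm the normalized associated Legendre functions; the thesis replaces this spherical spectral relation with a one-to-one ellipsoidal counterpart.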
12

Παναγιωτοπούλου, Βασιλική Χριστίνα. "Ανάλυση της ευστάθειας κατά την ανάπτυξη ελλειψοειδών καρκινικών όγκων" [Stability analysis in the growth of ellipsoidal cancer tumours]. Thesis, 2015. http://hdl.handle.net/10889/8483.

Abstract:
In recent years cancerous tumours have attracted much attention, as the disease affects more and more people every year. Considerable weight has been given, both in research and in clinical practice, to treating cancer through therapeutic techniques (chemotherapy, surgery, etc.) and to improving the living conditions of cancer patients, and substantial emphasis has also been placed on research into the development of cancer at the biochemical level and on a deeper understanding of the disease. This research involves scientists of many different specialties, among them mathematicians. Since 1954, when Armitage and Doll proposed a mathematical model of the genesis of cancerous tumours, many researchers have worked on the mathematical modelling of the various phases of cancer, from its genesis to its resistance to drug treatment. This thesis deals with the mathematical foundation and modelling of cancerous tumours with respect to their geometric growth. Based on the fundamental mathematical model proposed in 1976 by H. P. Greenspan, the effect of surface perturbations on the growth of spherical and ellipsoidal tumours is studied. In the original work the study was restricted to the analysis of perturbations in the polar angle of the spherical coordinates. Here the perturbation model is first generalized to both angles of the spherical coordinate system (polar θ and azimuthal φ). The method is then extended to three models that generalize the assumptions of the original model while retaining spherical geometry, and the stability of the corresponding surface perturbations is studied. Finally, the stability of the same problem is studied in ellipsoidal geometry, since the anisotropy of the ellipsoidal shape makes it a more realistic approximation of the actual shape of a cancerous tumour.
The mathematical analysis of tumour growth has attracted a lot of interest in the last two decades. However, as of today no generally accepted model of tumour growth exists. This is due partly to the incomplete understanding of the related pathology and partly to the extremely complicated processes that guide the evolution of a tumour. Moreover, the growth of a tumour depends on the available tissue surrounding it, and therefore it represents a physical case that is realistically modelled by ellipsoidal geometry. The remarkable aspect of the ellipsoidal shape is that it represents the sphere of anisotropic space: it provides the appropriate geometrical model for any direction-dependent physical quantity. In the present work we analyze the stability of a spherical tumour for four continuous models of an avascular tumour, together with the stability of an ellipsoidal tumour. For all five models, conditions for stability are stated and the results are implemented numerically. For the spherical cases, it is observed that the steady-state radii that secure the stability of the tumour differ for each of the four models, and this results in differences in the stable and unstable modes. As for the ellipsoidal model, it is shown that, in contrast to the highly symmetric spherical case, where stability can be achieved, there are no conditions that secure the stability of an ellipsoidal tumour. Hence, as in many physical cases, the observed instability is a consequence of the lack of symmetry.
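
Schematically (notation ours, not the thesis's), a Greenspan-type stability analysis perturbs the free boundary of a spherical tumour as

\[
R(\theta,\varphi,t) = R_{0}(t) + \varepsilon \sum_{n,m} a_{nm}(t)\, Y_{nm}(\theta,\varphi), \qquad \varepsilon \ll 1,
\]

and calls the tumour stable when every amplitude a_nm(t) decays; restricting to m = 0 recovers the polar-angle-only analysis of the original model, while the ellipsoidal case requires an expansion in ellipsoidal harmonics instead.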
13

Biswal, Suryakanta. "Uncertainty Based Damage Identification and Prediction of Long-Time Deformation in Concrete Structures." Thesis, 2016. http://etd.iisc.ac.in/handle/2005/3143.

Abstract:
Uncertainties are present in the inverse analysis of damage identification with respect to the given measurements, mainly modelling uncertainties and measurement uncertainties. Modelling uncertainties occur due to constructing a representative model of the real structure through finite element modelling and representing damage in the real structure through changes in material parameters of the finite element model (assuming a smeared crack approach). Measurement uncertainties are always present in the measurements, regardless of the accuracy with which the measurements are made or the precision of the instruments used. The modelling errors in the finite element model are assumed to be encompassed in the updated uncertain parameters of the model, given the uncertainties in the measurements and the prior uncertainties of the parameters. The uncertainties in the direct measurement data are propagated to the estimated output data. Empirical models from codal provisions and standard recommendations are normally used for prediction of long-time deformations in concrete structures. Uncertainties are also present in the creep and shrinkage models, in the parameters of these models, in the shrinkage and creep mechanisms, in the environmental conditions, and in the in-situ measurements. All these uncertainties need to be considered in damage identification and in the prediction of long-time deformations in concrete structures. In the context of modelling uncertainty, uncertainties can be categorized as aleatory or epistemic. Aleatory uncertainty deals with the irresolvable indeterminacy about how the uncertain variable will evolve over time, whereas epistemic uncertainty deals with lack of knowledge. In the field of damage detection and prediction of long-time deformations, aleatory uncertainty is modeled through probabilistic analysis, whereas epistemic uncertainty can be modeled through (1) interval analysis, (2) ellipsoidal modeling, (3) fuzzy analysis, (4) Dempster-Shafer evidence theory, or (5) imprecise probability. It is often difficult to determine whether a particular uncertainty should be considered aleatory or epistemic, and the model builder makes the distinction, choosing based on the general state of scientific knowledge, on the practical need for limiting the model sophistication to a significant engineering importance, and on the errors associated with the measurements. Measurement uncertainty can be stated as the dispersion of real data resulting from systematic error (instrumental error, environmental error, observational error, human error, drift in measurement, measurement of the wrong quantity) and random error (all errors apart from systematic errors). Most instrumental errors given by manufacturers are stated as plus-minus ranges and are better represented through interval bounds. The vagueness involved in the representation of human error, observational error, and drift in measurement can be represented through interval bounds. Deliberate measurement of the wrong quantity through cheaper and more convenient measurement units can lead to bad-quality data. Data quality can be better handled through interval analysis, with good-quality data having narrow interval bounds and bad-quality data having wide interval bounds.
The environmental error, the electronic noise coming from transmitting the data, and the random errors can be represented through probability distribution functions. A major part of the measurement uncertainties is thus better represented through interval bounds, and the remainder through probability distributions. The uncertainties in the direct measurement data are propagated to the estimated output data (in damage identification techniques, the damage parameters; in long-time deformation, the uncertain parameters of the deformation models, which are then used for the prediction of long-time deformations). Uncertainty based damage identification techniques and long-time deformations in concrete structures require further study when the measurement uncertainties are expressed through interval bounds only, or through both intervals and probability using imprecise techniques. The thesis is divided into six chapters. Chapter 1 provides a review of existing literature on uncertainty based techniques for damage identification and prediction of long-time deformations in concrete structures. A brief review of uncertainty based methods for engineering applications is made, with special emphasis on the need for interval analysis and imprecise probability in modeling uncertainties in damage identification techniques. The review identifies that the available techniques for damage identification, where the uncertainties in the measurements and in the structural and material parameters are expressed in terms of interval bounds, lack efficiency when the size of the damaged parameter vector is large. Studies on estimating the uncertainties in the damage parameters, when the uncertainties in the measurements are expressed through imprecise probability analysis, are also identified as problems to be considered in this thesis. The need for estimating the short-term time period, which in turn helps in accurate prediction of long-time deformations in concrete structures, along with a cost-effective and easy-to-use system for measuring the existing prestress forces at various time instances in the short-time period, is also noted. The review identifies that most modelers and analysts have been inclined to select a single simulation model for the long-time deformations resulting from creep, shrinkage and relaxation, rather than take all the possibilities into consideration; such model selection rests on the hardly realistic assumption that a correct model can be selected with certainty, and the lack of confidence associated with model selection brings about uncertainty residing in the given model set. Developing a single best model out of all the available deformation models, when uncertainties are present in the models, in the measurements, and in the parameters of each model, is also identified as a problem to be considered in this thesis. In Chapter 2, an algorithm is proposed, adapting the existing modified Metropolis-Hastings algorithm, for estimating the posterior probability of the damage indices as well as the posterior probability of the bounds of the interval parameters, when the measurements are given in terms of interval bounds. A damage index is defined for each element of the finite element model, considering the parameters of each element to be intervals. Methods are developed for evaluating response bounds in the finite element software ABAQUS when the parameters of the finite element model are intervals.
Illustrative examples include reinforced concrete beams with three damage scenarios, namely (i) loss of stiffness, (ii) loss of mass, and (iii) loss of bond between concrete and reinforcement steel, that have been tested in our laboratory. Comparison of the predictions from the proposed method with those obtained from Bayesian analysis and interval optimization shows improved accuracy and computational efficiency, in addition to better representation of measurement uncertainties through interval bounds. Imprecise probability based methods are developed in Chapter 3 for damage identification using finite element model updating in concrete structures, when the uncertainties in the measurements and parameters are imprecisely defined. Bayesian analysis using the Metropolis-Hastings algorithm for parameter estimation is generalized to incorporate the imprecision present in the prior distribution, in the likelihood function, and in the measured responses. Three different cases are considered: (i) imprecision is present in the prior distribution and in the measurements only, (ii) imprecision is present in the parameters of the finite element model and in the measurements only, and (iii) imprecision is present in the prior distribution, in the parameters of the finite element model, and in the measurements. Illustrative examples include reinforced concrete beams and prestressed concrete beams tested in our laboratory. In Chapter 4, a steel frame is designed to measure the existing prestressing force in concrete beams and slabs when embedded inside the concrete members. The steel frame is designed to work on the principles of a vibrating wire strain gauge and is referred to as a vibrating beam strain gauge (VBSG). The existing strain in the VBSG is evaluated using both frequency data on the stretched member and the static strain corresponding to a fixed static load, measured using electrical strain gauges. The crack reopening load method is used to compute the existing prestressing force in the concrete members, which is then compared with the existing prestressing force obtained from the VBSG at that section. Digital image correlation based surface deformation and the change in neutral axis, monitored by placing electrical strain gauges across the cross section, are used to compute the crack reopening load accurately. Long-time deformations in concrete structures are estimated in Chapter 5 using short-time measurements of deformation responses when uncertainties are present in the measurements, in the deformation models, and in the parameters of the deformation models. The short-time period is defined as the least time up to which, if measurements are made available, they suffice for estimating the parameters of the deformation models in predicting the long-time deformations. The short-time period is evaluated using stochastic simulations where all the parameters of the deformation models are defined as random variables. The existing deformation models are empirical in nature, developed from an arbitrary selection of experimental data sets among all the available data sets, and each model contains some information about the deformation patterns in concrete structures. Uncertainty based model averaging is performed to obtain the single best model for predicting the long-time deformation in concrete structures. Three types of uncertainty models are considered, namely probability models, interval models, and imprecise probability models.
Illustrative examples consider experiments in the Northwestern University database available in the literature, and prestressed concrete beams and slabs cast in our laboratory, for prediction of long-time prestress losses. A summary of contributions made in this thesis, together with a few suggestions for future research, is presented in Chapter 6. Finally, the references studied are listed.
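
The interval-measurement idea in Chapter 2 can be caricatured in a few lines: with a uniform prior and an indicator "likelihood", Metropolis-Hastings reduces to feasibility-based acceptance of candidates whose predicted response stays inside the measured bounds. A toy one-parameter sketch (hypothetical response model, not the thesis's FE-based algorithm):

```python
# Random-walk Metropolis-Hastings with an interval-valued "likelihood":
# uniform prior times an indicator that the model response lies in
# [y_lo, y_hi], so a candidate is accepted iff it is feasible.
import numpy as np

def model_response(k):
    return 10.0 / np.sqrt(k)       # stand-in for an FE-model prediction

def mh_interval(y_lo, y_hi, n_samples=5000, step=0.05, k0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    feasible = lambda k: 0.1 < k < 10.0 and y_lo <= model_response(k) <= y_hi
    k, samples = k0, []
    for _ in range(n_samples):
        cand = k + step * rng.normal()
        if feasible(cand):         # acceptance ratio is 1 inside, 0 outside
            k = cand
        samples.append(k)
    return np.array(samples)

s = mh_interval(y_lo=9.0, y_hi=11.0)
print("mean:", s.mean(), "95% band:", np.quantile(s, [0.025, 0.975]))
```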
14

Sun, Peng, and Robert M. Freund. "Summary Conclusions: Computation of Minimum Volume Covering Ellipsoids." 2003. http://hdl.handle.net/1721.1/3896.

Abstract:
We present a practical algorithm for computing the minimum volume n-dimensional ellipsoid that must contain m given points a₁, …, aₘ ∈ ℝⁿ. This convex constrained problem arises in a variety of applied computational settings, particularly in data mining and robust statistics. Its structure makes it particularly amenable to solution by interior-point methods, and it has been the subject of much theoretical complexity analysis. Here we focus on computation. We present a combined interior-point and active-set method for solving this problem. Our computational results demonstrate that our method solves very large problem instances (m = 30,000 and n = 30) to a high degree of accuracy in under 30 seconds on a personal computer.
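
A simple first-order alternative to the paper's interior-point/active-set method is Khachiyan's barycentric coordinate-ascent algorithm; the sketch below (illustrative tolerance, random test points, not the authors' code) computes the same minimum volume covering ellipsoid to modest accuracy:

```python
# Khachiyan's algorithm for the minimum volume covering ellipsoid
# {x : (x - c)^T A (x - c) <= 1} of the rows of pts.
import numpy as np

def min_volume_ellipsoid(pts, tol=1e-7, max_iter=10000):
    m, n = pts.shape
    Q = np.hstack([pts, np.ones((m, 1))]).T        # lifted points, (n+1) x m
    u = np.full(m, 1.0 / m)                        # barycentric weights
    for _ in range(max_iter):
        X = Q @ (u[:, None] * Q.T)
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)  # leverage scores
        j = np.argmax(M)
        step = (M[j] - n - 1.0) / ((n + 1.0) * (M[j] - 1.0))
        if step < tol:
            break
        u *= (1.0 - step)
        u[j] += step
    c = pts.T @ u
    A = np.linalg.inv(pts.T @ (u[:, None] * pts) - np.outer(c, c)) / n
    return A, c

pts = np.random.default_rng(2).normal(size=(200, 2))
A, c = min_volume_ellipsoid(pts)
print(max((p - c) @ A @ (p - c) for p in pts))     # ~ 1.0 at the support points
```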
Singapore-MIT Alliance (SMA)
15

Pan, Yu Jui (潘禹睿). "The Analysis of the Uniformity of the Coating on Ellipsoid Mirrors." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/34468006256981450997.

16

Hung, Ren-Yi (洪任儀). "Entropy Generation Analysis of Laminar Film Condensation on a Vertical Ellipsoid." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/72542672814088818854.

Abstract:
Master's thesis, National Kaohsiung University of Applied Sciences, Department of Mold and Die Engineering (ROC academic year 95).
This thesis focuses on the thermodynamic second-law analysis of saturated vapor flowing slowly onto, and condensing on, a vertical ellipsoid with variable wall temperature. Based on Bejan's entropy generation method, the dependence of the local and average entropy generation on the Rayleigh number, Jakob number, Brinkman number, and irreversibility ratio is investigated via numerical analysis. The entropy generation technique is applied as a unique measure of the entropy generated by heat transfer and film flow friction during laminar film condensation on a non-isothermal vertical ellipsoid. The results show how the geometric parameter (ellipticity) and the amplitude of the wall temperature variation affect entropy generation during film-wise condensation heat transfer. From the second-law point of view, entropy generation increases with increasing ellipticity.
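
For orientation (standard notation, not the thesis's own), the local entropy generation rate evaluated in this family of condensation studies has the classical two-term Bejan form, heat-transfer irreversibility plus fluid-friction irreversibility:

\[
S'''_{\mathrm{gen}} = \frac{k}{T^{2}}\,(\nabla T)^{2} + \frac{\mu}{T}\,\Phi,
\]

where k is the condensate thermal conductivity, μ its viscosity, and Φ the viscous dissipation function; the Rayleigh, Jakob, and Brinkman numbers appear when this expression is nondimensionalized across the condensate film.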
17

Yang, Shu-Chieh (楊書杰). "Entropy Generation Analysis of Laminar Film Condensation on Vertical Ellipsoids Under Constant Area Surface." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/47914260005865680937.

Abstract:
Master's thesis, National Kaohsiung University of Applied Sciences, Department of Mold and Die Engineering (ROC academic year 97).
This thesis applies the second law of thermodynamics to investigate laminar film condensation heat transfer from vapor flowing slowly onto various vertical ellipsoids of equal surface area, including cases with variable wall temperature. The numerical approach to the entropy generation, based on Adrian Bejan's entropy generation minimization, is conducted in terms of the Brinkman, Jakob, and Rayleigh parameters. The results indicate that the total entropy generation rate is proportional to the ellipticity of the vertical ellipsoid under constant surface area. In other words, the vertical flat plate, the limiting case with ellipticity e = 1, generates the highest entropy, and the sphere, the other limiting case with ellipticity e = 0, generates the lowest entropy.
18

Wu, Tzung-You (吳宗祐). "Thermodynamic Second Law Analysis of Film Condensation from Vapor Flowing Slowly Inside a Semi-Ellipsoid." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/66598261824406068086.

Abstract:
Master's thesis, National Kaohsiung University of Applied Sciences, Department of Mold and Die Engineering (ROC academic year 97).
This thesis performs an entropy generation analysis of film condensation from saturated vapor flowing slowly inside an isothermal vertical semi-ellipsoid. Previous literature shows that the condensation heat transfer performance of an elliptical tube with a vertical major axis is better than that of a circular tube, so it is worth investigating the condensation heat transfer and thermodynamic second-law behaviour of vapor flowing inside a vertical semi-ellipsoid. The working parameters governing the local and average entropy generation caused by heat transfer and film flow friction are found to be the Brinkman, Rayleigh, Bond, and geometric parameters. Based on Adrian Bejan's entropy generation minimization technique, the results show that the local entropy generation rate increases with eccentricity and the Brinkman and Rayleigh parameters, but decreases with increasing surface tension. Furthermore, the heat transfer irreversibility dominates over the friction irreversibility in the upper half of the semi-ellipsoid. Because computational fluid dynamics (CFD) saves time and cost in understanding the system, the VOF method is used to simulate the two-phase flow problem and to verify the feasibility of the numerical simulation against the theoretical solution. The present results can serve as a reference for the design of heat pipes and condensers.
19

Ho, Sheng-Ping (何晟平). "Thermodynamic Second Law Analysis of Film Condensation from Vapor Flowing Slowly on to Semi-Ellipsoid Surface." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/65261644754873364147.

Abstract:
Master's thesis, National Kaohsiung University of Applied Sciences, Graduate Institute of Applied Engineering Science (ROC academic year 99).
This thesis studies the entropy generation of saturated vapor flowing slowly onto, and condensing on, an isothermal semi-ellipsoid, based on Adrian Bejan's entropy generation minimization method. The effects of various parameters, including ellipticity, surface tension, Rayleigh number, and Brinkman number, on the heat transfer characteristics of laminar film condensation on the semi-ellipsoid surface are also investigated. The main entropy generation is caused by finite-temperature-difference heat transfer and by film flow friction. The results show that the local entropy generation rate increases with ellipticity, surface tension, Rayleigh number, Brinkman number, and Jakob number. Further, the heat transfer irreversibility dominates over the friction irreversibility on the semi-ellipsoid. The analyzed laminar film condensation on the semi-ellipsoid surface is also examined with computational fluid dynamics (CFD) software, using the volume-of-fluid (VOF) method to capture the two-phase transition due to film condensation. The theoretical analysis and the numerical simulation are in good agreement. The present results may provide reference data for the design of heat transfer systems.
20

Ku, Ming-Yao (古明曜). "Fractal Contact Properties of Ellipsoids Rough Surface with Adhesion and Coating Hardness Effect and Experimental Analysis of Parameters." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/yunn5r.

Abstract:
Master's thesis, National Formosa University, Graduate Institute of Power Mechanical Engineering (ROC academic year 94).
Most machined surfaces are oriented with respect to the direction of the machine tool relative to the workpiece. In such cases, the profile of the asperities generally has different curvatures in different directions, so anisotropic roughness must be considered. To minimize the adhesion and wear problems of precision machines, numerous coating methods have been designed to improve surface properties. In this work, fractal characterization of surface topography is applied to the study of contact mechanics, and two new elliptic contact models of rough surfaces are developed. From experimental measurements of silicon surfaces, fractal parameters were calculated to determine the contact characteristics of real surfaces. Contact model I extends the MB model by incorporating elastic-plastic transition deformation, elliptic summits, and the hardness effect of the coating film. Model II describes the contact behaviour between rough solids using model I while also taking adhesive properties into account: adhesion occurs at the peaks of the asperities and is pronounced when the surface roughness effect is small, which matters for asperities ranging from the nanometer to the micrometer level in micro-machines and precision machines. The results show that larger real areas of contact are predicted due to the adhesive force, a soft coating, and high eccentricity of the asperities. For a fixed adhesive force, reducing the coefficient m of the fitted hardness curve increases the real contact area. In addition, the real contact area increases with the fractal dimension from 1.1 up to a critical value, at which it reaches a maximum, and then decreases as the fractal dimension increases further. The critical value of the fractal dimension is influenced by the nondimensional external load, the fractal roughness parameter, the factor relating hardness to yield strength, the material properties, and the adhesive force. The analytical results of this fractal microcontact model are close to the real contact characteristics of machined surfaces. The numerical analysis and the experimentally determined fractal parameters can serve as a basis for manufacturing and design in the future.
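
The fractal characterization referred to here conventionally starts from the Weierstrass-Mandelbrot profile (standard form, notation ours):

\[
z(x) = G^{\,D-1} \sum_{n=n_{1}}^{\infty} \frac{\cos\left(2\pi\gamma^{n}x\right)}{\gamma^{(2-D)n}}, \qquad 1 < D < 2,\ \gamma > 1,
\]

where D is the fractal dimension and G the fractal roughness parameter, the two quantities extracted here from the measured silicon surfaces.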
21

Chien, Mu-Fan (簡慕帆). "Thermodynamic Second Law Analysis of Film Condensation from Vapor Flowing Slowly Inside a Semi-Ellipsoid with Wavy Wall." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/71594791404373289952.

Abstract:
Master's thesis, National Kaohsiung University of Applied Sciences, Department of Mold and Die Engineering (ROC academic year 98).
This thesis studies the entropy generation of saturated vapor flowing slowly onto, and condensing on, an isothermal semi-ellipsoid with a wavy wall, and compares the heat transfer of the wavy-walled semi-ellipsoid with that of a smooth-walled one. The research combines the entropy generation minimization (EGM) technique proposed by Adrian Bejan for laminar flow with convective heat transfer, the heat transfer approach developed by my adviser, Dr. Yang, for laminar film condensation, and numerical methods to investigate entropy generation inside a horizontal elliptical tube. The effects of various working parameters are studied, including eccentricity, surface tension, Nusselt number, Ra/Ja number, and Brinkman number; the eccentricity and surface tension are varied to study how the design affects the entropy generated by heat transfer and film flow friction. Computational fluid dynamics (CFD) software is used to save analysis time and cost and to understand the operation of the system, with the VOF (volume of fluid) method employed to simulate the two-phase flow problem. The numerical results show that the entropy rate and heat transfer efficiency of the wavy-walled semi-ellipsoid are better than those of the smooth-walled one, demonstrating the feasibility of enhancing heat transfer efficiency by changing wall patterns. The results can be used as a reference for heat exchange system development in the future.