To see the other types of publications on this topic, follow the link: Methods of three-dimensional optimization.

Dissertations / Theses on the topic 'Methods of three-dimensional optimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Methods of three-dimensional optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Hambric, Stephen A. "Structural shape optimization of three dimensional finite element models." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45805.

Full text
Abstract:

The thesis presents a three-dimensional shape optimization program which analyzes models made up of linear isoparametric elements. The goal of the program is to achieve a near-uniform model stress state and thereby to minimize material volume.

The algorithm is iterative and performs two analyses per iteration. The first is a static stress analysis of the model for one or more load cases. Based on results from the static analysis, an expansion analysis is performed. Model elements are expanded or contracted depending on whether they are stressed above or below a reference stress. The shape change is made by creating an expansion load vector from the differences between the calculated element stresses and the reference stress. Expansion displacements are solved for but, instead of being used to calculate stresses, are added to the nodal coordinates to reshape the structure. This process continues until a user-defined convergence tolerance is met.
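The resizing loop described in the abstract can be sketched in a deliberately simplified one-dimensional analogue (this is not the thesis's expansion-load formulation, and all numbers are made up): each element of a bar carries the same axial force, its stress is force over area, and areas are scaled toward a reference stress until the stress state is near uniform.

```python
# Toy 1D analogue of stress-driven resizing (hypothetical example, not the
# thesis's expansion-load formulation): a bar of elements carries axial
# force F; each element's stress is F/A_i, and areas are scaled toward a
# reference stress until the stress state is near uniform.

def resize_to_uniform_stress(areas, force, sigma_ref, tol=1e-6, max_iter=100):
    areas = list(areas)
    for _ in range(max_iter):
        stresses = [force / a for a in areas]          # "static analysis"
        # "expansion analysis": grow overstressed, shrink understressed elements
        areas = [a * s / sigma_ref for a, s in zip(areas, stresses)]
        if max(abs(force / a - sigma_ref) for a in areas) < tol:  # convergence
            break
    return areas

areas = resize_to_uniform_stress([1.0, 2.0, 4.0], force=10.0, sigma_ref=5.0)
```

In this trivial analogue the loop converges in one step; the thesis's version instead solves an expansion displacement field so that neighbouring elements reshape consistently.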

Four programs were used for the analysis process. Models were created using a finite element modeling program called I-DEAS or CAEDS. The I-DEAS output files were converted to input files for the optimizer by a conversion program. The model was optimized using the shape optimization process described above. Post-processing was done using a program written with a graphical programming language called graPHIGS.

Models used to test the program were: a cylindrical pressure vessel with non-uniform thickness, a spherical pressure vessel with non-uniform thickness, a torque arm, and a draft sill casting of a railroad hopper car. Results were compared to similar studies from selected references.

Both pressure vessels converged to near-uniform thicknesses, which compared well with the reference work. In a two-dimensional analysis, the torque arm volume decreased 24 percent, which compared well with published results. A three-dimensional analysis showed a volume reduction of 13 percent, but there were convergence problems. Finally, the draft sill casting was reduced in volume by 9 percent from a manually optimized design.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
2

Singley, Andrew M. "Heuristic solution methods for the 1-dimensional and 2-dimensional mastermind problem." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0010554.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ren, Xuchun. "Novel computational methods for stochastic design optimization of high-dimensional complex systems." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1738.

Full text
Abstract:
The primary objective of this study is to develop new computational methods for robust design optimization (RDO) and reliability-based design optimization (RBDO) of high-dimensional, complex engineering systems. Four major research directions, all anchored in polynomial dimensional decomposition (PDD), have been defined to meet the objective. They involve: (1) development of new sensitivity analysis methods for RDO and RBDO; (2) development of novel optimization methods for solving RDO problems; (3) development of novel optimization methods for solving RBDO problems; and (4) development of a novel scheme and formulation to solve stochastic design optimization problems with both distributional and structural design parameters. The major achievements are as follows. Firstly, three new computational methods were developed for calculating design sensitivities of statistical moments and reliability of high-dimensional complex systems subject to random inputs. The first method represents a novel integration of PDD of a multivariate stochastic response function and score functions, leading to analytical expressions of design sensitivities of the first two moments. The second and third methods, relevant to probability distribution or reliability analysis, exploit two distinct combinations built on PDD: the PDD-SPA method, entailing the saddlepoint approximation (SPA) and score functions; and the PDD-MCS method, utilizing the embedded Monte Carlo simulation (MCS) of the PDD approximation and score functions. For all three methods, the statistical moments or failure probabilities and their design sensitivities are determined concurrently from a single stochastic analysis or simulation. Secondly, four new methods were developed for RDO of complex engineering systems.
The methods involve PDD of a high-dimensional stochastic response for statistical moment analysis, a novel integration of PDD and score functions for calculating the second-moment sensitivities with respect to the design variables, and standard gradient-based optimization algorithms. The methods, depending on how statistical moment and sensitivity analyses are dovetailed with an optimization algorithm, encompass direct, single-step, sequential, and multi-point single-step design processes. Thirdly, two new methods were developed for RBDO of complex engineering systems. The methods involve an adaptive-sparse polynomial dimensional decomposition (AS-PDD) of a high-dimensional stochastic response for reliability analysis, a novel integration of AS-PDD and score functions for calculating the sensitivities of the failure probability with respect to design variables, and standard gradient-based optimization algorithms, resulting in a multi-point, single-step design process. The two methods, depending on how the failure probability and its design sensitivities are evaluated, exploit two distinct combinations built on AS-PDD: the AS-PDD-SPA method, entailing SPA and score functions; and the AS-PDD-MCS method, utilizing the embedded MCS of the AS-PDD approximation and score functions. In addition, a new method, named the augmented PDD method, was developed for RDO and RBDO subject to mixed design variables, comprising both distributional and structural design variables. The method comprises a new augmented PDD of a high-dimensional stochastic response for statistical moment and reliability analyses; an integration of the augmented PDD, score functions, and finite-difference approximation for calculating the sensitivities of the first two moments and the failure probability with respect to distributional and structural design variables; and standard gradient-based optimization algorithms, leading to a multi-point, single-step design process.
The innovative formulations of statistical moment and reliability analysis, design sensitivity analysis, and optimization algorithms have achieved not only highly accurate but also computationally efficient design solutions. Therefore, these new methods are capable of performing industrial-scale design optimization with numerous design variables.
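The score-function idea this abstract builds on can be shown in one dimension (a hedged sketch only: the input is assumed Gaussian with design variable mu, and the PDD machinery of the thesis is not reproduced). For X ~ N(mu, sigma^2), d/dmu E[h(X)] = E[h(X) (X - mu)/sigma^2], so a moment and its design sensitivity come from the same set of samples.

```python
import random

# Score-function sensitivity sketch (illustrative assumption: X ~ N(mu, sigma^2)).
# The score (X - mu)/sigma^2 is d/dmu of the log-density, so one Monte Carlo
# run yields both E[h(X)] and its derivative with respect to mu concurrently.

def moment_and_sensitivity(h, mu, sigma, n=200_000, seed=1):
    rng = random.Random(seed)
    m = s = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        score = (x - mu) / sigma**2
        m += h(x)
        s += h(x) * score
    return m / n, s / n          # estimates of E[h] and dE[h]/dmu

mean, dmean_dmu = moment_and_sensitivity(lambda x: x * x, mu=1.0, sigma=0.5)
# analytic reference: E[X^2] = mu^2 + sigma^2 = 1.25 and d/dmu = 2*mu = 2.0
```

The thesis couples this trick with PDD approximations of the response so that high-dimensional moments and failure probabilities get the same single-run treatment.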
APA, Harvard, Vancouver, ISO, and other styles
4

Winkelmann, Beate Maria. "Finite dimensional optimization methods and their application to optimal control with PDE constraints /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3205376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Opdahl, Hanna Belle. "Investigation of IsoTruss® Structures in Compression Using Numerical, Dimensional, and Optimization Methods." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/9243.

Full text
Abstract:
The purpose of this research is to investigate the structural efficiency of 8-node IsoTruss structures subject to uniaxial compression using numerical, dimensional, and optimization methods. The structures analyzed herein are based on graphite/epoxy specimens that were designed for light-weight space applications, and are approximately 10 ft. (3 m) long and 0.3 lb. (0.14 kg). The principal failure modes considered are material failure, global buckling, local buckling at the bay level, and longitudinal strut buckling. Studies were performed with the following objectives: to correlate finite element predictions with experimental and analytical methods; to derive analytical expressions to predict bay-level buckling; to characterize interrelations between design parameters and buckling behavior; to develop efficient optimization methods; and, to compare the structural efficiency of outer longitudinal configurations with inner longitudinal configurations. Finite element models were developed in ANSYS, validated with experimental data, and verified with traditional mechanics. Data produced from the finite element models were used to identify trends between non-dimensional Pi variables, derived with Buckingham's Pi Theorem. Analytical expressions were derived to predict bay-level buckling loads, and verified with dimensional analyses. Numerical and dimensional analyses were performed on IsoTruss structures with outer longitudinal members to compare the structural performance with inner longitudinal configurations. Analytical expressions were implemented in optimization studies to determine efficient and robust optimization techniques and optimize the inner and outer longitudinal configurations with respect to mass. Results indicate that the finite element predictions of axial stiffness and global buckling loads correlate with traditional mechanics equations, but overestimate the capacity demonstrated in previously published experimental results. 
The buckling modes predicted by finite element predictions correlate with traditional mechanics and experimental results, except when the local and global buckling loads coincide. The analytical expressions derived from mechanics to predict local buckling underestimate the constraining influence of the helical members, and therefore underestimate the local buckling capacity. The optimization analysis indicates that, in the specified design space, the structure with outer longitudinal members demonstrates a greater strength-to-weight ratio than the corresponding structure with inner longitudinal members by sustaining the same loading criteria with 10% less mass.
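The strut-buckling and Buckingham-Pi ingredients mentioned above can be illustrated with standard mechanics (this is generic textbook material, not the IsoTruss-specific derivation; the example property values are made up): the Euler load for a pinned-pinned member, and a non-dimensional Pi group that stays constant across scaled designs.

```python
import math

# Classical Euler buckling load P_cr = pi^2 * E * I / (K*L)^2 for a
# pinned-pinned strut (effective length factor K = 1), plus the kind of
# scale-invariance check a Buckingham-Pi analysis relies on.
# The (E, I, L) design values below are illustrative only.

def euler_buckling_load(E, I, L, K=1.0):
    return math.pi**2 * E * I / (K * L)**2

designs = [(70e9, 1e-8, 0.5), (200e9, 4e-8, 1.2)]   # (E [Pa], I [m^4], L [m])
# the Pi group P_cr * L^2 / (E * I) is dimensionless and design-independent
pi_groups = [euler_buckling_load(E, I, L) * L**2 / (E * I) for E, I, L in designs]
```

For pinned-pinned struts every such Pi group collapses to pi squared, which is exactly the kind of parameter collapse the dimensional analyses above exploit.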
APA, Harvard, Vancouver, ISO, and other styles
6

Acevedo, Feliz Daniel. "A framework for the perceptual optimization of multivalued multilayered two-dimensional scientific visualization methods." View abstract/electronic edition; access limited to Brown University users, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3318287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schäfer, Christian. "Monte Carlo methods for sampling high-dimensional binary vectors." Phd thesis, Université Paris Dauphine - Paris IX, 2012. http://tel.archives-ouvertes.fr/tel-00767163.

Full text
Abstract:
This thesis is concerned with Monte Carlo methods for sampling high-dimensional binary vectors from complex distributions of interest. If the state space is too large for exhaustive enumeration, these methods provide a means of estimating the expected value of some function of interest. Standard approaches are mostly based on random-walk-type Markov chain Monte Carlo, where the equilibrium distribution of the chain is the distribution of interest and its ergodic mean converges to the expected value. We propose a novel sampling algorithm based on sequential Monte Carlo methodology which copes well with multi-modal problems by virtue of an annealing schedule. The performance of the proposed sequential Monte Carlo sampler depends on the ability to sample proposals from auxiliary distributions which are, in a certain sense, close to the current distribution of interest. The core work of this thesis discusses strategies to construct parametric families for sampling binary vectors with dependencies. The usefulness of this approach is demonstrated in the context of Bayesian variable selection and combinatorial optimization of pseudo-Boolean objective functions.
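A minimal sketch of the annealed sequential Monte Carlo scheme described above (heavily hedged: the thesis's contribution is the adaptive parametric proposal family, which this sketch replaces with plain single-bit Metropolis moves; the objective, schedule, and all constants are made up): temper pi_t(x) proportional to exp(beta_t f(x)) from beta = 0 (uniform) to beta = 1, alternating reweighting, resampling, and move steps on a particle population.

```python
import math, random

# Annealed SMC sketch for binary vectors on a pseudo-Boolean objective.
# Illustrative assumptions: geometric beta schedule, multinomial resampling,
# and one Metropolis bit-flip sweep as the move kernel.

def smc_binary(f, d, n_particles=300, betas=(0.0, 0.25, 0.5, 0.75, 1.0), seed=0):
    rng = random.Random(seed)
    particles = [[rng.randint(0, 1) for _ in range(d)] for _ in range(n_particles)]
    for b_prev, b in zip(betas, betas[1:]):
        # reweight by the change in inverse temperature, then resample
        weights = [math.exp((b - b_prev) * f(x)) for x in particles]
        particles = rng.choices(particles, weights=weights, k=n_particles)
        # move step: one Metropolis bit-flip sweep targeting pi_b ~ exp(b * f)
        moved = []
        for x in particles:
            x = list(x)
            for i in range(d):
                y = list(x)
                y[i] ^= 1
                if math.log(rng.random() + 1e-300) < b * (f(y) - f(x)):
                    x = y
            moved.append(x)
        particles = moved
    return particles

# pseudo-Boolean toy objective: count of ones plus a pairwise coupling bonus
f = lambda x: float(sum(x)) + 2.0 * x[0] * x[-1]
pop = smc_binary(f, d=8)
best = max(pop, key=f)
```

The annealing schedule is what lets the population cross between modes; the thesis replaces the naive bit-flip kernel with proposals fitted to the current particle population.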
APA, Harvard, Vancouver, ISO, and other styles
8

Yi, Congrui. "Penalized methods and algorithms for high-dimensional regression in the presence of heterogeneity." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2299.

Full text
Abstract:
In fields such as statistics, economics, and biology, heterogeneity is an important topic concerning the validity of data inference and the discovery of hidden patterns. This thesis focuses on penalized methods for regression analysis in the presence of heterogeneity in a potentially high-dimensional setting. Two possible strategies to deal with heterogeneity are: robust regression methods that provide heterogeneity-resistant coefficient estimation, and direct detection of heterogeneity while estimating coefficients accurately at the same time. We consider the first strategy for two robust regression methods, Huber loss regression and quantile regression with Lasso or Elastic-Net penalties, which have been studied theoretically but lack efficient algorithms. We propose a new algorithm, Semismooth Newton Coordinate Descent, to solve them. The algorithm is a novel combination of the Semismooth Newton Algorithm and Coordinate Descent that applies to penalized optimization problems with both a nonsmooth loss and a nonsmooth penalty. We prove its convergence properties, and show its computational efficiency through numerical studies. We also propose a nonconvex penalized regression method, Heterogeneity Discovery Regression (HDR), as a realization of the second idea. We establish theoretical results that guarantee statistical precision for any local optimum of the objective function with high probability. We also compare the numerical performance of HDR with competitors including Huber loss regression, quantile regression, and least squares through simulation studies and a real data example. In these experiments, HDR methods are able to detect heterogeneity accurately, and also largely outperform the competitors in terms of coefficient estimation and variable selection.
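As a hedged baseline for the coordinate-descent family the abstract extends, here is classical cyclic coordinate descent with soft-thresholding for the squared-error Lasso (the thesis's SNCD adds a semismooth Newton step to handle nonsmooth losses such as Huber and quantile loss; that extension is not reproduced here, and the data and penalty below are made up).

```python
# Cyclic coordinate descent for the least-squares Lasso:
#   minimize 0.5 * ||y - X beta||^2 + lam * ||beta||_1
# Each coordinate update is an exact soft-thresholding step.

def soft_threshold(z, t):
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso_cd(X, y, lam, n_iter=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    r = [y[i] - sum(X[i][j] * beta[j] for j in range(p)) for i in range(n)]
    for _ in range(n_iter):
        for j in range(p):
            # correlation of column j with the partial residual
            rho = sum(X[i][j] * (r[i] + X[i][j] * beta[j]) for i in range(n))
            new = soft_threshold(rho, lam) / col_sq[j]
            if new != beta[j]:
                for i in range(n):
                    r[i] += X[i][j] * (beta[j] - new)
                beta[j] = new
    return beta

# tiny made-up example: y depends only on the first feature
X = [[1, 0], [2, 0], [3, 1], [4, 0]]
y = [2, 4, 6, 8]
beta = lasso_cd(X, y, lam=0.5)
```

With this small penalty the first coefficient lands near its true value of 2 while the irrelevant second coefficient is thresholded exactly to zero, which is the sparsity behaviour the penalized methods above rely on.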
APA, Harvard, Vancouver, ISO, and other styles
9

Yan, Enxu. "Sublinear-Time Learning and Inference for High-Dimensional Models." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1207.

Full text
Abstract:
Across domains, the scale of data and complexity of models have both been increasing greatly in recent years. For many models of interest, tractable learning and inference without access to expensive computational resources have become challenging. In this thesis, we approach efficient learning and inference through the leverage of sparse structures inherent in the learning objective, which allows us to develop algorithms sublinear in the size of parameters without compromising the accuracy of models. In particular, we address the following three questions for each problem of interest: (a) how to formulate model estimation as an optimization problem with tractable sparse structure, (b) how to efficiently, i.e. in sublinear time, search, maintain, and utilize the sparse structures during training and inference, and (c) how to guarantee fast convergence of our optimization algorithm despite its greedy nature. By answering these questions, we develop state-of-the-art algorithms in varied domains. Specifically, in the extreme classification domain, we utilize primal and dual sparse structures to develop greedy algorithms of complexity sublinear in the number of classes, which obtain state-of-the-art accuracies on several benchmark data sets with one or two orders of magnitude speedup over existing algorithms. We also apply the primal-dual-sparse theory to develop a state-of-the-art trimming algorithm for Deep Neural Networks, which sparsifies neuron connections of a DNN with a task-dependent theoretical guarantee, resulting in models of smaller storage cost and faster inference speed. When it comes to structured prediction problems (i.e.
graphical models) with inter-dependent outputs, we propose decomposition methods that exploit sparse messages to decompose a structured learning problem of large output domains into factorwise learning modules amenable to sublinear-time optimization methods, leading to practically much faster alternatives to existing learning algorithms. The decomposition technique is especially effective when combined with search data structures, such as those for Maximum Inner-Product Search (MIPS), to improve the learning efficiency jointly. Last but not least, we design novel convex estimators for a latent-variable model by reparameterizing it as a solution of sparse support in an exponentially high-dimensional space, and approximate it with a greedy algorithm, which yields the first polynomial-time approximation method for the Latent-Feature Models and Generalized Mixed Regression without restrictive data assumptions.
APA, Harvard, Vancouver, ISO, and other styles
10

Nikram, Elham. "Three essays on game theory and computation." Thesis, University of Exeter, 2016. http://hdl.handle.net/10871/28755.

Full text
Abstract:
The results section of my thesis includes three chapters. The first two chapters are on theoretical game theory. In both chapters, by mathematical modelling and game-theoretical tools, I predict the behaviour of the players in some real-world settings. The Hotelling-Downs model plays an important role in modern political interpretation. The first chapter of this study investigates an extension of the Hotelling-Downs model with a multi-dimensional strategy space and asymmetric candidates. Chapter 3 looks into the inspection game where the inspections are not identical in a series of sequential inspections. By modelling the game as a series of recursive zero-sum games, I find the optimal strategy of the players in equilibrium. The fourth chapter investigates direct optimization methods for large-scale problems. Using Matlab implementations of the Genetic and Nelder-Mead algorithms, I compare the efficiency and accuracy of the best-known direct optimization methods for unconstrained optimization problems with differing numbers of variables.
APA, Harvard, Vancouver, ISO, and other styles
11

Шкіра, Анатолій Миколайович, Анатолий Николаевич Шкира, Anatolii Mykolaiovych Shkira, and Д. Я. Моісеєнко. "Аналіз методів одновимірної оптимізації" [Analysis of one-dimensional optimization methods]. Thesis, Сумський державний університет, 2016. http://essuir.sumdu.edu.ua/handle/123456789/47183.

Full text
Abstract:
To date, a considerable number of numerical methods for optimizing a function of a single variable have been developed and are in use. Each method has its own features, advantages, and drawbacks when applied to a particular class of extremal problems. One-dimensional optimization problems are those in which the decision vector is one-dimensional.
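One classical member of the family of one-dimensional methods surveyed in such work is golden-section search, which brackets a unimodal minimum and shrinks the interval by the golden ratio each iteration; a short sketch (the test function is made up):

```python
import math

# Golden-section search for the minimum of a unimodal function on [a, b].
# Each iteration keeps the interior point closer to the minimum and shrinks
# the bracket by a factor of 1/phi ~= 0.618.

def golden_section_min(f, a, b, tol=1e-8):
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                     # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                     # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x_min = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
# converges to x = 2
```

Unlike Newton-type methods it needs no derivatives, which is exactly the trade-off such comparative analyses of one-dimensional methods weigh.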
APA, Harvard, Vancouver, ISO, and other styles
12

Swinson, Michael D. "Statistical Modeling of High-Dimensional Nonlinear Systems: A Projection Pursuit Solution." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-11232005-204333/.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2006.
Shapiro, Alexander, Committee Member ; Vidakovic, Brani, Committee Member ; Ume, Charles, Committee Member ; Sadegh, Nader, Committee Chair ; Liang, Steven, Committee Member. Vita.
APA, Harvard, Vancouver, ISO, and other styles
13

Galiana, Blanch Savitri. "Two-dimensional modeling and inversion of the controlled-source electromagnetic and magnetotelluric methods using finite elements and full-space PDE-constrained optimization strategies." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/400616.

Full text
Abstract:
The controlled-source electromagnetic (CSEM) and magnetotelluric (MT) methods are common geophysical tools for imaging the Earth's electrical interior. To interpret measured data, both methods require forward and inverse modeling of the subsurface, with the ultimate goal of finding a feasible model for which the simulated data reasonably fit the observations. The goodness of this fit depends on the error in the measured data, on the numerical error, and on the degree of approximation inferred by numerical modeling. Therefore, active research focuses on new methods for modeling and inversion to improve accuracy and reliability for increasingly complex scenarios. In a first step, physical factors such as anisotropy, topography, and realistic sources must be taken into account. Second, numerical methods need to be assessed in terms of solution accuracy, time efficiency, and memory demand. Finite element (FE) methods offer much flexibility in model geometry and contain quality-control mechanisms for the solution, such as the shape function order and adaptive mesh refinement. Most emerging modeling programs are based on FE; inversion programs, however, are generally based on finite-difference (FD) or integral-equation (IE) methods. On the other hand, inverse modeling is usually based on gradient methods and formulated in the reduced space, where the electrical conductivity is the only optimization variable. Originally, the inverse problem is stated for the EM fields and the conductivity parameter (in the full space), and constrained by the governing partial differential equations (PDEs). The reduced-space strategy eliminates the field variables by applying equality constraints and then solves the unconstrained problem. A common drawback of this is the repeated, costly computation of the forward solution.
Solving the PDE-constrained optimization problem directly, in the full space, has the advantage that it is only necessary to solve the PDEs exactly at the end of the optimization, but it comes at the cost of a larger number of variables. This thesis develops a robust and versatile adaptive unstructured-mesh FE program to model the total field for two-dimensional (2-D) anisotropic CSEM and MT data, allowing for arbitrarily oriented, three-dimensional (3-D) sources, for which a two-and-a-half-dimensional (2.5-D) approximation is employed. The formulations of the problems in an FE framework are derived for isotropic and anisotropic subsurface conductivity structures. The accuracy of the solution is controlled and improved by a goal-oriented adaptive mesh refinement algorithm. Exhaustive numerical experiments validate the adaptive FE program for both the CSEM and MT methods, in land and marine environments. The influence of the model dimensions, mesh design, and order of shape functions on the solution accuracy is studied, and notably quadratic shape functions are found to outperform linear and cubic ones. Several examples demonstrate the effect of complex scenarios on EM data. In particular, we study the distortion caused by bathymetry, by the orientation and geometry of the sources, and by anisotropy, considering vertical and dipping cases. All examples showcase the importance of adequate consideration of these very common physical features of real-world data. Further, a formulation for the 2.5-D CSEM inversion as a PDE-constrained optimization in the full space is derived within an FE framework following two strategies: discretize-optimize and optimize-discretize. The discretize-optimize formulation is implemented using a general-purpose optimization algorithm.
Two examples, a canonical reservoir model and a more realistic marine model with topography, demonstrate the performance of this inversion scheme, recovering in both cases the model’s main structures within an acceptable data misfit. Finally, the optimize-discretize formulation is derived in an FE framework, as a first step towards the development of an inversion scheme using adaptive FE meshes.
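The reduced-space machinery this abstract contrasts with full-space methods can be shown on a deliberately tiny toy (all numbers made up, and the "PDE" is just a diagonal linear system, not Maxwell's equations): for A(m) u = b with A = m*I, the adjoint solve gives the misfit gradient with one extra linear solve instead of finite-differencing the forward model.

```python
# Toy reduced-space adjoint gradient for a PDE-constrained misfit.
# Forward "PDE": m * u_i = b_i. Misfit: J(m) = 0.5 * sum((u - d)^2).
# Adjoint: A^T lam = -(u - d), then dJ/dm = lam^T (dA/dm) u with dA/dm = I.

def forward(m, b):
    return [bi / m for bi in b]                      # solve A(m) u = b

def misfit_and_gradient(m, b, d):
    u = forward(m, b)
    res = [ui - di for ui, di in zip(u, d)]
    J = 0.5 * sum(r * r for r in res)
    lam = [-r / m for r in res]                      # adjoint solve
    dJ_dm = sum(l * ui for l, ui in zip(lam, u))     # lam^T (dA/dm) u
    return J, dJ_dm

b, d, m = [1.0, 2.0, 3.0], [0.9, 2.1, 2.8], 1.1     # made-up data
J, g = misfit_and_gradient(m, b, d)

# finite-difference check of the adjoint gradient
h = 1e-6
J_plus, _ = misfit_and_gradient(m + h, b, d)
fd = (J_plus - J) / h
```

A full-space method would instead keep u and the adjoint variable as optimization unknowns and enforce the PDE only as a constraint, which is the trade-off the thesis explores.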
The controlled-source electromagnetic (CSEM) and magnetotelluric (MT) methods are geophysical techniques commonly used to image the electrical properties of the Earth's subsurface, and they are applied independently, jointly, and in combination with other geophysical techniques. To interpret the data, both methods require forward and inverse modeling of the subsurface electrical conductivity, with the ultimate goal of obtaining a coherent model for which the simulated data fit the observations reasonably well. Naturally, the quality of this fit depends not only on the error in the measured data and the numerical error, but also on the degree of physical approximation inferred by the numerical modeling. Current research therefore focuses on investigating new methodologies for modeling and inversion, so as to obtain accurate and reliable models of the Earth's structures in increasingly complex scenarios. A first step is to improve the modeling approximations by taking into account physical factors such as anisotropy, topography, or more realistic sources. Second, in order to accommodate these factors in a modeling and inversion program and to handle the typically large data sets, the numerical methods must be assessed in terms of solution accuracy, time efficiency, and memory demand. Finite element (FE) modeling methods are known to offer greater flexibility in modeling the geometry and contain mechanisms for controlling the solution, such as the order of the shape functions and adaptive mesh refinement. Most emerging modeling programs are based on FE and show significant advantages, but almost all inverse modeling programs are still based on the finite-difference (FD) or integral-equation (IE) methods.
Moreover, the inverse modeling developed for electromagnetic (EM) data is generally based on gradient methods and is formulated in a reduced space, where the only optimization variables are the model parameters, that is, the electrical conductivity of the subsurface. Originally, the inverse problem is formulated for the EM fields and the conductivity parameter, and is constrained by the partial differential equations (PDEs) that govern the EM field variables. The reduced-space strategy eliminates the field variables by applying equality constraints and then solves the unconstrained problem in the reduced space of the model parameters. A general drawback of these methods is the costly repeated computation of the forward solution and of the Jacobian matrix of sensitivities (for Newton-based methods). Alternatively, the inverse problem can also be solved in the full space of the EM field variables and the conductivity parameter. Solving the PDE-constrained optimization problem there has the advantage that the forward problem only needs to be solved exactly at the end of the optimization process, but this comes at the additional cost of many more optimization variables and the presence of equality constraints. In an FE framework in particular, the PDE-constrained optimization problem has the added advantage of allowing sophisticated FE techniques, such as adaptive mesh refinement, to be included in the inversion process. This thesis develops a robust and versatile adaptive unstructured-mesh FE program to numerically model the total field for two-dimensional (2D), anisotropic CSEM and MT data, allowing for arbitrarily oriented three-dimensional (3D) sources. To represent 3D CSEM sources in a 2D physical model, a two-and-a-half-dimensional (2.5D) approximation is used.
The FE formulations are derived for both methods, for isotropic and anisotropic subsurface conductivity structures. Although the anisotropic case is not fully general, it includes vertical and dipping anisotropy. The accuracy of the solution is controlled and improved with a goal-oriented adaptive mesh refinement algorithm using a posteriori error estimation methods. An exhaustive series of numerical experiments validates the adaptive FE program for both the CSEM and MT methods, in land and marine scenarios. The influence of the model dimensions, the mesh design, and the order of the shape functions on the solution accuracy is studied, and quadratic shape functions are found to perform notably better than linear or cubic ones. Several examples show the effect of complex scenarios on EM data: a model with bathymetry, a land model and a marine model with oriented, finite-size sources, a vertically anisotropic medium with an embedded reservoir, and another with a reservoir embedded in an anticlinal structure. These examples demonstrate the importance of adequately considering (in forward-modeling terms) physical features such as topography, source orientation and geometry, and medium anisotropy, which are often present in real measurements. In addition, a formulation of the 2.5D CSEM inverse problem as a PDE-constrained optimization in the full space is derived within an FE framework, following two different strategies: discretize-optimize and optimize-discretize. The discretize-optimize strategy considers the inverse problem in discretized form and derives the optimality conditions of the Lagrangian and the Newton step. Conversely, the optimize-discretize approach first derives the optimality conditions and the Newton step, or an approximation of it, and then discretizes the resulting equations.
The implementation of the discretize-optimize formulation is demonstrated on two examples, a canonical reservoir model and a more realistic marine model with topography, using a general-purpose optimization program that implements a sequential quadratic programming (SQP) algorithm. Although no explicit regularization is used, employing different meshes for the model parameter and for the field variables makes it possible to recover the main structures of the model and obtain an acceptable data fit; the time and memory efficiency of the program, however, should still be improved. Finally, the 2.5D CSEM inverse problem is formulated as a PDE-constrained optimization problem in the full space within an FE framework using an optimize-discretize strategy, as a first step towards the development of an inversion scheme that uses adaptive FE meshes.
APA, Harvard, Vancouver, ISO, and other styles
14

Frabolot, Ferdinand. "Optimisation de forme avec détection automatique de paramètres." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2182/document.

Full text
Abstract:
The objective of this thesis is to fully integrate the shape optimization of hood stiffeners into an industrial design process, in order to optimize the shape and distribution of the stiffeners in a multi-objective (or even multi-disciplinary) context for a 3D shell structure. To this end, we first establish an overview of the state of the art in structural shape optimization by classifying the different shape-parametrization methods into three categories: geometry-based methods (such as the parametrization of a CAD-type model), fixed-grid methods (such as topology optimization methods), and mesh-based methods (such as mesh-regularization methods). However, none of these methods fully meets the stated objectives. We therefore introduce in this thesis the FEM-CsG method: Finite Element Mesh - Constructive surface Geometry. Steeped in a strong industrial context, this method addresses constraints such as the ability to represent the optimal solution by a set of CAD parameters, the ability to adapt the FE model to the desired analysis, and the guarantee of a robust geometric representation and mesh. By integrating parametrized, premeshed elementary shapes from a shape library into a meshed 3D shell structure through the use of CAD variables, the FEM-CsG method allows a constant evolution of the topology guided by the optimization. Thus, even if the topology is modified, the resulting shape remains consistent with a CAD representation by construction, corresponding more closely to the reality of optimizations carried out at the preliminary design stage. The FEM-CsG method has been validated on two case studies of varying complexity, highlighting its robustness.
Thus, with an intelligent and coherent choice of shape variables, optimization problems can explore a large number of topologies or shapes with a limited number of variables. The topology changes take place continuously, validating the method for any desired type of analysis.
The objective of this thesis work is to be able to completely integrate shape optimization of car inner hood stiffeners in a complex industrial process, in order to fully optimize the shape and distribution of the stiffeners in a multi-objective approach (or even multi-disciplinary) of a 3D surfacic structure. To this end, we established, at the outset, an insight of the state-of-the-art in shape optimization of structures by classifying the different shape parametrizations in three distinct categories : geometry-based methods (a shape parametrization such as a CAD model), grid-based methods (such as topology optimization methods) and mesh-based methods (such as morphing methods or mesh regulation). However, none of these methods fully satisfies the set objectives. Thus, we will introduce in this work the FEM-CsG method : Finite Element Mesh - Constructive surface Geometry. Bolstered by its strong industrial context, this method offers a response to such constraints, i.e. the possibility to represent the optimal solution by a system of CAD parameters, the possibility to adapt the FE model to the wanted analysis and the guarantee of a robust geometrical representation and mesh stability. We offer to incorporate premeshed parameterized elementary forms into a 3D sheet meshed structures. Hence, these forms are arising from a CAD parameterized elementary form library. Furthermore, the FEM-CsG method uses a set of operators acting on the mesh allowing a constant evolution of the topology guided by optimization. Therefore, even if the topology may vary, the resulting shapes comply with CAD representations by construction, a solution better reflecting the reality of optimizations performed during the preliminary development stage. The FEM-CsG method has been validated on two simple case studies in order to bring forward its reliability. 
Thus, with an intelligent and coherent choice of the design variables, shape optimization issues may, with a restrictive number of variables, explore an important number of shapes and topologies. Topology changes are accomplished in a continuous manner, therefore validating the FEM-CsG method to any desired analysis
APA, Harvard, Vancouver, ISO, and other styles
15

Kannepalli, Sivaram [Verfasser], and O. [Akademischer Betreuer] Deutschmann. "Mathematical Methods for Design of Zone Structured Catalysts and Optimization of Inlet Trajectories in Selective Catalytic Reduction (SCR) and Three Way Catalyst (TWC) / Sivaram Kannepalli ; Betreuer: O. Deutschmann." Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1154856755/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Баранник, Валентин Сергеевич. "Пространственная аэродинамическая оптимизация направляющей решетки осевой турбины." Thesis, НТУ "ХПИ", 2016. http://repository.kpi.kharkov.ua/handle/KhPI-Press/22677.

Full text
Abstract:
Thesis for the degree of Candidate of Technical Sciences in speciality 05.05.16 - turbomachines and turbine installations. - National Technical University "Kharkiv Polytechnical Institute", Kharkiv, 2016. The thesis develops a method for the three-dimensional aerodynamic optimization of axial-turbine nozzle cascades by searching for the optimal profile shapes and meridional contours of the blade channels. Formulating the optimization problem with this method allows additional efficiency reserves to be exploited. The search for the optimum uses design-of-experiments theory and LPτ sequences. To describe multimodal objective functions, the initial formal macromodel, a full quadratic polynomial, was refined by replacing the superposition of parabolas with a superposition of cubic interpolation splines. Turbine profiles were designed using different types of curves; for each curve type, the control parameters that allow the profile geometry to be varied over a wide range were identified. The reliability of the results was confirmed by verifying the nozzle and blade cascade simulations against experimental data. Using the developed method, the third-stage nozzle cascade of a powerful steam turbine, with a profile constant along the blade height, was optimized for the different curve types. The largest reduction of integral losses was 7% in relative terms, achieved both in the core flow and in the region of secondary flows. Meridional shaping of the blade-channel surfaces strongly affects the flow structure in turbine cascades and thus offers an additional gain: optimizing the shroud (peripheral) meridional contour with the developed method reduced the integral losses by a further 1.4% in relative terms. The meridional contour is built with 4th-order Bézier curves for cascades without flaring and 3rd-order curves for cascades with flaring. Using a blade whose profile varies along its height in the optimization also reduces the integral losses.
APA, Harvard, Vancouver, ISO, and other styles
17

Бараннік, Валентин Сергійович. "Просторова аеродинамічна оптимізація направляючої решітки осьової турбіни." Thesis, НТУ "ХПІ", 2016. http://repository.kpi.kharkov.ua/handle/KhPI-Press/22676.

Full text
Abstract:
Thesis for the degree of Candidate of Technical Sciences in speciality 05.05.16 - turbomachines and turbine installations. - National Technical University "Kharkiv Polytechnical Institute", Kharkiv, 2016. The thesis develops a method for the three-dimensional aerodynamic optimization of axial-turbine nozzle cascades by searching for the optimal profile shapes and meridional contours of the blade channels. Formulating the optimization problem with this method allows additional efficiency reserves to be exploited. Turbine profiles were designed using different types of curves; for each curve type, the control parameters that allow the profile geometry to be varied over a wide range were identified. The reliability of the results was confirmed by verifying the nozzle and blade cascade simulations against experimental data. Using the developed method, the third-stage nozzle cascade of a powerful steam turbine, with a profile constant along the blade height, was optimized for the different curve types. The largest reduction of integral losses was 7% in relative terms. Further optimization of the shroud meridional contour with the developed method increased this figure by 1.4%. Using a blade whose profile varies along its height in the optimization also reduces the integral losses.
APA, Harvard, Vancouver, ISO, and other styles
18

Dai, Ran, and John E. Cochran. "Three-dimensional trajectory optimization in constrained airspace." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2007%20Fall%20Dissertations/Dai_Ran_55.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Pearce, Lisa Jane. "Advanced methods of representing three-dimensional data." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Mughal, Bilal Hafeez. "Integral methods for three-dimensional boundary layers." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Shrinivas, Gorur N. "Three-dimensional design methods for turbomachinery applications." Thesis, University of Oxford, 1996. http://ora.ox.ac.uk/objects/uuid:8ace58b5-e251-491e-9753-ae8b236d6c3b.

Full text
Abstract:
This thesis studies the application of sensitivity analysis and optimization methods to the design of turbomachinery components. Basic design issues and a survey of current design trends are presented. The redesign of outlet guide vanes (OGV's) in an aircraft high bypass turbofan engine is attempted. The redesign is necessitated by the interaction of the pylon induced static pressure field with the OGV's and the fan, leading to reduced OGV efficiency and shortened fan life. The concept of cyclically varying camber is used to redesign the OGV row to achieve suppression of the downstream disturbance in the domain upstream of the OGV row. The redesign is performed using (a) a linear perturbation CFD analysis and (b) a minimisation of the pressure mismatch integral by using a Newton method. In method (a) the sensitivity of the upstream flow field to changes in blade geometry is acquired from the linear perturbation CFD analysis, while in method (b) it is calculated by perturbing the blade geometry and differencing the resulting flow fields. Method (a) leads to a reduction in the pylon induced pressure variation at the fan by more than 70% while method (b) achieves up to 86%. An OGV row with only 3 different blade shapes is designed using the above method and is found to suppress the pressure perturbation by more than 73%. Results from these calculations are presented and discussed. The quasi-Newton design method is also used to redesign a three dimensional OGV row and achieves considerable reduction of upstream pressure variation. A concluding discussion summarises the experiences and suggests possible avenues for further work.
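The redesign approach in method (b) above, where sensitivities are obtained by perturbing the geometry and differencing the resulting flow fields, can be sketched in miniature. The following Python toy is not the author's CFD solver: the "flow solve" and the scalar pressure-mismatch objective are hypothetical stand-ins, and the Newton step is replaced by simple gradient descent on finite-difference sensitivities.

```python
# Toy redesign loop: minimize a "pressure mismatch" J(x) = sum_i p_i(x)^2
# over design variables x (e.g. camber coefficients). flow_solve is a
# hypothetical linear surrogate for the CFD analysis in the abstract.

def flow_solve(x):
    # Upstream pressure perturbation samples as a function of geometry x.
    return [x[0] + 0.5 * x[1] - 1.0, 0.3 * x[0] - x[1] + 0.2]

def mismatch(x):
    return sum(p * p for p in flow_solve(x))

def fd_gradient(f, x, h=1e-6):
    # Sensitivities by perturbing each design variable and differencing,
    # in the spirit of method (b).
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        g.append((f(xp) - f(x)) / h)
    return g

def redesign(x, steps=200, lr=0.2):
    for _ in range(steps):
        g = fd_gradient(mismatch, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

x_opt = redesign([0.0, 0.0])
print(mismatch(x_opt))  # near zero: pressure disturbance suppressed
```

The same loop with a quasi-Newton update (approximating curvature from successive gradients) converges in far fewer flow solves, which matters when each "solve" is a full CFD run.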
APA, Harvard, Vancouver, ISO, and other styles
22

Yuan, Jiankun. "Circulation methods in unsteady and three-dimensional flows." Link to electronic thesis, 2002. http://www.wpi.edu/Pubs/ETD/Available/etd-0502102-102046.

Full text
Abstract:
Thesis (Ph. D.)--Worcester Polytechnic Institute.
Keywords: Vortex; unsteady flow; circulation; three-dimensional flow; aerodynamics; instantaneous lift. Includes bibliographical references (p. 182-188).
APA, Harvard, Vancouver, ISO, and other styles
23

Jeans, Richard. "Innovative methods for three dimensional fluid-structure interaction." Thesis, Imperial College London, 1992. http://hdl.handle.net/10044/1/8189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Sunki, Supriya. "Performance optimization in three-dimensional programmable logic arrays (PLAs)." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Hanlon, Sebastien, and University of Lethbridge Faculty of Arts and Science. "Visualizing three-dimensional graph drawings." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2006, 2006. http://hdl.handle.net/10133/348.

Full text
Abstract:
The GLuskap system for interactive three-dimensional graph drawing applies techniques of scientific visualization and interactive systems to the construction, display, and analysis of graph drawings. Important features of the system include support for large-screen stereographic 3D display with immersive head-tracking and motion-tracked interactive 3D wand control. A distributed rendering architecture contributes to the portability of the system, with user control performed on a laptop computer without specialized graphics hardware. An interface for implementing graph drawing layout and analysis algorithms in the Python programming language is also provided. This thesis describes comprehensively the work on the system by the author—this work includes the design and implementation of the major features described above. Further directions for continued development and research in cognitive tools for graph drawing research are also suggested.
viii, 110 leaves : ill. (some col.) ; 29 cm.
APA, Harvard, Vancouver, ISO, and other styles
26

Smith, Joanna. "Methods for the analysis of three-dimensional anatomical surfaces." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3538/.

Full text
Abstract:
Shape is an inherent feature in everything around us and one that we, as humans, can process very efficiently. The question of how to analyse shape in an objective manner, however, is a more complex one. With continually improving and more readily available imaging technologies, there are an increasing number of fields in which it is of interest to have a more quantitative means of analysing the resulting images. Shape analysis is a rapidly growing branch of statistics that aims to meet these needs. In contrast to the early methods of shape analysis that were based on distances or angles from an object, shape is now generally assessed in terms of the full geometry of the object. This does not necessarily mean an analysis of the object in its entirety however; a simplified representation of the surface is more often used. This representation has traditionally been in the form of a set of landmarks - points of anatomical or mathematical interest on an object that act as key descriptors of its shape. A key feature of these points is therefore that they are in positions that correspond across all images. Clearly this makes them a very powerful tool for analysis, particularly useful for comparison across shapes, and as such many methods have been presented for their analysis. However, due to the fact that they are based on significant anatomical points and are more often than not manually placed on an image, landmark points tend to be fairly small in number. A major disadvantage to this type of approach is therefore that they give a very sparse description of the object of interest's shape. In order to improve on this, alternative methods have been developed that instead present the object in terms of a set of curves or, more recently, representative surface points. 
These representations give a richer description of shape but, provided they are created so that they also correspond across objects, can equally be analysed by means of the many existing landmark-based techniques. Nevertheless, although a surface-based approach clearly utilises a greater deal of the shape information of an image and hence provides a more comprehensive analysis, it remains a far less common technique for the analysis of shape. This thesis therefore aims to develop tools for the analysis of three-dimensional surface data, with focus lying specifically in the field of medical imaging. Three distinct studies are conducted, each of which has its own questions of interest and hence necessary techniques. The first study presented is based on a cohort of unilateral mastectomy and reconstruction patients, where interest lies in evaluating the breast asymmetry that is present post-surgery. A novel method is presented for the creation of corresponding surface points, and an analysis of these is conducted based on an established approach to the study of asymmetry. The second study then looks to investigate the `normal' patterns of facial growth that are seen in young children, specifically between the ages of 3 months and 5 years. From the multiple, longitudinal images that are available for each child, a set of corresponding surface representations are created by means of a well-established technique known as the Thin Plate Spline. A principal components analysis is then applied to the surfaces in order to reduce the dimensionality of the data, and the resulting principal components scores are modelled by a linear mixed effects approach. The third and final study presented is an investigation into the soft-tissue changes that are seen as a result of craniofacial surgery. A system is devised to index the location of all points on the surface, and this information is then used to model the changes taking place at various positions on the face. 
While most previous approaches to this problem have been based on complex finite element models, this study aims to investigate whether a simpler and more efficient statistical modelling approach can instead prove useful. The diversity of these studies hints at the wide-ranging applications of shape analysis within the medical imaging field alone, as well as many of the issues that can arise in the analysis of surface data. While it is intuitive that more informed conclusions can be drawn through these surface based analyses than would be possible by a more traditional landmark-based approach, it is also seen that the use of a surface representation allows for an improved visualisation and interpretation of the results. The techniques developed here are illustrated on specific medical applications, although it is hoped that they would prove similarly useful in a wider variety of shape analysis settings.
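The dimension-reduction step described in the second study, where corresponding surface points are stacked into shape vectors and a principal components analysis extracts the main modes of variation, can be illustrated on a toy scale. The sketch below is not the author's pipeline: a tiny 2-D dataset stands in for the (much larger) stacked surface coordinates, and the leading eigenvector of the 2x2 covariance matrix is computed in closed form.

```python
import math

# Toy PCA on 2-D "shape" observations: centre the data, form the sample
# covariance, take its leading eigenvector, and project to get PC scores.
data = [(1.0, 0.9), (2.0, 2.1), (3.0, 2.9), (4.0, 4.2), (5.0, 4.9)]

mx = sum(x for x, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
sxx = sum((x - mx) ** 2 for x, _ in data) / (len(data) - 1)
syy = sum((y - my) ** 2 for _, y in data) / (len(data) - 1)
sxy = sum((x - mx) * (y - my) for x, y in data) / (len(data) - 1)

# Principal-axis angle of a 2x2 symmetric covariance matrix (closed form).
theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
pc1 = (math.cos(theta), math.sin(theta))

# Scores: projections of the centred data onto the first component; these
# are the quantities one would then feed into a mixed-effects growth model.
scores = [(x - mx) * pc1[0] + (y - my) * pc1[1] for x, y in data]
print(pc1, scores)
```

For real surface data the covariance is high-dimensional, so the eigendecomposition is done numerically (typically via an SVD of the centred data matrix) rather than in closed form.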
APA, Harvard, Vancouver, ISO, and other styles
27

Yucel, Osman. "Ballistic Design Optimization Of Three-dimensional Grains Using Genetic Algorithms." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614857/index.pdf.

Full text
Abstract:
Within the scope of this thesis study, an optimization tool for the ballistic design of three-dimensional grains in solid propellant rocket motors is developed. The modeling of grain geometry and burnback analysis is performed analytically by using basic geometries like cylinder, cone, sphere, ellipsoid, prism and torus. For the internal ballistic analysis, a quasi-steady zero-dimensional flow solver is used. Genetic algorithms have been studied and implemented to the design process as an optimization algorithm. Lastly, the developed optimization tool is validated with the predesigned rocket motors.
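The genetic-algorithm stage of such a design tool can be sketched in a few lines. The example below is only a hedged illustration, not the thesis's tool: the "grain" is reduced to a hypothetical cylindrical port whose burning surface area should match a target value, and the fitness function, parameter ranges, and GA settings are all invented for the demonstration.

```python
import math
import random

# Minimal GA: evolve (radius, length) of a cylindrical grain port so that
# its lateral burning area matches a hypothetical target.
random.seed(1)
TARGET_AREA = 2.0  # hypothetical required burning area, m^2

def burn_area(r, l):
    return 2 * math.pi * r * l  # lateral surface of a cylindrical port

def fitness(ind):
    r, l = ind
    return -abs(burn_area(r, l) - TARGET_AREA)  # higher is better

def evolve(pop_size=40, gens=60):
    pop = [(random.uniform(0.01, 0.2), random.uniform(0.1, 2.0))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple((x + y) / 2 for x, y in zip(a, b))   # crossover
            child = tuple(max(1e-3, x + random.gauss(0, 0.01))
                          for x in child)                      # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, burn_area(*best))
```

In the thesis the fitness would instead come from the analytic burnback analysis coupled to the quasi-steady zero-dimensional ballistic solver, with many more geometric genes per individual.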
APA, Harvard, Vancouver, ISO, and other styles
28

SOUZA, CAMILA GOMES PECANHA DE. "OPTIMIZATION OF THE THREE-DIMENSIONAL CHARACTERIZATION OF IRON ORE PELLETS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=35971@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Porosity and the spatial arrangement of pores are essential to heat transfer and to the reduction process of iron ore pellets in steelmaking furnaces. Microstructural characterization of the pellets is therefore important for quality control of the final product, steel, helping in the understanding of its behavior in blast furnaces. Currently, the most widely used characterization techniques are optical microscopy, which yields only two-dimensional results and thus does not fully represent reality, and mercury intrusion porosimetry, which evaluates only pores connected to the surface and uses mercury, which is highly harmful to human health. Moreover, both techniques are destructive: further analyses are impossible because the material is lost. This work proposes to optimize a methodology for the three-dimensional characterization of pellet porosity using X-ray computed microtomography (microCT), a non-destructive technique that provides 3D information but has limitations in analysis time and resolution, together with processing and analysis of the resulting images. Pellet samples provided by the Vale company were characterized in 3D, yielding the spatial distribution and volume of the pores, and open and closed pores were discriminated by a newly developed methodology. The acquisition methodology was optimized, reducing the time required for all analyses: an entire pellet was analyzed in 3 hours. It was confirmed that resolution has a large impact on the porosity characterization of iron ore pellets, evidenced by the large difference between the porosities measured at the resolutions achieved: 14.83 percent at 7.6 micrometers, 23.69 percent at 4 micrometers, and 26.75 percent at 2 micrometers.
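The open/closed pore discrimination described above can be illustrated on a toy voxel volume: pores reachable from the sample border are "open", the rest are "closed". This is a hedged sketch of the general flood-fill idea, not the methodology developed in the thesis; the volume, sizes, and labels are invented for the example.

```python
from collections import deque

N = 6
# Hypothetical 6x6x6 binary volume (set membership = pore voxel): one
# channel reaching the border (open pore) and one isolated interior
# voxel (closed pore).
pore = {(0, 2, 2), (1, 2, 2), (2, 2, 2)} | {(4, 4, 4)}

def open_pores(pores, n):
    # Breadth-first flood fill seeded from pore voxels touching the border.
    seeds = [p for p in pores if 0 in p or n - 1 in p]
    seen, queue = set(seeds), deque(seeds)
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            q = (x + dx, y + dy, z + dz)
            if q in pores and q not in seen:
                seen.add(q)
                queue.append(q)
    return seen

opened = open_pores(pore, N)
closed = pore - opened
total_porosity = 100.0 * len(pore) / N ** 3  # percent of volume
print(len(opened), len(closed), total_porosity)
```

On real microCT data the same idea runs on a segmented voxel array rather than a coordinate set, and the reported porosity percentages follow directly from such voxel counts at each resolution.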
APA, Harvard, Vancouver, ISO, and other styles
29

Brutscher, Bernhard. "Développements méthodologiques en RMN multidimensionnelle des protéines : application à l'étude de différents cytochromes c." Université Joseph Fourier (Grenoble ; 1971-2015), 1995. http://www.theses.fr/1995GRE10115.

Full text
Abstract:
The assignment of NMR frequencies to the individual nuclei is the first step in a structural and dynamic study of a protein by NMR. Most of the methodological developments presented in this work aim at optimizing this often laborious task, whose general strategy is now well established. For proteins not enriched in stable isotopes (15N, 13C), 1H NMR experiments (TOCSY, COSY and NOESY) are used to assign the resonances. A new band-selective frequency-editing scheme is presented and applied in the various 1H-1H correlation experiments. A new HOSQC experiment is then proposed to obtain in-phase COSY-type correlation peaks. Next, the contribution of a 1H-13C correlation experiment to a more complete and more reliable 1H assignment is demonstrated. For the study of medium-sized proteins (100-150 residues) enriched in the stable isotopes 15N and 13C, we developed a series of two-dimensional triple-resonance (1H, 15N, 13C) experiments that provide the same information as the originally proposed 3D versions in a reduced time and/or with better digital resolution. In addition, an automatic assignment program (ALPS) was designed for their interpretation. This new approach allows the resonances of the peptide backbone to be assigned within a few weeks. For the next step, the determination of experimental constraints for modeling the three-dimensional solution structure, we developed new 3D MQ-NOESY experiments doubly filtered by the chemical shifts of two heteronuclei (13C, 15N). A set of interproton distances allowing the ab initio calculation of a first structure of Rhodobacter capsulatus ferrocytochrome c2 could be extracted from them in a semi-automatic manner.
APA, Harvard, Vancouver, ISO, and other styles
30

Triantafyllou, Christina. "Three-dimensional registration methods for multi-modal magnetic resonance neuroimages." Thesis, King's College London (University of London), 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392388.

Full text
Abstract:
In this thesis, image alignment techniques are developed and evaluated for applications in neuroimaging. In particular, the problem of combining cross-sequence MRI (Magnetic Resonance Imaging) intra-subject scans is considered. The challenge in this case is to find topographically uniform mappings in order to register (find a mapping between) low resolution echo-planar images and their high resolution structural counterparts. Such an approach enables us to effectually fuse, in a clinically useful way, information across scans. This dissertation devises an alternative framework by which this may be achieved, involving appropriate optimisation of the required mapping functions, which turn out to be non-linear and high-dimensional in nature. Novel ways to constrain and regularise these functions to enhance the computational speed of the process and the accuracy of the solution are also studied. The algorithms, whose characteristics are demonstrated for this specific application should be fully generalisable to other medical imaging modalities and potentially, other areas of image processing. To begin with, some existing registration methods are reviewed, followed by the introduction of an automated global 3-D registration method. Its performance is investigated on extracted cortical and ventricular surfaces by utilising the principles of the chamfer matching approach. Evaluations on synthetic and real data-sets, are performed to show that removal of global image differences is possible in principle, although the true accuracy of the method depends on the type of geometrical distortions present. These results also reveal that this class of algorithm is unable to solve more localised variations and higher order magnetic field distortions between the images. These facts motivate the development of a high-dimensional 3-D registration method capable of effecting a one-to-one correspondence by capturing the localised differences. 
This method was seen to account not only for topological differences but also for non-linear deformations in size and shape. Validation of the algorithm is carried out on geometrical objects, simulated data and real images to ensure that the important requirements for a topologically useful mapping; invertibility, smoothness of the deformation field and an almost perfect correspondence can be maintained between two image sequences.
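The chamfer-matching principle used in the global registration stage above can be shown on a toy scale: slide one extracted surface (a point set) over a precomputed distance map of the other and keep the transform with the lowest mean distance. The sketch below is only an illustration under simplifying assumptions: 2-D points, integer translations only, and a brute-force distance map instead of the two-pass distance transforms used in practice.

```python
import math

# Reference "surface" and a translated copy of it to be registered.
fixed = {(3, 3), (3, 4), (3, 5), (4, 5), (5, 5)}
moving = {(x - 2, y + 1) for (x, y) in fixed}

def distance_map(points, size=12):
    # Distance from every grid cell to the nearest reference point
    # (brute force; real chamfer matching precomputes this efficiently).
    return {(i, j): min(math.dist((i, j), p) for p in points)
            for i in range(size) for j in range(size)}

dmap = distance_map(fixed)

def score(dx, dy):
    # Mean chamfer distance of the translated moving surface.
    pts = [(x + dx, y + dy) for (x, y) in moving]
    return sum(dmap.get(p, 1e9) for p in pts) / len(pts)

best = min(((dx, dy) for dx in range(-4, 5) for dy in range(-4, 5)),
           key=lambda t: score(*t))
print(best)  # recovers the shift: (2, -1)
```

The thesis's global method works analogously on 3-D cortical and ventricular surfaces, searching over rigid (and low-order) transforms rather than integer shifts.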
APA, Harvard, Vancouver, ISO, and other styles
31

De, Grazia Daniele. "Three-dimensional discontinuous spectral/hp element methods for compressible flows." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/40416.

Full text
Abstract:
In this thesis we analyse and develop two high-order schemes which belong to the class of discontinuous spectral/hp element methods, focusing on compressible aerodynamic studies and, more specifically, on boundary-layer flows. We investigate the discontinuous Galerkin method and the flux reconstruction approach, providing a detailed analysis of the connections between these methods. The connections found enable a better understanding of the broader class of discontinuous spectral/hp element methods. From this perspective it was evident that some of the issues of the discontinuous Galerkin method are also encountered in the flux reconstruction approach; in particular, the aliasing errors of the two schemes are identical. The techniques applied in the better-known discontinuous Galerkin method for tackling these errors can also be extended to the flux reconstruction approach. We present two dealiasing strategies based on the concept of consistent integration of the nonlinear terms. The first is a localised approach which targets, in each element, the nonlinearities arising in the problem, while the second is a more global approach which applies a higher-order quadrature to the overall right-hand side of the discretised equation(s). Both strategies have been observed to be effective in enhancing the robustness of the schemes considered. We finally present the direct numerical simulation of a high-speed subsonic boundary-layer flow past a three-dimensional roughness element, achieved by means of the compressible aerodynamic solver developed. Analyses of this type have in the past been performed largely with approximate theories; only recently has DNS been applied to similar studies in low-speed subsonic, supersonic and hypersonic regimes, owing to improvements in numerical techniques and an increase in computational resources.
This thesis takes a first step to close the gap between the results for a high-speed subsonic regime and the results in supersonic and hypersonic regimes.
APA, Harvard, Vancouver, ISO, and other styles
32

Tsui, Patrick P. C. "Optimization of vision system pose for three-dimensional object motion estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0004/MQ43228.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Katoozian, Hamidreza. "Three dimensional design optimization of femoral components of total hip endoprostheses." Case Western Reserve University School of Graduate Studies / OhioLINK, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=case1060799761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Pourbakhsh, Seyed Alireza. "Dummy TSV-Based Timing Optimization for 3D On-Chip Memory." Thesis, North Dakota State University, 2016. https://hdl.handle.net/10365/29093.

Full text
Abstract:
Design and fabrication of three-dimensional (3D) ICs is one of the newest and hottest trends in the semiconductor manufacturing industry. In 3D ICs, multiple 2D silicon dies are stacked vertically, and through-silicon vias (TSVs) are used to transfer power and signals between the dies. The electrical characteristics of TSVs can be modeled with equivalent circuits consisting of passive elements. In this thesis, we use “dummy” TSVs as electrical delay units in 3D SRAMs. Our results show that dummy-TSV-based delay units are as effective as conventional delay cells in performance, increase the operational frequency of the SRAM by up to 110%, reduce silicon area usage by up to 88%, incur negligible power overhead, and improve robustness against supply-voltage variation and fluctuation.
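The idea of a TSV acting as a delay element can be sketched from its lumped RC equivalent circuit: for a π-network, the Elmore delay gives a first-order estimate of the added signal delay. The parameter values below are illustrative round numbers, not figures from the thesis.

```python
# Sketch: delay contributed by a "dummy" TSV modeled as a lumped RC pi-network.
# All parameter values are illustrative assumptions.

def tsv_elmore_delay(r_tsv, c_tsv, c_load):
    """Elmore delay of a pi-model TSV: half the TSV capacitance at each end,
    series resistance r_tsv, driving a load capacitance c_load."""
    # An ideal driver charges the near C/2 instantly; R then charges C/2 + C_load.
    return r_tsv * (c_tsv / 2.0 + c_load)

# Illustrative values: 50 mOhm TSV resistance, 40 fF TSV capacitance, 10 fF load.
delay = tsv_elmore_delay(50e-3, 40e-15, 10e-15)
print(f"{delay * 1e15:.2f} fs")  # -> 1.50 fs
```

Chaining several such dummy TSVs, or sizing their geometry, tunes the total delay in the same way a chain of conventional delay cells would.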
APA, Harvard, Vancouver, ISO, and other styles
35

Visser, Hendrikus. "Energy management of three-dimensional minimum-time intercept." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/49954.

Full text
Abstract:
A real-time computer algorithm to control and optimize aircraft flight profiles is described and applied to a three-dimensional minimum-time intercept mission. The proposed scheme has roots in two well-known techniques: singular perturbations and neighboring-optimal guidance. Singular-perturbation ideas enter through the assumed trajectory-family structure: a heading/energy family of prestored point-mass-model state-Euler solutions is used as the baseline in this scheme. The next step is to generate a near-optimal guidance law that will transfer the aircraft to the vicinity of this reference family. The control commands fed to the autopilot consist of the reference controls plus correction terms which are linear combinations of the altitude and path-angle deviations from reference values, weighted by a set of precalculated gains. In this respect the proposed scheme resembles neighboring-optimal guidance. However, in contrast to the neighboring-optimal guidance scheme, the reference control and state variables as well as the feedback gains are stored as functions of energy and heading in the present approach. A detailed description of the feedback laws and of some of the mathematical tools used to construct the controller is presented. The construction of the feedback laws requires a substantial preflight computational effort, but the computation times for on-board execution of the feedback laws are very modest. Other issues relating to practical implementation are addressed as well. Numerical examples, comparing open-loop optimal and approximate feedback solutions for a sample high-performance fighter, illustrate the attractiveness of the guidance scheme. Optimal three-dimensional flight in the presence of a terrain limit is studied in some detail.
Ph. D.
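The structure of the feedback law, reference control plus gain-weighted deviations, can be sketched as follows. In the thesis the reference controls and gains are tabulated over both energy and heading; for brevity this sketch interpolates over a single normalized energy variable, and all table values are made-up placeholders.

```python
import numpy as np

# Sketch of the feedback-law structure: reference control and feedback gains
# stored as tables over (normalized) specific energy E, with linear corrections
# in altitude and path-angle deviations. All table values are placeholders.

E_grid    = np.linspace(0.0, 1.0, 5)                      # normalized specific energy
u_ref_tab = np.array([0.2, 0.4, 0.6, 0.8, 1.0])           # reference control
k_h_tab   = np.array([0.01, 0.012, 0.015, 0.018, 0.02])   # altitude-deviation gain
k_g_tab   = np.array([0.5, 0.55, 0.6, 0.65, 0.7])         # path-angle-deviation gain

def feedback_command(E, h_dev, gamma_dev):
    """u = u_ref(E) + K_h(E) * (h - h_ref) + K_gamma(E) * (gamma - gamma_ref)."""
    u_ref = np.interp(E, E_grid, u_ref_tab)
    k_h   = np.interp(E, E_grid, k_h_tab)
    k_g   = np.interp(E, E_grid, k_g_tab)
    return u_ref + k_h * h_dev + k_g * gamma_dev
```

Because only table look-ups and a few multiplications are executed on board, the on-line cost is tiny; the expensive part, as the abstract notes, is precomputing the reference family and gains before flight.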
APA, Harvard, Vancouver, ISO, and other styles
36

Lindkvist, Gaute. "Indirect boundary element methods for modelling bubbles under three dimensional deformation." Thesis, Deaprtment of Engineering Systems and Management, 2009. http://hdl.handle.net/1826/3098.

Full text
Abstract:
The nonlinear behaviour of gas and vapour bubbles is a complex phenomenon which plays a significant role in many natural and man-made processes. For example, bubbles excited by an acoustic field play important roles in lithotripsy, drug delivery, ultrasonic imaging and surface cleaning, and give rise to the phenomenon of sonoluminescence (light emission from a bubble excited by sound). In such contexts, the oscillation of even a single bubble is not yet fully understood, let alone the behaviour of multiple bubbles interacting with each other. An essential part of understanding such problems is understanding the complex and sometimes unpredictable coupling between the oscillation of the bubble volume and the bubble shape, a problem requiring experimental research, theoretical work and numerical studies. In this thesis we focus on numerical simulation of a single gas bubble oscillating in a free liquid. Previously, such numerical simulations have almost exclusively assumed axisymmetry and small-amplitude oscillations. To avoid these assumptions we build upon and extend previous boundary element methods used for three-dimensional simulations of other bubble problems. We use high-order elements and parallel processing to yield an indirect boundary element method capable of capturing fine surface effects on three-dimensional bubbles subjected to surface tension, over extended periods of time. We validate the method against the classical Rayleigh-Plesset equation for spherical oscillation problems before validating the indirect boundary element method and the method used by Shaw (2006) against each other on several small-amplitude axisymmetric oscillation problems. We then proceed to study near-resonant non-axisymmetric shape oscillations of order 2 and 4 and the effect these oscillations have on higher-order modes, with a level of detail we believe has not been achieved in a non-axisymmetric study before. We also confirm some predictions made by Pozrikidis on resonant interactions between the second-order modes and the volume mode. Finally, we study the spherical instability of a bubble trapped in a uniform acoustic field, demonstrating, as expected, that instabilities show up in all resonant shape modes, including non-axisymmetric ones.
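The spherical benchmark mentioned above, the classical Rayleigh-Plesset equation, can be integrated with a standard ODE solver. A minimal sketch, using generic air/water parameter values rather than any case from the thesis:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rayleigh-Plesset equation for a spherical gas bubble in a liquid:
#   rho * (R*R'' + 1.5*R'^2) = p_gas(R) - p_inf - 2*sigma/R - 4*mu*R'/R
# Parameter values are generic water/air figures chosen for illustration.

rho   = 1000.0     # liquid density [kg/m^3]
sigma = 0.072      # surface tension [N/m]
mu    = 1.0e-3     # dynamic viscosity [Pa s]
p_inf = 101325.0   # ambient pressure [Pa]
kappa = 1.4        # polytropic exponent
R0    = 10e-6      # equilibrium radius [m]
p_g0  = p_inf + 2 * sigma / R0   # equilibrium gas pressure

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = p_g0 * (R0 / R) ** (3 * kappa)
    Rddot = (p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / (rho * R) \
            - 1.5 * Rdot ** 2 / R
    return [Rdot, Rddot]

# Release the bubble at rest, slightly above its equilibrium radius.
sol = solve_ivp(rayleigh_plesset, (0.0, 5e-6), [1.1 * R0, 0.0],
                max_step=1e-9, rtol=1e-8)
print(sol.y[0].min() / R0, sol.y[0].max() / R0)
```

The bubble oscillates about R0 with slowly decaying amplitude from viscous damping; comparing a 3-D boundary element run against this radius history is the kind of spherical validation the abstract describes.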
APA, Harvard, Vancouver, ISO, and other styles
37

Sucharov, Leon. "An investigation of new methods of creating three-dimensional multiplanar displays." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.670223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Côté, Mathieu M. Eng Massachusetts Institute of Technology. "Shear wall layout optimization of dynamically loaded three-dimensional tall building structures." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119315.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 46-47).
Deciding on the appropriate layout of shear walls and the thickness of each member is an iterative process that is time consuming and often leads to suboptimal results. Every time the stiffness of the building is modified, the structural designer must ensure deflection and inter-story drift limits are respected, followed by flexural, shear and torsional strength checks for each shear wall. A computational optimization framework has the potential to limit the design time, but most importantly to identify layout configurations with lower cost, weight and embodied carbon, and with increased consideration for architectural constraints. Additionally, an optimization framework can provide a strong tool for early-stage, pre-conceptual idea exploration and thereby lead to increased collaboration between architects and engineers. This thesis presents an approach that allows the structural designer to design the shear wall layout of a three-dimensional structure using a linearized modal analysis and a modified genetic algorithm. The presented design scheme uses a ground structure approach, as it allows for architectural constraints to be embedded in the design. The objective is defined as a cost function that incorporates material cost and constructability. The proposed framework is used to design the shear wall layout of a building under wind and seismic load cases and is compared to the design obtained with conventional methods. Key terms: Shear wall layout, reinforced concrete, structural optimization, topology optimization, genetic algorithm, dynamic analysis, three-dimensional analysis, cost analysis of lateral systems, tall buildings
by Mathieu Côté.
M. Eng.
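The modified genetic algorithm itself is specific to the thesis, but the overall loop, a ground structure of candidate walls encoded as a bit string, a cost objective, and a penalty for violating a stiffness (drift) requirement, can be sketched generically. The stiffness and cost numbers below are random placeholders standing in for the linearized modal analysis:

```python
import random

# Toy ground-structure GA: each bit switches one candidate shear-wall position
# on or off. The stiffness and cost models are crude placeholders.

random.seed(0)
N_WALLS = 12
stiffness = [random.uniform(1.0, 3.0) for _ in range(N_WALLS)]  # per-wall stiffness
cost      = [random.uniform(1.0, 2.0) for _ in range(N_WALLS)]  # per-wall cost
K_REQUIRED = 10.0   # required lateral stiffness (proxy for the drift limit)

def fitness(layout):
    k = sum(s for s, on in zip(stiffness, layout) if on)
    c = sum(cc for cc, on in zip(cost, layout) if on)
    penalty = 100.0 * max(0.0, K_REQUIRED - k)   # infeasible layouts are penalized
    return c + penalty

def evolve(pop_size=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(N_WALLS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_WALLS)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_WALLS)         # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

The penalty formulation lets the GA roam through infeasible layouts early on while steering the final population toward designs that meet the stiffness requirement at minimum cost.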
APA, Harvard, Vancouver, ISO, and other styles
39

Schwarz, Sebastian. "Depth Map Upscaling for Three-Dimensional Television : The Edge-Weighted Optimization Concept." Licentiate thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17048.

Full text
Abstract:
With the recent comeback of three-dimensional (3D) movies to the cinemas, there have been increasing efforts to spread the commercial success of 3D to new markets. The possibility of a 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Scene depth information plays a crucial role in all parts of the distribution chain from content capture via transmission to the actual 3D display. This depth information is transmitted in the form of depth maps and is accompanied by corresponding video frames, i.e. for Depth Image Based Rendering (DIBR) view synthesis. Nonetheless, scenarios do exist for which the original spatial resolutions of depth maps and video frames do not match, e.g. sensor driven depth capture or asymmetric 3D video coding. This resolution discrepancy is a problem, since DIBR requires accordance between the video frame and depth map. A considerable amount of research has been conducted into ways to match low-resolution depth maps to high resolution video frames. Many proposed solutions utilize corresponding texture information in the upscaling process, however they mostly fail to review this information for validity. In the strive for better 3DTV quality, this thesis presents the Edge-Weighted Optimization Concept (EWOC), a novel texture-guided depth upscaling application that addresses the lack of information validation. EWOC uses edge information from video frames as guidance in the depth upscaling process and, additionally, confirms this information based on the original low resolution depth. Over the course of four publications, EWOC is applied in 3D content creation and distribution. Various guidance sources, such as different color spaces or texture pre-processing, are investigated. 
An alternative depth compression scheme, based on depth map upscaling, is proposed, and extensions for increased visual quality and computational performance are presented in this thesis. EWOC was evaluated and compared with competing approaches, with the main focus consistently on the visual quality of the rendered 3D views. The results show an increase in both objective and subjective visual quality over state-of-the-art depth map upscaling methods. This quality gain motivates the choice of EWOC in applications affected by low-resolution depth. In the end, EWOC can improve 3D content generation and distribution, enhancing the 3D experience and boosting the commercial success of 3DTV.
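The general flavour of texture-guided depth upscaling by optimization can be shown in one dimension: minimize a data term at the known low-resolution samples plus a smoothness term whose weights collapse across guide-image edges, so depth discontinuities align with texture edges. This is a generic illustration of the idea, not EWOC itself, and all signals are synthetic:

```python
import numpy as np

# 1-D edge-weighted upscaling: solve for a high-resolution depth signal x by
# minimizing
#   sum_k (x[s_k] - depth_lo[k])^2 + lam * sum_i w_i * (x[i+1] - x[i])^2
# where w_i is small wherever the guide (texture) signal has a strong gradient.

def upscale_depth_1d(depth_lo, guide_hi, factor, lam=10.0, sigma=0.1):
    n = len(guide_hi)
    samples = np.arange(0, n, factor)        # positions with known depth
    g = np.abs(np.diff(guide_hi))            # guide gradient magnitude
    w = np.exp(-(g / sigma) ** 2)            # edge weights: small across edges
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k, s in enumerate(samples):          # data term
        A[s, s] += 1.0
        b[s] += depth_lo[k]
    for i in range(n - 1):                   # weighted smoothness term
        A[i, i]         += lam * w[i]
        A[i + 1, i + 1] += lam * w[i]
        A[i, i + 1]     -= lam * w[i]
        A[i + 1, i]     -= lam * w[i]
    return np.linalg.solve(A, b)

# Guide: a step edge; low-res depth: two samples on either side of it.
guide    = np.array([0.0] * 8 + [1.0] * 8)
depth_lo = np.array([2.0, 2.0, 5.0, 5.0])
depth_hi = upscale_depth_1d(depth_lo, guide, factor=4)
```

Because the weight across the guide's step edge is essentially zero, the recovered depth stays flat on each side and jumps exactly at the texture edge rather than being blurred across it.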
APA, Harvard, Vancouver, ISO, and other styles
40

Bero, Mamdouh A. "Development of a three-dimensional radiation dosimetry system." Thesis, University of Surrey, 2001. http://epubs.surrey.ac.uk/719/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Hui. "Image-based boundary element computation of three-dimensional potential problems." Online access for everyone, 2008. http://www.dissertations.wsu.edu/Thesis/Summer2008/h_zhang_072308.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Daniele, Maurizio [Verfasser]. "Three Essays on Regularization Methods in High-Dimensional Factor Models / Maurizio Daniele." Konstanz : KOPS Universität Konstanz, 2020. http://d-nb.info/1216418632/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Bloodworth, Alan Graham. "Three-dimensional analysis of tunnelling effects on structures to develop design methods." Thesis, University of Oxford, 2002. http://ora.ox.ac.uk/objects/uuid:9c789d79-efa1-43fa-b2e1-e08d01de63db.

Full text
Abstract:
The subject of this thesis is the verification of a three-dimensional numerical modelling approach for the prediction of settlement damage to masonry buildings due to tunnelling in soft ground. The modelling approach was developed by previous researchers at Oxford, and was applied to three sites, representative of a range of practical configurations. The first involved the excavation of a shaft close to the corner of an eighteenth century church in London. The second involved tunnelling with very low cover beneath the foundations of a terrace of cottages at Ramsgate, Kent. The third was the relatively well-known case of tunnelling beneath the Mansion House, London, for construction of the extension to the Docklands Light Railway in the late 1980s. The overall conclusion of the project is that the modelling procedures are suitable for application to the detailed assessment of the response of buildings to tunnelling. Particular features of the procedures are that the building is modelled together with the ground and a representation of the tunnel excavation, and in three dimensions. It has been confirmed that all these features are necessary to model the building response, which may include a combination of shear deformation, arching and bending behaviour. Further lessons have been learned concerning the importance of the self-weight of the building in determining overall settlements, how to model openings such as doors and windows in façades, and whether it is necessary to model the building foundation. It has not proved possible, through lack of time, to model the advance of tunnels beneath buildings within this thesis. This, however, is observed to be an important effect in the field, particularly in causing damage to internal walls. It is recommended that further research be carried out in this area. This project made use of large-scale non-linear finite element analysis.
The demand on computing resources was high, stimulating many enhancements to the software, the most important of which was parallelisation of the analysis program for use on the Oxford Supercomputer. To obtain optimum results, larger model sizes are required. The computing resources to enable this should become more commonly available within the next few years, enabling the modelling techniques to be used routinely.
APA, Harvard, Vancouver, ISO, and other styles
44

Baurley, Sharon. "An exploration into technological methods to achieve three-dimensional form in textiles." Thesis, Royal College of Art, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.516679.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ardila, Ricardo. "Optimization of three-dimensional branching networks of microchannels for thermal management of microelectronics." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/1295.

Full text
Abstract:
The aim of this work is to present a methodology to develop cost-effective thermal management solutions for microelectronic devices, capable of removing the maximum amount of heat and delivering maximally uniform temperature distributions. The topological and geometrical characteristics of multiple-story three-dimensional branching networks of microchannels were developed using multi-objective optimization. A conjugate heat transfer analysis software package and an automatic 3D microchannel network generator were developed and coupled with a modified version of a particle-swarm optimization algorithm, with the goal of creating a design tool for 3D networks of optimized coolant flow passages. Numerical algorithms in the conjugate heat transfer solution package include a quasi-1D thermo-fluid solver and a steady heat diffusion solver, which were validated against results from a high-fidelity Navier-Stokes solver and analytical solutions for basic fluid dynamics test cases. Pareto-optimal solutions demonstrate that thermal loads of up to 500 W/cm² can be managed with 3D microchannel networks, with pumping power requirements up to 50% lower with respect to currently used high-performance cooling technologies.
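The particle-swarm component of such a framework can be sketched in a few lines. The objective below is a stand-in analytic bowl (imagine it returning peak chip temperature as a function of two normalized channel-geometry parameters); in the thesis the objective comes from the conjugate heat transfer solver and the algorithm is a modified variant.

```python
import random

# Bare-bones particle-swarm optimizer with standard inertia/cognitive/social
# parameters. The objective is a placeholder for an expensive thermal solve.

random.seed(1)

def objective(x):
    # Placeholder: a smooth bowl with its minimum at (0.5, 0.5).
    return sum((xi - 0.5) ** 2 for xi in x)

def pso(dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso()
```

A multi-objective variant would keep a Pareto archive instead of a single global best; the loop structure is otherwise the same.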
APA, Harvard, Vancouver, ISO, and other styles
46

Anderson, Adam. "Studies in the improvement of two dimensional and three dimensional boundary integral methods with free moving surfaces." Thesis, University of Bristol, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

O, Chol Gyu. "Shape optimization for Two-Dimensional transonic airfoil by using the coupling of FEM and BEM." [S.l. : s.n.], 2006. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-26978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Yu. "Efficient modeling methods for freeform objects /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?MECH%202006%20WANG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Leung, Carlos Wai Yin. "Efficient methods for 3D reconstruction from multiple images /." [St. Lucia, Qld.], 2006. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19150.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Pendse, Nachiket Vishwas. "An effective dimensional inspection method based on zone fitting." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3239.

Full text
Abstract:
Coordinate measuring machines are widely used to generate data points from an actual surface. The generated measurement data must be analyzed to yield critical geometric deviations of the measured part according to the requirements specified by the designer. However, ANSI standards do not specify the methods that should be used to evaluate the tolerances, and coordinate measuring machines employ different verification algorithms which may yield different results. Functional requirements or assembly conditions on a manufactured part are normally translated into geometric constraints to which the part must conform. The minimum zone evaluation technique is used when the measured data are regarded as an exact copy of the actual surface and the tolerance zone is represented as geometric constraints on the data. In the present study, a new zone-fitting algorithm is proposed. The algorithm evaluates the minimum zone that encompasses the set of points measured from the actual surface. The search for the rigid-body transformation that places the set of points in the zone is modeled as a nonlinear optimization problem. The algorithm is employed to find the form tolerance of 2-D geometries (line, circle) as well as 3-D geometries (cylinder). It is also used to propose an inspection methodology for turbine blades. By constraining the transformation parameters, the proposed methodology determines whether the points measured at the 2-D cross-sections fit in the corresponding tolerance zones simultaneously.
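For the simplest case in the abstract, a 2-D line, minimum-zone fitting reduces to a linear program: find the narrowest band of parallel lines containing all measured points. The thesis's formulation is a general nonlinear optimization; the sketch below, with synthetic CMM data, covers only this linear special case.

```python
import numpy as np
from scipy.optimize import linprog

# Minimum-zone straightness: minimize the band half-width t subject to
#   |y_i - (a*x_i + b)| <= t   for every measured point (x_i, y_i).
# Decision variables: [a, b, t]. The points below are synthetic.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 0.3 * x + 1.0 + rng.uniform(-0.05, 0.05, size=x.size)  # nominal line + form error

c = np.array([0.0, 0.0, 1.0])                    # minimize t
# y_i - a*x_i - b <= t   ->   -a*x_i - b - t <= -y_i
A1 = np.column_stack([-x, -np.ones_like(x), -np.ones_like(x)])
# a*x_i + b - y_i <= t   ->    a*x_i + b - t <=  y_i
A2 = np.column_stack([x, np.ones_like(x), -np.ones_like(x)])
res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([-y, y]),
              bounds=[(None, None), (None, None), (0, None)])
a_fit, b_fit, zone_half_width = res.x
print(2 * zone_half_width)   # minimum zone = straightness form error
```

For circles, cylinders, or constrained rigid-body transformations the constraints become nonlinear in the parameters, which is where a general nonlinear optimizer of the kind the abstract describes takes over.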
APA, Harvard, Vancouver, ISO, and other styles