Dissertations / Theses on the topic 'Scale decomposition'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Scale decomposition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Hawley, Stephen Dwyer. "Adaptive time-scale decomposition for multiscale systems /." Thesis, Connect to this title online; UW restricted, 2008. http://hdl.handle.net/1773/6009.

2

Finney, John D. "Decomposition and decentralized output control of large-scale systems." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/15606.

3

Shankar, Jayashree. "Analysis of a nonhierarchical decomposition algorithm." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-09192009-040336/.

4

Sanneman, Lindsay (Lindsay Michelle). "Decomposition techniques for large-scale optimization in the supply chain." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118674.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 103-105).
Integrated supply chain models provide an opportunity to optimize costs and production times in the supply chain while taking into consideration the many steps in the production and delivery process and the many constraints on time, shared resources, and throughput capabilities. In this work, mixed integer linear programming (MILP) models are developed to describe the manufacturing plant, consolidation transport, and distribution center components of the supply chain. Initial optimization results are obtained for each of these models. Additionally, an integrated model including a single plant, multiple consolidation transport vehicles, and a single distribution center is formulated and initial results are obtained. All models are implemented and optimized for their given objectives using a standard MILP solver. Initial optimization results suggest that it is intractable to solve problems of relevant scale using standard MILP solvers. The natural hierarchical structure in the supply chain problem lends itself well to the application of decomposition techniques intended to speed up solution time. Exact techniques, such as Benders decomposition, are explored as a baseline. Classical Benders decomposition is applied to the manufacturing plant model, and results indicate that Benders decomposition on its own will not improve solve times for the manufacturing plant problem and instead leads to longer solve times for the problems that are solved. This is likely due to the large number of discrete variables in the manufacturing plant model. To improve upon solve times for the manufacturing plant model, an approximate decomposition technique is developed, applied to the plant model, and evaluated. The approximate algorithm developed in this work decomposes the problem into a three-level hierarchical structure and integrates a heuristic approach at two of the three levels in order to solve abstracted versions of the larger problem and guide the search towards high-quality solutions. Results indicate that the approximate technique solves problems faster than the standard MILP solver, and all solutions are within approximately 20% of the true optimal solutions. Additionally, the approximate technique can solve problems twice the size of those solved by the standard MILP solver within a one-hour timeframe.
by Lindsay Sanneman.
S.M.
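As a concrete illustration of the classical Benders scheme this abstract takes as its baseline, here is a minimal sketch on a hypothetical fixed-charge toy problem, not the thesis's supply chain MILP: a closed-form subproblem supplies dual cuts to a small master problem solved with scipy.optimize.milp.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical toy instance: y_j = 1 opens facility j at fixed charge f[j];
# facility j covers D[i, j] units of demand i; uncovered demand i costs c[i]
# per unit. All data are made up for illustration.
rng = np.random.default_rng(0)
m, n = 6, 4
f = rng.uniform(5.0, 15.0, n)
c = rng.uniform(1.0, 3.0, m)
D = rng.uniform(0.0, 4.0, (m, n))
d = rng.uniform(4.0, 8.0, m)

def subproblem(y):
    """min c'x s.t. x >= d - D y, x >= 0, solved in closed form; returns the
    optimal value and the optimal dual u (0 <= u <= c) of the coverage rows."""
    slack = d - D @ y
    u = np.where(slack > 0.0, c, 0.0)
    return float(u @ slack), u          # value equals u'(d - D y)

cut_u = []                              # dual vectors defining Benders cuts
for it in range(1, 51):
    cons = None
    if cut_u:
        # Each cut theta >= u'(d - D y) is rewritten as u'D y + theta >= u'd.
        rows = np.array([np.append(u @ D, 1.0) for u in cut_u])
        cons = [LinearConstraint(rows, np.array([u @ d for u in cut_u]), np.inf)]
    res = milp(c=np.append(f, 1.0),     # master: min f'y + theta
               constraints=cons,
               integrality=np.append(np.ones(n), 0.0),
               bounds=Bounds(np.zeros(n + 1), np.append(np.ones(n), np.inf)))
    y, theta = np.round(res.x[:n]), res.x[n]
    sub_val, u = subproblem(y)
    if sub_val <= theta + 1e-7:         # master already anticipates the cost
        break
    cut_u.append(u)                     # add the violated optimality cut

print(f"optimal cost {f @ y + sub_val:.3f} after {it} Benders iterations")
```

Each iteration either certifies that the master's estimate theta matches the true subproblem cost at the chosen y, or adds the violated cut and resolves.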
5

Becker, Adrian Bernard Druke. "Decomposition methods for large scale stochastic and robust optimization problems." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68969.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 107-112).
We propose new decomposition methods for use on broad families of stochastic and robust optimization problems in order to yield tractable approaches for large-scale real-world applications. We introduce a new type of Markov decision problem named the Generalized Restless Bandits Problem that encompasses a broad generalization of the restless bandit problem. For this class of stochastic optimization problems, we develop a nested policy heuristic which iteratively solves a series of sub-problems operating on smaller bandit systems. We also develop linear-optimization based bounds for the Generalized Restless Bandit problem and demonstrate promising computational performance of the nested policy heuristic on a large-scale real world application of search term selection for sponsored search advertising. We further study the distributionally robust optimization problem with known mean, covariance and support. These optimization models are attractive in their real world applications as they require the model consumer to only rely on those statistics of uncertainty that are known with relative confidence rather than making arbitrary assumptions about the exact dynamics of the underlying distribution of uncertainty. Known to be NP-hard, current approaches invoke tractable but often weak relaxations for real-world applications. We develop a decomposition method for this family of problems which recursively derives sub-policies along projected dimensions of uncertainty and provides a sequence of bounds on the value of the derived policy. In the development of this method, we prove that non-convex quadratic optimization in n-dimensions over a box in two-dimensions is efficiently solvable. We also show that this same decomposition method yields a promising heuristic for the MAXCUT problem. We then provide promising computational results in the context of a real world fixed income portfolio optimization problem. The decomposition methods developed in this thesis recursively derive sub-policies on projected dimensions of the master problem. These sub-policies are optimal on relaxations which admit "tight" projections of the master problem; that is, the projection of the feasible region for the relaxation is equivalent to the projection of that of the master problem along the dimensions of the sub-policy. Additionally, these decomposition strategies provide a hierarchical solution structure that aids in solving large-scale problems.
by Adrian Bernard Druke Becker.
Ph.D.
6

Ortiz, Diaz Camilo. "Block-decomposition and accelerated gradient methods for large-scale convex optimization." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53438.

Abstract:
In this thesis, we develop block-decomposition (BD) methods and variants of accelerated gradient methods for large-scale conic programming and convex optimization, respectively. The BD methods, discussed in the first two parts of this thesis, are inexact versions of proximal-point methods applied to two-block-structured inclusion problems. The adaptive accelerated methods, presented in the last part of this thesis, can be viewed as new variants of Nesterov's optimal method. In an effort to improve their practical performance, these methods incorporate important speed-up refinements motivated by theoretical iteration-complexity bounds and our observations from extensive numerical experiments. We provide several benchmarks on various important problem classes to demonstrate the efficiency of the proposed methods compared to the most competitive ones proposed earlier in the literature. In the first part of this thesis, we consider exact BD first-order methods for solving conic semidefinite programming (SDP) problems and the more general problem that minimizes the sum of a convex differentiable function with Lipschitz continuous gradient, and two other proper closed convex (possibly, nonsmooth) functions. More specifically, these problems are reformulated as two-block monotone inclusion problems and exact BD methods, namely the ones that solve both proximal subproblems exactly, are used to solve them. In addition to being able to solve standard form conic SDP problems, the latter approach is also able to directly solve specially structured non-standard form conic programming problems without the need to add additional variables and/or constraints to bring them into standard form. Several ingredients are introduced to speed up the BD methods in their pure form, such as: adaptive (aggressive) choices of stepsizes for performing the extragradient step; and dynamic updates of scaled inner products to balance the blocks. Finally, computational results on several classes of SDPs are presented showing that the exact BD methods outperform the three most competitive codes for solving large-scale conic semidefinite programs. In the second part of this thesis, we present an inexact BD first-order method for solving standard form conic SDP problems which avoids computations of exact projections onto the manifold defined by the affine constraints and, as a result, is able to handle extra-large-scale SDP instances. In this BD method, while the proximal subproblem corresponding to the first block is solved exactly, the one corresponding to the second block is solved inexactly in order to avoid finding the exact solution of a linear system corresponding to the manifolds consisting of both the primal and dual affine feasibility constraints. Our implementation uses the conjugate gradient method applied to a reduced positive definite dual linear system to obtain inexact solutions of the latter augmented primal-dual linear system. In addition, the inexact BD method incorporates a new dynamic scaling scheme that uses two scaling factors to balance three inclusions comprising the optimality conditions of the conic SDP. Finally, we present computational results showing the efficiency of our method for solving various extra-large SDP instances, several of which cannot be solved by other existing methods, including some with at least two million constraints and/or fifty million non-zero coefficients in the affine constraints.
In the last part of this thesis, we consider an adaptive accelerated gradient method for a general class of convex optimization problems. More specifically, we present a new accelerated variant of Nesterov's optimal method in which certain acceleration parameters are adaptively (and aggressively) chosen so as to: preserve the theoretical iteration-complexity of the original method; and substantially improve its practical performance in comparison to the other existing variants. Computational results are presented to demonstrate that the proposed adaptive accelerated method performs quite well compared to other variants proposed earlier in the literature.
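For reference, the sketch below shows a standard (non-adaptive) variant of Nesterov's optimal method on an assumed toy quadratic; the thesis's contribution, the adaptive choice of acceleration parameters, is not reproduced here.

```python
import numpy as np

# Toy strongly convex quadratic f(x) = 0.5 x'Qx - b'x (assumed data).
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
Q = A.T @ A + 0.1 * np.eye(40)          # positive definite Hessian
b = rng.standard_normal(40)
L = np.linalg.eigvalsh(Q).max()         # Lipschitz constant of the gradient

grad = lambda x: Q @ x - b
x_star = np.linalg.solve(Q, b)          # exact minimizer, for checking

x = x_prev = np.zeros(40)
t = 1.0
for k in range(200):
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum extrapolation
    x_prev, x = x, y - grad(y) / L                # gradient step at y
    t = t_next

print("distance to minimizer:", np.linalg.norm(x - x_star))
```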
7

Scott, Drew. "Decomposition Methods for Routing and Planning of Large-Scale Aerospace Systems." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1617108065278479.

8

Prescott, Thomas Paul. "Large-scale layered systems and synthetic biology : model reduction and decomposition." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:205a18fb-b21f-4148-ba7d-3238f4b1f25b.

Abstract:
This thesis is concerned with large-scale systems of Ordinary Differential Equations that model Biomolecular Reaction Networks (BRNs) in Systems and Synthetic Biology. It addresses the strategies of model reduction and decomposition used to overcome the challenges posed by the high dimension and stiffness typical of these models. A number of developments of these strategies are identified, and their implementation on various BRN models is demonstrated. The goal of model reduction is to construct a simplified ODE system to closely approximate a large-scale system. The error estimation problem seeks to quantify the approximation error; this is an example of the trajectory comparison problem. The first part of this thesis applies semi-definite programming (SDP) and dissipativity theory to this problem, producing a single a priori upper bound on the difference between two models in the presence of parameter uncertainty and for a range of initial conditions, for which exhaustive simulation is impractical. The second part of this thesis is concerned with the BRN decomposition problem of expressing a network as an interconnection of subnetworks. A novel framework, called layered decomposition, is introduced and compared with established modular techniques. Fundamental properties of layered decompositions are investigated, providing basic criteria for choosing an appropriate layered decomposition. Further aspects of the layering framework are considered: we illustrate the relationship between decomposition and scale separation by constructing singularly perturbed BRN models using layered decomposition; and we reveal the inter-layer signal propagation structure by decomposing the steady state response to parametric perturbations. Finally, we consider the large-scale SDP problem, where large scale SDP techniques fail to certify a system’s dissipativity. We describe the framework of Structured Storage Functions (SSF), defined where systems admit a cascaded decomposition, and demonstrate a significant resulting speed-up of large-scale dissipativity problems, with applications to the trajectory comparison technique discussed above.
9

Chan, Chi-keung, and 陳志強. "Minimum bounding boxes and volume decomposition of CAD models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29947340.

10

Kemenov, Konstantin A. "A New Two-Scale Decomposition Approach for Large-Eddy Simulation of Turbulent Flows." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11520.

Abstract:
A novel computational approach, Two Level Simulation (TLS), was developed based on the explicit reconstruction of the small-scale velocity by solving the small-scale governing equations on a domain of reduced dimension representing a collection of one-dimensional lines embedded in the three-dimensional flow domain. A coupled system of equations, not based on an eddy-viscosity hypothesis, was derived from the decomposition of flow variables into large-scale and small-scale components without introducing the concept of filtering. A simplified treatment of the small-scale equations was proposed based on modeling of the small-scale advective derivatives and the small-scale dissipative terms in the directions orthogonal to the lines. The TLS approach was tested on benchmark cases of turbulent flows, including forced isotropic turbulence, mixing layers and well-developed channel flow, and demonstrated good capabilities to capture turbulent flow features using relatively coarse grids.
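The large-scale/small-scale split that TLS builds on can be pictured with a simple sketch (assumed 1-D signal; only the exact decomposition u = u_large + u_small is shown, not the coupled TLS equations, which evolve both components without filtering):

```python
import numpy as np

# A 1-D "velocity" signal with a slow and a fast component (assumed data).
x_fine = np.linspace(0.0, 1.0, 1024, endpoint=False)
u = np.sin(2 * np.pi * x_fine) + 0.2 * np.sin(2 * np.pi * 40 * x_fine)

# Large-scale field: the signal sampled on a coarse grid and interpolated back
# to the fine grid; the small-scale field is the exact remainder.
stride = 32
u_large = np.interp(x_fine, x_fine[::stride], u[::stride])
u_small = u - u_large                   # u = u_large + u_small by construction

print("large-scale RMS:", u_large.std(), "small-scale RMS:", u_small.std())
```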
11

Nehate, Girish. "Solving large scale support vector machine problems using matrix splitting and decomposition methods /." Search for this dissertation online, 2006. http://wwwlib.umi.com/cr/ksu/main.

12

Nagurney, Anna, and Dae-Shik Kim. "Parallel Computation of Large-Scale Dynamic Market Network Equilibria via Time Period Decomposition." Massachusetts Institute of Technology, Operations Research Center, 1990. http://hdl.handle.net/1721.1/5317.

Abstract:
In this paper we consider a dynamic market equilibrium problem over a finite time horizon in which a commodity is produced, consumed, traded, and inventoried over space and time. We first formulate the problem as a network equilibrium problem and derive the variational inequality formulation of the problem. We then propose a parallel decomposition algorithm which decomposes the large-scale problem into T + 1 subproblems, where T denotes the number of time periods. Each of these subproblems can then be solved simultaneously, that is, in parallel, on distinct processors. We provide computational results on linear separable problems and on nonlinear asymmetric problems when the algorithm is implemented in a serial and then in a parallel environment. The numerical results establish that the algorithm is linear in the number of time periods. This research demonstrates that this new formulation of dynamic market problems and decomposition procedure considerably expands the size of problems that are now feasible to solve.
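The structure of the algorithm, T + 1 independent subproblems per step, can be sketched as follows, with a hypothetical separable toy subproblem standing in for the paper's network equilibrium subproblems:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def solve_period(args):
    """Stand-in for one of the T + 1 subproblems: project a target flow vector
    onto the box [0, cap] (the real subproblems are network equilibria)."""
    target, cap = args
    return np.clip(target, 0.0, cap)

if __name__ == "__main__":
    T, n = 8, 5
    rng = np.random.default_rng(2)
    targets = [rng.standard_normal(n) for _ in range(T)]
    caps = [rng.uniform(0.5, 1.5, n) for _ in range(T)]

    # Each period's subproblem runs on a separate worker process.
    with ProcessPoolExecutor() as pool:
        flows = list(pool.map(solve_period, zip(targets, caps)))
    print("period 0 flows:", flows[0])
```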
13

Locci-Lopez, Daniel Eduardo. "Permian Basin Reservoir Quantitative Interpretation Applying the Multi-Scale Boxcar Transform Spectral Decomposition." Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10816133.

Abstract:

The Short Time Fourier transform and the S-transform are among the most used methods of spectral decomposition to localize spectra in time and frequency. The S-transform utilizes a frequency-dependent Gaussian analysis window that is normalized for energy conservation purposes. The STFT, on the other hand, has a selected fixed time window that does not depend on frequency. In previous literature, it has been demonstrated that the S-transform distorts the Fourier spectra, shifting frequency peaks, and could result in misleading frequency attributes. Therefore, one way of making the S-transform more appropriate for quantitative seismic signal analysis is to ignore the conservation of energy over time requirement. This suggests a hybrid approach between the Short Time Fourier transform and the S-transform for seismic interpretation purposes. In this work, we introduce the Multi-Scale Boxcar transform, which has temporal resolution comparable to the S-transform while giving correct Fourier peak frequencies. The Multi-Scale Boxcar transform includes a special analysis window that focusses the analysis on the highest-amplitude portion of the Gaussian window, giving a more accurate time-frequency representation of the spectra in comparison with the S-transform. Post-stack seismic data with strong well-log control were used to demonstrate the differences between the Multi-Scale Boxcar transform and the S-transform. The analysis area in this work is the Pennsylvanian and Lower Permian Horseshoe Atoll Carbonate play in the Midland Basin, a sub-basin in the larger Permian Basin. The Multi-Scale Boxcar transform spectral decomposition method improved the seismic interpretation of the study area, showing better temporal resolution for resolving the layered reservoirs' heterogeneity. The time and depth scale values on the figures are shifted at the sponsor's request, but the relative scale is correct.
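For reference, here is a discrete sketch of the S-transform's frequency-dependent Gaussian window, the ingredient whose energy normalization the abstract identifies as the source of peak distortion (the discretization and test signal are assumptions; the Multi-Scale Boxcar window itself is not reproduced):

```python
import numpy as np

# One frequency row of the discrete S-transform, using the frequency-dependent
# Gaussian window of width 1/|f|, normalized to unit area over time.
fs = 500.0                                   # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
h = np.where(t < 0.5, np.sin(2 * np.pi * 30 * t), np.sin(2 * np.pi * 80 * t))

def s_transform_row(h, t, f, fs):
    """S(tau, f) for all analysis times tau at one frequency f."""
    tau = t[:, None]
    w = (abs(f) / np.sqrt(2 * np.pi)) * np.exp(-0.5 * (f * (tau - t[None, :]))**2)
    return (w * (h * np.exp(-2j * np.pi * f * t))[None, :]).sum(axis=1) / fs

row = s_transform_row(h, t, 30.0, fs)
# The 30 Hz tone lives in the first half of the record; |S| should peak there.
print("peak of |S(tau, 30 Hz)| at tau =", t[np.abs(row).argmax()], "s")
```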

14

Phipps, Maximillian Joshua Sebastian. "Energy decomposition analysis for large-scale first principles quantum mechanical simulations of biomolecules." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/410305/.

Abstract:
Kohn-Sham density functional theory (DFT) is an extraordinarily powerful and versatile tool for calculating the properties of materials. In its conventional form, this approach scales cubically with the size of the system under study. This scaling becomes prohibitive when investigating larger arrangements such as biomolecules and nanostructures. More recently linear-scaling approaches have been developed that overcome this limitation, allowing calculations to be performed on systems many thousands of atoms in size. An example of such an approach is the ONETEP code which uses a plane wave-like basis set and is based upon the use of spherically-localised orbitals. A simple yet common calculation performed using ab initio codes is the total (ground state) energy calculation. By comparing the energy of isolated parts of a system to the energy of the combined system, we are able to obtain the energy of interaction. This quantity is useful as it provides a relative measure of the enthalpic stability of an interaction which can be compared to other systems. Equally, however, this quantity gives little indication of the driving forces that lead to the interaction energy we observe. A number of approaches have been developed that aim to identify these driving forces. Energy decomposition analysis (EDA) refers to the set of methods that decompose the interaction energy into physically relevant energy components which add to the full interaction energy. Few studies have applied EDA approaches to larger systems in the thousand-atom regime, with the vast majority of investigations focussing on small system studies (less than 100 atoms in size). These methods have shown varying degrees of success. In this work, we have evaluated the suitability of a selection of popular EDA methods in decomposing the interaction energies of small biomolecule-like systems. Based on the results of this review, we developed a linear-scaling EDA approach in the ONETEP code that separates the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarisation, and charge transfer). The intermediate state used to calculate polarisation, also known as the absolutely localised molecular orbital (ALMO) state, has the key advantage of being fully antisymmetric and variationally optimised. The linear-scaling capability of the scheme is based on use of an adaptive purification approach and sparse matrix equations. We demonstrate the accuracy of this approach in reproducing the energy component values of its Gaussian basis counterparts, and present a remedy to the limitation of polarisation and charge transfer basis set dependence that is based on the property of strict localisation of the ONETEP orbitals. Additionally, we show the method to have mild exchange-correlation functional and atomic coordinate dependence. We have demonstrated the high value of our method by applying it to the thrombin protein interacting with a number of small binders. Here, we used our scheme in combination with electron density difference (EDD) plots to identify the key protein and ligand regions that contribute to polarisation and charge transfer. In our studies, we assessed convergence of the EDA components with protein truncation up to a total system size of 4975 atoms. Additionally, we applied our EDA to binders that had been partitioned into smaller fragments. 
Here, we accurately quantified the bonding contributions of key ligand moieties with particular regions of the protein cavity. We assessed how accurately the ligand binding components are reproduced by the fragment contributions using an additivity measure. Using this measure, we showed the fragment binding components to add up to the full ligand binding component with overall minimal additivity error. We also investigated the energy components of a series of small thrombin S1 pocket binders all less than 30 atoms in size. In this study, we demonstrate the EDA and EDD plots as tools for understanding the relative importance of different binder structural features and positionings within the pocket. Overall, we show our EDA method to be a stable and powerful approach for the analysis of interaction energies in systems of large size. The application of this method is not limited to biomolecular studies, and we expect that this approach can be readily applied to analyses within other fields, for example materials, catalysts, and nanostructures.
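The bookkeeping common to such EDA schemes, components defined so that they sum exactly to the supermolecular interaction energy, can be sketched with hypothetical numbers:

```python
# EDA bookkeeping with hypothetical numbers (kcal/mol): components are defined
# so that they add up exactly to the supermolecular interaction energy
# E(AB) - E(A) - E(B); the ALMO intermediate state separates polarisation
# (frozen state to ALMO state) from charge transfer (ALMO state to relaxed).
components = {
    "electrostatic":   -12.4,
    "exchange":         -3.1,
    "correlation":      -2.2,
    "Pauli repulsion": +10.8,
    "polarisation":     -4.6,   # frozen state -> ALMO state
    "charge transfer":  -1.5,   # ALMO state -> fully relaxed state
}
interaction_energy = sum(components.values())   # = -13.0 kcal/mol here
print(f"interaction energy: {interaction_energy:+.1f} kcal/mol")
```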
15

Sa, Shibasaki Rui. "Lagrangian Decomposition Methods for Large-Scale Fixed-Charge Capacitated Multicommodity Network Design Problem." Thesis, Université Clermont Auvergne‎ (2017-2020), 2020. http://www.theses.fr/2020CLFAC024.

Abstract:
Typically present in logistics and telecommunications domains, the Fixed-Charge Multicommodity Capacitated Network Design Problem remains challenging, especially when large-scale contexts are involved. In this particular case, the ability to produce good quality solutions in a reasonable amount of time leans on the availability of efficient algorithms. In that sense, the present thesis proposes Lagrangian approaches that are able to provide relatively sharp bounds for large-scale instances of the problem. The efficiency of the methods depends on the algorithm applied to solve the Lagrangian duals, so we choose between two of the most efficient solvers in the literature: the Volume Algorithm and the Bundle Method, providing a comparison between them. The results showed that the Volume Algorithm is more efficient in the present context, being the one kept for further research. A first Lagrangian heuristic was devised to produce good quality feasible solutions for the problem, obtaining far better results than Cplex for the largest instances. Concerning lower bounds, a Relax-and-Cut algorithm was implemented embedding sensitivity analysis and constraint scaling, which improved results. The increases in lower bounds attained 11%, but on average they remained under 1%. The Relax-and-Cut algorithm was then included in a Branch-and-Cut scheme, to solve linear programs in each node of the search tree. Moreover, a Feasibility Pump heuristic using the Volume Algorithm as solver for linear programs was implemented to accelerate the search for good feasible solutions in large-scale cases. The obtained results showed that the proposed scheme is competitive with the best algorithms in the literature, and provides the best results in large-scale contexts. Moreover, a heuristic version of the Branch-and-Cut algorithm based on the Lagrangian Feasibility Pump was tested, providing the best results in general, when compared to efficient heuristics in the literature.
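A minimal sketch of the Lagrangian-dual machinery such methods rely on, projected subgradient ascent on an assumed toy problem (the Volume Algorithm refines this with averaged primal iterates):

```python
import numpy as np

# Toy relaxed problem (assumed data): min c'x, x in {0,1}^n, s.t. A x <= b,
# with the coupling constraints dualized by multipliers lam >= 0.
rng = np.random.default_rng(3)
n, m = 30, 5
c = rng.uniform(-2.0, 1.0, n)
A = rng.uniform(0.0, 1.0, (m, n))
b = 0.3 * A.sum(axis=1)

lam = np.zeros(m)
best = -np.inf
for k in range(1, 300):
    red_cost = c + A.T @ lam            # the inner minimization separates
    x = (red_cost < 0.0).astype(float)  # x_j = 1 exactly when it pays off
    best = max(best, red_cost @ x - lam @ b)     # Lagrangian lower bound
    g = A @ x - b                       # subgradient of the concave dual
    lam = np.maximum(0.0, lam + g / k)  # projected subgradient ascent step

print("best Lagrangian lower bound:", best)
```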
16

Lange, Heinrich. "Solution of large-scale multicommodity network flow problems via a logarithmic barrier function decomposition/." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23397.

Abstract:
A new algorithm is presented using a logarithmic barrier function decomposition for the solution of the large-scale multicommodity network flow problem. Placing the complicating joint capacity constraints of the multicommodity network flow problem into a logarithmic barrier term of the objective function creates a nonlinear mathematical program with linear network flow constraints. Using the technique of restricted simplicial decomposition, we generate a sequence of extreme points by solving independent pure network problems for each commodity in a linear subproblem and optimize a nonlinear master problem over the convex hull of a fixed number of retained extreme points and the previous master problem solution. Computational results on a network with 3,300 nodes and 10,400 arcs are reported for four, ten and 100 commodities.
Keywords: Multicommodity network flow problem, Large-scale programming, Logarithmic barrier function, Price-directive decomposition
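The barrier idea can be sketched on a toy problem with assumed data, not the thesis's multicommodity code: the joint capacity constraint moves into the objective as a logarithmic term, and the barrier weight is then shrunk toward the constrained optimum.

```python
import numpy as np

# Toy problem: K "commodities" with quadratic preferences and linear costs,
# coupled only by the joint capacity sum_k x_k <= cap (all data assumed).
rng = np.random.default_rng(4)
K, cap = 3, 4.0
c = rng.uniform(0.5, 2.0, K)            # linear routing costs
target = rng.uniform(1.0, 3.0, K)       # desired per-commodity flows

x, mu, lr = np.full(K, 0.1), 1.0, 0.01
for outer in range(8):                  # barrier continuation: shrink mu
    for _ in range(500):
        slack = cap - x.sum()
        g = (x - target) + c + mu / slack      # gradient of barrier objective
        x = np.clip(x - lr * g, 0.0, None)
        if x.sum() >= cap:                     # crude safeguard: stay feasible
            x *= 0.99 * cap / x.sum()
    mu *= 0.2

print("flows:", np.round(x, 3), "joint usage:", round(x.sum(), 3), "cap:", cap)
```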
17

Wang, Xiaochuan. "A Domain Decomposition Method for Analysis of Three-Dimensional Large-Scale Electromagnetic Compatibility Problems." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338376950.

18

MirHassani, S. Ali. "An investigation of tools for modelling and solving large scale linear programming problems under uncertainty." Thesis, Brunel University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263504.

19

Agunwamba, Chukwunomso. "A MATLAB Program to implement the band-pass method for discovering relevant scales in surface roughness measurement." Digital WPI, 2010. https://digitalcommons.wpi.edu/etd-theses/89.

Abstract:
This project explores how to use band-pass filtering with a variety of filters to filter both two- and three-dimensional surface data. The software developed collects and makes available these filtering methods to support a larger project, and is used to automate the filtering procedure. This paper goes through the work-flow of the program, explaining how each filter was implemented. It then demonstrates how the filters work by applying them to surface data used to test the correlation between friction and roughness [Berglund and Rosen, 2009]. It also provides some explanation of the mathematical development of the filtering procedures as obtained from the literature.
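In the spirit of the band-pass method described, here is a minimal Python sketch of band-pass filtering a synthetic 1-D roughness profile (cutoff wavelengths and filter order are assumptions; the actual program is written in MATLAB and also handles 3-D surface data):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Synthetic 1-D roughness profile: long-wavelength form, mid-band waviness,
# fine roughness, and noise (all amplitudes and wavelengths are assumptions).
dx = 1e-3                               # sampling step, mm
x = np.arange(0.0, 50.0, dx)
rng = np.random.default_rng(5)
profile = (0.5 * np.sin(2 * np.pi * x / 8.0)
           + 0.1 * np.sin(2 * np.pi * x / 1.0)
           + 0.05 * np.sin(2 * np.pi * x / 0.08)
           + 0.01 * rng.standard_normal(x.size))

fs = 1.0 / dx                           # spatial sampling frequency, 1/mm
lo, hi = 1.0 / 2.5, 1.0 / 0.25          # keep wavelengths of 0.25 to 2.5 mm
sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
band = sosfiltfilt(sos, profile)        # zero-phase band-pass filtering

print("RMS of the band-limited roughness:", band.std())
```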
20

Gade, Dinakar. "Algorithms and Reformulations for Large-Scale Integer and Stochastic Integer Programs." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343182054.

21

Sherbaf, Behtash Mohammad. "A Decomposition-based Multidisciplinary Dynamic System Design Optimization Algorithm for Large-Scale Dynamic System Co-Design." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1535468984437623.

22

Culioli, Jean-Christophe. "Algorithmes de decomposition/coordination en optimisation stochastique." Paris, ENMP, 1987. http://www.theses.fr/1987ENMP0059.

Abstract:
The systems considered, which are often complex to model and/or optimize, may consist of heterogeneous subsystems for which a global solution technique is not necessarily appropriate or possible, even when those subsystems are equivalent and few in number.
23

Ehrler, Christoph [Verfasser], and S. [Akademischer Betreuer] Hinz. "Scale-Wavelength Decomposition of Hyperspectral Signals - Use for Mineral Classification & Quantification / Christoph Ehrler. Betreuer: S. Hinz." Karlsruhe : KIT-Bibliothek, 2014. http://d-nb.info/1048384926/34.

24

Xu, Chaojun [Verfasser]. "Coordination and Decomposition of Large-Scale Planning and Scheduling Problems with Application to Steel Production / Chaojun Xu." Aachen : Shaker, 2013. http://d-nb.info/1049381610/34.

25

Ali, Rana Shahbaz [Verfasser], and Ellen [Akademischer Betreuer] Kandeler. "Microbial regulation of soil organic matter decomposition at the regional scale / Rana Shahbaz Ali ; Betreuer: Ellen Kandeler." Hohenheim : Kommunikations-, Informations- und Medienzentrum der Universität Hohenheim, 2019. http://d-nb.info/1176625047/34.

26

Sariyuce, Ahmet Erdem. "Fast Algorithms for Large-Scale Network Analytics." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429825578.

27

Nemoto, Jiro, and Mika Goto. "Productivity, Efficiency, Scale Economies and Technical Change: A New Decomposition Analysis of TFP Applied to the Japanese Prefectures." Elsevier, 2005. http://hdl.handle.net/2237/7770.

28

Weiglein, Tyler Lorenz. "A Continental-Scale Investigation of Factors Controlling the Vulnerability of Soil Organic Matter in Mineral Horizons to Decomposition." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/101987.

Abstract:
Soil organic matter (SOM) is the largest terrestrial pool of organic carbon (C), and potential carbon-climate feedbacks involving SOM decomposition could exacerbate anthropogenic climate change. Despite the importance of SOM in the global C cycle, our understanding of the controls on SOM stabilization and decomposition is still developing, and as such, SOM dynamics are a source of major uncertainty in current Earth system models (ESMs), which reduces the effectiveness of these models in predicting the efficacy of climate change mitigation strategies. To improve our understanding of controls on SOM decomposition at scales relevant to such modeling efforts, A and upper B horizon soil samples from 22 National Ecological Observatory Network (NEON) sites spanning the conterminous U.S. were incubated for 52 weeks under conditions representing site-specific mean summer temperature and horizon-specific field capacity (-33 kPa) water potential. Cumulative CO2 respired was periodically measured and normalized by soil organic C content to obtain cumulative specific respiration (CSR). A two-pool decomposition model was fitted to the CSR data to calculate decomposition rates of fast- (k_fast) and slow-cycling pools (k_slow). Post-LASSO best subsets multiple linear regression was used to construct horizon-specific models of significant predictors for CSR, k_fast, and k_slow. Significant predictors for all three response variables consisted mostly of proximal factors related to clay-sized fraction mineralogy and SOM composition. Non-crystalline minerals and lower SOM lability negatively affected CSR for both A and B horizons. Significant predictors for decomposition rates varied by horizon and pool. B horizon decomposition rates were positively influenced by nitrogen (N) availability, while an index of pyrogenic C had a negative effect on k_fast in both horizons. These results reinforce the recognized need to explicitly represent SOM stabilization via interactions with non-crystalline minerals in ESMs, and they also suggest that increased N inputs could enhance SOM decomposition in the subsoil, highlighting another mechanism beyond shifts in temperature and precipitation regimes that could alter SOM decomposition rates.
Master of Science
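The two-pool model mentioned above can be sketched as a nonlinear least-squares fit; the functional form and the synthetic data below are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_pool(t, f_fast, k_fast, k_slow):
    """Cumulative C respired per unit SOC from a fast and a slow first-order
    pool (assumed functional form)."""
    return f_fast * (1.0 - np.exp(-k_fast * t)) \
        + (1.0 - f_fast) * (1.0 - np.exp(-k_slow * t))

weeks = np.arange(1.0, 53.0)            # 52-week incubation
rng = np.random.default_rng(6)
csr = two_pool(weeks, 0.05, 0.4, 0.0008) + rng.normal(0.0, 0.0005, weeks.size)

popt, _ = curve_fit(two_pool, weeks, csr, p0=(0.05, 0.1, 0.001),
                    bounds=([0.0, 0.0, 0.0], [1.0, 10.0, 1.0]))
f_fast, k_fast, k_slow = popt
print(f"fast pool fraction {f_fast:.3f}, k_fast {k_fast:.3f}/wk, "
      f"k_slow {k_slow:.5f}/wk")
```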
29

Salinas, Norma. "Decomposition in tropical forests : results from a large-scale leaf and wood translocation experiment along an elevation gradient in Peru." Thesis, University of Oxford, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.669928.

30

Mukati, Kapil. "An alternative structure for next generation regulatory controllers and scale-up of copper(indium gallium)selenide thin film co-evaporative physical vapor deposition process." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 311 p, 2007. http://proquest.umi.com/pqdweb?did=1397912441&sid=12&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Abstract:
Thesis (Ph.D.)--University of Delaware, 2007.
Principal faculty advisor: Babatunde Ogunnaike, Dept. of Chemical Engineering, and Robert W. Birkmire, Dept. of Materials Science & Engineering. Includes bibliographical references.
31

Burton, Ludovic Nicolas. "Multi-Scale Thermal Modeling Methodology for High Power-Electronic Cabinets." Thesis, Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19808.

Abstract:
Future generations of all-electric ships will be highly dependent on electric power, since every single system aboard, such as the drive propulsion, the weapon system, and the communication and navigation systems, will be electrically powered. Power conversion modules (PCM) will be used to transform and distribute the power as desired in various zones within the ships. As power densities increase at both the component and system levels, high-fidelity thermal models of those PCMs are indispensable to reach high-performance and energy-efficient designs. Efficient systems-level thermal management requires modeling and analysis of complex turbulent fluid flow and heat transfer processes across several decades of length scales. In this thesis, a methodology for thermal modeling of complex PCM cabinets used in naval applications is offered. High-fidelity computational fluid dynamics and heat transfer (CFD/HT) models are created in order to analyze the heat dissipation from the chip to the multi-cabinet level and optimize turbulent convection cooling inside the cabinet enclosure. Conventional CFD/HT modeling techniques for such complex and multi-scale systems are severely limited as a design or optimization tool. The large size of such models and the complex physics involved result in extremely slow processing time. A multi-scale approach has been developed to predict accurately the overall airflow conditions at the cabinet level as well as the airflow around components, which dictates the chip temperature in detail. Various models of different length scales are linked together by matching the boundary conditions. The advantage is that it allows high-fidelity models at each length scale, and more detailed simulations are obtained than what could have been accomplished with a single-model methodology. It was found that the power cabinets, under the prescribed design parameters, experience operating-point airflow rates that are much lower than the design requirements. The flow is unevenly distributed through the various bays. Approximately 90% of the cold plenum inlet flow rate goes exclusively through Bay 1 and Bay 2. Re-circulation and reverse flow are observed in regions experiencing a lack of flow motion. As a result, high airflow temperatures and consequently high component temperatures are also experienced in the upper bays of the cabinet. A proper orthogonal decomposition (POD) methodology has been performed to develop reduced-order compact models of the PCM cabinets. The reduced-order modeling approach based on POD reduces the numerical models containing 35 x 10^9 DOF down to less than 20 DOF, while still retaining great accuracy. The reduced-order models developed yield predictions of the full-field 3-D cabinet solution within 30 seconds, as opposed to the CFD/HT simulations that take more than 3 hours using a high-power computer cluster. The reduced-order modeling methodology developed could be a useful tool to quickly and accurately characterize the thermal behavior of any electronics system and provides a good basis for thermal design and optimization purposes.
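A sketch of the POD step at the heart of such reduced-order models, the method of snapshots via a singular value decomposition, on assumed synthetic data (the thesis applies this to CFD/HT fields vastly larger than this toy):

```python
import numpy as np

# Synthetic snapshot matrix: a few smooth spatial "temperature" modes with
# random amplitudes plus noise (assumed data).
rng = np.random.default_rng(7)
n_dof, n_snap = 2000, 40
xs = np.linspace(0.0, 1.0, n_dof)
modes_true = np.stack([np.sin((k + 1) * np.pi * xs) for k in range(3)], axis=1)
T = modes_true @ rng.standard_normal((3, n_snap)) \
    + 0.01 * rng.standard_normal((n_dof, n_snap))

T_mean = T.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(T - T_mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1     # modes for 99.9% of the energy
Phi = U[:, :r]                                  # POD basis

a = Phi.T @ (T[:, [0]] - T_mean)                # r modal coefficients
T_rec = T_mean + Phi @ a                        # reduced-order reconstruction
err = np.linalg.norm(T_rec - T[:, [0]]) / np.linalg.norm(T[:, [0]])
print(f"{r} modes, relative reconstruction error {err:.2e}")
```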
32

Westhoff, Andreas. "Spatial Scaling of Large-Scale Circulations and Heat Transport in Turbulent Mixed Convection." Doctoral thesis, Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-FD19-2.

33

Badillo, Almaraz Hiram. "Numerical modelling based on the multiscale homogenization theory. Application in composite materials and structures." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/83924.

Abstract:
A multi-domain homogenization method is proposed and developed in this thesis based on a two-scale technique. The method is capable of analyzing composite structures with several periodic distributions by partitioning the entire domain of the composite into substructures making use of the classical homogenization theory following a first-order standard continuum mechanics formulation. The need to develop the multi-domain homogenization method arose because current homogenization methods are based on the assumption that the entire domain of the composite is represented by one periodic or quasi-periodic distribution. However, in some cases the structure or composite may be formed by more than one type of periodic domain distribution, making the existing homogenization techniques not suitable to analyze this type of cases in which more than one recurrent configuration appears. The theoretical principles used in the multi-domain homogenization method were applied to assemble a computational tool based on two nested boundary value problems represented by a finite element code in two scales: a) one global scale, which treats the composite as an homogeneous material and deals with the boundary conditions, the loads applied and the different periodic (or quasi-periodic) subdomains that may exist in the composite; and b) one local scale, which obtains the homogenized response of the representative volume element or unit cell, that deals with the geometry distribution and with the material properties of the constituents. The method is based on the local periodicity hypothesis arising from the periodicity of the internal structure of the composite. The numerical implementation of the restrictions on the displacements and forces corresponding to the degrees of freedom of the domain's boundary derived from the periodicity was performed by means of the Lagrange multipliers method. The formulation included a method to compute the homogenized non-linear tangent constitutive tensor once the threshold of nonlinearity of any of the unit cells has been surpassed. The procedure is based in performing a numerical derivation applying a perturbation technique. The tangent constitutive tensor is computed for each load increment and for each iteration of the analysis once the structure has entered in the non-linear range. The perturbation method was applied at the global and local scales in order to analyze the performance of the method at both scales. A simple average method of the constitutive tensors of the elements of the cell was also explored for comparison purposes. A parallelization process was implemented on the multi-domain homogenization method in order to speed-up the computational process due to the huge computational cost that the nested incremental-iterative solution embraces. The effect of softening in two-scale homogenization was investigated following a smeared cracked approach. Mesh objectivity was discussed first within the classical one-scale FE formulation and then the concepts exposed were extrapolated into the two-scale homogenization framework. The importance of the element characteristic length in a multi-scale analysis was highlighted in the computation of the specific dissipated energy when strain-softening occurs. Various examples were presented to evaluate and explore the capabilities of the computational approach developed in this research. 
Several aspects were studied, such as analyzing different composite arrangements that include different types of materials, composites that present softening after the yield point is reached (e.g. damage and plasticity) and composites with zones that present high strain gradients. The examples were carried out in composites with one and with several periodic domains using different unit cell configurations. The examples are compared to benchmark solutions obtained with the classical one-scale FE method.
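The numerical tangent by perturbation described in this abstract can be sketched as follows; the unit-cell response function below is a made-up stand-in for the homogenized cell solve:

```python
import numpy as np

def cell_stress(strain):
    """Hypothetical homogenized stress response of a unit cell (Voigt-like,
    3 components), mildly nonlinear to mimic post-yield behaviour."""
    C0 = np.array([[10.0, 2.0, 0.0], [2.0, 10.0, 0.0], [0.0, 0.0, 4.0]])
    return C0 @ strain - 5.0 * strain * np.abs(strain)

def tangent_by_perturbation(strain, h=1e-6):
    """Forward-difference tangent C[i, j] ~ d(stress_i)/d(strain_j)."""
    s0 = cell_stress(strain)
    C = np.empty((3, 3))
    for j in range(3):
        e = strain.copy()
        e[j] += h                       # perturb one strain component
        C[:, j] = (cell_stress(e) - s0) / h
    return C

eps = np.array([0.02, -0.01, 0.005])
print(tangent_by_perturbation(eps))
```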
34

Badillo, Almaraz Hiram. "Numerial modelling based on the multiscale homogenization theory. Application in composite materials and structures." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/83924.

Full text
Abstract:
A multi-domain homogenization method is proposed and developed in this thesis based on a two-scale technique. The method is capable of analyzing composite structures with several periodic distributions by partitioning the entire domain of the composite into substructures making use of the classical homogenization theory following a first-order standard continuum mechanics formulation. The need to develop the multi-domain homogenization method arose because current homogenization methods are based on the assumption that the entire domain of the composite is represented by one periodic or quasi-periodic distribution. However, in some cases the structure or composite may be formed by more than one type of periodic domain distribution, making the existing homogenization techniques not suitable to analyze this type of cases in which more than one recurrent configuration appears. The theoretical principles used in the multi-domain homogenization method were applied to assemble a computational tool based on two nested boundary value problems represented by a finite element code in two scales: a) one global scale, which treats the composite as an homogeneous material and deals with the boundary conditions, the loads applied and the different periodic (or quasi-periodic) subdomains that may exist in the composite; and b) one local scale, which obtains the homogenized response of the representative volume element or unit cell, that deals with the geometry distribution and with the material properties of the constituents. The method is based on the local periodicity hypothesis arising from the periodicity of the internal structure of the composite. The numerical implementation of the restrictions on the displacements and forces corresponding to the degrees of freedom of the domain's boundary derived from the periodicity was performed by means of the Lagrange multipliers method. The formulation included a method to compute the homogenized non-linear tangent constitutive tensor once the threshold of nonlinearity of any of the unit cells has been surpassed. The procedure is based in performing a numerical derivation applying a perturbation technique. The tangent constitutive tensor is computed for each load increment and for each iteration of the analysis once the structure has entered in the non-linear range. The perturbation method was applied at the global and local scales in order to analyze the performance of the method at both scales. A simple average method of the constitutive tensors of the elements of the cell was also explored for comparison purposes. A parallelization process was implemented on the multi-domain homogenization method in order to speed-up the computational process due to the huge computational cost that the nested incremental-iterative solution embraces. The effect of softening in two-scale homogenization was investigated following a smeared cracked approach. Mesh objectivity was discussed first within the classical one-scale FE formulation and then the concepts exposed were extrapolated into the two-scale homogenization framework. The importance of the element characteristic length in a multi-scale analysis was highlighted in the computation of the specific dissipated energy when strain-softening occurs. Various examples were presented to evaluate and explore the capabilities of the computational approach developed in this research. 
Several aspects were studied, such as analyzing different composite arrangements that include different types of materials, composites that present softening after the yield point is reached (e.g. damage and plasticity) and composites with zones that present high strain gradients. The examples were carried out in composites with one and with several periodic domains using different unit cell configurations. The examples are compared to benchmark solutions obtained with the classical one-scale FE method.
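As a rough illustration of the perturbation technique described above for the homogenized tangent constitutive tensor, the sketch below numerically differentiates a stress-strain law column by column. It is a minimal sketch only: the linear-elastic law, the Voigt layout, and the step size are illustrative assumptions, not the thesis's actual two-scale formulation.

import numpy as np

def numerical_tangent(stress_fn, eps, h=1e-8):
    """Approximate the tangent tensor C[i, j] = d(sigma_i)/d(eps_j)
    by forward-difference perturbation of each strain component."""
    sigma0 = stress_fn(eps)
    C = np.zeros((sigma0.size, eps.size))
    for j in range(eps.size):
        eps_p = eps.copy()
        eps_p[j] += h                      # perturb one strain component
        C[:, j] = (stress_fn(eps_p) - sigma0) / h
    return C

# Check with a linear-elastic law in Voigt notation (plane strain):
# the recovered tangent should equal the elastic matrix itself.
E, nu = 200e9, 0.3
lam, mu = E*nu/((1+nu)*(1-2*nu)), E/(2*(1+nu))
D = np.array([[lam+2*mu, lam, 0.0],
              [lam, lam+2*mu, 0.0],
              [0.0, 0.0, mu]])
sigma = lambda e: D @ e
print(numerical_tangent(sigma, np.array([1e-3, -2e-4, 5e-4])))

In a nonlinear analysis the same perturbation loop is re-run at each load increment and iteration, exactly as the abstract describes, only with the homogenized unit-cell response in place of the closed-form law.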
APA, Harvard, Vancouver, ISO, and other styles
35

Samadiani, Emad. "Energy efficient thermal management of data centers via open multi-scale design." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/37218.

Full text
Abstract:
Data centers are computing infrastructure facilities that house arrays of electronic racks containing high-power data processing and storage equipment whose temperature must be maintained within allowable limits. In this research, the sustainable and reliable operation of the electronic equipment in data centers is shown to be possible through the Open Engineering Systems paradigm. A design approach is developed to bring adaptability and robustness, two main features of open systems, to multi-scale convective systems such as data centers. The presented approach is centered on the integration of three constructs: a) Proper Orthogonal Decomposition (POD) based multi-scale modeling, b) the compromise Decision Support Problem (cDSP), and c) robust design, which address, respectively, the challenges of thermal-fluid modeling, multiple design objectives, and inherent variability management. Two new POD-based reduced-order thermal modeling methods are presented to simulate the multi-parameter-dependent temperature field in multi-scale thermal/fluid systems such as data centers. The methods are verified and applied to achieve an adaptable, robust, and energy-efficient thermal design of an air-cooled data center cell whose power consumption increases annually over a ten-year horizon. Also, a simpler reduced-order modeling approach centered on the POD technique with modal coefficient interpolation is validated against experimental measurements in an operational data center facility.
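A minimal sketch of the POD-with-coefficient-interpolation idea mentioned in the last sentence, under stated assumptions: the synthetic sine "temperature fields", the parameter range, and the mode count are placeholders for the CFD snapshots and design parameters used in the dissertation.

import numpy as np

# Snapshot matrix: each column is a temperature field computed (e.g. by CFD)
# at one value of a design parameter such as rack heat load.
params = np.linspace(1.0, 2.0, 6)                     # observed parameter values
x = np.linspace(0.0, np.pi, 50)
fields = np.stack([np.sin(p * x) for p in params], axis=1)

# POD basis from the SVD of the mean-subtracted snapshots.
mean = fields.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)
k = 3                                                 # retained POD modes
Phi = U[:, :k]

# Modal coefficients of each snapshot, then interpolation to a new parameter.
coeffs = Phi.T @ (fields - mean)                      # shape (k, n_snapshots)
p_new = 1.35
a_new = np.array([np.interp(p_new, params, coeffs[i]) for i in range(k)])
T_new = mean[:, 0] + Phi @ a_new                      # predicted field at p_new
print("max error vs truth:", np.max(np.abs(T_new - np.sin(p_new * x))))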
APA, Harvard, Vancouver, ISO, and other styles
36

Cocle, Roger. "Combining the vortex-in-cell and parallel fast multipole methods for efficient domain decomposition simulations : DNS and LES approaches." Université catholique de Louvain, 2007. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-08172007-165806/.

Full text
Abstract:
This thesis is concerned with the numerical simulation of high Reynolds number, three-dimensional, incompressible flows in open domains. Many problems treated in Computational Fluid Dynamics (CFD) occur in free space: e.g., external aerodynamics past vehicles, bluff bodies, or aircraft, and shear flows such as shear layers or jets. These flows are typically unsteady, appear chaotic with a large range of eddies, and are dominated by convection. Lagrangian Vortex Element Methods (VEM) have long been shown to be particularly well suited to simulating such flows. In VEM, two approaches are classically used to solve the Poisson equation. The first is the Biot-Savart approach, where the Poisson equation is solved using Green's functions, so the unbounded domain is taken into account implicitly; in that case, Parallel Fast Multipole (PFM) solvers are usually used. The second is the Vortex-In-Cell (VIC) method, where the Poisson equation is solved on a grid using fast grid solvers, which requires imposing boundary conditions or assuming periodicity. An important difference is that fast grid solvers are much faster than fast multipole solvers. We here combine the two approaches, taking the advantages of each, and eventually obtain an efficient VIC-PFM method for incompressible flows in open domains. The major interest of this combination is its computational efficiency: compared to the PFM solver used alone, the VIC-PFM combination is 15 to 20 times faster. The second major advantage is the possibility to run Large Eddy Simulations (LES) at high Reynolds number: since part of the operations is done in an Eulerian way (i.e., on the VIC grid), all the existing subgrid-scale (SGS) models used in classical Eulerian codes, including the recent "multiscale" models, can be easily implemented.
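To make the "fast grid solver" step concrete, here is a minimal sketch of the periodic FFT-based Poisson solve a VIC code performs on its grid; the grid size, domain, and manufactured vorticity field are assumptions for illustration only, not the thesis's solver.

import numpy as np

def poisson_periodic_fft(omega, L=2*np.pi):
    """Solve -laplacian(psi) = omega on a doubly periodic grid with an FFT,
    the fast grid-solver step at the heart of a Vortex-In-Cell code."""
    n = omega.shape[0]
    k = np.fft.fftfreq(n, d=L/n) * 2*np.pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                        # avoid division by zero (mean mode)
    psi_hat = np.fft.fft2(omega) / k2
    psi_hat[0, 0] = 0.0                   # fix the arbitrary mean of psi
    return np.real(np.fft.ifft2(psi_hat))

# Verify with a manufactured vorticity field whose solution is known:
n, L = 64, 2*np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
psi_exact = np.sin(X) * np.cos(2*Y)
omega = 5.0 * psi_exact                   # since -lap(psi) = (1 + 4) psi here
print(np.max(np.abs(poisson_periodic_fft(omega) - psi_exact)))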
APA, Harvard, Vancouver, ISO, and other styles
37

Hileman, James Isaac. "Large-scale structures and noise generation in high-speed jets." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1078776079.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xxviii, 365 p.; also includes graphics. Includes bibliographical references (p. 269-279).
APA, Harvard, Vancouver, ISO, and other styles
38

Kuroiwa, Tatiana. "Application of modal decomposition and random decrement technique to ambient vibration measurement for detection of stiffness and damping change of a full-scale frame structure." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/124509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Samadi, Afshin. "Large Scale Solar Power Integration in Distribution Grids : PV Modelling, Voltage Support and Aggregation Studies." Doctoral thesis, KTH, Elektriska energisystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154602.

Full text
Abstract:
Long-term support schemes for photovoltaic (PV) system installation have led to large numbers of PV systems being accommodated within load pockets in distribution grids. High penetrations of PV systems can cause new technical challenges, such as voltage rise due to reverse power flow during light-load, high-generation conditions, so new strategies are required to address them. Moreover, owing to these changes in distribution grids, a different response behavior of the distribution grid as seen from the transmission side can be expected; hence a new equivalent model of distribution grids with high penetration of PV systems is needed for future power system studies. The thesis contributions lie in three parts. The first part deals with PV modelling: a non-proprietary PV model of a three-phase, single-stage PV system is developed in PSCAD/EMTDC and PowerFactory, three different reactive power regulation strategies are incorporated into the models, and their behavior is investigated in both simulation platforms using a distribution system with PV systems. In the second part, the voltage-rise problem is remedied by the use of reactive power. On the other hand, with large numbers of PV systems in grids, unnecessary reactive power consumption by PV systems first increases total line losses, and second may jeopardize the stability of the network in the case of contingencies in the conventional power plants that supply reactive power. The thesis therefore investigates and develops novel schemes to reduce reactive power flows while keeping the voltage within designated limits, via three different approaches: (i) decentralized voltage control to pre-defined set-points; (ii) a coordinated active-power-dependent (APD) voltage regulation Q(P) using local signals; and (iii) a multi-objective coordinated droop-based voltage (DBV) regulation Q(V) using local signals. In the third part, gray-box load modeling is used to develop a new static equivalent model of a complex distribution grid with large numbers of PV systems embedded with voltage support schemes; in the proposed model, variations of the voltage at the connection point drive the variations of the model's active and reactive power. This model can simply be integrated into load-flow programs and replace the complex distribution grid while still keeping the overall accuracy high. In conclusion, the thesis results demonstrate: i) rms-based simulations in PowerFactory provide results quite similar to those obtained with time-domain instantaneous values in the PSCAD platform; ii) decentralized voltage control to specific set-points through the PV systems in the distribution grid is fundamentally impossible due to the high level of voltage-control interaction and directionality among the PV systems; iii) the proposed APD method can regulate the voltage below the steady-state voltage limit while consuming less total reactive power than the standard characteristic Cosφ(P) proposed by the German Grid Codes; iv) the proposed optimized DBV method can directly address voltage and successfully regulate it to the upper steady-state voltage limit with minimum reactive power consumption as well as line losses; v) it is beneficial to treat PV systems as a separate entity when building equivalents of distribution grids with a high density of PV systems.
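As an illustration of the droop-based Q(V) idea in approach (iii), here is a minimal sketch of a piecewise-linear volt-var characteristic. The dead-band, upper limit, reactive-power cap, and sign convention (negative = absorption) are illustrative assumptions, not the regulation actually designed in the thesis.

import numpy as np

def q_droop(v_pu, v1=1.00, v2=1.05, q_max=0.3):
    """Piecewise-linear droop Q(V): no reactive power up to v1, then ramp
    linearly to full absorption q_max (underexcited) at the upper limit v2."""
    slope = -q_max / (v2 - v1)
    return np.clip(slope * (np.asarray(v_pu) - v1), -q_max, 0.0)

# Example: reactive-power set-points along a feeder voltage profile.
voltages = np.array([0.99, 1.01, 1.03, 1.05, 1.07])
print(q_droop(voltages))    # 0 below 1.00 pu, -0.3 pu at/above 1.05 pu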

The Doctoral Degrees issued upon completion of the programme are issued by Comillas Pontifical University, Delft University of Technology and KTH Royal Institute of Technology. The degrees awarded are official in Spain, the Netherlands and Sweden, respectively. QC 20141028

APA, Harvard, Vancouver, ISO, and other styles
40

Mestry, Siddharth D., Martha A. Centeno, Jose A. Faria, Purushothaman Damodaran, and Chen Chin-Sheng. "Branch and Price Solution Approach for Order Acceptance and Capacity Planning in Make-to-Order Operations." FIU Digital Commons, 2010. http://digitalcommons.fiu.edu/etd/145.

Full text
Abstract:
The increasing emphasis on mass customization, shortened product lifecycles, and synchronized supply chains, coupled with advances in information systems, is driving most firms towards make-to-order (MTO) operations. Increasing global competition, lower profit margins, and higher customer expectations force MTO firms to plan their capacity by managing the effective demand. The goal of this research was to maximize the operational profit of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity for them at the sales stage. To integrate the two decisions, a Mixed-Integer Linear Program (MILP) was formulated that can aid an operations manager in an MTO environment in selecting a set of potential customer orders such that all selected orders are fulfilled by their deadlines. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes, the computational time required to determine an optimal solution is prohibitive. The formulation inherits a block-diagonal structure and can be decomposed into one or more sub-problems (one sub-problem per customer order) and a master problem by applying Dantzig-Wolfe decomposition principles. To solve the original MILP efficiently, an exact Branch-and-Price algorithm was successfully developed, and various approximation algorithms were developed to further improve the runtime. Experiments conducted unequivocally show the efficiency of these algorithms compared to a commercial optimization solver. The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity, with an objective to maximize profit under a tardiness penalty. This dissertation solves the order acceptance and capacity planning problem for a job-shop environment with multiple resources, considering both regular and overtime resources. The Branch-and-Price algorithms developed in this dissertation are faster and can be incorporated in a decision support system, usable on a daily basis to help make intelligent decisions in an MTO operation.
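The Dantzig-Wolfe idea behind Branch-and-Price, pricing new columns against the duals of a restricted master problem, can be sketched on a classic cutting-stock toy. Everything below is an assumption for illustration: the generic column-generation loop, the data, and the use of SciPy's HiGHS duals; it is not the order-acceptance model of the dissertation.

import numpy as np
from scipy.optimize import linprog

W = 10                                   # resource capacity per pattern
w = np.array([3, 4, 5])                  # item sizes
d = np.array([30, 20, 10])               # demands
cols = [np.eye(3)[i] * (W // w[i]) for i in range(3)]   # trivial start patterns

for it in range(20):
    A = np.column_stack(cols)
    # Restricted master: min sum(x) s.t. A x >= d, x >= 0 (written as -A x <= -d).
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-d, method="highs")
    pi = -res.ineqlin.marginals          # duals of the covering constraints
    # Pricing subproblem: unbounded knapsack, maximize pi . a s.t. w . a <= W.
    dp = np.zeros(W + 1)
    take = [[0, 0, 0] for _ in range(W + 1)]
    for c in range(1, W + 1):
        dp[c], take[c] = dp[c - 1], take[c - 1][:]     # option: waste one unit
        for i in range(3):
            if w[i] <= c and dp[c - w[i]] + pi[i] > dp[c]:
                dp[c] = dp[c - w[i]] + pi[i]
                take[c] = take[c - w[i]][:]
                take[c][i] += 1
    if dp[W] <= 1.0 + 1e-9:              # no column with negative reduced cost
        break
    cols.append(np.array(take[W], dtype=float))

print("LP bound on patterns needed:", res.fun)

Branch-and-Price then embeds this pricing loop inside a branch-and-bound tree to recover integer solutions, which is where the dissertation's exact algorithm and its approximations come in.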
APA, Harvard, Vancouver, ISO, and other styles
41

Xie, Biancun. "Modeling and simulation of silicon interposers for 3-d integrated systems." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53922.

Full text
Abstract:
Three-dimensional (3-D) system integration is believed to be a promising technology and has recently gained tremendous momentum in the semiconductor industry. The silicon interposer is the key enabler for 3-D systems and is expected to carry high input/output counts, fine wiring lines, and many TSVs. Modeling and design of the silicon interposer is challenging and is becoming a critical task. This dissertation focuses on developing an efficient modeling approach for silicon interposers in 3-D systems. The developed numerical methods fall into several categories. 1. The investigation of coupling effects in large TSV arrays in silicon interposers: the importance of coupling between TSVs for low-resistivity silicon substrates is quantified in both the frequency and time domains and compared with high-resistivity silicon substrates. 2. The development of an electromagnetic modeling approach for non-uniform TSVs: an approach for modeling conical TSVs is proposed first, then a hybrid method combining the conical and cylindrical TSV models is proposed for non-uniform TSV structures. 3. The development of a hybrid modeling approach for power delivery networks (PDN) with through-silicon vias (TSVs): the proposed approach extends the multi-layer finite difference method (M-FDM) to include TSVs by extracting their parasitic behavior with an integral-equation-based solver. 4. The development of an efficient approach for modeling signal paths with TSVs in silicon interposers: the proposed method utilizes the 3-D finite-difference frequency-domain (FDFD) method to model the redistribution layer (RDL) transmission lines, and a new formulation for incorporating multiport networks into the 3-D FDFD system matrix is presented to include the parasitic effects of TSV arrays. 5. The development of a 3-D FDFD non-conformal domain decomposition method, which allows individual domains to be modeled independently with the FDFD method using non-matching meshes at the interfaces; this method is applied to model interconnections in a silicon interposer.
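For readers unfamiliar with FDFD, a one-dimensional scalar sketch conveys the basic idea of assembling a frequency-domain Helmholtz operator and solving it directly. The grid, wavenumber, permittivity profile, and Dirichlet truncation below are toy assumptions, far simpler than the dissertation's 3-D formulation with multiport TSV networks.

import numpy as np

# 1-D FDFD: solve d2E/dx2 + k0^2 er(x) E = -source on a uniform grid
# with Dirichlet ends, the scalar skeleton of the 3-D method.
n, L = 400, 1.0
h = L / (n + 1)
k0 = 2 * np.pi / 0.2                       # free-space wavenumber (lambda = 0.2)
er = np.ones(n); er[n//2:] = 4.0           # half the line is dielectric (er = 4)

A = np.zeros((n, n), dtype=complex)
i = np.arange(n)
A[i, i] = -2.0 / h**2 + k0**2 * er         # diagonal of the Helmholtz operator
A[i[:-1], i[:-1] + 1] = 1.0 / h**2         # upper neighbor coupling
A[i[1:], i[1:] - 1] = 1.0 / h**2           # lower neighbor coupling

b = np.zeros(n, dtype=complex)
b[n//4] = -1.0 / h                         # point source in the air region
E = np.linalg.solve(A, b)
print("field magnitude at the interface:", abs(E[n//2]))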
APA, Harvard, Vancouver, ISO, and other styles
42

Chodak, Jillian. "Pyrolysis and Hydrodynamics of Fluidized Bed Media." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/32920.

Full text
Abstract:
Interest in non-traditional fuel sources, carbon dioxide sequestration, and cleaner combustion has brought attention to gasification, particularly in fluidized beds, as a supplement to fossil-fueled energy. Developing tools and methods to predict the operation and performance of gasifiers will lead to more efficient gasifier designs. This research investigates bed fluidization and particle decomposition for fluidized materials. Experimental methods were developed to model the gravimetric and energetic response of thermally decomposing materials. Gravimetric, heat flow, and specific heat data were obtained from a simultaneous thermogravimetric analyzer (DSC/TGA), and a method was developed to combine these data in an energy balance and determine an optimized heat of decomposition. This method was effective for modeling simple reactions but not complex decomposition. An advanced method was therefore developed to model mass loss using kinetic reactions. The kinetic models were expanded to multiple reactions, an approach was developed to identify suitable multiple-reaction mechanisms, and a refinement method for improving the fit of the kinetic parameters was developed. The multiple reactions were combined with the energy balance, and heats of decomposition were determined for each reaction; this methodology can be extended to describe more complex thermal decomposition. The effects of particle density and diameter on the minimum fluidization velocity were investigated and the results compared to empirical models, and the effect of bed mass on the pressure drop through fluidized beds was studied. A method was developed to predict the hydrodynamic response of binary beds from the response of each particle type and mass; the resulting pressure drops of binary mixtures resembled a superposition of the behaviors of the individual particles.
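A minimal sketch of the kind of kinetic mass-loss modeling described here: a single first-order Arrhenius reaction integrated along a constant heating ramp and fitted to synthetic TGA conversion data. The heating rate, parameter values, and SciPy fitting routine are illustrative assumptions, not the thesis's multi-reaction mechanism or refinement method.

import numpy as np
from scipy.optimize import curve_fit

R, beta = 8.314, 10.0 / 60.0                     # gas constant; 10 K/min in K/s

def conversion(T, logA, E):
    """Integrate da/dt = A exp(-E/RT)(1 - a) along a constant heating
    ramp T(t) with explicit Euler, returning conversion a(T)."""
    a, out = 0.0, []
    for j in range(len(T)):
        if j > 0:
            dt = (T[j] - T[j - 1]) / beta        # time per temperature step
            a += 10.0**logA * np.exp(-E / (R * T[j])) * (1.0 - a) * dt
        out.append(min(a, 1.0))
    return np.array(out)

# Synthetic TGA data from a known mechanism, then recover its parameters.
T = np.linspace(450.0, 750.0, 300)               # temperature ramp, K
alpha_meas = conversion(T, 12.0, 180e3)          # "measured" conversion curve
popt, _ = curve_fit(conversion, T, alpha_meas, p0=[11.0, 160e3])
print("fitted log10(A), E [J/mol]:", popt)

The thesis's approach sums several such reactions, each with its own weight and heat of decomposition, and closes the DSC/TGA energy balance around them.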
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
43

Gacura, Matthew David. "Drivers of Fungal Community Composition and Function In Temperate Forests." Kent State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=kent1543579763552776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Butcher, Daniel S. A. "Influence of asymmetric valve timing strategy on in-cylinder flow of the internal combustion engine." Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/23327.

Full text
Abstract:
Variable Valve Timing (VVT) is a powerful tool in the relentless pursuit of efficiency improvements in the internal combustion engine. Because the valves exert such complete control over the gas exchange processes, extensive research in this area has shown how valve event timing can be manipulated to reduce engine pumping losses, fuel consumption, and engine-out emissions. Pumping losses may be significantly reduced by throttleless strategies that use intake valve duration for load control, while alternative cycles such as the Miller cycle allow modification of the effective compression ratio. More recently, the value of single-valve operation at part load has been exploited, bringing with it the concept of asymmetric valve lifts. Work in this area found that a side effect of asymmetric valve operation is a significant change in the behavior of the in-cylinder flow structures, velocities, and turbulence intensity. The work presented in this thesis exploits asymmetric valve strategies to modify the in-cylinder flow conditions. Proper Orthogonal Decomposition (POD) is a method employed in fluid dynamics to facilitate the separation of coherent motion structures from the turbulence. In the presented work, the application of POD to in-cylinder flow analysis is further developed by introducing a novel method for identifying the POD modes representative of coherent motion and those representative of turbulence: a POD mode correlation based technique is introduced and developed, with the resulting fields showing evidence of coherence and turbulence respectively. Experimental tests are carried out on a full-length optically accessible, single-cylinder research engine equipped with a fully variable valve train (FVVT) allowing full control of both valve timing and lift. The in-cylinder flow is measured by Particle Image Velocimetry (PIV) at several crank angle timings during the intake stroke while the engine is operated under a range of asymmetric valve strategies: the exhaust valves and one intake valve have their schedules fixed, while the second intake valve is adjusted to 80%, 60%, 40%, 20%, and 0% lift. The resulting PIV fields are separated into coherent motion and turbulence using the developed technique, allowing each constituent to be analyzed independently. The coherent element gives insight into large-scale flows, often of the order of magnitude of the cylinder. These structures not only give a clear indication of the overall motion and allow assessment of flow characteristics such as swirl and tumble ratio; the variation in their spatial location also provides additional insight into the cycle-to-cycle variation (CCV) of the flow, which would not otherwise be possible due to the inclusion of the turbulent data. Similarly, with the cyclic variation removed from the turbulent velocity field, a true account of the fluctuating velocity u' and derived values such as the Turbulent Kinetic Energy (TKE) may be gained. Results show how manipulating the timing of one intake valve can influence both the large-scale motions and the turbulence intensity. As lift is reduced, the swirl ratio increases almost linearly as the typical counter-rotating vortex pair becomes asymmetric, before a single vortex structure is observed in the lowest-lift cases. A switching mechanism between the two is identified and found to be responsible for increased levels of CCV.
With the reduction in lift, TKE is observed not only to increase but also to change the spatial distribution of turbulence. The reduction in valve lift does, of course, carry the penalty of a reduced valve curtain area; however, it was identified both in the literature and throughout this study that the reduction in lift did not negatively affect engine breathing, as the same trapped mass was achieved in all cases with no adjustment of manifold pressure. While the literature shows that both bulk motion and turbulence are key to liquid fuel break-up during the intake stroke, the mixing effects under port-injected natural gas were investigated experimentally using Laser Induced Fluorescence (LIF). The valve strategy was found to have no significant effect on the mixture distribution at the time of spark.
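A minimal sketch of the triple decomposition implied here: ensemble mean, POD-reconstructed coherent part, and turbulent residual from which TKE is computed. The synthetic velocity ensemble and the fixed mode-count cutoff are assumptions standing in for PIV data and for the thesis's correlation-based mode classification.

import numpy as np

# Ensemble of PIV velocity snapshots (one component): shape (cycles, points).
rng = np.random.default_rng(0)
n_cyc, n_pts = 200, 500
coherent = np.outer(np.sin(np.linspace(0, 2*np.pi, n_cyc)),
                    np.cos(np.linspace(0, 4*np.pi, n_pts)))
u = 5.0 + coherent + 0.3 * rng.standard_normal((n_cyc, n_pts))

mean = u.mean(axis=0)                         # ensemble average
U, s, Vt = np.linalg.svd(u - mean, full_matrices=False)
k = 2                                         # modes deemed coherent
coh = (U[:, :k] * s[:k]) @ Vt[:k]             # large-scale organized motion
turb = (u - mean) - coh                       # residual: turbulence estimate

tke = 0.5 * (turb**2).mean(axis=0)            # per-point TKE (one component)
print("mean TKE:", tke.mean(), "vs imposed 0.5*0.3^2 =", 0.5 * 0.3**2)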
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Zhu. "Reduced-Order Modeling of Complex Engineering and Geophysical Flows: Analysis and Computations." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/27504.

Full text
Abstract:
Reduced-order models are frequently used in the simulation of complex flows to overcome the high computational cost of direct numerical simulations, especially for three-dimensional nonlinear problems. Proper orthogonal decomposition, as one of the most commonly used tools to generate reduced-order models, has been utilized in many engineering and scientific applications. Its original promise of computationally efficient, yet accurate approximation of coherent structures in high Reynolds number turbulent flows, however, still remains to be fulfilled. To balance the low computational cost required by reduced-order modeling and the complexity of the targeted flows, appropriate closure modeling strategies need to be employed. In this dissertation, we put forth two new closure models for the proper orthogonal decomposition reduced-order modeling of structurally dominated turbulent flows: the dynamic subgrid-scale model and the variational multiscale model. These models, which are considered state-of-the-art in large eddy simulation, are carefully derived and numerically investigated. Since modern closure models for turbulent flows generally have non-polynomial nonlinearities, their efficient numerical discretization within a proper orthogonal decomposition framework is challenging. This dissertation proposes a two-level method for an efficient and accurate numerical discretization of general nonlinear proper orthogonal decomposition closure models. This method computes the nonlinear terms of the reduced-order model on a coarse mesh. Compared with a brute force computational approach in which the nonlinear terms are evaluated on the fine mesh at each time step, the two-level method attains the same level of accuracy while dramatically reducing the computational cost. We numerically illustrate these improvements in the two-level method by using it in three settings: the one-dimensional Burgers equation with a small diffusion parameter, a two-dimensional flow past a cylinder at Reynolds number Re = 200, and a three-dimensional flow past a cylinder at Reynolds number Re = 1000. With the help of the two-level algorithm, the new nonlinear proper orthogonal decomposition closure models (i.e., the dynamic subgrid-scale model and the variational multiscale model), together with the mixing length and the Smagorinsky closure models, are tested in the numerical simulation of a three-dimensional turbulent flow past a cylinder at Re = 1000. Five criteria are used to judge the performance of the proper orthogonal decomposition reduced-order models: the kinetic energy spectrum, the mean velocity, the Reynolds stresses, the root mean square values of the velocity fluctuations, and the time evolution of the proper orthogonal decomposition basis coefficients. All the numerical results are benchmarked against a direct numerical simulation. Based on these numerical results, we conclude that the dynamic subgrid-scale and the variational multiscale models are the most accurate. We present a rigorous numerical analysis for the discretization of the new models. As a first step, we derive an error estimate for the time discretization of the Smagorinsky proper orthogonal decomposition reduced-order model for the Burgers equation with a small diffusion parameter. The theoretical analysis is numerically verified by two tests on problems displaying shock-like phenomena. 
We then present a thorough numerical analysis for the finite element discretization of the variational multiscale proper orthogonal decomposition reduced-order model for convection-dominated convection-diffusion-reaction equations. Numerical tests show the increased numerical accuracy over the standard reduced-order model and illustrate the theoretical convergence rates. We also discuss the use of the new reduced-order models in realistic applications such as airflow simulation in energy efficient building design and control problems as well as numerical simulation of large-scale ocean motions in climate modeling. Several research directions that we plan to pursue in the future are outlined.
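The two-level trick, evaluating the ROM's nonlinear term on a coarse mesh and rescaling the quadrature, can be sketched in a few lines. The sine basis, the sample non-polynomial nonlinearity, and the uniform-grid rescaling below are illustrative assumptions, not the dissertation's algorithm or closure models.

import numpy as np

# Fine grid, a POD-like basis, and a non-polynomial nonlinearity f(u).
Nf, r, stride = 1024, 4, 16                  # coarse mesh has Nf/stride points
x = np.linspace(0.0, 1.0, Nf)
Phi = np.stack([np.sin((i + 1) * np.pi * x) for i in range(r)], axis=1)
Phi *= np.sqrt(2.0 / Nf)                     # approximately orthonormal columns
a = np.array([1.0, -0.5, 0.2, 0.1])          # current ROM coefficients
f = lambda u: u / (1.0 + u**2)               # sample non-polynomial closure term

# Brute force: reconstruct and project on the FINE mesh every time step.
fine = Phi.T @ f(Phi @ a)

# Two-level idea: evaluate f only on the COARSE mesh, rescale the quadrature.
idx = np.arange(0, Nf, stride)
coarse = stride * (Phi[idx].T @ f(Phi[idx] @ a))

print("fine-mesh projection :", fine)
print("coarse-mesh estimate :", coarse)      # close, at 1/16 the f-evaluations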
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
46

Carpier, Yann. "Contribution à l’analyse multi-échelles et multi-physiques du comportement mécanique de matériaux composites à matrice thermoplastique sous températures critiques." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR28/document.

Full text
Abstract:
L’utilisation croissante des matériaux composites à matrice thermoplastique dans l’industrie aéronautique passe par une meilleure compréhension de leur comportement mécanique lors d’une exposition à un flux rayonnant (conséquence d’un incendie). Cette étude, portant sur le comportement thermo-mécanique de stratifiés tissés quasi-isotropes composés d’une matrice PPS renforcée par des fibres de carbone, se divise en 3 parties. Tout d’abord, la décomposition thermique du matériau et l’évolution de ses propriétés mécaniques avec la température sont étudiées. Ces données permettent ensuite d’appréhender le comportement de ces matériaux soumis à des chargements combinés (flux rayonnant et chargement mécanique en traction ou en compression, de type monotone à rupture et en fluage). La dernière partie vise à identifier les paramètres matériau nécessaires pour la simulation thermo-mécanique aux échelles macroscopique et mésoscopique
The increasing use of thermoplastic-based composite materials in the aeronautical industry requires a better understanding of their mechanical behavior when exposed to radiant heat flux (consequence of a fire exposure). This study, which examines the thermo-mechanical behavior of quasi-isotropic woven laminates composed of PPS reinforced with carbon fibers, is divided into 3 parts. First, the thermal decomposition of the material and the evolution of its mechanical properties with temperature is studied. These data help to understand the behavior of these materials subjected to combined loads (radiant heat flux and tensile or compressive loadings). The last part aims to identify the material parameters necessary for thermo-mechanical simulation at macroscopic and mesoscopic scales
APA, Harvard, Vancouver, ISO, and other styles
47

Lynch-Aird, Jeanne Elizabeth. "Estimation of post-mortem interval using decomposition scales for hanging bodies." Thesis, University of Central Lancashire, 2016. http://clok.uclan.ac.uk/16582/.

Full text
Abstract:
The extent of decomposition of a body can be used, in conjunction with accumulated degree days (ADD), to provide an estimate of the post-mortem interval (PMI). PMI estimates are important in helping police narrow down the possible identity of a body and include or exclude suspects, and also in establishing the order of death for inheritance purposes when two or more potential beneficiaries die at around the same time. Previous studies have shown the decomposition pattern of hanging bodies to differ from that of a body on the ground, but the sample sizes used have been small. This study presents the results of a series of decomposition studies on hanging bodies in a variety of situations: clothed and unclothed, and fully or partially suspended. The study used domestic pigs (Sus scrofa), which enabled sample sizes large enough for statistical robustness; pigs lying on the ground were used as controls. The pattern of decomposition in hanging pigs was found to differ sufficiently from that of pigs lying on the ground to require the creation of a novel decomposition scoring scale, which was used successfully to score both clothed and unclothed fully suspended bodies, as well as the upper, suspended part of partially suspended bodies. The presence of loose, lightweight clothing, which did not impede insect access, was found to affect both the pattern and the rate of decomposition in hanging pigs, with clothed bodies decomposing faster than unclothed ones (p < 0.05, F(2,477) = 1238). The variation in the start weights of the pigs was found to have a statistically significant effect on the rate of decomposition for both the hanging bodies and those on the ground (p < 0.05, F(5,714) = 1962), but the effect was so small as to make no practical difference across the range of start weights encountered. The effect of start-weight variation may be of greater concern, however, when scoring very heavy, obese bodies, and may be exacerbated by the increased fat-to-muscle ratios encountered in such bodies. Finally, a set of ADD prediction tables was produced for the hanging and surface pigs. Further work is needed to establish to what extent these tables can be used for humans and, in light of the growing obesity problem in humans, to investigate the effect of weight and increased fat-to-muscle ratios on the pattern and rate of decomposition.
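Since the abstract leans on accumulated degree days, here is a minimal worked sketch of the ADD bookkeeping. The temperatures, the 0 °C base, and the target ADD for a given decomposition score are illustrative assumptions, not values from the study's prediction tables.

# Accumulated degree days (ADD): the running sum of daily average
# temperatures (degrees C above a 0 C base), used to map a decomposition
# score to an estimated post-mortem interval (PMI).
daily_mean_temp = [14.5, 16.0, 15.2, 13.8, 17.1, 18.3, 16.9]  # deg C, base 0

add = 0.0
for day, t in enumerate(daily_mean_temp, start=1):
    add += max(t, 0.0)          # days below the base contribute nothing
    print(f"day {day}: ADD = {add:.1f}")

# If a scoring table said the observed decomposition stage needs ~95 ADD
# (hypothetical value), the PMI in days is roughly that target divided
# by the local mean daily temperature.
target_add = 95.0
print("estimated PMI:", target_add / (add / len(daily_mean_temp)), "days")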
APA, Harvard, Vancouver, ISO, and other styles
48

Malek, Mohamed. "Extension de l'analyse multi-résolution aux images couleurs par transformées sur graphes." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2304/document.

Full text
Abstract:
In this work, we study the extension of multi-resolution analysis to color images using transforms on graphs, deploying three different analysis strategies. First, we define a transform based on a perceptual graph, analyzed through the spectral graph wavelet transform (SGWT): the graph is built from psychovisual information using the CIELab color distance, and results in image denoising highlight the value of incorporating the human visual system (HVS) in the analysis of color images. Second, we propose a novel inpainting method for color images. Motivated by the efficiency of wavelet regularization schemes and the success of non-local means methods, we construct a regularization scheme over the SGWT wavelet coefficients in which, at each step, the missing structure is estimated by building a non-local graph of color patches and applying the graph wavelet regularization model to the SGWT coefficients. The results are very encouraging and again demonstrate the importance of perceptual information. Third, we propose a new approach for decomposing a signal defined on a complete graph, based on the properties of the Laplacian matrix associated with the complete graph. In the context of color images, taking the color dimension into account is essential to identify the singularities of the image. This last approach opens new perspectives for an in-depth study of its behavior.
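A minimal sketch of the graph-spectral machinery underlying the SGWT used in this work: build a graph Laplacian, take its eigendecomposition as a graph Fourier basis, and apply a spectral kernel to a signal. The toy adjacency matrix and the heat-like low-pass kernel are assumptions standing in for the perceptual (CIELab-distance) graph and the wavelet kernel bank.

import numpy as np

# Small weighted graph (e.g., pixels/patches linked by a color distance):
# spectral analysis uses the eigenvectors of the graph Laplacian L = D - W.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)                   # graph frequencies / Fourier basis

signal = np.array([0.9, 1.1, 1.0, 5.0])      # one channel of a graph signal
s_hat = U.T @ signal                         # graph Fourier transform
g = np.exp(-2.0 * lam)                       # low-pass spectral kernel
smoothed = U @ (g * s_hat)                   # filtered (denoised) signal
print(smoothed)

The SGWT replaces the single kernel g with a bank of band-pass kernels at several scales, yielding the wavelet coefficients that the inpainting scheme regularizes.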
APA, Harvard, Vancouver, ISO, and other styles
49

Xie, Jianyong. "Electrical-thermal modeling and simulation for three-dimensional integrated systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50307.

Full text
Abstract:
The continuous miniaturization of electronic systems using the three-dimensional (3D) integration technique has brought new challenges for the computer-aided design and modeling of 3D integrated circuits (ICs) and systems. The major challenges for the modeling and analysis of 3D integrated systems stem from four aspects: (a) the interaction between the electrical and thermal domains in an integrated system, (b) the increasing modeling complexity of 3D systems, which requires multiscale techniques for the modeling and analysis of DC voltage drop, thermal gradients, and electromagnetic behavior, (c) the efficient modeling of microfluidic cooling, and (d) the demand for fast thermal simulation under varying design parameters. Addressing these challenges for the electrical/thermal modeling and analysis of 3D systems necessitates the development of novel numerical modeling methods. This dissertation focuses on developing efficient electrical and thermal numerical modeling and co-simulation methods for 3D integrated systems. The developed numerical methods can be classified into three categories. The first category investigates the interaction between electrical and thermal characteristics of power delivery networks (PDNs) in steady state, and the thermal effect on the characteristics of through-silicon via (TSV) arrays at high frequencies: the steady-state electrical-thermal interaction for PDNs is addressed by a voltage drop-thermal co-simulation method, while the thermal effect on TSV characteristics is studied with a proposed thermal-electrical analysis approach for TSV arrays. The second category develops multiscale modeling approaches for voltage drop and thermal analysis. A multiscale method based on the finite-element non-conformal domain decomposition technique has been developed for the voltage drop and thermal analysis of 3D systems; it allows a 3D multiscale system to be modeled with independent mesh grids in sub-domains, so the system unknowns can be greatly reduced. In addition, to improve simulation efficiency, the cascadic multigrid solving approach has been adopted for voltage drop-thermal co-simulation with large numbers of unknowns. The last category develops fast thermal simulation methods using compact models and model order reduction (MOR). To avoid the cost of full computational fluid dynamics simulation, a finite-volume compact thermal model has been developed for microchannel-based fluidic cooling, enabling fast thermal simulation of 3D ICs with many microchannels for early-stage design. In addition, a system-level thermal modeling method using domain decomposition and model order reduction is developed for both steady-state and transient thermal analysis; the proposed approach can efficiently support thermal modeling with varying design parameters without resorting to parameterized MOR techniques.
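To illustrate the projection-based MOR idea invoked in the last category, here is a minimal sketch that reduces a linear thermal network by Galerkin projection onto a small moment-matching basis and time-steps both models. The network, basis size, and time-stepping scheme are illustrative assumptions, not the dissertation's domain-decomposition formulation.

import numpy as np

# Full-order linear thermal network: C dT/dt = -G T + b u, with u = 1 (step).
n, r = 200, 8
G = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))        # conductance (stiffness) matrix
C = np.eye(n)                              # unit heat capacities
b = np.zeros(n); b[0] = 1.0                # power injected at one node

# Projection basis from moments G^{-1}b, G^{-2}b, ... (Krylov flavour),
# then Galerkin projection of all system matrices.
V = np.zeros((n, r)); v = b.copy()
for k in range(r):
    v = np.linalg.solve(G, v)
    V[:, k] = v
V, _ = np.linalg.qr(V)
Gr, Cr, br = V.T @ G @ V, V.T @ C @ V, V.T @ b

# Implicit Euler on both models; the reduced solve is r x r instead of n x n.
dt, steps = 0.05, 200
T, Tr = np.zeros(n), np.zeros(r)
A_full = np.linalg.inv(C + dt * G)
A_red = np.linalg.inv(Cr + dt * Gr)
for _ in range(steps):
    T = A_full @ (T + dt * b)              # C = I here, so C @ T is just T
    Tr = A_red @ (Cr @ Tr + dt * br)
print("source-node temperature, full vs reduced:", T[0], (V @ Tr)[0])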
APA, Harvard, Vancouver, ISO, and other styles
50

Malta, Edgard Borges. "Investigação experimental das vibrações induzidas pela emissão de vórtices em catenárias sujeitas a perfis de correnteza variável, ortogonais ao plano de lançamento." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/3/3135/tde-30012016-074321/.

Full text
Abstract:
The experimental investigation of vortex-induced vibrations (VIV) on long flexible models deployed in a catenary configuration and subjected to non-uniform current profiles, orthogonal to the catenary plane, is the main subject of this doctoral thesis. The goal is to identify, describe, and discuss the dynamic behaviors of VIV on catenaries under depth-varying current, comparing them with those exhibited by the same phenomenon acting on a vertical flexible cylinder, and thereby to contribute to a more general understanding of VIV in the offshore lines used in oil and gas exploration at sea. In this context, much is known about VIV on rigid cylinders mounted on elastic supports and on straight flexible cylinders, both arrangements with a vast literature describing not only the dynamic behaviors but also their relationship with the underlying fluid mechanics. For long flexible cylinders, however, several points still require deeper understanding, particularly those closest to real offshore operation: the catenary configuration under a depth-varying, orthogonal current profile. To this end, a device called a rotating arm was designed and built so that experiments with catenary models under depth-varying current could be conducted in the physical towing basin of the Numerical Offshore Tank (TPN) laboratory, an unprecedented line of research on this topic. Different catenary geometries were tested, all compared with a vertical flexible cylinder tested both on the rotating arm and in the towing tank of the Institute for Technological Research of the State of São Paulo (IPT), Brazil. Processed with a modal decomposition technique, the displacement results along the models under these test conditions allowed the identification and description of dynamic behaviors in the catenaries quite similar to those of the vertical model, considerably broadening the understanding of VIV in flexible lines in general. As an additional contribution, this work provided the TPN model basin with a versatile device for a wide range of investigations of fluid-structure interaction: the rotating arm.
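A minimal sketch of the modal decomposition step used to process the displacement measurements: project sensor data onto analytical mode shapes by least squares. The pinned-pinned sine shapes, sensor count, and synthetic data are illustrative assumptions; a catenary's actual mode shapes differ and the thesis identifies modal content from the measured responses.

import numpy as np

# Displacements measured at sensor positions along a flexible model are
# projected onto sine mode shapes to recover the modal amplitudes excited
# by VIV.
n_sensors, n_modes, L = 40, 5, 1.0
z = np.linspace(0, L, n_sensors)
modes = np.stack([np.sin((k + 1) * np.pi * z / L) for k in range(n_modes)],
                 axis=1)

true_amp = np.array([0.0, 0.8, 0.0, 0.3, 0.0])       # mostly 2nd and 4th mode
y = modes @ true_amp + 0.01 * np.random.default_rng(2).standard_normal(n_sensors)

amp, *_ = np.linalg.lstsq(modes, y, rcond=None)      # least-squares projection
print("recovered modal amplitudes:", np.round(amp, 3))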
APA, Harvard, Vancouver, ISO, and other styles