Dissertations / Theses on the topic 'MCMC optimization'


Consult the top 16 dissertations / theses for your research on the topic 'MCMC optimization.'


1

Mahendran, Nimalan. "Bayesian optimization for adaptive MCMC." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/30636.

Full text
Abstract:
A new randomized strategy for adaptive Markov chain Monte Carlo (MCMC) using Bayesian optimization, called Bayesian-optimized MCMC, is proposed. This approach can handle non-differentiable objective functions and trades off exploration and exploitation to reduce the number of function evaluations. Bayesian-optimized MCMC is applied to the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters of the proposal mechanism automatically to ensure efficient mixing of the Markov chains. It is found that Bayesian-optimized MCMC is able to match or surpass manual tuning of the proposal mechanism by a domain expert.
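The core loop described above — a surrogate model plus an acquisition rule that trades off exploration and exploitation when choosing the next evaluation — can be sketched in a few lines. The Gaussian-process kernel, the UCB acquisition, and the quadratic "mixing score" objective below are all illustrative assumptions, not the thesis's actual choices:

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    # Squared-exponential kernel with unit variance
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(xs, ys, xq, noise=1e-6):
    # Standard GP regression equations via a Cholesky factorization
    K = rbf_kernel(xs, xs) + noise * np.eye(len(xs))
    Ks = rbf_kernel(xq, xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ys))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - (v ** 2).sum(axis=0)   # k(x, x) = 1 for this kernel
    return mean, np.maximum(var, 1e-12)

def bayes_opt(f, lo, hi, n_init=4, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    xs = rng.uniform(lo, hi, size=n_init)
    ys = np.array([f(x) for x in xs])
    grid = np.linspace(lo, hi, 200)
    for _ in range(n_iter):
        mean, var = gp_posterior(xs, ys, grid)
        ucb = mean + 2.0 * np.sqrt(var)   # upper confidence bound acquisition
        x_next = grid[np.argmax(ucb)]
        xs = np.append(xs, x_next)
        ys = np.append(ys, f(x_next))
    return xs[np.argmax(ys)]

# Hypothetical tuning objective: a proxy "mixing score" peaking at step size 0.6
best = bayes_opt(lambda s: -(s - 0.6) ** 2, 0.0, 2.0)
```

Because each evaluation reuses all past observations through the surrogate, far fewer objective evaluations are needed than with grid or random search.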
APA, Harvard, Vancouver, ISO, and other styles
2

Karimi, Belhal. "Non-Convex Optimization for Latent Data Models : Algorithms, Analysis and Applications." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.

Abstract:
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks and sparse logistic regression. Optimization methods used to solve these problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements to non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and by improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new variants aiming at speeding up the convergence of the estimated parameters.
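The EM algorithm discussed in the second part alternates closed-form E- and M-steps; a minimal sketch for a two-component one-dimensional Gaussian mixture (initialization at the data extremes and the iteration count are arbitrary illustrative choices):

```python
import numpy as np

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm_two(x, n_iter=60):
    # Crude but common initialization: component means at the data extremes
    w = 0.5
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each point
        p0 = w * normal_pdf(x, mu[0], var[0])
        p1 = (1 - w) * normal_pdf(x, mu[1], var[1])
        r = p0 / (p0 + p1)
        # M-step: closed-form weighted updates
        w = r.mean()
        mu = np.array([(r * x).sum() / r.sum(),
                       ((1 - r) * x).sum() / (1 - r).sum()])
        var = np.array([(r * (x - mu[0]) ** 2).sum() / r.sum(),
                        ((1 - r) * (x - mu[1]) ** 2).sum() / (1 - r).sum()])
    return w, mu, var

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3.0, 0.5, 300), rng.normal(3.0, 0.5, 300)])
w, mu, var = em_gmm_two(data)
```

The stochastic and incremental variants studied in the thesis replace the full-data E-step above with updates over mini-batches or single samples.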
3

Chaari, Lotfi. "Parallel magnetic resonance imaging reconstruction problems using wavelet representations." Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00587410.

Abstract:
To reduce acquisition time or improve spatio-temporal resolution in some MRI applications, powerful parallel techniques using multiple receiver coils have emerged since the 1990s. In this context, MRI images must be reconstructed from undersampled data acquired in k-space. Several reconstruction approaches have therefore been proposed, including the SENSitivity Encoding (SENSE) method. However, the reconstructed images are often corrupted by artifacts due to noise affecting the observed data, or to errors in estimating the coil sensitivity profiles. In this work, we present new reconstruction methods based on the SENSE algorithm that introduce regularization in the wavelet transform domain in order to promote sparsity of the solution. Under degraded experimental conditions, these methods achieve good reconstruction quality, in contrast to the SENSE method and other classical regularization techniques (e.g., Tikhonov). The proposed methods rely on parallel optimization algorithms able to handle convex but not necessarily differentiable criteria containing sparsity-promoting priors. Unlike most reconstruction methods, which operate slice by slice, one of the proposed methods enables 4D (3D + time) reconstruction by exploiting spatial and temporal correlations. The hyperparameter estimation problem underlying the regularization process has also been addressed in a Bayesian framework using MCMC techniques. Validation on real anatomical and functional data shows that the proposed methods reduce reconstruction artifacts and improve statistical sensitivity/specificity in functional MRI.
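Sparsity-promoting wavelet regularization of this kind is typically enforced through the soft-thresholding proximal operator of the l1 norm; a minimal sketch (the threshold value below is arbitrary):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrink each coefficient toward zero
    by t, setting small coefficients exactly to zero (sparsity)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

out = soft_threshold(np.array([-2.0, 0.5, 3.0]), 1.0)
```

In a proximal reconstruction algorithm, this step is applied to the wavelet coefficients between data-fidelity gradient steps.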
4

Park, Jee Hyuk. "On the separation of preferences among marked point process wager alternatives." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2757.

5

Bardenet, Rémi. "Towards adaptive learning and inference : applications to hyperparameter tuning and astroparticle physics." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112307.

Abstract:
Inference and optimization algorithms usually have hyperparameters that need to be tuned in order to achieve efficiency. We consider here different approaches to efficiently automate the hyperparameter tuning step by learning online the structure of the addressed problem. The first half of this thesis is devoted to hyperparameter tuning in machine learning. After presenting and improving the generic sequential model-based optimization (SMBO) framework, we show that SMBO successfully applies to the task of tuning the numerous hyperparameters of deep belief networks. We then propose an algorithm that performs tuning across datasets, mimicking the memory that humans have of past experiments with the same algorithm on different datasets. The second half of this thesis deals with adaptive Markov chain Monte Carlo (MCMC) algorithms, sampling-based algorithms that explore complex probability distributions while self-tuning their internal parameters on the fly. We start by describing the Pierre Auger observatory, a large-scale particle physics experiment dedicated to the observation of atmospheric showers triggered by cosmic rays. The models involved in the analysis of Auger data motivated our study of adaptive MCMC. We derive the first part of the Auger generative model and introduce a procedure to perform inference on shower parameters that requires only this bottom part. Our model inherently suffers from label switching, a common difficulty in MCMC inference, which makes marginal inference useless because of redundant modes of the target distribution. After reviewing existing solutions to label switching, we propose AMOR, the first adaptive MCMC algorithm with online relabeling. We give both an empirical and theoretical study of AMOR, unveiling interesting links between relabeling algorithms and vector quantization.
6

Cheng, Yougan. "Computational Models of Brain Energy Metabolism at Different Scales." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1396534897.

7

Thouvenin, Pierre-Antoine. "Modeling spatial and temporal variabilities in hyperspectral image unmixing." Phd thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19258/1/THOUVENIN_PierreAntoine.pdf.

Abstract:
Acquired in hundreds of contiguous spectral bands, hyperspectral (HS) images have received an increasing interest due to the significant spectral information they convey about the materials present in a given scene. However, the limited spatial resolution of hyperspectral sensors implies that the observations are mixtures of multiple signatures corresponding to distinct materials. Hyperspectral unmixing is aimed at identifying the reference spectral signatures composing the data -- referred to as endmembers -- and their relative proportion in each pixel according to a predefined mixture model. In this context, a given material is commonly assumed to be represented by a single spectral signature. This assumption shows a first limitation, since endmembers may vary locally within a single image, or from an image to another due to varying acquisition conditions, such as declivity and possibly complex interactions between the incident light and the observed materials. Unless properly accounted for, spectral variability can have a significant impact on the shape and the amplitude of the acquired signatures, thus inducing possibly significant estimation errors during the unmixing process. A second limitation results from the significant size of HS data, which may preclude the use of batch estimation procedures commonly used in the literature, i.e., techniques exploiting all the available data at once. Such computational considerations notably become prominent to characterize endmember variability in multi-temporal HS (MTHS) images, i.e., sequences of HS images acquired over the same area at different time instants. The main objective of this thesis consists in introducing new models and unmixing procedures to account for spatial and temporal endmember variability. Endmember variability is addressed by considering an explicit variability model reminiscent of the total least squares problem, and later extended to account for time-varying signatures. 
The variability is first estimated using an unsupervised deterministic optimization procedure based on the Alternating Direction Method of Multipliers (ADMM). Given the sensitivity of this approach to abrupt spectral variations, a robust model formulated within a Bayesian framework is introduced. This formulation enables smooth spectral variations to be described in terms of spectral variability, and abrupt changes in terms of outliers. Finally, the computational restrictions induced by the size of the data are tackled by an online estimation algorithm. This work further investigates an asynchronous distributed estimation procedure to estimate the parameters of the proposed models.
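As a far simpler baseline than the ADMM and Bayesian procedures developed in the thesis, abundances under the plain linear mixing model can be estimated by projected gradient descent with a nonnegativity constraint. The endmember matrix, step size, and iteration count below are toy assumptions, and the sum-to-one constraint is omitted for brevity:

```python
import numpy as np

def unmix_abundances(y, E, n_iter=1000, lr=0.2):
    """Projected gradient for min_a ||y - E a||^2 subject to a >= 0."""
    a = np.full(E.shape[1], 1.0 / E.shape[1])  # uniform start
    for _ in range(n_iter):
        grad = E.T @ (E @ a - y)
        a = np.maximum(a - lr * grad, 0.0)     # gradient step + nonnegativity
    return a

# Toy scene: 4 spectral bands, 2 endmembers, one synthetic mixed pixel
E = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
y = E @ np.array([0.3, 0.7])
a_hat = unmix_abundances(y, E)
```

Spectral variability, the focus of the thesis, would replace the fixed matrix E above by signatures that change per pixel or per acquisition.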
8

Diabaté, Modibo. "Modélisation stochastique et estimation de la croissance tumorale." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM040.

Abstract:
This thesis is about mathematical modeling of cancer dynamics; it is divided into two research projects. In the first project, we estimate the parameters of the deterministic limit of a stochastic process modeling the dynamics of melanoma (skin cancer) treated by immunotherapy. The estimation is carried out with a nonlinear mixed-effect statistical model and the SAEM algorithm, using real data of tumor size. With this mathematical model that fits the data well, we evaluate the relapse probability of melanoma (using the Importance Splitting algorithm), and we optimize the treatment protocol (doses and injection times). In the second project, we propose a likelihood approximation method based on an approximation of the Belief Propagation algorithm by the Expectation-Propagation algorithm, for a diffusion approximation of the melanoma stochastic model, noisily observed in a single individual. Since this diffusion approximation (defined by a stochastic differential equation) has no analytical solution, we approximate its solution using an Euler method (after testing the Euler method on the Ornstein-Uhlenbeck diffusion process). Moreover, a moment approximation method is used to manage the multidimensionality and the non-linearity of the melanoma mathematical model. With the likelihood approximation method, we tackle the problem of parameter estimation in hidden Markov models.
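The Euler (Euler-Maruyama) scheme mentioned above, tested here on the Ornstein-Uhlenbeck process as in the thesis, can be sketched as follows (all parameter values are arbitrary illustrative choices):

```python
import numpy as np

def euler_maruyama_ou(theta, mu, sigma, x0, dt, n_steps, rng):
    """Simulate dX = theta * (mu - X) dt + sigma dW by Euler-Maruyama."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
path = euler_maruyama_ou(theta=2.0, mu=1.0, sigma=0.3,
                         x0=0.0, dt=0.01, n_steps=2000, rng=rng)
```

The OU process is a standard test case because its transition density is known in closed form, so the discretization error of the scheme can be checked exactly.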
9

Kim, Tae Seon. "Modeling, optimization, and control of via formation by photosensitive polymers for MCM-D applications." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15017.

10

Al-Hasani, Firas Ali Jawad. "Multiple Constant Multiplication Optimization Using Common Subexpression Elimination and Redundant Numbers." Thesis, University of Canterbury. Electrical and Computer Engineering, 2014. http://hdl.handle.net/10092/9054.

Abstract:
The multiple constant multiplication (MCM) operation is a fundamental operation in digital signal processing (DSP) and digital image processing (DIP). Examples of the MCM operation occur in finite impulse response (FIR) and infinite impulse response (IIR) filters, matrix multiplication, and transforms. The aim of this work is to minimize the complexity of the MCM operation using the common subexpression elimination (CSE) technique and redundant number representations. The CSE technique searches for and eliminates common digit patterns (subexpressions) among MCM coefficients. More common subexpressions can be found by representing the MCM coefficients using redundant number representations. A CSE algorithm is proposed that works on a type of redundant numbers called the zero-dominant set (ZDS). The ZDS is an extension of the minimum-Hamming-weight (MHW) representations, which have the minimum number of non-zero digits. Using the ZDS improves CSE algorithms' performance compared with using the MHW representations. The disadvantage of using the ZDS is that it increases the possibility of overlapping patterns (digit collisions), where one or more digits are shared between a number of patterns. Eliminating one pattern then results in losing the other patterns, because the shared digits are eliminated with it. A pattern preservation algorithm (PPA) is developed to resolve the overlapping patterns in the representations. Tree and graph encoders are proposed to generate a larger space of number representations. The algorithms generate redundant representations of a value for a given digit set, radix, and wordlength. The tree encoder is modified to search for common subexpressions simultaneously with the generation of the representation tree. A complexity measure is proposed to compare the subexpressions at each node. The algorithm stops generating the rest of the representation tree when it finds subexpressions with maximum sharing.
This reduces the search space while minimizing the hardware complexity. A combinatoric model of the MCM problem is also proposed in this work. The model is obtained by enumerating all the possible solutions of the MCM problem in a graph called the demand graph. Arc routing on this graph gives the solutions of the MCM problem. Similar arc routing is found in capacitated arc routing problems such as the winter salting problem. An ant colony optimization (ACO) meta-heuristic is proposed to traverse the demand graph. The ACO is simulated on a PC using the Python programming language to verify the correctness of the model and the operation of the ACO. A parallel simulation of the ACO is carried out on a multi-core supercomputer using the C++ Boost Graph Library.
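A toy illustration of the pattern-search step in CSE, using plain binary rather than the thesis's redundant ZDS representations: count how often each spacing between two nonzero bits recurs across the coefficient set, since a recurring spacing corresponds to a shared add-and-shift subexpression.

```python
from collections import Counter

def shared_pattern_counts(coeffs, width=8):
    """Count, across all coefficients, how often each spacing between two
    nonzero bits occurs -- the simplest kind of common subexpression."""
    counts = Counter()
    for c in coeffs:
        ones = [k for k in range(width) if (c >> k) & 1]
        for i in range(len(ones)):
            for j in range(i + 1, len(ones)):
                counts[ones[j] - ones[i]] += 1
    return counts

# 0b101, 0b1010, 0b10100 all contain the subexpression x + (x << 2)
counts = shared_pattern_counts([0b101, 0b1010, 0b10100])
```

Here the spacing 2 occurs in every coefficient, so computing x + (x << 2) once and shifting the result would replace three adders by one shared adder plus shifts; redundant (signed-digit) representations enlarge the set of such matches.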
11

Ren, Yuan. "Adaptive Evolutionary Monte Carlo for Heuristic Optimization: With Applications to Sensor Placement Problems." 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2008-12-84.

Abstract:
This dissertation presents an algorithm to solve optimization problems with "black-box" objective functions, i.e., functions that can only be evaluated by running a computer program. Such optimization problems often arise in engineering applications, for example, the design of sensor placement. Due to the complexity of engineering systems, the objective functions usually have multiple local optima and depend on a huge number of decision variables. These difficulties make many existing methods less effective. The proposed algorithm is called adaptive evolutionary Monte Carlo (AEMC), and it combines sampling-based and metamodel-based search methods. AEMC incorporates strengths from both methods and compensates for the limitations of each individual method. Specifically, the AEMC algorithm combines a tree-based predictive model with an evolutionary Monte Carlo sampling procedure for the purpose of heuristic optimization. AEMC is able to escape local optima due to its random sampling component, and it improves the quality of solutions quickly by using information learned from the tree-based model. AEMC is also an adaptive Markov chain Monte Carlo (MCMC) algorithm, and is in fact the first adaptive MCMC algorithm that simulates multiple Markov chains in parallel. The ergodicity property of the AEMC algorithm is studied. It is proven that the distribution of samples obtained by AEMC converges asymptotically to the "target" distribution determined by the objective function. This means that AEMC has a larger probability of collecting samples from regions containing the global optimum than from other regions, which implies that AEMC will reach the global optimum given enough run time. The AEMC algorithm falls into the category of heuristic optimization algorithms, and is applicable to problems that can be solved by other heuristic methods, such as genetic algorithms.
Advantages of AEMC are demonstrated by applying it to a sensor placement problem in a manufacturing process, as well as to a suite of standard test functions. It is shown that AEMC is able to enhance optimization effectiveness and efficiency compared to a few alternative strategies, including genetic algorithms, Markov chain Monte Carlo algorithms, and metamodel-based methods. The effectiveness of AEMC for sampling purposes is also shown by applying it to a mixture Gaussian distribution.
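A drastically simplified stand-in for AEMC's multiple-chain sampling: several parallel Metropolis chains at different temperatures minimizing a black-box function. This sketch omits the tree-based metamodel and the adaptation that are central to AEMC itself; it only illustrates how sampling from exp(-f/T) lets chains escape local optima.

```python
import numpy as np

def population_mcmc_minimize(f, dim, temps=(1.0, 0.3, 0.1), n_iter=3000, seed=0):
    """Parallel random-walk Metropolis chains targeting exp(-f(x)/T)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(len(temps), dim))
    fx = np.array([f(xi) for xi in x])
    best_x, best_f = x[np.argmin(fx)].copy(), fx.min()
    for _ in range(n_iter):
        for c, T in enumerate(temps):
            prop = x[c] + rng.normal(scale=np.sqrt(T), size=dim)
            fp = f(prop)
            # Metropolis acceptance at temperature T
            if rng.random() < np.exp(min(0.0, (fx[c] - fp) / T)):
                x[c], fx[c] = prop, fp
                if fp < best_f:
                    best_x, best_f = prop.copy(), fp
    return best_x, best_f

# Multimodal test function with global minima at every coordinate = +-1
f = lambda v: float(((v ** 2 - 1.0) ** 2).sum())
best_x, best_f = population_mcmc_minimize(f, dim=2)
```

Hot chains explore between modes while the coldest chain refines the current best solution; AEMC additionally exchanges information between chains through its learned metamodel.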
12

Till, Matthew Charles. "Actuarial Inference and Applications of Hidden Markov Models." Thesis, 2011. http://hdl.handle.net/10012/6094.

Abstract:
Hidden Markov models have become a popular tool for modeling long-term investment guarantees. Many different variations of hidden Markov models have been proposed over the past decades for modeling indexes such as the S&P 500, and they capture the tail risk inherent in the market to varying degrees. However, goodness-of-fit testing, such as residual-based testing, for hidden Markov models is a relatively undeveloped area of research. This work focuses on hidden Markov model assessment, and develops a stochastic approach to deriving a residual set that is ideal for standard residual tests. This result allows hidden-state models to be tested for goodness-of-fit with the well developed testing strategies for single-state models. This work also focuses on parameter uncertainty for the popular long-term equity hidden Markov models. There is a special focus on underlying states that represent lower returns and higher volatility in the market, as these states can have the largest impact on investment guarantee valuation. A Bayesian approach for the hidden Markov models is applied to address the issue of parameter uncertainty and the impact it can have on investment guarantee models. Also in this thesis, the areas of portfolio optimization and portfolio replication under a hidden Markov model setting are further developed. Different strategies for optimization and portfolio hedging under hidden Markov models are presented and compared using real world data. The impact of parameter uncertainty, particularly with model parameters that are connected with higher market volatility, is once again a focus, and the effects of not taking parameter uncertainty into account when optimizing or hedging in a hidden Markov model are demonstrated.
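The likelihood computations underlying hidden Markov model fitting and assessment rest on the forward algorithm; a minimal sketch for a discrete-observation HMM (the transition and emission matrices below are made-up toy values):

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm: likelihood of an observation sequence.
    pi: initial state probs (n,); A: transitions (n, n); B: emissions (n, m)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        # Propagate through the chain, then weight by the emission probability
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
lik = hmm_forward(pi, A, B, [0, 1, 0])
```

The recursion sums over all hidden-state paths in linear time; in practice the alphas are rescaled or kept in log space to avoid underflow on long return series.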
13

Chen, Ming-Chi, and 陳明奇. "Simulation and Optimization of MCM Interconnections." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/10625596669140722965.

Abstract:
Master's thesis
National Taiwan University
Department of Electrical Engineering
Academic year 85 (ROC calendar)
To take full advantage of the increased speed and density of VLSI circuits, multichip modules (MCMs) have been developed to reduce signal delay, power requirements, and the physical size of electronic systems. However, as more chips are placed in the same package, the line density greatly increases. This can result in serious signal distortions. In order to ensure the performance of an MCM-based system, the interconnections between bare chips must be designed carefully. In this thesis, we develop a dedicated simulation system to analyze MCM interconnection networks and help package designers find the optimal design. This simulation system consists of three parts, namely, a parameter calculator, a circuit simulator, and a circuit optimizer. The parameter calculator evaluates the transmission-line parameters of interconnections. These parameters are fed into the circuit simulator to determine the time-domain response of an MCM interconnection network. If the circuit response does not satisfy the performance specifications, the circuit optimizer can help us find the optimal design. The simulation results are also compared with the results from empirical formulas, references, and another circuit simulator, HSPICE. Very good agreement is observed.
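A full transmission-line simulation is beyond a sketch, but the kind of first-order delay estimate often used before detailed simulation of package interconnects is the Elmore delay of an RC ladder (the resistor and capacitor values below are arbitrary, in normalized units):

```python
def elmore_delay(rs, cs):
    """Elmore delay at the far end of an RC ladder: each capacitor C_k
    contributes (sum of upstream resistances) * C_k."""
    delay, r_total = 0.0, 0.0
    for r, c in zip(rs, cs):
        r_total += r          # resistance between the driver and node k
        delay += r_total * c
    return delay

# Two-segment ladder: R1=C1=R2=C2=1 gives delay 1*1 + (1+1)*1 = 3
delay = elmore_delay([1.0, 1.0], [1.0, 1.0])
```

Elmore delay overestimates the 50% delay of a distributed line but ranks routing alternatives correctly, which is why it serves as a quick screen before running a simulator such as HSPICE.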
14

CHUN-JEN, FANG. "Thermal Optimization for a Confined Stationary or Rotating MCM Disk with Round Air Jet Array Impingement." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0016-1303200709471079.

15

FANG, CHUN-JEN, and 方俊仁. "Thermal Optimization for a Confined Stationary or Rotating MCM Disk with Round Air Jet Array Impingement." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/44834863622902797150.

Abstract:
Doctoral dissertation
National Tsing Hua University
Department of Power Mechanical Engineering
Academic year 95 (ROC calendar)
A series of experimental investigations with stringent measurement methods has been performed on the fluid flow and heat transfer characteristics of a stationary or rotating MCM disk with various cooling techniques. The experimental cases are statistically designed using Design of Experiments (DOE) together with the Central Composite Design (CCD) method. The relevant parameters influencing fluid flow and heat transfer performance for a stationary or rotating MCM disk with various cooling techniques include: steady-state Grashof number (Grs), ratio of the confinement spacing to disk diameter (H/D), ratio of jet separation distance to nozzle diameter (H/d), jet Reynolds number (Rej) and rotational Reynolds number (Rer). The ranges of the above-mentioned parameters are: Grs = 2.32×10^5 - 2.57×10^6, H/D = 0.083 - 1.2, H/d = 0.83 - 14.4, Rej = 89 - 17364 and Rer = 0 - 2903. Their effects on fluid flow and heat transfer characteristics for a stationary or rotating MCM disk with various cooling techniques have been systematically explored. In addition, a sensitivity analysis of the design factors, the so-called "ANOVA", has been performed. An effective optimization method combining the RSM and SQP techniques for performing the thermal optimization of a stationary or rotating MCM disk with various cooling techniques under multiple constraints has been successfully developed. Six subtopics of thermal optimization have been systematically explored: (1) a confined stationary MCM disk in natural convection; (2) a confined rotating MCM disk; (3) a stationary MCM disk with confined single round jet impingement; (4) a confined rotating MCM disk with single round jet impingement; (5) a confined stationary MCM disk with round jet array impingement; and (6) a confined rotating MCM disk with round jet array impingement.
In hydrodynamic aspect, the fluid flow characteristics including the streamwise velocity and turbulence intensity distributions at nozzle exits, jet potential core length, streamwise velocity decay along jet centerline and turbulence intensities evolution along jet centerline are investigated. The flow behaviors for single round jet and for jet array impingement have been experimentally verified as a symmetrical flow and an unsymmetrical flow, respectively. Based on the measurement of the above-mentioned jet flow characteristics for jet array impingement, the jet flow behaviors at nozzle exits can be classified into two regimes such as “initially transitional flow regime” and “initially turbulent flow regime.” Additionally, new correlations of the ratio of potential core length to nozzle diameter, Lpc/d, in terms of relevant influencing parameters for a confined stationary or rotating MCM disk with single round jet and round jet array impingement at various nozzle jets are presented. In heat transfer aspect, from all the experimental data measured for transient-/steady-state local and average heat transfer characteristics, the thermal behavior has been verified to be axisymmetrically maintained and the results have been achieved in an axisymmetric form. The stagnation, local and average heat transfer characteristics for a stationary or rotating MCM disk with various cooling techniques are successively explored. Besides, the mutual influences among buoyancy, disk rotation and jet impingement on the heat transfer performance of a confined stationary or rotating MCM disk with round jet array impingement have been quantitatively evaluated. New correlations of stagnation, local and average Nusselt numbers in terms of relevant parameters are proposed. 
To interpret the convective heat transfer characteristics on the confined stationary or rotating MCM disk surface under the combined effects of jet impingement and disk rotation, the heat transfer behavior can be classified into two distinct regimes, a disk rotation-dominated regime and a jet impingement-dominated regime, according to the ratio of rotational Reynolds number to jet Reynolds number, Rer/Rej. Two empirical correlations delimiting these two regimes are proposed for single round jet and jet array impingement, respectively. The steady-state heat transfer enhancement of jet array impingement over single round jet impingement onto a confined stationary or rotating MCM disk has been systematically explored, and a new correlation of the heat transfer enhancement ratio in terms of the relevant influencing parameters is reported. Furthermore, a series of thermal optimizations with multiple constraints, such as space, jet Reynolds number, rotational Reynolds number, nozzle exit velocity, disk rotational speed and total power consumption, for a stationary or rotating MCM disk with various cooling techniques has been performed and discussed. New correlations of the optimal steady-state average heat transfer performance for a confined stationary or rotating MCM disk with single round jet or jet array impingement are finally presented.
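Correlations of the kind proposed above (e.g. Nu as a power law in Reynolds number) are typically obtained by linear least squares in log-log space. The sketch below fits y = C·x^m to synthetic data; the coefficient values are made up for illustration and are not the thesis's measured correlations.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = C * x**m via linear regression on
    log(y) = log(C) + m * log(x)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    m = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) \
        / sum((a - mx) ** 2 for a in lx)
    C = math.exp(my - m * mx)
    return C, m

# Synthetic "measurements": Nu = 0.5 * Re^0.7 across the jet Reynolds range
Re = [100, 500, 2000, 8000, 17000]
Nu = [0.5 * r ** 0.7 for r in Re]
C, m = fit_power_law(Re, Nu)
print(round(C, 3), round(m, 3))  # recovers C = 0.5, m = 0.7
```

With noise-free data the fit recovers the generating constants exactly; with real measurements the same procedure gives the least-squares correlation coefficients.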
APA, Harvard, Vancouver, ISO, and other styles
16

Lin, Heng-cheng, and 林恒正. "Application of 2nd order Sub-modeling & Experiment Design for Multi-Chip Module (MCM) Reliability Optimization." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/02825932414845460806.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Department of Engineering Science (Master's and Doctoral Program)
95
A multi-chip module (MCM) is a single package that encapsulates more than one die, with the aim of reducing dimensions, improving performance, lowering power consumption and saving cost. An MCM package consists of various components; under a temperature cycling load, the mismatch of the coefficients of thermal expansion (CTE) of these components causes the package to deform, leading to fatigue failure of the solder joints. Therefore, this thesis focuses on an MCM model under thermal cycle loading to investigate the effects of component material and geometry on the fatigue life of 96.5Sn3.5Ag lead-free solder joints in an MCM assembly. The components of the MCM include the heatsink, thermal adhesive, chips, underfill, structural adhesive, substrate, printed circuit board (PCB) and Sn3.5Ag lead-free solder balls. Surface Evolver, an energy-based surface modeling software, is applied to calculate the shape of the solder ball; this geometry is then imported into ANSYS, a finite element analysis software, to generate a 3-D finite element sliced bar-like model, and second-order sub-modeling is used to perform the analysis. To simplify the numerical model, an effective substitution for the solder joints/underfill between the chip and the substrate is adopted, turning the complex geometry into a simple isotropic layer. For the material behavior of the lead-free solder, the multi-linear isotropic hardening model is selected, with the Garofalo-Arrhenius model describing its plastic and creep behavior. The material properties of the other components are assumed to be linear elastic. According to JEDEC Test Method A104-B, thermal cycle loading between 40℃ and 125℃ for up to 10 cycles is applied to the MCM package. The deformation, stress, strain and hysteresis curve of the outermost solder joint (between substrate and PCB) are then computed by finite element analysis, and the equivalent strain range is substituted into the Coffin-Manson formula to estimate the fatigue life of the solder joint.
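The Coffin-Manson life estimate described above can be sketched as follows. The fatigue ductility coefficient and exponent below are illustrative placeholder values for a lead-free solder, not the constants calibrated in the thesis.

```python
def coffin_manson_life(strain_range, eps_f=0.325, c=-0.5):
    """Coffin-Manson relation: strain_range/2 = eps_f * (2*Nf)**c,
    solved for the number of cycles to failure Nf.
    eps_f (fatigue ductility coefficient) and c (fatigue ductility
    exponent) are material constants -- placeholder values here."""
    return 0.5 * (strain_range / (2.0 * eps_f)) ** (1.0 / c)

# A smaller equivalent strain range always predicts a longer life
# (c is negative), matching the trend reported in the abstract.
life_high_strain = coffin_manson_life(0.04779)  # ~4.779% strain range
life_low_strain = coffin_manson_life(0.00903)   # ~0.903% strain range
print(life_low_strain > life_high_strain)  # True
```

The equivalent strain range fed into this formula comes from the finite element hysteresis loop of the critical (outermost) solder joint.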
To run the numerical simulation efficiently, a second-order sub-model is developed. To verify its accuracy and efficiency, it is compared with a fine-mesh global model under the same thermal cycle loading; the results show that the difference in equivalent strain range between the two models is only 1.25%, while the calculation time and hard disk capacity required for second-order sub-modeling are only about 24% and 38%, respectively, of those for the fine-mesh global model. A single-factor experiment was adopted to predict the impact of the following six factors on the fatigue life of the MCM: the upper and lower pad radii of the solder ball, the PCB thickness and CTE, and the substrate thickness and CTE. First, the single-factor analysis is performed to evaluate the effect of each parameter on solder ball reliability. Then both the Taguchi method and the Response Surface Method (RSM) are applied to obtain an optimal parameter combination that improves the reliability of the MCM package. Results from the single-factor experiment show that reducing both the upper and lower solder pad radii, fixing the upper pad radius while increasing the lower pad radius, fixing the lower pad radius while reducing the upper pad radius, reducing the PCB thickness, increasing the PCB CTE, reducing the substrate thickness, or lowering the substrate CTE all improve the fatigue life of the MCM package. The optimal design from the Taguchi method shows significant improvement: the equivalent strain range is 0.903% and the fatigue life is 5599 cycles, compared with the original design's equivalent strain range of 4.779% and fatigue life of 87 cycles; the equivalent strain range is reduced by about 81% and the fatigue life is enhanced about 64 times. The Response Surface Method is then employed to derive a closed-form polynomial function predicting the fatigue life of the solder ball.
The RSM model predicts an optimal design with an equivalent strain range of 0.9059% and a fatigue life of 5554 cycles, and a confirmation analysis of this optimal design yields an equivalent strain range of 0.9237% and a fatigue life of 5289 cycles. The RSM model can therefore successfully predict the reliability of the MCM package.
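A response surface in the RSM sense is a low-order polynomial fitted to sampled simulation outputs, whose extremum gives the predicted optimal design. The sketch below fits a one-factor quadratic exactly through three design points and locates its minimizer; the sample values are hypothetical, not the thesis's data.

```python
def fit_quadratic(pts):
    """Fit y = a + b*x + c*x**2 exactly through three (x, y) points
    by solving the 3x3 Vandermonde system with Gaussian elimination."""
    A = [[1.0, x, x * x] for x, _ in pts]
    rhs = [y for _, y in pts]
    # Forward elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for col in range(i, 3):
                A[r][col] -= f * A[i][col]
            rhs[r] -= f * rhs[i]
    # Back substitution
    coef = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return coef  # [a, b, c]

# Hypothetical strain-range response sampled at three coded pad-radius levels
pts = [(-1.0, 0.048), (0.0, 0.020), (1.0, 0.031)]
a, b, c = fit_quadratic(pts)
x_opt = -b / (2 * c)  # minimizer of the fitted surface (valid since c > 0)
```

A real RSM study fits a multi-factor quadratic by least squares over all DOE runs; the one-factor exact fit above only illustrates the surface-then-optimize idea.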
APA, Harvard, Vancouver, ISO, and other styles
