Theses on the topic "Mixed variables"


See the top 50 dissertations (degree or doctoral theses) for research on the topic "Mixed variables".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is available in the metadata.

Browse theses from many scientific areas and compile an accurate bibliography.

1

Moustaki, Irini. "Latent variable models for mixed manifest variables". Thesis, London School of Economics and Political Science (University of London), 1996. http://etheses.lse.ac.uk/78/.

Full text
Abstract:
Latent variable models are widely used in social sciences in which interest is centred on entities such as attitudes, beliefs or abilities for which there exist no direct measuring instruments. Latent modelling tries to extract these entities, here described as latent (unobserved) variables, from measurements on related manifest (observed) variables. Methodology already exists for fitting a latent variable model to manifest data that is either categorical (latent trait and latent class analysis) or continuous (factor analysis and latent profile analysis). In this thesis a latent trait and a latent class model are presented for analysing the relationships among a set of mixed manifest variables using one or more latent variables. The set of manifest variables contains metric (continuous or discrete) and binary items. The latent dimension is continuous for the latent trait model and discrete for the latent class model. Scoring methods for allocating individuals on the identified latent dimensions based on their responses to the mixed manifest variables are discussed. Item nonresponse is also discussed in attitude scales with a mixture of binary and metric variables using the latent trait model. The estimation and the scoring methods for the latent trait model have been generalized for conditional distributions of the observed variables given the vector of latent variables other than the normal and the Bernoulli in the exponential family. To illustrate the use of the mixed model four data sets have been analyzed. Two of the data sets contain five memory questions, the first on Thatcher's resignation and the second on the Hillsborough football disaster; these five questions were included in BMRBI's August 1993 face-to-face omnibus survey. The third and the fourth data sets are from the 1990 and 1991 British Social Attitudes surveys; the questions which have been analyzed are from the sexual attitudes sections and the environment section respectively.
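A rough numerical illustration of the model class this abstract describes (our sketch, not code from the thesis): a one-factor latent trait model for mixed manifest variables, with a logistic link for the binary items and conditional normals for the metric items, where the latent variable is integrated out by Gauss-Hermite quadrature. All parameter values are made up for the example.

```python
import numpy as np
from scipy.stats import norm

# One-factor latent trait model with mixed manifest variables:
#   binary item j:  P(y_j = 1 | z) = logistic(a_j * z + b_j)
#   metric item k:  y_k | z ~ Normal(c_k * z + d_k, s_k^2)
# with the latent variable z ~ N(0, 1) integrated out numerically.

def marginal_loglik(y_bin, y_met, a, b, c, d, s, n_nodes=30):
    # Gauss-Hermite nodes target integrals against exp(-x^2);
    # rescaling by sqrt(2) adapts them to the N(0, 1) density.
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    z = np.sqrt(2.0) * nodes                      # quadrature points
    w = weights / np.sqrt(np.pi)                  # N(0, 1) weights

    # log P(binary responses | z), logistic link
    eta = np.outer(z, a) + b                      # (n_nodes, n_binary)
    p = 1.0 / (1.0 + np.exp(-eta))
    log_bin = (y_bin * np.log(p) + (1 - y_bin) * np.log1p(-p)).sum(axis=1)

    # log p(metric responses | z), conditional normals
    mu = np.outer(z, c) + d                       # (n_nodes, n_metric)
    log_met = norm.logpdf(y_met, loc=mu, scale=s).sum(axis=1)

    # quadrature approximation of the integral over z
    return np.log(np.sum(w * np.exp(log_bin + log_met)))

# One respondent, 3 binary and 2 metric items (toy parameter values).
print(marginal_loglik(
    y_bin=np.array([1, 0, 1]), y_met=np.array([0.4, -1.2]),
    a=np.array([1.0, 0.8, 1.2]), b=np.array([0.0, -0.5, 0.3]),
    c=np.array([0.9, 1.1]), d=np.array([0.0, 0.2]),
    s=np.array([1.0, 0.7])))
```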
2

Chang, Soong Uk. "Clustering with mixed variables /". [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19086.pdf.

Full text
3

Mahat, Nor Idayu. "Some investigations in discriminant analysis with mixed variables". Thesis, University of Exeter, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.432783.

Full text
Abstract:
The location model is a potential basis for discriminating between groups of objects with mixed types of variables. The model specifies a parametric form for the conditional distribution of the continuous variables given each pattern of values of the categorical variables, thus leading to a theoretical discriminant function between the groups. To conduct a practical discriminant analysis, the objects must first be sorted into the cells of a multinomial table generated from the categorical values, and the model parameters must then be estimated from the data. However, in many practical situations some of the cells are empty, which prevents simple implementation of maximum likelihood estimation and restricts the feasibility of linear model estimators to cases with relatively few categorical variables. This deficiency was overcome by the non-parametric smoothing estimation proposed by Asparoukhov and Krzanowski (2000). Its usual implementation uses exponential and piece-wise smoothing functions for the continuous variables, and adaptive weighted nearest-neighbour smoothing for the categorical variables. Despite increasing the range of applicability, the smoothing parameters chosen by maximising the leave-one-out pseudo-likelihood depend on distributional assumptions, while the smoothing method for the categorical variables produces erratic values if the number of variables is large. This thesis rectifies these shortcomings, and extends location model methodology to situations where there are large numbers of mixed categorical and continuous variables. Chapter 2 uses the simplest form of the exponential smoothing function for the continuous variables and describes how the smoothing parameters can instead be chosen by minimising either the leave-one-out error rate or the leave-one-out Brier score, neither of which makes distributional assumptions. Alternative smoothing methods, namely a kernel and a weighted form of the maximum likelihood, are also investigated for the categorical variables. Numerical evidence in Chapter 3 shows that there is little to choose among the strategies for estimating smoothing parameters and among the smoothing methods for the categorical variables. However, some of the proposed smoothing methods are more feasible when the number of parameters to be estimated is reduced. Chapter 4 reviews previous work on problems of high-dimensional feature variables, and focuses on selecting variables on the basis of the distance between groups. In particular, the Kullback-Leibler divergence is considered for the location model, but existing theory based on maximum likelihood estimators is not applicable in general cases. Chapter 5 therefore describes the implementation of this distance for smoothed estimators, and investigates its asymptotic distribution. The estimated distance and its asymptotic distribution provide a stopping rule in a sequence of searching processes, whether by forward, backward or stepwise selection, following the test for no additional information. Simulation results in Chapter 6 exhibit the feasibility of the proposed variable selection strategies for large numbers of variables, but limitations in several circumstances are identified. Applications to real data sets in Chapter 7 show how the proposed methods are competitive with, and sometimes better than, other existing classification methods. Possible future work is outlined in Chapter 8.
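To make the location model concrete, here is a minimal sketch of the classical, unsmoothed version (ours, not the thesis code): objects are sorted into multinomial cells according to their categorical pattern, the continuous variables get a Gaussian per group and cell, and allocation combines the estimated cell probability with the Gaussian log-density. Empty cells simply have no estimate, which is exactly the deficiency that the smoothing estimators discussed above address.

```python
import numpy as np

def fit_location_model(X_cont, X_cat, y):
    """Naive ML fit: per (group, cell) probabilities and means, plus a
    common covariance estimated from all data for simplicity."""
    groups = np.unique(y)
    cells = {tuple(r) for r in X_cat}
    model = {"groups": groups, "cells": {}, "cov": np.cov(X_cont.T)}
    for g in groups:
        for cell in cells:
            mask = (y == g) & (X_cat == cell).all(axis=1)
            if mask.sum() > 0:                        # occupied cells only
                model["cells"][(g, cell)] = (
                    mask.mean(),                      # P(group, cell)
                    X_cont[mask].mean(axis=0))        # conditional mean
    return model

def classify(model, x_cont, x_cat):
    """Allocate to the group maximising log P(group, cell) + Gaussian term."""
    Sinv = np.linalg.inv(model["cov"])
    best, best_score = None, -np.inf
    for g in model["groups"]:
        key = (g, tuple(x_cat))
        if key not in model["cells"]:
            continue                                  # empty cell: no estimate
        p, mu = model["cells"][key]
        diff = x_cont - mu
        score = np.log(p) - 0.5 * diff @ Sinv @ diff
        if score > best_score:
            best, best_score = g, score
    return best

# Toy data: one binary categorical variable, two continuous variables.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X_cat = rng.integers(0, 2, size=(100, 1))
X_cont = rng.normal(size=(100, 2)) + y[:, None]       # group 1 is shifted
m = fit_location_model(X_cont, X_cat, y)
print(classify(m, np.array([1.2, 0.8]), np.array([1])))
```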
4

Pelamatti, Julien. "Mixed-variable Bayesian optimization : application to aerospace system design". Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I003.

Full text
Abstract:
Within the framework of complex system design, such as aircraft and launch vehicles, the presence of computationally intensive objective and/or constraint functions (e.g., finite element models and multidisciplinary analyses), coupled with the dependence on discrete and unordered technological design choices, results in challenging optimization problems. Furthermore, some of these technological choices are associated with a number of specific continuous and discrete design variables which must be taken into consideration only if particular technological and/or architectural choices are made. As a result, the optimization problem which must be solved in order to determine the optimal system design presents a dynamically varying search space and feasibility domain. The few existing algorithms which allow solving this particular type of problem tend to require a large number of function evaluations in order to converge to the feasible optimum, and are therefore inadequate when dealing with the computationally intensive problems often encountered in the design of complex systems. For this reason, this thesis explores the possibility of performing constrained mixed-variable and variable-size design space optimization by relying on surrogate-model-based design optimization performed with the help of Gaussian processes, also known as Bayesian optimization. More specifically, three main axes are discussed. First, the Gaussian process surrogate modeling of mixed continuous/discrete functions and the associated challenges are extensively discussed. A unifying formalism is proposed in order to facilitate the description and comparison of the existing kernels that adapt Gaussian processes to the presence of discrete unordered variables. Furthermore, the actual modeling performance of these various kernels is tested and compared on a set of analytical and design-related benchmarks with different characteristics and parameterizations. In the second part of the thesis, the possibility of extending the mixed continuous/discrete surrogate modeling to a context of Bayesian optimization is discussed. The theoretical feasibility of this extension in terms of objective/constraint function modeling as well as acquisition function definition and optimization is shown. Different possible alternatives are considered and described. The performance of the proposed optimization algorithm, with various kernel parameterizations and different initializations, is then tested on a number of analytical and design-related test cases and compared to reference algorithms. In the last part of this manuscript, two alternative ways of adapting the previously discussed mixed continuous/discrete Bayesian optimization algorithms to solve variable-size design space problems (i.e., problems characterized by a dynamically varying design space) are proposed. The first adaptation is based on the parallel optimization of several sub-problems coupled with a computational budget allocation based on the information provided by the surrogate models. The second adaptation, instead, is based on the definition of a kernel allowing the covariance between samples belonging to partially different search spaces to be computed, based on the hierarchical grouping of design variables. Finally, the two alternatives are tested and compared on a set of analytical and design-related benchmarks. Overall, it is shown that the proposed optimization methods converge to the optimum neighborhoods of the various constrained problems considerably faster than the reference methods, and thus represent a promising tool for the design of complex systems.
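Many of the kernels this thesis unifies and compares share a simple product structure that is easy to sketch: a standard RBF kernel on the continuous dimensions multiplied by a correlation term for each unordered discrete dimension. The snippet below is a generic illustration under that assumption (a compound-symmetry categorical kernel), not one of the specific kernels studied in the manuscript.

```python
import numpy as np

def mixed_kernel(x1, c1, x2, c2, length_scale=1.0, theta=0.3):
    """Product kernel for mixed continuous/discrete inputs.

    x1, x2 : continuous parts (1-D arrays)
    c1, c2 : unordered categorical parts (1-D arrays of labels)
    theta  : cross-category correlation in [0, 1); equal levels correlate 1.
    """
    # RBF part on the continuous variables
    sq_dist = np.sum((x1 - x2) ** 2) / length_scale ** 2
    k_cont = np.exp(-0.5 * sq_dist)
    # Compound-symmetry part on each unordered discrete variable
    k_cat = np.prod(np.where(c1 == c2, 1.0, theta))
    return k_cont * k_cat

# Covariance matrix over a small mixed design.
X = [(np.array([0.1, 0.5]), np.array(["steel"])),
     (np.array([0.2, 0.4]), np.array(["alu"])),
     (np.array([0.9, 0.1]), np.array(["steel"]))]
K = np.array([[mixed_kernel(x, c, x2, c2) for x2, c2 in X] for x, c in X])
print(np.round(K, 3))  # symmetric, positive semi-definite by construction
```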
5

Lazare, Arnaud. "Global optimization of polynomial programs with mixed-integer variables". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLY011.

Full text
Abstract:
In this thesis, we are interested in the study of polynomial programs, that is, optimization problems whose objective function and/or constraints are expressed by multivariate polynomials. These problems have many practical applications and are currently a very active field of research. Different methods can be used to solve them exactly or approximately, using, for instance, positive semidefinite relaxations of the "moments/sum of squares" type. But these problems remain very difficult, and only small instances can be solved in full generality. In the quadratic case, an effective exact solution approach was initially proposed through the QCR method. It is based on a quadratic convex reformulation that is optimal in terms of the continuous relaxation bound. One of the motivations of this thesis is to generalize this approach to polynomial programs. In most of this manuscript, we study optimization problems with binary variables. We propose two families of convex reformulations for these problems: "direct" reformulations and reformulations passing through quadratization. For direct reformulations, we first focus on linearizations. We introduce the concept of q-linearization, a linearization using q additional variables, and compare the bounds obtained by continuous relaxation for different values of q. Then, we apply convex reformulation to the polynomial problem, adding terms to the objective function but without adding additional variables or constraints. The second family of convex reformulations aims at extending quadratic convex reformulation to the polynomial case. We propose several new alternative reformulations that we compare to existing methods on instances from the literature. In particular, we present the PQCR algorithm for solving unconstrained binary polynomial problems; PQCR is able to solve several previously unsolved instances. In addition to numerical experiments, we also propose a theoretical study comparing the different quadratic reformulations of the literature and then applying a convex reformulation to them. Finally, we consider more general cases and propose a method to compute convex relaxations for continuous problems.
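For intuition on the reformulations passing through quadratization: a classical device (due to Rosenberg) substitutes an auxiliary binary variable for a product of two binary variables and adds a penalty that vanishes exactly when the substitution is consistent, so any polynomial in binary variables can be driven down to degree two. The check below verifies the identity by brute force; it illustrates the general device only, not the PQCR reformulation itself.

```python
from itertools import product

# Quadratize the cubic binary monomial x1*x2*x3: replace x1*x2 by w and
# add the Rosenberg penalty
#   P(x1, x2, w) = x1*x2 - 2*(x1 + x2)*w + 3*w,
# which equals 0 when w == x1*x2 and is >= 1 otherwise (binary values).
M = 10  # penalty weight, larger than any gain from violating w == x1*x2

def original(x1, x2, x3):
    return x1 * x2 * x3                                         # degree 3

def quadratized(x1, x2, x3, w):
    return w * x3 + M * (x1 * x2 - 2 * (x1 + x2) * w + 3 * w)   # degree 2

for x1, x2, x3 in product([0, 1], repeat=3):
    # minimizing over the auxiliary variable recovers the original value
    best = min(quadratized(x1, x2, x3, w) for w in [0, 1])
    assert best == original(x1, x2, x3)
print("quadratization matches the cubic monomial on all binary points")
```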
6

Bonnet, Anna. "Heritability Estimation in High-dimensional Mixed Models : Theory and Applications". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS498/document.

Full text
Abstract:
We study statistical methods to estimate the heritability of a biological trait, which is the proportion of the variation of this trait that can be explained by genetic factors. First, we propose to study the heritability of quantitative traits using high-dimensional sparse linear mixed models. We investigate the theoretical properties of the maximum likelihood estimator for the heritability, showing that it is consistent and that it satisfies a central limit theorem with a closed-form expression for the asymptotic variance. This result, supported by an extended numerical study on finite samples, shows that the variance of our estimator is strongly affected by the ratio between the number of observations and the size of the random genetic effects. More precisely, when the number of observations is small compared to the size of the genetic effects (which is often the case in genetic studies), the variance of the estimator is very large. This motivated the development of a variable selection method in order to keep only the genetic variants most involved in the phenotypic variations and provide more accurate heritability estimates. We then propose a variable selection method adapted to high-dimensional settings and show that, depending on the number of genetic variants actually involved in the phenotypic variations (the so-called causal variants), it may or may not be advantageous to include a variable selection step before estimating heritability. The last part of this thesis is dedicated to heritability estimation for binary data, in order to study the proportion of genetic factors involved in complex diseases. We study the theoretical properties of the method developed by Golan et al. (2014) for case-control data, which is very efficient in practice. Our main result is a proof of the consistency of their heritability estimator.
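For reference, the model and the target quantity can be written compactly. In the sparse high-dimensional linear mixed model of the first part (standard notation, ours rather than the thesis's exact display), heritability is the share of the phenotypic variance carried by the random genetic effects:

```latex
\[
  Y = X\beta + Zu + e, \qquad
  u \sim \mathcal{N}\bigl(0,\, \sigma_u^{2} I_N\bigr), \qquad
  e \sim \mathcal{N}\bigl(0,\, \sigma_e^{2} I_n\bigr),
\]
\[
  h^{2} \;=\; \frac{\sigma_u^{2}}{\sigma_u^{2} + \sigma_e^{2}},
\]
```
where $Z$ is the $n \times N$ matrix of standardized genotypes, so that the ratio between the number of observations $n$ and the size $N$ of the genetic effects mentioned in the abstract governs the asymptotic variance of the maximum likelihood estimator of $h^2$.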
7

Adamec, Vaclav. "The Effect of Maternal and Fetal Inbreeding on Dystocia, Calf Survival, Days to First Service and Non-Return Performance in U.S. Dairy Cattle". Diss., Virginia Tech, 2002. http://hdl.handle.net/10919/25999.

Full text
Abstract:
Intensive selection for increased milk production over many generations has led to growing genetic similarity and increased relationships in the dairy population. In the current study, inbreeding depression was estimated for number of days to first service, summit milk, conception by 70-day non-return, and calving rate with a linear mixed model (LMM) approach, and for calving difficulty and calf mortality with a Bayesian threshold model (BTM) for categorical traits. The effectiveness of classical and unknown-parentage-group procedures to estimate inbreeding coefficients was evaluated depending on the completeness of a 5-generation pedigree. A novel method derived from the classical formula for estimating inbreeding was utilized to evaluate completeness of pedigrees. Two different estimates of maternal inbreeding were fitted in separate models as a linear covariate in combined LMM analyses (Holstein registered and grade cows and Jersey cows) or separate analyses (registered Holstein cows) by parity (1-4) with fetal inbreeding. The impact of inbreeding type, model, data structure, and treatment of herd-year-season (HYS) on the magnitude and size of inbreeding depression was assessed. Grade Holstein datasets were sampled and analyzed by percentage of pedigree present (0-30%, 30-70% and 70-100%). BTM analyses (sire-mgs) were performed using Gibbs sampling for parities 1, 2 and 3, fitting maternal inbreeding only. In LMM analyses of grade data, the least complete pedigree and the diagonal A matrix performed the worst. Significant inbreeding effects were obtained for most traits in cows of parity 1. Fetal inbreeding depression was mostly lower than that from maternal inbreeding. Inbreeding depression in binary traits was the most difficult to evaluate. Analyses with non-additive effects included in the LMM, for data by inbreeding level and by age group, should be preferred to estimate inbreeding depression. In the BTM, inbreeding effects were strongly related to dam parity and calf sex. The largest effects were obtained from parity-1 cows giving birth to male calves (0.417% and 0.252% for dystocia and calf mortality), followed by births of female calves (0.300% and 0.203%). Female calves from mature cows were the least affected (0.131% and 0.005%). Data structure was found to be a very important factor in the attainment of convergence in distribution.
Ph. D.
8

Fernández, Villegas Renzo. "A beta inflated mean regression model with mixed effects for fractional response variables". Master's thesis, Pontificia Universidad Católica del Perú, 2017. http://tesis.pucp.edu.pe/repositorio/handle/123456789/8847.

Full text
Abstract:
In this article we propose a new mixed effects regression model for fractional bounded response variables. Our model allows us to incorporate covariates directly into the expected value, so we can quantify exactly the influence of these covariates on the mean of the variable of interest rather than on the conditional mean. Estimation is carried out from a Bayesian perspective and, due to the complexity of the augmented posterior distribution, we use a Hamiltonian Monte Carlo algorithm, the No-U-Turn sampler, implemented using the Stan software. A simulation study comparing models in terms of bias and RMSE shows that our model performs better than other traditional longitudinal models for bounded variables. Finally, we apply our Beta Inflated mixed-effects regression model to real data consisting of the utilization of credit lines in the Peruvian financial system.
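A sketch of the modelling idea (not the authors' Stan program): the mean-precision parameterization of the Beta log-density with a logit link on the mean and a subject-level random intercept. The zero/one inflation component and the No-U-Turn sampling step are omitted, and all names and values are illustrative.

```python
import numpy as np
from scipy.special import expit, gammaln

def beta_logpdf(y, mu, phi):
    """Beta log-density in the mean (mu) / precision (phi) parameterization."""
    a, b = mu * phi, (1.0 - mu) * phi
    return (gammaln(phi) - gammaln(a) - gammaln(b)
            + (a - 1.0) * np.log(y) + (b - 1.0) * np.log1p(-y))

def loglik(beta, u, phi, y, X, subject):
    """Mixed-effects mean regression for a fractional response in (0, 1):
    logit(E[y_ij]) = x_ij' beta + u_i, with u_i a subject random intercept."""
    mu = expit(X @ beta + u[subject])
    return beta_logpdf(y, mu, phi).sum()

# Toy longitudinal data: 20 subjects, 5 repeated measures each.
rng = np.random.default_rng(1)
subject = np.repeat(np.arange(20), 5)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
u_true = rng.normal(scale=0.5, size=20)
mu = expit(X @ np.array([-0.3, 0.8]) + u_true[subject])
y = rng.beta(mu * 30, (1 - mu) * 30)          # true precision phi = 30
print(loglik(np.array([-0.3, 0.8]), u_true, 30.0, y, X, subject))
```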
9

Dahito, Marie-Ange. "Constrained mixed-variable blackbox optimization with applications in the automotive industry". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS017.

Full text
Abstract:
Numerous industrial optimization problems are concerned with complex systems and have no explicit analytical formulation, that is, they are blackbox optimization problems. They may be mixed, namely involve different types of variables (continuous and discrete), and comprise many constraints that must be satisfied. In addition, the objective and constraint blackbox functions may be computationally expensive to evaluate. In this thesis, we investigate solution methods for such challenging problems, i.e., constrained mixed-variable blackbox optimization problems involving computationally expensive functions. As the use of derivatives is impractical, problems of this form are commonly tackled using derivative-free approaches such as evolutionary algorithms, direct search and surrogate-based methods. We investigate the performance of such deterministic and stochastic methods in the context of blackbox optimization, including on a finite element test case designed for our research purposes. In particular, the performance of the ORTHOMADS instantiation of the direct search MADS algorithm is analyzed on continuous and mixed-integer optimization problems from the literature. We also propose a new blackbox optimization algorithm, called BOA, based on surrogate approximations. It proceeds in two phases, the first of which focuses on finding a feasible solution, while the second one iteratively improves the objective value of the best feasible solution found. Experiments on instances stemming from the literature and applications from the automotive industry are reported. They include, in particular, results of our algorithm with different types of surrogates, as well as comparisons with ORTHOMADS.
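The two-phase structure described for BOA can be sketched generically; the code below is our paraphrase of the idea (reach feasibility first, then improve the objective among predicted-feasible points) built on off-the-shelf Gaussian process regressors, not the thesis's actual algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def two_phase_step(X, f_vals, g_vals, bounds, rng):
    """One iteration of a generic two-phase surrogate loop.

    X      : evaluated points, shape (n, d)
    f_vals : expensive objective values at X
    g_vals : constraint values at X (feasible iff g <= 0)
    Returns the next point to evaluate with the true blackbox.
    """
    gp_f = GaussianProcessRegressor(normalize_y=True).fit(X, f_vals)
    gp_g = GaussianProcessRegressor(normalize_y=True).fit(X, g_vals)

    # Cheap random candidate pool in the box (a real implementation
    # would optimize an acquisition function instead).
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, X.shape[1]))
    g_hat = gp_g.predict(cand)

    if (g_vals <= 0).any():
        # Phase 2: improve the objective among predicted-feasible candidates.
        feas = g_hat <= 0
        if feas.any():
            return cand[feas][np.argmin(gp_f.predict(cand[feas]))]
    # Phase 1: no feasible point yet, minimize predicted violation.
    return cand[np.argmin(np.maximum(g_hat, 0.0))]

# Toy blackbox: minimize sum(x^2) subject to x0 + x1 >= 1.
rng = np.random.default_rng(2)
bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])
X = rng.uniform(-2, 2, size=(8, 2))
f = lambda x: (x ** 2).sum(axis=1)
g = lambda x: 1.0 - x[:, 0] - x[:, 1]
for _ in range(20):
    X = np.vstack([X, two_phase_step(X, f(X), g(X), bounds, rng)[None, :]])
feas_mask = g(X) <= 0   # phase 1 quickly finds feasible points here
print("best feasible point:",
      np.round(X[feas_mask][np.argmin(f(X[feas_mask]))], 3))
```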
10

Mohd, Isa Khadijah. "Corporate taxpayers’ compliance variables under the self-assessment system in Malaysia : a mixed methods approach". Thesis, Curtin University, 2012. http://hdl.handle.net/20.500.11937/1796.

Full text
Abstract:
This thesis examines corporate taxpayers' compliance variables and analyses the influence of business characteristics on compliance behaviour. A two-phase exploratory mixed methods approach was employed to explore participants' views of corporate taxpayers' compliance variables, with the intention of using this information to develop a survey instrument. The method comprised eight focus group interviews with 60 tax auditors from the Inland Revenue Board of Malaysia (IRBM), and a mixed-mode survey among selected Malaysian corporate taxpayers. Thematic analysis and descriptive and inferential analysis were mainly used to examine the qualitative and quantitative data. The results suggest that the main corporate taxpayers' compliance variables are: tax knowledge, tax complexity, tax agents and tax audits. The main business characteristics found to have a significant influence on compliance variables are the length of time the business has been operational, size and industry. Continuous tax education and tax audit programmes are thus vital, and should focus more closely on specific groups of taxpayers, namely smaller and more newly established companies, companies in rural areas, and business industries that are more inclined to use cash transactions. Moreover, as many corporate taxpayers perceive the probability of an audit as low, the IRBM should publicise its audit activities more widely through available media channels. Tax simplification, especially of laws regarding estimation of income tax, is also an important consideration. This study extends the scope of tax compliance research to corporate taxpayers, and builds upon the limited international and Malaysian literature in this area. Most of the research findings of this thesis yield consistent results with respect to particular tax compliance variables. In a tax policy context, this study enables international tax authorities in general, and Malaysian tax authorities in particular, to have greater confidence in developing and administering tax laws and policies to maintain and/or increase the overall level of corporate compliance.
11

Jia, Yanan. "Generalized Bilinear Mixed-Effects Models for Multi-Indexed Multivariate Data". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1469180629.

Full text
12

Arendt, Christopher D. "Adaptive Pareto Set Estimation for Stochastic Mixed Variable Design Problems". Ft. Belvoir : Defense Technical Information Center, 2009. http://handle.dtic.mil/100.2/ADA499860.

Full text
13

Barjhoux, Pierre-Jean. "Towards efficient solutions for large scale structural optimization problems with categorical and continuous mixed design variables". Thesis, Toulouse, ISAE, 2020. http://depozit.isae.fr/theses/2020/2020_Barjhoux_Pierre-Jean.pdf.

Full text
Abstract:
Nowadays in the aircraft industry, structural optimization problems can be very complex and combine changes in choices of materials, types of stiffeners, and sizes of elements. In this work, it is proposed to solve large scale structural weight minimization problems with both categorical and continuous variables, subject to stress and displacement constraints. Three algorithms are presented and discussed in the manuscript on test cases of increasing complexity. As a first attempt, an algorithm based on the generic branch and bound framework was implemented, with a dedicated problem formulation for computing lower bounds on the optimal mass. According to the numerical tests, the algorithm returns the exact optima; however, the exponential scaling of the computational cost with respect to the number of structural elements prevents industrial application. The second algorithm relies on a bi-level formulation of the mixed categorical problem, where the upper-level problem consists of minimizing a first-order-like approximation of the lower-level result with respect to the categorical design variables. The method offers a quasi-linear scaling of the computational cost with respect to the number of elements and categorical values. Finally, the third algorithm takes advantage of a reformulation of the mixed categorical-continuous problem as a bi-level mixed integer non-linear program with continuously relaxable integer variables. Numerical tests include an optimization case with more than one hundred structural elements; moreover, the computational cost is quasi-independent of the number of available categorical values per element.
14

Weld, Christopher. "Computational Graphics and Statistical Analysis: Mixed Type Random Variables, Confidence Regions, and Golden Quantile Rank Sets". W&M ScholarWorks, 2019. https://scholarworks.wm.edu/etd/1563898977.

Full text
Abstract:
This dissertation has three principal areas of research: mixed type random variables, confidence regions, and golden quantile rank sets. While each offers a specific focus, some common themes persist; broadly stated, there are three. First, computational graphics play a critical role. Second, software development facilitates implementation and accessibility. Third, statistical analysis, often attributable to the aforementioned automation, provides valuable insights and applications. Each of the principal research areas is briefly summarized next. Mixed type random variables are a hybrid of continuous and discrete random variables, having components of both continuous probability density and discrete probability mass. This dissertation illustrates the challenges inherent in plotting mixed type distributions, and introduces an algorithm that addresses those issues. It considers sums and products of mixed type random variables, and supports its conclusions using Monte Carlo simulation experiments. Lastly, it introduces MixedAPPL, a computer algebra system software package designed for manipulating mixed type random variables. Confidence regions are a multi-dimensional version of a confidence interval. They are helpful to visualize and quantify uncertainty surrounding a point estimate. We begin by developing efficient plot algorithms for two-dimensional confidence regions. This research focuses specifically on likelihood-ratio based confidence regions for two-parameter univariate probability models, although the plot techniques are transferable to any two-dimensional setting. The R package 'conf' is introduced, which automates these confidence region plot algorithms for complete and right-censored data sets. Among its benefits, 'conf' provides access to Monte Carlo simulation experiments for confidence region coverage to an extent not possible previously. The corresponding coverage analysis results include reference tables for the Weibull, normal, and log-logistic distributions. These reference tables yield confidence region plots with exact coverage. The final topic is the introduction and analysis of a golden quantile rank set (GQRS). The term quantile rank set is used here to denote the population cumulative distribution function values corresponding to a sample. A GQRS can be thought of as "perfectly" representative of its population distribution because samples corresponding to a GQRS result in estimators matching the associated true population parameters. This unique characteristic is not applicable for all estimators and/or distributions, but when present, provides valuable insights and applications. Specifically, applications include an alternative (and at times computationally superior) method for parameter estimation and an exact actual coverage methodology for confidence regions (at times in which currently only estimates exist). Distributions with a GQRS associated with maximum likelihood estimation include the normal, exponential, Weibull, log-logistic, and one-parameter exponential power distributions.
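The plotting challenge with mixed type random variables comes from the jump discontinuities that point masses create in the CDF. A tiny self-contained example (ours, not taken from MixedAPPL): right-censoring an Exponential(1) variable at c leaves a continuous density on [0, c) and a discrete mass exp(-c) at c.

```python
import numpy as np

# Y = min(X, c) with X ~ Exponential(1) is a mixed type random variable:
# continuous density exp(-y) on [0, c) plus a point mass P(Y = c) = exp(-c).
c = 1.5

def cdf(y):
    """CDF of Y; note the jump of size exp(-c) at y = c."""
    y = np.asarray(y, dtype=float)
    return np.where(y < 0, 0.0, np.where(y < c, 1.0 - np.exp(-y), 1.0))

print(cdf([c - 1e-9, c]))           # [0.7769, 1.0]: the jump at c
# Monte Carlo check of the point mass, in the spirit of the
# dissertation's simulation experiments:
rng = np.random.default_rng(3)
y = np.minimum(rng.exponential(size=100_000), c)
print((y == c).mean(), np.exp(-c))  # both close to 0.2231
```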
15

Muazu, Naseer Babangida. "Comparative analysis of domestic fuel-wood energy consumption between South Africa and Nigeria: A mixed methods approach". University of Western Cape, 2019. http://hdl.handle.net/11394/7473.

Full text
Abstract:
Philosophiae Doctor - PhD
South Africa was considered to have attained universal access to modern energy: the number of households with access to energy increased from 30% in 1994 to 87% in 2012. The situation in Nigeria, however, is such that electricity generation figures are very poor and cannot meet even half of the demand of Nigerian households, and the majority of the states have challenges in accessing sufficient fossil fuels. Nevertheless, recent trends in domestic energy consumption in both countries are becoming biased in favor of fuel-wood energy, especially among low-income households, which are "descending the energy ladder".
16

Leone, Suzanna. "The Relationship between Classroom Climate Variables and Student Achievement". Bowling Green State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1256594309.

Full text
17

Tʻang, Min. "Extention of evaluating the operating characteristics for dependent mixed variables-attributes sampling plans to large first sample size /". Online version of thesis, 1991. http://hdl.handle.net/1850/11208.

Full text
18

Lata, Mary Elizabeth. "Variables affecting first order fire effects, characteristics, and behavior in experimental and prescribed fires in mixed and tallgrass prairie". Diss., University of Iowa, 2006. http://ir.uiowa.edu/etd/72.

Full text
19

Ye, Xin. "Development of models for understanding causal relationships among activity and travel variables". [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001842.

Full text
20

Dion, Charlotte. "Estimation non-paramétrique de la densité de variables aléatoires cachées". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM031/document.

Full text
Abstract:
This thesis contains several nonparametric procedures for estimating a probability density function. In each case, the main difficulty lies in the fact that the variables of interest are not directly observed. The first part deals with a linear mixed model for which repeated observations are available. The second part focuses on stochastic differential equations with random effects; several trajectories are observed in continuous time on a common time interval. The third part is set in a multiplicative noise framework. The parts of the thesis are connected by a shared inverse-problem context and by a common goal: the estimation of the density of a hidden variable. In the first two parts the density of one or more random effects is estimated. In the third part the goal is to rebuild the density of the original variable from noisy observations. Different global estimation methods are used and lead to competitive estimators: kernel estimators, projection estimators, and estimators built by deconvolution. Parameter selection leads to adaptive estimators, and the integrated quadratic risks are bounded using a Talagrand concentration inequality. A simulation study of each estimator illustrates its performance. A neuronal dataset is investigated with the procedures developed in this work for stochastic differential equations.
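Of the three estimator families mentioned (kernel, projection, deconvolution), the deconvolution construction is the least familiar, so here is a compact numerical sketch for the textbook additive-noise case Y = X + eps with known Gaussian noise, which is only an analogue of the estimators built in the thesis: the empirical characteristic function of Y is divided by that of the noise and Fourier-inverted over a truncated frequency band.

```python
import numpy as np

def deconv_density(y_obs, x_grid, noise_sd, h=0.2):
    """Deconvoluting kernel density estimate of X from Y = X + eps,
    eps ~ N(0, noise_sd^2), with a Fourier cutoff at |t| <= 1/h."""
    t = np.linspace(-1.0 / h, 1.0 / h, 801)
    dt = t[1] - t[0]
    ecf = np.exp(1j * np.outer(t, y_obs)).mean(axis=1)   # empirical cf of Y
    cf_eps = np.exp(-0.5 * (noise_sd * t) ** 2)          # cf of the noise
    cf_x = ecf / cf_eps                                  # deconvolution step
    # inverse Fourier transform evaluated on x_grid (Riemann sum over t)
    f_hat = (np.exp(-1j * np.outer(x_grid, t)) * cf_x).sum(axis=1).real
    return np.clip(f_hat * dt / (2 * np.pi), 0.0, None)

# Hidden X ~ N(1, 0.5^2), observed through additive N(0, 0.4^2) noise.
rng = np.random.default_rng(4)
x = rng.normal(1.0, 0.5, size=2000)
y = x + rng.normal(0.0, 0.4, size=2000)
grid = np.linspace(-1.0, 3.0, 9)
print(np.round(deconv_density(y, grid, noise_sd=0.4), 3))
```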
21

Cuesta, Ramirez Jhouben Janyk. "Optimization of a computationally expensive simulator with quantitative and qualitative inputs". Thesis, Lyon, 2022. http://www.theses.fr/2022LYSEM010.

Full text
Abstract:
In this thesis, costly mixed problems are approached through Gaussian processes where the discrete variables are relaxed into continuous latent variables. The continuous space is more easily harvested by classical Bayesian optimization techniques than a mixed space would be. Discrete variables are recovered either subsequently to the continuous optimization, or simultaneously with an additional continuous-discrete compatibility constraint that is handled with augmented Lagrangians. Several possible implementations of such Bayesian mixed optimizers are compared. In particular, the reformulation of the problem with continuous latent variables is put in competition with searches working directly in the mixed space. Among the algorithms involving latent variables and an augmented Lagrangian, particular attention is devoted to the Lagrange multipliers, for which a local and a global estimation technique are studied. The comparisons are based on the repeated optimization of three analytical functions and on a mechanical application regarding a beam design. An additional study applies the proposed mixed optimization strategy to the field of mixed self-calibration. This analysis was inspired by an application in radionuclide quantification, which defined a specific inverse function that required the study of its multiple properties in the continuous scenario. Different deterministic and Bayesian strategies were proposed towards a complete definition in a mixed-variable setup.
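The augmented Lagrangian device used above to enforce continuous-discrete compatibility can be stated in its generic form (notation ours, not the thesis's exact formulation):

```latex
\[
  \mathcal{L}_\rho(x, \lambda) \;=\; f(x) \;+\; \lambda^{\top} c(x)
  \;+\; \frac{\rho}{2}\, \lVert c(x) \rVert_2^{2},
  \qquad
  \lambda^{(k+1)} \;=\; \lambda^{(k)} + \rho\, c\bigl(x^{(k)}\bigr),
\]
```
where $c(x)$ measures the discrepancy between the continuous latent solution and its recovered discrete counterpart, $\lambda$ collects the Lagrange multipliers (for which the thesis compares local and global estimation techniques), and $\rho > 0$ is the penalty parameter.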
22

Giacofci, Joyce. "Classification non supervisée et sélection de variables dans les modèles mixtes fonctionnels. Applications à la biologie moléculaire". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM025/document.

Full text
Abstract:
More and more scientific studies collect large amounts of data consisting of sets of curves recorded on individuals. These data can be seen as an extension of longitudinal data in high dimension and are often modeled as functional data in a mixed-effects framework. In the first part, we focus on performing unsupervised clustering of these curves in the presence of inter-individual variability. To this end, we develop a new procedure based on a wavelet representation of the model, for both fixed and random effects. Our approach follows two steps: a dimension reduction step, based on wavelet thresholding techniques, is first performed; then a clustering step is applied to the selected coefficients. An EM algorithm is used for maximum likelihood estimation of the parameters. The properties of the overall procedure are validated by an extensive simulation study. We also illustrate our method on high-throughput molecular data (omics data) such as microarray CGH or mass spectrometry data. Our procedure is implemented in the R package "curvclust", available on the CRAN website. In the second part, we concentrate on estimation and dimension reduction issues in the mixed-effects functional framework, and we develop two approaches accordingly. The first approach deals with estimation in a nonparametric setting; we demonstrate that the functional fixed-effects estimator based on wavelet thresholding techniques achieves the expected rate of convergence toward the true function. The second approach is dedicated to the selection of both fixed and random effects; we propose a procedure based on penalized maximum likelihood variable selection techniques, using two SCAD penalties on the fixed effects and on the random-effects variances. In this framework, we show that the penalized estimators enjoy the oracle property when both the number of individuals and the signal size diverge. A simulation study is carried out to assess the behaviour of the two proposed approaches.
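A minimal Python analogue of the two-step procedure (dimension reduction by wavelet thresholding, then clustering of the retained coefficients) might look like the sketch below. The thesis's actual implementation is the R package "curvclust"; this snippet only mirrors the idea with off-the-shelf tools and replaces the EM step of the functional mixed model with plain k-means.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wavelet_cluster(curves, n_clusters=2, wavelet="db4"):
    """Step 1: wavelet-decompose each curve and soft-threshold the detail
    coefficients (universal threshold). Step 2: cluster the thresholded
    coefficient vectors."""
    coef_rows = []
    for y in curves:
        coeffs = pywt.wavedec(y, wavelet, level=4)
        # universal threshold, sigma estimated from the finest detail level
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(y)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        coef_rows.append(np.concatenate(coeffs))
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        np.asarray(coef_rows))

# Toy curves: two cluster shapes plus random vertical shifts (random effect).
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 128)
curves = np.array(
    [np.sin(2 * np.pi * t) + rng.normal(0, 0.1) + rng.normal(0, 0.2, 128)
     for _ in range(15)] +
    [np.cos(2 * np.pi * t) + rng.normal(0, 0.1) + rng.normal(0, 0.2, 128)
     for _ in range(15)])
print(wavelet_cluster(curves))      # recovers the two shape groups
```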
23

Vasudevan, S. "Development of new spatially curved non-linear frame finite element using a mixed variational principle and rotations as independent variables". Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/13069.

Full text
24

Hay, John Leslie. "Statistical modelling for non-Gaussian time series data with explanatory variables". Thesis, Queensland University of Technology, 1999.

Search for the full text
25

Zhang, Yang [Verfasser], Horst [Akademischer Betreuer] Baier and Kai-Uwe [Akademischer Betreuer] Bletzinger. "Efficient Procedures for Structural Optimization with Integer and Mixed-Integer Design Variables / Yang Zhang. Gutachter: Horst Baier ; Kai-Uwe Bletzinger. Betreuer: Horst Baier". München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1071948083/34.

Full text
26

Hartung, Julie A. "“It’s never going to be perfect even though I want it to be”: Quantitatively and qualitatively investigating honors and non-honors students’ experiences of perfectionism and related variables". Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/honors/631.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Previous research has demonstrated that students in university honors programs may be distinct from their non-honors counterparts. To further examine these differences and the overall experiences of honors students, this thesis used a Study 1/Study 2 mixed-methods design centered on students in East Tennessee State University's University Honors Scholars program. Study One quantitatively examined the differences between honors and non-honors students' levels of perfectionism, imposter syndrome, and academic and social competitiveness. Findings from Study One inspired Study Two, which qualitatively examined honors students' experiences with perfectionism, uncovering the sources and effects of their perfectionistic behaviors. Combined, these findings indicate not only that honors students experience higher levels of perfectionism than non-honors students, likely stemming from the expectations and standards held by the honors program, but also that their perfectionistic behaviors are largely maladaptive and are used to avoid failure rather than to pursue success.
27

Zhang, Dan. "Business-to-Business (B2B) media in UK : a mixed methods study using product variables to assess the impacts of social media on product strategies". Thesis, University of Westminster, 2016. https://westminsterresearch.westminster.ac.uk/item/q10qy/business-to-business-b2b-media-in-uk-a-mixed-methods-study-using-product-variables-to-assess-the-impacts-of-social-media-on-product-strategies.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Business-to-business (B2B) media, formerly known as the trade press, occupy one of the blind spots of media research. Digitisation has helped transform B2B media from their old profile of trade magazines into a dynamic media sector producing multiple publishing and off-line products with different business models. Previous work on the digitisation of media has focused on the mass media and neglected the B2B sector. This study addresses this gap by examining the impacts of social media, as part of the forces of digitisation, on the B2B media industry in the UK and how the industry has adjusted its business strategies in response. A review of the literature describes the uniqueness of B2B media in comparison with the mass media and develops an analytical framework that defines B2B media through their core value proposition of helping audiences make money. To analyse the different ways B2B media attempt to deliver this value proposition, the thesis develops a typology of B2B products using two variables, utility and timeliness, and identifies and explains a third variable, confidentiality. Social media are found to provide audiences and users with the same utilities as B2B media do: information and connectivity. The analytical framework therefore speculates that social media may affect different B2B products and companies either as competitors or as supplements. The study then collects empirical data to establish the actual impacts of social media and digitisation on the variables and product strategies of B2B media. Quantitative survey and qualitative interview data from B2B media practitioners reveal the strengths and weaknesses of social media and suggest that social media partially and weakly influence the different types of B2B media products on the timeliness and confidentiality variables but have no effect on the basic utility variable. The research participants do not consider social media to be in competition with them and respond to the impacts of social media positively by using them as connectivity tools. The B2B media practitioners also control and adjust the timeliness and confidentiality variables of their products as part of their product-strategy changes, which appear to be a direct response not to social media but to peer competition and to the disruptions from greater digitisation forces in the market. The conclusions of the study contradict the expectation of social media as a disruptive force for B2B media; instead, the data suggest a realistic allocation of internal resources by the industry in response to the impacts of social media. As a pioneering study of its kind in the literature of media and media-business research, this thesis defines the specific aspects of B2B media products and of the sector in the media landscape, and contributes a comprehensive analytical framework with which it calls for future research on B2B media from audience, corporate-structure, global-market, technology, and other perspectives.
28

Zulian, Marine. "Méthodes de sélection et de validation de modèles à effets mixtes pour la médecine génomique". Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX003.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The study of complex biological phenomena such as human pathophysiology, the pharmacokinetics of a drug or its pharmacodynamics can be enriched by modelling and simulation approaches. Technological advances in genetics allow datasets to be built from larger and more heterogeneous populations. The challenge is then to develop tools that integrate genomic and phenotypic data in order to explain inter-individual variability. In this thesis, we develop methods that take into account both the complexity of biological data and the complexity of the underlying processes. Curation steps applied to the genomic covariates allow us to restrict the number of candidate covariates and to limit correlations between them. We then propose a covariate-selection algorithm for mixed-effects models whose structure is constrained by the physiological process under study. In particular, we illustrate the developed methods on two medical applications: real data on arterial hypertension and simulated data on the metabolism of tramadol (an opioid).
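As a rough illustration of the curation step described above, correlated genomic covariates can be pruned before selection; the 0.8 cutoff and the greedy order below are assumptions chosen for the example, not values taken from the thesis.

    import numpy as np

    def prune_correlated(X, names, cutoff=0.8):
        """Greedily drop covariates whose absolute correlation with an
        already-kept covariate exceeds `cutoff` (illustrative rule only)."""
        corr = np.abs(np.corrcoef(X, rowvar=False))
        kept = []
        for j in range(X.shape[1]):
            if all(corr[j, k] <= cutoff for k in kept):
                kept.append(j)
        return [names[k] for k in kept], X[:, kept]

    rng = np.random.default_rng(1)
    base = rng.standard_normal((100, 3))
    # Fourth covariate is almost a copy of the first one
    X = np.column_stack([base, base[:, 0] + 0.05 * rng.standard_normal(100)])
    print(prune_correlated(X, ["g1", "g2", "g3", "g1_dup"])[0])  # g1_dup dropped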
29

Hannachi, Marwa. "Placement des tâches matérielles de tailles variables sur des architectures reconfigurables dynamiquement et partiellement". Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0297/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Adaptive systems based on Field-Programmable Gate Array (FPGA) architectures can benefit greatly from the high degree of flexibility offered by dynamic partial reconfiguration (DPR). Thanks to DPR, the hardware tasks composing an adaptive system can be allocated and relocated on demand, or depending on a dynamically changing environment. Existing design flows and commercial tools have evolved to meet the requirements of reconfigurable architectures, but they remain limited in functionality; in particular, they do not allow efficient placement and relocation of variable-sized hardware tasks. The main objective of this thesis is to propose new methodologies and approaches that ease the design phase of an adaptive, reconfigurable system and make it operational, valid, optimized and adapted to dynamic changes in the environment. The first contribution deals with the relocation of variable-sized hardware tasks. A design methodology is proposed to address a major problem of relocation mechanisms: storing a single configuration bitstream, so as to reduce memory requirements and increase the reusability of the generated hardware modules. A partitioning technique for the reconfigurable region is applied within this methodology to increase the efficiency of hardware-resource usage when the reconfigurable tasks have variable sizes. The methodology also takes into account communication between the different reconfigurable regions and the static region. To validate it, several case studies are implemented; this validation shows an efficient use of hardware resources and a significant reduction in reconfiguration time. The second part of the thesis presents and details a mathematical formulation for automating the floorplanning of reconfigurable regions in FPGAs. The algorithms presented are based on mixed-integer linear programming (MILP) and automatically define the location, size and shape of the dynamically reconfigurable region. We focus mainly on satisfying the placement constraints of the reconfigurable regions and those related to relocation, while optimizing the hardware resources of the FPGA in the presence of variable-sized tasks. Finally, an evaluation of the proposed approach is presented.
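A toy version of such a floorplanning MILP is sketched below with the PuLP modeller: two rectangular reconfigurable regions must fit inside a device grid without overlapping, each choosing one candidate shape, while total allocated area is minimized. The grid size, the candidate shapes and the big-M non-overlap encoding are assumptions made for the illustration, not the model of the thesis.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

    W, H = 10, 8                       # device grid (columns x rows) -- assumption
    shapes = {                         # candidate (width, height) per region
        "R1": [(2, 6), (3, 4), (4, 3)],
        "R2": [(2, 4), (4, 2)],
    }
    prob = LpProblem("floorplan", LpMinimize)
    x = {r: LpVariable(f"x_{r}", 0, W, cat="Integer") for r in shapes}
    y = {r: LpVariable(f"y_{r}", 0, H, cat="Integer") for r in shapes}
    z = {(r, s): LpVariable(f"z_{r}_{s}", cat=LpBinary)
         for r in shapes for s in range(len(shapes[r]))}

    w, h = {}, {}
    for r, cands in shapes.items():
        prob += lpSum(z[r, s] for s in range(len(cands))) == 1   # pick one shape
        w[r] = lpSum(cands[s][0] * z[r, s] for s in range(len(cands)))
        h[r] = lpSum(cands[s][1] * z[r, s] for s in range(len(cands)))
        prob += x[r] + w[r] <= W       # stay inside the device
        prob += y[r] + h[r] <= H

    # Pairwise non-overlap via a big-M disjunction (left/right/below/above)
    M = max(W, H)
    r1, r2 = "R1", "R2"
    b = [LpVariable(f"b{k}", cat=LpBinary) for k in range(4)]
    prob += lpSum(b) >= 1
    prob += x[r1] + w[r1] <= x[r2] + M * (1 - b[0])
    prob += x[r2] + w[r2] <= x[r1] + M * (1 - b[1])
    prob += y[r1] + h[r1] <= y[r2] + M * (1 - b[2])
    prob += y[r2] + h[r2] <= y[r1] + M * (1 - b[3])

    # Objective: minimize the total area allocated to the regions
    prob += lpSum(cands[s][0] * cands[s][1] * z[r, s]
                  for r, cands in shapes.items() for s in range(len(cands)))
    prob.solve(PULP_CBC_CMD(msg=False))
    print({r: (int(x[r].value()), int(y[r].value())) for r in shapes})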
30

Meneghel, Danilevicz Ian. "Robust linear mixed models, alternative methods to quantile regression for panel data, and adaptive LASSO quantile regression with fixed effects". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST176.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis consists of three chapters on longitudinal data analysis. Linear mixed models are discussed, with both random effects (individual intercepts interpreted as random variables) and fixed effects (individual intercepts treated as unknown constants that must be estimated). Furthermore, robust models (resistant to outliers) and efficient models (with low estimator variability) are proposed in the repeated-measures setting. The second part of the thesis is dedicated to quantile regression, which explores the full conditional distribution of an outcome given its predictors and provides a more general way of dealing with heteroscedastic variables and longitudinal data. The first chapter is motivated by evaluating the statistical association between air-pollution exposure and children's and adolescents' lung capacity over six months. A robust linear mixed model combined with an equally robust principal component analysis is proposed to deal with multicollinearity between covariates and with the impact of extreme observations on the estimates. Huber and Tukey loss functions (examples of M-estimation) are considered to obtain estimators more robust than the least-squares criterion usually used to estimate the parameters of linear mixed models. A finite-sample study is carried out in the case where the covariates follow linear time-series models with or without additive outliers, investigating the impact of time correlation and outliers on the fixed-effect parameter estimates; weights are also introduced to further reduce the bias of the estimates. The study of the real data revealed that the robust principal component analysis exhibits three principal components explaining more than 90% of the total variability. The second principal component, which corresponds to particles smaller than 10 microns, significantly affects respiratory capacity; in addition, biological indicators such as passive smoking have a negative and significant effect on children's lung capacity. The second chapter analyses fixed-effects panel data with three different loss functions. To prevent the number of parameters from growing with the sample size, we propose to penalize each regression method with the least absolute shrinkage and selection operator (LASSO). The asymptotic properties of two of these new techniques are established. A Monte Carlo study is performed for homoscedastic and heteroscedastic models; although the heteroscedastic case is harder to estimate for most statistical methods, the proposed methods perform well in both scenarios, confirming that the proposed quantile-regression methods are robust to heteroscedasticity. Their performance is tested on economic panel data from the Organisation for Economic Co-operation and Development (OECD). The objective of the third chapter is to simultaneously restrict the number of individual regression constants and of explanatory covariates. In addition to the LASSO, an adaptive LASSO is proposed that enjoys oracle properties, i.e. asymptotic selection of the true model when it exists, together with the classical asymptotic-normality property. Monte Carlo simulations are performed in a low-dimensional case (many more observations than parameters) and in a moderately high-dimensional case (comparable numbers of observations and parameters); in both cases the adaptive method performs much better than the non-adaptive ones. Finally, we apply the methodology to a cohort dataset of moderate dimensionality. For each chapter, open-source software is written and made available to the scientific community.
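The Huber loss mentioned above downweights large residuals instead of squaring them. A minimal iteratively reweighted least-squares (IRLS) fit for a plain linear model is sketched below; the tuning constant 1.345 is the usual choice for 95% efficiency under normality, and the sketch illustrates the loss only, not the full robust mixed-model estimator of the thesis.

    import numpy as np

    def huber_irls(X, y, c=1.345, n_iter=50):
        """M-estimation with the Huber loss via IRLS (illustrative sketch)."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]           # least-squares start
        for _ in range(n_iter):
            r = y - X @ beta
            s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
            u = r / (s + 1e-12)
            w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # Huber weights
            WX = X * w[:, None]
            beta = np.linalg.solve(X.T @ WX, WX.T @ y)        # weighted normal eqs
        return beta

    rng = np.random.default_rng(2)
    X = np.column_stack([np.ones(200), rng.standard_normal(200)])
    y = X @ np.array([1.0, 2.0]) + rng.standard_normal(200)
    y[:10] += 15                                              # additive outliers
    print(huber_irls(X, y))                                   # close to [1, 2]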
31

Morice, Erwan. "Fissuration dans les matériaux quasi-fragiles : approche numérique et expérimentale pour la détermination d'un modèle incrémental à variables condensées". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2014. http://tel.archives-ouvertes.fr/tel-01048626.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The failure of quasi-brittle materials, such as ceramics or concretes, can be represented schematically as a succession of nucleation and coalescence stages of micro-cracks. Modelling this failure process is particularly important when studying the strength of concrete structures, and in particular when predicting the permeability of damaged structures. The chosen approach is a multi-scale view in which the global behaviour is characterized by fracture mechanics and the local behaviour is represented by the discrete element method. The model describes cracking through generalized quantities defined within the framework of fracture mechanics. In order to account for the nonlinear aspect of cracking in quasi-brittle materials, the usual kinematics of fracture mechanics is enriched with additional degrees of freedom that represent the nonlinear part of the velocity field; the evolution of the behaviour is then condensed into the evolution of intensity factors. The proposed model predicts the behaviour under proportional and non-proportional mixed-mode I+II loadings. Finally, an experimental campaign aimed at characterizing the cracking behaviour of mortar was carried out. The results show an important role of fatigue cracking. The scale-change method was also applied to the crack-tip velocity fields, confirming that the crack-tip behaviour is well represented by the enriched kinematics.
32

Baragatti, Meïli. "Sélection bayésienne de variables et méthodes de type Parallel Tempering avec et sans vraisemblance". Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX22100/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis is divided into two main parts. In the first part, we propose a Bayesian variable-selection method for probit mixed models. The objective is to select a few relevant variables among tens of thousands while taking into account the design of a study, and in particular the fact that several datasets are merged together. The probit mixed model used is considered as part of a larger hierarchical Bayesian model, and the dataset is introduced as a random effect. The proposed method extends the work of Lee et al. (2003). The first step is to specify the model and the prior distributions; in particular, we use the g-prior of Zellner (1986) for the fixed regression coefficients. In a second step, we use a Metropolis-within-Gibbs algorithm combined with the grouping (or blocking) technique of Liu (1994), a choice with both theoretical and practical advantages. The method is applied to merged microarray datasets of patients with breast cancer. However, it has a limit: the covariance matrix involved in the g-prior must be invertible, and there are two standard cases in which it is singular: when the number of selected variables exceeds the number of observations, and when some variables are linear combinations of others. In such situations we propose to modify the g-prior by introducing a ridge parameter, together with a simple way to choose the associated hyper-parameters. The resulting prior is a compromise between the classical g-prior and a prior assuming independence of the regression coefficients, and is related to a prior previously proposed by Gupta and Ibrahim (2007). In the second part, we develop two new population-based MCMC methods. For complex models with many parameters whose likelihood can nevertheless be computed, the Equi-Energy Sampler (EES) of Kou et al. (2006) appears more efficient than the Parallel Tempering (PT) algorithm introduced by Geyer (1991); however, it is difficult to combine with a Gibbs sampler and requires substantial storage. We propose an algorithm combining PT with exchange moves between chains having similar energy levels, in the spirit of the EES. This adaptation, called Parallel Tempering with Equi-Energy Moves (PTEEM), keeps the original idea that gives the EES its strength while ensuring good theoretical properties and easy use in combination with a Gibbs sampler. Finally, in some complex models the likelihood is analytically or computationally intractable and inference becomes difficult; many likelihood-free methods (Approximate Bayesian Computation) have been developed for such cases. By analogy with Parallel Tempering, we propose a likelihood-free method called ABC-Parallel Tempering, based on MCMC theory, which uses a population of chains and allows exchange moves between them.
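In standard notation (an assumption here, not a quotation from the thesis), the ridge modification described above replaces the possibly singular Gram matrix in Zellner's g-prior by a regularized one:

    % classical g-prior for the fixed regression coefficients
    \beta \mid \sigma^2 \sim \mathcal{N}\left(0,\ g\,\sigma^2 (X^\top X)^{-1}\right)
    % ridge-modified version, proper even when X^\top X is singular
    \beta \mid \sigma^2 \sim \mathcal{N}\left(0,\ g\,\sigma^2 (X^\top X + \lambda I)^{-1}\right)

As \lambda \to 0 the classical g-prior is recovered, while a large \lambda approaches the independence prior, matching the compromise described in the abstract.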
33

Tchapnga, Takoudjou Rodrigue. "Méthodes de modélisation et d'optimisation par recherche à voisinages variables pour le problème de collecte et de livraison avec transbordement". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0052/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis was conducted within the ANR project PRODIGE and focuses on strategies for optimizing transport in general, and road freight transport in particular. The transportation problem supporting this study is the pickup and delivery problem with transshipment, which generalizes several classical transportation problems; transshipment is used as a lever for flexibility and optimization. To study and solve this problem, the analysis is carried out along three axes. The first concerns the development of an analytical model, more precisely a mathematical model in mixed variables. This model provides optimal solutions to the decision maker, but has the disadvantage that its solution time grows exponentially with the size of the problem. This limitation is overcome by the second axis, which solves the studied transportation problem with an approximate optimization method while still guaranteeing satisfactory solutions; the method used is a metaheuristic inspired by variable neighborhood search (VNS). In the third axis, the overall results obtained in the thesis are tested in real transport situations via the PRODIGE project.
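A generic VNS skeleton of the kind alluded to above is sketched below on a toy permutation problem; the two neighborhoods (swap and reversal) and the stopping rule are assumptions made for the illustration, not the operators used in the thesis.

    import random

    def vns(initial, cost, neighborhoods, max_iter=200, seed=0):
        """Basic Variable Neighborhood Search: shake in neighborhood k,
        apply local search, and restart from k=0 on improvement."""
        rng = random.Random(seed)
        best = initial
        for _ in range(max_iter):
            k = 0
            while k < len(neighborhoods):
                cand = neighborhoods[k](best, rng)            # shaking
                cand = local_search(cand, cost, neighborhoods[0], rng)
                if cost(cand) < cost(best):
                    best, k = cand, 0                         # improvement: restart
                else:
                    k += 1                                    # try a larger neighborhood
        return best

    def local_search(sol, cost, move, rng, tries=50):
        for _ in range(tries):
            cand = move(sol, rng)
            if cost(cand) < cost(sol):
                sol = cand
        return sol

    def swap(sol, rng):
        i, j = rng.sample(range(len(sol)), 2)
        s = list(sol); s[i], s[j] = s[j], s[i]; return s

    def reverse(sol, rng):
        i, j = sorted(rng.sample(range(len(sol)), 2))
        return sol[:i] + sol[i:j + 1][::-1] + sol[j + 1:]

    # Toy objective: sort a permutation (cost = number of inversions)
    cost = lambda s: sum(a > b for idx, a in enumerate(s) for b in s[idx + 1:])
    print(vns(random.Random(1).sample(range(12), 12), cost, [swap, reverse]))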
34

Labenne, Amaury. "Méthodes de réduction de dimension pour la construction d'indicateurs de qualité de vie". Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0239/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The purpose of this thesis is to develop and propose new dimension-reduction methods for constructing composite quality-of-life indicators at the municipal scale. The statistical methodology developed emphasizes the multidimensionality of the quality-of-life concept, with particular attention to the treatment of mixed data (quantitative and qualitative variables) and the introduction of environmental conditions. We opt for a variable-clustering approach and for a multi-table method (multiple factor analysis for mixed data). These two methods make it possible to build composite indicators, which we propose as a measure of living conditions at the municipal scale. In order to ease the interpretation of the constructed composite indicators, a bootstrap-based variable-selection method is introduced into multiple factor analysis. Finally, we propose hclustgeo, a clustering method for observations that integrates geographical-proximity constraints, in order to better capture the spatiality of the phenomena involved.
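The idea behind hclustgeo is to feed a hierarchical (Ward-type) clustering a convex combination of a feature dissimilarity and a geographical one. A rough Python transcription of that mixing step is given below; the mixing weight alpha = 0.3 and the toy data are assumptions chosen for the example, and scipy's Ward linkage treats the mixed dissimilarity as if it were Euclidean, whereas the R implementation defines Ward directly on dissimilarities.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(3)
    features = rng.standard_normal((40, 5))      # socio-economic variables (toy)
    coords = rng.uniform(0, 100, size=(40, 2))   # municipality coordinates (toy)

    d_feat = pdist(features)                     # feature-space dissimilarity
    d_geo = pdist(coords)                        # geographical dissimilarity
    d_feat, d_geo = d_feat / d_feat.max(), d_geo / d_geo.max()  # same scale

    alpha = 0.3                                  # weight of the geographic constraint
    d_mix = (1 - alpha) * d_feat + alpha * d_geo

    # Ward-style agglomeration on the mixed dissimilarity, cut into 4 clusters
    labels = fcluster(linkage(d_mix, method="ward"), t=4, criterion="maxclust")
    print(labels)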
35

Jackson, Tracey. "Applying the Multiple Constituents’ Model and Social Justice Variables to Determine the Constituents’ Perception of the Virginia Putative Father Registry". VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/2974.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
A putative father registry represents a legal option for unmarried males who wish to secure legal notice regarding an adoption proceeding for a child they may have fathered. Putative father registries must balance the interests of the putative father against those of the child, the birth mother, and the adoptive parents. This study utilized a framework adapted from the Multiple Constituency Model and used social justice, as indicated by distributive justice and procedural justice, to determine the perceptions of the Virginia Putative Father Registry among its primary constituency groups. The research utilized a mixed-methods approach to analyze qualitative data from focus groups in combination with quantitative results from an online survey. The results of the qualitative analysis revealed eight principal findings. First, nearly all putative fathers were unaware of the existence of putative father registries in general, or of the Virginia Putative Father Registry in particular. Second, putative fathers were unaware that sex constitutes legal notice in Virginia. Third, once aware of the concept, the focus-group males had positive opinions about putative father registries and the Virginia Putative Father Registry. Fourth, putative fathers preferred to receive notice regarding an alleged child through the mail. Fifth, putative fathers had a negative opinion of providing notice by posting it in newspapers. Sixth, promoting awareness of putative father registries needs to target male audiences and should preferably have an interactive component. Seventh, putative fathers expressed strong positive feelings about knowing that a child they may have fathered was being placed for adoption. Finally, single male participants in the focus groups were more convinced of the importance of a putative father registry than married male participants. Quantitative survey data indicated that putative fathers were perceived as the primary constituent group that would benefit the most from a putative father registry. The safeguard variable was significant as it relates to occupation, putative fathers, and birth mothers. The study also found that survey respondents considered the general public unaware of putative father registries, a perception borne out in the focus-group results.
36

Nizard, David. "Programmation mathématique non convexe non linéaire en variables entières : un exemple d'application au problème de l'écoulement de larges blocs d'actifs". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG015.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Mathematical programming provides a framework for studying and solving optimization problems, constrained or not, and has been an active branch of applied mathematics since the second half of the 20th century. The aim of this thesis is to solve a nonconvex, nonlinear, pure-integer mathematical program under a linear equality constraint. The problem, although studied in this dissertation only in the deterministic case, stems from finance, where it is known as the large block sale problem, or optimal portfolio liquidation. It consists in selling a (very large) known quantity M of a financial asset in finite time, discretized into N points in time, while maximizing the proceeds of the sale. At each point in time, the sale price is modelled by a penalty function, which reflects the antagonistic behaviour of the market in response to the progressive selling flow. From the standpoint of mathematical programming, this class of problems is NP-hard according to Garey and Johnson, because the nonconvexity of the objective function requires adapting the classical resolution methods (branch and bound, cuts) to integer variables; moreover, as no general resolution method is known for this class, the methods used must be tailored to the specifics of the problem. The first part of the thesis is devoted to solving the problem, exactly or approximately, by dynamic programming. We prove that Bellman's equation applies to the problem studied, which enables small instances to be solved exactly and quickly. For medium and large instances, where dynamic programming is no longer available and/or efficient, we provide lower bounds through various heuristics relying on dynamic programming or on local-search methods, whose quality (tightness, CPU time) and complexity are studied. The second part focuses on an equivalent reformulation of the problem in factored form and on its convex relaxation using McCormick's inequalities. We introduce two exact branch-and-bound algorithms that return the global optimum or bound it within a limited time. In a third part, dedicated to numerical experiments, we compare our resolution methods with each other and with state-of-the-art solvers. We observe in particular that our bounds are often close to, and sometimes better than, those of the free and commercial solvers used as benchmarks (e.g. LocalSolver, Scip, Baron, Couenne and Bonmin). In addition, we show that our resolution methods apply to any sufficiently regular, increasing penalty function, including functions that are currently not handled by some solvers even though they make economic sense for the problem, such as trigonometric functions or the arctangent. Numerically, dynamic programming solves the problem to optimality, within a minute, for instances of size N < 100 and M < 10,000. The proposed heuristics provide very tight lower bounds, which often reach the optimum, for N < 1,000 and M < 100,000. By contrast, the resolution of the factored problem is efficient only for N < 10 and M < 1,000, although we obtain relatively good upper bounds. Finally, for large instances (M > 1,000,000), our heuristics based on dynamic programming, when available, return the best lower bounds, but we cannot bound the optimum precisely since our upper bounds are not tight.
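The Bellman recursion mentioned above can be written V_t(m) = max over 0 <= v <= m of { v * p(v) + V_{t+1}(m - v) }, with V_N(0) = 0 and all remaining inventory infeasible at the horizon. The sketch below solves a toy instance exactly in O(N * M^2); the linear penalty p(v) = 10 - 0.05 v is an assumption chosen purely for the example.

    import numpy as np

    def liquidate(N, M, price):
        """Exact DP for selling M units over N periods, maximizing revenue.
        price(v) is the unit price obtained when v units are sold at once."""
        V = np.full((N + 1, M + 1), -np.inf)
        V[N, 0] = 0.0                      # all units must be sold by the horizon
        choice = np.zeros((N, M + 1), dtype=int)
        for t in range(N - 1, -1, -1):
            for m in range(M + 1):
                for v in range(m + 1):     # sell v units at time t
                    val = v * price(v) + V[t + 1, m - v]
                    if val > V[t, m]:
                        V[t, m], choice[t, m] = val, v
        plan, m = [], M                    # recover the optimal schedule
        for t in range(N):
            plan.append(choice[t, m]); m -= choice[t, m]
        return V[0, M], plan

    # Toy decreasing-price impact (assumption): even splitting is optimal here
    print(liquidate(N=5, M=60, price=lambda v: 10 - 0.05 * v))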
37

Lobato, Rafael Durbano. "Algoritmos para problemas de programação não-linear com variáveis inteiras e contínuas". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-06072009-130912/.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Many optimization problems contain both integer and continuous variables and can be modeled as mixed-integer nonlinear programming problems. Problems of this nature appear frequently in chemical engineering and include, for instance, process synthesis, the design of distillation columns, heat-exchanger network synthesis, and oil and gas production. In this work, we present algorithms based on Augmented Lagrangians and branch and bound for solving mixed-integer nonlinear programming problems. Two approaches are considered. In the first, an Augmented Lagrangian algorithm is used to solve the nonlinear programming problems that appear at each node of the branch-and-bound method. In the second, we use branch and bound to solve the box-constrained problems with integer variables that appear as subproblems of the Augmented Lagrangian algorithm. Both algorithms are guaranteed to find an optimal solution for convex problems and have appropriate strategies for dealing with nonconvex problems, although there is no guarantee of optimality in that case. We present a problem of packing rectangles within an arbitrary convex region and propose models for it that result in nonlinear programs with integer and continuous variables. We performed numerical experiments and compared the results reached by the method described in this work with those obtained by other methods. We also performed experiments with mixed-integer nonlinear programming problems from the literature and compared the performance of our method with that of another publicly available method.
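A bare-bones version of the Augmented Lagrangian outer loop used in such frameworks is sketched below on a toy equality-constrained problem; the integrality handling by branch and bound is omitted, and the problem data and penalty schedule are assumptions made for the example.

    import numpy as np
    from scipy.optimize import minimize

    # Toy relaxation: min (x0 - 2)^2 + (x1 - 1)^2  s.t.  x0 + x1 = 2
    f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
    h = lambda x: x[0] + x[1] - 2.0          # equality constraint h(x) = 0

    lam, rho = 0.0, 1.0
    x = np.zeros(2)
    for _ in range(15):
        # Inner problem: minimize the augmented Lagrangian for fixed (lam, rho)
        aug = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
        x = minimize(aug, x).x
        lam += rho * h(x)                    # first-order multiplier update
        rho = min(10.0 * rho, 1e6)           # tighten the penalty
    print(x, h(x))                           # -> near [1.5, 0.5], h ~ 0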
38

Anderson, Mary E. "Financial Aid and Other Selected Variables Related to the Retention of First-Time Full-Time College Freshmen and their Persistence to Graduation Within Six Years at a Private Historically Black College or University". DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2016. http://digitalcommons.auctr.edu/cauetds/43.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This mixed methods research study used a QUAN-QUAL Model to examine the impact that various factors have on student persistence to graduation in postsecondary education. A documentary research approach was used to collect secondary or existing data from the student information system for first-time full-time freshmen in the Fall 2008 Cohort who graduated within six years. The size of the sample for the quantitative inquiry was 211. A correlational research design was employed to determine if a significant relationship existed between the dependent variables—Persistence to Graduation within Six Years (YEAR) and Final GPA at Time of Degree Completion (FIN GPA)—and the independent variables, Financial Aid Awarded (FINAID), High School GPA (HSGPA), ACT Composite Score (ACT COMP), SAT Combined Score (SAT COMB), First-Year First-Semester GPA (FYFS GPA), First-Year Cumulative GPA (FY GPA), Adjusted Gross Income (AGI), and On-Campus or Off-Campus Housing (ON-OFF CAMP). Descriptive statistical analyses were used to describe, summarize, and interpret the data collected. A case study research approach was used to gain an in-depth understanding into the real-life experiences of a small group of students who did not graduate within six years and who were still persisting toward degree attainment. The Graduation: Survey of Undergraduate Persistence questionnaire was distributed to the participants to gain a holistic understanding of the impact that family, faculty, peers, financial resources, and other environmental influences had on their experiences while persisting toward a college degree. Four questionnaires were completed and returned, followed by three in-depth interviews. The findings from the survey and interviews on the role of financial aid supported the quantitative findings on the relationship between financial aid awarded and persistence to graduation. In the quantitative data analysis, persistence to graduation within six years was significant and positively related to the number of occurrences of financial aid awarded. As the number of financial aid occurrences decreased, the number of years to graduate decreased. Alternatively, an increase in the number of financial aid occurrences resulted in an increase in years to graduate. Postsecondary educational leaders and P-12 educational leaders can utilize the study in forming partnerships to foster collaboration and a “move to action” in preparing students to do college-level course work upon graduating from high school.
39

Pham, Viet Nga. "Programmation DC et DCA pour l'optimisation non convexe/optimisation globale en variables mixtes entières : Codes et Applications". Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00833570.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Based on the theoretical and algorithmic tools of DC programming and DCA, the research in this thesis deals with local and global approaches for nonconvex optimization and global mixed-integer optimization. The thesis consists of five chapters. The first chapter presents the foundations of DC programming and DCA, together with branch-and-bound (B&B) techniques for global optimization (using DC relaxation to compute lower bounds on the optimal value); it also contains results on exact penalization for mixed-integer programming. The second chapter is devoted to the development of a DCA method for solving an NP-hard class of nonconvex nonlinear mixed-integer programs. These nonconvex optimization problems are first reformulated as DC programs via DC-programming penalty techniques, so that the resulting DC programs can be efficiently solved by suitably adapted DCA and B&B algorithms. As a first application in financial optimization, we modeled the portfolio-management problem under concave transaction costs and applied DCA and B&B to solve it. In the following chapter we study the modeling of the problem of minimizing nonconvex, discontinuous transaction costs in portfolio management in two forms: the first is a DC program obtained by approximating the objective function of the original problem by a polyhedral DC function, and the second is an equivalent mixed 0-1 DC program; we present DCA, B&B, and a combined DCA-B&B algorithm for their resolution. Chapter 4 studies the exact solution of the mixed-binary multi-objective problem and presents two concrete applications of the proposed method. The last chapter addresses two challenging problems: the bounded-integer linear least-squares problem and Nonnegative Matrix Factorization (NMF). NMF is particularly important through its numerous and diverse applications, while important applications of the former are found in telecommunications. Numerical simulations demonstrate the robustness, speed (hence scalability), performance and globality of DCA compared with existing methods.
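The basic DCA iteration for a decomposition f = g - h alternates a subgradient step on h with a convex subproblem: take y_k in the subdifferential of h at x_k, then let x_{k+1} minimize g(x) - <y_k, x>. A one-dimensional toy example is sketched below; the DC decomposition chosen here is an assumption made for illustration.

    import numpy as np

    # Toy DC function: f(x) = x^2 - 2|x|  with g(x) = x^2 and h(x) = 2|x|.
    # Global minima at x = +1 and x = -1, where f = -1.
    g_argmin = lambda y: y / 2.0          # argmin_x x^2 - y*x (closed form)
    h_subgrad = lambda x: 2.0 * np.sign(x) if x != 0 else 2.0

    x = 0.3                               # starting point
    for k in range(10):
        y = h_subgrad(x)                  # y_k in the subdifferential of h
        x = g_argmin(y)                   # convex subproblem, solved exactly
    print(x, x**2 - 2 * abs(x))           # -> 1.0, -1.0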
40

Ren, Xuchun. "Novel computational methods for stochastic design optimization of high-dimensional complex systems". Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1738.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The primary objective of this study is to develop new computational methods for robust design optimization (RDO) and reliability-based design optimization (RBDO) of high-dimensional, complex engineering systems. Four major research directions, all anchored in polynomial dimensional decomposition (PDD), were defined to meet this objective: (1) new sensitivity analysis methods for RDO and RBDO; (2) novel optimization methods for solving RDO problems; (3) novel optimization methods for solving RBDO problems; and (4) a novel scheme and formulation for stochastic design optimization problems with both distributional and structural design parameters. The major achievements are as follows.

First, three new computational methods were developed for calculating design sensitivities of the statistical moments and reliability of high-dimensional complex systems subject to random inputs. The first method integrates the PDD of a multivariate stochastic response function with score functions, leading to analytical expressions for the design sensitivities of the first two moments. The second and third methods, relevant to probability distribution or reliability analysis, exploit two distinct combinations built on PDD: the PDD-SPA method, entailing the saddlepoint approximation (SPA) and score functions, and the PDD-MCS method, utilizing the Monte Carlo simulation (MCS) embedded in the PDD approximation and score functions. In all three methods, the statistical moments or failure probabilities and their design sensitivities are determined concurrently from a single stochastic analysis or simulation.

Second, four new methods were developed for RDO of complex engineering systems. They involve the PDD of a high-dimensional stochastic response for statistical moment analysis, a novel integration of PDD and score functions for calculating the second-moment sensitivities with respect to the design variables, and standard gradient-based optimization algorithms. Depending on how statistical moment and sensitivity analyses are dovetailed with an optimization algorithm, the methods encompass direct, single-step, sequential, and multi-point single-step design processes.

Third, two new methods were developed for RBDO of complex engineering systems. They involve an adaptive-sparse polynomial dimensional decomposition (AS-PDD) of a high-dimensional stochastic response for reliability analysis, a novel integration of AS-PDD and score functions for calculating the sensitivities of the failure probability with respect to the design variables, and standard gradient-based optimization algorithms, resulting in a multi-point, single-step design process. Depending on how the failure probability and its design sensitivities are evaluated, the two methods exploit two distinct combinations built on AS-PDD: the AS-PDD-SPA method, entailing SPA and score functions, and the AS-PDD-MCS method, utilizing the MCS embedded in the AS-PDD approximation and score functions.

In addition, a new method, named the augmented PDD method, was developed for RDO and RBDO subject to mixed design variables, comprising both distributional and structural design variables. It comprises a new augmented PDD of a high-dimensional stochastic response for statistical moment and reliability analyses; an integration of the augmented PDD, score functions, and finite-difference approximation for calculating the sensitivities of the first two moments and the failure probability with respect to distributional and structural design variables; and standard gradient-based optimization algorithms, leading to a multi-point, single-step design process. The innovative formulations of statistical moment and reliability analyses, design sensitivity analysis, and optimization achieve design solutions that are not only highly accurate but also computationally efficient, making these methods capable of industrial-scale design optimization with numerous design variables.
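To make the score-function idea concrete, here is a minimal sketch, not taken from the thesis, of how a moment sensitivity can be estimated concurrently with the moment itself from a single set of samples; the response function h and all parameter values are hypothetical stand-ins for a PDD surrogate of an expensive model.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    # Hypothetical response function standing in for a PDD surrogate.
    return x**3 + 2.0 * x

mu, sigma = 1.0, 0.5          # hypothetical design variable and fixed std
x = rng.normal(mu, sigma, 200_000)

# Score function of N(mu, sigma^2) with respect to the design variable mu.
score_mu = (x - mu) / sigma**2

mean_est = h(x).mean()                 # first moment E[h(X)]
dmean_dmu = (h(x) * score_mu).mean()   # sensitivity d E[h(X)] / d mu

# Analytical check: E[X^3 + 2X] = mu^3 + 3*mu*sigma^2 + 2*mu,
# so the exact sensitivity is 3*mu^2 + 3*sigma^2 + 2 = 5.75.
print(mean_est, dmean_dmu)
```

The point of the identity is that the same samples (or the same PDD approximation) yield both the moment and its derivative, which is what allows the concurrent evaluation claimed above.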
41

Mitjana, Florian. "Optimisation topologique de structures sous contraintes de flambage". Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30343/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Topology optimization aims to design a structure by seeking the optimal material layout within a given design space, making it possible to propose innovative optimal designs. This thesis focuses on topology optimization for structural design problems that take buckling constraints into account. In a wide variety of engineering fields, innovative structural design is crucial, and lightening structures during the design phase holds a prominent place as a way to reduce manufacturing costs; the goal is therefore often to minimize the mass of the structure to be designed. Regarding the constraints, in addition to the conventional mechanical constraints (compression, tension), it is necessary to account for buckling phenomena, which are characterized by an amplification of the deformations of the structure and a potential annihilation of its capacity to support the applied loads. In order to address a wide range of topology optimization problems, we consider the two types of structural representation: lattice structures and continuous structures. In the framework of lattice structures, the objective is to minimize the mass by optimizing the number of elements of the structure and the dimensions of the cross sections associated with these elements. We consider structures made of frame elements and formulate the problem as a mixed-integer nonlinear program. In order to obtain a manufacturable structure, we propose a cost function combining the mass and the sum of the second moments of inertia of each frame element. We developed an algorithm adapted to the optimization problem considered, and the numerical results show that the proposed approach leads to significant mass savings over existing approaches. In the case of continuous structures, topology optimization discretizes the design domain and determines which elements of this discretized domain must contain material, thus defining a discrete optimization problem. [...]
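As an illustration of the kind of combined objective described above (mass plus the sum of second moments of inertia), the following sketch evaluates such a cost for a toy frame. All data below are hypothetical; the thesis' actual formulation is a mixed-integer nonlinear program with buckling constraints, not this bare function.

```python
import numpy as np

rho = 7850.0                             # steel density, kg/m^3
lengths = np.array([1.0, 1.5, 2.0])      # hypothetical element lengths, m
weight = 1e6                             # hypothetical mass/inertia trade-off

def cost(widths, heights, active):
    """widths/heights: rectangular cross-section sizes (m);
    active: 0/1 selection of which frame elements are kept."""
    areas = widths * heights
    inertias = widths * heights**3 / 12.0   # second moment of a rectangle
    mass = rho * np.sum(active * areas * lengths)
    return mass + weight * np.sum(active * inertias)

print(cost(np.array([0.02, 0.02, 0.03]),
           np.array([0.04, 0.05, 0.06]),
           np.array([1, 1, 0])))
```

Penalizing the summed inertias discourages needlessly bulky sections, which is one plausible reading of the manufacturability motivation given in the abstract.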
42

Tran, Vuong. "Bayesian variable selection in linear mixed effects models". Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139069.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Variable selection techniques have been well researched and applied in many different fields. There is a rich literature on Bayesian variable selection in linear regression models, but only a small part of it addresses mixed effects. The topic of this thesis is Bayesian variable selection in linear mixed effects models, approached by inducing different shrinkage priors. Both unimodal shrinkage priors and spike-and-slab priors are used and compared. The distributions chosen, either as unimodal priors or as components of the spike-and-slab priors, are the Normal, Student-t and Laplace distributions. Both simulation studies and a real-data study were carried out to investigate and evaluate how well the chosen distributions perform as shrinkage priors. Results from the real dataset show that spike-and-slab priors yield a stronger shrinkage effect than unimodal priors do. However, inducing spike-and-slab priors carelessly, without considering whether the dataset is sufficiently large, may lead to poor model parameter estimates. Results from the simulation studies indicate that a spike-and-slab prior with Laplace distributions for both the spike and the slab components yields the strongest shrinkage effect among the investigated priors.
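For concreteness, here is a minimal sketch of the Laplace spike-and-slab prior the abstract singles out: a narrow Laplace "spike" near zero and a wide Laplace "slab". The scales and inclusion probability below are hypothetical, not the thesis' fitted values.

```python
import numpy as np
from scipy.stats import laplace, bernoulli

spike_scale, slab_scale, p_slab = 0.01, 2.0, 0.5   # hypothetical settings

def prior_draw(size, rng=np.random.default_rng(0)):
    # Latent indicator: 1 means the coefficient comes from the slab.
    z = bernoulli.rvs(p_slab, size=size, random_state=rng)
    scales = np.where(z == 1, slab_scale, spike_scale)
    return laplace.rvs(scale=scales, random_state=rng)

def prior_density(beta):
    # Two-component mixture density of the spike-and-slab prior.
    return (1 - p_slab) * laplace.pdf(beta, scale=spike_scale) \
           + p_slab * laplace.pdf(beta, scale=slab_scale)

print(prior_draw(5), prior_density(0.0))
```

Coefficients drawn from the spike are shrunk essentially to zero, while slab draws remain large, which is the mechanism behind the stronger shrinkage the abstract reports.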
43

Socha, Krzysztof. "Ant colony optimization for continuous and mixed-variable domains". Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210533.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
In this work, we present a way to extend Ant Colony Optimization (ACO) so that it can be applied to both continuous and mixed-variable optimization problems. We first demonstrate how ACO may be extended to continuous domains: we describe the proposed algorithm, discuss the design decisions made, and position it among other metaheuristics.

Following this, we present the results of extensive simulations and testing. We compare the results obtained by the proposed algorithm on typical benchmark problems with those obtained by other methods used for tackling continuous optimization problems in the literature. We also investigate how our algorithm performs on a real-world problem from the medical field, using it to train a neural network for pattern classification in disease recognition.

Following an extensive analysis of the performance of ACO extended to continuous domains, we show how it may be further adapted to handle continuous and discrete variables simultaneously, thereby introducing the first native mixed-variable version of an ACO algorithm. We then analyze and compare the performance of the continuous and mixed-variable ACO algorithms on different benchmark problems from the literature. Through this research, we gain insight into the relationship between the formulation of mixed-variable problems and the best methods to tackle them. Furthermore, we demonstrate that the performance of ACO on various real-world mixed-variable optimization problems from the mechanical engineering field is comparable to the state of the art.
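Continuous extensions of ACO of this kind are commonly realised by sampling new solutions around an archive of good ones. The sketch below illustrates that general archive-based Gaussian sampling scheme (using the q and xi parameter names common in the ACOR literature) on a hypothetical sphere objective; it is a simplified illustration, not Socha's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
k, dim, q, xi = 10, 2, 0.3, 0.85      # archive size and ACOR-style parameters

def f(x):                              # hypothetical objective to minimize
    return np.sum(x**2)

archive = rng.uniform(-5, 5, (k, dim))
archive = archive[np.argsort([f(s) for s in archive])]

# Rank-based Gaussian weights: better-ranked solutions guide more ants.
ranks = np.arange(1, k + 1)
w = np.exp(-(ranks - 1)**2 / (2 * (q * k)**2)) / (q * k * np.sqrt(2 * np.pi))
w /= w.sum()

def sample_solution():
    l = rng.choice(k, p=w)             # choose a guiding archive solution
    # Per-coordinate std: mean distance from the guide to the archive.
    sigma = xi * np.abs(archive - archive[l]).sum(axis=0) / (k - 1)
    return rng.normal(archive[l], sigma)

for _ in range(200):                   # simple archive-update loop
    x = sample_solution()
    worst = np.argmax([f(s) for s in archive])
    if f(x) < f(archive[worst]):
        archive[worst] = x
        archive = archive[np.argsort([f(s) for s in archive])]

print(archive[0], f(archive[0]))
```

A mixed-variable variant would sample discrete coordinates from a categorical distribution built on the same archive while keeping the Gaussian sampling for continuous coordinates.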
Doctorate in Engineering Sciences

44

Lan, Lan. "Variable Selection in Linear Mixed Model for Longitudinal Data". NCSU, 2006. http://www.lib.ncsu.edu/theses/available/etd-05172006-211924/.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Fan and Li (JASA, 2001) proposed a family of variable selection procedures for certain parametric models via a nonconcave penalized likelihood approach, in which variable selection and parameter estimation are performed simultaneously, and the procedures were shown to have the oracle property. In this thesis, we extend the nonconcave penalized likelihood approach to linear mixed models for longitudinal data. Two new approaches are proposed to select significant covariates and to estimate the fixed-effect parameters and variance components. In particular, we show that the new approaches also possess the oracle property when the tuning parameter is chosen appropriately. We assess the performance of the proposed approaches via simulation and apply the procedures to data from the Multicenter AIDS Cohort Study.
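The nonconcave penalty underlying Fan and Li's procedures is the SCAD penalty. As a reference point, here is a small sketch of it, using the a = 3.7 value their paper recommends and an arbitrary tuning parameter lambda (in practice chosen by a data-driven criterion):

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001), evaluated elementwise:
    linear near zero, quadratic in between, constant for large |beta|."""
    b = np.abs(beta)
    linear = lam * b
    quadratic = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
    constant = lam**2 * (a + 1) / 2
    return np.where(b <= lam, linear,
                    np.where(b <= a * lam, quadratic, constant))

print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
```

Because the penalty flattens out for large coefficients, it shrinks small estimates to exactly zero while leaving large ones nearly unbiased, which is what makes the oracle property attainable.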
45

Ishizaki, Masato. "Mixed-initiative natural language dialogue with variable communicative modes". Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/518.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
As speech and natural language processing technology advance, the field has reached a stage where dialogue control, or initiative, can be studied in order to realise usable and friendly human-computer interface programs such as computer dialogue systems. One of the major open questions concerning dialogue initiative is who should take the initiative, and when. This thesis tackles the problem with three approaches: (1) human dialogue data is examined for its local dialogue structures; (2) a dialogue manager that handles the variations in human dialogue data concerning initiative is proposed and implemented, and experimental results are obtained by having the implemented dialogue managers, working with a parser and a generator, exchange natural language messages with each other; and (3) a mathematical model is constructed and used to analyse who should take the dialogue initiative and when. The first study shows that human dialogue data varies in the number of utterance units per turn and in utterance types, independently of differences in dialogue initiative. The second study shows that dialogues in which the initiative constantly alternates (mixed-initiative dialogues) are not always more efficient than those in which the initiative does not change (non-mixed-initiative dialogues). The third study concludes that, under the assumption that both speakers solve a problem under similar conditions, mixed-initiative dialogues are more efficient than non-mixed-initiative dialogues when initiating utterances reduce the problem search space more efficiently than responding utterances. This conclusion can be simplified to the condition that an agent should take the dialogue initiative when they can make an effective utterance, such as when they have more knowledge than their partner with respect to the current goal.
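A toy calculation (not Ishizaki's actual model) makes the third study's condition tangible: if each initiating utterance cuts the remaining search space by a larger factor than a responding utterance does, then taking the initiative shortens the dialogue.

```python
import math

def utterances_needed(space_size, reduction_factor):
    # Utterances required to narrow `space_size` candidate solutions
    # down to one, if each utterance divides the space by the factor.
    return math.ceil(math.log(space_size, reduction_factor))

space = 10_000                          # hypothetical problem search space
print(utterances_needed(space, 4))      # initiating speaker: 7 utterances
print(utterances_needed(space, 2))      # responding speaker: 14 utterances
```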
46

Luinstra, Wayne Foster. "The effect of process variables on mixer-settler performance". Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5923.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Omer, Jérémy Jean Guy. "Modèles déterministes et stochastiques pour la résolution numérique du problème de maintien de séparation entre aéronefs". Thesis, Toulouse, ISAE, 2013. http://www.theses.fr/2013ESAE0007/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis belongs to the field of mathematical programming applied to the separation of aircraft stabilised at the same altitude. The primary objective is to develop algorithms for the resolution of air conflicts; the expected benefit is to increase the capacity of the airspace in order to reduce delays and let more aircraft follow their optimal trajectories. Moreover, because meteorological forecasts and trajectory predictions are inexact, uncertainty on the data is an important feature of the problem. The approach followed here focuses first on the deterministic problem, which is much simpler to study. To this end, four models based on nonlinear programming and on mixed-integer linear programming are developed, including a criterion reflecting fuel consumption and flight duration. Their comparison on a benchmark of scenarios shows the relevance of using an approximate linear model for the study of the problem with uncertainties. A random wind field, correlated in space and time, as well as speed measurements with Gaussian errors, are then taken into account. As a first step, the deterministic problem is adapted by computing a margin from an approximate calculation of conflict probabilities and adding it to the reference separation norm. Finally, a stochastic formulation with recourse is developed, in which the random errors are explicitly included in the model to account for the possibility of ordering recourse manoeuvres when the observed errors cause new conflicts.
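A one-dimensional sketch of the margin idea, under the simplifying assumption of a Gaussian error on the predicted miss distance; the error level and tolerance below are hypothetical, not the thesis' calibrated values.

```python
from scipy.stats import norm

SEP_NM = 5.0    # standard en-route horizontal separation norm, NM
sigma = 0.8     # hypothetical std. dev. of the miss-distance error, NM
eps = 0.05      # hypothetical tolerated conflict probability

# Inflate the separation norm so that P(actual distance < SEP_NM) <= eps
# when the predicted miss distance carries a Gaussian error.
margin = norm.ppf(1 - eps) * sigma
print(f"required predicted separation: {SEP_NM + margin:.2f} NM")
```

Planning trajectories against the inflated norm lets a deterministic model absorb part of the uncertainty before the full stochastic formulation with recourse is brought in.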
48

Alabiso, Audry. "Linear Mixed Model Selection by Partial Correlation". Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1587142724497829.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Vanderlinde, Jeferson Back [UNESP]. "Planejamento da expansão de sistemas de transmissão usando técnicas especializadas de programação inteira mista". Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/152089.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
In this research, the theoretical analysis and computational implementation of specialized primal simplex (PSA) and dual simplex (DSA) algorithms for bounded variables are considered. These algorithms were incorporated into a branch-and-bound (B&B) algorithm in order to solve the transmission network expansion planning (TNEP) problem. The TNEP problem is modeled using the transportation model and the linear disjunctive model, which produces a mixed-integer linear programming (MILP) problem. After relaxing the integrality of the investment variables of the original MILP problem, the PSA is used to solve the initial linear programming (LP) problem; a strategy was also implemented to reduce the number of artificial variables added to the LP problem, and consequently the number of PSA iterations. Starting from the optimal basis of the initial LP, the DSA is used to reoptimize efficiently the subproblems generated by the B&B algorithm, removing the need to solve each subproblem from scratch and thereby reducing CPU time and memory consumption. This research presents the proposed optimization approach and its computational implementation in the FORTRAN programming language, which operates independently of any commercial solver.
50

Vanderlinde, Jeferson Back. "Planejamento da expansão de sistemas de transmissão usando técnicas especializadas de programação inteira mista /". Ilha Solteira, 2017. http://hdl.handle.net/11449/152089.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Advisor: Rubén Augusto Romero Lázaro
In this research, the theoretical analysis and computational implementation of specialized primal simplex (PSA) and dual simplex (DSA) algorithms for bounded variables are considered. These algorithms were incorporated into a branch-and-bound (B&B) algorithm in order to solve the transmission network expansion planning (TNEP) problem. The TNEP problem is modeled using the transportation model and the linear disjunctive model, which produces a mixed-integer linear programming (MILP) problem. After relaxing the integrality of the investment variables of the original MILP problem, the PSA is used to solve the initial linear programming (LP) problem; a strategy was also implemented to reduce the number of artificial variables added to the LP problem, and consequently the number of PSA iterations. Starting from the optimal basis of the initial LP, the DSA is used to reoptimize efficiently the subproblems generated by the B&B algorithm, removing the need to solve each subproblem from scratch and thereby reducing CPU time and memory consumption. This research presents the proposed optimization approach and its computational implementation in the FORTRAN programming language, which operates independently of any commercial solver.
Doctorate
