Dissertations / Theses on the topic 'Kernel linear model'


Consult the top 25 dissertations / theses for your research on the topic 'Kernel linear model.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Roberts, Gareth James. "Monitoring land cover dynamics using linear kernel-driven BRDF model parameter temporal trajectories." Thesis, University College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407145.

Full text
2

Hu, Zonghui. "Semiparametric functional data analysis for longitudinal/clustered data: theory and application." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3088.

Full text
Abstract:
Semiparametric models play important roles in the field of biological statistics. In this dissertation, two types of semiparametric models are studied. One is the partially linear model, where the parametric part is a linear function; we investigate the two common estimation methods for partially linear models when the data are correlated, as in longitudinal or clustered designs. The other is a semiparametric model in which a latent covariate is incorporated into a mixed effects model; we propose a semiparametric approach for estimating this model and apply it to a study of colon carcinogenesis. First, we study the profile-kernel and backfitting methods in partially linear models for clustered/longitudinal data. For independent data, despite the potential root-n inconsistency of the backfitting estimator noted by Rice (1986), the two estimators have the same asymptotic variance matrix, as shown by Opsomer and Ruppert (1999). In this work, theoretical comparisons of the two estimators for multivariate responses are investigated. We show that, for correlated data, backfitting often produces a larger asymptotic variance than the profile-kernel method; that is, in addition to its bias problem, the backfitting estimator does not attain the asymptotic efficiency of the profile-kernel estimator when the data are correlated. Consequently, the common practice of using the backfitting method to compute profile-kernel estimates is no longer advised. We illustrate this in detail by following Zeger and Diggle (1994) and Lin and Carroll (2001), with a working-independence covariance structure for nonparametric estimation and a correlated covariance structure for parametric estimation. Numerical performance of the two estimators is investigated through a simulation study, and their application to an ophthalmology dataset is also described. Next, we study a mixed effects model where the main response and covariate variables are linked through the positions at which they are measured. For technical reasons, however, they are not measured at the same positions. We propose a semiparametric approach for this misaligned-measurements problem and derive the asymptotic properties of the semiparametric estimators under reasonable conditions. An application of the semiparametric method to a colon carcinogenesis study is provided. We find that, compared with a corn oil supplemented diet, a fish oil supplemented diet tends to inhibit the increase of bcl-2 (oncogene) gene expression in rats as the amount of DNA damage increases, and thus promotes apoptosis.
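As a hedged illustration of the profile-kernel idea in this abstract (partial out the nonparametric effect of t by kernel smoothing, then estimate the linear part from residuals), here is a minimal numpy sketch on simulated data; the model, bandwidth and sample size are assumptions of the example, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

def nw_smooth(t, values, h):
    # Nadaraya-Watson smooth of `values` against t (Gaussian kernel)
    d = (t[:, None] - t[None, :]) / h
    W = np.exp(-0.5 * d**2)
    W /= W.sum(axis=1, keepdims=True)
    return W @ values

# simulated partially linear model: y = X beta + f(t) + noise
n = 400
t = rng.uniform(0, 1, n)
X = rng.normal(size=(n, 2)) + np.c_[t, -t]   # X correlated with t
beta = np.array([1.5, -2.0])
f = np.sin(2 * np.pi * t)
y = X @ beta + f + 0.1 * rng.normal(size=n)

# profile-kernel step: smooth t out of both y and X, then regress residuals
h = 0.08
y_res = y - nw_smooth(t, y, h)
X_res = X - nw_smooth(t, X, h)
beta_hat, *_ = np.linalg.lstsq(X_res, y_res, rcond=None)

# recover the nonparametric part from the linear-part residuals
f_hat = nw_smooth(t, y - X @ beta_hat, h)
print(beta_hat)
```

Under working independence the same residual regression applies per cluster; the correlated-covariance refinements the abstract compares are beyond this sketch.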
3

Kayhan, Belgin. "Parameter Estimation in Generalized Partial Linear Models with Tikhonov Regularization." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612530/index.pdf.

Full text
Abstract:
Regression analysis refers to techniques for modeling and analyzing several variables in statistical learning. There are various types of regression models. In our study, we analyze Generalized Partial Linear Models (GPLMs), which decompose the input variables into two sets and additively combine a classical linear model with a nonlinear model part. By separating the linear part from the nonlinear one, Tikhonov regularization, an inverse problem method, can be applied to the nonlinear submodel separately, within the entire GPLM. Such a representation of the submodels provides both better accuracy and better stability (regularity) under noise in the data. We aim to smooth the nonparametric part of the GPLM by using a modified form of Multivariate Adaptive Regression Splines (MARS), which is very useful for high-dimensional problems and does not impose any specific relationship between the predictor and dependent variables. Instead, it estimates the contributions of the basis functions so that both the additive and interaction effects of the predictors are allowed to determine the dependent variable. The MARS algorithm has two steps: the forward and backward stepwise algorithms. In the first, the model is built by adding basis functions until a maximum level of complexity is reached. The backward stepwise algorithm then removes the least significant basis functions from the model. In this study, we propose to use a penalized residual sum of squares (PRSS) instead of the backward stepwise algorithm, and we construct the PRSS for MARS as a Tikhonov regularization problem. We also provide numerical examples with two data sets, one with interactions and one without. As well as studying the regularization of the nonparametric part, we also discuss, theoretically, the regularization of the parametric part. Furthermore, we compare Infinite Kernel Learning (IKL) and Tikhonov regularization on two data sets, which differ in the (non-)homogeneity of the data. The thesis concludes with an outlook on future research.
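To make the PRSS idea concrete, the following sketch fits a MARS-style hinge basis by Tikhonov-penalized least squares in place of backward pruning; the fixed knot grid, penalty weight and simulated data are assumptions of the example (a real MARS forward pass selects knots adaptively):

```python
import numpy as np

rng = np.random.default_rng(1)

def hinge_basis(x, knots):
    # MARS-style truncated linear basis: max(0, x - k) and max(0, k - x)
    cols = [np.ones_like(x)]
    for k in knots:
        cols.append(np.maximum(0.0, x - k))
        cols.append(np.maximum(0.0, k - x))
    return np.column_stack(cols)

# noisy piecewise-linear target with a kink at zero
n = 300
x = rng.uniform(-2, 2, n)
y = np.abs(x) + 0.1 * rng.normal(size=n)

knots = np.linspace(-1.8, 1.8, 13)   # fixed grid instead of forward selection
B = hinge_basis(x, knots)

# Tikhonov-regularized fit: minimize ||y - B c||^2 + lam * ||c||^2
lam = 1e-2
c = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
fit = B @ c
print(np.sqrt(np.mean((fit - y) ** 2)))
```

The penalty stabilizes the over-complete basis the same way the abstract's PRSS replaces explicit basis deletion.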
4

Ozier-Lafontaine, Anthony. "Kernel-based testing and their application to single-cell data." Electronic Thesis or Diss., Ecole centrale de Nantes, 2023. http://www.theses.fr/2023ECDN0025.

Full text
Abstract:
Single-cell technologies generate data at the single-cell level. The resulting datasets are composed of hundreds to thousands of observations (i.e. cells) and tens of thousands of variables (i.e. genes). New methodological challenges have arisen to fully exploit the potential of these complex data. A major statistical challenge is to distinguish biological information from technical noise in order to compare conditions or tissues. This thesis explores the application of kernel testing to single-cell datasets in order to detect and describe potential differences between compared conditions. To overcome the limitations of existing kernel two-sample tests, we propose a kernel test, inspired by the Hotelling-Lawley test, that applies to any experimental design. We implemented these tests in an R and Python package called ktest, which is their first user-oriented implementation. We demonstrate the performance of kernel testing on simulated datasets and on various experimental single-cell datasets. The geometrical interpretation of these methods makes it possible to identify the observations driving a detected difference. Finally, we propose a Nyström-based efficient implementation of these kernel tests, as well as a range of diagnostic and interpretation tools.
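The two-sample side of kernel testing can be sketched with the classical MMD statistic and a permutation null; the Gaussian kernel, bandwidth and simulated "conditions" below are assumptions of the example, and the Hotelling-Lawley-style generalization of the thesis is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_gram(Z, sigma):
    # RBF Gram matrix of the pooled sample
    sq = np.sum(Z**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.exp(-D / (2 * sigma**2))

def mmd2(K, n):
    # biased squared MMD from the joint Gram matrix of [X; Y]
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

# two samples with a mean shift (two simulated experimental conditions)
n = 100
X = rng.normal(0.0, 1.0, size=(n, 5))
Y = rng.normal(0.6, 1.0, size=(n, 5))
Z = np.vstack([X, Y])
K = gaussian_gram(Z, sigma=np.sqrt(5.0))
stat = mmd2(K, n)

# permutation null: shuffle sample labels by permuting the Gram matrix
perm_stats = np.empty(200)
for b in range(200):
    p = rng.permutation(2 * n)
    perm_stats[b] = mmd2(K[np.ix_(p, p)], n)
pval = np.mean(perm_stats >= stat)
print(stat, pval)  # the mean shift should be detected (small p-value)
```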
5

Vassura, Edoardo. "Path integrals on curved space and the worldline formalism." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13448/.

Full text
Abstract:
The primary aim of this thesis is the analysis of a new regularization procedure for path integrals on curved spaces, originally introduced by the physicist J. Guven and applied to the case of a scalar field theory, but never used to carry out further explicit calculations. This procedure, if correct, would make it possible to use the flat-space path integral formalism even when the background manifold is locally curved. In effect, it turns a nonlinear sigma model into an effective linear model, thereby bypassing the usual complications that arise in generalizing path integrals. A direct proof of the correctness of Guven's procedure appears to be missing in the literature; for this reason, this thesis performs various tests aimed at such a verification. Some errors were found in the original proposal, among them a potential term that turns out to be incorrect. Nevertheless, we were able to identify a potential that correctly reproduces the first two coefficients of the heat kernel expansion. Using the same method, we then tried to obtain the next coefficient of the expansion (cubic in the curvature): the result is incorrect, which signals the failure of the method at higher orders. Given these preliminary results, we were led to consider a special class of curved spaces, the maximally symmetric spaces, finding that on such spaces Guven's procedure does reproduce the correct results. As a check, we obtained the diagonal part of the heat kernel, which was then used to reproduce the type-A trace anomaly for scalar fields in arbitrary dimensions up to D = 12. These results agree with the expected ones, thus providing a proof of the validity of the procedure on these spaces.
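For reference, the diagonal heat kernel expansion that the abstract checks coefficients of has the schematic short-time form below, shown in one common convention; signs and normalizations vary across the literature, so treat the coefficients as indicative rather than definitive:

```latex
% diagonal short-time expansion of the heat kernel of a scalar Laplacian
% on a D-dimensional curved manifold (one common convention)
K(x, x; \beta) \sim \frac{1}{(4\pi\beta)^{D/2}} \sum_{k \ge 0} a_k(x)\,\beta^k,
\qquad a_0(x) = 1, \qquad a_1(x) = \frac{R(x)}{6},
```

where R is the scalar curvature. Reproducing the first coefficients, and the full diagonal on maximally symmetric spaces, is exactly the check the thesis performs.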
6

Song, Song. "Confidence bands in quantile regression and generalized dynamic semiparametric factor models." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2010. http://dx.doi.org/10.18452/16341.

Full text
Abstract:
In many applications it is necessary to know the stochastic fluctuation of the maximal deviations of nonparametric quantile estimates, e.g. to check various parametric models. Uniform confidence bands are therefore constructed for nonparametric quantile estimates of regression functions. The first method is based on strong approximations of the empirical process and extreme value theory; the strong uniform consistency rate is also established under general conditions. The second method is based on the bootstrap resampling method, and it is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model, and a labor market analysis is provided to illustrate the method. High-dimensional time series which reveal nonstationary and possibly periodic behavior occur frequently in many fields of science, e.g. macroeconomics, meteorology, medicine and financial engineering. A common approach is to split the modeling of high-dimensional time series into the time propagation of low-dimensional time series and high-dimensional time-invariant functions, via dynamic factor analysis. We propose a two-step estimation procedure. In the first step, we detrend the time series by incorporating a time basis selected by a group Lasso-type technique, and choose the space basis based on smoothed functional principal component analysis. We show properties of this estimator under the dependent scenario. In the second step, we obtain the detrended low-dimensional (stationary) stochastic process.
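As a simplified, hedged analogue of the bootstrap band construction (for a kernel mean regression rather than quantile regression, and ignoring bias correction), one can bootstrap the supremum deviation of a Nadaraya-Watson estimate; the data, bandwidth and grid are assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(3)

def nw_fit(x_train, y_train, x_eval, h):
    # Nadaraya-Watson kernel regression evaluated on a grid
    d = (x_eval[:, None] - x_train[None, :]) / h
    W = np.exp(-0.5 * d**2)
    W /= W.sum(axis=1, keepdims=True)
    return W @ y_train

n, h = 300, 0.08
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=n)
grid = np.linspace(0.15, 0.85, 40)   # keep away from boundary effects
m_hat = nw_fit(x, y, grid, h)

# bootstrap the supremum deviation to get a *uniform* (not pointwise) band
B = 200
sup_dev = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    sup_dev[b] = np.max(np.abs(nw_fit(x[idx], y[idx], grid, h) - m_hat))
half_width = np.quantile(sup_dev, 0.95)
lower, upper = m_hat - half_width, m_hat + half_width
print(half_width)
```

Taking the 95% quantile of the sup-deviation, rather than pointwise quantiles, is what makes the band uniform over the grid.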
7

Piccini, Jacopo. "Data Dependent Convergence Guarantees for Regression Problems in Neural Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24218/.

Full text
Abstract:
It has recently been demonstrated that the learning of artificial neural networks (ANNs) under gradient descent can be studied using the neural tangent kernel (NTK). The goal of this thesis is to show how techniques from control theory can be applied to model and improve the training dynamics, and to show that methods from linear parameter-varying (LPV) theory allow an exact representation of the learning dynamics over its whole domain. The first part of the thesis is dedicated to modelling and analysis: a model for simple ANNs is proposed, together with a method to extend this approach to larger networks, and the properties of the resulting LPV system model are then analysed with different methods. The focus then shifts to improving the neural network in terms of both stability and performance. This improvement is achieved by applying state feedback to the LPV system. After setting up the control architecture, controllers based on different methods, such as optimal control and robust control, are synthesized and their performances compared.
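The NTK object mentioned here is just the Gram matrix of parameter gradients. A minimal empirical-NTK computation for an assumed two-layer tanh network (not the thesis' LPV machinery) looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)

def ntk_matrix(X, W1, w2):
    # empirical NTK of f(x) = w2 . tanh(W1 x); rows of G are parameter gradients
    grads = []
    for xi in X:
        a = np.tanh(W1 @ xi)                      # hidden activations
        dW1 = np.outer(w2 * (1 - a**2), xi)       # gradient w.r.t. W1
        grads.append(np.concatenate([dW1.ravel(), a]))  # gradient w.r.t. w2 is a
    G = np.array(grads)
    return G @ G.T                                # Theta_ij = <grad_i, grad_j>

d, m, n = 3, 50, 8
X = rng.normal(size=(n, d))
W1 = rng.normal(size=(m, d)) / np.sqrt(d)
w2 = rng.normal(size=m) / np.sqrt(m)
Theta = ntk_matrix(X, W1, w2)

# as a Gram matrix, the NTK is symmetric positive semidefinite
eigs = np.linalg.eigvalsh(Theta)
print(eigs.min())
```

In the NTK regime, gradient-descent training of the network is governed by the linear dynamics induced by this (state-dependent) kernel, which is what makes an LPV viewpoint natural.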
8

Vlachos, Dimitrios. "Novel algorithms in wireless CDMA systems for estimation and kernel based equalization." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/7658.

Full text
Abstract:
A powerful technique is presented for joint blind channel estimation and carrier offset estimation in code-division multiple access (CDMA) communication systems. The new technique combines singular value decomposition (SVD) analysis with the carrier offset parameter. Current blind methods sustain a high computational complexity, as they require the computation of a large SVD twice, and they are sensitive to accurate knowledge of the noise subspace rank. The proposed method overcomes both problems by computing the SVD only once. Extensive simulations in MATLAB demonstrate the robustness of the proposed scheme; its performance is comparable to existing SVD techniques at significantly lower computational cost (by as much as 70%), because it does not require knowledge of the rank of the noise subspace. A kernel-based equalization method for CDMA communication systems is also proposed, designed and simulated in MATLAB, and is shown to outperform the other methods considered.
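The single-SVD subspace idea can be illustrated on a toy synchronous CDMA block; the spreading codes, block length and noise level are assumptions of the sketch, and the carrier-offset handling of the thesis is omitted:

```python
import numpy as np

rng = np.random.default_rng(5)

# toy received block: K users with N-chip signatures over T symbols, plus noise
N, K, T = 32, 3, 200
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # spreading codes
A = rng.normal(size=(K, T))                             # user symbols
R = S @ A + 0.02 * rng.normal(size=(N, T))              # received data

# a single SVD of the data block yields the signal subspace directly,
# avoiding a second SVD of an estimated covariance matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)
U_sig = U[:, :K]

# the estimated subspace should capture the signature subspace: projecting
# the true signatures onto it loses almost no energy
P = U_sig @ U_sig.T
leak = np.linalg.norm(S - P @ S) / np.linalg.norm(S)
print(leak)
```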
9

Fan, Liangdong. "ESTIMATION IN PARTIALLY LINEAR MODELS WITH CORRELATED OBSERVATIONS AND CHANGE-POINT MODELS." UKnowledge, 2018. https://uknowledge.uky.edu/statistics_etds/32.

Full text
Abstract:
Methods of estimating parametric and nonparametric components, as well as properties of the corresponding estimators, have been examined in partially linear models by Wahba [1987], Green et al. [1985], Engle et al. [1986], Speckman [1988], Hu et al. [2004], and Charnigo et al. [2015], among others. These models are appealing due to their flexibility and wide range of practical applications, including the electricity usage study by Engle et al. [1986] and the gum disease study by Speckman [1988], where a parametric component explains linear trends and a nonparametric part captures nonlinear relationships. The compound estimator (Charnigo et al. [2015]) has been used to estimate the nonparametric component of such a model with multiple covariates, in conjunction with linear mixed modeling for the parametric component. These authors showed, under a strict orthogonality condition, that the parametric and nonparametric component estimators could achieve what appear to be (nearly) optimal rates, even in the presence of subject-specific random effects. We continue this research on partially linear models with subject-specific random intercepts. Inspired by Speckman [1988], we propose estimators of both the parametric and nonparametric components of a partially linear model, whose consistency is achievable under an orthogonality condition. We also examine a scenario without orthogonality and find that bias can persist asymptotically. The random intercepts accommodate the analysis of individuals on whom repeated measures are taken. We illustrate our estimators in a biomedical case study and assess their finite-sample performance in simulation studies. Jump points have often been found within the domain of nonparametric models (Muller [1992], Loader [1996] and Gijbels et al. [1999]), and may lead to a poor fit when the underlying mean response is falsely assumed to be continuous.
We study a specific type of change-point where the underlying mean response is continuous on both left and right sides of the change-point. We identify the convergence rate of the estimator proposed in Liu [2017] and illustrate the result in simulation studies.
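The continuous change-point setting described above can be illustrated by profile least squares over a broken-stick model; the simulated design, candidate grid and slope change are assumptions of the example, not the estimator of Liu [2017]:

```python
import numpy as np

rng = np.random.default_rng(6)

# mean response continuous at the change-point, with a slope break at tau
n, tau, slope_change = 400, 0.6, 3.0
x = np.sort(rng.uniform(0, 1, n))
mean = 1.0 + 0.5 * x + slope_change * np.maximum(0.0, x - tau)
y = mean + 0.15 * rng.normal(size=n)

# profile least squares over a grid of candidate change-points
cands = np.linspace(0.1, 0.9, 161)
best_rss, tau_hat = np.inf, None
for c in cands:
    B = np.column_stack([np.ones(n), x, np.maximum(0.0, x - c)])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    rss = float(np.sum((y - B @ coef) ** 2))
    if rss < best_rss:
        best_rss, tau_hat = rss, c
print(tau_hat)
```

The fitted mean stays continuous at every candidate c because only the slope, not the level, changes there; that continuity is exactly what distinguishes this setting from an ordinary jump point.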
10

Zhai, Jing. "Efficient Exact Tests in Linear Mixed Models for Longitudinal Microbiome Studies." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/612412.

Full text
Abstract:
The microbiome plays an important role in human health, and the analysis of associations between the microbiome and clinical outcomes has become an active direction in biostatistics research. Testing the microbiome effect on clinical phenotypes directly using operational taxonomic unit (OTU) abundance data is a challenging problem due to the high dimensionality, non-normality and phylogenetic structure of the data. Most studies only describe the change in microbial populations that occurs in patients with a specific clinical condition. Instead, a statistical strategy utilizing distance-based or similarity-based non-parametric testing, in which a distance or similarity measure is defined between any two microbiome samples, has been developed to assess the association between microbiome composition and outcomes of interest. Despite the improvements, such tests are still not easily interpretable and cannot adjust for potential covariates. A kernel-based semi-parametric regression framework can be applied to evaluate the association while controlling for covariates. The framework utilizes a kernel function, which is a measure of similarity between samples' microbiome compositions and characterizes the relationship between the microbiome and the outcome of interest. This kernel-based regression model, however, cannot be applied in longitudinal studies, since it does not model the correlation between repeated measurements. We propose microbiome association exact tests (MAETs) in linear mixed models to deal with longitudinal microbiome data. MAETs can test not only the effect of the overall microbiome but also the effect of a specific cluster of OTUs while controlling for the others, by introducing additional random effects into the model. Current methods for testing multiple variance components are based on either asymptotic distributions or the parametric bootstrap, which require a large sample size or a high computational cost. The exact (R)LRT, a computationally efficient and powerful testing methodology, was derived by Crainiceanu. Since the exact (R)LRT can only test one variance component, we propose an approach that combines the recent development of the exact (R)LRT with a strategy for simplifying a linear mixed model with multiple variance components to the single-component case. Monte Carlo simulation studies show correctly controlled type I error and superior power in testing associations between the microbiome and outcomes in longitudinal studies. Finally, the MAETs were applied to longitudinal pulmonary microbiome datasets to demonstrate that microbiome composition is associated with lung function and immunological outcomes. We also found two interesting genera, Prevotella and Veillonella, which are associated with forced vital capacity.
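A hedged, minimal version of the kernel-association idea (a score-type statistic with a permutation null rather than the exact (R)LRT of the thesis, and with a linear kernel on simulated taxa) can be written as:

```python
import numpy as np

rng = np.random.default_rng(7)

# toy data: n samples, p taxa; the outcome depends on the first three taxa
n, p = 120, 30
Z = rng.normal(size=(n, p))
y = Z[:, :3] @ np.array([0.5, -0.4, 0.3]) + rng.normal(size=n)

# linear kernel measuring similarity between samples' profiles
K = Z @ Z.T
y_c = y - y.mean()
Q = float(y_c @ K @ y_c)   # score-type statistic for the kernel effect

# permutation null in place of an exact finite-sample distribution
perm = np.empty(300)
for b in range(300):
    yp = rng.permutation(y_c)
    perm[b] = yp @ K @ yp
pval = float(np.mean(perm >= Q))
print(pval)
```

Replacing the linear kernel with a phylogeny-aware similarity, and the permutation null with the exact mixed-model distribution, is the direction the abstract describes.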
11

Chopping, M. J. "Linear semi-empirical kernel-driven bidirectional reflectance distribution function models in monitoring semi-arid grasslands from space." Thesis, University of Nottingham, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262949.

Full text
12

Saribekir, Gozde. "The Turkish Catastrophe Insurance Pool Claims Modeling 2000-2008 Data." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615755/index.pdf.

Full text
Abstract:
After the 1999 Marmara Earthquake, social, economic and engineering studies on earthquakes became more intensive. The Turkish Catastrophe Insurance Pool (TCIP) was established after the Marmara Earthquake to share the deficit in the budget of the Government. The TCIP has become a data source for researchers, providing variables such as the number of claims, claim amounts and magnitude. In this thesis, the TCIP earthquake claims collected between 2000 and 2008 are studied. The number of claims and the claim payments (aggregate claim amount) are modeled using Generalized Linear Models (GLMs). Observed sudden jumps in the claim data are represented using an exponential kernel function. Model parameters are estimated by Maximum Likelihood Estimation (MLE). The results can be used as recommendations in the computation of the expected aggregate claim amounts and of the premiums of the TCIP.
13

Zhang, Yuyao. "Non-linear dimensionality reduction and sparse representation models for facial analysis." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0019/document.

Full text
Abstract:
Face analysis techniques commonly require a proper representation of images by means of dimensionality reduction, leading to embedded manifolds that aim to capture the relevant characteristics of the signals. In this thesis, we first provide a comprehensive survey of the state of the art in embedded manifold models. Then, we introduce a novel non-linear embedding method, Kernel Similarity Principal Component Analysis (KS-PCA), into Active Appearance Models in order to model face appearance under variable illumination. The proposed algorithm outperforms the traditional linear PCA transform in capturing the salient features generated by different illuminations, and reconstructs the illuminated faces with high accuracy. We also consider the problem of automatically classifying human face poses from face views with varying illumination, as well as occlusion and noise. Based on sparse representation methods, we propose two dictionary-learning frameworks for this pose classification problem. The first is Adaptive Sparse Representation pose Classification (ASRC). It trains the dictionary via a linear model, Incremental Principal Component Analysis (Incremental PCA), tending to decrease the intra-class redundancy that may affect classification performance, while keeping the inter-class redundancy that is critical for sparse representation. The second is the Dictionary-Learning Sparse Representation model (DLSR), which learns the dictionary so as to coincide with the classification criterion; this training goal is achieved by the K-SVD algorithm. In a series of experiments, we show the performance of the two dictionary-learning methods, respectively based on a linear transform and a sparse representation model. In addition, we propose a novel Dictionary Learning framework for Illumination Normalization (DL-IN), based on sparse representation over coupled dictionaries. The dictionary pairs are jointly optimized from pairs of normally and irregularly illuminated face images. We further utilize a Gaussian Mixture Model (GMM) to enhance the framework's capability of modeling data under complex distributions: the GMM adapts each model to a part of the samples and then fuses them together. Experimental results demonstrate the effectiveness of sparsity as a prior for patch-based illumination normalization of face images.
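The non-linear embedding step can be approximated by standard kernel PCA (an RBF Gram matrix, double-centered and eigendecomposed); the concentric-ring data and bandwidth are assumptions of this sketch, which illustrates the general technique rather than the KS-PCA algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(9)

def kernel_pca(X, n_components, sigma):
    # kernel PCA: eigendecompose the double-centered RBF Gram matrix
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)
    order = np.argsort(w)[::-1][:n_components]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# two concentric rings: not linearly separable in input space
n_per = 60
ang = rng.uniform(0, 2 * np.pi, 2 * n_per)
radius = np.r_[np.ones(n_per), 3 * np.ones(n_per)]
X = np.column_stack([radius * np.cos(ang), radius * np.sin(ang)])
X += 0.05 * rng.normal(size=X.shape)

proj = kernel_pca(X, 2, sigma=1.0)

# one of the leading kernel components tracks the radius (the class)
corr = [abs(float(np.corrcoef(proj[:, k], radius)[0, 1])) for k in range(2)]
print(max(corr))
```

A linear PCA of the same data cannot separate the rings, which is the motivation the abstract gives for moving to a kernel similarity.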
14

Massaroppe, Lucas. "Estimação da causalidade de Granger no caso de interação não-linear." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-20122016-083110/.

Full text
Abstract:
This work examines the problem of detecting connectivity between time series in the Granger sense when the nonlinear nature of the interactions makes their determination impossible via linear vector autoregressive models, but feasible with the aid of the so-called kernel methods that are popular in machine learning. The kernelization approach allows defining generalized versions of the Granger test, partial directed coherence and the directed transfer function. Simulations of several examples show that the asymptotic detection results originally deduced for linear estimators can also be employed under kernelization, if suitably adapted.
APA, Harvard, Vancouver, ISO, and other styles
15

Mahfouz, Sandy. "Kernel-based machine learning for tracking and environmental monitoring in wireless sensor networks." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0025/document.

Full text
Abstract:
Cette thèse porte sur les problèmes de localisation et de surveillance de champ de gaz à l'aide de réseaux de capteurs sans fil. Nous nous intéressons d'abord à la géolocalisation des capteurs et au suivi de cibles. Nous proposons ainsi une approche exploitant la puissance des signaux échangés entre les capteurs et appliquant les méthodes à noyaux avec la technique de fingerprinting. Nous élaborons ensuite une méthode de suivi de cibles, en se basant sur l'approche de localisation proposée. Cette méthode permet d'améliorer la position estimée de la cible en tenant compte de ses accélérations, et cela à l'aide du filtre de Kalman. Nous proposons également un modèle semi-paramétrique estimant les distances inter-capteurs en se basant sur les puissances des signaux échangés entre ces capteurs. Ce modèle est une combinaison du modèle physique de propagation avec un terme non linéaire estimé par les méthodes à noyaux. Les données d'accélérations sont également utilisées ici avec les distances, pour localiser la cible, en s'appuyant sur un filtrage de Kalman et un filtrage particulaire. Dans un autre contexte, nous proposons une méthode pour la surveillance de la diffusion d'un gaz dans une zone d'intérêt, basée sur l'apprentissage par noyaux. Cette méthode permet de détecter la diffusion d'un gaz en utilisant des concentrations relevées régulièrement par des capteurs déployés dans la zone. Les concentrations mesurées sont ensuite traitées pour estimer les paramètres de la source de gaz, notamment sa position et la quantité du gaz libéré
This thesis focuses on the problems of localization and gas field monitoring using wireless sensor networks. First, we focus on the geolocalization of sensors and target tracking. Using the powers of the signals exchanged between sensors, we propose a localization method combining radio-location fingerprinting and kernel methods from statistical machine learning. Based on this localization method, we develop a target tracking method that enhances the estimated position of the target by combining it with acceleration information using the Kalman filter. We also provide a semi-parametric model that estimates the distances separating sensors based on the powers of the signals exchanged between them. This semi-parametric model is a combination of the well-known log-distance propagation model with a non-linear fluctuation term estimated within the framework of kernel methods. The target's position is estimated by incorporating acceleration information to the distances separating the target from the sensors, using either the Kalman filter or the particle filter. In another context, we study gas diffusions in wireless sensor networks, also using machine learning. We propose a method that allows the detection of multiple gas diffusions based on concentration measures regularly collected from the studied region. The method then estimates the parameters of the multiple gas sources, including the sources' locations and their release rates.
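The semi-parametric idea described above (a physical log-distance model plus a kernel-estimated fluctuation term) can be sketched on synthetic data. All constants and the fluctuation below are hypothetical stand-ins, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical RSSI readings generated from the classic log-distance path-loss
# model plus a smooth fluctuation term (the part the kernels must learn).
d = rng.uniform(1.0, 50.0, 200)                       # inter-sensor distances (m)
rssi = -40.0 - 10.0 * 2.2 * np.log10(d) + 2.0 * np.sin(d / 5.0)

# Parametric part: fit reference power and path-loss exponent by least squares.
A = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
p0, n_exp = np.linalg.lstsq(A, rssi, rcond=None)[0]

# Nonparametric part: Gaussian-kernel smoothing of the model residuals.
resid = rssi - (p0 - 10.0 * n_exp * np.log10(d))

def kernel_term(d_new, h=2.0):
    w = np.exp(-0.5 * ((d_new[:, None] - d[None, :]) / h) ** 2)
    return (w * resid).sum(axis=1) / w.sum(axis=1)

# Semi-parametric prediction = physical model + kernel fluctuation term.
d_test = np.array([10.0, 25.0])
pred = p0 - 10.0 * n_exp * np.log10(d_test) + kernel_term(d_test)
```

The appeal of this split is that the parametric part extrapolates sensibly while the kernel term corrects it wherever training data is available.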
APA, Harvard, Vancouver, ISO, and other styles
16

Bird, Gregory David. "Linear and Nonlinear Dimensionality-Reduction-Based Surrogate Models for Real-Time Design Space Exploration of Structural Responses." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8653.

Full text
Abstract:
Design space exploration (DSE) is a tool used to evaluate and compare designs as part of the design selection process. While evaluating every possible design in a design space is infeasible, understanding design behavior and response throughout the design space may be accomplished by evaluating a subset of designs and interpolating between them using surrogate models. Surrogate modeling is a technique that uses low-cost calculations to approximate the outcome of more computationally expensive calculations or analyses, such as finite element analysis (FEA). While surrogates make quick predictions, accuracy is not guaranteed and must be considered. This research addressed the need to improve the accuracy of surrogate predictions in order to improve DSE of structural responses. This was accomplished by performing comparative analyses of linear and nonlinear dimensionality-reduction-based radial basis function (RBF) surrogate models for emulating various FEA nodal results. A total of four dimensionality reduction methods were investigated, namely principal component analysis (PCA), kernel principal component analysis (KPCA), isometric feature mapping (ISOMAP), and locally linear embedding (LLE). These methods were used in conjunction with surrogate modeling to predict nodal stresses and coordinates of a compressor blade. The research showed that using an ISOMAP-based dual-RBF surrogate model for predicting nodal stresses decreased the estimated mean error of the surrogate by 35.7% compared to PCA. Using nonlinear dimensionality-reduction-based surrogates did not reduce surrogate error for predicting nodal coordinates. A new metric, the manifold distance ratio (MDR), was introduced to measure the nonlinearity of the data manifolds. When applied to the stress and coordinate data, the stress space was found to be more nonlinear than the coordinate space for this application. 
The upfront training cost of the nonlinear dimensionality-reduction-based surrogates was larger than that of their linear counterparts but small enough to remain feasible. After training, all the dual-RBF surrogates were capable of making real-time predictions. This same process was repeated for a separate application involving the nodal displacements of mode shapes obtained from a FEA modal analysis. The modal assurance criterion (MAC) calculation was used to compare the predicted mode shapes, as well as their corresponding true mode shapes obtained from FEA, to a set of reference modes. The research showed that two nonlinear techniques, namely LLE and KPCA, resulted in lower surrogate error in the more complex design spaces. Using a RBF kernel, KPCA achieved the largest average reduction in error of 13.57%. The results also showed that surrogate error was greatly affected by mode shape reversal. Four different approaches of identifying reversed mode shapes were explored, all of which resulted in varying amounts of surrogate error. Together, the methods explored in this research were shown to decrease surrogate error when performing DSE of a turbomachine compressor blade. As surrogate accuracy increases, so does the ability to correctly make engineering decisions and judgements throughout the design process. Ultimately, this will help engineers design better turbomachines.
APA, Harvard, Vancouver, ISO, and other styles
17

Ahmed, Mohamed Salem. "Contribution à la statistique spatiale et l'analyse de données fonctionnelles." Thesis, Lille 3, 2017. http://www.theses.fr/2017LIL30047/document.

Full text
Abstract:
Ce mémoire de thèse porte sur la statistique inférentielle des données spatiales et/ou fonctionnelles. En effet, nous nous sommes intéressés à l’estimation de paramètres inconnus de certains modèles à partir d’échantillons obtenus par un processus d’échantillonnage aléatoire ou non (stratifié), composés de variables indépendantes ou spatialement dépendantes.La spécificité des méthodes proposées réside dans le fait qu’elles tiennent compte de la nature de l’échantillon étudié (échantillon stratifié ou composé de données spatiales dépendantes).Tout d’abord, nous étudions des données à valeurs dans un espace de dimension infinie ou dites ”données fonctionnelles”. Dans un premier temps, nous étudions les modèles de choix binaires fonctionnels dans un contexte d’échantillonnage par stratification endogène (échantillonnage Cas-Témoin ou échantillonnage basé sur le choix). La spécificité de cette étude réside sur le fait que la méthode proposée prend en considération le schéma d’échantillonnage. Nous décrivons une fonction de vraisemblance conditionnelle sous l’échantillonnage considérée et une stratégie de réduction de dimension afin d’introduire une estimation du modèle par vraisemblance conditionnelle. Nous étudions les propriétés asymptotiques des estimateurs proposées ainsi que leurs applications à des données simulées et réelles. Nous nous sommes ensuite intéressés à un modèle linéaire fonctionnel spatial auto-régressif. La particularité du modèle réside dans la nature fonctionnelle de la variable explicative et la structure de la dépendance spatiale des variables de l’échantillon considéré. La procédure d’estimation que nous proposons consiste à réduire la dimension infinie de la variable explicative fonctionnelle et à maximiser une quasi-vraisemblance associée au modèle. 
Nous établissons la consistance, la normalité asymptotique et les performances numériques des estimateurs proposés.Dans la deuxième partie du mémoire, nous abordons des problèmes de régression et prédiction de variables dépendantes à valeurs réelles. Nous commençons par généraliser la méthode de k-plus proches voisins (k-nearest neighbors; k-NN) afin de prédire un processus spatial en des sites non-observés, en présence de co-variables spatiaux. La spécificité du prédicteur proposé est qu’il tient compte d’une hétérogénéité au niveau de la co-variable utilisée. Nous établissons la convergence presque complète avec vitesse du prédicteur et donnons des résultats numériques à l’aide de données simulées et environnementales.Nous généralisons ensuite le modèle probit partiellement linéaire pour données indépendantes à des données spatiales. Nous utilisons un processus spatial linéaire pour modéliser les perturbations du processus considéré, permettant ainsi plus de flexibilité et d’englober plusieurs types de dépendances spatiales. Nous proposons une approche d’estimation semi paramétrique basée sur une vraisemblance pondérée et la méthode des moments généralisées et en étudions les propriétés asymptotiques et performances numériques. Une étude sur la détection des facteurs de risque de cancer VADS (voies aéro-digestives supérieures)dans la région Nord de France à l’aide de modèles spatiaux à choix binaire termine notre contribution
This thesis is about statistical inference for spatial and/or functional data. Indeed, we are interested in estimation of unknown parameters of some models from random or nonrandom (stratified) samples composed of independent or spatially dependent variables. The specificity of the proposed methods lies in the fact that they take into consideration the nature of the sample considered (stratified or spatial sample). We begin by studying data valued in a space of infinite dimension, or so-called "functional data". First, we study a functional binary choice model explored in a case-control or choice-based sample design context. The specificity of this study is that the proposed method takes into account the sampling scheme. We describe a conditional likelihood function under the sampling distribution and a reduction-of-dimension strategy to define a feasible conditional maximum likelihood estimator of the model. Asymptotic properties of the proposed estimates as well as their application to simulated and real data are given. Secondly, we explore a functional linear autoregressive spatial model whose particularity lies in the functional nature of the explanatory variable and the structure of the spatial dependence. The estimation procedure consists of reducing the infinite dimension of the functional variable and maximizing a quasi-likelihood function. We establish the consistency and asymptotic normality of the estimator. The usefulness of the methodology is illustrated via simulations and an application to real data. In the second part of the thesis, we address some estimation and prediction problems for real random spatial variables. We start by generalizing the k-nearest neighbors method, namely k-NN, to predict a spatial process at non-observed locations using some covariates. The specificity of the proposed k-NN predictor lies in the fact that it is flexible and allows for heterogeneity in the covariate.
We establish the almost complete convergence, with rates, of the spatial predictor, whose performance is illustrated through an application to simulated and environmental data. In addition, we generalize the partially linear probit model for independent data to the spatial case. We use a linear process for the disturbances, allowing various spatial dependencies, and propose a semiparametric estimation approach based on weighted likelihood and generalized method of moments. We establish the consistency and asymptotic distribution of the proposed estimators and investigate their finite sample performance on simulated data. We end with an application of spatial binary choice models to identify UADT (upper aerodigestive tract) cancer risk factors in the north region of France, which displays the highest rates of such cancer incidence and mortality in the country.
APA, Harvard, Vancouver, ISO, and other styles
18

De, Moliner Anne. "Estimation robuste de courbes de consommmation électrique moyennes par sondage pour de petits domaines en présence de valeurs manquantes." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK021/document.

Full text
Abstract:
Dans cette thèse, nous nous intéressons à l'estimation robuste de courbes moyennes ou totales de consommation électrique par sondage en population finie, pour l'ensemble de la population ainsi que pour des petites sous-populations, en présence ou non de courbes partiellement inobservées.En effet, de nombreuses études réalisées dans le groupe EDF, que ce soit dans une optique commerciale ou de gestion du réseau de distribution par Enedis, se basent sur l'analyse de courbes de consommation électrique moyennes ou totales, pour différents groupes de clients partageant des caractéristiques communes. L'ensemble des consommations électriques de chacun des 35 millions de clients résidentiels et professionnels Français ne pouvant être mesurées pour des raisons de coût et de protection de la vie privée, ces courbes de consommation moyennes sont estimées par sondage à partir de panels. Nous prolongeons les travaux de Lardin (2012) sur l'estimation de courbes moyennes par sondage en nous intéressant à des aspects spécifiques de cette problématique, à savoir l'estimation robuste aux unités influentes, l'estimation sur des petits domaines, et l'estimation en présence de courbes partiellement ou totalement inobservées.Pour proposer des estimateurs robustes de courbes moyennes, nous adaptons au cadre fonctionnel l'approche unifiée d'estimation robuste en sondages basée sur le biais conditionnel proposée par Beaumont (2013). Pour cela, nous proposons et comparons sur des jeux de données réelles trois approches : l'application des méthodes usuelles sur les courbes discrétisées, la projection sur des bases de dimension finie (Ondelettes ou Composantes Principales de l'Analyse en Composantes Principales Sphériques Fonctionnelle en particulier) et la troncature fonctionnelle des biais conditionnels basée sur la notion de profondeur d'une courbe dans un jeu de données fonctionnelles. 
Des estimateurs d'erreur quadratique moyenne instantanée, explicites et par bootstrap, sont également proposés.Nous traitons ensuite la problématique de l'estimation sur de petites sous-populations. Dans ce cadre, nous proposons trois méthodes : les modèles linéaires mixtes au niveau unité appliqués sur les scores de l'Analyse en Composantes Principales ou les coefficients d'ondelettes, la régression fonctionnelle et enfin l'agrégation de prédictions de courbes individuelles réalisées à l'aide d'arbres de régression ou de forêts aléatoires pour une variable cible fonctionnelle. Des versions robustes de ces différents estimateurs sont ensuite proposées en déclinant la démarche d'estimation robuste basée sur les biais conditionnels proposée précédemment.Enfin, nous proposons quatre estimateurs de courbes moyennes en présence de courbes partiellement ou totalement inobservées. Le premier est un estimateur par repondération par lissage temporel non paramétrique adapté au contexte des sondages et de la non réponse et les suivants reposent sur des méthodes d'imputation. Les portions manquantes des courbes sont alors déterminées soit en utilisant l'estimateur par lissage précédemment cité, soit par imputation par les plus proches voisins adaptée au cadre fonctionnel ou enfin par une variante de l'interpolation linéaire permettant de prendre en compte le comportement moyen de l'ensemble des unités de l'échantillon. Des approximations de variance sont proposées dans chaque cas et l'ensemble des méthodes sont comparées sur des jeux de données réelles, pour des scénarios variés de valeurs manquantes
In this thesis, we address the problem of robust estimation of mean or total electricity consumption curves by sampling in a finite population, for the entire population and for small areas. We are also interested in estimating mean curves by sampling in presence of partially missing trajectories. Indeed, many studies carried out in the French electricity company EDF, for marketing or power grid management purposes, are based on the analysis of mean or total electricity consumption curves at a fine time scale, for different groups of clients sharing some common characteristics. Because of privacy issues and financial costs, it is not possible to measure the electricity consumption curve of each customer, so these mean curves are estimated using samples. In this thesis, we extend the work of Lardin (2012) on mean curve estimation by sampling by focusing on specific aspects of this problem such as robustness to influential units, small area estimation and estimation in presence of partially or totally unobserved curves. In order to build robust estimators of mean curves we adapt the unified approach to robust estimation in finite population proposed by Beaumont et al. (2013) to the context of functional data. To that purpose we propose three approaches: application of the usual method for real variables on discretised curves, projection on Functional Spherical Principal Components or on a wavelet basis, and thirdly functional truncation of conditional biases based on the notion of depth. These methods are tested and compared to each other on real datasets, and mean squared error estimators are also proposed. Secondly, we address the problem of small area estimation for functional means or totals. We introduce three methods: unit-level linear mixed models applied to the scores of functional principal components analysis or to wavelet coefficients, functional regression, and aggregation of individual curve predictions by functional regression trees or functional random forests.
Robust versions of these estimators are then proposed by following the approach to robust estimation based on conditional biases presented before. Finally, we suggest four estimators of mean curves by sampling in presence of partially or totally unobserved trajectories. The first estimator is a reweighting estimator where the weights are determined using a temporal nonparametric kernel smoothing adapted to the context of finite population and missing data, and the other ones rely on imputation of missing data. Missing parts of the curves are determined either by using the smoothing estimator presented before, by nearest neighbours imputation adapted to functional data, or by a variant of linear interpolation which takes into account the mean trajectory of the entire sample. Variance approximations are proposed for each method and all the estimators are compared to each other on real datasets for various missing data scenarios.
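The mean-trajectory variant of linear interpolation mentioned above can be sketched as follows. The load curves below are synthetic; the deviation-from-mean trick is the point, not the data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical panel of daily load curves (rows: customers, cols: half-hours).
t = np.linspace(0.0, 1.0, 48)
curves = 2.0 + np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((30, 48))

# Customer 0 is unobserved on an interior block of time steps.
missing = slice(8, 17)
observed = curves[0].copy()
truth = observed[missing].copy()
observed[missing] = np.nan

# Variant of linear interpolation: interpolate the *deviation* from the sample
# mean curve, then add the mean back, so the shared daily shape is preserved
# instead of being flattened by a straight line across the gap.
mean_curve = curves[1:].mean(axis=0)
dev = observed - mean_curve
idx = np.arange(48)
ok = ~np.isnan(dev)
imputed = (np.interp(idx, idx[ok], dev[ok]) + mean_curve)[missing]
```

Plain linear interpolation would draw a straight segment through the morning peak; interpolating the deviation keeps the peak shape carried by the sample mean.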
APA, Harvard, Vancouver, ISO, and other styles
19

Lin, Wei-Chun, and 林韋君. "Semiparametric Linear Transformation Model with Kernel Density Estimation." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/13740404409392723927.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Statistics
104
In survival analysis, the most commonly used models, the proportional hazards model and the proportional odds model, are special cases of the linear transformation model. Because of this flexibility, our aim in this thesis is to explore the performance of kernel density estimation of the unknown baseline cumulative hazard function under the linear transformation model. We chose the Nadaraya-Watson kernel estimator to estimate the nonparametric part of the linear transformation model, then used the Newton-Raphson method to estimate the parametric part and obtained the estimate of the parameter of interest. We present the application of kernel density estimation to different functions with different kernel functions and bandwidths. In simulation studies, we assumed the baseline cumulative hazard function followed a Weibull distribution and found that kernel density estimation performed well under different censoring rates when the sample size is large. We also found that the choice of bandwidth plays an important role in kernel estimation.
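A minimal sketch of the Nadaraya-Watson estimator and of the bandwidth sensitivity the abstract notes, on a hypothetical regression target rather than a hazard function:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical noisy sample; any smooth target illustrates the smoother.
x = rng.uniform(0.0, 1.0, 400)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(400)

def nadaraya_watson(x0, x, y, h):
    """Gaussian-kernel Nadaraya-Watson estimate at points x0 with bandwidth h."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

grid = np.array([0.25, 0.50, 0.75])
fit = nadaraya_watson(grid, x, y, h=0.05)       # reasonable bandwidth
fit_wide = nadaraya_watson(grid, x, y, h=0.5)   # oversmoothed: flattens the sine
```

The estimate is a locally weighted average, so too large a bandwidth averages away the signal while too small a bandwidth chases the noise, which is exactly the trade-off the thesis studies.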
APA, Harvard, Vancouver, ISO, and other styles
20

Bukar, Ali M., and Hassan Ugail. "A nonlinear appearance model for age progression." 2017. http://hdl.handle.net/10454/12200.

Full text
Abstract:
No
Recently, automatic age progression has gained popularity due to its numerous applications. Among these is the search for missing people: in the UK alone, up to 300,000 people are reported missing every year. Although many algorithms have been proposed, most of the methods are affected by image noise, illumination variations, and most importantly facial expressions. To this end we propose to build an age progression framework that utilizes the image de-noising and expression normalizing capabilities of kernel principal component analysis (Kernel PCA). Here, Kernel PCA, a nonlinear form of PCA that explores higher-order correlations between input variables, is used to build a model that captures the shape and texture variations of the human face. The extracted facial features are then used to perform age progression via a regression procedure. To evaluate the performance of the framework, rigorous tests are conducted on the FGNET ageing database. Furthermore, the proposed algorithm is used to progress images of Mary Boyle, a six-year-old who went missing over 39 years ago; she is considered Ireland’s youngest missing person. The algorithm presented in this paper could potentially aid, among other applications, the search for missing people worldwide.
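The nonlinear modeling power of Kernel PCA rests on the eigendecomposition of a centered kernel matrix. A standard toy illustration of that core step (concentric rings, hypothetical data rather than face images) is:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two concentric rings: linearly inseparable data often used to illustrate
# Kernel PCA's ability to capture higher-order structure.
n = 200
t = rng.uniform(0.0, 2.0 * np.pi, n)
r = np.repeat([1.0, 3.0], n // 2)
X = np.column_stack([r * np.cos(t), r * np.sin(t)])

# RBF kernel matrix, then the standard double-centering Kc = H K H.
gamma = 0.5
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H

# First kernel principal component = top eigenvector of the centered kernel.
vals, vecs = np.linalg.eigh(Kc)
pc1 = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))

inner, outer = pc1[:n // 2], pc1[n // 2:]   # the two rings separate on pc1
```

Linear PCA of the same points cannot separate the rings; the kernel trick does it with one component, which is the property the paper exploits for expression normalization.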
APA, Harvard, Vancouver, ISO, and other styles
21

Ohinata, Ren. "Three Essays on Application of Semiparametric Regression: Partially Linear Mixed Effects Model and Index Model." Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-F0A2-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Tilahun, Gelila. "Statistical Methods for Dating Collections of Historical Documents." Thesis, 2011. http://hdl.handle.net/1807/29890.

Full text
Abstract:
The problem in this thesis was originally motivated by problems presented with documents of Early England Data Set (DEEDS). The central problem with these medieval documents is the lack of methods to assign accurate dates to those documents which bear no date. With the problems of the DEEDS documents in mind, we present two methods to impute missing features of texts. In the first method, we suggest a new class of metrics for measuring distances between texts. We then show how to combine the distances between the texts using statistical smoothing. This method can be adapted to settings where the features of the texts are ordered or unordered categoricals (as in the case of, for example, authorship assignment problems). In the second method, we estimate the probability of occurrences of words in texts using nonparametric regression techniques of local polynomial fitting with kernel weight to generalized linear models. We combine the estimated probability of occurrences of words of a text to estimate the probability of occurrence of a text as a function of its feature -- the feature in this case being the date in which the text is written. The application and results of our methods to the DEEDS documents are presented.
APA, Harvard, Vancouver, ISO, and other styles
23

Netshivhazwaulu, Nyawedzeni. "Forecasting Foreign Direct Investment in South Africa using Non-Parametric Quantile Regression Models." Diss., 2018. http://hdl.handle.net/11602/1297.

Full text
Abstract:
MSc (Statistics)
Department of Statistics
Foreign direct investment plays an important role in the economic growth process of the host country, since foreign direct investment is considered a vehicle transferring new ideas, capital, superior technology and skills from developed to developing countries. Non-parametric quantile regression is used in this study to estimate the relationship between foreign direct investment and the factors influencing it in South Africa, using data for the period 1996 to 2015. The variables are selected using the least absolute shrinkage and selection operator technique, and all the variables were selected to be in the models. The developed non-parametric quantile regression models were used for forecasting the future inflow of foreign direct investment into South Africa. The forecast evaluation was done for all models, and the Laplace radial basis kernel, ANOVA radial basis kernel and linear quantile regression averaging were selected as the three best models based on the accuracy measures (mean absolute percentage error, root mean square error and mean absolute error). The best set of forecasts was selected based on the prediction interval coverage probability, prediction interval normalized average deviation and prediction interval normalized average width. The results showed that linear quantile regression averaging is the best model to predict foreign direct investment since it had 100% coverage of the predictions. Linear quantile regression averaging was also confirmed to be the best model under the forecast error distribution. One of the contributions of this study is to provide accurate foreign direct investment forecasts that can help policy makers come up with good policies and suitable strategic plans to promote foreign direct investment inflows into South Africa.
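The interval-quality metrics used above to rank the forecasts are straightforward to compute. A sketch with hypothetical actuals and quantile bands (not the thesis data):

```python
import numpy as np

# Hypothetical actuals and a pair of quantile forecasts (lower and upper bands)
# standing in for the FDI forecasts compared in the thesis.
y = np.array([5.2, 4.8, 6.1, 5.5, 4.9, 6.4, 5.0, 5.8])
lower = y - np.array([0.5, 0.4, 0.7, 0.6, 0.3, 0.8, 0.4, 0.6])
upper = y + np.array([0.4, 0.6, 0.5, 0.7, 0.5, 0.6, 0.3, 0.5])

def picp(y, lo, hi):
    """Prediction interval coverage probability: share of actuals inside the band."""
    return np.mean((y >= lo) & (y <= hi))

def pinaw(y, lo, hi):
    """Prediction interval normalized average width: mean width over data range."""
    return np.mean(hi - lo) / (y.max() - y.min())

coverage = picp(y, lower, upper)
width = pinaw(y, lower, upper)
```

Coverage alone is not enough (arbitrarily wide bands cover everything), which is why width measures such as PINAW are reported alongside PICP when selecting the best set of forecasts.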
NRF
APA, Harvard, Vancouver, ISO, and other styles
24

Bourbonnais, Mathieu Louis. "Spatial analysis of factors influencing long-term stress and health of grizzly bears (Ursus arctos) in Alberta, Canada." Thesis, 2013. http://hdl.handle.net/1828/4909.

Full text
Abstract:
A primary focus of wildlife research is to understand how habitat conditions and human activities impact the health of wild animals. External factors, both natural and anthropogenic that impact the ability of an animal to acquire food and build energy reserves have important implications for reproductive success, avoidance of predators, and the ability to withstand disease, and periods of food scarcity. In the analyses presented here, I quantify the impacts of habitat quality and anthropogenic disturbance on indicators of health for individuals in a threatened grizzly bear population in Alberta, Canada. The first analysis relates spatial patterns of hair cortisol concentrations, a promising indicator of long-term stress in mammals, measured from 304 grizzly bears to a variety of continuous environmental variables representative of habitat quality (e.g., crown closure, landcover, and vegetation productivity), topographic conditions (e.g., elevation and terrain ruggedness), and anthropogenic disturbances (e.g., roads, forest harvest blocks, and oil and gas well-sites). Hair cortisol concentration point data were integrated with continuous variables by creating a stress surface for male and female bears using kernel density estimation validated through bootstrapping. The relationships between hair cortisol concentrations for males and females and environmental variables were quantified using random forests, and landscape scale stress levels for both genders was predicted based on observed relationships. Low female stress levels were found to correspond with regions with high levels of anthropogenic disturbance and activity. High female stress levels were associated primarily with high-elevation parks and protected areas. Conversely, low male stress levels were found to correspond with parks and protected areas and spatially limited moderate to high stress levels were found in regions with greater anthropogenic disturbance. 
Of particular concern for conservation is the observed relationship between low female stress and sink habitats, which have high mortality rates and high energetic costs. Extending the first analysis, the second portion of this research examined the impacts of scale-specific habitat selection and relationships between biology, habitat quality, and anthropogenic disturbance on body condition in 85 grizzly bears, represented using a body condition index. Habitat quality and anthropogenic variables were represented at multiple scales using isopleths of a utilization distribution calculated using kernel density estimation for each bear. Several hypotheses regarding the influence of biology, habitat quality, and anthropogenic disturbance on body condition, quantified using linear mixed-effects models, were evaluated at each habitat selection scale using the small-sample Akaike Information Criterion. Biological factors were influential at all scales, as males had higher body condition than females, and body condition increased with age for both genders. At the scale of most concentrated habitat selection, the biology and habitat quality hypothesis had the greatest support and had a positive effect on body condition. A component of biology, the influence of long-term stress, which had a negative impact on body condition, was most pronounced within the biology and habitat quality hypothesis at this scale. As the scale of habitat selection was represented more broadly, support for the biology and anthropogenic disturbance hypothesis increased. Anthropogenic variables of particular importance were distance decay to roads, density of secondary linear features, and density of forest harvest areas, which had a negative relationship with body condition. Management efforts aimed at promoting landscape conditions beneficial to grizzly bear health should focus on promoting habitat quality in core habitat and limiting anthropogenic disturbance within larger grizzly bear home ranges.
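The cortisol stress surface described above can be sketched as a kernel-weighted smoother over capture locations. The locations, cortisol values and bandwidth below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical bear capture locations (x, y in km) and hair cortisol values.
pts = rng.uniform(0.0, 100.0, size=(50, 2))
cortisol = rng.uniform(1.0, 4.0, 50)

def stress_surface(grid_x, grid_y, pts, values, h=10.0):
    """Cortisol-weighted Gaussian kernel surface over a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    g = np.column_stack([gx.ravel(), gy.ravel()])
    d2 = ((g[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / h ** 2)
    # Weighted average of cortisol at each grid cell (a 2-D kernel smoother).
    return ((w * values).sum(1) / w.sum(1)).reshape(gx.shape)

surface = stress_surface(np.linspace(0, 100, 25), np.linspace(0, 100, 25),
                         pts, cortisol)
```

Because each cell is a convex combination of observed values, the surface interpolates the point measurements into a continuous layer that can then be related to habitat variables.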
APA, Harvard, Vancouver, ISO, and other styles
25

Vasilescu, M. Alex O. "A Multilinear (Tensor) Algebraic Framework for Computer Graphics, Computer Vision and Machine Learning." Thesis, 2012. http://hdl.handle.net/1807/65327.

Full text
Abstract:
This thesis introduces a multilinear algebraic framework for computer graphics, computer vision, and machine learning, particularly for the fundamental purposes of image synthesis, analysis, and recognition. Natural images result from the multifactor interaction between the imaging process, the scene illumination, and the scene geometry. We assert that a principled mathematical approach to disentangling and explicitly representing these causal factors, which are essential to image formation, is through numerical multilinear algebra, the algebra of higher-order tensors. Our new image modeling framework is based on(i) a multilinear generalization of principal components analysis (PCA), (ii) a novel multilinear generalization of independent components analysis (ICA), and (iii) a multilinear projection for use in recognition that maps images to the multiple causal factor spaces associated with their formation. Multilinear PCA employs a tensor extension of the conventional matrix singular value decomposition (SVD), known as the M-mode SVD, while our multilinear ICA method involves an analogous M-mode ICA algorithm. As applications of our tensor framework, we tackle important problems in computer graphics, computer vision, and pattern recognition; in particular, (i) image-based rendering, specifically introducing the multilinear synthesis of images of textured surfaces under varying view and illumination conditions, a new technique that we call ``TensorTextures'', as well as (ii) the multilinear analysis and recognition of facial images under variable face shape, view, and illumination conditions, a new technique that we call ``TensorFaces''. In developing these applications, we introduce a multilinear image-based rendering algorithm and a multilinear appearance-based recognition algorithm. 
As a final, non-image-based application of our framework, we consider the analysis, synthesis and recognition of human motion data using multilinear methods, introducing a new technique that we call ``Human Motion Signatures''.
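The M-mode SVD at the heart of this multilinear framework can be sketched with mode-n unfoldings. The tensor below is a hypothetical stand-in for an image ensemble (e.g., people x views x pixels):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical 3rd-order data tensor.
T = rng.standard_normal((4, 5, 6))

def unfold(tensor, mode):
    """Mode-n unfolding: the chosen mode becomes rows, other modes are flattened."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# M-mode SVD: one orthonormal mode matrix per mode, from the SVD of each unfolding.
mode_matrices = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
                 for m in range(T.ndim)]

# Core tensor: multiply T along each mode by the transposed mode matrix.
core = T
for m, U in enumerate(mode_matrices):
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)

# Multiplying the core back by the mode matrices reconstructs T exactly.
recon = core
for m, U in enumerate(mode_matrices):
    recon = np.moveaxis(np.tensordot(U, np.moveaxis(recon, m, 0), axes=1), 0, m)
```

Each mode matrix spans the variation of one causal factor (people, views, ...), which is what lets the framework disentangle the factors that a single matrix SVD would mix together.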
APA, Harvard, Vancouver, ISO, and other styles