Theses / dissertations on the topic "Matrix linear regression"

Follow this link to see other types of publications on the topic: Matrix linear regression.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the 16 best works (theses / dissertations) for research on the subject "Matrix linear regression".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when available in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Kuljus, Kristi. "Rank Estimation in Elliptical Models : Estimation of Structured Rank Covariance Matrices and Asymptotics for Heteroscedastic Linear Regression". Doctoral thesis, Uppsala universitet, Matematisk statistik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9305.

Full text of the source
Abstract:
This thesis deals with univariate and multivariate rank methods in making statistical inference. It is assumed that the underlying distributions belong to the class of elliptical distributions. The class of elliptical distributions is an extension of the normal distribution and includes distributions with both lighter and heavier tails than the normal distribution. In the first part of the thesis the rank covariance matrices defined via the Oja median are considered. The Oja rank covariance matrix has two important properties: it is affine equivariant and it is proportional to the inverse of the regular covariance matrix. We employ these two properties to study the problem of estimating the rank covariance matrices when they have a certain structure. The second part, which is the main part of the thesis, is devoted to rank estimation in linear regression models with symmetric heteroscedastic errors. We are interested in asymptotic properties of rank estimates. Asymptotic uniform linearity of a linear rank statistic in the case of heteroscedastic variables is proved. The asymptotic uniform linearity property makes it possible to study the asymptotic behaviour of rank regression estimates and rank tests. Existing results are generalized, and it is shown that the Jaeckel estimate is consistent and asymptotically normally distributed for heteroscedastic symmetric errors as well.
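As a rough illustration of the rank regression studied above, the sketch below minimises Jaeckel's rank dispersion with Wilcoxon scores on simulated heteroscedastic data; the data-generating model, score function and Nelder-Mead minimiser are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def jaeckel_dispersion(beta, X, y):
    """Jaeckel's rank dispersion of the residuals, with Wilcoxon scores."""
    resid = y - X @ beta
    n = len(resid)
    ranks = rankdata(resid)                            # ranks of the residuals
    scores = np.sqrt(12.0) * (ranks / (n + 1) - 0.5)   # centred Wilcoxon scores
    return np.sum(scores * resid)

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
# Symmetric, heteroscedastic errors, loosely mimicking the thesis setting.
errors = rng.standard_t(df=3, size=n) * (1.0 + 0.5 * np.abs(X[:, 0]))
y = X @ np.array([2.0, -1.0]) + errors

ls_beta = np.linalg.lstsq(X, y, rcond=None)[0]         # least-squares starting point
fit = minimize(jaeckel_dispersion, ls_beta, args=(X, y), method="Nelder-Mead")
print("rank (Jaeckel) estimate:", fit.x, " least-squares estimate:", ls_beta)
```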
ABNT, Harvard, Vancouver, APA, etc. styles
2

Shrewsbury, John Stephen. "Calibration of trip distribution by generalised linear models". Thesis, University of Canterbury. Department of Civil and Natural Resources Engineering, 2012. http://hdl.handle.net/10092/7685.

Full text of the source
Abstract:
Generalised linear models (GLMs) provide a flexible and sound basis for calibrating gravity models for trip distribution, for a wide range of deterrence functions (from steps to splines), with K factors and geographic segmentation. The Tanner function fitted Wellington Transport Strategy Model data as well as more complex functions did and was insensitive to the formulation of intrazonal and external costs. Weighting from variable expansion factors and interpretation of the deviance under sparsity are addressed. An observed trip matrix is disaggregated and fitted at the household, person and trip levels with consistent results. Hierarchical GLMs (HGLMs) are formulated to fit mixed logit models, but were unable to reproduce the coefficients of simple nested logit models. Geospatial analysis by HGLM showed no evidence of spatial error patterns, either as random K factors or as correlations between them. Equivalence with hierarchical mode choice, duality with trip distribution, regularisation, lorelograms, and the modifiable areal unit problem are considered. Trip distribution is calibrated from aggregate data by the MVESTM matrix estimation package, incorporating period and direction factors in the intercepts. Counts across four screenlines showed a significance similar to that of a thousand-household travel survey. Calibration was possible only in conjunction with trip end data. Criteria for validation against screenline counts were met, but only if allowance was made for error in the trip end data.
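A minimal sketch of the kind of GLM calibration described above: a doubly constrained gravity model with a Tanner deterrence function fitted as a Poisson regression. The zone system, costs and trip counts are invented, and statsmodels is assumed as the GLM library; the thesis itself works with the Wellington Transport Strategy Model data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical origin-destination table: trips and generalised costs for 5 zones.
rng = np.random.default_rng(1)
zones = np.arange(5)
od = pd.DataFrame([(i, j, rng.uniform(2.0, 30.0)) for i in zones for j in zones],
                  columns=["o", "d", "cost"])
od["trips"] = rng.poisson(40.0 * od["cost"] ** 0.5 * np.exp(-0.15 * od["cost"]))

# Doubly constrained gravity model with a Tanner deterrence function c^a * exp(-b*c):
#   log E[T_ij] = origin_i + destination_j + a*log(c_ij) - b*c_ij
fit = smf.glm("trips ~ C(o) + C(d) + np.log(cost) + cost",
              data=od, family=sm.families.Poisson()).fit()
print(fit.params[["np.log(cost)", "cost"]])   # a and -b of the Tanner function
```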
ABNT, Harvard, Vancouver, APA, etc. styles
3

Wang, Shuo. "An Improved Meta-analysis for Analyzing Cylindrical-type Time Series Data with Applications to Forecasting Problem in Environmental Study". Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/386.

Full text of the source
Abstract:
This thesis provides a case study on how wind direction plays an important role in the amount of rainfall in the village of Somió. The primary goal is to illustrate how a meta-analysis, together with circular data analytic methods, helps in analyzing certain environmental issues. The existing GLS meta-analysis combines the merits of the usual meta-analysis, yielding better precision and accounting for covariance among coefficients within each study. It is limited, however, in that information about the correlation between studies is not utilized. Hence, in the proposed meta-analysis, I take the correlations between adjacent studies into account when employing the GLS meta-analysis. In addition, I fit a time series linear-circular regression as a comparable model. By comparing the confidence intervals of parameter estimates, the covariance matrix, AIC, BIC and p-values, I discuss an improvement of the GLS meta-analysis model in its application to the forecasting problem in environmental studies.
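A minimal sketch of a GLS pooling step of the type extended in this thesis, assuming invented study-level estimates, variances and an adjacent-study correlation coefficient; it is not the author's exact model.

```python
import numpy as np

# Hypothetical per-study slope estimates with their sampling variances, plus an
# assumed correlation rho between adjacent studies (the thesis's extension).
b = np.array([0.42, 0.35, 0.51, 0.47])       # coefficient estimate from each study
v = np.array([0.010, 0.015, 0.012, 0.009])   # corresponding sampling variances
rho = 0.3                                    # assumed adjacent-study correlation

# Covariance matrix: diagonal variances plus covariance between neighbouring studies.
Omega = np.diag(v)
for i in range(len(b) - 1):
    cov = rho * np.sqrt(v[i] * v[i + 1])
    Omega[i, i + 1] = Omega[i + 1, i] = cov

X = np.ones((len(b), 1))                     # common-effect design matrix
W = np.linalg.inv(Omega)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ b)       # GLS pooled estimate
se = float(np.sqrt(np.linalg.inv(X.T @ W @ X)[0, 0]))  # its standard error
print(f"pooled estimate {beta[0]:.3f} +/- {1.96 * se:.3f}")
```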
ABNT, Harvard, Vancouver, APA, etc. styles
4

Kim, Jingu. "Nonnegative matrix and tensor factorizations, least squares problems, and applications". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42909.

Full text of the source
Abstract:
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite the success, NMF and NTF have been actively developed only in the past decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms for NMF and NTF are investigated based on a connection from the NMF and the NTF problems to the nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in the NMF and the NTF computation and design a fast algorithm utilizing the structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable for rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve the rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by an observation that features or data items that belong to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
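A small sketch of the ANLS framework the thesis builds on: NMF computed by alternating nonnegativity-constrained least squares. Here each NLS subproblem is solved with SciPy's classical active-set solver; the accelerated block principal pivoting method proposed in the thesis would replace exactly that inner step.

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(A, k, iters=50, seed=0):
    """NMF by alternating nonnegativity-constrained least squares (ANLS).
    Each subproblem is solved column by column with an active-set NLS solver."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        # Fix W, solve min_{H >= 0} ||A - W H||_F, one column of A at a time.
        H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
        # Fix H, solve min_{W >= 0} ||A^T - H^T W^T||_F, one row of A at a time.
        W = np.column_stack([nnls(H.T, A[i, :])[0] for i in range(m)]).T
    return W, H

A = np.abs(np.random.default_rng(1).normal(size=(30, 20)))
W, H = nmf_anls(A, k=5)
print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```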
ABNT, Harvard, Vancouver, APA, etc. styles
5

Nasseri, Sahand. "Application of an Improved Transition Probability Matrix Based Crack Rating Prediction Methodology in Florida’s Highway Network". Scholar Commons, 2008. https://scholarcommons.usf.edu/etd/424.

Full text of the source
Abstract:
With the growing need to maintain roadway systems for the provision of safety and comfort for travelers, network-level decision-making becomes more vital than ever. In order to keep pace with this fast-evolving trend, highway authorities must maintain extremely effective databases to keep track of their highway maintenance needs. The Florida Department of Transportation (FDOT), as a leader in transportation innovations in the U.S., maintains a Pavement Condition Survey (PCS) database of cracking, rutting, and ride information that is updated annually. Crack rating is an important parameter used by FDOT for making maintenance decisions and budget appropriation. By establishing a crack rating threshold below which traveler comfort is not assured, authorities can screen the pavement sections which are in need of Maintenance and Rehabilitation (M&R). Hence, accurate and reliable prediction of crack thresholds is essential to optimize the rehabilitation budget and manpower. Transition Probability Matrices (TPM) can be utilized to accurately predict the deterioration of crack ratings leading to the threshold. Such TPMs are usually developed from historical data or from the opinion of expert or experienced maintenance engineers. When historical data are used to develop TPMs, deterioration trends have been used indiscriminately, i.e. with no discrimination made between pavements that degrade at different rates. However, a more discriminatory method is used in this thesis to develop TPMs based on first classifying pavements into two groups: pavements with relatively high traffic, and pavements with a history of excessive degradation due to delayed rehabilitation. The new approach uses a multiple non-linear regression process to separately optimize TPMs for the two groups selected by prior screening of the database. The developed TPMs are shown to have minimal prediction errors with respect to crack ratings in the database that were not used in the TPM formation. It is concluded that the above two groups are statistically different from each other with respect to the rate of cracking. The observed significant differences in the deterioration trends would provide a valuable tool for the authorities in making critical network-level decisions. The same methodology can be applied by other transportation agencies based on the corresponding databases.
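A toy sketch of how a transition probability matrix propagates crack ratings year by year; the states, probabilities and threshold are invented, and the non-linear regression used in the thesis to optimize the TPMs is not shown.

```python
import numpy as np

# Hypothetical yearly transition probability matrix over crack-rating states
# 10 (best) .. 6 (threshold); row i gives P(next state | current state).
states = np.array([10, 9, 8, 7, 6])
P = np.array([
    [0.80, 0.15, 0.05, 0.00, 0.00],
    [0.00, 0.75, 0.18, 0.07, 0.00],
    [0.00, 0.00, 0.70, 0.22, 0.08],
    [0.00, 0.00, 0.00, 0.65, 0.35],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # the threshold state is absorbing
])

dist = np.array([1.0, 0.0, 0.0, 0.0, 0.0])    # a section that starts at rating 10
for year in range(1, 11):
    dist = dist @ P                           # propagate the rating distribution one year
    expected = dist @ states
    print(f"year {year}: expected crack rating {expected:.2f}, "
          f"P(at threshold) = {dist[-1]:.2f}")
```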
ABNT, Harvard, Vancouver, APA, etc. styles
6

Кір’ян, М. П. "Веб-система загальноосвітноьої школи з використанням алгоритму оцінювання та збору статистики". Master's thesis, Сумський державний університет, 2019. http://essuir.sumdu.edu.ua/handle/123456789/76750.

Full text of the source
Abstract:
A study of the subject area was carried out, existing solutions were analysed, and methods for solving the stated problem were selected. The external and internal structure of the web application was designed. A web system for a secondary school was developed using an algorithm for grading and collecting statistics. The algorithm was implemented using Bayesian linear regression, since it provides sufficiently high accuracy on a small amount of data. The system was built with modern technologies that will make it easy to maintain and update in the future.
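A minimal sketch of the Bayesian linear regression component mentioned above, using scikit-learn's BayesianRidge on a small invented grade dataset; the actual features and data of the school web system are assumptions here.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

# Hypothetical small training set: [average of previous grades, attendance rate]
# -> final grade, mimicking the small-data setting where Bayesian linear
# regression remains reasonably accurate.
X = np.array([[9.0, 0.95], [7.5, 0.80], [6.0, 0.70], [8.2, 0.90],
              [5.5, 0.60], [7.0, 0.85], [9.5, 0.98], [6.8, 0.75]])
y = np.array([9.2, 7.8, 6.1, 8.5, 5.4, 7.3, 9.7, 6.9])

model = BayesianRidge().fit(X, y)
pred, std = model.predict([[8.0, 0.88]], return_std=True)   # posterior mean and std
print(f"predicted grade {pred[0]:.2f} +/- {std[0]:.2f}")
```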
ABNT, Harvard, Vancouver, APA, etc. styles
7

Bettache, Nayel. "Matrix-valued Time Series in High Dimension". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAG002.

Full text of the source
Abstract:
The objective of this thesis is to model matrix-valued time series in a high-dimensional framework. To this end, the entire study is presented in a non-asymptotic setting. We first provide a test procedure capable of distinguishing, for vectors with a centered stationary distribution, whether their covariance matrix is equal to the identity or has a sparse Toeplitz structure. Secondly, we propose an extension of low-rank matrix linear regression to a regression model with two matrix parameters, which create correlations between the rows and the columns of the output random matrix. Finally, we introduce and estimate a dynamic topic model where the expected value of the observations factorizes into a static matrix and a time-dependent matrix following a simplex-valued autoregressive process of order one.
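As a rough illustration of a regression with two matrix parameters acting on rows and columns, the sketch below fits the bilinear model Y_t = A X_t B + E_t by alternating least squares on simulated data. It ignores the low-rank structure and the non-asymptotic analysis that are central to the thesis, and A and B are only identified up to a common scale factor.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r, s, T = 6, 5, 4, 3, 200                  # Y_t is p x s, X_t is q x r
A_true = rng.normal(size=(p, q))
B_true = rng.normal(size=(r, s))
X = rng.normal(size=(T, q, r))
Y = np.einsum("pq,tqr,rs->tps", A_true, X, B_true) + 0.1 * rng.normal(size=(T, p, s))

# Alternating least squares for the bilinear model Y_t = A X_t B + E_t.
A = rng.normal(size=(p, q))
B = rng.normal(size=(r, s))
for _ in range(50):
    M = np.einsum("tqr,rs->tqs", X, B)           # M_t = X_t B      (q x s)
    S = np.einsum("tqs,tus->qu", M, M)           # sum_t M_t M_t^T  (q x q)
    C = np.einsum("tps,tqs->pq", Y, M)           # sum_t Y_t M_t^T  (p x q)
    A = np.linalg.solve(S, C.T).T                # A = C S^{-1}
    N = np.einsum("pq,tqr->tpr", A, X)           # N_t = A X_t      (p x r)
    G = np.einsum("tpr,tpu->ru", N, N)           # sum_t N_t^T N_t  (r x r)
    D = np.einsum("tpr,tps->rs", N, Y)           # sum_t N_t^T Y_t  (r x s)
    B = np.linalg.solve(G, D)                    # B = G^{-1} D
resid = np.einsum("pq,tqr,rs->tps", A, X, B) - Y
print("relative fit error:", np.linalg.norm(resid) / np.linalg.norm(Y))
```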
ABNT, Harvard, Vancouver, APA, etc. styles
8

Žiupsnys, Giedrius. "Klientų duomenų valdymas bankininkystėje". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20110709_152442-86545.

Full text of the source
Abstract:
This work analyses regularities in a bank's historical client credit data. First, the bank's data repositories are examined in order to understand the banking data. Then, using data samples covering credit repayment history, the insolvency risk of clients is estimated by adapting data mining algorithms and software, starting with information preprocessing and preparation. Various classification algorithms are then applied to build models that classify the available data as accurately as possible and identify insolvent clients. Regression algorithms are also used to build prediction models that estimate the number of days a client is late in repaying a loan. Finally, the data mart models and an information flow diagram are presented, together with the classification and prediction models and algorithms that best fit the given data sets.
ABNT, Harvard, Vancouver, APA, etc. styles
9

NÓBREGA, Caio Santos Bezerra. "Uma estratégia para predição da taxa de aprendizagem do gradiente descendente para aceleração da fatoração de matrizes". Universidade Federal de Campina Grande, 2014. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/362.

Full text of the source
Abstract:
Suggesting the most suitable products to different types of consumers is not a trivial task, despite being a key factor for increasing their satisfaction and loyalty. Because of this, recommender systems have become an important tool for many applications, such as e-commerce, personalized websites and social networks. Recently, matrix factorization has become the most successful technique for implementing recommender systems. The parameters of this model are typically learned by means of numerical methods, such as gradient descent. The performance of gradient descent is directly related to the configuration of the learning rate, which is typically set to a small value so that a local minimum is not missed. As a consequence, the algorithm may take many iterations to converge. Ideally, one wants a learning rate that leads to a local minimum in the early iterations, but this is very difficult to achieve given the high complexity of the search space. Starting with an exploratory study on several recommender-system datasets, we observed that there is an overall linear relationship between the learning rate and the number of iterations needed until convergence. From this, we propose using simple linear regression models to predict, for an unknown dataset, a good value for the initial learning rate. The idea is to estimate a learning rate that drives gradient descent as close as possible to a local minimum in the first iterations. We evaluate our technique on 8 real-world recommender datasets and compare it with the standard matrix factorization learning algorithm, which uses a fixed learning rate over all iterations, and with techniques from the literature that adapt the learning rate. We show that we can reduce the number of iterations by up to 40% compared to the standard approach.
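One way to exploit the reported linear relationship between learning rate and iterations-to-convergence is sketched below: fit a simple linear regression to a few exploratory runs and invert it to suggest an initial learning rate. The numbers are invented and this is only a loose reading of the dissertation's procedure, not its exact method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical exploratory runs: learning rates tried on a dataset and the
# number of gradient-descent iterations matrix factorization needed to converge.
rates = np.array([0.002, 0.005, 0.010, 0.020, 0.030]).reshape(-1, 1)
iters = np.array([480, 310, 190, 95, 60], dtype=float)

# Fit the (roughly) linear relationship iterations ~ a + b * rate ...
reg = LinearRegression().fit(rates, iters)
# ... and invert it to suggest a rate expected to converge within ~30 iterations.
target_iters = 30.0
suggested_rate = (target_iters - reg.intercept_) / reg.coef_[0]
print(f"suggested initial learning rate: {suggested_rate:.4f}")
```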
ABNT, Harvard, Vancouver, APA, etc. styles
10

Cavalcanti, Alexsandro Bezerra. "Aperfeiçoamento de métodos estatísticos em modelos de regressão da família exponencial". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-05082009-170043/.

Full text of the source
Abstract:
In this work, we develop three topics related to exponential family regression models. First, we obtain the asymptotic covariance matrix of order $n^{-2}$, where $n$ is the sample size, of the maximum likelihood estimators corrected by the bias of order $n^{-1}$ in generalized linear models, considering the precision parameter known. Second, we calculate the asymptotic skewness coefficient of order $n^{-1/2}$ of the distribution of the maximum likelihood estimators of the mean parameters and of the precision and dispersion parameters in exponential family nonlinear models, considering that the dispersion parameter is unknown but the same for all observations. Finally, we obtain Bartlett-type correction factors for the score test in exponential family nonlinear models, assuming that the dispersion parameter is modelled by covariates. Monte Carlo simulation studies are carried out to evaluate the results obtained in the three topics.
ABNT, Harvard, Vancouver, APA, etc. styles
11

Söderberg, Max Joel, and Axel Meurling. "Feature selection in short-term load forecasting". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259692.

Full text of the source
Abstract:
This paper investigates the correlation between energy consumption 24 hours ahead and the features used for predicting energy consumption. The features originate from three categories: weather, time and previous energy consumption. The correlations are calculated using Pearson correlation and mutual information. The highest-correlated features were those representing previous energy consumption, followed by temperature and month. Two identical feature sets containing all attributes were obtained by ranking the features according to correlation. Three feature sets were created manually. The first set contained seven attributes representing previous energy consumption over the course of the seven days prior to the day of prediction. The second set consisted of weather and time attributes. The third set consisted of all attributes from the first and second sets. These sets were then compared on different machine learning models. It was found that the set containing all attributes and the set containing previous energy attributes yielded the best performance for each machine learning model. (In this report, the words "attribute" and "feature" are used interchangeably.)
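A minimal sketch of the correlation ranking step: absolute Pearson correlation and mutual information computed between candidate features and the load 24 hours ahead. The synthetic data frame and its columns are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.feature_selection import mutual_info_regression

# Hypothetical dataset: weather, calendar and lagged-load features versus
# the load 24 hours ahead.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "temperature": rng.normal(5, 8, n),
    "month": rng.integers(1, 13, n),
    "load_lag_24h": rng.normal(100, 20, n),
})
df["load_next_24h"] = (0.8 * df["load_lag_24h"] - 1.5 * df["temperature"]
                       + rng.normal(0, 5, n))

target = df.pop("load_next_24h")
pearson = df.apply(lambda col: pearsonr(col, target)[0]).abs()   # |Pearson r|
mi = pd.Series(mutual_info_regression(df, target), index=df.columns)
print(pd.DataFrame({"abs_pearson": pearson, "mutual_info": mi})
        .sort_values("mutual_info", ascending=False))
```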
ABNT, Harvard, Vancouver, APA, etc. styles
12

Fridgeirsdottir, Gudrun A. "The development of a multiple linear regression model for aiding formulation development of solid dispersions". Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/52176/.

Full text of the source
Abstract:
As poor solubility continues to be a problem for new chemical entities (NCEs) in medicines development, the use of and interest in solid dispersions as a formulation-based solution have grown. Solid dispersions, where a drug is typically dispersed in a molecular state within an amorphous water-soluble polymer, present a good strategy to significantly enhance the effective drug solubility and hence bioavailability of drugs. The main drawback of this formulation strategy is the inherent instability of the amorphous form. With the right choice of polymer and manufacturing method, sufficient stability can be accomplished. However, finding the right combination of carrier and manufacturing method can be challenging, being costly in labour, time and materials. Therefore, a knowledge-based support tool built upon a statistically significant data set to help with the formulation process would be of great value in the pharmaceutical industry. Here, 60 solid dispersion formulations were produced using ten poorly soluble, chemically diverse APIs, three commonly used polymers and two manufacturing methods (spray drying and hot-melt extrusion). A long-term stability study, up to one year, was performed on all formulations under accelerated conditions. Samples were regularly checked for the onset of crystallisation during the period, using mainly polarised light microscopy. The stability data showed a large variance in stability between methods, polymers and APIs. No obvious trends could be observed. Using statistical modelling of the experimental data in combination with calculated and predicted physicochemical properties of the APIs, several multiple linear regression (MLR) models were built. These had a good adjusted R2 and most showed good predictability in leave-one-out cross-validations. Additionally, a validation of half of the models (e.g. those based on spray drying) using an external dataset showed excellent predictability, with the correct ranking of formulations and accurate prediction of stability. In conclusion, this work has provided important insight into the complex correlations between the physical stability of amorphous solid dispersions and factors such as manufacturing method, carrier and properties of the API. Due to the expansive number of formulations studied here, which is far greater than previously published in the literature in a single study, more general conclusions can be drawn about these correlations than has previously been possible. This thesis has shown the potential of using well-founded statistical models in the formulation development of solid dispersions and given more insight into the complexity of these systems and how their stability depends on multiple factors.
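A small sketch of the modelling workflow described above: a multiple linear regression evaluated by leave-one-out cross-validation. The descriptors, encodings and response values are invented stand-ins for the formulation data set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical table: API descriptors + encoded process -> measured onset of
# crystallisation (weeks) for a set of solid dispersion formulations.
rng = np.random.default_rng(3)
n = 30
X = np.column_stack([
    rng.uniform(1, 5, n),       # e.g. logP of the API
    rng.uniform(100, 250, n),   # e.g. glass transition temperature (deg C)
    rng.integers(0, 2, n),      # manufacturing method (0 = spray dried, 1 = HME)
])
y = 2.0 + 0.15 * X[:, 1] - 3.0 * X[:, 0] + 5.0 * X[:, 2] + rng.normal(0, 4, n)

model = LinearRegression()
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())   # leave-one-out predictions
q2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"leave-one-out Q2: {q2:.2f}")
```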
ABNT, Harvard, Vancouver, APA, etc. styles
13

Tomek, Peter. "Approximation of Terrain Data Utilizing Splines". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236488.

Full text of the source
Abstract:
For the optimisation of flight trajectories at very low altitude, terrain properties must be taken into account very precisely. Fast and efficient evaluation of terrain data is therefore very important, since the time required for the optimisation must be as short as possible. Moreover, gradient-based methods are used for flight trajectory optimisation, so the function approximating the terrain data must be continuous up to a certain order of derivative. A very promising method for approximating terrain data is the application of multivariate simplex polynomials. The goal of this work is to implement a function that evaluates given terrain data at specified points, together with the gradient, using multivariate splines. The program should evaluate multiple points at once and should work in $n$-dimensional space.
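A minimal sketch of evaluating a spline approximation of terrain data together with its gradient at many points at once. It uses SciPy's tensor-product bivariate spline rather than the multivariate simplex splines developed in the thesis, and the terrain grid is synthetic.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical terrain elevation samples on a regular grid (metres).
x = np.linspace(0.0, 5000.0, 60)
y = np.linspace(0.0, 5000.0, 60)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = 300.0 + 80.0 * np.sin(X / 900.0) * np.cos(Y / 700.0)

# Cubic spline surface: smooth evaluation of elevation and its gradient,
# as required by gradient-based trajectory optimisation.
spline = RectBivariateSpline(x, y, Z, kx=3, ky=3)

pts_x = np.array([1234.5, 2500.0, 4321.0])
pts_y = np.array([987.0, 2500.0, 100.0])
h = spline.ev(pts_x, pts_y)               # elevation at several points at once
dh_dx = spline.ev(pts_x, pts_y, dx=1)     # partial derivative w.r.t. x
dh_dy = spline.ev(pts_x, pts_y, dy=1)     # partial derivative w.r.t. y
print(np.column_stack([h, dh_dx, dh_dy]))
```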
ABNT, Harvard, Vancouver, APA, etc. styles
14

Winkler, Anderson M. "Widening the applicability of permutation inference". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:ce166876-0aa3-449e-8496-f28bf189960c.

Full text of the source
Abstract:
This thesis is divided into three main parts. In the first, we discuss that, although permutation tests can provide exact control of false positives under the reasonable assumption of exchangeability, there are common examples in which global exchangeability does not hold, such as in experiments with repeated measurements or tests in which subjects are related to each other. To allow permutation inference in such cases, we propose an extension of the well known concept of exchangeability blocks, allowing these to be nested in a hierarchical, multi-level definition. This definition allows permutations that retain the original joint distribution unaltered, thus preserving exchangeability. The null hypothesis is tested using only a subset of all otherwise possible permutations. We do not need to explicitly model the degree of dependence between observations; rather the use of such permutation scheme leaves any dependence intact. The strategy is compatible with heteroscedasticity and can be used with permutations, sign flippings, or both combined. In the second part, we exploit properties of test statistics to obtain accelerations irrespective of generic software or hardware improvements. We compare six different approaches using synthetic and real data, assessing the methods in terms of their error rates, power, agreement with a reference result, and the risk of taking a different decision regarding the rejection of the null hypotheses (known as the resampling risk). In the third part, we investigate and compare the different methods for assessment of cortical volume and area from magnetic resonance images using surface-based methods. Using data from young adults born with very low birth weight and coetaneous controls, we show that instead of volume, the permutation-based non-parametric combination (NPC) of thickness and area is a more sensitive option for studying joint effects on these two quantities, giving equal weight to variation in both, and allowing a better characterisation of biological processes that can affect brain morphology.
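A minimal sketch of the elementary case underlying this work: a two-sample permutation test that uses a random subset of relabellings under unrestricted exchangeability. The multi-level exchangeability blocks, sign flippings and non-parametric combination developed in the thesis are not shown, and the data are simulated.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test for a difference in means.
    Uses a random subset of all possible relabellings, since exact
    enumeration is rarely feasible."""
    rng = np.random.default_rng(seed)
    data = np.concatenate([x, y])
    labels = np.concatenate([np.ones(len(x)), np.zeros(len(y))]).astype(bool)
    observed = data[labels].mean() - data[~labels].mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)              # relabel under exchangeability
        stat = data[perm].mean() - data[~perm].mean()
        count += abs(stat) >= abs(observed)
    return (count + 1) / (n_perm + 1)               # include the observed relabelling

rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, 40)
patients = rng.normal(0.6, 1.0, 35)
print("p =", permutation_pvalue(patients, controls))
```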
ABNT, Harvard, Vancouver, APA, etc. styles
15

VLČKOVÁ, Miroslava. "Kvalita účetních dat v řízení podniku". Doctoral thesis, 2014. http://www.nusl.cz/ntk/nusl-177497.

Full text of the source
Abstract:
This dissertation thesis deals with the analysis of the quality of accounting data needed for company management and decision-making processes. The main objective of this thesis is to evaluate accounting data quality according to selected criteria which causally affect this quality. The focus is placed on the proposal of a model suitable for evaluation of accounting data quality for management purposes. The author focused on the issue of the quality of accounting data, the determination of criteria negatively influencing this quality and their impact on company management. Particular criteria were determined for both financial and managerial accounting, as a basic source of information for value management. The next step was to conduct procedures for calculating the weights of the particular criteria and to create evaluation models of accounting data quality. Based on the model of the quality of financial accounting, a multiple regression analysis was applied together with a stepwise analysis in order to determine a relationship between the accounting data quality and selected financial indicators. In the context of management accounting, the work analysed to what extent Czech companies use management accounting and what knowledge company managers possess in this field. After the evaluation of the conducted analyses, an implementation guide of management accounting for small and medium-sized enterprises was created, which is included in the appendix of this thesis.
ABNT, Harvard, Vancouver, APA, etc. styles
16

Horáček, Jaroslav. "Intervalové lineární a nelineární systémy". Doctoral thesis, 2019. http://www.nusl.cz/ntk/nusl-408132.

Full text of the source
Abstract:
First, basic aspects of interval analysis, the roles of intervals and their applications are addressed. Then, various classes of interval matrices are described and their relations are depicted. This material forms a prelude to the unifying theme of the rest of the work: solving interval linear systems. Several methods for enclosing the solution set of square and overdetermined interval linear systems are covered and compared. For square systems the new shaving method is introduced; for overdetermined systems the new subsquares approach is introduced. Detecting unsolvability and solvability of such systems is discussed and several polynomial conditions are compared. The two strongest conditions are proved to be equivalent under a certain assumption. Solving of interval linear systems is used to approach other problems in the rest of the work. Computing enclosures of determinants of interval matrices is addressed. NP-hardness of both relative and absolute approximation is proved. A new method based on solving square interval linear systems and Cramer's rule is designed. Various classes of matrices with polynomially computable bounds on the determinant are characterized. Solving of interval linear systems is also used to compute least squares linear and nonlinear interval regression. It is then applied to real...
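A small sketch of one classical way to enclose the united solution set of a square interval linear system, via preconditioning with an approximate midpoint inverse. Directed rounding is ignored, so the result is illustrative rather than rigorously verified; the shaving and subsquares methods of the thesis are not reproduced, and the example system is invented.

```python
import numpy as np

# Interval system [A] x = [b] in midpoint-radius form:
# A in [Ac - Ar, Ac + Ar], b in [bc - br, bc + br].
Ac = np.array([[4.0, -1.0], [-1.0, 3.0]])
Ar = np.full((2, 2), 0.05)
bc = np.array([1.0, 2.0])
br = np.full(2, 0.1)

R = np.linalg.inv(Ac)            # preconditioner: approximate inverse of the midpoint
x0 = R @ bc                      # approximate solution of the midpoint system

# For any A in [A], b in [b]:  x - x0 = R(b - A x0) + (I - R A)(x - x0),
# hence |x - x0| <= q + M |x - x0| with the componentwise bounds below.
q = np.abs(R @ (bc - Ac @ x0)) + np.abs(R) @ (br + Ar @ np.abs(x0))
M = np.abs(np.eye(2) - R @ Ac) + np.abs(R) @ Ar

assert np.max(np.abs(np.linalg.eigvals(M))) < 1, "interval matrix not strongly regular"
rad = np.linalg.solve(np.eye(2) - M, q)   # componentwise radius of the enclosure
print("enclosure of the united solution set (per component):")
print(np.column_stack([x0 - rad, x0 + rad]))
```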
ABNT, Harvard, Vancouver, APA, etc. styles