Doctoral dissertations on the topic "M-statistics"

Click this link to see other types of publications on the topic: M-statistics.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Browse the 50 best doctoral dissertations on the topic "M-statistics".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever the relevant details are present in the work's metadata.

Browse doctoral dissertations from a wide variety of disciplines and assemble the corresponding bibliography.

1

Fluck, Elody [author], and M. [academic supervisor] Kunz. "Hail statistics for European countries / Elody Fluck ; Betreuer: M. Kunz". Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1153828693/34.

2

Aduba, Chukwuemeka Nnabuife. "N-Player Statistical Nash Game Control: M-th Cost Cumulant Optimization". Diss., Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/298838.

Abstract:
Electrical and Computer Engineering
Ph.D.
Game theory is the study of strategic interactions, involving both conflict and cooperation, among multiple decision makers called players, with applications in disciplines as diverse as economics, biology, management, communication networks, electric power systems, and control. This dissertation studies a statistical differential game problem in which a finite number N of players optimize their system performance by shaping the distribution of their cost functions through cost cumulants. The research integrates game theory with statistical optimal control theory and considers a statistical Nash non-cooperative nonzero-sum game for a nonlinear dynamic system with nonquadratic cost functions. The objective of the statistical Nash game is to find the equilibrium solution at which no player has an incentive to deviate as long as the other players maintain their equilibrium strategies. The necessary condition for the existence of the Nash equilibrium solution is given for the m-th cumulant cost optimization using the Hamilton-Jacobi-Bellman (HJB) equations, and the sufficient condition, a verification theorem, is given as well. Because solving the HJB equations is not trivial even for relatively low-dimensional game problems, we propose a neural-network approximation method to find the solution of the HJB partial differential equations for the statistical game problem, and a proof is given that the approximate solution converges to the exact solution. In addition, numerical examples are provided to demonstrate the applicability of the proposed theoretical developments.
Temple University--Theses
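The "m-th cost cumulant" idea above can be made concrete with a small Monte Carlo sketch (not from the dissertation; the two cost distributions below are invented): two strategies with the same mean cost but different second cumulant, which is exactly what a cumulant-based optimizer distinguishes.

```python
import random

def cost_cumulants(costs, m=3):
    """Estimate the first m (<= 3) cumulants of a sampled cost distribution.
    For orders 2 and 3 the cumulants coincide with the central moments."""
    n = len(costs)
    mean = sum(costs) / n

    def central(p):
        return sum((c - mean) ** p for c in costs) / n

    return [mean, central(2), central(3)][:m]

random.seed(0)
# hypothetical cost samples from two candidate control strategies
costs_a = [random.gauss(10.0, 2.0) for _ in range(50_000)]
costs_b = [random.gauss(10.0, 4.0) for _ in range(50_000)]

k_a = cost_cumulants(costs_a)
k_b = cost_cumulants(costs_b)
# equal first cumulant (mean cost), but strategy A has a much smaller
# second cumulant (variance): a 2nd-cumulant optimizer would prefer A
```

A mean-cost optimizer (first cumulant only) would see the two strategies as equivalent; shaping higher cumulants is what separates them.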
3

Berta, Zachory Kaczmarczyk. "Super-Earth and Sub-Neptune Exoplanets: a First Look from the MEarth Project". Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10833.

Abstract:
Exoplanets that transit nearby M dwarfs allow us to measure the sizes, masses, and atmospheric properties of distant worlds. Between 2008 and 2013, we searched for such planets with the MEarth Project, a photometric survey of the closest and smallest main-sequence stars. This thesis uses the first planet discovered with MEarth, the warm 2.7 Earth radius exoplanet GJ1214b, to explore the possibilities that planets transiting M dwarfs provide.
Astronomy
4

Lewis, John Robert. "Bayesian Restricted Likelihood Methods". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1407505392.

5

Savatovic, Anita, and Mejra Cakic. "Estimating Optimal Checkpoint Intervals Using GPSS Simulation". Thesis, Linköping University, Department of Mathematics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8558.

Abstract:

In this project we illustrate how queueing simulation may be used to find the optimal interval for checkpointing problems and compare results with theoretical computations for simple systems that may be treated analytically.

We consider a relatively simple model of an internet banking facility. From time to time, the application server breaks down. The information held at the time of the breakdown has to be passed on to the backup server before service can be resumed. To make the changeover as efficient as possible, information on the state of users' accounts is saved at regular intervals. This is known as checkpointing.

Firstly, we use GPSS (a queueing simulation tool) to find, by simulation, an optimal checkpointing interval that maximises the efficiency of the server. Two measures of efficiency are considered: the availability of the server and the average time a customer spends in the system. Secondly, we investigate how far queueing theory can go toward providing an analytic solution to the problem, and whether it agrees with the results obtained through simulation.

The analysis shows that checkpointing is not necessary if breakdowns occur frequently and log reading after failure does not take much time. Otherwise, checkpointing is necessary and the analysis shows how GPSS may be used to obtain the optimal checkpointing interval. Relatively complicated systems may be simulated, where there are no analytic tools available. In simple cases, where theoretical methods may be used, the results from our simulations correspond with the theoretical calculations.
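A rough analytic counterpart to the trade-off described above can be sketched in a few lines (a toy model under invented numbers, not the GPSS simulation itself: constant checkpoint cost `c`, failure rate `lam`, and on average half an interval of work redone after a failure).

```python
import math

def expected_overhead(T, c, lam):
    """Fraction of time lost per unit time in a toy checkpoint model:
    c / T       -- checkpoint cost amortised over the interval T,
    lam * T / 2 -- expected redone work, a failure hitting on average
                   halfway through an interval."""
    return c / T + lam * T / 2

def optimal_interval(c, lam):
    grid = [i / 100 for i in range(1, 2001)]  # candidate intervals 0.01..20
    return min(grid, key=lambda T: expected_overhead(T, c, lam))

# hypothetical numbers: 0.1 time units per checkpoint, one failure per 50 units
c, lam = 0.1, 0.02
T_star = optimal_interval(c, lam)
# the first-order approximation (Young's formula) gives sqrt(2c/lam) ~= 3.16
```

Minimising `c/T + lam*T/2` over `T` recovers the classical square-root rule; frequent failures with cheap log reading push the optimum toward "no checkpointing", matching the conclusion above.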

6

Kouamo, Olaf. "Analyse des séries chronologiques à mémoire longue dans le domaine des ondelettes". Phd thesis, Paris, Télécom ParisTech, 2011. https://pastel.hal.science/pastel-00565656.

Abstract:
Our work concerns the statistics of long-memory processes, for which we propose and validate statistical tools derived from wavelet analysis. In recent years these methods for estimating the memory parameter have become very popular; however, theoretical results rigorously validating the estimators for the classical semiparametric long-memory models are recent (cf. the articles of E. Moulines, F. Roueff and M. Taqqu since 2007). The results proposed in this thesis are a direct extension of that work. We propose a test procedure for detecting changes in the generalized spectral density; in the wavelet domain, this becomes a test for changes in the variance of the wavelet coefficients. We then develop a fast algorithm for computing the covariance matrix of the wavelet coefficients, with two applications: estimating the memory parameter d, and improving the test proposed in the previous chapter. Finally, we study robust estimators of the memory parameter d in the wavelet domain, based on three estimators of the variance of the wavelet coefficients at a given scale. The major contribution of this chapter is the central limit theorem obtained for the three estimators of d for Gaussian M(d) processes.
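The principle of detecting a rupture through the variance of wavelet coefficients can be illustrated with a first-level Haar transform on synthetic data (an illustration of the idea only, with invented parameters, not the test statistic developed in the thesis):

```python
import random
import statistics

def haar_details(x):
    """First-level Haar wavelet detail coefficients of a series."""
    return [(x[2 * k] - x[2 * k + 1]) / 2 ** 0.5 for k in range(len(x) // 2)]

random.seed(1)
# a series whose innovation standard deviation doubles halfway through
x = [random.gauss(0, 1) for _ in range(2000)] + \
    [random.gauss(0, 2) for _ in range(2000)]

d = haar_details(x)
half = len(d) // 2
v1 = statistics.pvariance(d[:half])   # coefficient variance, first segment
v2 = statistics.pvariance(d[half:])   # coefficient variance, second segment
# a rupture in the generalized spectral density shows up as a jump in the
# wavelet-coefficient variance between the two segments
ratio = v2 / v1
```

Here the true variance ratio is 4; a change-point test on the coefficient variances would reject homogeneity easily.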
7

Kouamo, Olaf. "Analyse des séries chronologiques à mémoire longue dans le domaine des ondelettes". Phd thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00565656.

8

Elizalde, Torrent Sergi. "Consecutive patterns and statistics on restricted permutations". Doctoral thesis, Universitat Politècnica de Catalunya, 2004. http://hdl.handle.net/10803/5839.

Abstract:
The subject of this thesis is the enumeration of pattern-avoiding permutations with respect to certain statistics, and the enumeration of permutations avoiding generalized patterns.
After introducing some definitions concerning patterns and statistics on permutations and Dyck paths, we begin by studying the distribution of the statistics "number of fixed points" and "number of excedances" on permutations avoiding a pattern of length 3. One of the main results is that the joint distribution of this pair of parameters is the same on 321-avoiding permutations as on 132-avoiding permutations, which generalizes a recent theorem of Robertson, Saracino and Zeilberger. We prove this result by giving a bijection that preserves the two statistics in question together with one further parameter. The key idea is to introduce a new class of statistics on Dyck paths based on what we call tunnels.
Next we consider the same pair of statistics on permutations that simultaneously avoid two or more patterns of length 3. We solve all cases, giving the corresponding generating functions, and some cases are generalized to patterns of arbitrary length. We also describe the distribution of these parameters on involutions avoiding any subset of patterns of length 3. The main technique is to use bijections between pattern-avoiding permutations and certain kinds of Dyck paths, so that the permutation statistics we consider correspond to Dyck-path statistics that are easier to enumerate.
We then present a new family of bijections from the set of Dyck paths to itself that send statistics arising in the study of pattern-avoiding permutations to classical Dyck-path statistics whose distribution is easily obtained. In particular, this yields a simple bijective proof of the equidistribution of fixed points on 321-avoiding and on 132-avoiding permutations. We then give new interpretations of the Catalan numbers and the Fine numbers. We consider a class of permutations defined in terms of noncrossing matchings of 2n points on a circle, study its structure and some of its properties, and give the distribution of several statistics on these permutations.
In the next part of the thesis we introduce a different notion of forbidden patterns, requiring the elements forming the pattern to occur in consecutive positions of the permutation. More generally, we study the distribution of the number of occurrences of subwords (consecutive subsequences) in permutations. We solve the problem in several cases, depending on the shape of the subword, obtaining the corresponding bivariate exponential generating functions as solutions of certain linear differential equations. The method is based on the representation of permutations as increasing binary trees and on symbolic methods.
The final part deals with generalized patterns, which extend both classical patterns and subwords. For some patterns we obtain new enumerative results. Finally, we study the asymptotic behaviour of the number of permutations of size n avoiding a fixed generalized pattern as n tends to infinity, and we give lower and upper bounds on the number of permutations avoiding certain patterns.
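The equidistribution result quoted above (the joint distribution of fixed points and excedances is the same on 321-avoiding and 132-avoiding permutations) is easy to verify by brute force for small n:

```python
from itertools import combinations, permutations
from collections import Counter

def contains(perm, pattern):
    """True if perm has a classical occurrence of the given pattern."""
    k = len(pattern)
    for idx in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idx]
        if all((vals[a] < vals[b]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def fp_exc(perm):
    """(fixed points, excedances) of a permutation in one-line notation."""
    fp = sum(1 for i, v in enumerate(perm, 1) if v == i)
    exc = sum(1 for i, v in enumerate(perm, 1) if v > i)
    return fp, exc

n = 6
perms = list(permutations(range(1, n + 1)))
stats_321 = Counter(fp_exc(p) for p in perms if not contains(p, (3, 2, 1)))
stats_132 = Counter(fp_exc(p) for p in perms if not contains(p, (1, 3, 2)))
# both avoidance classes are counted by the Catalan number C_6 = 132,
# and the joint distributions of (fixed points, excedances) coincide
```

This checks the theorem exhaustively for n = 6; the thesis's contribution is a statistic-preserving bijection that explains it for all n.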
9

Nardini, C. "STATISTICS IN CLINICAL TRIALS: OUT OF CONDITION. SOME PROBLEMS OF UNCONDITIONAL INFERENCE AT THE CROSSROADS OF METHODOLOGY AND ETHICS". Doctoral thesis, Università degli Studi di Milano, 2013. http://hdl.handle.net/2434/218889.

Abstract:
Randomized controlled trials are experiments for the evaluation of a new treatment option, currently representing the "gold standard" in health care assessment. Clinical trials fulfill a double role of evidence production and of regulatory oversight in sanctioning new drugs' approval into the drug market. For this reason trials are large and tightly regulated enterprises that have to comply with ethical requirements while at the same time maintaining high epistemic standards, in a balance that becomes increasingly difficult to strike as research questions become more and more sophisticated. The statistical framework adopted for designing and analysing trials represents a relevant part of this architecture. Statistical methodology influences such aspects as which inferences are licensed on the basis of data and what degree of support is granted to a hypothesis. Thus, statistics plays a fundamental role as a gatekeeper both in warranting the ethical permissibility of a trial and in licensing conclusions about the most effective treatment. Certain widely accepted statistical principles have an impact on the way results from medical studies are evaluated. One such principle is conditioning, i.e. the possibility of incorporating an assessment of the strength of evidence into inferential statements of confidence. Currently, conditioning is not part of the statistical method in use, although it is upheld by alternative statistical paradigms such as the Bayesian one. In my thesis I analyze the impact of conditioning upon the ethical, epistemic and regulatory facets of trials, and I suggest the possibility of incorporating conditioning within the current statistical paradigm of clinical research.
10

Zhang, Zongjun. "Adaptive Robust Regression Approaches in data analysis and their Applications". University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1445343114.

11

Nguyen, Thi Mong Ngoc. "Estimation récursive pour les modèles semi-paramétriques". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2010. http://tel.archives-ouvertes.fr/tel-00938607.

Abstract:
In this thesis we are interested in the semiparametric regression model of the form y = f(θ'x, ε), with x ∈ R^p and y ∈ R. Our goal is to study the estimation of the parameters θ and f of this model with recursive methods. In the first part, the approach we develop builds on a method introduced by Li (1991), called Sliced Inverse Regression (SIR). We propose recursive SIR methods for estimating the parameter θ. In the particular case where the number of slices equals 2, an analytic expression of the estimator of the direction of θ can be obtained; we propose a recursive form of this estimator, as well as a recursive form of the estimator of the matrix of interest. We then propose a new approach, called "SIRoneslice" (recursive or not), based on using the information contained in a single optimal slice (to be chosen among an arbitrary number of slices), together with a "naive bootstrap" criterion for choosing the number of slices. Asymptotic results are given, and a simulation study demonstrates the good numerical behaviour of the proposed recursive approaches and the main advantage, in terms of computation time, of using the recursive versions of SIR and SIRoneslice. In the second part, we work on valvometry data measured on bivalves. On these data, we compare the numerical behaviour of three nonparametric estimators of the regression function: the Nadaraya-Watson estimator, the recursive Nadaraya-Watson estimator, and the Révész estimator, which is also recursive.
In the last part of this thesis, we propose a method that combines the recursive estimation of the link function f by the recursive Nadaraya-Watson estimator with the estimation of the parameter θ via the recursive SIR estimator. We establish a law of large numbers as well as a central limit theorem, and we illustrate these theoretical results with simulations showing the good numerical behaviour of the proposed estimation method.
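A minimal batch (non-recursive) sketch of the two-slice SIR step described above, with an invented cubic link function and coefficients, illustrates the estimator whose recursive form the thesis develops:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 4
theta = np.array([1.0, 2.0, 0.0, 0.0])
theta /= np.linalg.norm(theta)

X = rng.standard_normal((n, p))
y = (X @ theta) ** 3 + 0.5 * rng.standard_normal(n)  # y = f(theta'x, eps)

# SIR with H = 2 slices: split the sample on the median of y
Xc = X - X.mean(axis=0)
lo = y <= np.median(y)
m_lo, m_hi = Xc[lo].mean(axis=0), Xc[~lo].mean(axis=0)

# between-slice covariance matrix (each slice has weight ~ 1/2)
M = 0.5 * (np.outer(m_lo, m_lo) + np.outer(m_hi, m_hi))
Sigma = np.cov(Xc, rowvar=False)

# the leading eigenvector of Sigma^{-1} M estimates the direction of theta
vals, vecs = np.linalg.eig(np.linalg.solve(Sigma, M))
est = np.real(vecs[:, np.argmax(np.real(vals))])
est /= np.linalg.norm(est)
cos_sim = abs(est @ theta)  # close to 1 when the direction is recovered
```

With two slices the between-slice matrix has rank one, which is why a closed-form (and hence recursive) expression of the direction estimator becomes possible.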
12

Pun, Man-chi, and 潘敏芝. "M out of n bootstrap for nonstandard M-estimation: consistency and robustness". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29833590.

13

Hou, Yunkui. "Stochastic optimal control of G/M/1 queueing system with breakdowns /". The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487694702782079.

14

Zhan, Yihui. "Bootstrapping functional M-estimators /". Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/8958.

15

Tra, Yolande. "A study of nonparametric estimation of location using L-, M- and R-estimators". Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/917018.

Abstract:
Nonparametric procedures use weak assumptions, such as continuity of the distribution, so that they are applicable to a large class F of underlying distributions. Statistics that are distribution-free over F may be constructed to be estimators of location. Estimators of location derived from rank tests are called R-estimators; they are robust estimators. The concept of robust estimation is based on a neighborhood of parametric models called "gross error models". The M-estimator, a maximum-likelihood-type estimator, arose from such investigations using the normal distribution. A third big class of estimators is the class of linear combinations of order statistics, called L-estimators. They are constructed as averages of quantiles; examples are the sample mean and the sample median. In this thesis, some definitions and results involving these three basic classes of estimators are provided. For each class, an example of a robust estimator is presented. Numerical values are given to assess the robustness of each estimator in terms of breakdown point and gross error sensitivity. Further, U-statistics, which are unbiased estimators of location parameters, are used to obtain asymptotically efficient R-estimates.
Department of Mathematical Sciences
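The three classes above can be contrasted on a small invented sample with gross-error contamination: the mean and median are L-estimators, a Huber estimate computed by iteratively reweighted means is an M-estimator, and the Hodges-Lehmann estimate (derived from the Wilcoxon rank test) is an R-estimator. A minimal sketch:

```python
import statistics

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means,
    with the scale fixed at the normalised MAD; k = 1.345 gives about
    95% efficiency at the normal model."""
    mu = statistics.median(x)
    s = statistics.median([abs(v - mu) for v in x]) / 0.6745
    for _ in range(max_iter):
        w = [1.0 if abs(v - mu) <= k * s else k * s / abs(v - mu) for v in x]
        mu_new = sum(wi * vi for wi, vi in zip(w, x)) / sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# a clean sample centred near 0 plus two gross errors
sample = [0.1, -0.2, 0.3, 0.05, -0.1, 0.2, -0.3, 0.15, 50.0, 60.0]

mean_est = statistics.mean(sample)      # L-estimator, breakdown point 0
median_est = statistics.median(sample)  # L-estimator, breakdown point 1/2
huber_est = huber_location(sample)      # M-estimator
# Hodges-Lehmann R-estimator: median of the Walsh averages
walsh = [(a + b) / 2 for i, a in enumerate(sample) for b in sample[i:]]
hl_est = statistics.median(walsh)
```

Only the mean is dragged toward the outliers; the median, Huber, and Hodges-Lehmann estimates all stay near the clean-data centre, illustrating the breakdown-point comparison in the abstract.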
16

Cheung, Kar-yee, and 張家愉. "Iteration of m out of n bootstrap in non-regular cases". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B26652432.

17

宗錦軒 and Kam-hin Chung. "A study of M-out-of-N bootstrap approaches to construction of confidence sets". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B3121521X.

18

Chung, Kam-hin. "A study of M-out-of-N bootstrap approaches to construction of confidence sets /". Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B19853038.

19

Zhang, Yi. "Statistical analyses of transcriptomic responses of M. tuberculosis under environmental stresses". Thesis, Birkbeck (University of London), 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499179.

20

Neves, Vasco. "Étude sur les paramétres stellaires des naines M et leur lien à la formation planétaire". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENY082/document.

Abstract:
At the time of writing of this thesis, more than 900 exoplanets had been announced and about 2700 planet candidates detected by the Kepler space telescope were awaiting confirmation. The very precise spectra and light curves obtained in Doppler and transit surveys allow the in-depth study of the parameters of the host stars, and open the possibility of investigating star-planet correlations. Moreover, determining the stellar parameters precisely is critical for precise determination of the planetary parameters, namely mass, radius, and density. In the case of FGK dwarfs, the determination of stellar parameters is well established and can be used with confidence to study the star-planet relation as well as to obtain precise planetary parameters. However, this is not the case for M dwarfs, the most common stars in the Galaxy. Compared to their hotter cousins, M dwarfs are smaller, colder, and fainter, and therefore harder to study. The biggest challenge regarding M dwarfs is the presence of billions of molecular lines that depress the continuum, making a classical spectral analysis almost impossible. Finding new and innovative ways to overcome this obstacle in order to obtain precise stellar parameters is the main goal of this thesis.
To achieve it, I focused my research on two main avenues: photometric and spectroscopic methods. My initial work aimed to establish a precise photometric metallicity calibration. I could not reach this goal, for lack of FGK+M binaries with good photometric data; it was nevertheless possible, with the available data, to compare the already established photometric calibrations and slightly improve the best of them, as described in Chapter 3. I then focused on spectroscopic approaches to obtain more precise stellar parameters for M dwarfs. To this end, I used high-resolution HARPS spectra and developed a method to measure spectral lines while disregarding the continuum completely. Using this method, I established a new visible-wavelength calibration with a precision of 0.08 dex for [Fe/H] and 80 K for Teff; this work is detailed in Chapter 4. Finally, I also participated in the refinement of the parameters of the star GJ3470 and its planet, where my expertise in the stellar parameters of M dwarfs played an important role. The details of this investigation are presented in Chapter 5.
21

Wallmark, Joakim. "Selection bias when estimating average treatment effects in the M and butterfly structures". Thesis, Umeå universitet, Statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160792.

Abstract:
Due to a phenomenon known as selection bias, the estimator of the average treatment effect (ATE) of a treatment variable on some outcome may be biased. Selection bias, caused by the exclusion of possible units from the studied data, is a major obstacle to valid statistical and causal inferences. It is hard to detect in experimental or observational studies and is introduced when conditioning a sample on a common collider of the treatment and response variables. A certain type of selection bias known as M-bias occurs when conditioning on a pretreatment variable that is part of a particular variable structure, the M structure. In this structure, the collider has no direct causal association with the treatment and outcome variables, but it is indirectly associated with both through ancestors. In this thesis, scenarios where potential M-bias arises were examined in a simulation study. The percentage of bias relative to the true ATE was estimated for each of the scenarios. A continuous collider variable was used, and samples were conditioned to only include units with values on the collider variable above a certain cutoff value. The cutoff value was varied to explore the relationship between the collider and the resulting bias. A variation of the M structure known as the butterfly structure was also studied in a similar fashion. The butterfly structure is known to result in confounding bias when not adjusting for said collider but selection bias when adjustment is done. The results show that selection bias is relatively small compared to bias originating from confounding in the butterfly structure. Increasing the cutoff level in this structure substantially decreases the overall bias of the ATE in almost all of the explored scenarios. The bias was smaller in the M structure than in the butterfly structure in close to all scenarios. For the M structure, the bias was generally smaller for higher cutoff values and insubstantial in some scenarios. 
This occurred because in most of the studied scenarios, a large proportion of the variance of the collider was explained by binary ancestors of said collider. When these ancestors are the primary causes of the collider, increasing the cutoff to a high enough value effectively adjusts for the ancestors. Adjusting for these ancestors will in turn d-separate the treatment and the outcome, which results in an unbiased estimator of the ATE. When conducting studies in practice, the possibility of selection bias should be taken into consideration. Even though this type of bias is usually small even when the causal effects between involved variables are strong, it can still be significant, and an unbiased estimator cannot be taken for granted in the presence of sample selection.
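The selection mechanism described in this abstract can be sketched with a toy Monte Carlo experiment. This is an illustrative sketch only, not the thesis's actual simulation design: the ancestors, coefficients, sample size, and cutoff are invented, and the true ATE is fixed at zero so that any nonzero estimate under selection is attributable to M-bias.

```python
import random

random.seed(1)

def estimate_ate(n=200_000, cutoff=None):
    """Draw one sample from a toy M structure and estimate the ATE as a
    difference in group means.  U1 -> T, U1 -> M <- U2, U2 -> Y, and T has
    NO effect on Y, so the true ATE is 0 and any nonzero estimate is bias."""
    treated, control = [], []
    for _ in range(n):
        u1 = random.random() < 0.5                    # binary ancestor of T and M
        u2 = random.random() < 0.5                    # binary ancestor of Y and M
        m = 2.0 * u1 + 2.0 * u2 + random.gauss(0, 1)  # continuous collider
        t = random.random() < (0.7 if u1 else 0.3)    # treatment, caused by U1 only
        y = 1.5 * u2 + random.gauss(0, 1)             # outcome, caused by U2 only
        if cutoff is not None and m <= cutoff:
            continue                                  # selection on the collider
        (treated if t else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

ate_full = estimate_ate(cutoff=None)   # no selection: estimate close to 0
ate_sel = estimate_ate(cutoff=2.0)     # selection opens T <- U1 -> M <- U2 -> Y
print(ate_full, ate_sel)
```

Conditioning on high values of the collider makes the two ancestors negatively associated among the selected units, which here biases the naive ATE estimate downward even though the treatment is inert.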
Style APA, Harvard, Vancouver, ISO itp.
22

Källberg, David. "Nonparametric Statistical Inference for Entropy-type Functionals". Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-79976.

Pełny tekst źródła
Streszczenie:
In this thesis, we study statistical inference for entropy, divergence, and related functionals of one or two probability distributions. Asymptotic properties of particular nonparametric estimators of such functionals are investigated. We consider estimation from both independent and dependent observations. The thesis consists of an introductory survey of the subject and some related theory and four papers (A-D). In Paper A, we consider a general class of entropy-type functionals which includes, for example, integer order Rényi entropy and certain Bregman divergences. We propose U-statistic estimators of these functionals based on the coincident or epsilon-close vector observations in the corresponding independent and identically distributed samples. We prove some asymptotic properties of the estimators such as consistency and asymptotic normality. Applications of the obtained results related to entropy maximizing distributions, stochastic databases, and image matching are discussed. In Paper B, we provide some important generalizations of the results for continuous distributions in Paper A. The consistency of the estimators is obtained under weaker density assumptions. Moreover, we introduce a class of functionals of quadratic order, including both entropy and divergence, and prove normal limit results for the corresponding estimators which are valid even for densities of low smoothness. The asymptotic properties of a divergence-based two-sample test are also derived. In Paper C, we consider estimation of the quadratic Rényi entropy and some related functionals for the marginal distribution of a stationary m-dependent sequence. We investigate asymptotic properties of the U-statistic estimators for these functionals introduced in Papers A and B when they are based on a sample from such a sequence. We prove consistency, asymptotic normality, and Poisson convergence under mild assumptions for the stationary m-dependent sequence. 
Applications of the results to time-series databases and entropy-based testing for dependent samples are discussed. In Paper D, we further develop the approach for estimation of quadratic functionals with m-dependent observations introduced in Paper C. We consider quadratic functionals for one or two distributions. The consistency and rate of convergence of the corresponding U-statistic estimators are obtained under weak conditions on the stationary m-dependent sequences. Additionally, we propose estimators based on incomplete U-statistics and show their consistency properties under more general assumptions.
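The epsilon-close-pair idea behind the U-statistic estimators of Papers A and B can be sketched for the simplest case: the quadratic functional q = ∫f² of a one-dimensional density, from which the quadratic Rényi entropy is −log q. The sample size, epsilon, and normalization below are illustrative choices, not the papers' exact definitions.

```python
import math
import random

random.seed(2)

def quadratic_functional(xs, eps):
    """U-statistic estimate of q = integral of f(x)^2 dx for a 1-d sample,
    built from the proportion of eps-close pairs: for small eps,
    P(|X - X'| < eps) is approximately 2 * eps * q."""
    n = len(xs)
    close = sum(1 for i in range(n) for j in range(i + 1, n)
                if abs(xs[i] - xs[j]) < eps)
    n_pairs = n * (n - 1) / 2
    return close / (n_pairs * 2 * eps)

xs = [random.random() for _ in range(2000)]   # Uniform(0, 1): q = 1
q_hat = quadratic_functional(xs, eps=0.05)
h2_hat = -math.log(q_hat)                     # quadratic Renyi entropy, true value 0
print(q_hat, h2_hat)
```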
Style APA, Harvard, Vancouver, ISO itp.
23

Riou-Durand, Lionel. "Theoretical contributions to Monte Carlo methods, and applications to Statistics". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLG006/document.

Pełny tekst źródła
Streszczenie:
La première partie de cette thèse concerne l'inférence de modèles statistiques non normalisés. Nous étudions deux méthodes d'inférence basées sur de l'échantillonnage aléatoire : Monte-Carlo MLE (Geyer, 1994), et Noise Contrastive Estimation (Gutmann et Hyvarinen, 2010). Cette dernière méthode fut soutenue par une justification numérique d'une meilleure stabilité, mais aucun résultat théorique n'avait encore été prouvé. Nous prouvons que Noise Contrastive Estimation est plus robuste au choix de la distribution d'échantillonnage. Nous évaluons le gain de précision en fonction du budget computationnel. La deuxième partie de cette thèse concerne l'échantillonnage aléatoire approché pour les distributions de grande dimension. La performance de la plupart des méthodes d’échantillonnage se détériore rapidement lorsque la dimension augmente, mais plusieurs méthodes ont prouvé leur efficacité (e.g. Hamiltonian Monte Carlo, Langevin Monte Carlo). Dans la continuité de certains travaux récents (Eberle et al., 2017 ; Cheng et al., 2018), nous étudions certaines discrétisations d’un processus connu sous le nom de kinetic Langevin diffusion. Nous établissons des vitesses de convergence explicites vers la distribution d'échantillonnage, qui ont une dépendance polynomiale en la dimension. Notre travail améliore et étend les résultats de Cheng et al. pour les densités log-concaves
The first part of this thesis concerns the inference of unnormalized statistical models. We study two methods of inference based on sampling, known as Monte-Carlo MLE (Geyer, 1994) and Noise Contrastive Estimation (Gutmann and Hyvarinen, 2010). The latter method was supported by numerical evidence of improved stability, but no theoretical results had yet been proven. We prove that Noise Contrastive Estimation is more robust to the choice of the sampling distribution. We assess the gain of accuracy depending on the computational budget. The second part of this thesis concerns approximate sampling for high-dimensional distributions. The performance of most samplers deteriorates fast when the dimension increases, but several methods have proven their effectiveness (e.g. Hamiltonian Monte Carlo, Langevin Monte Carlo). In the continuity of some recent works (Eberle et al., 2017; Cheng et al., 2018), we study some discretizations of the kinetic Langevin diffusion process and establish explicit rates of convergence towards the sampling distribution that depend polynomially on the dimension. Our work improves and extends the results established by Cheng et al. for log-concave densities.
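As a rough illustration of the second part, a simple Euler-type discretization of the kinetic Langevin diffusion can be sampled in a few lines. This is a generic sketch with invented step size and friction parameters, not the specific schemes analysed in the thesis.

```python
import math
import random

random.seed(3)

def kinetic_langevin(grad_u, n_steps=100_000, h=0.05, gamma=2.0):
    """Euler-type discretization of the kinetic Langevin diffusion
        dX_t = V_t dt,
        dV_t = -(gamma * V_t + grad_u(X_t)) dt + sqrt(2 * gamma) dB_t,
    returning the simulated positions."""
    x, v = 0.0, 0.0
    xs = []
    for _ in range(n_steps):
        v += -h * (gamma * v + grad_u(x)) + math.sqrt(2 * gamma * h) * random.gauss(0, 1)
        x += h * v
        xs.append(x)
    return xs

# Target N(0, 1): potential U(x) = x^2 / 2, so grad U(x) = x.
xs = kinetic_langevin(lambda x: x)[10_000:]   # discard burn-in
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)
```

For this log-concave target, the empirical mean and variance of the chain settle near the target's values of 0 and 1, up to Monte Carlo noise and an O(h) discretization bias.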
Style APA, Harvard, Vancouver, ISO itp.
24

Barkino, Iliam. "Summary Statistic Selection with Reinforcement Learning". Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-390838.

Pełny tekst źródła
Streszczenie:
Multi-armed bandit (MAB) algorithms can be used to select a subset of the k most informative summary statistics from a pool of m possible summary statistics by reformulating the subset selection problem as a MAB problem. This is suggested by experiments that tested five MAB algorithms (Direct, Halving, SAR, OCBA-m, and Racing) on the reformulated problem and compared the results to two established subset selection algorithms (Minimizing Entropy and Approximate Sufficiency). The MAB algorithms yielded errors on par with the established methods, but in only a fraction of the time. Establishing MAB algorithms as a new standard for summary statistic subset selection could therefore save numerous scientists substantial amounts of time when selecting summary statistics for approximate Bayesian computation.
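The reformulation can be sketched with a generic halving-style bandit that keeps the better half of the surviving arms each round. This is an illustrative toy, not the thesis's Direct/Halving/SAR/OCBA-m/Racing implementations, and the Bernoulli payoffs standing in for summary-statistic informativeness are invented.

```python
import random

random.seed(4)

def halving_top_k(pull, m, k, budget_per_round=200):
    """Halving-style bandit for top-k selection: sample every surviving
    arm, then discard the worse half (never going below k arms)."""
    alive = list(range(m))
    while len(alive) > k:
        means = {a: sum(pull(a) for _ in range(budget_per_round)) / budget_per_round
                 for a in alive}
        alive.sort(key=lambda a: means[a], reverse=True)
        alive = alive[:max(k, len(alive) // 2)]
    return sorted(alive)

# Hypothetical "informativeness" of 8 candidate summary statistics:
# pulling arm a pays 1 with probability p[a]; the best 3 arms are 5, 6, 7.
p = [0.10, 0.15, 0.20, 0.25, 0.30, 0.70, 0.80, 0.90]
best = halving_top_k(lambda a: 1 if random.random() < p[a] else 0, m=8, k=3)
print(best)
```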
Style APA, Harvard, Vancouver, ISO itp.
25

Haman, John T. "The energy goodness-of-fit test and E-M type estimator forasymmetric Laplace distributions". Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1524756256837676.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
26

Parolini, Giuditta <1978&gt. ""Making Sense of Figures": Statistics, Computing and Information Technologies in Agriculture and Biology in Britain, 1920s-1960s". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5289/1/Parolini_Giuditta_Tesi.pdf.

Pełny tekst źródła
Streszczenie:
Throughout the twentieth century statistical methods have increasingly become part of experimental research. In particular, statistics has made quantification processes meaningful in the soft sciences, which had traditionally relied on activities such as collecting and describing diversity rather than timing variation. The thesis explores this change in relation to agriculture and biology, focusing on analysis of variance and experimental design, the statistical methods developed by the mathematician and geneticist Ronald Aylmer Fisher during the 1920s. The role that Fisher’s methods acquired as tools of scientific research, side by side with the laboratory equipment and the field practices adopted by research workers, is here investigated bottom-up, beginning with the computing instruments and the information technologies that were the tools of the trade for statisticians. Four case studies show under several perspectives the interaction of statistics, computing and information technologies, giving on the one hand an overview of the main tools – mechanical calculators, statistical tables, punched and index cards, standardised forms, digital computers – adopted in the period, and on the other pointing out how these tools complemented each other and were instrumental for the development and dissemination of analysis of variance and experimental design. The period considered is the half-century from the early 1920s to the late 1960s, the institutions investigated are Rothamsted Experimental Station and the Galton Laboratory, and the statisticians examined are Ronald Fisher and Frank Yates.
Style APA, Harvard, Vancouver, ISO itp.
27

Parolini, Giuditta <1978&gt. ""Making Sense of Figures": Statistics, Computing and Information Technologies in Agriculture and Biology in Britain, 1920s-1960s". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5289/.

Pełny tekst źródła
Streszczenie:
Throughout the twentieth century statistical methods have increasingly become part of experimental research. In particular, statistics has made quantification processes meaningful in the soft sciences, which had traditionally relied on activities such as collecting and describing diversity rather than timing variation. The thesis explores this change in relation to agriculture and biology, focusing on analysis of variance and experimental design, the statistical methods developed by the mathematician and geneticist Ronald Aylmer Fisher during the 1920s. The role that Fisher’s methods acquired as tools of scientific research, side by side with the laboratory equipment and the field practices adopted by research workers, is here investigated bottom-up, beginning with the computing instruments and the information technologies that were the tools of the trade for statisticians. Four case studies show under several perspectives the interaction of statistics, computing and information technologies, giving on the one hand an overview of the main tools – mechanical calculators, statistical tables, punched and index cards, standardised forms, digital computers – adopted in the period, and on the other pointing out how these tools complemented each other and were instrumental for the development and dissemination of analysis of variance and experimental design. The period considered is the half-century from the early 1920s to the late 1960s, the institutions investigated are Rothamsted Experimental Station and the Galton Laboratory, and the statisticians examined are Ronald Fisher and Frank Yates.
Style APA, Harvard, Vancouver, ISO itp.
28

Riou-Durand, Lionel. "Theoretical contributions to Monte Carlo methods, and applications to Statistics". Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLG006.

Pełny tekst źródła
Streszczenie:
La première partie de cette thèse concerne l'inférence de modèles statistiques non normalisés. Nous étudions deux méthodes d'inférence basées sur de l'échantillonnage aléatoire : Monte-Carlo MLE (Geyer, 1994), et Noise Contrastive Estimation (Gutmann et Hyvarinen, 2010). Cette dernière méthode fut soutenue par une justification numérique d'une meilleure stabilité, mais aucun résultat théorique n'avait encore été prouvé. Nous prouvons que Noise Contrastive Estimation est plus robuste au choix de la distribution d'échantillonnage. Nous évaluons le gain de précision en fonction du budget computationnel. La deuxième partie de cette thèse concerne l'échantillonnage aléatoire approché pour les distributions de grande dimension. La performance de la plupart des méthodes d’échantillonnage se détériore rapidement lorsque la dimension augmente, mais plusieurs méthodes ont prouvé leur efficacité (e.g. Hamiltonian Monte Carlo, Langevin Monte Carlo). Dans la continuité de certains travaux récents (Eberle et al., 2017 ; Cheng et al., 2018), nous étudions certaines discrétisations d’un processus connu sous le nom de kinetic Langevin diffusion. Nous établissons des vitesses de convergence explicites vers la distribution d'échantillonnage, qui ont une dépendance polynomiale en la dimension. Notre travail améliore et étend les résultats de Cheng et al. pour les densités log-concaves
The first part of this thesis concerns the inference of unnormalized statistical models. We study two methods of inference based on sampling, known as Monte-Carlo MLE (Geyer, 1994) and Noise Contrastive Estimation (Gutmann and Hyvarinen, 2010). The latter method was supported by numerical evidence of improved stability, but no theoretical results had yet been proven. We prove that Noise Contrastive Estimation is more robust to the choice of the sampling distribution. We assess the gain of accuracy depending on the computational budget. The second part of this thesis concerns approximate sampling for high-dimensional distributions. The performance of most samplers deteriorates fast when the dimension increases, but several methods have proven their effectiveness (e.g. Hamiltonian Monte Carlo, Langevin Monte Carlo). In the continuity of some recent works (Eberle et al., 2017; Cheng et al., 2018), we study some discretizations of the kinetic Langevin diffusion process and establish explicit rates of convergence towards the sampling distribution that depend polynomially on the dimension. Our work improves and extends the results established by Cheng et al. for log-concave densities.
Style APA, Harvard, Vancouver, ISO itp.
29

Huck, Matthias [Verfasser], Hermann [Akademischer Betreuer] Ney i Alexander M. [Akademischer Betreuer] Fraser. "Statistical models for hierarchical phrase-based machine translation / Matthias Huck ; Hermann Ney, Alexander M. Fraser". Aachen : Universitätsbibliothek der RWTH Aachen, 2018. http://d-nb.info/1195151713/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
30

Huck, Matthias [Verfasser], Hermann [Akademischer Betreuer] Ney i Alexander M. [Akademischer Betreuer] Fraser. "Statistical models for hierarchical phrase-based machine translation / Matthias Huck ; Hermann Ney, Alexander M. Fraser". Aachen : Universitätsbibliothek der RWTH Aachen, 2018. http://d-nb.info/1195151713/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Bormann, Carsten [Verfasser], i M. [Akademischer Betreuer] Schienle. "Multivariate Extremes in Financial Markets: New Statistical Testing Methods and Applications / Carsten Bormann ; Betreuer: M. Schienle". Karlsruhe : KIT-Bibliothek, 2017. http://d-nb.info/112603679X/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Blanc, Sebastian M. [Verfasser], i T. [Akademischer Betreuer] Setzer. "Bias-Variance Aware Integration of Judgmental Forecasts and Statistical Models / Sebastian M. Blanc ; Betreuer: T. Setzer". Karlsruhe : KIT-Bibliothek, 2016. http://d-nb.info/112049818X/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33

Barilli, Elisa. "When 1 in 200 is higher than 5 in 1000: the "1 in X effect" on the perceived probability of having a Down syndrome-affected child". Doctoral thesis, University of Trento, 2010. http://eprints-phd.biblio.unitn.it/399/1/TESI_Barilli_UNITN_Eprints.pdf.

Pełny tekst źródła
Streszczenie:
Among the numerical formats available to express probability, ratios (i.e., frequencies) are extensively employed in risk communication, due perhaps to an intuitive sense of their clarity and simplicity. The present thesis was designed to investigate how the use of superficially different but mathematically equivalent ratio formats affects the magnitude perception of the probability that is conveyed. In particular, the focus of research was the influence that those expressions, when employed in risk communication of prenatal screening test results, have on prospective parents’ perceptions of the chance of having a Down syndrome-affected child. No clear evidence was found in the literature on whether the choice among the equivalent ratio formats that can be used to state a given probability matters in terms of subjective perception of the chance. Indeed, existing studies deliver contrasting results, and theories elaborated on those bases point in diverging directions. These could be summarised in the suggestion, on the one hand, that people tend to neglect denominators in ratios (hence they judge 10 in 100 as larger than 1 in 10: “ratio bias” or “denominator neglect”) and, on the other hand, in a claim that people neglect numerators, rather than denominators (hence they rate 1 in 10 as larger than 10 in 100: the “group-diffusion” or “reference group” effect). Nevertheless, implications of either group of theories could not entirely be transferred to the specific issue under study, mainly because of problems of ecological validity (type of scenario and stimuli, experimental design). Hence, having made the necessary adjustments to both the original experimental designs and materials, we empirically tested the applicability of those predictions to the specific case under examination. 
Subjective evaluations of equivalent ratios presented between-subjects in a scenario paradigm were analysed by means of the magnitude assessments given by a total of 1673 participants on Likert scales. Overall, the results of a series of 12 main studies pointed to a new bias, which we dubbed the “1 in X effect” given the triangulation of its source to that specific ratio format. Indeed, findings indicated that laypeople’s subjective estimation of the same probability presented through a “1 in X” format (e.g., 1 in 200) and an “N in X*N” format (e.g., 5 in 1000) varied significantly and in a consistent way. In particular, a given probability was systematically perceived as bigger and more alarming when expressed in the first rather than the second format, an effect clearly inconsistent with the idea of denominator neglect. This effect was replicated across different populations and probability magnitudes. Practical implications of these findings for health communication have been addressed in a dedicated section, all the more necessary considering that in one study on health-care professionals we found that they appeared de-sensitized to the “1 in X effect” (seemingly because of their daily use of probabilistic ratios). While the effect was not attenuated in laypeople by a classic communicative intervention (i.e., a verbal analogy), it disappeared with one of the most employed visual aids, namely an icon array. Furthermore, in a first attempt to pinpoint the cognitive processes responsible for the bias, the affective account stemming from the literature on dual-process theories did not receive support, contrary to our expectations. Hence, the most likely origin of the bias seems to reside either, as some inspections suggest, in a specific motivation to process the information, and/or in the increased ability to see oneself or others as the one affected when a “1 in X” format is processed. 
Clearly, further empirical research is needed in order to attain this cognitive level of explanation.
Style APA, Harvard, Vancouver, ISO itp.
34

CARCAGNÌ, ANTONELLA. "Una specificazione semiparametrica del modello di regressione M-Quantile ad effetti casuali con applicazioni a dati ambientali georeferenziati". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/180711.

Pełny tekst źródła
Streszczenie:
Questo lavoro di tesi ha come finalità lo sviluppo e l’implementazione di un modello semiparametrico M-quantile ad effetti random che sia in grado di cogliere l’eventuale presenza di un trend spaziale nei dati ambientali. Il modello proposto è una estensione del modello M-quantile ad effetti casuali di base in cui è stata inclusa una componente spaziale. La componente spaziale è modellata combinando insieme un’intercetta random (Chambers e Tzavidis, 2006) che coglie l’effetto del gruppo e un termine semiparametrico per catturare la regolarità residua nello spazio (Pratesi et al. 2009). Quest’ultima componente è trattata mediante una spline bivariata delle coordinate geografiche dei siti di campionamento. Come proposto da Ruppert et al. (2003), i coefficienti dei nodi della spline bivariata sono trattati come effetti random. L’approccio di massima verosimiglianza robusta (Richardson e Welsh, 1995) e un metodo sequenziale a due stadi sono stati adottati per ottenere la stima dei parametri del modello (Tzavidis et al., 2015). Tre studi di simulazione basati sul modello sono stati condotti per verificare le prestazioni di stima e predittive, ma anche per confrontare il modello proposto con il modello non parametrico M-quantile P-spline. Infine, il modello è stato applicato a dati di concentrazione di radon indoor della regione Lombardia.
In this work an M-quantile regression approach (Breckling and Chambers, 1988) is proposed to evaluate covariate effects at different levels of the response variable. In particular, we extend the basic M-quantile model to include a spatial component in addition to other covariates. The spatial component is modelled by combining a random intercept (Chambers and Tzavidis, 2006), which catches the lithology effect on indoor radon concentration (IRC), and a semiparametric term, which is expected to grasp residual regularities across space (Pratesi et al. 2009). The flexible component is modeled via a thin-plate bivariate spline of the geographical coordinates (longitude and latitude) of the sampling sites. Akin to Ruppert et al. (2003), we propose to treat the coefficients of the knots of the bivariate spline as a further random component in order to obtain smoother results. A robust maximum likelihood approach (Richardson and Welsh, 1995) has been adopted to estimate the model using the two-stage algorithm proposed by Tzavidis et al. (2015). Three model-based simulations were carried out to assess estimation and predictive performance and to compare the semiparametric M-quantile random-effects model with alternative approaches to the problem. The model is applied to a sample of IRC measures collected in two successive radon campaigns in Lombardy.
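For intuition, the location (intercept-only) version of an M-quantile in the spirit of Breckling and Chambers (1988) can be computed by solving an estimating equation built on an asymmetrically weighted Huber influence function. This is a simplified sketch with the scale fixed at 1, not the thesis's spatial random-effects model.

```python
import random

random.seed(5)

def m_quantile(ys, q, c=1.345, tol=1e-8):
    """Location M-quantile: the root theta of sum_i psi_q(y_i - theta) = 0,
    where psi_q tilts a Huber influence function by q on positive residuals
    and (1 - q) on negative ones. Scale is fixed at 1; the score is
    decreasing in theta, so the root is found by bisection."""
    def score(theta):
        total = 0.0
        for y in ys:
            r = y - theta
            psi = max(-c, min(c, r))          # Huber influence function
            total += 2.0 * psi * (q if r > 0 else 1.0 - q)
        return total
    lo, hi = min(ys), max(ys)                 # score(lo) >= 0 >= score(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

ys = [random.gauss(0, 1) for _ in range(5000)]
mq10, mq50, mq90 = (m_quantile(ys, q) for q in (0.1, 0.5, 0.9))
print(mq10, mq50, mq90)
```

For a symmetric sample the q = 0.5 M-quantile sits near the median, while lower and higher q trace out the lower and upper parts of the distribution.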
Style APA, Harvard, Vancouver, ISO itp.
35

Ximenes, Lucas Cunha. "Avaliação de métodos de agrupamento para a classificação da capacidade produtiva de um trecho da Floresta Nacional do Tapajós – PA". UFVJM, 2016. http://acervo.ufvjm.edu.br/jspui/handle/1/1380.

Pełny tekst źródła
Streszczenie:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
O estudo teve como objetivo definir a melhor combinação de método de agrupamento com medida de similaridade para classificar a capacidade produtiva de um trecho na Floresta Nacional do Tapajós. O inventário florestal amostral foi realizado no ano de 2012 e, para a locação das parcelas, foram abertas 12 faixas de, aproximadamente, 1,5 m de largura, equidistantes 4,0 km, na direção leste-oeste, e com comprimento variando de 4 km a 13,75 km. As parcelas, com dimensões de 30 x 250 m, foram distribuídas sistematicamente a cada 500 m em cada linha. Foi levado em consideração para a definição das classes de tamanho (CT): CT 1 (classe de regeneração) - 10 cm ≤ DAP < 25 cm nos primeiros 50 m da parcela (30 m x 50 m); CT 2 (classe de crescimento) - 25 cm ≤ DAP < 50 cm nos primeiros 100 m (30 m x 100 m); e CT 3 (classe de colheita) - DAP ≥ 50 cm em toda a parcela (30 m x 250 m). Para a classificação da capacidade produtiva, realizou-se um filtro no banco de dados original por classe de tamanho, no qual foram selecionados os indivíduos com qualidade de fuste 1 (fuste reto) e 2 (fuste com pequenas tortuosidades) e que têm valor no mercado regional. As 204 parcelas foram agrupadas em grupos homogêneos, produzindo 40 dendrogramas do tipo vertical para cada uma das 3 classes de tamanho (totalizando 120 dendrogramas), baseados na combinação de 5 medidas de distância (Euclidiana Simples, Euclidiana Quadrada, Manhattan, Canberra e Mahalanobis) com 8 métodos de agrupamento hierárquicos, sendo: Ward1, Ward2, Ligação Simples, Ligação Completa, UPGMA, WPGMA, Mediana e Centroide. Com o intuito de verificar a validação dos métodos de agrupamento testados, foram confeccionadas 120 tabelas de análise discriminante linear de Fisher, sendo 40 para cada classe de tamanho, contendo as probabilidades para cada classe de estoque, bem como a porcentagem de classificação das combinações testadas na análise de agrupamento. 
As análises de agrupamento e discriminante possibilitaram estratificar as parcelas heterogêneas de uma floresta inequiânea em áreas com parcelas homogêneas em termos de volume, densidade básica da madeira e grupo de comercialização. A combinação entre a medida de distância de Manhattan e o método de Ward2 mostrou-se a mais eficiente para estratificar florestas inequiâneas em classes de estoque volumétrico.
Dissertação (Mestrado) – Programa de Pós-Graduação em Ciência Florestal, Universidade Federal dos Vales do Jequitinhonha e Mucuri, 2016.
The study aimed to determine the best combination of clustering method and similarity measure to classify the productive capacity of a stretch of the Tapajós National Forest. The sample forest inventory was carried out in 2012 and, for plot allocation, we opened 12 tracks of approximately 1.5 m width, equidistant 4.0 km in the east-west direction, with lengths ranging from 4 km to 13.75 km. The plots, with dimensions of 30 x 250 m, were systematically distributed every 500 m along each row. The following size classes (CT) were defined: CT 1 (regeneration class) - 10 cm ≤ DBH < 25 cm in the first 50 m of the plot (30 m x 50 m); CT 2 (growth class) - 25 cm ≤ DBH < 50 cm in the first 100 m (30 m x 100 m); and CT 3 (harvesting class) - DBH ≥ 50 cm in the whole plot (30 m x 250 m). For the classification of productive capacity, the original database was filtered by size class, selecting individuals with bole quality 1 (straight bole) or 2 (bole with small tortuosities) that have value in the regional market. The 204 plots were grouped into homogeneous clusters, producing 40 vertical dendrograms for each of the three size classes (120 dendrograms in total), based on the combination of 5 distance measures (Simple Euclidean, Squared Euclidean, Manhattan, Canberra, and Mahalanobis) with 8 hierarchical clustering methods, namely: Ward1, Ward2, Single Linkage, Complete Linkage, UPGMA, WPGMA, Median, and Centroid. In order to validate the tested clustering methods, we produced 120 Fisher linear discriminant analysis tables, 40 for each size class, containing the probabilities for each stock class as well as the classification percentage of the combinations tested in the cluster analysis. The cluster and discriminant analyses made it possible to stratify the heterogeneous plots of a native forest into areas with homogeneous plots in terms of volume, basic wood density, and commercialization group. 
The combination of the Manhattan distance and the Ward2 method proved to be the most efficient for stratifying uneven-aged stands into volumetric stock classes.
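One of the distance/linkage combinations compared in the study can be sketched from scratch. The sketch below uses Manhattan distance with complete linkage (the thesis also tested Ward, UPGMA, centroid, and others) on an invented two-variable plot description (volume, density), purely for illustration.

```python
def manhattan(a, b):
    """Manhattan (city-block) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def agglomerate(points, n_clusters, dist=manhattan):
    """Agglomerative clustering with complete linkage: repeatedly merge
    the two clusters whose farthest members are closest together."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return sorted(sorted(c) for c in clusters)

# Six hypothetical plots described by (volume, wood density), forming
# two well-separated groups:
pts = [(1.0, 0.2), (1.1, 0.3), (0.9, 0.1), (5.0, 2.0), (5.2, 2.1), (4.8, 1.9)]
groups = agglomerate(pts, 2)
print(groups)
```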
Style APA, Harvard, Vancouver, ISO itp.
36

Alhumaidi, Mouhammad [Verfasser], Abdelhak M. [Akademischer Betreuer] Zoubir i Harald [Akademischer Betreuer] Klingbeil. "Statistical Signal Processing Techniques for Coherent Transversal Beam Dynamics in Synchrotrons / Mouhammad Alhumaidi. Betreuer: Abdelhak M. Zoubir ; Harald Klingbeil". Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2015. http://d-nb.info/1111112657/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Bassene, Aladji. "Contribution à la modélisation spatiale des événements extrêmes". Thesis, Lille 3, 2016. http://www.theses.fr/2016LIL30039/document.

Pełny tekst źródła
Streszczenie:
Dans cette de thèse, nous nous intéressons à la modélisation non paramétrique de données extrêmes spatiales. Nos résultats sont basés sur un cadre principal de la théorie des valeurs extrêmes, permettant ainsi d’englober les lois de type Pareto. Ce cadre permet aujourd’hui d’étendre l’étude des événements extrêmes au cas spatial à condition que les propriétés asymptotiques des estimateurs étudiés vérifient les conditions classiques de la Théorie des Valeurs Extrêmes (TVE) en plus des conditions locales sur la structure des données proprement dites. Dans la littérature, il existe un vaste panorama de modèles d’estimation d’événements extrêmes adaptés aux structures des données pour lesquelles on s’intéresse. Néanmoins, dans le cas de données extrêmes spatiales, hormis les modèles max stables,il n’en existe que peu ou presque pas de modèles qui s’intéressent à l’estimation fonctionnelle de l’indice de queue ou de quantiles extrêmes. Par conséquent, nous étendons les travaux existants sur l’estimation de l’indice de queue et des quantiles dans le cadre de données indépendantes ou temporellement dépendantes. La spécificité des méthodes étudiées réside sur le fait que les résultats asymptotiques des estimateurs prennent en compte la structure de dépendance spatiale des données considérées, ce qui est loin d’être trivial. Cette thèse s’inscrit donc dans le contexte de la statistique spatiale des valeurs extrêmes. Elle y apporte trois contributions principales. • Dans la première contribution de cette thèse permettant d’appréhender l’étude de variables réelles spatiales au cadre des valeurs extrêmes, nous proposons une estimation de l’indice de queue d’une distribution à queue lourde. Notre approche repose sur l’estimateur de Hill (1975). Les propriétés asymptotiques de l’estimateur introduit sont établies lorsque le processus spatial est adéquatement approximé par un processus M−dépendant, linéaire causal ou lorsqu'il satisfait une condition de mélange fort (a-mélange). 
• Dans la pratique, il est souvent utile de lier la variable d’intérêt Y avec une co-variable X. Dans cette situation, l’indice de queue dépend de la valeur observée x de la co-variable X et sera appelé indice de queue conditionnelle. Dans la plupart des applications, l’indice de queue des valeurs extrêmes n’est pas l’intérêt principal et est utilisé pour estimer par exemple des quantiles extrêmes. La contribution de ce chapitre consiste à adapter l’estimateur de l’indice de queue introduit dans la première partie au cadre conditionnel et d’utiliser ce dernier afin de proposer un estimateur des quantiles conditionnels extrêmes. Nous examinons les modèles dits "à plan fixe" ou "fixed design" qui correspondent à la situation où la variable explicative est déterministe et nous utlisons l’approche de la fenêtre mobile ou "window moving approach" pour capter la co-variable. Nous étudions le comportement asymptotique des estimateurs proposés et donnons des résultats numériques basés sur des données simulées avec le logiciel "R". • Dans la troisième partie de cette thèse, nous étendons les travaux de la deuxième partie au cadre des modèles dits "à plan aléatoire" ou "random design" pour lesquels les données sont des observations spatiales d’un couple (Y,X) de variables aléatoires réelles. Pour ce dernier modèle, nous proposons un estimateur de l’indice de queue lourde en utilisant la méthode des noyaux pour capter la co-variable. Nous utilisons un estimateur de l’indice de queue conditionnelle appartenant à la famille de l’estimateur introduit par Goegebeur et al. (2014b)
In this thesis, we investigate nonparametric modeling of spatial extremes. Our results are based on the main result of the theory of extreme values, thereby encompassing Pareto-type laws. This framework makes it possible to extend the study of extreme events to the spatial case, provided that the asymptotic properties of the proposed estimators satisfy the standard conditions of Extreme Value Theory (EVT) in addition to local conditions on the data structure itself. In the literature, there exists a vast panorama of extreme-event models adapted to the structure of the data of interest. However, in the case of extreme spatial data, apart from max-stable models, few if any models address the nonparametric estimation of the tail index and/or extreme quantiles. Therefore, we extend existing work on estimating the tail index and quantiles under independent or time-dependent data. The specificity of the methods studied resides in the fact that the asymptotic results of the proposed estimators take into account the spatial dependence structure of the relevant data, which is far from trivial. This thesis is thus written in the context of spatial statistics of extremes. It makes three main contributions. • In the first contribution of this thesis, we propose a new approach to estimating the tail index of a heavy-tailed distribution within the framework of spatial data. This approach relies on the estimator of Hill (1975). The asymptotic properties of the introduced estimator are established when the spatial process is adequately approximated by a spatial M-dependent process, a spatial linear causal process, or when the process satisfies a strong mixing condition. • In practice, it is often useful to link the variable of interest Y with a covariate X. In this situation, the tail index depends on the observed value x of the covariate X, and the resulting unknown function will be called the conditional tail index.
In most applications, the tail index of an extreme-value distribution is not the main object of interest; it is used, for instance, to estimate extreme quantiles. The contribution of this chapter is to adapt the tail-index estimator introduced in the first part to the conditional framework and to use it to propose an estimator of conditional extreme quantiles. We examine the so-called "fixed design" models, which correspond to the situation where the explanatory variable is deterministic. To handle the covariate, since it is deterministic, we use the moving window approach. We study the asymptotic behavior of the proposed estimators and give some numerical results based on data simulated with the software "R". • In the third part of this thesis, we extend the work of the second part to the framework of the so-called "random design" models, for which the data are spatial observations of a pair (Y,X) of real random variables. For this last model, we propose an estimator of the heavy-tail index using the kernel method to handle the covariate. We use an estimator of the conditional tail index belonging to the family of estimators introduced by Goegebeur et al. (2014b).
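The Hill (1975) estimator cited in this abstract, together with a Weissman-type extrapolation to extreme quantiles, can be sketched in a few lines. This is only an illustrative implementation under an i.i.d. heavy-tail assumption, not the thesis's spatial, covariate-dependent estimators; the function names, the sample size and the choice of k are placeholders:

```python
import math
import random

def hill_estimator(sample, k):
    """Hill (1975) estimator of the tail index gamma of a heavy-tailed
    distribution, based on the k largest order statistics."""
    xs = sorted(sample)
    n = len(xs)
    top = xs[n - k:]           # the k largest observations
    threshold = xs[n - k - 1]  # the (k+1)-th largest, used as threshold
    return sum(math.log(x / threshold) for x in top) / k

def weissman_quantile(sample, k, p):
    """Weissman-type estimator of the quantile of order 1 - p,
    extrapolating beyond the sample range with the Hill estimate."""
    xs = sorted(sample)
    n = len(xs)
    gamma = hill_estimator(sample, k)
    return xs[n - k - 1] * (k / (n * p)) ** gamma

# Exact Pareto sample: X = U^(-0.5) has true tail index gamma = 0.5.
random.seed(0)
sample = [random.random() ** -0.5 for _ in range(10000)]
gamma_hat = hill_estimator(sample, k=200)
q_hat = weissman_quantile(sample, k=200, p=1e-4)  # quantile beyond the data
```

The estimate `gamma_hat` concentrates around the true value 0.5 for this toy distribution; the spatial versions studied in the thesis replace the i.i.d. assumption with M-dependence or strong mixing in the asymptotic analysis.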
Style APA, Harvard, Vancouver, ISO itp.
38

Stellet, Jan Erik [Verfasser], i J. M. [Akademischer Betreuer] Zöllner. "Statistical modelling of algorithms for signal processing in systems based on environment perception / Jan Erik Stellet. Betreuer: J. M. Zöllner". Karlsruhe : KIT-Bibliothek, 2016. http://d-nb.info/1108451160/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Azawi, Mayce al [Verfasser], i Thomas M. [Akademischer Betreuer] Breuel. "Statistical Language Modeling for Historical Documents using Weighted Finite-State Transducers and Long Short-Term Memory / Mayce Al Azawi. Betreuer: Thomas M Breuel". Kaiserslautern : Technische Universität Kaiserslautern, 2015. http://d-nb.info/1068504137/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

Colombo, Alessio. "Socially aware motion planning of assistive robots in crowded environments". Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1396/1/colombo_phd_thesis.pdf.

Pełny tekst źródła
Streszczenie:
People with impaired physical or mental ability often find it challenging to negotiate crowded or unfamiliar environments, leading to a vicious cycle of deteriorating mobility and sociability. In particular, crowded environments pose a challenge to the comfort and safety of those people. To address this issue, we present a novel two-level motion planning framework to be embedded efficiently in portable devices. At the top level, the long term planner deals with crowded areas, permanent or temporary anomalies in the environment (e.g., road blocks, wet floors), and hard and soft constraints (e.g., "keep a toilet within reach of 10 meters during the journey", "always avoid stairs"). A priority tailored to the user's needs can also be assigned to the constraints. At the bottom level, the short term planner anticipates undesirable circumstances in real time, by verifying simulation traces of local crowd dynamics against temporal logic formulae. The model takes into account the objectives of the user, preexisting knowledge of the environment and real time sensor data. The algorithm is thus able to suggest a course of action to achieve the user's changing goals, while minimising the probability of problems for the user and other people in the environment. An accurate model of human behaviour is crucial when planning motion of a robotic platform in human environments. The Social Force Model (SFM) is such a model, having parameters that control both deterministic and stochastic elements. The short term planner embeds the SFM in a control loop that determines higher level objectives and reacts to environmental changes. Low level predictive modelling is provided by the SFM fed by sensors; high level logic is provided by statistical model checking. To parametrise and improve the short term planner, we have conducted experiments to consider typical human interactions in crowded environments.
We have identified a number of behavioural patterns which may be explicitly incorporated in the SFM to enhance its predictive power. To validate our hierarchical motion planner we have run simulations and experiments with elderly people within the context of the DALi European project. The performance of our implementation demonstrates that our technology can be successfully embedded in a portable device or robot.
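The Social Force Model embedded in the short term planner combines a driving term, relaxing the pedestrian's velocity towards a desired velocity, with repulsive terms from other agents. A minimal one-step Euler sketch is below; the function name and the parameter values (TAU, A, B, RADIUS) are illustrative placeholders, not the values calibrated in the DALi experiments:

```python
import math

# Illustrative SFM parameters (relaxation time, repulsion strength/range, body radius).
TAU, A, B, RADIUS = 0.5, 2.0, 0.3, 0.4

def sfm_step(pos, vel, goal, others, v0=1.3, dt=0.1):
    """One Euler step of a minimal Social Force Model: a driving force
    relaxing the velocity towards v0 * (unit vector to goal), plus
    exponential social repulsion from the other pedestrians."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    # Driving force towards the goal.
    fx = (v0 * gx / dist - vel[0]) / TAU
    fy = (v0 * gy / dist - vel[1]) / TAU
    # Repulsion from each other pedestrian, decaying exponentially with distance.
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        push = A * math.exp((2 * RADIUS - d) / B)
        fx += push * dx / d
        fy += push * dy / d
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# A pedestrian walks towards (5, 0) while another stands near the path.
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(100):
    pos, vel = sfm_step(pos, vel, goal=(5.0, 0.0), others=[(2.5, 0.2)])
```

In the thesis's planner this deterministic core is augmented with stochastic elements and run forward to produce the simulation traces that statistical model checking verifies against temporal logic formulae.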
Style APA, Harvard, Vancouver, ISO itp.
41

Leite, Paixao Edmilson <1966&gt. "Transição de egressos evadidos e diplomados da educação profissional técnica para o mundo do trabalho: situação e perfis ocupacionais de 2006 a 2010". Doctoral thesis, Università Ca' Foscari Venezia, 2013. http://hdl.handle.net/10579/6502.

Pełny tekst źródła
Streszczenie:
RIASSUNTO Questa tesi di dottorato costituisce una ricerca in cotutela tra Brasile e Italia che, partendo da un'impostazione teorica, si avvale di metodologie di analisi quali-quantitativa per indagare i profili professionali di studenti sia diplomati che dropout provenienti da 37 Scuole tecnico-professionali secondarie brasiliane. La ricerca è stata effettuata in Brasile ed in Italia, mediante un accordo internazionale firmato tra i Rettori dell'Universidade Federal de Minas Gerais e dell'Università Ca' Foscari Venezia. L'indagine ha ricevuto il supporto delle agenzie nazionali di sostegno alla ricerca. La raccolta dei microdati è stata fatta utilizzando due diversi tipi di questionari applicati a due campioni costituiti rispettivamente da 762 studenti dropout e 742 diplomati, nel periodo tra il 2011 e il 2013. I microdati, derivati da fonti primarie, sono relativi agli anni accademici 2006-2010. L'elemento chiave della tesi è costituito dalla definizione dei profili professionali attesi e dall’analisi della loro correlazione con i percorsi di transizione dalla scuola al mondo del lavoro. Le categorie della Filosofia della Praxis stabilite da Antonio Gramsci, in Italia, e da Karl Heinrich Marx, in Germania, vengono assunte come riferimenti teorici fondamentali, sia dal punto di vista storico che in senso politico ed economico. La tesi conduce un'approfondita analisi del dibattito emergente, a livello internazionale, in ordine ai seguenti problemi: sistema scolastico duale vs scuola unitaria; unità tra formazione accademica e formazione professionale vs loro separazione. Inoltre si è proceduto anche a precisare contenuti e limiti del dibattito emergente sul rapporto possibile tra formazione tradizionale e formazione per competenze. È stato così possibile pervenire ad un'elaborazione critica di due diversi tipi di sistemi di stratificazione occupazionale attualmente utilizzati nei Paesi oggetto d’esame della Tesi.
Il lavoro teorico è stato sostenuto da una raffinata analisi metodologica che si avvale di due strumenti di analisi quantitativa: 1) il "modello teorico e concettuale della performance scolastica e studentesca nella Scuola Secondaria" - California University - USA; 2) una tabella con le dimensioni degli studi e d'analisi, elaborata dal dottorando, con le variabili statistiche adottate per rispondere alle ipotesi. Queste due dimensioni hanno permesso di elaborare tre diversi profili professionali di riferimento per l’analisi dei due campioni di studenti: uno ad ispirazione socio-demografica ed economica; l'altro ad ispirazione formativa; e un terzo ad ispirazione professionale. La tesi ha potuto così fondare le sue analisi ed argomentazioni su due ipotesi di ricerca complementari, arricchendo di molto la portata esplicativa delle sue conclusioni. La tesi ha, infatti, concluso che gli studenti che si sono diplomati mostravano profili professionali significativamente migliori rispetto a quelli di coloro che avevano abbandonato la scuola tecnico-professionale, sia nell'anno 2006 che nel 2011. Dai due campioni risulta che il 76% degli studenti totali proveniva da classi economiche di basso censo brasiliane, aveva studiato nelle scuole pubbliche e aveva lavorato in un'area professionale corrispondente alla formazione tecnica ricevuta. La ricerca ha anche individuato un punto chiave: utilizzando una procedura d'analisi comparativa, è giunta alla conclusione che i risultati dell'indagine sono generalizzabili a tutta la popolazione N del Sistema Brasiliano di Formazione Tecnico-Professionale, costituita da circa 189.988 studenti. Infine, la ricerca ha permesso di individuare tre differenti tipologie di gerarchie di fattori statisticamente associati alle decisioni degli studenti. Queste riguardano: i fattori di scelta dei corsi tecnico-professionali degli studenti dropout; i fattori di abbandono; i fattori associati alla conclusione dei corsi tecnico-professionali.
Footnote: CNPq: Brazilian National Counsel of Technological and Scientific Development. CAPES: Brazilian Coordination for the Improvement of Higher Education Personnel. INEP: Brazilian Institute for Studies and Educational Research.
ABSTRACT This PhD Thesis is a theoretical, quantitative, and exploratory research that investigates the professional profiles of students from 37 secondary professional schools of the Brazilian Vocational Education Network, throughout Minas Gerais State. This research was done in Brazil and in Italy - a co-supervised thesis - by means of an international agreement signed between Universidade Federal de Minas Gerais and the Ca' Foscari University of Venice. The investigation was also supported by the Brazilian Federal Research Agencies. The micro data collection was done using two different sets of questionnaires, applied from 2011 to 2013, to subjects in two main samples: 762 dropouts and 742 graduated students from secondary Professional Education. The micro data, extracted from primary sources, were related to the academic period from 2006 to 2010. Regarding the research problem, a key element of this study was the establishment of subjects' occupational profiles and the connections of those final profiles to the pathways in their transition from school to work. The Philosophy of Praxis categories established by the scholars Antonio Gramsci, in Italy, and Karl Heinrich Marx, in Germany, were set as the main theoretical framework for this thesis from a historical, political and economic viewpoint. In this thesis, an important educational argument was presented, considering scholars who argue for different school organizations: the dual school versus the unitary school; the separation or not between the academic and vocational tracks. Based on the theoretical debate between school qualification and competencies, it was also possible to present two different types of occupational stratification systems used today. Methodologically, two quantitative analytical dimensions were used: 1) the Theoretical and Conceptual Model of High School Performance (California University - UCSB scholars - USA); 2) a table with simple and composite variables.
In that way, those two dimensions allowed us to draw three different student occupational profiles: sociodemographic and economic, educational, and occupational. By testing two complementary hypotheses, this research concluded that the vocational students who graduated had significantly better occupational profiles than those of the dropouts, both in 2006 and 2011. From the two samples of students, 76% came from the low economic Brazilian classes, had been studying in public schools, and continued working in the same area as their vocational formation. This research also identified a key point: the thesis results, using a comparative sample analysis, proved valid for the N population of the Brazilian Federal Vocational Education System, of 189,988 students. Finally, the research allowed us to identify three hierarchies of factors statistically associated with the students' decisions to choose, to drop out of, and to conclude their vocational courses. Footnote: CNPq: Brazilian National Counsel of Technological and Scientific Development. CAPES: Brazilian Coordination for the Improvement of Higher Education Personnel. INEP: Brazilian Institute for Studies and Educational Research.
RESUMO Esta tese de doutorado é uma pesquisa teórica, quantitativa e exploratória que investiga os perfis profissionais de alunos oriundos de 37 escolas secundárias da Rede Federal de Educação Profissional no Brasil, espalhadas por todo o Estado de Minas Gerais. A investigação foi desenvolvida no Brasil e na Itália - cotutela de tese -, a partir de um acordo internacional firmado entre a Universidade Federal de Minas Gerais e a Università Ca' Foscari Venezia. A investigação foi financiada por agências de pesquisa federais brasileiras. A coleta de microdados foi feita por meio de dois questionários distintos, aplicados, de 2011 a 2013, a sujeitos de duas amostras principais: 762 jovens que abandonaram a escola e 742 alunos formados em escolas secundárias Profissionais. Os microdados, extraídos de fontes primárias, eram referentes ao período acadêmico de 2006 a 2010. Sobre o problema de pesquisa, constituiu-se em um elemento chave deste estudo o traçado dos perfis ocupacionais finais destes dois grupos de indivíduos, assim como as conexões de seus perfis com os itinerários de sua transição escola-trabalho. As categorias da Filosofia da Práxis, desenvolvidas pelos estudiosos Antonio Gramsci, na Itália, e Karl Heinrich Marx, na Alemanha, foram definidas para esta tese como o principal referencial teórico em uma visão histórica, política e econômica. Estabeleceu-se um debate educacional entre distintos teóricos que defendem diferentes organizações escolares: a dualidade escolar X a escola unitária; separação ou não entre formação geral e formação profissional. Noutro debate teórico, qualificação X competências, foi possível também apresentar dois diferentes tipos de sistemas de estratificação ocupacional na atualidade. Na metodologia, foram usadas como dimensões de análise quantitativa: 1) o Modelo Teórico e Conceitual de Desempenho Escolar e Estudantil no Ensino Médio (autores da Universidade da Califórnia - UCSB - EUA); 2) uma tabela de variáveis simples e compostas. 
Assim, estas dimensões permitiram traçar três tipos de perfis profissionais dos alunos: sociodemográfico e econômico, educacional e profissional. Ao testar dois tipos complementares de hipótese, a pesquisa concluiu que os alunos formados no ensino profissional secundário tinham um perfil ocupacional significativamente melhor do que o daqueles que abandonaram a escola, tanto em 2006, quanto em 2011. Das duas amostras de estudantes, 76% vêm de classes econômicas baixas, estudaram em escolas públicas e estão trabalhando em áreas afins à formação profissional recebida na escola. Um ponto-chave é que os resultados da tese demonstraram-se válidos, por análises amostrais comparativas, também para a população N de 189.988 estudantes em todo o Brasil. Finalmente, a investigação permitiu o estabelecimento de três distintas hierarquias de fatores estatisticamente associados às decisões dos alunos de escolher, de abandonar e de concluir seus cursos técnicos. Footnote: CNPq: Brazilian National Counsel of Technological and Scientific Development. CAPES: Brazilian Coordination for the Improvement of Higher Education Personnel. INEP: Brazilian Institute for Studies and Educational Research.
Style APA, Harvard, Vancouver, ISO itp.
42

Massias, Mathurin. "Sparse high dimensional regression in the presence of colored heteroscedastic noise : application to M/EEG source imaging". Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT053.

Pełny tekst źródła
Streszczenie:
Parmi les techniques d’imagerie cérébrale, la magnéto- et l’électro-encéphalographie se distinguent par leur faible degré d’invasivité et leur excellente résolution temporelle. La reconstruction de l’activité neuronale à partir de l’enregistrement des champs électriques et magnétiques constitue un problème inverse extrêmement mal posé, auquel il est nécessaire d’ajouter des contraintes pour le résoudre. Une approche populaire, empruntée dans ce manuscrit, est de postuler que la solution est parcimonieuse spatialement, ce qui peut s’obtenir par une pénalisation L2/1. Cependant, ce type de régularisation nécessite de résoudre des problèmes d’optimisation non lisses en grande dimension, avec des méthodes itératives dont la performance se dégrade avec la dimension. De plus, les enregistrements M/EEG sont typiquement corrompus par un fort bruit coloré, allant à l’encontre des hypothèses classiques de résolution des problèmes inverses. Dans cette thèse, nous proposons d’abord une accélération des algorithmes itératifs utilisés pour résoudre le problème bio-magnétique avec régularisation L2/1. Les améliorations classiques (règles de filtrage et ensembles actifs) tirent parti de la parcimonie de la solution : elles ignorent les sources cérébrales inactives, et réduisent ainsi la dimension du problème. Nous introduisons une nouvelle technique d’ensembles actifs, reposant sur les règles de filtrage les plus performantes actuellement. Nous proposons des techniques duales avancées, qui permettent un contrôle plus fin de l’optimalité et améliorent les techniques d’identification de prédicteurs. Notre construction duale extrapole la structure Vectorielle Autorégressive des itérés duaux, régularité que nous relions aux propriétés d’identification de support des algorithmes proximaux. En plus du problème inverse bio-magnétique, l’approche proposée s’applique à l’ensemble des modèles linéaires généralisés régularisés L1.
Deuxièmement, nous introduisons de nouveaux estimateurs concomitants pour la régression multitâche, conçus pour traiter du bruit gaussien corrélé. Le problème d’optimisation sous-jacent est convexe et présente une structure « lisse + proximable » attrayante ; nous lions la formulation de ce problème au lissage des normes de Schatten.
Understanding the functioning of the brain under normal and pathological conditions is one of the challenges of the 21st century. In the last decades, neuroimaging has radically affected clinical and cognitive neurosciences. Amongst neuroimaging techniques, magneto- and electroencephalography (M/EEG) stand out for two reasons: their non-invasiveness and their excellent time resolution. Reconstructing the neural activity from the recordings of magnetic fields and electric potentials is the so-called bio-magnetic inverse problem. Because of the limited number of sensors, this inverse problem is severely ill-posed, and additional constraints must be imposed in order to solve it. A popular approach, considered in this manuscript, is to assume spatial sparsity of the solution: only a few brain regions are involved in a short and specific cognitive task. Solutions exhibiting such a neurophysiologically plausible sparsity pattern can be obtained through L21-penalized regression approaches. However, this regularization requires solving time-consuming, high-dimensional and non-smooth optimization problems with iterative (block) proximal gradient solvers. Additionally, M/EEG recordings are usually corrupted by strong non-white noise, which breaks the classical statistical assumptions of inverse problems.
To circumvent this, it is customary to whiten the data as a preprocessing step, and to average multiple repetitions of the same experiment to increase the signal-to-noise ratio. Averaging measurements has the drawback of removing brain responses which are not phase-locked, i.e., do not happen at a fixed latency after the stimulus presentation onset. In this work, we first propose speed improvements of the iterative solvers used for the L21-regularized bio-magnetic inverse problem. Typical improvements, screening and working sets, exploit the sparsity of the solution: by identifying inactive brain sources, they reduce the dimensionality of the optimization problem. We introduce a new working set policy, derived from the state-of-the-art Gap Safe screening rules. In this framework, we also propose duality improvements, yielding a tighter control of optimality and improving feature identification techniques. This dual construction extrapolates on an asymptotic Vector AutoRegressive regularity of the dual iterates, which we connect to manifold identification of proximal algorithms. Beyond the L21-regularized bio-magnetic inverse problem, the proposed methods apply to the whole class of sparse Generalized Linear Models. Second, we introduce new concomitant estimators for multitask regression. Along with the neural source estimation, concomitant estimators jointly estimate the noise covariance matrix. We design them to handle non-white Gaussian noise, and to exploit the multiple-repetition nature of M/EEG experiments. Instead of averaging the observations, our proposed method, CLaR, uses them all for a better estimation of the noise. The underlying optimization problem is jointly convex in the regression coefficients and the noise variable, with a "smooth + proximable" composite structure. It is therefore solvable via standard alternate minimization, for which we apply the improvements detailed in the first part. We provide a theoretical analysis of our objective function, linking it to the smoothing of Schatten norms. We demonstrate the benefits of the proposed approach for source localization on real M/EEG datasets. Our improved solvers and refined modeling of the noise pave the way for faster and more statistically efficient processing of M/EEG recordings, allowing for interactive data analysis and scaling approaches to larger and larger M/EEG datasets.
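The row-wise sparsity that screening and working sets exploit comes from the proximal operator of the L21 penalty, which zeroes out entire rows (sources) of the coefficient matrix. A minimal sketch of this block soft-thresholding step is below; the helper name is hypothetical and this is plain group soft-thresholding, not the thesis's accelerated solver:

```python
import math

def prox_l21(W, threshold):
    """Proximal operator of threshold * ||W||_{2,1}: row-wise block
    soft-thresholding. Rows (brain sources) whose Euclidean norm falls
    below the threshold are set exactly to zero, which is the structure
    that screening rules and working sets exploit."""
    out = []
    for row in W:
        norm = math.sqrt(sum(w * w for w in row))
        scale = max(0.0, 1.0 - threshold / norm) if norm > 0 else 0.0
        out.append([scale * w for w in row])
    return out

W = [[3.0, 4.0],   # row norm 5.0  -> shrunk towards zero, stays active
     [0.3, 0.4]]   # row norm 0.5  -> zeroed out: an inactive source
W_new = prox_l21(W, threshold=1.0)
```

Inside a (block) proximal gradient solver, this operator is applied after each gradient step on the smooth data-fit term; sources that stay at zero across iterations are the ones a working set policy can safely leave out of the subproblem.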
Style APA, Harvard, Vancouver, ISO itp.
43

Simpson, Daniel Peter. "Krylov subspace methods for approximating functions of symmetric positive definite matrices with applications to applied statistics and anomalous diffusion". Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/29751/1/Simpson_Final_Thesis.pdf.

Pełny tekst źródła
Streszczenie:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically requires the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a `square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. 
Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕn = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions by which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matern random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
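The plain Lanczos approximation compared in this thesis builds an orthonormal Krylov basis V_m and a tridiagonal T_m, then approximates f(A)b ≈ ||b|| V_m f(T_m) e_1. The sketch below, with f(t) = t^(-1/2) as in the GMRF sampling application, is illustrative only (hypothetical function name, no reorthogonalisation, a small shifted 1-D Laplacian as test matrix) and is not the thesis's restarted or shift-and-invert variants:

```python
import numpy as np

def lanczos_matfunc(A, b, m, f):
    """Approximate f(A) b by ||b|| * V_m f(T_m) e_1, where T_m is the
    tridiagonal matrix produced by m Lanczos steps on (A, b)."""
    n = len(b)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]                      # three-term Lanczos recurrence
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)         # f(T_m) via its eigendecomposition
    fT_e1 = evecs @ (f(evals) * evecs[0, :]) # f(T_m) e_1
    return np.linalg.norm(b) * (V @ fT_e1)

# Small SPD test matrix: a 1-D Laplacian shifted to be positive definite.
n = 50
A = 2.5 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
rng = np.random.default_rng(0)
z = rng.standard_normal(n)
x = lanczos_matfunc(A, z, m=30, f=lambda t: t ** -0.5)  # x ~ A^(-1/2) z
```

With A a sparse precision matrix and z standard normal, x approximates a GMRF sample, replacing the Cholesky-based x = L^(-T)z described in the abstract.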
Style APA, Harvard, Vancouver, ISO itp.
44

Simpson, Daniel Peter. "Krylov subspace methods for approximating functions of symmetric positive definite matrices with applications to applied statistics and anomalous diffusion". Queensland University of Technology, 2008. http://eprints.qut.edu.au/29751/.

Pełny tekst źródła
Streszczenie:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically requires the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a `square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables.
Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕn = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions by which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matern random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
Style APA, Harvard, Vancouver, ISO itp.
45

Dorigoni, Alessia. "An eye tracking exploration of cognitive reflection in consumer decision-making". Doctoral thesis, Università degli studi di Trento, 2019. https://hdl.handle.net/11572/368835.

Pełny tekst źródła
Streszczenie:
The works presented in this thesis are the result of the experiments conducted in the Cognitive and Experimental Economics Laboratory (CEEL) and in the Consumer Neuroscience Laboratory (NCLab) of the Economics and Management Department at the University of Trento. The aim of this research is to study the influence of cognitive impulsivity on commercial problem-solving and consumer decision-making. We focused on the attentional aspects related to the decision-making process as analyzed by the eye movements. The first section will present the main topic of the thesis, the key tool used to conduct the experiments (eye tracker) and the three papers; the latter will compose the second, third and fourth chapter. All the chapters have a common thread: to shed light on the cognitive aspects of problem-solving and their implications for the consumer decision-making process as analyzed through gaze behaviour. - The aim of the first paper, “The role of numeracy, cognitive reflection and attentional patterns in commercial problem-solving” by Dorigoni, Polonio, Graffeo and Bonini, is to analyze the predictive power of two important cognitive abilities, numeracy and cognitive reflection, in two different problem solving scenarios with high numerical components. - The aim of the second paper, “Getting the best deal: Effects of cognitive reflection on mental accounting of choice attributes” by Dorigoni, Cadonna and Bonini is to understand if people with low cognitive reflection are more prone to mental accounting across attributes of the same product; low cognitive reflectors do not integrate all the attribute costs and consequently they do not always choose the best deal. - The aim of the third paper, “Cognitive reflection and gaze behaviour in visual tasks” by Dorigoni, Rajsic and Bonini is to demonstrate that cognitive reflection has predictive power on heuristics and biases related to perceptual and visual tasks.
This result is extremely important because it reflects a different disposition to see and analyze the information depending on the cognitive impulsivity.
APA, Harvard, Vancouver, ISO and other styles
46

Dorigoni, Alessia. "An eye tracking exploration of cognitive reflection in consumer decision-making". Doctoral thesis, University of Trento, 2019. http://eprints-phd.biblio.unitn.it/3792/1/TESI-DEFINITIVA-DORIGONI.pdf.

Full text source
Abstract:
The works presented in this thesis are the result of the experiments conducted in the Cognitive and Experimental Economics Laboratory (CEEL) and in the Consumer Neuroscience Laboratory (NCLab) of the Economics and Management Department at the University of Trento. The aim of this research is to study the influence of cognitive impulsivity on commercial problem-solving and consumer decision-making. We focused on the attentional aspects related to the decision-making process as analyzed by the eye movements. The first section will present the main topic of the thesis, the key tool used to conduct the experiments (eye tracker) and the three papers; the latter will compose the second, third and fourth chapter. All the chapters have a common thread: to shed light on the cognitive aspects of problem-solving and their implications for the consumer decision-making process as analyzed through gaze behaviour. - The aim of the first paper, “The role of numeracy, cognitive reflection and attentional patterns in commercial problem-solving” by Dorigoni, Polonio, Graffeo and Bonini, is to analyze the predictive power of two important cognitive abilities, numeracy and cognitive reflection, in two different problem solving scenarios with high numerical components. - The aim of the second paper, “Getting the best deal: Effects of cognitive reflection on mental accounting of choice attributes” by Dorigoni, Cadonna and Bonini is to understand if people with low cognitive reflection are more prone to mental accounting across attributes of the same product; low cognitive reflectors do not integrate all the attribute costs and consequently they do not always choose the best deal. - The aim of the third paper, “Cognitive reflection and gaze behaviour in visual tasks” by Dorigoni, Rajsic and Bonini is to demonstrate that cognitive reflection has predictive power on heuristics and biases related to perceptual and visual tasks. 
This result is extremely important because it reflects a different disposition to see and analyze the information depending on the cognitive impulsivity.
APA, Harvard, Vancouver, ISO and other styles
47

Santolin, Chiara. "Learning Regularities from the Visual World". Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424417.

Full text source
Abstract:
Patterns of visual objects, streams of sounds, and spatiotemporal events are just a few examples of the structures present in a variety of sensory inputs. Amid such variety, numerous regularities can be found. To handle sensory processing, individuals of every species must be able to rapidly track these regularities. Statistical learning is one of the principal mechanisms that enable organisms to track patterns in the flow of sensory information by detecting coherent relations between elements (e.g., A predicts B). Once relevant structures are detected, learners are sometimes required to generalize to novel situations. This process can be challenging, since it demands abstracting away from surface information and extracting structures from previously unseen stimuli. Over the past two decades, researchers have shown that statistical learning and generalization operate across domains, modalities and species, supporting the generality assumption. These mechanisms, in fact, play a crucial role in organizing the sensory world and developing representations of the environment. But when and how do organisms begin to track and generalize patterns from the environment? From the existing literature, very little is known about the roots of these mechanisms. The experiments described in this thesis were all designed to explore whether statistical learning and generalization of visual patterns are fully available at birth, using the newborn domestic chick (Gallus gallus) as an animal model. This species is an excellent developmental model for studying the ontogeny of several cognitive traits because it can be tested soon after hatching and allows complete manipulation of pre- and post-natal experience. In Chapter 2, four statistical learning experiments are described.
Through learning by exposure, visually naive chicks were familiarized with a computer-presented stream of objects defined by a statistical structure; in particular, transitional (conditional) probabilities linked sequence elements together (e.g., the cross predicts the circle 100% of the time). After exposure, the familiar structured sequence was compared with a random presentation (Experiment 1) or a novel, structured combination (Experiment 2) of the familiar shapes. Chicks successfully differentiated the test sequences in both experiments. One relevant aspect of these findings is that the learning process is unsupervised: despite the lack of reinforcement, mere exposure to the statistically defined input was sufficient to obtain a significant learning effect. Two additional experiments were designed to explore the complexity of the patterns that can be learned by this species. In particular, the aim of Experiments 3 and 4 was to investigate chicks’ ability to discriminate subtle differences in the distributional properties of the stimuli. New sequences were created: the familiar one was formed by pairs of shapes that always appeared in the same order, whereas the unfamiliar stimulus was formed by shapes spanning the boundaries across familiar pairs (part-pairs). Unfamiliar part-pairs were created by joining the last element of one familiar pair and the first element of another (subsequent) familiar pair. The key difference between pairs and part-pairs lay in their probabilistic structure: being formed by the union of two familiar elements, part-pairs were experienced during familiarization, but with a lower probability. To distinguish the test sequences, chicks needed to detect a very small difference in the conditional probability characterizing the two stimuli. Unfortunately, the animals were unable to differentiate the test sequences whether formed by 8 (Experiment 3) or 6 (Experiment 4) elements.
My final goal would have been to discover whether chicks are effectively able to pick up transitional probabilities or whether they simply track frequencies of co-occurrence. In Experiments 1 and 2, since the frequency of appearance of each shape was balanced across stimuli, it was impossible to tell whether chicks detected transitional probabilities (e.g., X predicts Y) or frequencies of co-occurrence (e.g., X and Y co-occur, but without any predictive relation between them). However, since the animals did not succeed in the first task, being unable to discriminate pairs vs. part-pairs, the data are inconclusive on this issue. Possible explanations and theoretical implications of these results are provided in the final chapter of this thesis. In Chapter 3, the two studies described were aimed at testing newborn chicks’ capacity to generalize patterns presented as strings of visual tokens. For instance, the pattern AAB can be defined as “two identical items (AA) followed by another one, different from the former (B)”. Patterns were presented as triplets of simultaneously visible shapes, arranged according to AAB, ABA (Experiment 5), ABB and BAA (Experiment 6). Using a training procedure, chicks were able to recognize the trained regularity when compared with another (neutral) regularity (for instance, AAB displayed as cross-cross-circle vs. ABA displayed as cross-circle-cross). Chicks were also capable of generalizing these patterns to novel exemplars composed of previously unseen elements (AAB vs. ABA implemented as hourglass-hourglass-arrow vs. hourglass-arrow-hourglass). A subsequent study (Experiment 6) was aimed at verifying whether the presence or absence of contiguous reduplicated elements (in AAB but not in ABA) may have facilitated learning and generalization in the previous task. All regularities comprised an adjacent repetition that gave the triplets asymmetrical structures (AAB vs. ABB and AAB vs. BAA).
Chicks discriminated pattern-following and pattern-violating novel test triplets instantiating all the regularities employed in the study, suggesting that the presence or absence of an adjacent repetition was not a relevant cue for succeeding in the task. Overall, the present research provides new data on statistical learning and generalization of visual regularities in a newborn animal model, revealing that these mechanisms fully operate at the very beginning of life. Regarding statistical learning, day-old chicks performed better than human neonates but similarly to human infants. Regarding generalization, the chicks’ performance is consistent with what neonates have shown in the linguistic domain. These findings suggest that newborn chicks may be predisposed to track visual regularities in their postnatal environment. Despite very limited prior experience, after mere exposure to a structured input or a 3-day training session, significant learning and generalization effects were obtained, pointing to the presence of early predispositions serving the development of these cognitive abilities.
The sensory world is composed of a set of regularities. Sequences of syllables and musical notes, objects arranged in the visual environment, and sequences of events are just some of the types of pattern characterizing sensory input. The ability to detect these regularities is fundamental for acquiring certain properties of natural language (e.g., syntax), learning sequences of actions (e.g., sign language), discriminating complex environmental events, and planning behaviour. Indeed, detecting regularities across a multiplicity of events makes it possible to anticipate and plan future actions, crucial aspects of adaptation to the environment. This learning mechanism, reported in the literature under the name of statistical learning, consists in detecting probability distributions in sensory input, that is, dependency relations between its components (e.g., X predicts Y). As illustrated in the introductory chapter of the present research, although it is one of the mechanisms responsible for the acquisition of human natural language, statistical learning does not appear to have evolved specifically to serve this function. It is a general cognitive process that manifests itself across sensory domains (acoustic, visual, tactile), modalities (temporal or spatial-static) and species (human and non-human). Pattern detection therefore plays a fundamental role in processing sensory information, which is necessary for a correct representation of the environment. Once the regularities and structures present in the environment have been learned, living organisms must be able to generalize those structures to stimuli that are perceptually novel but instantiate the same regularities.
The crucial aspect of generalization is therefore the ability to recognize a familiar regularity even when it is implemented by new stimuli. Generalization, too, plays a fundamental role in learning the syntax of human natural language; nevertheless, it is a domain-general mechanism and is not species-specific. What remained unclear from the literature was the ontogeny of both mechanisms, especially in the visual domain. In other words, it was not clear whether the abilities of statistical learning and generalization of visual structures are fully developed at birth. The main goal of the experiments conducted in this thesis was therefore to investigate the origins of visual statistical learning and generalization, using the newly hatched domestic chick (Gallus gallus) as an animal model. Belonging to a precocial species, the newborn chick is almost completely autonomous in a range of behavioural functions, making it an ideal candidate for studying the ontogeny of various perceptual and cognitive abilities. The possibility of observing it right after hatching, together with complete control over the pre- and post-natal environment (through hatching and rearing under controlled conditions), makes the chick an excellent experimental model for the study of regularity learning. The first series of experiments was devoted to the study of statistical learning (Chapter 2). Using an experimental paradigm based on learning by exposure (filial imprinting), visually naive newborn chicks were exposed to a video sequence of arbitrary visual elements (geometric shapes). The stimulus was defined by a “statistical” structure based on transitional (conditional) probabilities determining the order of appearance of each element (e.g., the square predicts the cross with a probability of 100%).
At the end of the exposure phase, the chicks were able to recognize this sequence, discriminating it from unfamiliar sequences consisting either of a random presentation of the same elements (no element predicted the appearance of any other; Experiment 1) or of a recombination of the same familiar elements according to new statistical patterns (e.g., the square predicts the T with a probability of 100%, a statistical relation the chicks had never experienced; Experiment 2). In both experiments the chicks discriminated the familiar from the unfamiliar sequence, proving able to recognize the statistical structure to which they had been exposed during the imprinting phase. One of the most fascinating aspects of this result is that the learning process is unsupervised: no reinforcement was given to the chicks during the exposure phase. Two further experiments were then conducted (Experiments 3 and 4) to verify whether chicks could learn regularities more complex than those tested previously. In particular, the chicks’ task was to differentiate a familiar sequence structured similarly to the one just described from an unfamiliar sequence composed of part-pairs, i.e., pairs of shapes formed by the union of the last shape of one familiar pair and the first shape of another familiar pair. Being formed by the union of elements belonging to familiar pairs, the part-pairs were experienced by the chicks during the familiarization phase, but with a lower probability than the pairs. The difficulty of the task therefore lay in detecting a subtle difference in the probability distributions characterizing the two stimuli.
Unfortunately, the chicks were unable to discriminate the two sequences either when composed of 8 elements (Experiment 3) or of 6 (Experiment 4). The final goal of these two experiments would have been to discover what kind of regularity the chicks had learned. Indeed, in Experiments 1 and 2 the chicks could have discriminated familiar from unfamiliar sequences on the basis of the co-occurrence frequencies of the shapes composing the familiar pairs (e.g., co-occurrence of X and Y) rather than on conditional probabilities (e.g., X predicts Y). However, since they did not pass the test presented in Experiments 3 and 4, the question of which statistical cue is learned by this species remains open. Possible explanations and theoretical implications of this null result are discussed in the concluding chapter. The second group of experiments conducted in the present research concerns the process of generalization of visual regularities (Chapter 3). The regularities investigated were presented as strings of spatially organized geometric shapes whose elements were visible simultaneously. For example, the regularity defined as AAB can be described as a triplet in which the first two elements are identical to each other (AA), followed by another element different from the previous ones (B). The patterns employed were AAB, ABA (Experiment 5), ABB and BAA (Experiment 6), and the experimental procedure involved training with food reinforcement. Having learned to tell the reinforced pattern (e.g., AAB implemented as cross-cross-circle) from the non-reinforced one (e.g., ABA implemented as cross-circle-cross), the chicks had to recognize those structures when instantiated by new elements (e.g., hourglass-hourglass-arrow vs. hourglass-arrow-hourglass). The animals proved capable of generalizing all the regularities to new exemplars.
The most important aspect of these results is what Experiment 6 demonstrated; its goal was to investigate the possible learning strategies adopted by the animals in the previous study. Indeed, considering the AAB vs. ABA comparison, the chicks could have recognized (and generalized) the familiar pattern on the basis of the presence of a consecutive repetition of the same element (present in AAB but not in ABA, where the same element A is repeated at the two ends of the triplet). In Experiment 6, therefore, regularities all characterized by repetitions were compared: AAB vs. ABB and AAB vs. BAA. The chicks nonetheless proved able to distinguish the new regularities and to generalize to new exemplars, suggesting that this ability is not limited to one particular type of configuration. Overall, the results obtained in the present research constitute the first evidence of statistical learning and generalization of visual regularities in an animal model observed right after hatching. As regards statistical learning, chicks show abilities comparable to those observed in other animal species and in human infants, but apparently superior to those observed in human newborns. Hypotheses and theoretical implications of these differences are reported in the concluding chapter. As regards generalization processes, the chicks’ performance is in line with what human newborns have shown in the linguistic domain. In light of these results, it is plausible to think that the chick is biologically predisposed to detect the regularities characterizing its visual environment from the first moments of life.
APA, Harvard, Vancouver, ISO and other styles
48

Last, Carsten [Verfasser], i Dr Ing Wahl Friedrich M. [Akademischer Betreuer] Prof. "From Global to Local Statistical Shape Priors - Novel methods to obtain accurate reconstruction results with a limited amount of training shapes / Carsten Last ; Betreuer: Friedrich M. Prof. Dr.-Ing. Wahl". Braunschweig : Technische Universität Braunschweig, 2016. http://d-nb.info/117581816X/34.

Full text source
APA, Harvard, Vancouver, ISO and other styles
49

SORGENTE, ANGELA. "BENESSERE FINANZIARIO DEI GIOVANI ADULTI: QUALI METODOLOGIE DI RICERCA E TECNICHE STATISTICHE SONO NECESSARIE?" Doctoral thesis, Università Cattolica del Sacro Cuore, 2018. http://hdl.handle.net/10280/39103.

Full text source
Abstract:
The general aim of this thesis is to enrich the literature on the financial well-being of young adults by adopting research methodologies and statistical techniques never before applied in this line of research. Specifically, the first chapter uses scoping methodology, a literature-synthesis methodology, with the goal of identifying the definition, components, predictors and outcomes of young adults’ financial well-being. The second chapter applies Latent Transition Analysis, with the goal of identifying homogeneous subgroups of young adults with respect to the markers of adulthood they have already reached, and of verifying the relationship between these subgroups and the financial well-being of the young adults belonging to them. The third chapter proposes a methodology for developing and validating new measurement instruments, based on the contemporary view of validity. This methodology, composed of three different steps, was used to create an instrument suited to measuring subjective financial well-being in a sample of young Italians. Finally, the fourth chapter concerns multiple informant methodology, which was used to collect information from the mother, the father and the child about the family financial socialization process and its impact on the child’s financial well-being.
The general aim of this research work is to enrich the literature on emerging adults’ financial well-being with research methodologies and statistical techniques never previously applied in this research field. Specifically, the first chapter of this thesis concerns the scoping methodology, a knowledge-synthesis methodology that I adopted to identify the definition, components, predictors and outcomes of emerging adults’ financial well-being. The second chapter applies a new statistical technique, Latent Transition Analysis, which I used to identify subgroups of emerging adults homogeneous in the configuration of adult social markers they have already reached, and to investigate the relation between these subgroups and their financial well-being. The third chapter describes a three-step methodology for developing and validating new measurement instruments, based on the contemporary view of validity proposed over the last fifty years. This three-step procedure was applied here to develop and validate a new instrument measuring subjective financial well-being for an emerging adult target population. Finally, the fourth chapter concerns the multiple informant methodology, which I applied to collect information from the mother, the father and the emerging adult child about family financial socialization and its impact on the child’s financial well-being.
APA, Harvard, Vancouver, ISO and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!

To the bibliography