Follow this link to see other types of publications on the topic: Latency variation.

Theses on the topic "Latency variation"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the 28 best theses for your research on the topic "Latency variation".

Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference for the chosen work in whichever citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Pal, Asmita. "Split Latency Allocator: Process Variation-Aware Register Access Latency Boost in a Near-Threshold Graphics Processing Unit". DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7155.

Full text
Abstract
Over the last decade, Graphics Processing Units (GPUs) have been used extensively in gaming consoles, mobile phones, workstations and data centers, as they have exhibited immense performance improvement over CPUs in graphics-intensive applications. Due to their highly parallel architecture, general-purpose GPUs (GPGPUs) have come to the foreground in applications where large data blocks can be processed in parallel. However, the performance improvement is constrained by large power consumption. Meanwhile, Near Threshold Computing (NTC) has emerged as an energy-efficient design paradigm, so operating GPUs at NTC seems like a plausible way to counteract the high energy consumption. This work investigates the challenges associated with NTC operation of GPUs and proposes a low-power GPU design, the Split Latency Allocator, to sustain the performance of GPGPU applications.
2

Channe, Gowda Anushree. "Latency and Jitter Control in 5G Ethernet Fronthaul Network". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17651/.

Full text
Abstract
With 5G technology, networks are expected to offer high speed with ultra-low latency among different users. Maintaining the current network architecture would lead to an unsustainable increase in transport delay and jitter, so limiting both has become a necessity for mobile network operators. This thesis proposes a novel mechanism to minimize packet delay and delay variation in a 5G Ethernet fronthaul network. The goal is to achieve bounded-delay aggregation of traffic suitable for fronthaul transport. Hybrid switching technology can be adopted to provide efficient fronthaul in 5G: hybrid switches multiplex traffic flows with different characteristics over the same wavelengths, increasing network resource utilization. The thesis proposes a scheduling mechanism for hybrid switches that aggregates streams from the network, the bypass (BP) traffic, with traffic from the fronthaul links, the ADD traffic, using an algorithm that looks for time gaps in the BP stream into which the ADD traffic can be inserted. The proposed strategy minimizes packet delay by exploiting the gaps available during transmission, thereby limiting network latency. The size of the required time gap, the time window, is reduced by dividing the timeout duration by a number of intervals (N), the window reduction mechanism, so that the delay variation, or jitter, of both aggregated streams is bounded. The results demonstrate that these requirements can be met by suitably tuning the algorithm's inputs, mainly the window reduction factor, the timeout duration and the number of intervals, yielding packet delay and delay variation bounded at 10 microseconds or lower, up to 85-90 percent carried load of the aggregated flows. Hence, we show their suitability for delay-sensitive future applications in 5G networking.
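A minimal sketch of the gap-insertion scheduling described above, assuming a single wavelength, FIFO ADD packets and simplified collision handling; all names are illustrative, not taken from the thesis:

```python
def schedule_add_traffic(bp_busy, add_queue, timeout, n_intervals):
    """bp_busy: sorted (start, end) intervals occupied by bypass (BP) traffic.
    add_queue: (arrival_time, duration) ADD packets in FIFO order.
    Returns a transmit start time for each ADD packet."""
    window = timeout / n_intervals  # window reduction bounds the jitter
    starts = []
    for arrival, duration in add_queue:
        t = arrival
        for s, e in bp_busy:
            if t + duration <= s:   # packet fits in the gap before this burst
                break
            t = max(t, e)           # gap too small: wait until the burst ends
        # timeout: send at the window boundary even without a clean gap
        # (contention at the forced send is ignored in this toy model)
        starts.append(min(t, arrival + window))
    return starts

# Example: one BP burst from t=2 to t=5; an ADD packet arriving at t=1 with
# duration 2 cannot fit before the burst, so it starts at t=5.
print(schedule_add_traffic([(2, 5)], [(1, 2)], timeout=40, n_intervals=4))  # [5]
```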
3

Aukštuolienė, Eglė. "Herpes simplex virus sequence variation in the promoter of the latency associated gene and correlation with clinical features". Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130327_100916-38563.

Full text
Abstract
Herpes simplex virus (HSV) causes recurrent orofacial and genital infections and establishes latent infection in sensory neurons. During latency all virus genes are suppressed except the latency associated transcripts, which are transcribed from the latency associated gene (LAT). It is established that HSV LAT promoter mutants have lower spontaneous reactivation rates in small animal models compared to wild-type virus. However, variation in the LAT promoter has not been studied in viruses from clinical samples in humans. The aim of the study was to evaluate the sequence variation in the herpes simplex virus latency associated gene promoter from clinical samples, by developing and applying molecular methods, and to correlate it with the clinical features of herpes infection. In this study a new PCR method specific for the HSV LAT promoter was developed, and HSV LAT promoter DNA sequences from Lithuanian and Swedish mucocutaneous and cerebrospinal fluid clinical samples were analyzed. HSV type 2 was found to be the main cause of genital herpes in the Lithuanian patient population. All cases of orofacial herpes simplex infection were caused by HSV type 1. The structure of the LAT promoter region was studied in 145 HSV clinical samples. The HSV LAT promoter was found to be G+C rich and contained variable homopolymer tracts. Inter- and intrastrain variability of the homopolymer tracts in the promoter region was detected, potentially giving rise to a large variation at the protein level, leading to... [to full text]
4

Soudais, Guillaume. "End-to-End Service Guarantee for High-Speed Optical Networks". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT027.

Full text
Abstract
Driven by ever-growing bandwidth and performance needs, the IT network has grown such that OT and telecommunications networks are looking to exploit this infrastructure for their expansion. These three sectors have historically been separated due to different requirements on latency, latency variation and reliability. To answer time-critical application needs, the Time Sensitive Networking task force has developed new sets of protocols that are starting to be implemented in commercial products. Other groups have proposed novel architectures with time control to enable guaranteed performance between and inside edge datacenters. In my PhD I propose a solution to carry time-critical applications over legacy networks, as it does not require changing the whole architecture. I show the benefits of its implementation in TSN networks for a future-proof solution with improved resource usage. To carry time-critical traffic over legacy networks, I propose to create a path by isolating and scheduling time-critical traffic on a channel with guaranteed latency. With this construction, I build an algorithm to perform latency variation compensation, enabling constant-latency transmission for time-critical traffic. Next, I propose a synchronization scheme and implement a monitoring network, used here primarily for latency monitoring, which helps me gain insight into the latency distribution my protocol creates. Lastly, with an improved latency compensation algorithm, I demonstrate better jitter performance and study the turn-up time of our protocol, enabling resource usage only when time-critical traffic is present. In my PhD I demonstrate, with an FPGA implementation and a commercial product, a reduction of latency variation that enables OT and telecommunications network applications to run on legacy and TSN-augmented networks.
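The compensation idea, holding each packet until a fixed target latency has elapsed so the application sees constant latency, can be sketched as follows; this is a toy model assuming synchronized clocks, not the thesis's FPGA implementation:

```python
def compensate(packets, target_latency):
    """packets: list of (send_time, recv_time) with synchronized clocks.
    Returns constant-latency release times, or None if a packet arrived
    too late to meet the target."""
    releases = []
    for send, recv in packets:
        release = send + target_latency   # every packet exits at send + target
        releases.append(release if recv <= release else None)
    return releases

# Example: jittery arrivals flattened to a constant 10-unit latency.
print(compensate([(0, 3), (1, 7), (2, 4)], target_latency=10))
# -> [10, 11, 12]
```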
5

Challis, E. A. L. "Variational approximate inference in latent linear models". Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1414228/.

Full text
Abstract
Latent linear models are core to much of machine learning and statistics. Specific examples of this model class include Bayesian generalised linear models, Gaussian process regression models and unsupervised latent linear models such as factor analysis and principal components analysis. In general, exact inference in this model class is computationally and analytically intractable. Approximations are thus required. In this thesis we consider deterministic approximate inference methods based on minimising the Kullback-Leibler (KL) divergence between a given target density and an approximating 'variational' density. First, we consider Gaussian KL (G-KL) approximate inference methods where the approximating variational density is a multivariate Gaussian. Regarding this procedure we make a number of novel contributions: sufficient conditions for which the G-KL objective is differentiable and convex are described, constrained parameterisations of Gaussian covariance that make G-KL methods fast and scalable are presented, and the G-KL lower-bound to the target density's normalisation constant is proven to dominate those provided by local variational bounding methods. We also discuss complexity and model applicability issues of G-KL and other Gaussian approximate inference methods. To numerically validate our approach we present results comparing the performance of G-KL and other deterministic Gaussian approximate inference methods across a range of latent linear model inference problems. Second, we present a new method to perform KL variational inference for a broad class of approximating variational densities. Specifically, we construct the variational density as an affine transformation of independently distributed latent random variables. The method we develop extends the known class of tractable variational approximations for which the KL divergence can be computed and optimised and enables more accurate approximations of non-Gaussian target densities to be obtained.
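For orientation, the G-KL bound referred to above has the standard form, sketched here for an unnormalized target $p^*(w) = e^{\phi(w)}$ on $\mathbb{R}^D$ with normalization constant $Z$ and Gaussian approximation $q = \mathcal{N}(m, S)$:

$$\log Z \;\ge\; \mathcal{B}(m, S) \;=\; \mathbb{E}_{q}\!\left[\phi(w)\right] + \tfrac{1}{2}\log\det S + \tfrac{D}{2}\log(2\pi e),$$

which is $\mathbb{E}_q[\log p^*]$ plus the Gaussian entropy; it follows from $\mathrm{KL}(q \,\|\, p) \ge 0$, and maximizing over $(m, S)$ gives the G-KL approximation.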
6

Hedges, Stephanie Nicole. "A Latent Class Analysis of American English Dialects". BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6480.

Full text
Abstract
Research on the dialects of English spoken within the United States shows variation in lexical, morphological, syntactic, and phonological features. Previous research has tended to focus on one linguistic variable at a time. To incorporate multiple variables in the same analysis, this thesis uses latent class analysis to perform a cluster analysis on results from the Harvard Dialect Survey (2003), in order to investigate which phonetic variables from the survey are most closely associated with each dialect. The thesis also examines how closely the latent class analysis results correspond to the Atlas of North American English (Labov, Ash & Boberg, 2005b) and to Joshua Katz's heat maps (Business Insider, 2013; Byrne, 2013; Huffington Post, 2013; The Atlantic, 2013). The results from the Harvard Dialect Survey generally parallel the findings of the Linguistic Atlas of North American English, providing support for six basic dialects of American English. The variables with the highest probability of occurring in the North dialect are ‘pajamas: /æ/’, ‘coupon: /ju:/’, ‘Monday, Friday: /e:/’, ‘Florida: /ɔ/’, and ‘caramel: 2 syllables’. For the South dialect, the top variables are ‘handkerchief: /ɪ/’, ‘lawyer: /ɒ/’, ‘pajamas: /ɑ/’, and ‘poem: 2 syllables’. The top variables in the West dialect include ‘pajamas: /ɑ/’, ‘Florida: /ɔ/’, ‘Monday, Friday: /e:/’, ‘handkerchief: /ɪ/’, and ‘lawyer: /ɔj/’. For the New England dialect, they are ‘Monday, Friday: /e:/’, ‘route: /ru:t/’, ‘caramel: 3 syllables’, ‘mayonnaise: /ejɑ/’, and ‘lawyer: /ɔj/’. The top variables for the Midland dialect are ‘pajamas: /æ/’, ‘coupon: /u:/’, ‘Monday, Friday: /e:/’, ‘Florida: /ɔ/’, and ‘lawyer: /ɔj/’, and for New York City and the Mid-Atlantic States they are ‘handkerchief: /ɪ/’, ‘Monday, Friday: /e:/’, ‘pajamas: /ɑ/’, ‘been: /ɪ/’, ‘route: /ru:t/’, ‘lawyer: /ɔj/’, and ‘coupon: /u:/’. One major discrepancy between the latent class analysis results and the linguistic atlas is the region of the low back merger: in the latent class analysis, the North dialect has a low probability of the ‘cot/caught’ low back vowel distinction, whereas the linguistic atlas found this to be a salient variable of the North dialect. In conclusion, these results show that latent class analysis corresponds with current research while adding information from multiple variables.
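To make the clustering machinery concrete, here is a toy EM for a latent class model with binary item indicators, a simplification of the multi-category survey setting; all names are illustrative, and the thesis used dedicated LCA tooling rather than this sketch:

```python
import numpy as np

def lca_em(X, K, iters=200, seed=0):
    """X: (n, J) binary item matrix; K: number of latent classes."""
    rng = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(K, 1.0 / K)                 # class proportions
    theta = rng.uniform(0.25, 0.75, (K, J))  # P(item j = 1 | class k)
    for _ in range(iters):
        # E-step: posterior class responsibilities from Bernoulli likelihoods
        logp = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate proportions and item-response probabilities
        pi = r.mean(axis=0)
        theta = np.clip((r.T @ X) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r

# Toy usage: 200 respondents answering 8 binary items, 3 latent classes.
X = (np.random.default_rng(1).random((200, 8)) > 0.5).astype(float)
pi, theta, r = lca_em(X, K=3)
```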
7

Khan, Mohammad. "Variational learning for latent Gaussian model of discrete data". Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43640.

Full text
Abstract
This thesis focuses on the variational learning of latent Gaussian models for discrete data. The learning is difficult since the discrete-data likelihood is not conjugate to the Gaussian prior. Existing methods to solve this problem are either inaccurate or slow. We consider a variational approach based on evidence lower bound optimization. We solve the following two main problems of the variational approach: the computational inefficiency associated with the maximization of the lower bound and the intractability of the lower bound. For the first problem, we establish concavity of the lower bound and design fast learning algorithms using concave optimization. For the second problem, we design tractable and accurate lower bounds, some of which have provable error guarantees. We show that these lower bounds not only make accurate variational learning possible, but can also give rise to algorithms with a wide variety of speed-accuracy trade-offs. We compare various lower bounds, both theoretically and experimentally, giving clear design guidelines for variational algorithms. Through application to real-world data, we show that the variational approach can be more accurate and faster than existing methods.
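As a sketch of the objective being maximized, with notation assumed here rather than taken from the thesis: for observations $y_n$ with linear predictors $\eta_n = a_n^\top z$ and a latent Gaussian $z \sim \mathcal{N}(\mu, \Sigma)$, the evidence lower bound over a Gaussian approximation $q(z) = \mathcal{N}(m, V)$ is

$$\log p(y) \;\ge\; \mathcal{L}(m, V) \;=\; \sum_{n=1}^{N} \mathbb{E}_{q}\!\left[\log p(y_n \mid \eta_n)\right] \;-\; \mathrm{KL}\!\left[\mathcal{N}(m, V) \,\big\|\, \mathcal{N}(\mu, \Sigma)\right].$$

For discrete likelihoods the expectations in the first sum are the intractable pieces; the tractable lower bounds developed in the thesis replace exactly these terms.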
8

Toyinbo, Peter Ayo. "Additive Latent Variable (ALV) Modeling: Assessing Variation in Intervention Impact in Randomized Field Trials". Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/3673.

Full text
Abstract
In order to personalize or tailor treatments to maximize impact among different subgroups, there is a need to model not only the main effects of intervention but also the variation in intervention impact by baseline individual-level risk characteristics. To this end a suitable statistical model will allow researchers to answer a major research question: who benefits or is harmed by this intervention program? Commonly in social and psychological research, the baseline risk may be unobservable and has to be estimated from observed indicators that are measured with errors; it may also have a nonlinear relationship with the outcome. Most of the existing nonlinear structural equation models (SEMs) developed to address such problems employ polynomial or fully parametric nonlinear functions to define the structural equations. These methods are limited because they require functional forms to be specified beforehand, and even when the models include higher-order polynomials there may be problems when the focus of interest relates to the function over its whole domain. This dissertation develops a more flexible statistical modeling technique for assessing complex relationships between a proximal/distal outcome and 1) baseline characteristics measured with errors, and 2) baseline-treatment interaction, such that the shapes of these relationships are data driven and there is no need for the shapes to be determined a priori. In the ALV model structure the nonlinear components of the regression equations are represented as a generalized additive model (GAM), or a generalized additive mixed-effects model (GAMM). Replication study results show that the ALV model estimates of the underlying relationships in the data are sufficiently close to the true pattern. The ALV modeling technique allows researchers to assess how an intervention affects individuals differently as a function of baseline risk that is itself measured with error, and to uncover complex relationships in the data that might otherwise be missed. Although the ALV approach is computationally intensive, it relieves its users from the need to decide functional forms before the model is run. It can be extended to examine complex nonlinearity between growth factors and distal outcomes in a longitudinal study.
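Schematically, and with notation assumed here rather than taken from the dissertation, the ALV structure couples a measurement model for the latent baseline risk $\eta$ with a GAM-type outcome model:

$$x_{ij} = \lambda_j\,\eta_i + \delta_{ij}, \qquad y_i = \beta_0 + \beta_1 T_i + f(\eta_i) + g(\eta_i)\,T_i + \varepsilon_i,$$

where the $x_{ij}$ are error-prone baseline indicators, $T_i$ is treatment assignment, and the smooths $f$ and $g$ are estimated from the data, so $g$ captures how intervention impact varies with baseline risk without a pre-specified functional form.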
9

Christmas, Jacqueline. "Robust spatio-temporal latent variable models". Thesis, University of Exeter, 2011. http://hdl.handle.net/10036/3051.

Full text
Abstract
Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are widely-used mathematical models for decomposing multivariate data. They capture spatial relationships between variables, but ignore any temporal relationships that might exist between observations. Probabilistic PCA (PPCA) and Probabilistic CCA (ProbCCA) are versions of these two models that explain the statistical properties of the observed variables as linear mixtures of an alternative, hypothetical set of hidden, or latent, variables and explicitly model noise. Both the noise and the latent variables are assumed to be Gaussian distributed. This thesis introduces two new models, named PPCA-AR and ProbCCA-AR, that augment PPCA and ProbCCA respectively with autoregressive processes over the latent variables to additionally capture temporal relationships between the observations. To make PPCA-AR and ProbCCA-AR robust to outliers and able to model leptokurtic data, the Gaussian assumptions are replaced with infinite scale mixtures of Gaussians, using the Student-t distribution. Bayesian inference calculates posterior probability distributions for each of the parameter variables, from which we obtain a measure of confidence in the inference. It avoids the pitfalls associated with the maximum likelihood method: integrating over all possible values of the parameter variables guards against overfitting. For these new models the integrals required for exact Bayesian inference are intractable; instead a method of approximation, the variational Bayesian approach, is used. This enables the use of automatic relevance determination to estimate the model orders. PPCA-AR and ProbCCA-AR can be viewed as linear dynamical systems, so the forward-backward algorithm, also known as the Baum-Welch algorithm, is used as an efficient method for inferring the posterior distributions of the latent variables. The exact algorithm is tractable because Gaussian assumptions are made regarding the distribution of the latent variables. This thesis introduces a variational Bayesian forward-backward algorithm based on Student-t assumptions. The new models are demonstrated on synthetic datasets and on real remote sensing and EEG data.
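Schematically, the augmented models are linear dynamical systems of the following form, with notation assumed here for illustration:

$$z_t = A\,z_{t-1} + \epsilon_t, \qquad x_t = W z_t + \mu + \eta_t,$$

so the static latent variables of PPCA gain autoregressive dynamics through $A$; for robustness, the Gaussian noise terms $\epsilon_t$ and $\eta_t$ are replaced by Student-t distributions expressed as infinite scale mixtures of Gaussians, as the abstract describes.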
10

Dahl, Joakim. "Analysis of the effect of latent dimensions on disentanglement in Variational Autoencoders". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291614.

Full text
Abstract
Disentanglement is a subcategory of representation learning where we not only believe that useful properties can be extracted from the data in a more compact form, but also envision that the data itself is constituted from a lower-dimensional subset of explanatory factors. Explanatory factors are an ambiguous concept and what they portray varies with the dataset. A dataset of flowers may have stem size and color as explanatory factors, while for another dataset it may be location or position. The explanatory factors are themselves often nested in a complex interaction in order to generate the data. Disentanglement can be summarized as breaking the potentially complex interaction between the explanatory factors to liberate them from one another. The liberated explanatory factors can then constitute the foundation of the representations, a procedure that is believed to enhance downstream machine learning tasks. Disentangling the explanatory factors in an unsupervised environment has proven to be a difficult task for many reasons, perhaps most notably the lack of knowledge of how many they are and what they reflect. To be able to evaluate the degree of disentanglement attained, we consider a dataset annotated with target labels corresponding to the explanatory factors that generated the data. Knowing the number of explanatory factors gives an indication of what dimensionality the representation should have to at least be able to capture all of them. Many of the empirical studies considered in this paper treat the dimensionality of the representations as a constant when evaluating the degree of disentanglement achieved. The purpose of this paper is to extend the discussion regarding disentanglement by treating the dimensionality of the representations as a variable to be varied, and to investigate how this impacts the degree of disentanglement achieved. The experiments performed in this paper do, however, suggest that visual inspection of the disentanglement attained in a high-dimensional representation space is difficult to interpret and evaluate for the human eye. One is therefore even more reliant on the disentanglement scores, which do not require any human interaction for the evaluation. The disentanglement scores seem to exhibit a static behaviour, not changing as much as one would believe given the visual inspection. Therefore, investigating how the representation dimensionality affects the disentanglement attained among the representations is a delicate matter. Many of the empirical studies considered in this paper suggest that mostly the regularization impacts the disentanglement. It does, however, seem like there are far more parameters than originally expected that need further evaluation to deduce their impact with respect to disentanglement.
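For reference, a minimal sketch of the beta-VAE objective commonly used in such disentanglement studies; this is an assumption for illustration, and the thesis's exact model and hyperparameters may differ:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """mu, logvar: encoder outputs of shape (batch, latent_dim); latent_dim
    is the representation dimensionality treated as a variable above."""
    recon = F.mse_loss(x_recon, x, reduction="sum")               # reconstruction
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    return recon + beta * kl  # beta > 1 pressures the latents to factorize
```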
11

Chandra, Sathees B. C. "Heritable variation for learning: molecular analysis of reversal learning and latent inhibition in the honeybee, Apis millifera". The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu148819515435839.

Full text
12

Brault, Vincent. "Estimation et sélection de modèle pour le modèle des blocs latents". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112238/document.

Full text
Abstract
Classification aims at partitioning data sets into subsets that are as homogeneous as possible; the observations in a class should resemble each other more than they resemble observations from other classes. The problem is compounded when the statistician wants to obtain a cross-classification of the individuals and the variables. The latent block model defines a distribution for each crossing of an object class and a variable class, and the observations are assumed to be independent conditionally on the choice of these classes. However, factorizing the joint distribution of the labels is impossible, obstructing the calculation of the log-likelihood and the use of the EM algorithm. Several methods and criteria exist to find these partitions, some frequentist, some Bayesian, some stochastic, some not. In this thesis, we first propose sufficient conditions to obtain identifiability of the model. In a second step, we study two algorithms proposed to counteract the problem of the EM algorithm: the VEM algorithm (Govaert and Nadif (2008)) and the SEM-Gibbs algorithm (Keribin, Celeux and Govaert (2010)). In particular, we analyze the combination of both and highlight why the algorithms degenerate (the term used to say that they return empty classes). By choosing judicious priors, we then propose a Bayesian adaptation to limit this phenomenon. In particular, we use a Gibbs sampler and propose a stopping criterion based on the statistic of Brooks-Gelman (1998). We also propose an adaptation of the Largest Gaps algorithm (Channarond et al. (2012)). Following their proofs, we show that the label and parameter estimators obtained are consistent when the numbers of rows and columns tend to infinity. Furthermore, we propose a method to select the numbers of row and column classes whose estimation is also consistent when the numbers of rows and columns are very large. To estimate the number of classes, we study the ICL criterion (Integrated Completed Likelihood), for which we propose an exact form. After studying the asymptotic approximation, we propose a BIC criterion (Bayesian Information Criterion) and conjecture that the two criteria select the same results and that these estimates are consistent, a conjecture supported by theoretical and empirical results. Finally, we compare the different combinations and propose a methodology for co-clustering.
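For orientation, the latent block model likelihood whose label sum blocks factorization can be sketched in standard notation as

$$p(x) \;=\; \sum_{z}\sum_{w}\, \prod_{i}\pi_{z_i}\, \prod_{j}\rho_{w_j}\, \prod_{i,j}\varphi\!\left(x_{ij};\,\alpha_{z_i w_j}\right),$$

where $z$ and $w$ are the row and column labels with proportions $\pi$ and $\rho$, and $\varphi$ is the block density with parameter $\alpha_{k\ell}$; because rows and columns are coupled through the double product, the posterior over $(z, w)$ does not factorize and the E-step is intractable, which is what motivates the VEM and SEM-Gibbs alternatives.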
13

Jaradat, Shatha. "OLLDA: Dynamic and Scalable Topic Modelling for Twitter : AN ONLINE SUPERVISED LATENT DIRICHLET ALLOCATION ALGORITHM". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177535.

Full text
Abstract
Providing high-quality topic inference in today's large and dynamic corpora, such as Twitter, is a challenging task, especially given that content in this environment consists of short texts with many abbreviations. This project proposes an improvement of a popular online topic modelling algorithm for Latent Dirichlet Allocation (LDA), incorporating supervision to make it suitable for the Twitter context. The improvement is motivated by the need for a single algorithm that achieves both objectives: analyzing huge numbers of documents, including new documents arriving in a stream, while at the same time achieving high-quality topic detection in special-case environments such as Twitter. The proposed algorithm is a combination of an online algorithm for LDA and a supervised variant of LDA, labeled LDA. The performance and quality of the proposed algorithm are compared with those of these two algorithms. The results demonstrate that the proposed algorithm shows better performance and quality than the supervised variant of LDA, and achieves better quality than the online algorithm. These improvements make our algorithm an attractive option when applied to dynamic environments like Twitter. An environment for analyzing and labelling data was designed to prepare the dataset before executing the experiments. Possible application areas for the proposed algorithm are tweet recommendation and trend detection.
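The labeled-LDA ingredient can be illustrated by the constraint it imposes in the variational E-step: a tweet may only draw on topics that match its labels. A toy sketch, with names that are hypothetical rather than from the thesis code:

```python
import numpy as np

def mask_topics(exp_elog_theta, label_mask):
    """exp_elog_theta: (num_topics,) per-document topic weights produced by
    an online VB E-step; label_mask: True where a topic matches a doc label."""
    return np.where(label_mask, exp_elog_theta, 0.0)  # forbid unlabeled topics
```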
14

Wenzel, Florian. "Scalable Inference in Latent Gaussian Process Models". Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/20926.

Full text
Abstract
Latent Gaussian process (GP) models help scientists to uncover hidden structure in data, express domain knowledge and form predictions about the future. These models have been successfully applied in many domains including robotics, geology, genetics and medicine. A GP defines a distribution over functions and can be used as a flexible building block to develop expressive probabilistic models. The main computational challenge of these models is to make inference about the unobserved latent random variables, that is, computing the posterior distribution given the data. Currently, most interesting Gaussian process models have limited applicability to big data. This thesis develops a new efficient inference approach for latent GP models. Our new inference framework, which we call augmented variational inference, is based on the idea of considering an augmented version of the intractable GP model that renders the model conditionally conjugate. We show that inference in the augmented model is more efficient and, unlike in previous approaches, all updates can be computed in closed form. The ideas around our inference framework facilitate novel latent GP models that lead to new results in language modeling, genetic association studies and uncertainty quantification in classification tasks.
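As background on how conditional conjugacy can be obtained for logit-type GP likelihoods, sketched here as an assumption about the augmentation rather than a statement of the thesis's full construction, the Pólya-Gamma identity rewrites the sigmoid as a Gaussian scale mixture:

$$\sigma(z) \;=\; \frac{e^{z/2}}{2}\int_{0}^{\infty} e^{-\omega z^{2}/2}\; p_{\mathrm{PG}}(\omega \mid 1, 0)\, d\omega,$$

so that, conditioned on the auxiliary variable $\omega$, the likelihood is Gaussian in $z$ and every update of the variational algorithm has a closed form.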
15

Abuzaid, Abdullah Ibrahim. "A Variation of Positioning Phase Change Materials (PCMs) Within Building Enclosures and Their Utilization Toward Thermal Performance". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/100612.

Full text
Abstract
Recently, buildings have been receiving more serious attention as a way to help reduce global energy consumption. At the same time, thermal comfort has become an increasing concern for building occupants. Phase Change Materials (PCMs), which are capable of storing and releasing significant amounts of energy by melting and solidifying at a given temperature, are perceived as a promising opportunity for improving the thermal performance of buildings: their thermophysical properties and the latent heat absorbed or released while changing state (or phase) make them suitable for thermal energy storage systems that reduce overall energy demand, specifically during peak hours, and improve thermal comfort in buildings. This research aims to provide an overview of opportunities and challenges for the utilization of PCMs in the Architecture, Engineering, and Construction (AEC) sector, a broader understanding of specifically promising technologies, and a clarification of the effectiveness of different applications in building enclosure design, especially in exterior walls. The research discusses how PCMs can be incorporated within building enclosures effectively to enhance building performance and improve thermal comfort while reducing heating and cooling energy consumption in buildings. The major objectives of the research include studying the properties of PCMs and their potential impact on building construction, clarifying PCM selection criteria for building applications, identifying the effectiveness of utilizing PCMs for saving energy, and evaluating the contribution of utilizing PCMs in building enclosures to thermal comfort. The research uses an exploratory quantitative approach that contains three main stages: 1) a systematic literature review, 2) laboratory experiments, and 3) validation, to meet the goal of the research. Finally, by extrapolating results, the research ends with a practical assessment of application opportunities and of how to effectively utilize PCMs in exterior walls of buildings.
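For context, the storage capacity that makes PCMs attractive is the latent term in the textbook sensible-plus-latent heat balance, a standard relation rather than a formula from the dissertation:

$$Q \;=\; m\left[c_{p,s}\,(T_m - T_1) \;+\; L \;+\; c_{p,l}\,(T_2 - T_m)\right],$$

where $m$ is the PCM mass, $T_m$ the melting temperature, $L$ the latent heat of fusion, and $c_{p,s}$, $c_{p,l}$ the solid- and liquid-phase specific heats; for a narrow comfort band around $T_m$, the latent term $mL$ dominates the stored energy.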
Ph.D.
16

Carlsson, Filip and Philip Lindgren. "Deep Scenario Generation of Financial Markets". Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273631.

Full text
Abstract
The goal of this thesis is to explore a new clustering algorithm, VAE-clustering, and examine whether it can be applied to find differences in the distributions of stock returns, augment the distribution of a current stock portfolio, and see how it performs under different market conditions. The VAE-clustering method is, as mentioned, newly introduced and not widely tested, especially not on time series. The first step is therefore to see whether and how well the clustering works. We first apply the algorithm to a dataset containing monthly time series of the power demand in Italy. The purpose of this part is to focus on how well the method works technically. Once the model works well and generates proper results with the Italian power demand data, we move forward and apply it to stock return data. In the latter application we are unable to find meaningful clusters and are therefore unable to move towards the goal of the thesis. The results show that the VAE-clustering method is applicable to time series. The power demand has clear differences from season to season and the model can successfully identify those differences. For the financial data we hoped that the model would be able to find different market regimes based on time periods, but the model is not able to distinguish different time periods from each other. We therefore conclude that the VAE-clustering method is applicable to time series data, but that the structure and setting of the financial data in this thesis make it too hard to find meaningful clusters. The major finding is that the VAE-clustering method can be applied to time series. We highly encourage further research into whether the method can be successfully used on financial data in settings other than those tested in this thesis.
17

Kim, June-Yung. "VARIATIONS IN THE CO-OCCURRENCE OF MENTAL HEALTH PROBLEMS IN ADOLESCENTS WITH PRENATAL DRUG EXPOSURE". Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1596435824634679.

Full text
18

Wedenberg, Kim and Alexander Sjöberg. "Online inference of topics : Implementation of the topic model Latent Dirichlet Allocation using an online variational bayes inference algorithm to sort news articles". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-222429.

Full text
Abstract
The client of the project has problems with complex queries and noise when querying their stream of five million news articles per day. This results in much manual work when sorting and pruning the search results of their queries. Instead of using direct text matching, the approach of the project was to use a topic model to describe articles in terms of the topics covered and to use this new information to sort the articles. An online version of the topic model Latent Dirichlet Allocation was implemented using online variational Bayes inference to handle streamed data. Using 100 dimensions, topics such as sports and politics emerged during training on a simulated stream of 1.7 million articles. These topics were used to sort articles based on context. The implementation was found accurate enough to be useful for the client as well as fast and stable enough to be a feasible solution to the problem.
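The online variational Bayes algorithm for LDA is also available in gensim; a minimal stream-style sketch with toy data follows (the corpus, vocabulary and parameter values are illustrative, and the thesis used its own implementation):

```python
from gensim import corpora, models

docs = [["latency", "network", "jitter"], ["sports", "game", "score"]]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

# Online LDA: update_every/chunksize control the mini-batch streaming updates.
lda = models.LdaModel(bow, id2word=dictionary, num_topics=100,
                      update_every=1, chunksize=10000, passes=1)

# As new batches of articles arrive from the stream:
lda.update([dictionary.doc2bow(["network", "score"])])
print(lda.get_document_topics(bow[0]))  # topic mixture used to sort articles
```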
19

Jouffroy, Emma. "Développement de modèles non supervisés pour l'obtention de représentations latentes interprétables d'images". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0050.

Full text
Abstract
The Laser Megajoule (LMJ) is a large research device that simulates pressure and temperature conditions similar to those found in stars. During experiments, diagnostics are guided into an experimental chamber for precise positioning. To minimize the risks associated with human error in such an experimental context, the automation of an anti-collision system is envisaged. This involves the design of machine learning tools offering reliable decision levels based on the interpretation of images from cameras positioned in the chamber. Our research focuses on probabilistic generative neural methods, in particular variational auto-encoders (VAEs). The choice of this class of models is linked to the fact that it potentially enables access to a latent space directly linked to the properties of the objects making up the observed scene. The major challenge is to study the design of deep network models that effectively enable access to such a fully informative and interpretable representation, with a view to system reliability. The probabilistic formalism intrinsic to the VAE allows us, if we can trace back to such a representation, to access an analysis of the uncertainties of the encoded information.
20

Hameed, Khurram. "Computer vision based classification of fruits and vegetables for self-checkout at supermarkets". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2022. https://ro.ecu.edu.au/theses/2519.

Full text
Abstract
The field of machine learning, and in particular methods to improve the capability of machines to perform a wider variety of generalised tasks, is among the most rapidly growing research areas in today's world. The current applications of machine learning and artificial intelligence can be divided into many significant fields, namely computer vision, data science, real-time analytics and Natural Language Processing (NLP). All of these applications help computer-based systems to operate more usefully in everyday contexts. Computer vision research is currently active in a wide range of areas, such as the development of autonomous vehicles, object recognition, Content Based Image Retrieval (CBIR), image segmentation and terrestrial analysis from space (i.e. crop estimation). Despite significant prior research, the area of object recognition still has many topics to be explored. This PhD thesis focuses on using advanced machine learning approaches to enable the automated recognition of fresh produce (i.e. fruits and vegetables) at supermarket self-checkouts. This type of complex classification task is one of the most recently emerging applications of advanced computer vision approaches and is a productive research topic in this field due to the limited means of representing the features and the machine learning techniques available for classification. Fruits and vegetables exhibit significant inter- and intra-class variance in weight, shape, size, colour and texture, which makes the classification challenging. Effective fruit and vegetable classification has significant importance in daily life, e.g. crop estimation, fruit classification, robotic harvesting, fruit quality assessment, etc. One potential application for this fruit and vegetable classification capability is supermarket self-checkouts. Increasingly, supermarkets are introducing self-checkouts in stores to make the checkout process easier and faster. However, there are a number of challenges with this, as not all goods can readily be sold with packaging and barcodes, for instance loose fresh items (e.g. fruits and vegetables). Adding barcodes to these types of items individually is impractical, and pre-packaging limits the freedom of choice when selecting fruits and vegetables and creates additional waste, hence reducing customer satisfaction. The current situation, which relies on customers correctly identifying produce themselves, leaves open the potential for incorrect billing, either due to inadvertent error or due to intentional fraudulent misclassification, resulting in financial losses for the store. To address this identified problem, the main goals of this PhD work are: (a) exploring the types of visual and non-visual sensors that could be incorporated into a self-checkout system for classification of fruits and vegetables, (b) determining a suitable feature representation method for fresh produce items available at supermarkets, (c) identifying optimal machine learning techniques for classification within this context and (d) evaluating our work relative to the state-of-the-art object classification results presented in the literature. An in-depth analysis of related computer vision literature and techniques is performed to identify and implement the possible solutions. A progressive process distribution approach is used for this project, where the task of computer vision based fruit and vegetable classification is divided into pre-processing and classification techniques.
Different classification techniques have been implemented and evaluated as possible solutions to this problem. Both visual and non-visual features of fruits and vegetables are exploited to perform the classification. Novel classification techniques have been carefully developed to deal with the complex and highly variant physical features of fruits and vegetables while taking advantage of both visual and non-visual features. The capability of the classification techniques is tested both individually and in ensembles to achieve higher effectiveness. Significant results have been obtained, from which it can be concluded that fruit and vegetable classification is a complex task with many challenges involved. It is also observed that a larger dataset can better comprehend the complex variant features of fruits and vegetables: complex multidimensional features can be extracted from larger datasets to generalise over a higher number of classes. However, the development of a larger multiclass dataset is an expensive and time-consuming process. The effectiveness of classification techniques can be significantly improved by removing background occlusions and complexities. It is also worth mentioning that an ensemble of simple, less complicated classification techniques can achieve effective results even when applied to fewer features and a smaller number of classes. The combination of visual and non-visual features can reduce the struggle of a classification technique to deal with a higher number of classes with similar physical features. Classification of fruits and vegetables with similar physical features (i.e. colour and texture) needs careful estimation and hyper-dimensional embedding of visual features; implementing rigorous classification penalties as loss functions can achieve this goal, at the cost of time and computational requirements. There is a significant need to develop larger datasets for different fruit- and vegetable-related computer vision applications. Considering more sophisticated loss function penalties and discriminative hyper-dimensional feature embedding techniques can significantly improve the effectiveness of classification techniques for fruit and vegetable applications.
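As a concrete, assumed baseline for the visual-feature classification described above, a transfer-learning sketch in PyTorch; the class count, optimizer and loss are illustrative, not the thesis configuration:

```python
import torch
import torchvision as tv

# Fine-tune an ImageNet-pretrained CNN for produce classes (a sketch).
model = tv.models.resnet18(weights=tv.models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 15)  # e.g. 15 produce classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224) float tensor; labels: (B,) long tensor."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```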
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Huang, Yung-Hui y 黃永輝. "A Variation of Minimum Latency Problem on Path and Tree". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/34224405045658544193.

Texto completo
Resumen
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year: 95
In a mobile environment, users retrieve information through portable devices. Since mobile devices usually have limited power, minimizing data access latency is important. Periodic broadcasting of frequently requested data can reduce traffic over the air and save the power of mobile devices. However, users must wait for the required data to appear on the broadcast channel, and the longer they wait, the more power their devices consume. Finding a minimum latency tour can thus help in solving this kind of problem. In this paper we study a variation of the minimum latency problem (MLP) [2]. The MLP is to find a walk tour on a graph G(V,E) with a distance matrix d_{i,j}, where d_{i,j} denotes the distance between v_i and v_j. Let l(v_i) be the latency of v_i, defined as the distance traveled before v_i is first visited. The minimum latency tour minimizes ∑_i l(v_i). In some message broadcast and scheduling problems [8], each vertex also has a latency time and a weight; those problems extend the objective function of the minimum latency tour to ∑_i w(v_i)·l(v_i), where w(v_i) is the weight of v_i and l(v_i) now also accumulates the vertex latency times. This definition is equivalent to the MLP with no edge distances but with vertex latency times and vertex weights. We give a linear-time algorithm for un-weighted full k-ary trees and k-path graphs, and an O(n log n) time algorithm for general tree graphs. The time complexity on trees matches Adolphson's result; however, the algorithm given here is not only simpler and easier to understand, but also more flexible, and can thus be easily extended to other classes of graphs.
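A minimal sketch (a hypothetical helper, not the thesis algorithm) that makes the objective concrete by evaluating ∑_i w(v_i)·l(v_i) for a given visit order, with the optional per-vertex latency times of the extended variant:

```python
# Evaluates sum over v of w(v) * l(v) for a given visit order, where l(v) is
# the travel distance (plus, optionally, vertex latency times) accumulated
# before v is first visited.
def weighted_latency(order, dist, w, vertex_time=None):
    total, elapsed = 0.0, 0.0
    prev = order[0]
    for v in order:
        if v != prev:
            elapsed += dist[prev][v]   # edge distance travelled to reach v
        total += w[v] * elapsed        # v's weighted latency contribution
        if vertex_time is not None:    # extended variant: vertex latency time
            elapsed += vertex_time[v]
        prev = v
    return total

# Tiny path example v0 - v1 - v2 with unit edge distances and unit weights:
dist = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0}}
print(weighted_latency([0, 1, 2], dist, w={0: 1, 1: 1, 2: 1}))  # 0 + 1 + 2 = 3.0
```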
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Sharma, Dinesh R. McGee Daniel. "Logistic regression, measures of explained variation, and the base rate problem". 2006. http://etd.lib.fsu.edu/theses/available/etd-06292006-153249.

Texto completo
Resumen
Thesis (Ph. D.)--Florida State University, 2006.
Advisor: Daniel L. McGee, Sr., Florida State University, College of Arts and Sciences, Dept. of Statistics. Title and description from dissertation home page (viewed Sept. 21, 2006). Document formatted into pages; contains xii, 147 pages. Includes bibliographical references.
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Yeh, Mei Yu y 葉美妤. "Variations in BOLD response latency estimated from event-related fMRI at 3T: Comparisons between gradient-echo and spin-echo". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/71818480564676041029.

Texto completo
Resumen
Master's thesis
Chang Gung University
Graduate Institute of Medical Physics and Imaging Science
Academic year: 97
Functional MRI (fMRI) based on blood oxygenation level-dependent (BOLD) contrast commonly uses the gradient-recalled echo (GRE) signal to detect regional hemodynamic variations due to neural activity. While the spatial localization of activation has shown promising applications in both neuroscience and clinical studies, the temporal BOLD response is still a poor proxy for the timing of neural activity. In particular, on the sub-second time scale between brain regions, the hemodynamic response may not be able to resolve the differences, due to its signal origin, the noise in the data, or both. This study aimed to evaluate the performance of latency estimation by different BOLD fMRI techniques, with two event-related experiments at 3T. The first experiment (experiment I) evaluated the variations of hemodynamic latency between voxels within the visual cortex and their relationship with the contrast-to-noise ratio (CNR) for GRE, spin echo (SE) and diffusion-weighted SE (DWSE). The second experiment (experiment II) used delayed visual stimuli between two hemifields (delay time = 0, 250 and 500 ms) to assess the temporal resolving power of three acquisition conditions: GRE with TR = 1000 ms (GRETR1000), GRE with TR = 500 ms (GRETR500) and SE with TR = 1000 ms (SETR1000). The results of experiment I showed the earliest latency with DWSE (1.97 ± 0.33 ms), followed by SE (2.44 ± 0.31 ms) and then GRE (2.84 ± 0.72 ms), with significant differences found between DWSE and the other two (p < 0.05). In general, latency variations decreased as CNR increased for all three techniques. However, similar variations were found between GRE and SE even when the latter had lower CNR. For example, when averaging over 30 trials, the latency variations were 0.70 ± 0.05 vs. 0.64 ± 0.14 and the CNRs were 10.54 ± 2.21 vs. 7.42 ± 0.34 (p < 0.05) for GRE and SE, respectively. For experiment II, significant correlations were found between measured and preset stimulus delays for subject-averaged data obtained under all three conditions (r² = 0.992, 0.990 and 0.958 for GRETR1000, GRETR500 and SETR1000, respectively). The inter-subject variation of the measured delay was greatest with GRETR1000 (89–319 ms), followed by GRETR500 (120–260 ms), and smallest with SETR1000 (71–152 ms). In summary, BOLD responses obtained from GRE exhibited greater CNR without compromised latency variation in the visual cortex. SE was potentially capable of improving latency estimation; however, no significant advantage was found, owing to its inferior sensitivity at 3T.
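The abstract does not spell out the latency estimator itself; one common way to recover a sub-second delay between two response time courses sampled at the TR is to interpolate onto a finer grid and locate the peak of the cross-correlation. A sketch on synthetic curves (the response shape and all numbers are assumptions, not the thesis data or method):

```python
# Illustrative only: synthetic responses with a crude gamma-shaped profile.
import numpy as np

TR = 1.0                                   # s, as in the TR = 1000 ms conditions
t = np.arange(0.0, 30.0, TR)

def response(t, delay):                    # toy hemodynamic-like response
    tt = np.clip(t - delay, 0.0, None)
    return tt ** 5 * np.exp(-tt)

a = response(t, 0.0)                       # reference hemifield
b = response(t, 0.25)                      # hemifield delayed by 250 ms

# Interpolate both onto a 10 ms grid so lags finer than one TR can be tested.
fine = np.arange(0.0, 30.0, 0.01)
a_f, b_f = np.interp(fine, t, a), np.interp(fine, t, b)

lags = np.arange(-200, 201)                # +/- 2 s in 10 ms steps
corr = [np.corrcoef(a_f[200:2800], b_f[200 + k:2800 + k])[0, 1] for k in lags]
print("estimated delay: %d ms" % (lags[int(np.argmax(corr))] * 10))
```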
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Liu, Tang-Hao y 劉唐豪. "Study on the modification and the morphological variation of indanthrone via latent pigment technology". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/45627412498988300591.

Texto completo
Resumen
Doctoral dissertation
National Chung Hsing University
Department of Chemical Engineering
Academic year: 98
The latent pigment technology starts from an appropriate pigment precursor, which has to be soluble or molecularly dispersible, like a dye, in the polymer; after a subsequent physical or chemical treatment it converts in situ to the pigment form in the application medium. It is soluble and easily dispersed in the application medium without auxiliary substances such as dispersing agents and surfactants, and without any time- and energy-consuming treatment, so that it remains stable across different applications. The latent pigment BOC-indanthrone is synthesized by replacing the hydrogen atom in the NH group of the blue high-performance pigment indanthrone with a compound containing the t-butyloxycarbonyl (t-BOC) group. It is completely soluble in the organic solvent and the application medium. We can obtain regenerated indanthrone pigment with morphology and particle size different from the parent pigment by thermolysis or acidolysis of BOC-indanthrone in an organic solvent or polymeric film. X-ray diffraction (XRD) results revealed that the crystal phases of the regenerated and parent indanthrone pigments are the same. The results indicate that the morphology of the regenerated pigment converted from BOC-indanthrone by thermolysis in NMP, DMF, PGMEA, DMSO and cyclohexanone is the same as that of the parent pigment (slate-like and flat). The morphology of the regenerated pigment obtained from BOC-indanthrone by acidolysis in an organic solvent depends on the type and amount of the acid source. When BOC-indanthrone was converted to regenerated indanthrone pigment through acidolysis by CD-1012 (a photoacid generator, PAG) in an organic solvent, the morphology of the regenerated pigment changed from a cubic to a spherical form, because the antimony (Sb) ion resulting from the photolysis of CD-1012 influenced the aggregation of indanthrone molecules. The morphology of the regenerated pigment obtained with hydrochloric acid changed from cubic to bar-like depending on the amount of hydrochloric acid. The results reveal that high-temperature, long-term thermolysis (180 °C for 180 min) is necessary to convert BOC-indanthrone into regenerated pigment in the polymeric film, which may damage the application medium. In the presence of a trace of acid, the thermolysis temperature and reaction time for the conversion of BOC-indanthrone to regenerated pigment can be reduced, which extends the application field of the latent pigment. The morphology of the regenerated pigments in the polymeric film was cubic whether or not PAG was present. The regenerated pigment converted from the latent pigment by thermal treatment after acidolysis exhibits excellent dispersion and distribution in the photo-polymeric film, and a resolution of 30 μm in line width can be obtained with a photoresist containing indanthrone converted from BOC-indanthrone by thermal treatment following acidolysis in the presence of PAG at 130 °C.
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Rigden, Angela Jean. "Sources of variation in multi-decadal water fluxes inferred from weather station data". Thesis, 2017. https://hdl.handle.net/2144/27166.

Texto completo
Resumen
Terrestrial evapotranspiration (ET) is a significant component of the energy and water balances at the land surface. However, direct, continuous measurements of ET are spatially limited and only available since the 1990s. Due to this lack of observations, detecting and attributing long-term regional trends in ET remains difficult. This dissertation aims to alleviate the data limitation and detect long-term trends by developing a method to infer ET from data collected at common weather stations, which are spatially and temporally abundant. The methodology used to infer ET from historical meteorological data is based on an emergent relation between the land surface and atmospheric boundary layer. We refer to this methodology as the Evapotranspiration from Relative Humidity at Equilibrium method, or the “ETRHEQ method”. In the first section of this dissertation, we develop the ETRHEQ method for use at common weather stations and demonstrate the utility of the method at twenty eddy covariance sites spanning a wide range of climate and plant functional types. Next, we apply the ETRHEQ method at historical weather stations across the continental U.S. and show that ET estimates obtained via the ETRHEQ method compare well with watershed scale ET, as well as ET estimates from land surface models. From 1961 to 1997, we find negligible or increasing trends in summertime ET over the central U.S. and the west coast and negative trends in the eastern and western U.S. From 1998 to 2014, we find a sharp decline in summertime ET across the entire U.S. We show that this decline is consistent with decreasing transpiration associated with declines in humidity. Lastly, we assess the sensitivity of ET to perturbations in soil moisture and humidity anticipated with climate change. We demonstrate that the response of ET to changing humidity and soil moisture is strongly dependent on the biological and hydrological state of the surface, particularly the degree of water stress and vegetation fraction. In total, this dissertation demonstrates the utility of the ETRHEQ method as a means to estimate ET from weather station data and highlights the critical role of vegetation in modulating ET variability.
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Shen, Sheng-Yao y 沈聖堯. "Improving Variational Auto-Encoder Based Neural Topic Model with Sparse Latent Concept Layer". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/jg7gu7.

Texto completo
Resumen
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
Academic year: 105
In this thesis, the primary contributions are a simple variational auto-encoder based topic model and effective topic-word selection criteria. By decomposing the topic-word probability matrix into the product of a topic matrix and a word matrix, we introduce sparse latent concepts (SLC) as the dimensions of the semantic space in which the topic and word vectors live, improve the model based on the idea that a topic is represented by only a few latent concepts, and select topic words by the semantic similarity between topic and word vectors. In the experiments, the SLC-based model outperforms the non-SLC-based model in terms of average topic coherence.
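A minimal sketch of the selection criterion this abstract describes, with random stand-in matrices (the dimensions, sparsity level and similarity measure are assumptions, not the thesis settings): sparsify each topic vector down to a few latent concepts, then rank words by cosine similarity between topic and word vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics, vocab, K = 5, 1000, 32           # K latent concepts (assumed size)

T = np.abs(rng.normal(size=(n_topics, K))) # topic vectors over latent concepts
W = np.abs(rng.normal(size=(vocab, K)))    # word vectors over latent concepts

# Keep only each topic's k strongest latent concepts (the "sparse" in SLC).
k_keep = 4
weakest = np.argsort(T, axis=1)[:, :-k_keep]
np.put_along_axis(T, weakest, 0.0, axis=1)

# Rank words for each topic by cosine similarity of topic and word vectors.
Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
top_words = np.argsort(Tn @ Wn.T, axis=1)[:, ::-1][:, :10]
print(top_words.shape)                     # (5, 10): ten word indices per topic
```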
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Lee, Jaron Jia Rong. "A Variational Bayes Approach to Clustered Latent Preference Models for Directed Network Data". Thesis, 2016. http://hdl.handle.net/1885/116971.

Texto completo
Resumen
Variational Bayes (VB) refers to a framework used to make fast deterministic approximations to the posterior density for Bayesian statistical inference. Traditionally, it has competed with Markov Chain Monte Carlo (MCMC) methods, a stochastic method which is asymptotically correct but computationally expensive. We derive the VB approximation to the Directed Clustered Latent Preference Network Model, which is inspired by ideas from Hoff et al. (2002); Handcock et al. (2007); Ward and Hoff (2007); Salter-Townshend and Murphy (2013); Krivitsky and Handcock (2008). The model handles binary-valued or continuous directed network data, and incorporates Gaussian mixture models over the separate latent sending and receiving preference spaces of each actor. We apply the model to simulated and real datasets to evaluate its performance against existing MCMC methods such as the Gibbs sampler. We discover new insights in the well-studied Sampson’s Monks dataset (Sampson, 1968), as well as confirm existing results with the Correlates of War International Trade dataset (Barbieri and Keshk, 2012). We conclude by discussing unresolved issues, potential solutions, and areas of future work.
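As a generic illustration of the VB idea the abstract contrasts with MCMC (a textbook example, not the clustered latent preference model itself): coordinate-ascent updates for a factorized approximation q(mu)q(tau) to the posterior over a Gaussian's mean and precision.

```python
# Generic CAVI sketch for x_i ~ N(mu, 1/tau) with conjugate priors
# mu ~ N(mu0, 1/(lam0*tau)) and tau ~ Gamma(a0, b0); not the thesis model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, size=500)           # data with unknown mean/precision
N, xbar = x.size, x.mean()

mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0       # prior hyperparameters
E_tau = a0 / b0                              # initialise E[tau]

for _ in range(50):                          # alternate the two factor updates
    # q(mu) = Normal(mu_N, 1/lam_N)
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    var_mu = 1.0 / lam_N
    # q(tau) = Gamma(a_N, b_N), expectations taken under the current q(mu)
    a_N = a0 + 0.5 * (N + 1)
    b_N = b0 + 0.5 * (np.sum((x - mu_N) ** 2) + N * var_mu
                      + lam0 * ((mu_N - mu0) ** 2 + var_mu))
    E_tau = a_N / b_N

print("posterior mean of mu ~= %.3f, of tau ~= %.3f" % (mu_N, E_tau))
# Deterministic and fast: no sampling, unlike a Gibbs sampler for the same model.
```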
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Han, Shaobo. "Bayesian Learning with Dependency Structures via Latent Factors, Mixtures, and Copulas". Diss., 2016. http://hdl.handle.net/10161/12828.

Texto completo
Resumen

Bayesian methods offer a flexible and convenient probabilistic learning framework to extract interpretable knowledge from complex and structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models through hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized approximation to the posterior and provide generic applicability to a broad class of probabilistic models with minimal model-specific derivations. In the second part of this dissertation, we design probabilistic graphical models to accommodate multimodal data, describe dynamical behaviors and account for task heterogeneity. In particular, the sparse latent factor model is able to reveal common low-dimensional structures from high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.


Los estilos APA, Harvard, Vancouver, ISO, etc.