Dissertations / Theses on the topic 'Mixed precision'

To see the other types of publications on this topic, follow the link: Mixed precision.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 19 dissertations / theses for your research on the topic 'Mixed precision.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Omland, Steffen [Verfasser]. "Mixed Precision Multilevel Monte Carlo Algorithms for Reconfigurable Computing Systems / Steffen Omland." München : Verlag Dr. Hut, 2016. http://d-nb.info/1113336447/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

McEntee, Peter John. "The integration and validation of precision management tools in mixed farming systems." Thesis, Curtin University, 2016. http://hdl.handle.net/20.500.11937/54060.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In mixed farming systems up to 40% of farm area is in pasture, yet little is known about sub-paddock spatial variability/stability during a pasture phase. Precision agriculture technologies were used to incorporate pasture-phase spatial variability into mixed farming precision management. Variations in paddock productivity over time were correlated between the crop and pasture phases, allowing site-specific management to be adopted in both crop and pasture phases. However, the relationship was weaker in drought years.
3

Steffy, Daniel E. "Topics in exact precision mathematical programming." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39639.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The focus of this dissertation is the advancement of theory and computation related to exact precision mathematical programming. Optimization software based on floating-point arithmetic can return suboptimal or incorrect results because of round-off errors or the use of numerical tolerances. Exact or correct results are necessary for some applications. Implementing software entirely in rational arithmetic can be prohibitively slow. A viable alternative is the use of hybrid methods that use fast numerical computation to obtain approximate results that are then verified or corrected with safe or exact computation. We study fast methods for sparse exact rational linear algebra, which arises as a bottleneck when solving linear programming problems exactly. Output-sensitive methods for exact linear algebra are studied. Finally, a new method for computing valid linear programming bounds is introduced and proven effective as a subroutine for solving mixed-integer linear programming problems exactly. Extensive computational results are presented for each topic.
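The hybrid strategy described above (a fast floating-point computation whose result is then verified with exact arithmetic) can be illustrated with a toy scalar example using Python's `fractions` module. This is only an illustrative sketch of the idea, not the dissertation's linear programming machinery; the function names and tolerance are invented for the example.

```python
from fractions import Fraction

def solve_float(a, b):
    """Fast but inexact: solve a*x = b in floating-point arithmetic."""
    return b / a

def verify_exact(a, b, x, tol=Fraction(1, 10**12)):
    """Safe step: check the floating-point answer with exact rational
    arithmetic, so round-off cannot produce a false 'verified'."""
    residual = abs(Fraction(a) * Fraction(x) - Fraction(b))
    return residual <= tol

a, b = 3.0, 1.0
x = solve_float(a, b)       # approximate answer, subject to round-off
ok = verify_exact(a, b, x)  # exact residual check in rationals
```

The point of the pattern is that the expensive exact arithmetic is only used to check (or correct) a cheap approximate result, rather than to carry out the whole computation.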
4

Gerest, Matthieu. "Using Block Low-Rank compression in mixed precision for sparse direct linear solvers." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In order to solve large sparse linear systems, one may want to use a direct method, which is numerically robust but rather costly in terms of both memory consumption and computation time. The multifrontal method belongs to this class of algorithms, and one of its high-performance parallel implementations is the solver MUMPS. One of the functionalities of MUMPS is Block Low-Rank (BLR) matrix compression, which improves its performance. In this thesis, we present several new techniques aiming at further improving the performance of dense and sparse direct solvers on top of BLR compression. In particular, we propose a new variant of BLR compression in which several floating-point formats are used simultaneously (mixed precision). Our approach is based on an error analysis, and it first allows us to reduce the estimated cost of an LU factorization of a dense matrix without significantly affecting the error. Second, we adapt these algorithms to the multifrontal method. A first implementation uses our mixed-precision BLR compression as a storage format only, thus reducing the memory footprint of MUMPS. A second implementation combines these memory gains with time reductions in the triangular solution phase by switching computations to low precision. However, we notice performance issues related to BLR in this phase when the system has many right-hand sides. Therefore, we propose new BLR variants of triangular solution that improve data locality and reduce data movements, as highlighted by a communication volume analysis. We implement our algorithms within a simplified prototype and within the solver MUMPS. In both cases, we obtain time gains.
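The core idea of mixed-precision low-rank compression, storing the dominant components of a low-rank approximation in a higher precision than the small ones, can be sketched with NumPy. This is a minimal illustration of the principle only, not the BLR format or the error analysis of the thesis; the thresholds and function names are assumptions made for the example.

```python
import numpy as np

def mixed_precision_lowrank(A, tol=1e-8, split=1e-3):
    """Compress A via truncated SVD. Components with singular values
    above split * s[0] are stored in double precision (float64),
    smaller retained components in single precision (float32), and
    components below tol * s[0] are dropped entirely."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    hi = keep & (s > split * s[0])   # dominant components: float64
    lo = keep & ~hi                  # small components: float32
    parts = []
    for mask, dtype in ((hi, np.float64), (lo, np.float32)):
        if mask.any():
            parts.append(((U[:, mask] * s[mask]).astype(dtype),
                          Vt[mask].astype(dtype)))
    return parts

def reconstruct(parts, shape):
    """Sum the stored factors back into a dense matrix."""
    B = np.zeros(shape)
    for Us, Vt in parts:
        B += Us.astype(np.float64) @ Vt.astype(np.float64)
    return B

# A matrix with known, geometrically decaying singular values 2^-k
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((40, 16)))
Q2, _ = np.linalg.qr(rng.standard_normal((40, 16)))
A = (Q1 * 2.0 ** -np.arange(16)) @ Q2.T

parts = mixed_precision_lowrank(A)
err = np.linalg.norm(A - reconstruct(parts, A.shape)) / np.linalg.norm(A)
```

Because the small singular components contribute little to the overall norm, storing them in single precision barely affects the reconstruction error while halving their storage cost.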
5

Wolfram, Heiko. "Model Building, Control Design and Practical Implementation of a High Precision, High Dynamical MEMS Acceleration Sensor." Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200501921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper presents the whole process of building up a high-precision, highly dynamic MEMS acceleration sensor. The first samples have achieved a resolution of better than 500 micro-g and a bandwidth of more than 200 Hz. The sensor fabrication technology is briefly covered in the paper. A theoretical model is built from the physical principles of the complete sensor system, consisting of the MEMS sensor, the charge amplifier and the PWM driver for the sensor element. The mathematical modeling also covers problems during startup. A reduced-order model of the entire system is used to design a robust control with the mixed-sensitivity H-infinity approach. The control design is constrained by an unstable pole, imposed by the electrostatic field, and by time delay, caused by A/D-D/A conversion delay and DSP computing time. The theoretical model might be inaccurate or incomplete, because the parameters used in the theoretical model building vary from sample to sample or might not be known. A new identification scheme for open- or closed-loop operation is deployed to obtain the parameters of the mechanical system and the voltage-dependent gains directly from the samples. The focus of this paper is the complete system development and identification process, including practical tests in a DSP TI-TMS320C3000 environment.
6

Bustamante, Danilo. "High-Precision, Mixed-Signal Mismatch Measurement of Metal-Oxide-Metal Capacitors and a 13-GHz 5-bit 360-Degree Phase Shifter." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/9240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents a high-precision, mixed-signal mismatch measurement technique for metal-oxide-metal (MoM) capacitors as well as the design of a 13-GHz 5-bit 360-degree phase shifter. The proposed measurement technique incorporates a switched-capacitor op amp within the measurement circuit to significantly improve measurement precision while relaxing the resolution requirement on the backend analog-to-digital converter (ADC). The technique is also robust against multiple types of errors. A detailed analysis is presented to quantify the sensitivity improvement of the proposed technique over the conventional one. In addition, this thesis proposes a multiplexing technique to measure a large number of capacitors in a single chip and a new layout to improve matching. A prototype fabricated in 180 nm CMOS technology demonstrates the ability to sense a capacitor mismatch standard deviation as low as 0.045% with excellent repeatability, all without the need for a high-resolution ADC. The 13-GHz 5-bit 360-degree phase shifter consists of two stages. The first stage utilizes a delay line for a 4-bit 180-degree phase shift; the second stage provides a 1-bit 180-degree phase shift. The phase shifter includes gain tuning to keep the gain variation below 1 dB. The design has been fabricated in 180 nm CMOS technology, and measurement results show a complete 360° phase shift with an average step size of 10.7° at 13 GHz. After calibration, the phase shifter presented an output gain S21 of 0.5 dB with a gain variation of less than 1 dB across all codes at 13 GHz. The remaining S-parameter testing showed an S22 and S11 below -11 dB and an S12 below -49 dB at 13 GHz.
7

Geifman, Nophar, Richard E. Kennedy, Lon S. Schneider, Iain Buchan, and Roberta Diaz Brinton. "Data-driven identification of endophenotypes of Alzheimer’s disease progression: implications for clinical trials and therapeutic interventions." BIOMED CENTRAL LTD, 2018. http://hdl.handle.net/10150/627086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: Given the complex and progressive nature of Alzheimer's disease (AD), a precision medicine approach for diagnosis and treatment requires the identification of patient subgroups with biomedically distinct and actionable phenotype definitions. Methods: Longitudinal patient-level data for 1160 AD patients receiving placebo or no treatment with a follow-up of up to 18 months were extracted from an integrated clinical trials dataset. We used latent class mixed modelling (LCMM) to identify patient subgroups demonstrating distinct patterns of change over time in disease severity, as measured by the Alzheimer's Disease Assessment Scale-cognitive subscale score. The optimal number of subgroups (classes) was selected as the model with the lowest Bayesian Information Criterion. Other patient-level variables were used to define these subgroups' distinguishing characteristics and to investigate the interactions between patient characteristics and patterns of disease progression. Results: The LCMM resulted in three distinct subgroups of patients, with 10.3% in Class 1, 76.5% in Class 2 and 13.2% in Class 3. While all classes demonstrated some degree of cognitive decline, each demonstrated a different pattern of change in cognitive scores, potentially reflecting different subtypes of AD patients. Class 1 represents rapid decliners with a steep decline in cognition over time, who tended to be younger and better educated. Class 2 represents slow decliners, while Class 3 represents severely impaired slow decliners: patients with a similar rate of decline to Class 2 but with worse baseline cognitive scores. Class 2 demonstrated a significantly higher proportion of patients with a history of statin use; Class 3 showed lower levels of blood monocytes and serum calcium, and higher blood glucose levels.
Conclusions: Our results, 'learned' from clinical data, indicate the existence of at least three subgroups of Alzheimer's patients, each demonstrating a different trajectory of disease progression. This hypothesis-generating approach has detected distinct AD subgroups that may prove to be discrete endophenotypes linked to specific aetiologies. These findings could enable stratification within a clinical trial or study context, which may help identify new targets for intervention and guide better care.
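The model selection rule used above, picking the number of latent classes that minimizes the Bayesian Information Criterion, is easy to state concretely. The sketch below shows only the BIC comparison step; the log-likelihoods and parameter counts for the candidate one-, two- and three-class fits are hypothetical numbers invented for illustration (the actual fits in the study come from LCMM software), and only the sample size of 1160 is taken from the abstract.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# hypothetical (log-likelihood, number of parameters) per class count
fits = {1: (-5200.0, 4), 2: (-5100.0, 8), 3: (-5080.0, 12)}

scores = {k: bic(ll, p, 1160) for k, (ll, p) in fits.items()}
best = min(scores, key=scores.get)  # class count with lowest BIC
```

BIC trades goodness of fit against model size: adding classes always raises the likelihood, and the `n_params * log(n_obs)` penalty decides when the improvement stops being worth the extra parameters.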
8

Di, Pace Brian S. "Site- and Location-Adjusted Approaches to Adaptive Allocation Clinical Trial Designs." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Response-Adaptive (RA) designs are used to adaptively allocate patients in clinical trials. These methods have been generalized to include Covariate-Adjusted Response-Adaptive (CARA) designs, which adjust treatment assignments for a set of covariates while maintaining features of the RA designs. Challenges may arise in multi-center trials if differential treatment responses and/or effects among sites exist. We propose Site-Adjusted Response-Adaptive (SARA) approaches to account for inter-center variability in treatment response and/or effectiveness, including either a fixed site effect or both random site and treatment-by-site interaction effects to calculate conditional probabilities. These success probabilities are used to update assignment probabilities for allocating patients between treatment groups as subjects accrue. Both frequentist and Bayesian models are considered. Treatment differences could also be attributed to differences in social determinants of health (SDH) that often manifest, especially if unmeasured, as spatial heterogeneity amongst the patient population. In these cases, patient residential location can be used as a proxy for these difficult-to-measure SDH. We propose the Location-Adjusted Response-Adaptive (LARA) approach to account for location-based variability in both treatment response and/or effectiveness. A Bayesian low-rank kriging model will interpolate spatially-varying joint treatment random effects to calculate the conditional probabilities of success, utilizing patient outcomes, treatment assignments and residential information. We compare the proposed methods with several existing allocation strategies that ignore site for a variety of scenarios where treatment success probabilities vary.
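The basic response-adaptive mechanism, updating the probability that the next patient receives a given arm from the successes observed so far, can be sketched in a few lines. This is a minimal Beta-posterior-mean rule for illustration only, not the SARA or LARA models of the dissertation; the `site_shift` argument is a hypothetical stand-in for a site-level adjustment.

```python
def allocation_prob(successes, failures, site_shift=0.0):
    """Illustrative response-adaptive rule: estimate each arm's success
    probability with a Beta(1,1) posterior mean, optionally shift arm A's
    estimate by a hypothetical site effect, and allocate proportionally."""
    pa = (successes["A"] + 1) / (successes["A"] + failures["A"] + 2)
    pb = (successes["B"] + 1) / (successes["B"] + failures["B"] + 2)
    pa = min(max(pa + site_shift, 0.0), 1.0)
    return pa / (pa + pb)   # probability the next patient gets arm A

# arm A performing better so far, so it is favored in allocation
successes = {"A": 12, "B": 5}
failures = {"A": 3, "B": 10}
p_a = allocation_prob(successes, failures)
```

As outcomes accrue, the allocation probability drifts toward the better-performing arm while never becoming deterministic, which is the defining feature of RA designs.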
9

Zulian, Marine. "Méthodes de sélection et de validation de modèles à effets mixtes pour la médecine génomique." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The study of complex biological phenomena such as human pathophysiology, the pharmacokinetics of a drug or its pharmacodynamics can be enriched by modelling and simulation approaches. Technological advances in genetics allow the establishment of data sets from larger and more heterogeneous populations. The challenge is then to develop tools that integrate genomic and phenotypic data to explain inter-individual variability. In this thesis, we develop methods that take into account the complexity of biological data and the complexity of the underlying processes. Curation steps for the genomic covariates allow us to restrict the number of potential covariates and to limit correlations between covariates. We propose an algorithm for selecting covariates in a mixed-effects model whose structure is constrained by the physiological process under study. In particular, we illustrate the developed methods on two medical applications: real hypertension data and simulated tramadol (opioid) metabolism data.
10

Cardoso, Adilson Silva. "Design and characterization of BiCMOS mixed-signal circuits and devices for extreme environment applications." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
State-of-the-art SiGe BiCMOS technologies leverage the maturity of deep-submicron silicon CMOS processing with bandgap-engineered SiGe HBTs in a single platform that is suitable for a wide variety of high performance and highly-integrated applications (e.g., system-on-chip (SOC), system-in-package (SiP)). Due to their bandgap-engineered base, SiGe HBTs are also naturally suited for cryogenic electronics and have the potential to replace the costly de facto technologies of choice (e.g., Gallium-Arsenide (GaAs) and Indium-Phosphide (InP)) in many cryogenic applications such as radio astronomy. This work investigates the response of mixed-signal circuits (both RF and analog circuits) when operating in extreme environments, in particular, at cryogenic temperatures and in radiation-rich environments. The ultimate goal of this work is to attempt to fill the existing gap in knowledge on the cryogenic and radiation response (both single event transients (SETs) and total ionization dose (TID)) of specific RF and analog circuit blocks (i.e., RF switches and voltage references). The design approach for different RF switch topologies and voltage references circuits are presented. Standalone Field Effect Transistors (FET) and SiGe HBTs test structures were also characterized and the results are provided to aid in the analysis and understanding of the underlying mechanisms that impact the circuits' response. Radiation mitigation strategies to counterbalance the damaging effects are investigated. A comprehensive study on the impact of cryogenic temperatures on the RF linearity of SiGe HBTs fabricated in a new 4th-generation, 90 nm SiGe BiCMOS technology is also presented.
11

Rappaport, Ari. "Estimations d'erreurs a posteriori et adaptivité en approximation numérique des EDPs : régularisation, linéarisation, discrétisation et précision en virgule flottante." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis concerns a posteriori error analysis and adaptive algorithms to approximately solve nonlinear partial differential equations (PDEs). We consider PDEs of both elliptic and degenerate parabolic type. We also study adaptivity in the floating-point precision of a multigrid solver for systems of linear algebraic equations. In the first two chapters, we consider elliptic PDEs arising from an energy minimization problem. The a posteriori analysis therein is based directly on the difference of the energies of the true and approximate solutions. The nonlinear operators of the elliptic PDEs we consider are strongly monotone and Lipschitz continuous. In this context, an important quantity is the "strength of the nonlinearity" given by the ratio L/α, where L is the Lipschitz continuity constant and α is the (strong) monotonicity constant. In Chapter 1 we study an adaptive algorithm comprising adaptive regularization, discretization, and linearization. The algorithm is applied to an elliptic PDE with a nonsmooth nonlinearity. We derive a guaranteed upper bound based on a primal-dual gap estimator. Moreover, we isolate components of the error corresponding to regularization, discretization, and linearization that lead to adaptive stopping criteria. We prove that the component estimators converge to zero in the respective limits of the regularization, discretization, and linearization steps of the algorithm. We present numerical results demonstrating the effectiveness of the algorithm. We also present numerical evidence of robustness with respect to the aforementioned ratio L/α, which motivates the work in the second chapter. In Chapter 2, we consider the question of efficiency and robustness of the primal-dual gap error estimator. 
We in particular consider an augmented energy difference, for which we establish independence from the ratio L/α (robustness) for the Zarantonello linearization and only patch-local and computable dependence for other linearization methods, including the Newton linearization. Numerical results are presented to substantiate the theoretical developments. In Chapter 3 we turn our attention to the problem of adaptive regularization for the Richards equation. The Richards equation appears in the context of porous media modeling. It contains nonsmooth nonlinearities, which are amenable to the same approach we adopt in Chapter 1. We develop estimators and an adaptive algorithm where the estimators are inspired by estimators based on the dual norm of the residual. We test our algorithm on a series of numerical examples coming from the literature. In Chapter 4 we provide details for an efficient implementation of the equilibrated flux, a crucial ingredient in computing the error estimators discussed so far. The implementation relies on the multi-threading paradigm in the Julia programming language. An additional loop is introduced to avoid memory allocations, which is crucial to obtain parallel scaling. In Chapter 5 we consider a mixed-precision iterative refinement algorithm with a geometric multigrid method as the inner solver. The multigrid solver inherently provides an estimator of the algebraic error, which we use in the stopping criterion for the iterative refinement. We present a benchmark to demonstrate the speedup obtained by using single-precision representations of the sparse matrices involved. We also design an adaptive algorithm that uses the aforementioned estimator to identify when iterative refinement in single precision fails for overly ill-conditioned problems, in which case the algorithm is able to recover and solve the problem fully in double precision.
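The mixed-precision iterative refinement pattern mentioned in Chapter 5, solving in low precision while accumulating residuals and the solution in high precision, can be sketched with a dense direct solve standing in for the multigrid inner solver. This is an illustrative sketch under that substitution, not the thesis's algorithm; for simplicity it also re-solves the float32 system each iteration, whereas a real implementation would reuse one factorization.

```python
import numpy as np

def iterative_refinement(A, b, tol=1e-12, max_iter=20):
    """Mixed-precision iterative refinement: inner solves in single
    precision (float32), residuals and solution updates in double."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x  # residual computed in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction
        x = x + d.astype(np.float64)
    return x

# a well-conditioned test system, so refinement converges quickly
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)
xtrue = rng.standard_normal(100)
b = A @ xtrue

x = iterative_refinement(A, b)
err = np.linalg.norm(x - xtrue) / np.linalg.norm(xtrue)
```

Each single-precision solve contributes only limited accuracy, but because the residual is formed in double precision the iteration drives the error down to double-precision levels, provided the problem is not too ill-conditioned, which is exactly the failure mode the thesis's adaptive algorithm detects.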
12

Gulbinas, Gediminas. "Šiuolaikiniais maišytuvais gaminamo asfaltbetonio mišinių kokybės gerinimo galimybių analizė." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2005. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2005~D_20050612_224508-61143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper covers the analysis of the main asphalt concrete production technological processes. It includes the calculation of the dosed materials for two asphalt concrete mixes, as well as the precision and stability of the temperature control required by the technological process. The calculation is based on the production data printed by the mixer's printer. These data were divided into four groups: shift, week, month, season. The paper includes statistical indexes that reflect the technological process of the asphalt concrete mixer: the arithmetic averages of the real doses, temperatures and mixing times, the mean square deviations Sq, the coefficients of variation V, etc. The methods described in this paper can be used to improve the normative documents and the quality control of the produced asphalt concrete mixture. The paper consists of 143 pages and 129 drawings (some of them in the appendix). The list of literature includes 29 bibliographic entries.
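The statistical indexes named above (arithmetic average, mean square deviation Sq, and coefficient of variation V) are standard quality-control quantities and can be computed directly. The dose values below are hypothetical numbers for illustration, not data from the thesis.

```python
import math

def quality_stats(values):
    """Arithmetic mean, sample mean square deviation (Sq) and
    coefficient of variation (V, in percent) for a batch of readings."""
    n = len(values)
    mean = sum(values) / n
    sq = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    v = 100.0 * sq / mean
    return mean, sq, v

# hypothetical filler doses (kg) as recorded by the mixer's printer
mean, sq, v = quality_stats([502.1, 498.7, 501.3, 499.9, 500.4])
```

A small coefficient of variation (well under a few percent here) indicates a stable dosing process, which is the kind of criterion the normative documents can be checked against.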
13

Duarte, Cláudia Filipa Pires. "Essays on mixed-frequency data : forecasting and unit root testing." Doctoral thesis, Instituto Superior de Economia e Gestão, 2016. http://hdl.handle.net/10400.5/11662.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Doctorate in Economics
Over the last decades, researchers have had access to increasingly comprehensive datasets, which are released on a more frequent and timely basis. Nevertheless, some variables, namely some key macroeconomic indicators, are released with a significant time delay and at low frequencies. This situation raises the question of how to deal with series released at different, mixed time frequencies. Over the years and for different purposes, several techniques have been put forward. This thesis focuses on a particular technique: the MI(xed) DA(ta) S(ampling) framework, proposed by Ghysels et al. (2004). In Chapter 1 I use MIDAS for forecasting euro area GDP growth using a small set of selected indicators in an environment with different sampling frequencies and asynchronous releases of information. I run a horse race between a wide set of MIDAS regressions and evaluate their performance, in terms of root mean squared forecast error, against AR and quarterly bridge models. The issue of how to include autoregressive terms in MIDAS regressions is disentangled. Different combinations of variables, through forecast pooling and multi-variable regressions, and different time frequencies are also considered. The results obtained suggest that, in general, using MIDAS regressions contributes to increased forecast accuracy. In addition, I propose new unit root tests that exploit mixed-frequency information. Unit root tests typically suffer from low power in small samples. To overcome this shortcoming, tests exploiting information from stationary covariates have been proposed. I assess whether it is possible to improve the power performance of some of these tests by exploiting mixed-frequency data through the MIDAS approach.
In Chapter 2 I put forward a new class of mixed-frequency covariate-augmented Dickey-Fuller (DF) tests, extending the covariate-augmented DF test (CADF test) proposed by Hansen (1995) and its modified version proposed by Pesavento (2006), which is similar to the GLS generalisation of the univariate ADF test in Elliott et al. (1996). As an alternative to the CADF tests, Elliott and Jansson (2003) proposed a feasible point-optimal unit root test in the presence of deterministic components (EJ test hereafter), which extended the univariate results in Elliott et al. (1996). In Chapter 3 I go one step further and include mixed-frequency data in the EJ testing framework. Given that implementing the EJ test requires estimating VAR models, in order to plug mixed-frequency data into the test regression I propose an unconstrained, though parsimonious, stacked skip-sampled reduced-form VAR-MIDAS model, which is estimated using standard econometric techniques. The results from a Monte Carlo exercise indicate that mixed-frequency tests have better power performance than low-frequency tests. The gains are robust to the size of the sample, to the lag specification of the test regressions and to different combinations of time frequencies. Moreover, the EJ family of tests tends to have better power performance than the CADF family of tests, with either low- or mixed-frequency data. An empirical illustration using the US unemployment rate is presented.
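The MIDAS regressions discussed in these abstracts project a low-frequency variable on a parametrically weighted sum of high-frequency lags. As an illustration only (not code from the thesis), a minimal Python/NumPy sketch, assuming the common exponential Almon weighting and a simple profile grid search over the weight parameters:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag weights, normalised to sum to one."""
    k = np.arange(1, n_lags + 1)
    z = theta1 * k + theta2 * k**2
    w = np.exp(z - z.max())          # guard against overflow
    return w / w.sum()

def fit_midas(y, x_lags, grid):
    """Profile estimation: for each candidate (theta1, theta2) the weighted
    regressor is fixed, so (b0, b1) follow from OLS; keep the smallest SSR."""
    best = None
    for t1 in grid:
        for t2 in grid:
            w = exp_almon_weights(t1, t2, x_lags.shape[1])
            X = np.column_stack([np.ones(len(y)), x_lags @ w])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            ssr = float(np.sum((y - X @ beta) ** 2))
            if best is None or ssr < best[0]:
                best = (ssr, beta, (t1, t2))
    return best[1], best[2]

# Synthetic example: "quarterly" y driven by 9 "monthly" lags of x
rng = np.random.default_rng(0)
n_obs, n_lags = 200, 9
x_lags = rng.standard_normal((n_obs, n_lags))
y = 1.0 + 2.0 * (x_lags @ exp_almon_weights(0.2, -0.1, n_lags)) \
    + 0.1 * rng.standard_normal(n_obs)

beta_hat, theta_hat = fit_midas(y, x_lags, np.linspace(-0.5, 0.5, 21))
```

Because the weights are normalised to sum to one, the slope and the weighting parameters are separately identified, which is what the grid-search-plus-OLS profiling exploits.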
14

Huang, Jhih-Ming, and 黃志銘. "Inexact and Mixed Precision Eigenvalue Solvers on GPU." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/73593608140877647378.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Mathematics
Academic year 102 (2013–14)
The eigenvalue problem is one of the most crucial topics in engineering and science today. In practical applications the target matrix is usually large and sparse, so solving eigenvalue problems requires a huge amount of computation. High efficiency is a strong demand in practice, and High Performance Computing (HPC) therefore plays an important role in this topic. One important approach to higher performance is mixed precision design, which changes the operation precision during the computation without dropping the final accuracy. Single precision requires less memory storage and may yield a higher cache hit ratio, which can affect performance considerably; in addition, some numerical operations are faster in single precision than in double precision. Hence, if the original algorithm is accuracy insensitive, meaning it can lose some accuracy during the computation and still keep the same final accuracy, it is suitable to be redesigned as a mixed precision algorithm to enhance performance. The eigensolver we focus on is exactly of this type. The Shift-Invert Residual Arnoldi (SIRA) algorithm is a well-known eigenvalue solver consisting of an inner loop and an outer loop. The inner loop solves a linear system that provides the correction direction helping the outer loop find the desired eigenpair. The efficiency of SIRA relies on the solutions of the inner-loop linear systems; these systems can be solved to lower accuracy without downgrading the final accuracy of the target eigenvalues. By taking advantage of this algorithmic feature and the computational power of the GPU, we develop a mixed precision eigensolver in this research. We propose a method called the pocket method, which adaptively chooses double or single precision to solve the linear system and, while solving it, automatically adjusts the inner tolerance and the timing of exiting the inner loop.
The pocket method has the best performance in most of our experiments.
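The core idea exploited here, cheap low-precision inner solves corrected in full precision, can be illustrated with classical iterative refinement for a linear system (an illustrative sketch, not the thesis's pocket method or its GPU implementation):

```python
import numpy as np

def mixed_precision_solve(A, b, n_iter=5):
    """Iterative refinement: corrections computed from a float32 copy of A,
    residuals accumulated in float64."""
    A32 = A.astype(np.float32)
    x = np.zeros_like(b)
    for _ in range(n_iter):
        r = b - A @ x                                     # double-precision residual
        d = np.linalg.solve(A32, r.astype(np.float32))    # single-precision correction
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

For a well-conditioned system the double-precision residual steers the float32 corrections to a residual at double-precision level; a real implementation would factorise the float32 matrix once rather than refactorising inside the loop.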
15

Hsieh, Chen-Yuan, and 謝禎原. "HapticSphere: Physical Support To Enable Precision Touch Interaction in Mobile Mixed-Reality." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/8e9t49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
Academic year 106 (2017–18)
This work presents HapticSphere, a wearable spherical surface enabled by bridging a finger and the HMD with a passive string. Users perceive physical support at the finger when reaching to the surface defined by the string's extent. This physical support assists users in precise touch interaction in stationary and walking virtual- or mixed-reality contexts. We propose three methods of attaching the haptic string (directly on the head or on the body) and illustrate a novel single-step calibration algorithm that supports these configurations by estimating a grand haptic sphere once a head-coordinated touch interaction is established. Two user studies were conducted to validate our approach and to compare touch performance with physical support in sitting and walking situations in mobile mixed-reality scenarios. The results show that, in the walking condition, touch interaction with physical support significantly outperforms the visual-only condition. An extension at the end adds the string-based device to a commercial product, making it suitable for long-term tasks such as a VR desktop.
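The calibration step described above amounts to estimating a sphere (centre and radius) from sampled touch points. The paper's single-step algorithm is not reproduced here; the following is a generic linear least-squares sphere fit on synthetic touches (all names and numbers are illustrative assumptions):

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: |p - c|^2 = r^2 is rewritten as
    2 c . p + (r^2 - |c|^2) = |p|^2, linear in c and t = r^2 - |c|^2."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    y = np.sum(P**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    center, t = sol[:3], sol[3]
    radius = float(np.sqrt(t + center @ center))
    return center, radius

# Synthetic "calibration touches" scattered on a sphere around the head
rng = np.random.default_rng(2)
c_true, r_true = np.array([0.0, 1.5, 0.1]), 0.6   # centre (m), arm-length radius
dirs = rng.standard_normal((50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
touches = c_true + r_true * dirs + 0.002 * rng.standard_normal((50, 3))
c_hat, r_hat = fit_sphere(touches)
```

The change of variables makes the fit a single linear solve, so no iterative initialisation is needed, which matches the appeal of a single-step calibration.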
16

Su, Chia-Sheng, and 蘇家陞. "Development of a Precision Irrigation Model for a Mixed Paddy Rice and Upland Crops Field." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/45145685045495707315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Central University
Department of Civil Engineering
Academic year 104 (2015–16)
In Taiwan, irrigation accounts for the major share of agricultural water use. Recently, mixed paddy rice and upland crop fields have become more and more common, as farmers choose which crops to grow according to their individual preferences. Given that irrigation water distribution in Taiwan depends on manual work, and that the amounts of irrigation water and conveyance losses cannot be accurately calculated, supply and demand in the field lack coordination. This study applies a system dynamics model to establish an irrigation water management model for a mixed paddy rice and upland crop field in central Taiwan. The goal is to provide a precision irrigation practice that allocates irrigation water so as to minimize water loss and raise efficiency. Through precision irrigation for mixed paddy rice and upland crop fields, a substantial saving in irrigation water, i.e., 291 mm in terms of water depth, can be realized for one season of growing vegetables in 2016. In the case of the paddy field, 519 mm of water saving was estimated from a 30-day simulation of the proposed automatic control gate system during the second rice crop of 2015. Moreover, at the test site, the pumped groundwater was estimated at 81 mm in depth, which, compared to the simulated water requirement of 65 mm, indicates a possible water saving of 16 mm. The study of possible water savings for 9 scenarios provides information for managing mixed-crop farming in water-shortage periods.
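The accounting behind such a model can be sketched as a daily root-zone water balance in millimetres of depth. The following toy simulation (illustrative assumptions only, not the thesis's system dynamics model) compares a fixed irrigation schedule with a demand-driven one:

```python
def simulate_field_water(days, rainfall, et, irrigation,
                         storage0=50.0, capacity=120.0):
    """Daily water balance (mm): storage gains rainfall and irrigation,
    loses evapotranspiration; excess above capacity is lost as
    percolation/runoff."""
    storage, losses = storage0, 0.0
    history = []
    for d in range(days):
        storage += rainfall[d] + irrigation[d] - et[d]
        if storage > capacity:
            losses += storage - capacity
            storage = capacity
        storage = max(storage, 0.0)
        history.append(storage)
    return history, losses

# A dry month with constant crop demand
rain = [0.0] * 30
et = [5.0] * 30

# Fixed schedule: 12 mm every other day (over-applies relative to demand)
fixed = [12.0 if d % 2 == 0 else 0.0 for d in range(30)]
hist_fixed, loss_fixed = simulate_field_water(30, rain, et, fixed)

# Demand-driven ("precision") schedule: apply only the daily deficit
precise = [max(0.0, et[d] - rain[d]) for d in range(30)]
hist_p, loss_p = simulate_field_water(30, rain, et, precise)
saving = sum(fixed) - sum(precise)   # water saved by precision scheduling (mm)
```

Under these toy assumptions the demand-driven schedule keeps storage level while applying 30 mm less water over the month, the same kind of comparison the thesis makes at field scale.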
17

Buonocore, Luca. "Ultimate precision for the Drell-Yan process: mixed QCDxQED(EW) corrections, final state radiation and power suppressed contributions." Tesi di dottorato, 2020. http://www.fedoa.unina.it/13156/1/luca_buonocore_32.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The discovery of the Higgs boson at the Large Hadron Collider in 2012 represented a breakthrough in particle physics, providing a strong confirmation of the mechanism of Electro-Weak Symmetry Breaking, which is in turn responsible for the generation of elementary-particle masses. The Higgs discovery, however, was not followed by any evidence of physics Beyond the Standard Model, and it is difficult to reconcile our current description of the fundamental particles and their interactions with long-standing problems like neutrino masses, matter-antimatter asymmetry, the existence of dark matter and dark energy, and the hierarchy problem. The lack of new-physics signals has stimulated a new precision collider programme, made possible by advances on both the experimental and theoretical sides. Indeed, the precision target accuracy expected by the end of the planned LHC data taking in 2038 is at the (sub)percent level. For a meaningful comparison with experimental data, we need theoretical predictions with a similar level of accuracy. This translates into the necessity of computing higher-order terms in perturbation theory, known as radiative corrections in the language of Quantum Field Theory. At a hadronic collider such as the LHC, the effects due to the strong interaction (described by Quantum ChromoDynamics (QCD)) dominate. In the last decades a great effort has been devoted to computing QCD radiative corrections, and nowadays Next-to-Next-to-Leading Order (NNLO) computations represent the state of the art for many $2\to2$ processes. The production of a dilepton pair via the Drell-Yan mechanism has a special place in the precision phenomenology programme at the LHC for its importance in experimental calibrations and for the precise determination of important electroweak (EW) parameters such as the W mass. From the theoretical side, Drell-Yan is one of the most studied processes.
QCD corrections are known up to NNLO and in part at N$^3$LO, while EW corrections are known at NLO. At this level of accuracy, it becomes relevant to assess the relative importance of the mixed QCD-EW corrections. In this thesis, we set up a subtraction framework to compute the full set of mixed QCD-EW(QED) corrections to the Drell-Yan process at the differential level. We rely on the transverse-momentum resummation formalism to handle the genuine NNLO-type infrared divergences associated with both initial- and final-state radiation in the small transverse momentum limit, exploiting the corresponding results for heavy-quark pair production. In particular, we have to deal with massive leptons in the final state, as their mass acts as a regulator for final-state collinear divergences. This may challenge numerical stability, since the physical lepton masses are very small. We extensively study the radiation pattern of massive emitters, building a dedicated momentum mapping which smoothly approaches the massless limit. Furthermore, we study, for the first time, the leading power-suppressed contributions appearing at small transverse momenta, and we show that they are driven by final-state soft radiation. As a validation of our construction, we show results for both the inclusive cross section and the relevant differential distributions for the mixed QCD-QED corrections to the production of an on-shell Z boson.
18

Kubínová, Marie. "Numerické metody pro řešení diskrétních inverzních úloh." Doctoral thesis, 2018. http://www.nusl.cz/ntk/nusl-392433.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Title: Numerical Methods in Discrete Inverse Problems. Author: Marie Kubínová. Department: Department of Numerical Mathematics. Supervisor: RNDr. Iveta Hnětynková, Ph.D., Department of Numerical Mathematics. Abstract: Inverse problems represent a broad class of problems of reconstructing unknown quantities from measured data. A common characteristic of these problems is the high sensitivity of the solution to perturbations in the data. The aim of numerical methods is to approximate the solution in a computationally efficient way while suppressing the influence of inaccuracies in the data, referred to as noise, that are always present. Properties of noise and its behavior in regularization methods play a crucial role in the design and analysis of the methods. The thesis focuses on several aspects of the solution of discrete inverse problems, in particular: on the propagation of noise in iterative methods and its representation in the corresponding residuals, including the study of the influence of finite-precision computation; on estimating the noise level; and on solving problems with data polluted with noise coming from various sources. Keywords: discrete inverse problems, iterative solvers, noise estimation, mixed noise, finite-precision arithmetic
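A standard example of the noise sensitivity described in this abstract is a linear system with rapidly decaying singular values, where a naive solve amplifies the noise while regularization (here Tikhonov, as a generic illustration not taken from the thesis) suppresses it:

```python
import numpy as np

def tikhonov(A, b, lam):
    """min ||A x - b||^2 + lam^2 ||x||^2 via the regularised normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Toy discrete inverse problem with a rapidly decaying spectrum
rng = np.random.default_rng(3)
n = 64
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** (-0.15 * np.arange(n))          # singular values down to ~3e-10
A = Q @ np.diag(s) @ Q.T
x_true = np.sin(np.linspace(0.0, np.pi, n))
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # measured data with noise

x_naive = np.linalg.solve(A, b)   # noise blown up by the small singular values
x_reg = tikhonov(A, b, lam=1e-2)
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The choice of the regularization parameter `lam` is exactly where noise-level estimates, one of the topics of the thesis, come into play.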
19

BLAHOUT, Jaroslav. "Vliv jednotlivých komponent směsných krmných dávek u krmných míchacích vozů (bez vybírací frézy) na přesnost nakládek." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-381151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis focuses on feed mixing wagons, specifically on the impact of individual components of mixed feed rations on the loading accuracy of feed mixer wagons without pickup cutters. The result of this work is the author's own assessment of the qualitative characteristics of the individual components of mixed feed rations and of their influence on possible inaccuracies of the weighing equipment of these vehicles, and thus on the actual composition of the diet, together with the resulting measures to prevent such inaccuracies when preparing feed.

To the bibliography