
Dissertations / Theses on the topic 'RAGE method'


Consult the top 50 dissertations / theses for your research on the topic 'RAGE method.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lafon, Monique. "La nucleocapside du virus rabique : une nouvelle cible pour la reponse immunitaire et pour la therapie." Paris 7, 1987. http://www.theses.fr/1987PA077219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Barret, Julien. "Clonage, ingénierie et transfert de grands fragments de génome chez Bacillus subtilis." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0458.

Full text
Abstract:
L’ingénierie des génomes des micro-organismes est devenue un standard dans les biotechnologies microbiennes. En 2010, des technologies prometteuses de biologie de synthèse utilisant la levure comme plateforme pour l’assemblage et l’ingénierie de génomes synthétiques bactériens suivi de leur transplantation dans une cellule receveuse ont vu le jour. Ces technologies ont conduit à la création des premières cellules synthétiques et ouvert de nouvelles voies vers la construction de cellules aux propriétés biologiques entièrement contrôlées. Le transfert de ces outils à des micro-organismes d’intérêt industriel comme la bactérie Gram+ Bacillus subtilis (Bsu), modèle dans le secteur des biotechnologies, constituerait une avancée majeure. C’est précisément l’objectif du projet ANR « Bacillus 2.0 », qui réunit deux équipes INRAE et qui se propose d’adapter l’ensemble de ces outils de biologie de synthèse à Bsu afin d’être en mesure de partir d’une conception, assistée par ordinateur, de génomes semi-synthétiques de Bsu jusqu’à l’obtention de nouvelles souches industrielles. Cependant, les premiers travaux réalisés sur ce projet ont montré que le génome entier de Bsu ne pouvait pas être cloné et maintenu en l’état dans la levure. Ces résultats risquaient de remettre en question la faisabilité du projet dans son ensemble et en particulier la pertinence d’utiliser la levure comme plateforme d’assemblage du génome semi-synthétique de Bsu.L’objectif de ma thèse a consisté à démontrer que la levure restait un hôte pertinent pour le projet « Bacillus 2.0 ». Elle s’est déclinée en 3 parties. Dans la première partie, une méthode de clonage de génome récemment développée au laboratoire et dénommée CReasPy-Fusion, a progressivement été adaptée à Bsu. Les résultats obtenus ont montré (i) le transfert possible d'ADN plasmidique entre protoplastes bactériens et sphéroplastes de levure, (ii) l'efficacité d'un système CRISPR-Cas9 porté par les cellules de levure pour capturer/modifier cet ADN plasmidique pendant la fusion Bsu/levure, puis (iii) l'efficacité de ce même système pour capturer des fragments de génome d’une centaine de kb à partir de trois souches différentes. Des observations en microscopie à fluorescence ont également été réalisées et ont mis en évidence deux types d’interactions qui permettraient de passer d’un contact protoplastes/sphéroplastes à un ADN bactérien cloné dans la levure. Dans la seconde partie de ma thèse, la méthode CReasPy-Fusion a été mise à profit pour tenter de cloner de grands fragments du génome de Bsu dans la levure. Des fragments génomiques jusqu’à ~1 Mb ont pu être clonés dans la levure, mais leur capture a nécessité l’ajout préalable d’un grand nombre d’ARS sur le génome de Bsu pour stabiliser les constructions génétiques. La dernière partie a été l’adaptation de la méthode RAGE à Bsu. Cette méthode permet le transfert, non pas d’un génome entier mais de portions de génomes bactériens depuis la levure vers la bactérie à éditer. Une preuve de concept a été réalisée avec l’échange d’un premier fragment de génome de 155 kb par une version réduite de 44 kb.En conclusion, les travaux réalisés au cours de cette thèse ont montré la pertinence d’utiliser la levure comme plateforme d’ingénierie dans les modifications à grande échelle du génome de Bsu. D’une part, nous avons montré que des fragments d’une centaine de kb peuvent être clonés dans la levure, modifiés et transférés dans une cellule receveuse de façon à générer des Bsu mutants. 
Cette stratégie offre une véritable alternative à la transplantation de génome. D’autre part, nous avons montré que de grands fragments du génome de Bsu (jusqu’à 1 Mb) peuvent également être clonés dans la levure à condition de contenir de nombreux ARS dans leurs séquences. Grâce à ces résultats, le clonage d’un génome réduit de Bsu chez la levure est redevenu un objectif réalisable.
Genome engineering of microorganisms has become a standard in microbial biotechnology. In 2010, promising synthetic biology technologies emerged that use yeast as a platform for the assembly and engineering of synthetic bacterial genomes followed by their transplantation into a recipient cell. These technologies have led to the creation of the first synthetic cells and opened new avenues towards the construction of cells with fully controlled biological properties. Transferring these tools to microorganisms of industrial interest such as the Gram+ bacterium Bacillus subtilis (Bsu), a model in the biotechnology sector, would be a major step forward. This is precisely the aim of the ANR "Bacillus 2.0" project, which brings together two INRAE teams and aims to adapt all these synthetic biology tools to Bsu so as to be able to go from computer-aided design of semi-synthetic Bsu genomes to the production of new industrial strains. However, initial work on this project showed that the entire Bsu genome could not be cloned and maintained in yeast in its current state. These results threatened to call into question the feasibility of the entire project and, in particular, the relevance of using yeast as a platform for assembling the semi-synthetic Bsu genome. The goal of my thesis was to demonstrate that yeast remained a relevant host for the Bacillus 2.0 project. It was divided into three parts. In the first part, a genome cloning method recently developed in the laboratory, called CReasPy-Fusion, was progressively adapted to Bsu. The results obtained showed (i) the possible transfer of plasmid DNA between bacterial protoplasts and yeast spheroplasts, (ii) the efficiency of a CRISPR-Cas9 system carried by yeast cells to capture/modify this plasmid DNA during Bsu/yeast fusion, and then (iii) the efficiency of the same system to capture genomic fragments of about a hundred kb from three different strains. Fluorescence microscopy observations were also carried out, revealing two types of interaction that would enable the transition from protoplast/spheroplast contact to cloned bacterial DNA in yeast. In the second part of my thesis, the CReasPy-Fusion method was used in an attempt to clone large Bsu genome fragments in yeast. Genomic fragments of up to ~1 Mb could be cloned in yeast, but their capture required the prior addition of a large number of ARS to the Bsu genome to stabilize the genetic constructs. The final part was the adaptation of the RAGE method to Bsu. This method allows the transfer, not of a whole genome, but of portions of bacterial genomes from yeast to the bacteria to be edited. Proof of concept was achieved by exchanging a 155 kb genome fragment with a reduced 44 kb version. In conclusion, the work carried out during this thesis has shown the relevance of using yeast as an engineering platform for large-scale modifications of the Bsu genome. On the one hand, we have shown that fragments of around 100 kb can be cloned in yeast, modified and transferred into a recipient cell to generate Bsu mutants. This strategy offers a real alternative to genome transplantation. On the other hand, we have shown that large fragments of the Bsu genome (up to 1 Mb) can also be cloned in yeast, provided they contain numerous ARS in their sequences. Thanks to these results, cloning a reduced Bsu genome in yeast has once again become an achievable goal.
APA, Harvard, Vancouver, ISO, and other styles
3

Neuman, Arthur James III. "Regularization Methods for Ill-posed Problems." Kent State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=kent1273611079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Horová, Denisa. "Ocenění privátní firmy." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-75249.

Full text
Abstract:
The master's thesis deals with the valuation of a medical practice. The methods used, described and analyzed in the thesis represent the standard expert methods for business valuation; these are supplemented by specific procedures used for determining the value of medical practices in particular. The work also describes the health care system of the Czech Republic, the methods and sources of payment for medical treatments, value generators in medical practices, and basic procedures for estimating the approximate value of a medical practice and, where relevant, of its goodwill. Income-based (yield) valuation methods, which can derive the value of this type of business quite accurately, are described and applied to a practical example of a medical practice. The conclusion also discusses some commonly used but not entirely accurate valuation procedures.
APA, Harvard, Vancouver, ISO, and other styles
5

Fonseca, Gabriel P. "Monte Carlo modeling of the patient and treatment delivery complexities for high dose rate brachytherapy." Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/25298.

Full text
Abstract:
Doctoral thesis (Doutorado em Tecnologia Nuclear), Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP (IPEN/T). Funded by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), grant 11/01913-4.
APA, Harvard, Vancouver, ISO, and other styles
6

Steed, Arnold F. "A heuristic search method of selecting range-range sites for hydrographic surveys." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/27078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tautenhahn, Martin. "Lokalisierung für korrelierte Anderson Modelle." Master's thesis, Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200701584.

Full text
Abstract:
Im Fokus dieser Diplomarbeit steht ein korreliertes Anderson Modell. Unser Modell beschreibt kurzreichweitige Einzelplatzpotentiale, wobei negative Korrelationen zugelassen werden. Für dieses korrelierte Modell wird mittels der fraktionalen Momentenmethode im Falle genügend großer Unordnung exponentieller Abfall der Greenschen Funktion bewiesen. Anschließend wird daraus für den nicht korrelierten Spezialfall Anderson Lokalisierung bewiesen
This thesis (Diplomarbeit) is devoted to a correlated Anderson model. Our model describes short-range single-site potentials in which negative correlations are allowed. For this correlated model, exponential decay of the Green's function is proven in the case of sufficiently large disorder using the fractional moment method. Subsequently, Anderson localization is deduced for the uncorrelated special case.
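As background, the kind of bound the fractional moment method delivers can be summarized as follows; this is a standard textbook formulation with assumed notation, not an excerpt from the thesis.

```latex
% Fractional moment bound (standard form; notation assumed): for some s in (0,1)
% and sufficiently large disorder, the disorder-averaged fractional moment of the
% finite-volume Green's function decays exponentially in the distance |x - y|,
\[
\mathbb{E}\!\left[\,\lvert G_{\Lambda}(E + i\eta;\, x, y)\rvert^{s}\,\right]
\;\le\; C_{s}\, e^{-\mu_{s}\,\lvert x - y\rvert},
\]
% uniformly in the volume \Lambda and in \eta > 0; bounds of this type imply
% spectral (and, under further conditions, dynamical) localization.
```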
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Tuo. "Fingerprint Identification by Improved Method of Minutiae Matching." Miami University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=miami1484672769912832.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Aulí, Llinàs Francesc. "Model-Based JPEG2000 rate control methods." Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/5806.

Full text
Abstract:
Aquesta recerca està centrada en l'escalabilitat qualitativa de l'estàndard de compressió d'imatges JPEG2000. L'escalabilitat qualitativa és una característica fonamental que permet el truncament de la tira de bits a diferents punts sense penalitzar la qualitat de la imatge recuperada. L'escalabilitat qualitativa és també fonamental en transmissions d'imatges interactives, ja que permet la transmissió de finestres d'interès a diferents qualitats.
El JPEG2000 aconsegueix escalabilitat qualitativa a partir del mètode de control de factor de compressió utilitzat en el procés de compressió, que empotra capes de qualitat a la tira de bits. En alguns escenaris, aquesta arquitectura pot causar dos problemàtiques: per una banda, quan el procés de codificació acaba, el número i distribució de les capes de qualitat és permanent, causant una manca d'escalabilitat qualitativa a tires de bits amb una o poques capes de qualitat. Per altra banda, el mètode de control de factor de compressió construeix capes de qualitat considerant la optimització de la raó distorsió per l'àrea completa de la imatge, i això pot provocar que la distribució de les capes de qualitat per la transmissió de finestres d'interès no sigui adequada.
Aquesta tesis introdueix tres mètodes de control de factor de compressió que proveeixen escalabilitat qualitativa per finestres d'interès, o per tota l'àrea de la imatge, encara que la tira de bits contingui una o poques capes de qualitat. El primer mètode està basat en una simple estratègia d'entrellaçat (CPI) que modela la raó distorsió a partir d'una aproximació clàssica. Un anàlisis acurat del CPI motiva el segon mètode, basat en un ordre d'escaneig invers i una concatenació de passades de codificació (ROC). El tercer mètode es beneficia dels models de raó distorsió del CPI i ROC, desenvolupant una novedosa aproximació basada en la caracterització de la raó distorsió dels blocs de codificació dins una subbanda (CoRD).
Els resultats experimentals suggereixen que tant el CPI com el ROC són capaços de proporcionar escalabilitat qualitativa a tires de bits, encara que continguin una o poques capes de qualitat, aconseguint un rendiment de codificació pràcticament equivalent a l'obtingut amb l'ús de capes de qualitat. Tot i això, els resultats del CPI no estan ben balancejats per les diferents raons de compressió i el ROC presenta irregularitats segons el corpus d'imatges. CoRD millora els resultats de CPI i ROC i aconsegueix un rendiment ben balancejat. A més, CoRD obté un rendiment de compressió una mica millor que l'aconseguit amb l'ús de capes de qualitat. La complexitat computacional del CPI, ROC i CoRD és, a la pràctica, negligible, fent-los adequats per el seu ús en transmissions interactives d'imatges.
This work is focused on the quality scalability of the JPEG2000 image compression standard. Quality scalability is an important feature that allows the truncation of the code-stream at different bit-rates without penalizing the coding performance. Quality scalability is also fundamental in interactive image transmissions to allow the delivery of Windows of Interest (WOI) at increasing qualities.
JPEG2000 achieves quality scalability through the rate control method used in the encoding process, which embeds quality layers to the code-stream. In some scenarios, this architecture might raise two drawbacks: on the one hand, when the coding process finishes, the number and bit-rates of quality layers are fixed, causing a lack of quality scalability to code-streams encoded with a single or few quality layers. On the other hand, the rate control method constructs quality layers considering the rate-distortion optimization of the complete image, and this might not allocate the quality layers adequately for the delivery of a WOI at increasing qualities.
This thesis introduces three rate control methods that supply quality scalability for WOIs, or for the complete image, even if the code-stream contains a single or few quality layers. The first method is based on a simple Coding Passes Interleaving (CPI) that models the rate-distortion through a classical approach. An accurate analysis of CPI motivates the second rate control method, which introduces simple modifications to CPI based on a Reverse subband scanning Order and coding passes Concatenation (ROC). The third method benefits from the rate-distortion models of CPI and ROC, developing an approach based on a novel Characterization of the Rate-Distortion slope (CoRD) that estimates the rate-distortion of the code-blocks within a subband.
Experimental results suggest that CPI and ROC are able to supply quality scalability to code-streams, even if they contain a single or few quality layers, achieving a coding performance almost equivalent to the one obtained with the use of quality layers. However, the results of CPI are unbalanced among bit-rates, and ROC presents an irregular coding performance for some corpus of images. CoRD outperforms CPI and ROC achieving well-balanced and regular results and, in addition, it obtains a slightly better coding performance than the one achieved with the use of quality layers. The computational complexity of CPI, ROC and CoRD is negligible in practice, making them suitable to control interactive image transmissions.
APA, Harvard, Vancouver, ISO, and other styles
10

Alshahrani, Mohammed Nasser D. "Statistical methods for rare variant association." Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/22436/.

Full text
Abstract:
Deoxyribonucleic acid (DNA) sequencing allows researchers to conduct more complete assessments of low-frequency and rare genetic variants. In anticipation of the availability of next-generation sequencing data, there is increasing interest in investigating associations between complex traits and rare variants (RVs). In contrast to association studies of common variants (CVs), due to the low frequencies of RVs, common wisdom suggests that existing statistical tests for CVs might not work, motivating the recent development of several new tests that analyze RVs, most of which are based on the idea of pooling/collapsing RVs. Genome-wide association studies (GWAS) based on common SNPs gained more attention in the last few years and have been regularly used to examine complex genetic compositions of diseases and quantitative traits. GWASs have not discovered everything associated with diseases and genetic variations. However, recent empirical evidence has demonstrated that low-frequency and rare variants are, in fact, connected to complex diseases. This thesis will focus on the study of rare variant association. Aggregation tests, where multiple rare variants are analyzed jointly, have incorporated weighting schemes on variants. However, their power is very much dependent on the weighting scheme. I will address three topics in this thesis: the definition of rare variants and their call file (VCF) and a description of the methods that have been used in rare variant analysis. Finally, I will illustrate challenges involved in the analysis of rare variants and propose different weighting schemes for them. Therefore, since the efficiency of rare variant studies might be considerably improved by the application of an appropriate weighting scheme, choosing the proper weighting scheme is the topic of the thesis. In the following chapters, I will propose different weighting schemes, where weights are applied at the level of the variant, the individual or the cell (i.e. the individual genotype call), as well as a weighting scheme that can incorporate quality measures for variants (i.e., a quality score for variant calls) and cells (i.e., genotype quality).
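As context for the weighting schemes discussed above, the following is a minimal sketch of a weighted burden score using Madsen–Browning-style frequency weights, a commonly used scheme in rare variant aggregation tests; the thesis's own proposed weights are not reproduced here, and the function name is an assumption.

```python
import numpy as np

def weighted_burden_scores(genotypes: np.ndarray) -> np.ndarray:
    """Collapse rare variants into one weighted burden score per individual.

    genotypes: (n_individuals, n_variants) matrix of 0/1/2 minor-allele counts.
    Weights follow the Madsen-Browning idea of up-weighting rarer variants,
    w_j = 1 / sqrt(n * p_j * (1 - p_j)); illustrative only.
    """
    n, m = genotypes.shape
    p = (genotypes.sum(axis=0) + 1) / (2 * n + 2)   # smoothed minor-allele frequencies
    w = 1.0 / np.sqrt(n * p * (1 - p))              # rarer variants get larger weights
    return genotypes @ w                            # one burden score per individual

# The resulting scores can then be tested for association with the phenotype,
# e.g. by regressing case/control status on the burden score.
```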
APA, Harvard, Vancouver, ISO, and other styles
11

Mirza, Muhammad Javed. "Robust methods in range image understanding." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu148777912090705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Pumprová, Zuzana. "Valuation Methods of Interest Rate Options." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-73665.

Full text
Abstract:
The subject of this thesis is selected interest rate models and the valuation of interest rate derivatives, especially interest rate options. The time-homogeneous one-factor short rate models, Vasicek and Cox-Ingersoll-Ross, and the time-inhomogeneous short rate model, Hull-White, are treated. The Heath-Jarrow-Morton framework is introduced as an alternative to short rate models, evolving the entire term structure of interest rates. The short rate models are shown to be special cases of models within this framework. The models are derived using the risk-neutral pricing methodology.
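For reference, the standard dynamics of the short rate models named above are reproduced below in their usual textbook forms under the risk-neutral measure; the notation is assumed and not taken from the thesis.

```latex
% One-factor short rate models (standard forms):
\[
\text{Vasicek:}\qquad            dr_t = a\,(b - r_t)\,dt + \sigma\,dW_t
\]
\[
\text{Cox--Ingersoll--Ross:}\qquad dr_t = a\,(b - r_t)\,dt + \sigma\sqrt{r_t}\,dW_t
\]
\[
\text{Hull--White:}\qquad         dr_t = \bigl(\theta(t) - a\,r_t\bigr)\,dt + \sigma\,dW_t
\]
% The Hull-White model is time-inhomogeneous through \theta(t), which is chosen
% to fit the initial term structure.
```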
APA, Harvard, Vancouver, ISO, and other styles
13

Frankcombe, Terry James. "Numerical methods in reaction rate theory." [St. Lucia, Qld.], 2002. http://adt.library.uq.edu.au/public/adt-QU20021128.175205/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Mäsiarová, Jana. "Exchange Rate Modelling - Parities and Czech Crown." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-17469.

Full text
Abstract:
The paper analyses the validity of the main exchange rate theories in the case of the Czech crown. The investigated relationships comprise purchasing power parity, interest rate parity and the real interest monetary model. The technical part of the analysis involves cointegration, namely Johansen's method based on vector autoregressive models. Two currency pairs are in focus: CZK/EUR and CZK/USD. The empirical calculations did not prove the absolute validity of the theories but pointed to other factors affecting the exchange rate, such as the convergence process, the impact of inflation-targeting decisions, non-monetarist determinants and the recent financial crisis.
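The sketch below illustrates how a Johansen cointegration test of the kind mentioned above could be run in Python, assuming statsmodels is available; the file name and column names are hypothetical and do not come from the thesis.

```python
# Minimal Johansen cointegration sketch for an exchange-rate relationship.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

data = pd.read_csv("czk_eur_fundamentals.csv", index_col=0)   # hypothetical dataset
series = data[["log_czk_eur", "log_price_ratio"]].dropna()    # e.g. a PPP relationship

# det_order=0: include a constant; k_ar_diff=1: one lagged difference in the VAR.
result = coint_johansen(series, det_order=0, k_ar_diff=1)

print("trace statistics:", result.lr1)           # one statistic per cointegration rank
print("95% critical values:", result.cvt[:, 1])  # columns are the 90%/95%/99% levels
```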
APA, Harvard, Vancouver, ISO, and other styles
15

Sinka, Katharine Jane. "Developing the Mutual Climatic Range method of palaeoclimatic reconstruction." Thesis, University of East Anglia, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Norton, E. R., L. J. Clark, and E. W. Carpenter. "Planting Method and Seeding Rate Evaluation in Graham County." College of Agriculture, University of Arizona (Tucson, AZ), 2002. http://hdl.handle.net/10150/197471.

Full text
Abstract:
A single field experiment was established in 2001 at the Safford Agricultural Center to evaluate the effects planting method and seeding rate have on plant population and yield of the Upland cotton cultivar Deltapine DP655BR. Two planting methods, planting into moisture (pre-irrigated) and dry plant/water-up, were the main effects, with three seeding rates of 10, 20, and 30 lbs./acre as sub-effects. These effects were evaluated with respect to stand establishment and yield. Analysis of variance showed no significant differences with respect to planting method for either plant population or yield, so data were combined across main effects. Significant differences were observed in plant population and yield as a function of seeding rate. A linear increase in yield with plant population was observed. These results are not consistent with previous research examining plant population effects on yield. This experiment will be conducted again in 2002 in an effort to validate the results observed in 2001.
APA, Harvard, Vancouver, ISO, and other styles
17

Willersjö, Nyfelt Emil. "Comparison of the 1st and 2nd order Lee–Carter methods with the robust Hyndman–Ullah method for fitting and forecasting mortality rates." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48383.

Full text
Abstract:
The 1st and 2nd order Lee–Carter methods were compared with the Hyndman–Ullah method with regard to goodness of fit and the ability to forecast mortality rates. Swedish population data from the Human Mortality Database were used. The robust estimation property of the Hyndman–Ullah method was also tested by including the Spanish flu and a hypothetical scenario of the COVID-19 pandemic. After presenting the three methods and making several comparisons between them, it is concluded that the Hyndman–Ullah method is overall superior among the three methods for the chosen dataset. Its robust estimation of mortality shocks could also be confirmed.
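For readers unfamiliar with the first model family mentioned above, the classical Lee–Carter specification is summarized below in its usual form; the notation is standard and assumed rather than quoted from the thesis.

```latex
% Lee-Carter mortality model (standard form):
\[
\ln m_{x,t} = a_x + b_x\,k_t + \varepsilon_{x,t},
\qquad \sum_x b_x = 1,\quad \sum_t k_t = 0,
\]
% where m_{x,t} is the central death rate at age x in year t, a_x and b_x are age
% profiles, and k_t is the period mortality index. The parameters are typically
% fitted by a singular value decomposition of the centred log-rates, and k_t is
% extrapolated (e.g. as a random walk with drift) to produce forecasts.
```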
APA, Harvard, Vancouver, ISO, and other styles
18

Turer, Ibrahim. "Specific Absorption Rate Calculations Using Finite Difference Time Domain Method." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605200/index.pdf.

Full text
Abstract:
This thesis investigates the problem of interaction of electromagnetic radiation with human tissues. A Finite Difference Time Domain (FDTD) code has been developed to model a cellular phone radiating in the presence of a human head. In order to implement the code, FDTD difference equations have been solved in a computational domain truncated by a Perfectly Matched Layer (PML). Specific Absorption Rate (SAR) calculations have been carried out to study safety issues in mobile communication.
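To make the FDTD/SAR pipeline described above concrete, here is a minimal one-dimensional Yee-scheme sketch with a crude point SAR estimate. The grid size, material values, source frequency and tissue properties are illustrative assumptions, not the thesis's head model, and no PML is included.

```python
import numpy as np

c0, dx = 3e8, 1e-3
dt = dx / (2 * c0)                          # satisfies the 1-D Courant condition
nz, nt = 400, 2000
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
eps_r = np.ones(nz); eps_r[200:] = 40.0     # crude "tissue" half-space
sigma = np.zeros(nz); sigma[200:] = 1.0     # tissue conductivity, S/m
rho = 1000.0                                # tissue mass density, kg/m^3

ez = np.zeros(nz); hy = np.zeros(nz)
emax = 0.0
for n in range(nt):
    hy[:-1] += dt / (mu0 * dx) * (ez[1:] - ez[:-1])          # update H from curl E
    loss = sigma[1:] * dt / (2 * eps0 * eps_r[1:])
    ez[1:] = ((1 - loss) / (1 + loss)) * ez[1:] + \
             dt / (eps0 * eps_r[1:] * dx) / (1 + loss) * (hy[1:] - hy[:-1])
    ez[50] += np.sin(2 * np.pi * 900e6 * n * dt)             # soft source, ~900 MHz
    emax = max(emax, abs(ez[250]))                           # track peak E in tissue

# Point SAR from the peak field amplitude: SAR = sigma * |E|^2 / (2 * rho).
sar = sigma[250] * emax ** 2 / (2 * rho)
print(f"rough point SAR estimate: {sar:.3e} W/kg")
```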
APA, Harvard, Vancouver, ISO, and other styles
19

Miller, Audrey K. "Explanations and Blame Following Unwanted Sex: A Multi-Method Investigation." Ohio : Ohio University, 2005. http://www.ohiolink.edu/etd/view.cgi?ohiou1127421605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Clifford, Gari D. "Signal processing methods for heart rate variability analysis." Thesis, University of Oxford, 2002. http://ora.ox.ac.uk/objects/uuid:5129701f-1d40-425a-99a3-59a05e8c1b23.

Full text
Abstract:
Heart rate variability (HRV), the changes in the beat-to-beat heart rate calculated from the electrocardiogram (ECG), is a key indicator of an individual's cardiovascular condition. Assessment of HRV has been shown to aid clinical diagnosis and intervention strategies. However, the variety of HRV estimation methods and contradictory reports in this field indicate that there is a need for a more rigorous investigation of these methods as aids to clinical evaluation. This thesis investigates the development of appropriate HRV signal processing techniques in the context of pilot studies in two fields of potential application, sleep and head-up tilting (HUT). A novel method for characterising normality in the ECG using both timing information and morphological characteristics is presented. A neural network, used to learn the beat-to-beat variations in ECG waveform morphology, is shown to provide a highly sensitive technique for identifying normal beats. Fast Fourier Transform (FFT) based frequency-domain HRV techniques, which require re-sampling of the inherently unevenly sampled heart beat time-series (RR tachogram) to produce an evenly sampled time series, are then explored using a new method for producing an artificial RR tachogram. Re-sampling is shown to produce a significant error in the estimation of an (entirely specified) artificial RR tachogram. The Lomb periodogram, a method which requires no re-sampling and is applicable to the unevenly sampled nature of the signal, is investigated. Experiments demonstrate that the Lomb periodogram is superior to the FFT for evaluating HRV measured by the LF/HF-ratio, a ratio of the low to high frequency power in the RR tachogram within a specified band (0.04-0.4 Hz). The effect of adding artificial ectopic beats in the RR tachogram is then considered and it is shown that ectopic beats significantly alter the spectrum and therefore must be removed or replaced. Replacing ectopic beats by phantom beats is compared to the case of ectopic-related RR interval removal for the FFT and Lomb methods for varying levels of ectopy. The Lomb periodogram is shown to provide a significantly better estimate of the LF/HF-ratio under these conditions and is a robust method for measuring the LF/HF-ratio in the presence of (a possibly unknown number of) ectopic beats or artefacts. The Lomb periodogram and FFT-based techniques are applied to a database of sleep apnoeic and normal subjects. A new method of assessing HRV during sleep is proposed to minimise the confounding effects on HRV of changes due to changing mental activity. Estimation of the LF/HF-ratio using the Lomb technique is shown to separate these two patient groups more effectively than with FFT-based techniques. Results are also presented for the application of these methods to controlled (HUT) studies on subjects with syncope, an autonomic nervous system problem, which indicate that the techniques developed in this thesis may provide a method for differentiating between sub-classes of syncope.
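The following is a minimal sketch of the kind of LF/HF-ratio computation described above, applied with a Lomb periodogram to an unevenly sampled RR tachogram. The synthetic data and the conventional LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) band edges are assumptions for illustration, not the thesis's exact processing chain.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(300)      # RR intervals in seconds (synthetic)
t = np.cumsum(rr)                                # beat times: inherently uneven sampling
x = rr - rr.mean()                               # de-meaned tachogram

freqs = np.linspace(0.01, 0.5, 500)              # Hz
power = lombscargle(t, x, 2 * np.pi * freqs)     # lombscargle expects angular frequencies

lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()   # low-frequency band power
hf = power[(freqs >= 0.15) & (freqs <= 0.40)].sum()  # high-frequency band power
print("LF/HF ratio:", lf / hf)
```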
APA, Harvard, Vancouver, ISO, and other styles
21

Zeileis, Achim, Ajay Shah, and Ila Patnaik. "Exchange Rate Regime Analysis Using Structural Change Methods." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/386/1/document.pdf.

Full text
Abstract:
Regression models for de facto currency regime classification are complemented by inferential techniques for tracking the stability of exchange rate regimes. Several structural change methods are adapted to these regressions: tools for assessing the stability of exchange rate regressions in historical data (testing), in incoming data (monitoring) and for determining the breakpoints of shifts in the exchange rate regime (dating). The tools are illustrated by investigating the Chinese exchange rate regime after China gave up a fixed exchange rate to the US dollar in 2005 and by tracking the evolution of the Indian exchange rate regime since 1993.
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
22

Gaines, Tommi Lynn. "Statistical methods for analyzing multiple race response data." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1580805511&sid=5&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Randtke, Edward Alexander. "Development and Evaluation of Exchange Rate Measurement Methods." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/314652.

Full text
Abstract:
Exchange rate determination allows precise modeling of chemical systems, and allows one to infer properties relevant to tumor biology such as enzyme activity and pH. Current exchange rate determination methods found via Contrast Enhanced Saturation Transfer agents are not effective for fast exchanging protons and use non-linear models. A comparison of their effectiveness has not been performed. In this thesis, I compare the effectiveness of current exchange rate measurement methods. I also develop exchange rate measurement methods that are effective for fast exchanging CEST agents and use linear models instead of non-linear models. In chapter 1 I review current exchange rate measurement methods. In chapter 2 I compare several of the current methods of exchange rate measurement, along with several techniques we develop. In chapter 3 I linearize the Quantifying Exchange through Saturation Transfer (QUEST) measurement method analogously to the Omega Plot method, and compare its effectiveness to the QUEST method. In chapter 4, I compare the effectiveness of current exchange rate theories (Transition State Theory and Landau-Zener theory) in the moderate coupling regime, and propose our own combined Eyring-Landau-Zener theory for this intermediate regime. In chapter 5 I discuss future directions for method development and experiments involving exchange rate determination.
APA, Harvard, Vancouver, ISO, and other styles
24

Aieta, C. D. "QUANTUM AND SEMICLASSICAL METHODS FOR RATE CONSTANT CALCULATIONS." Doctoral thesis, Università degli Studi di Milano, 2018. http://hdl.handle.net/2434/546203.

Full text
Abstract:
Chemical reactions are intrinsically dynamical processes. Reaction rate constants, and thus the understanding of chemical kinetics, can in principle be obtained at a very detailed level if one is able to compute the real time quantum dynamics for the reactive system. Unfortunately, the numerical implementation of real time quantum dynamics is very hard to perform, especially for high dimensional systems, because the computational effort scales exponentially with the number of degrees of freedom. In this Ph.D. thesis, two open problems in reaction rate theory have been addressed. The first one is to extend to high dimensional systems the inclusion of quantum effects in rate constant computations. The second issue deals with the inclusion of real time dynamics into very accurate rate constant calculations. The thesis is organized as follows. After a general Introduction, the second chapter is an overview of the state of the art in reaction rate theory. Then, in the third chapter, the derivation of Miller's Semiclassical Transition State Theory (SCTST) is recalled. SCTST is the method employed to obtain accurate and quantum-corrected rate constants for high dimensional reactions. In chapter 4, a novel parallel implementation of this theory (that has also been released as open source code within J. R. Barker's MultiWell suite of codes) is described together with its application to high dimensional systems. In the following chapters, a new quantum rate approach able to include real time dynamics effects is presented. Derivation and applications of the latter are thoroughly described in chapter 6. The thesis ends with some perspectives about possible future developments.
APA, Harvard, Vancouver, ISO, and other styles
25

Dřímal, Marek. "How Does the New Keynesian Phillips Curve Forecast the Rate of Inflation in the Czech Economy?" Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-198859.

Full text
Abstract:
This analysis studies the New Keynesian Phillips Curve - its inception from RBC theory and DSGE modelling via the incorporation of nominal rigidities, and its various specifications and empirical issues. Estimates on Czech macroeconomic data using the Generalised Method of Moments show that the hybrid New Keynesian Phillips Curve with the labour income share or the real unit labour cost as the driving variable can be considered an appropriate model describing inflation in the Czech Republic. Compared with other analyses, we show that the inflation process in the Czech Republic is more backward-looking than other researchers' estimates based on US data suggest.
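For orientation, the hybrid New Keynesian Phillips Curve referred to above is commonly written in the reduced form below; the notation is the standard one and is assumed here rather than copied from the thesis.

```latex
% Hybrid New Keynesian Phillips Curve (common reduced form):
\[
\pi_t = \gamma_f\, E_t[\pi_{t+1}] + \gamma_b\, \pi_{t-1} + \lambda\, mc_t + \varepsilon_t,
\]
% where mc_t is real marginal cost, proxied e.g. by the labour income share or the
% real unit labour cost, and the forward/backward weights \gamma_f, \gamma_b and the
% slope \lambda are typically estimated by GMM, with lagged variables serving as
% instruments for the unobserved expectation E_t[\pi_{t+1}].
```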
APA, Harvard, Vancouver, ISO, and other styles
26

Johnson, Keela P. "An evaluation of the methods used to prevent sexual assault within colleges." Online version, 2000. http://www.uwstout.edu/lib/thesis/2000/2000johnsonke.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Alanazi, Mohammed Awwad. "Non-invasive Method to Measure Energy Flow Rate in a Pipe." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/103179.

Full text
Abstract:
Current methods for measuring energy flow rate in a pipe use a variety of invasive sensors, including temperature sensors, turbine flow meters, and vortex shedding devices. These systems are costly to buy and install. A new approach that uses non-invasive sensors that are easy to install and less expensive has been developed. A thermal interrogation method using heat flux and temperature measurements is used. A transient thermal model, the lumped capacitance method (LCM), applied before and during activation of an external heater, provides estimates of the fluid heat transfer coefficient h and the fluid temperature. The major components of the system are a thin-foil thermocouple, a heat flux sensor (PHFS), and a heater. To minimize the thermal contact resistance R" between the thermocouple and the pipe surface, two thermocouples, welded and parallel, were tested together in the same set-up. Values of the heat transfer coefficient h, the thermal contact resistance R", the time constant τ, and the water temperature T∞ were determined by using a parameter estimation code which depends on the minimum root mean square (RMS) error between the analytical and experimental sensor temperature values. The time for processing data to get the parameter estimation values is from three to four minutes. The experiments were done over a range of flow rates (1.5 to 14.5 gallons/minute). A correlation between the heat transfer coefficient h and the flow rate Q was established for both the parallel and the welded thermocouples. Overall, the parallel thermocouple is better than the welded thermocouple. The parallel thermocouple gives a small average thermal contact resistance, R" = 0.00001 (m²·°C/W), and consistent values of water temperature and heat transfer coefficient h, with good repeatability and sensitivity. Consequently, a non-invasive energy flow rate meter, or BTU meter, can be used to estimate the flow rate and the fluid temperature in real applications.
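The lumped-capacitance response that underlies this kind of thermal interrogation is summarized below in its textbook form; the exact parameter-estimation model used in the thesis may differ, and the symbols are assumed.

```latex
% Lumped capacitance method (textbook form):
\[
T(t) = T_\infty + \left(T_0 - T_\infty\right)\, e^{-t/\tau},
\qquad \tau = \frac{\rho V c}{h A},
\]
% fitting the measured sensor temperature before and during heating yields the time
% constant \tau, and hence the convection coefficient h and the fluid temperature
% T_\infty; h is then mapped to the volumetric flow rate Q through an empirical
% correlation, giving the non-invasive flow rate estimate.
```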
MS
APA, Harvard, Vancouver, ISO, and other styles
28

McGregor, Susan Jennifer. "Practice Makes the Difference: The Effect of Rate-Building and Rate-Controlled Practice on Retention." The University of Waikato, 2006. http://hdl.handle.net/10289/2515.

Full text
Abstract:
Six home-schooled students and one adult participant each initially practiced to accuracy two decks of five previously unknown multiplication facts. The decks were yoked for practice and reinforcement. Once accurate performance was achieved, overpractice was undertaken using custom computer software that allowed either fast (free-operant) or rate-controlled responding. Rate-building practice, to an established fluency performance standard, was used with one deck while practice with the other deck was rate-controlled. The number of times a fact was practiced was the same for both methods. Response rate and accuracy were assessed after training to accuracy, at the end of overpractice, and after 4 and 8 weeks of no practice. The assessment at the end of rate-building confirmed that rate building resulted in fast and accurate responding. It also confirmed that, for the rate-controlled facts, response rates did not meet the fluency performance standard. However, the 4- and 8-week retention assessments showed no consistent differences in accuracy or response rate between the rate-controlled and rate-built decks. After 8 weeks without practice, performance on the rate-built deck was not significantly different from that prior to rate building. These results suggest that practice to fluency does not lead to superior retention when compared to the same amount of rate-controlled practice. The results also indicate that when a skill is practiced to fluency, a period without practice leads to deterioration, to pre-rate-building levels, of accuracy and response rate. This study highlights the need for research examining the role of maintenance in the effectiveness of fluency-based learning such as Precision Teaching.
APA, Harvard, Vancouver, ISO, and other styles
29

von Hippel, Eric, Nikolaus Franke, and Reinhard Wilhelm Prügl. "Pyramiding: Efficient search for rare subjects." Elsevier, 2009. http://dx.doi.org/10.1016/j.respol.2009.07.005.

Full text
Abstract:
The need to economically identify rare subjects within large, poorly-mapped search spaces is a frequently-encountered problem for social scientists and managers. It is notoriously difficult, for example, to identify "the best new CEO for our company," or the "best three lead users to participate in our product development project." Mass screening of entire populations or samples becomes steadily more expensive as the number of acceptable solutions within the search space becomes rarer. The search strategy of "pyramiding" is a potential solution to this problem under many conditions. Pyramiding is a search process based upon the idea that people with a strong interest in a topic or field tend to know people more expert than themselves. In this paper we report upon four experiments empirically exploring the efficiency of pyramiding searches relative to mass screening. We find that pyramiding on average identified the most expert individual in a group on a specific topic with only 28.4% of the group interviewed - a great efficiency gain relative to mass screening. Further, pyramiding identified one of the top 3 experts in a population after interviewing only 15.9% of the group on average. We discuss conditions under which the pyramiding search method is likely to be efficient relative to screening.
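The toy sketch below illustrates the pyramiding idea described above, following a referral chain toward higher expertise and counting interviews; the `ask_for_referral` and `expertise` callables are hypothetical placeholders such as one might use in a simulation experiment, not the paper's actual procedure.

```python
def pyramiding_search(start, ask_for_referral, expertise, max_steps=50):
    """Follow referrals ("who knows more about this than you?") and return the
    most expert person encountered plus the number of interviews used."""
    current, best, interviews = start, start, 1
    for _ in range(max_steps):
        referred = ask_for_referral(current)      # None means: no better referral known
        if referred is None:
            break
        interviews += 1
        current = referred
        if expertise(current) > expertise(best):  # keep track of the best person seen
            best = current
    return best, interviews

# In a mass-screening baseline, by contrast, every member of the group would be
# interviewed, which is what makes the interview counts directly comparable.
```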
APA, Harvard, Vancouver, ISO, and other styles
30

Wu, Di. "A Novel Method to Prepare Silica Based Carbon Dioxide Capture Sorbent." University of Akron / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=akron1215095709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Tye, Thomas N. "Application of digital signal processing methods to very high frequency omnidirectional range (VOR) signals in the design of an airborne flight measurement system." Ohio University / OhioLINK, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1177702951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

O'Kelley, Ryan. "Rate handling methods in variable amplitude fatigue cycle processing." Honors in the Major Thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1477.

Full text
Abstract:
This item is only available in print in the UCF Libraries.
Bachelors
Engineering and Computer Science
Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
33

Jones, Peter P. "Determining cluster-cluster aggregation rate kernels using inverse methods." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/57480/.

Full text
Abstract:
We investigate the potential of inverse methods for retrieving adequate information about the rate kernel functions of cluster-cluster aggregation processes from mass density distribution data. Since many of the classical physical kernels have fractional order exponents the ability of an inverse method to appropriately represent such functions is a key concern. In early chapters, the properties of the Smoluchowski Coagulation Equation and its simulation using Monte Carlo techniques are introduced. Two key discoveries made using the Monte Carlo simulations are briefly reported. First, that for a range of nonlocal solutions of finite mass spectrum aggregation systems with a source of mass injection, collective oscillations of the solution can persist indefinitely despite the presence of significant noise. Second, that for similar finite mass spectrum systems with (deterministic) stable, but sensitive, nonlocal stationary solutions, the presence of noise in the system can give rise to behaviour indicative of phase-remembering, noise-driven quasicycles. The main research material on inverse methods is then presented in two subsequent chapters. The first of these chapters investigates the capacity of an existing inverse method in respect of the concerns about fractional order exponents in homogeneous kernels. The second chapter then introduces a new more powerful nonlinear inverse method, based upon a novel factorisation of homogeneous kernels, whose properties are assessed in respect of both stationary and scaling mass distribution data inputs.
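For reference, the Smoluchowski coagulation equation and the homogeneity property of its kernels, both central to the abstract above, are stated below in their standard forms; the notation is assumed rather than quoted from the thesis.

```latex
% Smoluchowski coagulation equation, with c_k(t) the concentration of clusters of
% mass k and K(i,j) the aggregation rate kernel:
\[
\frac{d c_k}{dt} \;=\; \frac{1}{2} \sum_{i+j=k} K(i,j)\, c_i c_j
\;-\; c_k \sum_{j \ge 1} K(k,j)\, c_j .
\]
% Many classical physical kernels are homogeneous,
\[
K(a i,\, a j) = a^{\lambda}\, K(i, j),
\]
% often with a fractional homogeneity exponent \lambda -- the property that the
% inverse methods studied in the thesis must be able to represent.
```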
APA, Harvard, Vancouver, ISO, and other styles
34

Greene, Daniel John. "Methods for determining the genetic causes of rare diseases." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/270546.

Full text
Abstract:
Thanks to the affordability of DNA sequencing, hundreds of thousands of individuals with rare disorders are undergoing whole-genome sequencing in an effort to reveal novel disease aetiologies, increase our understanding of biological processes and improve patient care. However, the power to discover the genetic causes of many unexplained rare diseases is hindered by a paucity of cases with a shared molecular aetiology. This thesis presents research into statistical and computational methods for determining the genetic causes of rare diseases. Methods described herein treat important aspects of the nature of rare diseases, including genetic and phenotypic heterogeneity, phenotypes involving multiple organ systems, Mendelian modes of inheritance and the incorporation of complex prior information such as model organism phenotypes and evolutionary conservation. The complex nature of rare disease phenotypes and the need to aggregate patient data across many centres has led to the adoption of the Human Phenotype Ontology (HPO) as a means of coding patient phenotypes. The HPO provides a standardised vocabulary and captures relationships between disease features. I developed a suite of software packages dubbed 'ontologyX' in order to simplify analysis and visualisation of such ontologically encoded data, and enable them to be incorporated into complex analysis methods. An important aspect of the analysis of ontological data is quantifying the semantic similarity between ontologically annotated entities, which is implemented in the ontologyX software. We employed this functionality in a phenotypic similarity regression framework, 'SimReg', which models the relationship between ontologically encoded patient phenotypes of individuals and rare variation in a given genomic locus. It does so by evaluating support for a model under which the probability that a person carries rare alleles in a locus depends on the similarity between the person's ontologically encoded phenotype and a latent characteristic phenotype which can be inferred from data. A probability of association is computed by comparison of the two models, allowing prioritisation of candidate loci for involvement in disease with respect to a heterogeneous collection of disease phenotypes. SimReg includes a sophisticated treatment of HPO-coded phenotypic data but dichotomises the genetic data at a locus. Therefore, we developed an additional method, 'BeviMed', standing for Bayesian Evaluation of Variant Involvement in Mendelian Disease, which evaluates the evidence of association between allele configurations across rare variants within a genomic locus and a case/control label. It is capable of inferring the probability of association, and conditional on association, the probability of each mode of inheritance and probability of involvement of each variant. Inference is performed through a Bayesian comparison of multiple models: under a baseline model disease risk is independent of allele configuration at the given rare variant sites and under an alternate model disease risk depends on the configuration of alleles, a latent partition of variants into pathogenic and non-pathogenic groups and a mode of inheritance. The method can be used to analyse a dataset comprising thousands of individuals genotyped at hundreds of rare variant sites in a fraction of a second, making it much faster than competing methods and facilitating genome-wide application.
APA, Harvard, Vancouver, ISO, and other styles
35

Shi, Feng. "Nucleation and growth in materials and on surfaces: kinetic Monte Carlo simulations and rate equation theory." OhioLINK ETD Center, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1216839589.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Rabe, Hermann. "The finite section method for infinite Vandermonde matrices and applications." Thesis, North-West University, 2007. http://hdl.handle.net/10394/2034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ahlfeld, Richard Benedikt Heinrich. "A data-driven uncertainty quantification method for scarce data and rare events." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/59075.

Full text
Abstract:
The efficient integration of manufacturing, experimental or operational data into physical simulations using uncertainty quantification techniques is becoming increasingly important for the engineering industry. There is a genuine interest in using uncertainty quantification methods to improve product efficiency, reliability, and safety. However, contemporary methods still have various limitations, which have prevented wider industrial application. This thesis analyses existing problems in industry and contributes new solutions in five areas: data scarcity, the curse of dimensionality, rare events, discontinuous models and epistemic model-form uncertainty. The major novelty of this work is a new data-driven Polynomial Chaos framework called SAMBA that can be used to improve engineering designs by accounting for uncertainties in simulations more efficiently and accurately. SAMBA provides a single solution to many industrial problems. It is particularly useful for applications where only scarce statistical data is available and for efficiently quantifying the likelihood of rare events with disastrous consequences, the so-called Black Swan. Other benefits are the simple reconstruction of adaptive and anisotropic sparse grid quadrature rules to alleviate the curse of dimensionality, a simpler combination of arbitrary distributions and random data within a single method, and higher accuracy for data sets that do not follow a definable distribution. To deal with special cases, two extensions have been added to SAMBA's framework: first, a modification that allows the use of arbitrary points within Padé approximations for models containing discontinuities and, second, a general multi-fidelity framework to counteract model-form uncertainty in simulations. The thesis concludes with the application of SAMBA to various state-of-the-art models in Formula 1, turbomachinery, and space flight (one example analyses structural dynamics uncertainty of NASA's Space Launch System). Particularly innovative are two applications: one showing how to account for Black Swans in gas turbine simulations and one showing how to deal with scarce data in Formula 1.
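As general background to the data-driven Polynomial Chaos framework named above, the generic expansion that such methods build on is recalled below; this is the standard formulation with assumed notation, not SAMBA's specific construction.

```latex
% Generic polynomial chaos expansion: a model output u depending on random inputs
% \xi is expanded in polynomials orthogonal with respect to the input measure,
\[
u(\boldsymbol{\xi}) \;\approx\; \sum_{i=0}^{P} c_i \, \Phi_i(\boldsymbol{\xi}),
\qquad
\mathbb{E}\!\left[\Phi_i \Phi_j\right] = \delta_{ij}\,\gamma_i .
\]
% In a data-driven ("arbitrary") polynomial chaos setting the orthogonal basis is
% constructed directly from sample moments of the scarce data, rather than from a
% named parametric distribution.
```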
APA, Harvard, Vancouver, ISO, and other styles
38

Kubat, Jamie. "Comparing Dunnett's Test with the False Discovery Rate Method: A Simulation Study." Thesis, North Dakota State University, 2013. https://hdl.handle.net/10365/27025.

Full text
Abstract:
Recently, the idea of multiple comparisons has been criticized because of its lack of power in datasets with a large number of treatments. Many family-wise error corrections are far too restrictive when large quantities of comparisons are being made. At the other extreme, a test like the least significant difference does not control the family-wise error rate, and therefore is not restrictive enough to identify true differences. A solution lies in multiple testing. The false discovery rate (FDR) uses a simple algorithm and can be applied to datasets with many treatments. The current research compares the FDR method to Dunnett's test using agronomic data from a study with 196 varieties of dry beans. Simulated data is used to assess type I error and power of the tests. In general, the FDR method provides a higher power than Dunnett's test while maintaining control of the type I error rate.
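Since the abstract leans on the FDR procedure's "simple algorithm", a minimal sketch of the Benjamini–Hochberg step-up rule is given below; the p-values are illustrative only and the function name is an assumption.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * q, then reject the k smallest p-values.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.9]
print(benjamini_hochberg(pvals, q=0.05))   # rejects the two smallest p-values here
```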
APA, Harvard, Vancouver, ISO, and other styles
39

Tsai, Hsin-Feng, and 蔡幸峰. "A Bit Rate Scalable CELP Coding Method." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/47270424925862588484.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (master's and doctoral program)
Academic year 92 (ROC calendar)
CELP (code-excited linear prediction) has been the dominant speech coding scheme of various standards since it was proposed in 1984. MPEG-4 not only adopts CELP as a coding scheme but also provides scalability, which allows a higher-quality speech signal to be decoded by utilizing more parameters. MPEG-4 increases the quality of the decoded signal by means of the concept of an "enhancement layer". In order to overcome the drawback of the large bitrate step size of an enhancement layer, DPWBC was proposed. DPWBC adopts a suitable waveform as the basis to encode each selected signal. DPWBC regards a run of consecutive positive or consecutive negative samples as one signal and decides whether that signal should be encoded according to its width: signals with larger width have higher priority to be selected, while signals with smaller width are eliminated. This thesis proposes two other decision criteria: power-decision and energy-decision. Under the power-decision criterion, signals with higher power have higher priority to be selected while signals with lower power are eliminated; under the energy-decision criterion, signals with higher energy have higher priority to be selected while signals with lower energy are eliminated. In addition, other waveform shapes as the basis and different frame sizes are tested in this thesis. Experimental results show that the performance of the energy-decision criterion is superior to those of the width-decision and power-decision criteria, and that the performance of the power-decision criterion is comparable to that of the width-decision criterion.
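The toy sketch below illustrates the energy-decision idea described above: split a signal into runs of consecutive same-sign samples, rank the runs by energy, and keep only the highest-ranked runs. The function name, the selection budget and the synthetic signal are assumptions for illustration, not the thesis's encoder.

```python
import numpy as np

def select_runs_by_energy(signal: np.ndarray, n_keep: int) -> np.ndarray:
    sign = np.sign(signal)
    # Sign changes delimit the "continuous positive / continuous negative" runs.
    boundaries = np.flatnonzero(np.diff(sign)) + 1
    runs = np.split(np.arange(signal.size), boundaries)
    energies = [float(np.sum(signal[idx] ** 2)) for idx in runs]
    keep = np.argsort(energies)[::-1][:n_keep]      # highest-energy runs first
    out = np.zeros_like(signal)
    for r in keep:
        out[runs[r]] = signal[runs[r]]              # keep selected runs, zero the rest
    return out

x = np.sin(np.linspace(0, 6 * np.pi, 120)) * np.random.default_rng(1).uniform(0.2, 1.0, 120)
print(np.count_nonzero(select_runs_by_energy(x, n_keep=3)))
```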
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Wen-Chieh, and 劉文傑. "Range Finder With Pulse Reflection Oscillation Method." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/64121824428828339033.

Full text
Abstract:
Doctoral thesis
National Chiao Tung University
Institute of Electro-Optical Engineering
Academic year 91 (ROC calendar)
For either of the two conventional methods employed in typical laser range-finders, the phase method and the pulse method, increasingly sophisticated circuitry is needed as higher precision is required. The purpose of this thesis is to present a new method, the pulse reflection oscillation method, and to verify its feasibility by designing and operating a prototype system based on its principle. The pulse reflection oscillation method works by measuring the period of the oscillation waves in the emission-reflection loop to determine the propagation time between the target point and the reference point. It employs a frequency counter to measure the period, and it can perform very high precision measurement with a very simple circuit. In this thesis, the system's optical path configuration is defined and the principle of pulse reflection oscillation is discussed. Based on this configuration, the system is applied to the measurement of the propagation delay in optical fibers and to the determination of fault points in electric cables. The results verify the feasibility of the method.
APA, Harvard, Vancouver, ISO, and other styles
41

Chang, Yu-Hsuan, and 章育瑄. "Comparison of Volatility Forecasting Performance - Range-based method vs. Return-based method." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/89463565158100373178.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Banking and Finance (master's program)
Academic year 98 (ROC calendar)
This thesis selects appropriate models for the volatility of nine stock markets from among ARMA, GARCH, CARR and VECM models, using range, high and low variables to fit the models. The range-based estimator proposed by Parkinson (1980) and the squared return are used as proxies of the true volatility. The study uses statistical loss functions, including MAE, MSE, LLE and GMLE, as well as VaR-based performance assessments that address accuracy and efficiency, and also employs the more robust SPA test to compare the forecasting performance of the models. The empirical results indicate that for MAE and LLE the CARR model is preferred, while for MSE and GMLE asymmetric GARCH models are preferred. For the VaR-based loss function, the CARR model is preferred except for KOSPI, NKI225 and TAIEX. In short, under both statistical and financial loss functions, volatility in the nine stock markets is forecast well when either the CARR model or an asymmetric GARCH model is used. Therefore, the choice of stock market and loss function is important for volatility forecasting.
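For reference, the Parkinson (1980) range-based estimator used as a volatility proxy above is recalled below in its standard form; the notation is assumed and not taken from the thesis.

```latex
% Parkinson (1980) range-based variance estimator, with H_t and L_t the intraday
% high and low prices:
\[
\hat{\sigma}^2_{P,t} \;=\; \frac{1}{4\ln 2}\,\left(\ln \frac{H_t}{L_t}\right)^{2}.
\]
% The competing proxy is the squared daily return, \hat{\sigma}^2_{R,t} = r_t^2;
% the loss functions (MAE, MSE, LLE, GMLE) compare each model's forecast against
% one of these proxies of the unobservable true volatility.
```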
APA, Harvard, Vancouver, ISO, and other styles
42

Hou, Mei Chun, and 侯玫君. "Convergent Rate Of The Logarithmic Barrier Function Method." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/18538523394762871449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kao, Chi-Feng, and 高啟峰. "The Research On The Best Stroke Rate Evaluation Method For The Mid 300m Section Of The Standard 500m Dragon Boat Race." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/93x6f2.

Full text
Abstract:
Master's
University of Taipei
Graduate Institute of Sports Science
105
Research purpose: The purpose of this research is to discover the best stroke-rate evaluation method for the mid 300 m section of the standard 500 m dragon boat race. Method: The research subjects are 18 team members (16 paddlers, 1 drummer, and 1 flag catcher) of the BNP Paribas Cardiff dragon boat team who participated in the Women's Open of the 7th New Taipei City Speaker Cup Dragon Boat Race in 2017, using paddles and a boat certified by the IDBF. The method is to record and analyze how different stroke rates influence boat speed in the mid 300 m section of the total 500 m distance. Result: Except for the comparison between 72 strokes per minute and 77 strokes per minute, which did not reach statistical significance (p = 0.055), all the other comparisons did (p = 0.000). Conclusion: Only by practicing the optimal stroke rate and stroke distance at the maximum intensity the paddlers can sustain can a crew complete the race most efficiently and achieve good results.
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Li. "Robust Methods of Testing Long Range." Thesis, 2007. http://hdl.handle.net/10012/3392.

Full text
Abstract:
This thesis develops a novel robust periodogram method for detecting long memory. Though many tests for long memory are based on the idea of linear regression, there are no results in the statistical literature on utilizing robust regression methodology for the detection of long memory. The advantage of robust regression is its substantially lower sensitivity to atypical observations or outliers, compared with classical regression based on the least squares method. The thesis suggests two versions of the robust periodogram method, based on the least quantile and the least trimmed methods. The new robust periodogram methods are shown to provide smaller bias in long memory estimation when compared with the classical periodogram method; however, the variability of estimation is increased. Therefore, we develop bootstrapped modifications of the new robust periodogram methods to reduce the variability of estimation. The bootstrapped modifications of the robust periodogram tests substantially reduce the variance of estimation and provide competitively low bias. All proposed robust methods are illustrated by simulations and by case studies on currency exchange rates, and a comparative analysis with other existing tests for long memory is carried out.
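The underlying idea is a GPH-style regression of the log-periodogram at low Fourier frequencies, with the ordinary least-squares step replaced by a robust fit. The sketch below substitutes a Huber M-estimator (via statsmodels) for the least-quantile / least-trimmed regressions used in the thesis, purely as an illustrative stand-in:

import numpy as np
import statsmodels.api as sm

def robust_log_periodogram_d(x, frac=0.5):
    """Hedged sketch: estimate the long-memory parameter d by a robust
    regression of the log-periodogram on log(4 sin^2(lambda/2))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dft = np.fft.rfft(x - x.mean())
    periodogram = (np.abs(dft) ** 2) / (2.0 * np.pi * n)
    m = int(n ** frac)                       # number of low frequencies used
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n                # Fourier frequencies
    y = np.log(periodogram[j])
    X = sm.add_constant(np.log(4.0 * np.sin(lam / 2.0) ** 2))
    fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
    return -fit.params[1]                    # slope estimates -d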
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Yu-Chou, and 陳禹州. "Flow Rate Based Detection Method for Apneas And Hypopneas." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/s222np.

Full text
Abstract:
Master's
National Sun Yat-sen University
Department of Mechanical and Electro-Mechanical Engineering
95
Sleep apnea syndrome (SAS) has become an increasingly important public-health problem in recent years. It can adversely affect neurocognitive, cardiovascular, and respiratory function and can also cause behavioral disorders. Since up to 90% of these cases are obstructive sleep apnea (OSA), the study of how to diagnose, detect, and treat OSA is becoming a significant issue, both academically and medically. Polysomnography (PSG) can monitor OSA with relatively less invasive techniques; however, PSG-based sleep studies are expensive and time-consuming because they require overnight evaluation in sleep laboratories with dedicated systems and attending personnel. This work develops a flow-rate-based detection method for apneas. In particular, via signal processing, feature extraction, and a neural network, this thesis introduces a flow-rate-based detection system. The goal is to detect OSA in less time and at reduced financial cost.
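The pipeline sketched in the abstract (signal processing, then feature extraction, then a neural network) could look roughly like the following; the epoch length, features, and classifier settings here are assumptions made for illustration, not the thesis design:

import numpy as np
from sklearn.neural_network import MLPClassifier

def epoch_features(flow, fs, epoch_s=10):
    """Illustrative feature extraction: slice the airflow signal into fixed
    epochs and summarize each epoch's amplitude."""
    samples = int(fs * epoch_s)
    n_epochs = len(flow) // samples
    feats = []
    for k in range(n_epochs):
        seg = flow[k * samples:(k + 1) * samples]
        feats.append([
            np.mean(np.abs(seg)),   # mean absolute flow
            np.std(seg),            # flow variability
            np.ptp(seg),            # peak-to-peak amplitude
        ])
    return np.array(feats)

# Hypothetical usage with labeled epochs (0 = normal, 1 = hypopnea, 2 = apnea):
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_train, y_train)
# predictions = clf.predict(epoch_features(flow_signal, fs=100))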
APA, Harvard, Vancouver, ISO, and other styles
46

Cheng, Yang-Yu, and 鄭暘諭. "Estimation of False Discovery Rate Using Empirical Bayes Method." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/78t3ye.

Full text
Abstract:
Master's
National Cheng Kung University
Department of Statistics
104
In multiple testing problems, if the individual type I error rate is not adjusted and each test is still run at significance level α, then the overall type I error rate across m hypotheses can inflate to as much as mα. This study assumes that the gene-level statistics follow a mixture of normal distributions and that the parameters have prior distributions. We use the Bayesian posterior distribution and the EM algorithm to estimate the proportion of true null hypotheses, then the number of true null hypotheses, and finally the FDR. We compare the performance of these estimators for different parameter settings through Monte Carlo simulation. The estimator using the McNemar test proposed by Ma & Chao (2011) may produce excessively large estimation errors when the significance level is set to α = 0.05. The estimator proposed by Benjamini & Hochberg (2000) is unstable when the gene mutation ratio is random, and the estimator using the Friedman test proposed by Ma & Tsai (2011) behaves similarly. When the number of genes and the number of patients are both large and the proportion of true null hypotheses is high, the proposed empirical Bayes estimator has a smaller RMSE and is therefore more accurate.
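A minimal sketch of the mixture-model idea: a two-component normal mixture for test statistics whose null weight π0 is updated by EM, followed by a plug-in FDR estimate. The component means, starting value, and threshold below are illustrative assumptions, not the settings used in the thesis:

import numpy as np
from scipy.stats import norm

def em_null_proportion(z, mu1=2.0, n_iter=200):
    """Sketch: null component N(0,1) with weight pi0, alternative N(mu1,1);
    EM updates only the mixing proportion, other parameters held fixed."""
    z = np.asarray(z, dtype=float)
    pi0 = 0.8
    for _ in range(n_iter):
        f0 = pi0 * norm.pdf(z, 0.0, 1.0)
        f1 = (1.0 - pi0) * norm.pdf(z, mu1, 1.0)
        resp0 = f0 / (f0 + f1)     # posterior probability of the null
        pi0 = resp0.mean()         # M-step for the mixing weight
    return pi0

def estimated_fdr(pvals, pi0, alpha=0.05):
    """Plug-in FDR estimate at threshold alpha: pi0 * m * alpha / #rejections."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    rejections = max(int((pvals <= alpha).sum()), 1)
    return pi0 * m * alpha / rejections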
APA, Harvard, Vancouver, ISO, and other styles
47

Hsiu-Ying, Liu, and 劉琇瑩. "Evaluating the Interest Rate Derivatives with Historical Simulation Method." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/78754808453561915709.

Full text
Abstract:
Master's
National Chi Nan University
Department of Banking and Finance
96
This thesis uses the historical simulation method to evaluate the prices given by general pricing models of interest rate derivatives, where the general pricing models are derived without any distributional restrictions. The general pricing models and the historical simulation method are set up in order to avoid the mispricing problem caused by incorrect distributional assumptions in traditional pricing methods. The historical simulation method does not need to assume the distribution of interest rates in advance. It first uses the Esscher transform to find the forward measure; it then changes the probability measure from the original distribution to the forward distribution via the Esscher transform, so as to simulate the distributions of the general pricing models under the forward measure. Finally, using these simulated distributions, the thesis solves for the general model price of interest rate derivatives. The thesis also provides numerical analyses of caps to compare the pricing errors of this method with those of traditional models that assume a normal distribution for the underlying asset return.
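The measure change at the heart of the method can be illustrated with an empirical Esscher transform that reweights historical scenarios. The parameter h, file name, and payoff below are placeholders, and this is only a sketch of the mechanism, not the thesis's pricing models:

import numpy as np

def esscher_weights(samples, h):
    """Empirical Esscher transform: reweight observations x_i by
    exp(h * x_i) / sum_j exp(h * x_j), shifting the simulated distribution
    toward the chosen (e.g., forward) measure."""
    w = np.exp(h * np.asarray(samples, dtype=float))
    return w / w.sum()

def expected_payoff_under_transform(samples, payoff, h):
    """Price proxy: weighted average of payoffs under the transformed measure."""
    samples = np.asarray(samples, dtype=float)
    return np.sum(esscher_weights(samples, h) * payoff(samples))

# Hypothetical usage with historical one-period rate changes and a caplet-like payoff:
# changes = np.loadtxt("rate_changes.txt")
# price = expected_payoff_under_transform(changes, lambda x: np.maximum(x - 0.002, 0.0), h=1.5)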
APA, Harvard, Vancouver, ISO, and other styles
48

Rong, Fu Li, and 傅麗容. "Forecasting exchange rate by the method of decision tree." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/64746816116041977395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Chun-Yu, and 陳俊佑. "A heart-rate-variability based automatic sleep scoring method." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/28720334747703520524.

Full text
Abstract:
Master's
National Cheng Kung University
Institute of Medical Informatics
100
Sleep is important to everyone; however, not everyone can achieve good sleep quality. For diagnosis, all-night polysomnographic (PSG) recordings are usually taken from patients, and the doctor needs to assess their sleep quality and quantity. Nevertheless, visual sleep scoring is a time-consuming and subjective process, so developing an automatic sleep scoring method is a very important issue. Because the disturbance caused by recording the typical biomedical signals (EEG, EOG, and EMG) is too great, the sleep quality scored from those signals is not accurate enough. Therefore, the objective of this study is to develop an automatic sleep scoring method that uses only heart rate as the input signal. Although the accuracy of a method using HRV as the input signal is lower, its benefits, such as less disturbance, ease of use, and the ability to detect sleep cycles, give it great potential. Everyone has different heart rate characteristics; however, most recent studies used a cross-subject approach to develop their automatic sleep scoring methods, which leads to lower accuracy. Accordingly, heart rate signals cannot be treated in a cross-subject manner the way EEG, EOG, and EMG datasets are. In this study, we construct our method on a subject-dependent basis. Using this approach and three features, average heart rate, variance of heart rate, and HRV LF power, we obtain 69.48% total accuracy: 63.48% for wake, 71.30% for light sleep, 68.06% for deep sleep, and 68.78% for REM sleep. We expect this method can be integrated with various heart rate recorders, such as ECG and pulse oximeters, for sleep monitoring in clinical or home-care applications.
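The three features named in the abstract (average heart rate, heart-rate variance, and HRV LF power) can be computed from RR intervals roughly as follows; the resampling rate, epoch handling, and Welch settings are assumptions made for this sketch:

import numpy as np
from scipy.signal import welch

def hrv_features(rr_ms, fs_resample=4.0):
    """Sketch of per-epoch HRV features: mean heart rate, heart-rate variance,
    and LF (0.04-0.15 Hz) spectral power of the resampled RR series."""
    rr_s = np.asarray(rr_ms, dtype=float) / 1000.0
    hr = 60.0 / rr_s                                   # instantaneous heart rate (bpm)
    t = np.cumsum(rr_s)                                # beat times
    t_even = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_even = np.interp(t_even, t, rr_s)               # evenly resampled RR series
    freqs, psd = welch(rr_even - rr_even.mean(), fs=fs_resample,
                       nperseg=min(256, len(rr_even)))
    lf_band = (freqs >= 0.04) & (freqs < 0.15)
    lf_power = np.trapz(psd[lf_band], freqs[lf_band])
    return np.array([hr.mean(), hr.var(), lf_power])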
APA, Harvard, Vancouver, ISO, and other styles
50

Yu, Tung Ching, and 游東卿. "The Application of Optical Flow Method on Relay Race." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/70739644737631800651.

Full text
Abstract:
Master's
National Chung Hsing University
Department of Applied Mathematics
101
The method of optical flow is used in this thesis to calculate the velocity of runners. The purpose of the research is to give coaches a useful tool for evaluating runners' performance. The commercial software MATLAB is used to implement the method and develop a computer program. As examples, video recordings of two middle school runners are analyzed using the developed MATLAB program.
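The thesis's MATLAB program is not reproduced here; the following Python sketch only illustrates the general idea: dense optical flow between consecutive frames, averaged and scaled by an assumed pixel-to-metre calibration to give a speed estimate. Farneback flow is used as a stand-in for whichever optical-flow formulation the thesis adopts:

import cv2
import numpy as np

def mean_speed_per_frame(video_path, metres_per_pixel, fps):
    """Sketch: average optical-flow magnitude per frame, converted to m/s
    with an assumed calibration factor (metres_per_pixel)."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise RuntimeError("could not read video")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    speeds = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)               # pixels per frame
        speeds.append(magnitude.mean() * metres_per_pixel * fps)  # metres per second
        prev_gray = gray
    cap.release()
    return speeds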
APA, Harvard, Vancouver, ISO, and other styles