To view the other types of publications on this topic, follow this link: Fixed-matrix.

Dissertations on the topic "Fixed-matrix"


Consult the top 25 dissertations for your research on the topic "Fixed-matrix".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Peacock, Matthew James McKenzie. „Random Matrix Theory Analysis of Fixed and Adaptive Linear Receivers“. University of Sydney, 2006. http://hdl.handle.net/2123/985.

Annotation:
Doctor of Philosophy (PhD)
This thesis considers transmission techniques for current and future wireless and mobile communications systems. Many of the results are quite general, however there is a particular focus on code-division multiple-access (CDMA) and multi-input multi-output (MIMO) systems. The thesis provides analytical techniques and results for finding key performance metrics such as signal-to-interference and noise power ratios (SINR) and capacity. This thesis considers a large-system analysis of a general linear matrix-vector communications channel, in order to determine the asymptotic performance of linear fixed and adaptive receivers. Unlike many previous large-system analyses, these results cannot be derived directly from results in the literature. This thesis considers a first-principles analytical approach. The technique unifies the analysis of both the minimum-mean-squared-error (MMSE) receiver and the adaptive least-squares (ALS) receiver, and also uses a common approach for both random i.i.d. and random orthogonal precoding. The approach is also used to derive the distribution of sums and products of free random matrices. Expressions for the asymptotic SINR of the MMSE receiver are derived, along with the transient and steady-state SINR of the ALS receiver, trained using either i.i.d. data sequences or orthogonal training sequences. The results are in terms of key system parameters, and allow for arbitrary distributions of the power of each of the data streams and the eigenvalues of the channel correlation matrix. In the case of the ALS receiver, we allow a diagonal loading constant and an arbitrary data windowing function. For i.i.d. training sequences and no diagonal loading, we give a fundamental relationship between the transient/steady-state SINR of the ALS and the MMSE receivers. We demonstrate that for a particular ratio of receive to transmit dimensions and window shape, all channels which have the same MMSE SINR have an identical transient ALS SINR response. We demonstrate several applications of the results, including an optimization of information throughput with respect to training sequence length in coded block transmission.
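The SINR expressions discussed above have a simple finite-dimensional counterpart. The following sketch is my own illustration, not code or notation from the thesis; the dimensions and SNR are arbitrary. It computes the per-stream output SINR of the linear MMSE receiver for a random i.i.d. channel, the quantity whose large-system limit the thesis characterizes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, snr_db = 64, 32, 10          # receive dimension, data streams, per-stream SNR
sigma2 = 10 ** (-snr_db / 10)

# Random i.i.d. channel, unit average power per stream
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
R = H @ H.conj().T + sigma2 * np.eye(N)        # received covariance

# Linear MMSE receiver for stream k:  w_k = R^{-1} h_k,
# output SINR:  SINR_k = h_k^H (R - h_k h_k^H)^{-1} h_k
sinr = np.empty(K)
for k in range(K):
    h = H[:, k]
    Rk = R - np.outer(h, h.conj())             # interference-plus-noise covariance
    sinr[k] = np.real(h.conj() @ np.linalg.solve(Rk, h))

print("average MMSE SINR (dB):", 10 * np.log10(sinr.mean()))
```

In the large-system regime (N and K growing with a fixed ratio), these per-stream SINRs concentrate around a deterministic value, which is the kind of asymptotic quantity the thesis derives for both the MMSE and the ALS receivers.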
2

Ko, Lok Shun. „Matrix fixed charge density modulates exudate concentration during cartilage compression“. Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=117081.

Annotation:
Streaming potentials arise from the presence of negative fixed charges in cartilage extracellular matrix glycosaminoglycans. Arthroscopic assessment of these potentials can potentially detect localized surface lesions and provide quantitative diagnostic information. Electrolyte filtration is also a phenomenon arising from glycosaminoglycans. Commonly assumed negligible despite a lack of experimental validation, it can be important for design and interpretation of streaming potential measurements and choice of modeling assumptions. The objective of this thesis was therefore to quantify electrolyte filtration and estimate its effect on streaming potential measurements. Chloride ion concentration in exudate of compressed cartilage was measured and explant GAG content was colorimetrically assayed. Pilot studies indicated that an appropriate strain rate for experiments was 8x10^(-3) s^(-1) in order to eliminate concerns of exudate evaporation and explant damage (at low and high strain rates, respectively). Exudate concentration of explants equilibrated in 1x PBS was significantly (p<0.05) lower than bath at strains of 37.5, 50 and 62.5%, with clear dependence on magnitude. Exudate concentration was also significantly lower than that of the bath when 50% strain was applied after equilibration in 0.5, 1 and 2x PBS while the relative difference seemed to increase with decreasing bath concentration (p=0.065 between 0.5 and 2x PBS). Decreasing exudate concentration correlated negatively with increasing post-compression GAG concentration, while no difference between exudate concentration and bath concentration was ever observed for compression of uncharged agarose gel controls. Findings show that exudate from compressed cartilage is dilute relative to bath due to the presence of matrix fixed charges. This difference leads to the generation of extraneous diffusion potentials during streaming potential measurements, particularly under conditions of low strain rates and high strains.
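As background only (this is the textbook ideal Donnan picture, not the thesis's model or its data), the exclusion of mobile anions by the fixed negative charge can be written for a 1:1 electrolyte as

\[
\bar{c}_{+}\,\bar{c}_{-} = c_b^{2}, \qquad \bar{c}_{+} = \bar{c}_{-} + c_F
\quad\Longrightarrow\quad
\bar{c}_{-} = \tfrac{1}{2}\!\left(\sqrt{c_F^{2} + 4c_b^{2}} - c_F\right) < c_b ,
\]

where \(\bar{c}_{\pm}\) are the mobile ion concentrations in the tissue water, \(c_b\) is the bath concentration, and \(c_F\) is the fixed charge density. Compression concentrates the glycosaminoglycans, increases \(c_F\), and therefore lowers the mobile anion (Cl\(^-\)) concentration in the fluid that is expressed as exudate, consistent with the dilution reported above.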
3

Garcia, Ignacio de Mateo. „Iterative matrix-free computation of Hopf bifurcations as Neimark-Sacker points of fixed point iterations“. Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16478.

Annotation:
Classical methods for the direct computation of Hopf bifurcation points and other singularities rely on the evaluation and factorization of Jacobian matrices. In view of large-scale problems arising from PDE discretizations, i.e. systems of the form f(x(t), α) for t > 0, where x are the state variables, α are certain parameters, and f is smooth with respect to x and α, a matrix-free scheme is developed that relies exclusively on Jacobian-vector products and other first- and second-order derivative vectors to obtain the critical parameter α at which the system loses stability at the Hopf point. In the present work, a system of equations is defined to locate Hopf points iteratively, extending the system equations with a scalar test function φ based on a projection of the critical eigenspace. Since the system f arises from a spatial discretization of an original set of PDEs, an error correction accounting for the different discretization procedures is presented. To satisfy the Hopf conditions, a single parameter is adjusted either independently or simultaneously with the state vector in a deflated iteration step, thereby both locating the critical parameter and accelerating the convergence of the iteration. As a practical experiment, the algorithm is applied to the Hopf point of a brain cell represented by the FitzHugh-Nagumo model. It is shown how, for a critical current, the membrane potential exhibits a travelling wave typical of oscillatory behaviour.
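The thesis computes Hopf points directly from an extended system; the sketch below only illustrates the matrix-free ingredients such an approach is built on, namely finite-difference Jacobian-vector products used inside Krylov solvers and eigenvalue estimates. It uses a 1-D FitzHugh-Nagumo reaction-diffusion discretization with parameters chosen purely for illustration; all names and values are mine, not from the thesis.

```python
import numpy as np
from scipy.optimize import newton_krylov
from scipy.sparse.linalg import LinearOperator, eigs

# 1-D FitzHugh-Nagumo reaction-diffusion system, discretized on a periodic grid
M, L, D, eps, a, b = 100, 40.0, 1.0, 0.08, 0.7, 0.8
dx = L / M

def rhs(u, I):
    v, w = u[:M], u[M:]
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2   # periodic Laplacian
    dv = D * lap + v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    return np.concatenate([dv, dw])

def jac_matvec(u0, I, y, delta=1e-6):
    # matrix-free Jacobian-vector product by central differences
    return (rhs(u0 + delta * y, I) - rhs(u0 - delta * y, I)) / (2 * delta)

u = np.zeros(2 * M)
for I in np.linspace(0.0, 0.6, 13):                 # sweep the bifurcation parameter
    u = newton_krylov(lambda x: rhs(x, I), u, f_tol=1e-9)   # matrix-free steady state
    J = LinearOperator((2 * M, 2 * M), matvec=lambda y: jac_matvec(u, I, y))
    lam = eigs(J, k=4, which='LR', return_eigenvectors=False)
    lead = lam[np.argmax(lam.real)]
    print(f"I = {I:4.2f}   leading eigenvalue = {lead.real:+.4f} {lead.imag:+.4f}j")

# A Hopf point is bracketed where the real part of the leading complex pair changes sign.
```

This indirect monitoring of the rightmost eigenvalues is only meant to show that steady states and spectra can be obtained without ever forming the Jacobian; the thesis replaces the parameter sweep by a direct, iterative solve for the Hopf point itself.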
4

Dhanani, S. „Application of a social accounting matrix (SAM) fixed-price multiplier model to agricultural sector analysis in Pakistan“. Thesis, University of Oxford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382509.

5

Hrbáček, Jiří. „Experimentální podpora vývoje specifického integrovaného zařízení“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-442834.

Annotation:
Regenerative heat exchangers are used in a wide range of industries and in the technical equipment of buildings. These heat exchangers play an important role in saving thermal energy and in removing volatile organic compounds from flue gases. The theoretical part of the work deals with the division of regenerative exchangers into rotary and switching exchangers and with the possibilities of their use. These types of heat exchangers are used in many applications, e.g. as a heat exchanger using waste heat to preheat the process gas (regeneration layer), as a catalyst accelerating the reaction required to remove volatile organic compounds (catalytic layer), or as an integrated device in which the regeneration layer and the catalytic layer are combined. The aim of the diploma thesis is to provide experimental support for the development of a computer program for the design of a specific integrated device. The program allows the calculation of the regeneration bed, of the catalytic bed, or of both beds simultaneously, i.e. of the integrated device. The diploma thesis deals with the support of a mathematical model for the calculation of the regeneration bed. Pressure loss and heat transfer play an important role in the selection and subsequent calculation of a suitable bed. Several computational correlations are available for calculating them, and they differ significantly in accuracy, so the most suitable ones must be selected for the computational model. The practical part of the work therefore deals with the research, analysis, and assessment of the suitability of the methods used to calculate pressure losses, based on a comparison with the values measured on experimental equipment. Subsequently, the work deals with computational methods for determining the heat transfer coefficient of the packed bed. A significant part of the practical work is devoted to modifying the experimental equipment so that the computational relations for heat transfer can be verified against measured data.
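One widely used correlation for the pressure loss of a packed bed, of the kind compared in this thesis, is the Ergun equation. The sketch below is a minimal illustration with assumed property values; it is not the thesis's correlation selection or its measured data.

```python
def ergun_pressure_drop(u_s, d_p, eps, L, rho=1.2, mu=1.8e-5):
    """Pressure drop (Pa) over a packed bed of length L (m) by the Ergun equation.

    u_s : superficial gas velocity (m/s)
    d_p : equivalent particle diameter (m)
    eps : bed void fraction (-)
    rho : gas density (kg/m^3), mu: dynamic viscosity (Pa*s)
    """
    viscous  = 150.0 * mu * (1 - eps) ** 2 * u_s / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1 - eps) * u_s ** 2 / (eps ** 3 * d_p)
    return (viscous + inertial) * L

# Example: air at roughly 20 degC through a 0.5 m bed of 10 mm packing with 40 % voidage
print(ergun_pressure_drop(u_s=1.0, d_p=0.01, eps=0.40, L=0.5))
```

The two terms separate the viscous (laminar) and inertial contributions, which is also why different correlations of this family can disagree noticeably once the flow regime in the bed changes.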
6

Ferrari, Peron Costa Priscila [Verfasser]. „Evaluation of clinical indices, microbiological and matrix metalloproteinase-8 levels in subgingival biofilm of patients with fixed appliance, before and during orthodontic treatment / Priscila Ferrari Peron Costa“. Mainz : Universitätsbibliothek Mainz, 2020. http://d-nb.info/1213271452/34.

7

Mateo, Garcia Ignacio de [Verfasser], Andreas [Akademischer Betreuer] Griewank, Nicolas R. [Akademischer Betreuer] Gauger und Willy J. F. [Akademischer Betreuer] Govaerts. „Iterative matrix-free computation of Hopf bifurcations as Neimark-Sacker points of fixed point iterations / Ignacio de Mateo Garcia. Gutachter: Andreas Griewank ; Nicolas R. Gauger ; Willy J. F. Govaerts“. Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://d-nb.info/1020871148/34.

8

Lydia, Emílio Jorge. „Um método de matriz resposta com esquema iterativo de inversão parcial por região para problemas unidimensionais de transporte de nêutrons monoenergéticos na formulação de ordenadas discretas“. Universidade do Estado do Rio de Janeiro, 2011. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=3208.

Annotation:
Presented here is a response matrix (RM) method that numerically solves fixed-source, one-speed, slab-geometry neutron transport problems in the discrete ordinates (SN) formulation. The numerical solutions are completely free from spatial truncation errors. Therefore, the RM method with the RBI iterative scheme converges numerical values for the region-edge angular fluxes that coincide with the values generated from the analytical solution, apart from finite-precision computer arithmetic. A spatial reconstruction scheme has also been developed to yield the detailed profile of the scalar flux using a fixed step defined by the code user. Numerical results are given to illustrate the accuracy of the offered method.
9

Sarracini, Junior Fernando. „Sintese de controladores H 'Infinito' de ordem reduzida com aplicação no controle ativo de estruturas flexiveis“. [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263105.

Annotation:
Advisor: Alberto Luiz Serpa
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica
The implementation of reduced (fixed) order controllers requires a smaller computational effort and, consequently, less advanced hardware resources than the implementation of full-order controllers. This work shows that the practical implementation of fixed-order H∞ controllers for the control of flexible structures is viable. Obtaining such controllers is considered a difficult task because it is a non-convex problem. To overcome the numerical difficulties in obtaining fixed-order controllers, a combination of the Augmented Lagrangian method with Linear Matrix Inequalities (LMIs) is used. A cantilever beam is modelled with the Finite Element Method. Fixed- and full-order controllers are designed based on a truncated mathematical model. Modelling uncertainties and the existence of closely spaced modes in the frequency range of interest make it difficult to obtain controllers that assure the stability and performance of the system. To overcome this difficulty, robust H∞ control and weighting filters are used. In this way, the effect of the uncertainties is minimized and the excitation of unmodelled modes is avoided, assuring that the spillover phenomenon does not occur. Full-order and fixed-order H∞ controllers are implemented in practice and the experimental results are compared with the simulated results.
Master's degree
Solid Mechanics and Mechanical Design
Master in Mechanical Engineering
10

Guida, Mateus Rodrigues. „Método numérico de Matriz Resposta acoplado a um esquema de reconstrução espacial analítica para cálculos unidimensionais de transporte de nêutrons na formulação de ordenadas discretas multigrupo de energia com fonte fixa“. Universidade do Estado do Rio de Janeiro, 2011. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=6176.

Annotation:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
A Response Matrix (RM) method is described for generating numerical solutions that are free from spatial truncation errors for multigroup, fixed-source neutron transport problems in slab geometry in the discrete ordinates (SN) formulation. The multigroup RM method with the partial nodal inversion (NBI) iterative scheme therefore converges numerical values for the region-edge angular fluxes that coincide with the analytical solution of the multigroup SN equations, apart from finite-precision computer arithmetic. A spatial reconstruction scheme is also developed, which yields the scalar neutron fluxes in each energy group at any point of the domain defined by the user, with a step size also chosen by the user. Numerical results are presented to illustrate the accuracy of the present method in coarse-mesh calculations.
11

Shah, Kumar. „Quantitative Analysis of Tobacco Specific Nitrosamine in Human Urine Using Molecularly Imprinted Polymers as a Potential Tool for Cancer Risk Assessment“. VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1954.

Annotation:
Measuring urinary tobacco specific nitrosamine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL) and its glucuronide conjugate may provide the best biomarker of tobacco smoke lung carcinogen metabolism. Existence of differences in the extent of NNAL metabolism rates may be potentially related to an individuals’ lung cancer susceptibility. Low concentrations of NNAL in smokers urine (<1 ng/mL) require sensitive and selective methods for analysis. Traditionally, this involves extensive, time-consuming sample preparation that limits throughput and adds to measurement variability. Molecularly imprinted polymers (MIPs) have been developed for the analysis of urinary NNAL by offline cartridge extraction combined with LC-MS/MS. This method when reproduced demonstrated problems with matrix effects. In the first part of this work, investigation of matrix effects and related problems with sensitivity for the published offline extraction method has been conducted. In order to address the need to improve throughput and other analytical figures of merit for the original method, the second part of this work deals with development of a high-throughput online microfluidic method using capillary-columns packed with MIP beads for the analysis of urinary NNAL. The method was validated as per the FDA guidance, and enabled low volume, rapid analysis of urinary NNAL by direct injection on a microfluidic column packed with NNAL specific MIP beads. The method was used for analysis of urinary NNAL and NNAL-Gluc in smokers. Chemometric methods were used with this data to develop a potential cancer-risk-assessment tool based on pattern recognition in the concentrations of these compounds in urine. In the last part, method comparison approaches for the online and the offline sample extraction techniques were investigated. A ‘fixed’ range acceptance criterion based on combined considerations of method precision and accuracy, and the FDA bioanalytical guidance limits on precision and accuracy was proposed. Data simulations studies to evaluate the probabilities of successful transfers using the proposed criteria were performed. Various experimental designs were evaluated and a design comprised of 3 runs with 3 replicates each with an acceptance range of ±20% was found appropriate. The off-line and the on-line sample extraction methods for NNAL analysis were found comparable using the proposed fixed range acceptance criteria.
12

Godoi, Cláudio Roberto Cardoso de. „Divergência genética e predição de valores genotípicos em soja“. Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/3884.

Annotation:
Soybean breeding programs practice selection of high genetic value genotypes with two main objectives: a) to use them as parents in the hybridization process (first stage of the program), and b) to indicate them as new cultivars (final stage of the program). In this context, a first study used microsatellite markers (SSR) to assess the genetic diversity of soybean germplasm adapted to the Brazilian conditions. The experimental material consisted of 192 accessions, which included both introductions and Brazilian germplasm. The genetic divergence was assessed by descriptive analysis and the Rogers-W genetic distance. A total of 222 alleles were identified in the 37 genotyped loci, with an average of six alleles and a range of 2 to 14 alleles per locus. The genotypes were clustered according to the origin of the germplasm, and resulted in two groups: one group formed by introductions and other by Brazilian genotypes. Eighty five percent of the genetic distances estimates were above 0.70, suggesting that the assessed germplasm has good potential for hybridization in soybean breeding programs. It was concluded that the SSR markers are useful to identify divergent genotypic groups, as well as genotypic combinations with high genetic variability. It also became clear that the use of introduced germplasm ensures the incorporation of alleles necessary to increase the genetic base of soybeans and, consequently, the variability needed for the selective process. In a second study, the mixed model approach was used to assess some strategies of estimation and prediction of genotypic values for grain yield in the soybean regional yield trials. A total of 111 genotypes classified into three maturity groups were sown in up to 23 experiments in Central Brazil. The experiments were carried out in randomized complete block designs, with three replications. The biometrical analyses followed the fixed model and mixed model approaches, in the latter case assuming the genotypic effects as random. In the mixed model approach, analyses were made with or without information from the relationship estimates obtained either by genealogy or SSR markers, arranged in a genotypic covariance matrix (G). Also, in a context of spatial analysis, different structures were used in the residual covariance matrix (R) for each mixed model adjusted. The following conclusions were obtained: i) the fixed model analysis is adequate to estimate genotypic values in soybean trials with balanced data and orthogonal design; ii) under such conditions and intermediate to low heritability, the inclusion of relationship information associated to G matrix, although does not ensure the best fit models, improves the precision in predicting genotypic values; iii) the use of spatial structures associated to R matrix, in presence of the residual autocorrelation, improves the goodness of model fit to the data; and, iv) the choice of model for the analysis does not change the ranking of the genotypes in high heritability situations and, therefore, does not impact significantly on the selection of superior genotypes.
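For reference, the mixed-model machinery referred to above is the standard formulation; the notation below is generic and not taken from the thesis:

\[
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \boldsymbol{\varepsilon},
\qquad \mathbf{u} \sim N(\mathbf{0},\mathbf{G}), \qquad \boldsymbol{\varepsilon} \sim N(\mathbf{0},\mathbf{R}),
\]

with the fixed effects \(\boldsymbol{\beta}\) and random genotypic effects \(\mathbf{u}\) obtained from Henderson's mixed model equations

\[
\begin{bmatrix}
\mathbf{X}'\mathbf{R}^{-1}\mathbf{X} & \mathbf{X}'\mathbf{R}^{-1}\mathbf{Z}\\
\mathbf{Z}'\mathbf{R}^{-1}\mathbf{X} & \mathbf{Z}'\mathbf{R}^{-1}\mathbf{Z} + \mathbf{G}^{-1}
\end{bmatrix}
\begin{bmatrix}\hat{\boldsymbol{\beta}}\\ \hat{\mathbf{u}}\end{bmatrix}
=
\begin{bmatrix}\mathbf{X}'\mathbf{R}^{-1}\mathbf{y}\\ \mathbf{Z}'\mathbf{R}^{-1}\mathbf{y}\end{bmatrix}.
\]

Relationship information (pedigree or SSR markers) enters through the genotypic covariance \(\mathbf{G}\), e.g. \(\mathbf{G}=\mathbf{A}\sigma^2_g\) with \(\mathbf{A}\) a relationship matrix, while spatial structures enter through the residual covariance \(\mathbf{R}\), which is exactly the distinction the study exploits.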
13

Schönherr, Marek. „Improving predictions for collider observables by consistently combining fixed order calculations with resummed results in perturbation theory“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-83876.

Annotation:
With the constantly increasing precision of experimental data acquired at the current collider experiments Tevatron and LHC the theoretical uncertainty on the prediction of multiparticle final states has to decrease accordingly in order to have meaningful tests of the underlying theories such as the Standard Model. A pure leading order calculation, defined in the perturbative expansion of said theory in the interaction constant, represents the classical limit to such a quantum field theory and was already found to be insufficient at past collider experiments, e.g. LEP or Hera. Such a leading order calculation can be systematically improved in various limits. If the typical scales of a process are large and the respective coupling constants are small, the inclusion of fixed-order higher-order corrections then yields quickly converging predictions with much reduced uncertainties. In certain regions of the phase space, still well within the perturbative regime of the underlying theory, a clear hierarchy of the inherent scales, however, leads to large logarithms occurring at every order in perturbation theory. In many cases these logarithms are universal and can be resummed to all orders leading to precise predictions in these limits. Multiparticle final states now exhibit both small and large scales, necessitating a description using both resummed and fixed-order results. This thesis presents the consistent combination of two such resummation schemes with fixed-order results. The main objective therefor is to identify and properly treat terms that are present in both formulations in a process and observable independent manner. In the first part the resummation scheme introduced by Yennie, Frautschi and Suura (YFS), resumming large logarithms associated with the emission of soft photons in massive Qed, is combined with fixed-order next-to-leading matrix elements. The implementation of a universal algorithm is detailed and results are studied for various precision observables in e.g. Drell-Yan production or semileptonic B meson decays. The results obtained for radiative tau and muon decays are also compared to experimental data. In the second part the resummation scheme introduced by Dokshitzer, Gribov, Lipatov, Altarelli and Parisi (DGLAP), resumming large logarithms associated with the emission of collinear partons applicable to both Qcd and Qed, is combined with fixed-order next-to-leading matrix elements. While the focus rests on its application to Qcd corrections, this combination is discussed in detail and the implementation is presented. The resulting predictions are evaluated and compared to experimental data for a multitude of processes in four different collider environments. This formulation has been further extended to accommodate real emission corrections to beyond next-to-leading order radiation otherwise described only by the DGLAP resummation. Its results are also carefully evaluated and compared to a wide range of experimental data.
14

Khalid, Adeel S. „Development and Implementation of Rotorcraft Preliminary Design Methodology using Multidisciplinary Design Optimization“. Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14013.

Annotation:
A formal framework is developed and implemented in this research for preliminary rotorcraft design using IPPD methodology. All the technical aspects of design are considered including the vehicle engineering, dynamic analysis, stability and control, aerodynamic performance, propulsion, transmission design, weight and balance, noise analysis and economic analysis. The design loop starts with a detailed analysis of requirements. A baseline is selected and upgrade targets are identified depending on the mission requirements. An Overall Evaluation Criterion (OEC) is developed that is used to measure the goodness of the design or to compare the design with competitors. The requirements analysis and baseline upgrade targets lead to the initial sizing and performance estimation of the new design. The digital information is then passed to disciplinary experts. This is where the detailed disciplinary analyses are performed. Information is transferred from one discipline to another as the design loop is iterated. To coordinate all the disciplines in the product development cycle, Multidisciplinary Design Optimization (MDO) techniques e.g. All At Once (AAO) and Collaborative Optimization (CO) are suggested. The methodology is implemented on a Light Turbine Training Helicopter (LTTH) design. Detailed disciplinary analyses are integrated through a common platform for efficient and centralized transfer of design information from one discipline to another in a collaborative manner. Several disciplinary and system level optimization problems are solved. After all the constraints of a multidisciplinary problem have been satisfied and an optimal design has been obtained, it is compared with the initial baseline, using the earlier developed OEC, to measure the level of improvement achieved. Finally a digital preliminary design is proposed. The proposed design methodology provides an automated design framework, facilitates parallel design by removing disciplinary interdependency, current and updated information is made available to all disciplines at all times of the design through a central collaborative repository, overall design time is reduced and an optimized design is achieved.
15

Pasca, Bogdan Mihai. „Calcul flottant haute performance sur circuits reconfigurables“. Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00654121.

Annotation:
More and more vendors offer compute accelerators based on reconfigurable FPGA circuits, a technology that is far more flexible than the microprocessor. Exploiting this flexibility for accelerating floating-point computation using the classical hardware description languages (VHDL or Verilog) nevertheless remains very difficult, and sometimes impossible. This thesis contributed to the development of the FloPoCo software, which offers users familiar with VHDL a C++ framework for describing generic arithmetic operators suited to reconfigurable computing. This framework explicitly separates the combinational functionality of an operator from the problem of pipelining it for a given precision, target frequency, and target FPGA. In order to use FloPoCo to design high-performance floating-point operators, optimized basic blocks first had to be designed. We first developed pipelined adders built around fast carry-propagation lines and then, using tiling techniques, designed large multipliers, possibly truncated, out of small multipliers. Evaluating an elementary function in floating point often reduces to evaluating a function in fixed point. We present a generic FloPoCo operator that takes as input the expression of the function to be evaluated, together with its input and output precisions, and builds an optimized polynomial evaluator for this function. This basic block made it possible to develop floating-point operators for the square root and the exponential that considerably improve on the state of the art. We also worked on advanced compilation techniques to adapt the execution of C code to the flexible pipelines of our operators. FloPoCo could thus be used to implement complete applications on FPGAs.
16

Vestin, Albin, und Gustav Strandberg. „Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms“. Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Annotation:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affects the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios shows that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
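A minimal sketch of the filtering-versus-smoothing idea on a toy constant-velocity model follows; it is my own illustration, not the thesis code, which uses richer sensor and motion models. A Kalman filter runs forward through the sequence, and a Rauch-Tung-Striebel (RTS) backward pass re-estimates the same states using future measurements.

```python
import numpy as np

dt, q, r = 0.1, 0.5, 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])                 # constant-velocity model
H = np.array([[1.0, 0.0]])                            # position-only measurements
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[r]])

def kalman_filter(zs):
    x, P = np.zeros(2), np.eye(2) * 10.0
    xs, Ps, xps, Pps = [], [], [], []
    for z in zs:
        xp, Pp = F @ x, F @ P @ F.T + Q                           # predict
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)                  # update
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    return map(np.array, (xs, Ps, xps, Pps))

def rts_smoother(xs, Ps, xps, Pps):
    xs_s, Ps_s = xs.copy(), Ps.copy()
    for k in range(len(xs) - 2, -1, -1):                          # backward pass
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xs_s[k] = xs[k] + C @ (xs_s[k + 1] - xps[k + 1])
        Ps_s[k] = Ps[k] + C @ (Ps_s[k + 1] - Pps[k + 1]) @ C.T
    return xs_s, Ps_s

rng = np.random.default_rng(1)
truth = np.array([[0.2 * k * dt, 0.2] for k in range(200)])
zs = truth[:, 0] + rng.normal(0, np.sqrt(r), 200)
xs, Ps, xps, Pps = kalman_filter(zs)
xs_s, _ = rts_smoother(xs, Ps, xps, Pps)
print("filter RMSE  :", np.sqrt(np.mean((xs[:, 0] - truth[:, 0]) ** 2)))
print("smoother RMSE:", np.sqrt(np.mean((xs_s[:, 0] - truth[:, 0]) ** 2)))
```

Because the smoothed estimate at each time also uses later measurements, its error is typically lower than the filter's, which is why recorded sequences processed offline can serve as reference data for evaluating online trackers.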
17

Chen, Yen-Wen, und 陳彥文. „Fixed-Slope Carrier Modulation for Indirect Matrix Converter“. Thesis, 2017. http://ndltd.ncl.edu.tw/handle/7qu94x.

Annotation:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
106
For AC/AC conversion, converters that use a dc-link capacitor as an energy buffer between the two ac systems and to regulate the dc-link voltage are commonly used. However, this capacitor limits the lifetime and increases the volume of the converter, and a soft-start circuit is required to prevent excessive inrush current. In recent years, with the development of power electronics, the indirect matrix converter was proposed; this topology is more complex to modulate but has none of the drawbacks caused by a dc-link capacitor. In this thesis, a carrier-based modulation method is proposed to implement zero-current commutation (ZCC) in an indirect matrix converter (IMC) with a fixed-slope carrier. The method reduces the difficulty of producing switching signals that follow ZCC by modifying the reference signals of a carrier-based PWM method that would otherwise need a varying-slope carrier. The modified reference signals can then be used in PWM processes with a fixed-slope carrier. Moreover, there is no jump discontinuity in the modified reference signals, which avoids abrupt changes in the control that could cause switching errors. Finally, the feasibility of the method is verified by experiments on input power factor correction, output frequency limitation, and output load variation.
18

Hung, Chun-Yao, und 洪峻堯. „Fixed-Slope Carrier Modulation for Direct Matrix Converter“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/87ewt5.

Annotation:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
107
In recent years, as power electronics technology has matured, AC/AC conversion has received much attention. Conventional AC/AC converters convert energy from AC to DC and back to AC, and the dc-link stage usually uses a capacitor for voltage regulation and energy storage. However, this capacitor has a short lifetime and a large size, and an additional soft-start circuit is needed to prevent excessive inrush current. The matrix converter was proposed to avoid these issues; this topology is more complex to modulate but has no drawbacks originating from a dc-link capacitor. In this thesis, a fixed-slope carrier modulation method is proposed to implement a three-step hybrid commutation method in the direct matrix converter. Because the method uses a fixed-slope carrier, the switching signals are easier to generate than with space vector modulation or a variable-slope carrier method. Moreover, the modulation signals of the proposed method are more continuous, which makes switching errors in the control less likely. Finally, the feasibility and reliability of the method are verified by steady-state and dynamic experiments with output frequency variation, output magnitude variation, and output load variation.
19

Chang, Guo-Cia, und 張國財. „Fixed - Order H_inf Controllers Design via Linear Matrix Inequality Approach“. Thesis, 2004. http://ndltd.ncl.edu.tw/handle/56665843130459052378.

Annotation:
Master's thesis
National Chung Cheng University
Graduate Institute of Chemical Engineering
92
A bilinear matrix inequality (BMI) approach for controller design is the subject of this thesis. First, we discuss stability, pole placement, and time-delay problems of static output feedback (SOF) systems. The fixed-order H_inf control problem is a general problem in robust control; via the Bounded Real Lemma it can be formulated as a matrix inequality problem. The fixed-order H_inf control problem is to find a controller such that the closed-loop system is stable and the H_inf norm of the closed-loop transfer function is strictly less than a constant gamma. Time-delay problems are common in chemical processes. We design stable fixed-order H_inf controllers for constant time-delay systems by a bilinear matrix inequality approach. Finally, we also discuss time-varying-delay systems.
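As background for the LMI machinery this thesis builds on, the Bounded Real Lemma turns an H∞-norm bound into a semidefinite feasibility problem. The sketch below is a generic analysis example with an arbitrary second-order system, not the thesis's fixed-order synthesis (which leads to BMIs); it assumes cvxpy with an SDP-capable solver is installed.

```python
import cvxpy as cp
import numpy as np

# Stable test system:  x' = Ax + Bw,  z = Cx + Dw
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m, p = 2, 1, 1

P = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable(nonneg=True)

# Bounded Real Lemma: ||G||_inf < gamma  iff  P > 0 and the block matrix below is negative definite
M = cp.bmat([
    [A.T @ P + P @ A, P @ B,              C.T],
    [B.T @ P,         -gamma * np.eye(m), D.T],
    [C,               D,                  -gamma * np.eye(p)],
])
eps = 1e-6
prob = cp.Problem(cp.Minimize(gamma),
                  [P >> eps * np.eye(n), M << -eps * np.eye(n + m + p)])
prob.solve()
print("H-infinity norm upper bound:", gamma.value)   # about 0.5 for this example
```

When the controller order is fixed and its parameters also appear in A, B, C, D of the closed loop, the same inequality becomes bilinear in the unknowns, which is exactly the non-convex difficulty the thesis addresses.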
20

Haung, Wei-Cheng, und 黃暐程. „A Study of Image Authentication Technique Based on Fixed Point of Regular Stochastic Matrix“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81525070563001864625.

Annotation:
Master's thesis
National Chung Hsing University
Department of Information Management
99
In recent years the world has moved toward an information society: network technology and image processing have matured rapidly, and electronic devices such as computers, mobile phones, and digital cameras are part of daily life. These devices handle large numbers of digital images, and existing photographs, graphics, pictures, and logos can likewise be digitized, so that digital media such as images, audio, and video are easy to obtain or exchange over the Internet. As a result, digital images may be tampered with for specific purposes by an attacker without the permission of the original author, which seriously harms the ownership rights of the author and the trustworthiness of the image content. To address the security requirements of digital images transmitted over networks, this thesis proposes a fragile watermarking technique based on the fixed point of regular stochastic matrices in a Markov chain. The scheme achieves image authentication: it can determine whether a digital image is authentic and can also locate the tampered region in the image. If the image has been tampered with, the integrity of the tampered region can additionally be assessed to judge the truthfulness of the image content. The idea of the authentication scheme is to generate authentication data from the original image as a watermark and then embed it into the original image to ensure the image's originality. Because the method produces a sufficient amount of authentication data, it can not only verify the integrity of the image but also locate the tampered regions, and high-quality image feature data can be embedded into the image several times so that tampered regions can be recovered with higher probability. The experiments include examples of combining the method with feature extraction to recover tampered regions of the whole image. The main purpose of image authentication is to detect whether an image has been altered; however, brightness adjustment is sometimes a legitimate manipulation used to improve the visibility of an image, and a small adjustment is hardly perceptible to the human eye. The proposed authentication method is therefore extended with a strategy that tolerates such legitimate brightness adjustments.
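The "fixed point of a regular stochastic matrix" used here is the stationary distribution π satisfying π = πP, which is unique for a regular chain by the Perron-Frobenius theorem. A minimal sketch of computing it follows; the matrix is illustrative only, and the thesis's embedding and verification procedures are not reproduced.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Fixed point pi of a regular row-stochastic matrix P, i.e. pi = pi @ P."""
    P = np.asarray(P, dtype=float)
    assert np.allclose(P.sum(axis=1), 1.0), "rows must sum to 1"
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):                  # power iteration on the left eigenvector
        nxt = pi @ P
        if np.linalg.norm(nxt - pi, 1) < tol:
            return nxt
        pi = nxt
    return pi

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],
              [0.2, 0.3, 0.5]])
pi = stationary_distribution(P)
print(pi, "check:", np.allclose(pi @ P, pi))
```

Because the fixed point is determined entirely by P, a verifier who can rebuild P from the received image can recompute π and compare it with the embedded authentication data, which is the general idea behind this family of fragile watermarks.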
21

李玟娟. „Nano-confined crystallization of block copolymers:crystallization in the microdomain structure fixed by a crosslinked matrix“. Thesis, 2001. http://ndltd.ncl.edu.tw/handle/55407843730331233479.

Annotation:
Master's thesis
National Tsing Hua University
Department of Chemical Engineering
89
We studied the crystallization kinetics and crystalline morphology of poly(ethylene oxide)-block-poly(1,4-butadiene) (PEO-b-PB) exhibiting various mesophase structures in the melt. In order to establish a proper model system in which the crystallization can be effectively confined within the nano-scaled microdomains while maintaining a high degree of crystallizability of the crystalline blocks, the amorphous PB blocks in PB-b-PEO were crosslinked by a photo-initiated crosslinking reaction, so that the crystallization-induced disruption of the microdomain structure could be effectively avoided. Small-angle X-ray scattering (SAXS) results revealed that the microdomain morphology in the melt was effectively preserved upon crystallization of the PEO block after the PB matrix was crosslinked. The kinetics of the crystallization confined in the microdomains displayed a parallel transition with the transformation of the microdomain morphology. Such a distinct correlation stemmed largely from homogeneous-nucleation-controlled crystallization, where the direct proportionality between nucleation rate and microdomain volume rendered the basis for the direct correlation. The homogeneous-nucleation-controlled crystallization in the compositions containing cylindrical and spherical PEO microdomains was further verified by an isothermal crystallization study. In spite of the effective confinement imposed by the crosslinked PB phase, crystallization in the lamellar phase still proceeded through a mechanism analogous to the spherulitic crystallization of a homopolymer.
22

Liu, Ken-Tzu, und 劉耕孜. „A Study of Image Authentication Technique by Applying Fixed Point of Regular Stochastic Matrix on DCT“. Thesis, 2014. http://ndltd.ncl.edu.tw/handle/42550577353167126383.

Annotation:
Master's thesis
National Chung Hsing University
Department of Information Management
102
In this paper, we propose a novel method of robust, high accuracy image tampering detection. By using the idea of fixed points extracted from a DCT (Discrete Cosine Transform) coefficient matrix from the field of stochastic matrices, we can modify this fixed point and render it an image’s authentication code. The authentication code can be used to detect tampering and locate the suspected region under our cross-examination procedure. Moreover, when the image is modified, even by slight brightness adjustment and widely used, lossy JPEG compression, our method is significantly good at detecting the tampered image regions. In this experiment, the results show that our proposed method is able to detect and locate the tampered image regions precisely and robustly against brightness adjustments and the JPEG compressions. Our method works quite well even when the watermarked image has been JPEG-compressed twice. Besides, a new idea of digital document authentication is proposed. We apply the characteristics of the Markov chain to document authentication. This concept which combines document authentication and image processing is able to authenticate the document and also remain the same document size.
23

Hsieh, Chao-Han, und 謝昭漢. „mRNA expressions of matrix metalloproteinases and tissue inhibitor metalloproteinases on formalin fixed paraffin embedded human breast tumor tissues using RT-PCR“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/t3uet5.

Annotation:
Master's thesis
National Chung Hsing University
Department of Animal Science
99
Formalin fixed paraffin embedded (FFPE) is the most commonly used method worldwide for tissue storage; this resource represents a vast repository of tissue material with a long-term clinical follow-up. Although, FFPE preserves the tissue integrity it may cause extensive damage to nucleic acids stored within the tissue. Hence, the primary goal of this experiment is to set up the best condition for detecting mRNA expression in FFPE tissues by RT-PCR. To optimize the RT-PCR condition, we compare the GAPDH mRNA expression in fresh frozen tissues and tissues with formalin fixation for 1, 2 and 3 days under different proteinase K reaction time, primer concentration, commercial RT kits, amplicon size of primer treatment. We concluded that a shorten fixation time to less than one day, and extended proteinase K reaction time to 24 hours produced better RNA quality and recovery. The appropriate primer amplicon size is smaller than 100 bp and the concentration of primer is better at 5 μM in 1.5 μl. Biomi Biotech RT kit is more sensitive than Invitrogen Superscript RT kit in detecting small amplicon size primer. Matrix metalloproteinases (MMPs) can degrade extracellular matrix, which is regulated by tissue inhibitors of metalloproteinases (TIMPs). Hence, those two factors are considered to play an important role in cancer metastasis and invasion. We applied the above optimal condition for RT-PCR to measure the RNA expressions of matrix metalloproteinases (MMP) -2, -9, -14 and tissue inhibitor of metalloproteinases (TIMP) -1, -2 on FFPE human breast cancer tissues. Thirty cases of FFPE human breast tumor from 2009 and two cases of FFPE human breast tumor from 2007 were adopted in this experiment. FFPE of eleven cases of intraductal papilloma (IP), eleven cases of ductal carcinoma in situ (DCIS), and ten cases of invasive ductal carcinoma (IDC) were included in current study. The RT-PCR results were quantified. The statistics of the results show that RNA expressions of MMP-9 and TIMP-2 were significantly lower in DCIS. TIMP-1 RNA expression was not detected in all samples studied. Correlation analysis show that the expression between MMP-2 and MMP-9, MMP-2 and TIMP-2, MMP-9 and TIMP-2 are highly related. The correlation analysis of MMP-2, MMP-9, MMP-14 and TIMP-2 are highly related in DCIS. Our results suggest that MMP-2, MMP-14 and TIMP-2 are important in extracellular matrix degradation in DCIS and MMP-9 is more relevant with invasive ductal carcinoma.
24

Satak, Neha. „Design, Development And Flight Control Of Sapthami - A Fixed Wing Micro Air Vehicle“. Thesis, 2008. http://hdl.handle.net/2005/872.

Der volle Inhalt der Quelle
Annotation:
Two micro air vehicles, namely Sapthami and Sapthami-flyer, are developed in this thesis. Each weighs less than 200 grams and carries the commercially available Kestrel autopilot hardware; they fit inside 30 cm and 32 cm spheres, respectively. The vehicles have an endurance of around 20-30 minutes. Nonlinear modelling puts the stall speed of Sapthami at around 7 m/s and that of Sapthami-flyer at around 5 m/s. The low stall speed makes hand launching possible, which enhances portability since no launching equipment is required. With the Kestrel autopilot system installed and the feedback loops tuned, the vehicles are capable of autonomous and semi-autonomous flight, aided by a variety of sensors such as a GPS unit, heading sensor, 2-axis magnetometer, 3-axis accelerometer and 3-axis gyros. Sapthami is a tailless flying wing with an inverse Zimmermann planform. A flying wing is a preferred configuration for an MAV as it maximizes the lifting area for a given size constraint. For a maximum size constraint of 30 cm and an aspect ratio of around 1, the vehicle operates at Reynolds numbers between 100,000 and 250,000, at flight velocities of 7 m/s to 15 m/s. The inverse Zimmermann planform has a higher lift coefficient, CL, than other planforms such as rectangular, elliptical and Zimmermann, for aspect ratios of 1 to 1.25 tested at a Reynolds number of 100,000. The configuration of Sapthami is clean: there is no fuselage, and all components, including the autopilot hardware and battery, are housed inside the wing. A thick reflex Martin Hepperle (MH) airfoil, the MH18, is chosen, which gives sufficient space to place the components. This airfoil is particularly suited to tailless configurations due to its negative camber at the trailing edge, which helps reduce the negative pitching moment of the wing, since a tailless aircraft has no separate horizontal tail to compensate for it. The vehicle is fabricated from blue foam with a density of 30 kg/m3. The wing is fabricated by CNC machining, after which slots are cut manually to embed the electronics. The vehicle is found to have stable flying characteristics, and limited flight trials are carried out for Sapthami. Fabrication is time-consuming owing to the limited availability of the CNC machining facility; therefore, a new tailless wing-fuselage configuration that can be fabricated from balsa wood is designed. Sapthami-flyer is the second vehicle designed in this thesis. Its wing span is slightly larger than Sapthami's. Since it is a wing-fuselage configuration, there is no need for a thick airfoil. Mark Drela's AG airfoils are found to have better lift than the MH airfoils for the inverse Zimmermann planform. The airfoil thickness is reduced to 1% so that the wing can be made from a 1.5 mm thick balsa sheet to reduce weight. The inverse Zimmermann wing with the AG09 airfoil is found to have the best lift-to-drag ratio when compared to the AG36, MH45 and MH18; the analysis is done using the AVL software. The AG09 with 1% thickness is used in the final configuration. This configuration has better short-period damping than Sapthami and also slower modes. The operating modes of most MAVs, including Sapthami and Sapthami-flyer, are lightly damped but fast.
This makes it difficult for the pilot to fly the vehicle, so artificial stabilization is required to improve its flying qualities. The feedback is implemented on the Kestrel autopilot hardware, which allows only PID-based feedback structures, leaving the designer no option to implement higher-order control. The digital integrator and differentiator implementations used for feedback are non-ideal, which further reduces the effectiveness of the control; this is dealt with by incorporating the additional dynamics introduced by these implementations when formulating the control problem. Furthermore, the modelling of the micro air vehicle is done using vortex-lattice-simulation-based software, so the fidelity of the obtained dynamics is low and there is high uncertainty in the plant model. The controller also needs to reject wind-gust disturbances that are of the order of the flight speed of the vehicle. All of the above requirements are best addressed by a robust control design. Sapthami-flyer uses aileron and elevator for control; there is no rudder, in order to save weight. In the longitudinal dynamics, pitch-rate and pitch-error feedback to the elevator are used to increase the short-period and phugoid damping, respectively. In the lateral dynamics, a combination of roll-rate, yaw-rate and roll-error feedback to the aileron improves the Dutch-roll damping and stabilizes the spiral mode. The feedback loops for both the longitudinal and lateral dynamics are multi-output, single-input design problems, so simultaneous tuning of the loops is beneficial. The PID control is designed by first converting the actual plant to a static-output-feedback (SOF) equivalent plant; the dynamics introduced by the non-ideal differentiator and integrator implementations on the autopilot hardware are incorporated in the open-loop SOF formulation. The robust pole placement for the SOF plant is done using the modified iterative matrix-inequality algorithm developed in this thesis, which is capable of multi-loop, multi-objective feedback design for SOF plants. The algorithm finds the optimal solution by simultaneously placing constraints on the H2 performance, the pole placement, and the gain and phase margins of the closed-loop system. The pole placement minimizes the real part of the largest eigenvalue. A single controller is designed at a suitable operating point, and constraints are placed on the gain and phase margins of the closed-loop plant at the other operating points. The designed controller is tested in flight on board Sapthami-flyer. The vehicle is also capable of tracking commanded pitch and roll attitudes with the help of the pitch-error and roll-error feedbacks; this is demonstrated in flight when the pilot releases the RC stick and the vehicle holds the desired attitude. The vehicle shows improved flying characteristics in the closed-loop mode.
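To make the output-feedback idea concrete, here is a minimal sketch assuming a made-up two-state short-period model rather than the identified Sapthami-flyer dynamics; it only shows how a static pitch-rate-to-elevator gain moves the closed-loop poles, not the thesis' iterative matrix-inequality design.

```python
import numpy as np

# Illustrative short-period model (states: angle of attack, pitch rate).
# The numbers are made up for demonstration; they are NOT the Sapthami-flyer model.
A = np.array([[-2.0,   0.95],
              [-40.0, -3.0]])
B = np.array([[0.0],
              [25.0]])          # sign chosen so a positive gain adds pitch damping
C = np.array([[0.0, 1.0]])      # measured output: pitch rate q

def closed_loop_poles(Kq):
    """Closed-loop matrix A - B*K*C for the static output feedback u = -Kq * q."""
    Acl = A - B @ (np.array([[Kq]]) @ C)
    return np.linalg.eigvals(Acl)

# Sweeping the gain shows the short-period poles moving further into the left half-plane.
for Kq in (0.0, 0.05, 0.15):
    print(f"Kq = {Kq:.2f}: poles = {np.round(closed_loop_poles(Kq), 2)}")
```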
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Schönherr, Marek. „Improving predictions for collider observables by consistently combining fixed order calculations with resummed results in perturbation theory“. Doctoral thesis, 2011. https://tud.qucosa.de/id/qucosa%3A25914.

Der volle Inhalt der Quelle
Annotation:
With the constantly increasing precision of experimental data acquired at the current collider experiments Tevatron and LHC, the theoretical uncertainty on the prediction of multiparticle final states has to decrease accordingly in order to allow meaningful tests of the underlying theories, such as the Standard Model. A pure leading-order calculation, defined in the perturbative expansion of said theory in the interaction constant, represents the classical limit of such a quantum field theory and was already found to be insufficient at past collider experiments, e.g. LEP or HERA. Such a leading-order calculation can be systematically improved in various limits. If the typical scales of a process are large and the respective coupling constants are small, the inclusion of fixed-order higher-order corrections yields quickly converging predictions with much reduced uncertainties. In certain regions of phase space, still well within the perturbative regime of the underlying theory, a clear hierarchy of the inherent scales nevertheless leads to large logarithms occurring at every order in perturbation theory. In many cases these logarithms are universal and can be resummed to all orders, leading to precise predictions in these limits. Multiparticle final states exhibit both small and large scales, necessitating a description using both resummed and fixed-order results. This thesis presents the consistent combination of two such resummation schemes with fixed-order results. The main objective is therefore to identify and properly treat terms that are present in both formulations, in a process- and observable-independent manner. In the first part the resummation scheme introduced by Yennie, Frautschi and Suura (YFS), resumming large logarithms associated with the emission of soft photons in massive QED, is combined with fixed-order next-to-leading-order matrix elements. The implementation of a universal algorithm is detailed and results are studied for various precision observables, e.g. in Drell-Yan production or semileptonic B meson decays. The results obtained for radiative tau and muon decays are also compared to experimental data. In the second part the resummation scheme introduced by Dokshitzer, Gribov, Lipatov, Altarelli and Parisi (DGLAP), resumming large logarithms associated with the emission of collinear partons and applicable to both QCD and QED, is combined with fixed-order next-to-leading-order matrix elements. While the focus rests on its application to QCD corrections, this combination is discussed in detail and the implementation is presented. The resulting predictions are evaluated and compared to experimental data for a multitude of processes in four different collider environments. This formulation has been further extended to accommodate real-emission corrections to radiation beyond next-to-leading order, which is otherwise described only by the DGLAP resummation. Its results are also carefully evaluated and compared to a wide range of experimental data.
Table of contents:
1 Introduction: 1.1 Event generators; 1.2 The event generator Sherpa; 1.3 Outline of this thesis
Part I: YFS resummation & fixed order calculations
2 Yennie-Frautschi-Suura resummation: 2.1 Resummation of virtual photon corrections; 2.2 Resummation of real emission corrections; 2.3 The Yennie-Frautschi-Suura form factor
3 A process independent implementation in Sherpa
  3.1 The Algorithm: 3.1.1 The master formula; 3.1.2 Phase space transformation; 3.1.3 Mapping of momenta; 3.1.4 Event generation
  3.2 Higher Order Corrections: 3.2.1 Approximations for real emission matrix elements; 3.2.2 Real emission corrections; 3.2.3 Virtual emission corrections
4 The Z lineshape and radiative lepton decay corrections
  4.1 The Z lineshape: 4.1.1 Radiation pattern; 4.1.2 Numerical stability
  4.2 Radiative lepton decays
  4.3 Summary and conclusions
5 Electroweak corrections to semileptonic B decays
  5.1 Tree-level decay
  5.2 Next-to-leading order corrections: 5.2.1 Matching of different energy regimes; 5.2.2 Short-distance next-to-leading order corrections; 5.2.3 Long-distance next-to-leading order corrections; 5.2.4 Structure dependent terms; 5.2.5 Soft-resummation and inclusive exponentiation
  5.3 Methods: 5.3.1 BLOR; 5.3.2 Sherpa/Photons; 5.3.3 PHOTOS
  5.4 Results: 5.4.1 Next-to-leading order corrections to decay rates; 5.4.2 Next-to-leading order corrections to differential rates; 5.4.3 Influence of explicit short-distance terms
  5.5 Summary and conclusions
Part II: DGLAP resummation & fixed order calculations
6 DGLAP resummation & approximate higher order corrections
  6.1 Dokshitzer-Gribov-Lipatov-Altarelli-Parisi resummation: 6.1.1 The naive parton model; 6.1.2 QCD corrections to the parton model; 6.1.3 Factorisation and the collinear counterterm; 6.1.4 The DGLAP equations
  6.2 Parton evolution: 6.2.1 Approximate real emission cross sections; 6.2.2 Parton evolution; 6.2.3 Scale choices for the running coupling
  6.3 Soft emission corrections
7 The reinterpretation and automisation of the POWHEG method
  7.1 Decomposition of the real-emission cross sections
  7.2 Construction of a parton shower
  7.3 Matrix element corrections to parton showers
  7.4 The reformulation of the POWHEG method: 7.4.1 Approximate NLO cross sections; 7.4.2 The POWHEG method and its accuracy
  7.5 The single-singularity projectors
  7.6 Theoretical ambiguities
  7.7 MC@NLO
  7.8 Realisation of the POWHEG method in the Sherpa Monte Carlo: 7.8.1 Matrix elements and subtraction terms; 7.8.2 The parton shower; 7.8.3 Implementation & techniques; 7.8.4 Automatic identification of Born zeros
  7.9 Results for processes with trivial colour structures: 7.9.1 Process listing; 7.9.2 Tests of internal consistency; 7.9.3 Comparison with tree-level matrix-element parton-shower merging; 7.9.4 Comparison with experimental data; 7.9.5 Comparison with existing POWHEG
  7.10 Results for processes with non-trivial colour structures: 7.10.1 Comparison with experimental data
  7.11 Summary and conclusions
8 MENLOPS
  8.1 Improving parton showers with higher-order matrix elements: 8.1.1 The POWHEG approach; 8.1.2 The ME+PS approach
  8.2 Merging POWHEG and ME+PS - The MENLOPS
  8.3 Results: 8.3.1 Merging Systematics; 8.3.2 ee -> jets; 8.3.3 Deep-inelastic lepton-nucleon scattering; 8.3.4 Drell-Yan lepton-pair production; 8.3.5 W+jets Production; 8.3.6 Higgs boson production; 8.3.7 W-pair+jets production
  8.4 Summary and conclusions
Summary
Appendix A: Details on the YFS resummation implementation
  A.1 The YFS-Form-Factor: A.1.1 Special cases
  A.2: A.2.1 Average photon multiplicity; A.2.2 Photon energy; A.2.3 Photon angles; A.2.4 Photons from multipoles
  A.3 Massive dipole splitting functions: A.3.1 Final State Emitter, Final State Spectator; A.3.2 Final State Emitter, Initial State Spectator; A.3.3 Initial State Emitter, Final State Spectator
Appendix B: Formfactors and higher order matrix elements for semileptonic B decays
  B.1 Form factor models of exclusive semileptonic B meson decays: B.1.1 Form factors for B -> D l nu; B.1.2 Form factors for B -> pi l nu; B.1.3 Form factors for B -> D0* l nu
  B.2 NLO matrix elements: B.2.1 Real emission matrix elements; B.2.2 Virtual emission matrix elements
  B.3 Scalar Integrals: B.3.1 General definitions; B.3.2 Tadpole integrals; B.3.3 Bubble integrals; B.3.4 Triangle integrals
Appendix C: Explicit form of the leading order Altarelli-Parisi splitting functions
  C.1 Collinear limit of real emission matrix elements: C.1.1 q -> gq splittings; C.1.2 q -> qg splittings; C.1.3 g -> qq splittings; C.1.4 g -> gg splittings
Bibliography
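The combination described in this annotation amounts, schematically, to subtracting the overlap between the resummed and the fixed-order calculation so that no logarithm is counted twice. The generic form below uses illustrative notation and is not a formula taken from the thesis.

```latex
% Schematic additive matching: the last term removes the overlap (the
% fixed-order expansion of the resummation) to avoid double counting.
\mathrm{d}\sigma_{\text{matched}}
  = \mathrm{d}\sigma_{\text{resummed}}
  + \mathrm{d}\sigma_{\text{fixed order}}
  - \bigl[\mathrm{d}\sigma_{\text{resummed}}\bigr]_{\text{expanded to the same fixed order}}
```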
APA, Harvard, Vancouver, ISO und andere Zitierweisen