Theses on the topic « Maximal curves »

To see other types of publications on this topic, follow the link: Maximal curves.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles.

Choose a source:

Consult the 50 best theses for your research on the topic « Maximal curves ».

Next to each source in the list of references there is an « Add to bibliography » button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Wang, Jie. « Geometry of general curves via degenerations and deformations ». The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1291067498.

Full text
2

Roos, Joris [Verfasser]. « Singular integrals and maximal operators related to Carleson's theorem and curves in the plane / Joris Roos ». Bonn : Universitäts- und Landesbibliothek Bonn, 2017. http://d-nb.info/1139049038/34.

Full text
3

Kadiköylü, Irfan. « Rank Stratification of Spaces of Quadrics and Moduli of Curves ». Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19191.

Full text
Abstract:
In this thesis, we study varieties of singular quadrics containing a projective curve and effective divisors in the moduli space of pointed curves defined via various constructions involving quadric hypersurfaces. In Chapter 2, we compute the class of the effective divisor in the moduli space of n-pointed genus g curves, which is defined as the locus of pointed curves such that the projection of the canonical model of the curve from the marked points lies on a quadric hypersurface. Using this class, we show that the moduli spaces of 8-pointed genus 16 and 17 curves are varieties of general type. In Chapter 3, we stratify the space of quadrics that contain a given curve in the projective space, using the ranks of the quadrics. We show, in a certain numerical range, that each stratum has the expected dimension if the curve is general in its Hilbert scheme. By incorporating the datum of the rank of quadrics, a similar construction as the one in Chapter 2 yields new divisors in the moduli space of pointed curves. We compute the class of these divisors and show that the moduli space of 9-pointed genus 15 curves is a variety of general type. In Chapter 4, we present miscellaneous results, which are related with our main work in the previous chapters. Firstly, we consider divisors in the moduli space of genus g curves, which are defined as the failure locus of maximal rank conjecture for hypersurfaces of degree greater than two. We illustrate three examples of such divisors and compute their classes. Secondly, using the classical correspondence between rank 4 quadrics and pencils on curves, we show that the map that associates to a pair of pencils their tensor product in the Picard variety is surjective, when the curve is general and obvious numerical assumptions are satisfied. Finally, we use divisor classes, that are already known in the literature, to show that the moduli space of 10-pointed genus 12 curves is a variety of general type.
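As a back-of-the-envelope illustration of the dimension count behind statements such as the maximal rank conjecture for quadrics through a curve, here is a hedged sketch using the standard Riemann-Roch count; it is illustrative only and not material from the thesis.

```python
from math import comb

def expected_quadrics_through_curve(d, g, r):
    """Expected number of independent quadrics containing a general non-degenerate
    curve of degree d and genus g in P^r, assuming the restriction map
    H^0(O(2)) -> H^0(O_C(2)) has maximal rank and O_C(2) is non-special (2d > 2g - 2)."""
    quadrics_in_Pr = comb(r + 2, 2)      # h^0(P^r, O(2))
    sections_on_C = 2 * d + 1 - g        # h^0(C, O_C(2)) by Riemann-Roch
    return max(0, quadrics_in_Pr - sections_on_C)

# Canonical curves of genus 5, 6, 7 (degree 2g-2 in P^{g-1}): 3, 6, 10 quadrics.
for g in (5, 6, 7):
    print(g, expected_quadrics_through_curve(2 * g - 2, g, g - 1))
```

For canonical curves this reproduces the classical counts of independent quadrics, which is the baseline against which rank stratifications as in Chapter 3 are measured.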
4

Jerassy-Etzion, Yaniv. « Stripping the yield curve with maximally smooth forward curves ». Tallahassee, Florida : Florida State University, 2010. http://etd.lib.fsu.edu/theses/available/etd-01132010-124541.

Full text
Abstract:
Thesis (Ph. D.)--Florida State University, 2010.
Title and description from dissertation home page viewed on July 28, 2010. Advisor: Paul M. Beaumont, Florida State University, College of Social Sciences and Public Policy, Dept. of Economics. Includes bibliographical references.
5

Torres, Orihuela Fernando Eduardo. « Sobre curvas maximales ». Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/96043.

Full text
6

Profilo, Stanley. « Curvas nodais maximais via curvas de Fermat ». Universidade Federal do Espírito Santo, 2009. http://repositorio.ufes.br/handle/10/6473.

Full text
Abstract:
We study rational projective nodal plane curves in the projective plane P2(C) by using the Fermat curve Fn : X^n + Y^n + Z^n = 0. We use the theory of dual curves in the projective plane and a special type of group action of Zn x Zn on the Fermat curve and its dual to construct, for any integer n greater than or equal to 3, a rational nodal plane curve of degree n - 1. A rational nodal plane curve is a projective rational plane curve (that is, a genus zero curve) whose only singularities are nodes, that is, singularities of multiplicity two with distinct tangents. The basic reference is the paper "On Fermat Curves and Maximal Nodal Curves" by Matsuo Oka, published in Michigan Math. Journal, v. 53, in 2005.
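For orientation, the node count behind "maximal nodal curves" follows from the genus-degree formula: a rational plane curve of degree d whose only singularities are nodes must have exactly (d-1)(d-2)/2 of them. A tiny sketch of that bookkeeping (illustrative, not from the dissertation):

```python
def max_nodes(d):
    """Number of nodes of a rational (genus zero) plane curve of degree d
    whose only singularities are nodes: (d - 1)(d - 2) / 2."""
    return (d - 1) * (d - 2) // 2

# Degrees d = n - 1 arising from the Fermat-curve construction, for n = 3..8.
for n in range(3, 9):
    d = n - 1
    print(f"n = {n}: rational nodal curve of degree {d} with {max_nodes(d)} nodes")
```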
7

Teherán, Herrera Arnoldo Rafael 1968. « Sobre curvas maximais não recobertas pela curva hermitiana ». [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307080.

Full text
Abstract:
Advisors: Fernando Eduardo Torres Orihuela, Ercílio Carvalho da Silva
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Summary: We present some applications; in particular, we use the constructed curves to compute some AG codes at a rational point, built using a certain telescopic semigroup at the rational point of the corresponding curve. Finally, we compare the parameters obtained from our examples with the parameters of codes existing in the literature
Abstract: In this thesis we work out examples of maximal curves which are not covered by the corresponding Hermitian curve. These examples arise as curves covered by the so-called GK curve. We also construct examples of maximal curves which cannot be Galois covered by the corresponding Hermitian curve. Finally we state some applications to coding theory
Doctorate
Applied Mathematics
Doctor in Applied Mathematics
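For context on "maximal curves": a curve of genus g over a field with q^2 elements is maximal when it attains the Hasse-Weil upper bound q^2 + 1 + 2gq, and the Hermitian curve, of genus q(q-1)/2, is the standard example. A small sketch of this bookkeeping (illustrative only, not code from the thesis):

```python
import math

def hasse_weil_upper(q2, g):
    """Hasse-Weil upper bound on the number of rational points of a genus-g curve
    over a field with q2 elements; exact here because q2 is chosen to be a square."""
    return q2 + 1 + 2 * g * math.isqrt(q2)

def hermitian_genus(q):
    """Genus of the Hermitian curve y^q + y = x^(q+1), defined over the field with q^2 elements."""
    return q * (q - 1) // 2

q = 4
g = hermitian_genus(q)                 # genus 6
print(g, hasse_weil_upper(q * q, g))   # 65 points over F_16, i.e. q^3 + 1: the curve is maximal
```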
8

Tyler, Thomas Francis. « Maximum curves of analytic functions and associated problems ». Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405895.

Full text
9

Gallón, Gómez Santiago Alejandro. « Template estimation for samples of curves and functional calibration estimation via the method of maximum entropy on the mean ». Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2000/.

Full text
Abstract:
One of the main difficulties in functional data analysis is the extraction of a meaningful common pattern that summarizes the information conveyed by all functions in the sample. The problem of finding a meaningful template function that represents this pattern is considered in Chapter 2 assuming that the functional data lie on an intrinsically low-dimensional smooth manifold with an unknown underlying geometric structure embedding in a high-dimensional space. Under this setting, an approximation of the geodesic distance is developed based on a robust version of the Isomap algorithm. This approximation is used to compute the corresponding empirical Fréchet median function, which provides a robust intrinsic estimator of the template. The Chapter 3 investigates the asymptotic properties of the quantile normalization method by Bolstad, et al. (2003) which is one of the most popular methods to align density curves in microarray data analysis. The properties are proved by considering the method as a particular case of the structural mean curve alignment procedure by Dupuy, Loubes and Maza (2011). However, the method fails in some case of mixtures, and a new methodology to cope with this issue is proposed via the algorithm developed in Chapter 2. Finally, the problem of calibration estimation for the finite population mean of a survey variable under a functional data framework is studied in Chapter 4. The functional calibration sampling weights of the estimator are obtained by matching the calibration estimation problem with the maximum entropy on the mean -MEM- principle. In particular, the calibration estimation is viewed as an infinite-dimensional linear inverse problem following the structure of the MEM approach. A precise theoretical setting is given and the estimation of functional calibration weights assuming, as prior measures, the centered Gaussian and compound Poisson random measures is carried out
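The quantile normalization step studied in Chapter 3 can be sketched in a few lines; the following is a minimal, hedged illustration of the Bolstad-style procedure on synthetic data (shapes and numbers are invented):

```python
import numpy as np

def quantile_normalize(X):
    """Quantile normalization in the spirit of Bolstad et al. (2003): every column
    (sample) is mapped onto the same reference distribution, namely the mean of
    the column-wise order statistics. X has shape (n_features, n_samples)."""
    order = np.argsort(X, axis=0)                # rank of each entry within its sample
    reference = np.sort(X, axis=0).mean(axis=1)  # mean quantile curve
    Xn = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        Xn[order[:, j], j] = reference           # reinsert reference values in rank order
    return Xn

rng = np.random.default_rng(0)
X = rng.lognormal(mean=[[0.0, 0.5, 1.0]], sigma=0.3, size=(1000, 3))
print(quantile_normalize(X).mean(axis=0))        # identical column means after normalization
```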
10

Peralta, Alyne da Silva. « Analise de regionalização de vazão maxima para pequenas bacias hidrograficas ». [s.n.], 2003. http://repositorio.unicamp.br/jspui/handle/REPOSIP/258621.

Full text
Abstract:
Advisors: Abel Maia Genovez, Antonio Carlos Zuffo
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Civil
Master's
11

Iezzi, Annamaria. « Nombre de points rationnels des courbes singulières sur les corps finis ». Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4027/document.

Full text
Abstract:
In this PhD thesis, we focus on some issues about the maximum number of rational points on a singular curve defined over a finite field. This topic has been extensively discussed in the smooth case since Weil's works. We have split our study into two stages. First, we provide a construction of singular curves of prescribed genera and base field and with many rational points: such a construction, based on some notions and tools from algebraic geometry and commutative algebra, yields a method for constructing, given a smooth curve X, another curve X' with singularities, such that X is the normalization of X', and the added singularities are rational over the base field and have the prescribed singularity degree. Then, using a Euclidean approach, we prove a new bound for the number of closed points of degree two on a smooth curve defined over a finite field. Combining these two a priori independent results, we can study the following question: when is the Aubry-Perret bound (the analogue of the Weil bound in the singular case) reached? This leads naturally to the study of the properties of maximal curves and, when the cardinality of the base field is a square, to the analysis of the spectrum of their genera
12

Tomasini, Arnaud. « Intersections maximales de quadriques réelles ». Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD035/document.

Full text
Abstract:
Real algebraic geometry is, in its simplest definition, the study of sets of solutions of systems of polynomial equations with real coefficients. Within this theme, we focus on intersections of quadrics, where already the case of three quadrics remains wide open. Our subject can be summarized as the topological study of real algebraic varieties and the interaction between their topology on the one hand and their deformations and degenerations on the other, a problem stemming from Hilbert's 16th problem and enriched by recent developments. In this thesis, we focus on maximal intersections of real quadrics and in particular prove the existence of such intersections using developments in research carried out since the late 1980s. In the case of intersections of three quadrics, we point out the very close link between these intersections on the one hand and plane curves on the other, and show that the study of M-curves (one of the questions raised by Hilbert's 16th problem) can be carried out through the study of maximal intersections. We then use results on nodal plane curves to determine, in some cases, the deformation classes of intersections of three real quadrics
13

Karlsson, Emil. « The unweighted mean estimator in a Growth Curve model ». Thesis, Linköpings universitet, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131043.

Full text
Abstract:
The field of statistics is becoming increasingly important as the amount of data in the world grows. This thesis studies the Growth Curve model in multivariate statistics, a model that is not widely used. One difference compared with the linear model is that the Maximum Likelihood Estimators are more complicated, which makes the model more difficult to use and to interpret and may be a reason why it is not more widely adopted. From this perspective, this thesis compares the traditional mean estimator for the Growth Curve model with the unweighted mean estimator. The unweighted mean estimator is simpler than the regular MLE. It will be proven that the unweighted estimator is in fact the MLE under certain conditions, and examples of when this occurs will be discussed. In a more general setting, this thesis presents conditions under which the unweighted estimator has a smaller covariance matrix than the MLEs, and also presents confidence intervals and hypothesis tests based on these inequalities.
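A minimal simulation sketch of the comparison described above, assuming the standard Growth Curve (GMANOVA) model X = ABC + E, Khatri's closed form for the MLE of B, and the unweighted (ordinary least squares) estimator; every design matrix and parameter value below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, k, n = 4, 2, 2, 60                                    # time points, profile terms, groups, subjects
A = np.column_stack([np.ones(p), np.arange(p)])             # within-individual design (linear growth)
C = np.zeros((k, n)); C[0, :n // 2] = 1; C[1, n // 2:] = 1  # between-individual (group) design
B_true = np.array([[10.0, 12.0], [1.0, 2.0]])               # intercept and slope per group
Sigma = 0.5 * np.eye(p) + 0.5                               # compound-symmetry error covariance
X = A @ B_true @ C + rng.multivariate_normal(np.zeros(p), Sigma, size=n).T

P_C = C.T @ np.linalg.solve(C @ C.T, C)                     # projection onto the row space of C
S = X @ (np.eye(n) - P_C) @ X.T                             # residual sums-of-squares matrix

def unweighted(A, X, C):
    """Unweighted mean estimator: plain least squares in the within-individual direction."""
    return np.linalg.solve(A.T @ A, A.T) @ X @ C.T @ np.linalg.inv(C @ C.T)

def mle(A, X, C, S):
    """Maximum likelihood estimator of B in the Growth Curve model (Khatri's form)."""
    W = np.linalg.solve(S, A)                                # S^{-1} A
    return np.linalg.solve(A.T @ W, W.T) @ X @ C.T @ np.linalg.inv(C @ C.T)

print("unweighted:\n", unweighted(A, X, C))
print("MLE:\n", mle(A, X, C, S))
```

With compound-symmetry errors and an intercept column in A, as here, the two estimates come out nearly identical; this is the flavour of condition under which the thesis shows the unweighted estimator actually is the MLE.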
14

Pereira, Nayara Negrão. « Modelos não lineares mistos na análise de curvas de crescimento de bovinos da raça Tabapuã ». Universidade Federal de Viçosa, 2014. http://locus.ufv.br/handle/123456789/4080.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The analysis of growth curves of animals has been widely used to increase the efficiency of beef cattle ranching. Studies relating growth curves to nonlinear mixed models can have strategic applications in genetic improvement programs when defining selection criteria for earliness and weight gain, since a random coefficient is estimated for each individual, making it easier to identify and select the more efficient animals on the basis of these coefficients. This methodology accounts for the variability both between and within individuals. The objective of this study was to evaluate the efficiency of fitting growth curves with nonlinear mixed models. The nonlinear models modified Michaelis-Menten, Logistic, von Bertalanffy, Gompertz, Richards and Brody were fitted, with and without the incorporation of random effects, to analyse the growth of Tabapuã beef cattle. To compare the fixed and mixed models, the following goodness-of-fit criteria were used: Akaike's information criterion (AIC), Bayesian information criterion (BIC), mean absolute deviation (DMA), mean square error (MSE) and coefficient of determination (R2). The use of nonlinear mixed models was efficient for describing bovine growth curves.
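A small sketch of the kind of fixed-effects fit and AIC comparison described above, using a Gompertz curve on synthetic weight-age data (scipy is assumed available; the numbers are illustrative, not Tabapuã records):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    """Gompertz growth curve: asymptotic weight A, integration constant b, maturity rate k."""
    return A * np.exp(-b * np.exp(-k * t))

rng = np.random.default_rng(42)
age = np.linspace(1, 36, 40)                                   # months
weight = gompertz(age, 420.0, 2.8, 0.09) + rng.normal(0, 12, age.size)

params, _ = curve_fit(gompertz, age, weight, p0=[400.0, 3.0, 0.1])
rss = np.sum((weight - gompertz(age, *params)) ** 2)
n_obs, n_par = age.size, len(params)
aic = n_obs * np.log(rss / n_obs) + 2 * n_par                  # Gaussian AIC up to an additive constant
print("fitted (A, b, k):", params, "AIC:", aic)
```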
15

Amaral, Magali Teresopolis Reis. « Utilização de curvas de crescimento longitudinal com distribuição normal θ-generalizada multivariada, no estudo da disfunção cardíaca em ratos com estenose aórtica supravalvar ». Botucatu, 2018. http://hdl.handle.net/11449/152676.

Full text
Abstract:
Advisor: Carlos Roberto Padovani
Abstract: In many situations there is a need to study the behaviour of some characteristic in the same sample unit over time or over the accumulated dose of some nutrient or medication. In practice, data of this nature generally call for behaviours that are nonlinear in the parameters of interest, since these better characterize the biological reality under study. This setting is well suited to the study of cardiac remodeling (CR) by pressure overload in rats submitted to different sequential calcium maneuvers. As the behaviour of CR is not clearly established, the objective of this work is to carry out a comparative study of the performance of four growth-curve models in four experimental groups, considering multivariate θ-generalized normal errors. In addition, the data modeling involves two covariance structures: a homoscedastic one with lag-1 autocorrelation and a multiplicative heteroscedastic one. Methodologically, maximum likelihood estimation is used together with bootstrap resampling, and simulation studies are implemented to verify the methodological properties applied. Several goodness-of-fit criteria are used to compare the models. It is concluded in the present study that the homoscedastic structure with lag-1 autocorrelation for the Brody and von Bertalanffy models stands out for presenting excellent estimates and a good fit of the maximum developed stress (TD) as a function of t... (Complete abstract: click electronic access below)
Doctorate
16

Huh, Jungwon, Quang Tran, Achintya Haldar, Innjoon Park et Jin-Hee Ahn. « Seismic Vulnerability Assessment of a Shallow Two-Story Underground RC Box Structure ». MDPI AG, 2017. http://hdl.handle.net/10150/625742.

Full text
Abstract:
Tunnels, culverts, and subway stations are the main parts of an integrated infrastructure system. Most of them are constructed by the cut-and-cover method at shallow depths (mainly lower than 30 m) of soil deposits, where large-scale seismic ground deformation can occur with lower stiffness and strength of the soil. Therefore, the transverse racking deformation (one of the major seismic ground deformation) due to soil shear deformations should be included in the seismic design of underground structures using cost- and time-efficient methods that can achieve robustness of design and are easily understood by engineers. This paper aims to develop a simplified but comprehensive approach relating to vulnerability assessment in the form of fragility curves on a shallow two-story reinforced concrete underground box structure constructed in a highly-weathered soil. In addition, a comparison of the results of earthquakes per peak ground acceleration (PGA) is conducted to determine the effective and appropriate number for cost- and time-benefit analysis. The ground response acceleration method for buried structures (GRAMBS) is used to analyze the behavior of the structure subjected to transverse seismic loading under quasi-static conditions. Furthermore, the damage states that indicate the exceedance level of the structural strength capacity are described by the results of nonlinear static analyses (or so-called pushover analyses). The Latin hypercube sampling technique is employed to consider the uncertainties associated with the material properties and concrete cover owing to the variation in construction conditions. Finally, a large number of artificial ground shakings satisfying the design spectrum are generated in order to develop the seismic fragility curves based on the defined damage states. It is worth noting that the number of ground motions per PGA, which is equal to or larger than 20, is a reasonable value to perform a structural analysis that produces satisfactory fragility curves.
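The Latin hypercube sampling step mentioned above can be illustrated with a minimal sketch (the parameter ranges below are hypothetical, not those of the study):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Basic Latin hypercube sample on [0, 1]^n_dims: one point per equal-probability
    stratum in every dimension, with the strata shuffled independently per dimension."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_samples, n_dims))
    samples = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        samples[:, j] = (rng.permutation(n_samples) + u[:, j]) / n_samples
    return samples

lhs = latin_hypercube(20, 2, seed=0)
fc = 25 + 10 * lhs[:, 0]       # hypothetical concrete strength range, 25-35 MPa
cover = 30 + 20 * lhs[:, 1]    # hypothetical concrete cover range, 30-50 mm
print(fc[:5], cover[:5])
```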
17

Kawamori, Naoki, Steven J. Rossi, Blake D. Justice, Erin E. Haff, Emido E. Pistilli, Harold S. O'Bryant, Michael H. Stone et G. Gregory Haff. « Peak Force and Rate of Force Development During Isometric Mid-Thigh Clean Pulls and Dynamic Mid-Thigh Clean Pulls Performed at Various Intensities ». Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etsu-works/4613.

Full text
Abstract:
Eight male collegiate weightlifters (age: 21.2 ± 0.9 years; height: 177.6 ± 2.3 cm; and body mass: 85.1 ± 3.3 kg) participated in this study to compare isometric to dynamic force-time dependent variables. Subjects performed the isometric and dynamic mid-thigh clean pulls at 30–120% of their one repetition maximum (1RM) power clean (118.4 ± 5.5 kg) on a 61 X 121.9–cm AMTI forceplate. Variables such as peak force (PF) and peak rate of force development (PRFD) were calculated and were compared between isometric and dynamic conditions. The relationships between force-time dependent variables and vertical jump performances also were examined. The data indicate that the isometric PF had no significant correlations with the dynamic PF against light loads. On the one hand, there was a general trend toward stronger relationships between the isometric and dynamic PF as the external load increased for dynamic muscle actions. On the other hand, the isometric and dynamic PRFD had no significant correlations regardless of the external load used for dynamic testing. In addition, the isometric PF and dynamic PRFD were shown to be strongly correlated with vertical jump performances, whereas the isometric PRFD and dynamic PF had no significant correlations with vertical jump performances. In conclusion, it appears that the isometric and dynamic measures of force-time curve characteristics represent relatively specific qualities, especially when dynamic testing involves small external loads. Additionally, the results suggest that athletes who possess greater isometric maximum strength and dynamic explosive strength tend to be able to jump higher.
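For readers unfamiliar with the variables, peak force and peak rate of force development can be extracted from a sampled force-time curve as sketched below; this is a simplification (real protocols filter the signal and often compute RFD over fixed time epochs):

```python
import numpy as np

def peak_force_and_rfd(force, fs=1000.0):
    """Peak force (N) and peak rate of force development (N/s) from a force-time
    series sampled at fs Hz, with RFD taken as the first difference per sample."""
    force = np.asarray(force, dtype=float)
    rfd = np.diff(force) * fs
    return force.max(), rfd.max()

# Toy trace: rapid rise to a plateau near 2500 N, sampled at 1 kHz for 1 s.
t = np.arange(0.0, 1.0, 0.001)
force = 2500 * (1 - np.exp(-t / 0.08)) + np.random.default_rng(3).normal(0, 10, t.size)
print(peak_force_and_rfd(force))
```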
18

Yandt, Mark. « Characterization Techniques and Optimization Principles for Multi-Junction Solar Cells and Maximum Long Term Performance of CPV Systems ». Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35870.

Full text
Abstract:
Two related bodies of work are presented, both of which aim to further the rapid development of next generation concentrating photovoltaic systems using high efficiency multi junction solar cells. They are complementary since the characterization of commercial devices and the systematic application of design principles for future designs must progress in parallel in order to accelerate iterative improvements. First addressed, is the field characterization of state of the art concentrating photovoltaic systems. Performance modeling and root cause analysis of deviations from the modeling results are critical for bringing reliable high value products to the market. Two complementary tools are presented that facilitate acceleration of the development cycle. The “Dynamic real-time I V Curve Measurement System…” provides a live picture of the current-voltage characteristics of a CPV module. This provides the user with an intuitive understanding of how module performance responds under perturbation. The “Shutter technique for noninvasive individual cell characterization in sealed concentrating photovoltaic modules,” allows the user to probe individual cell characteristics within a sealed module. This facilitates non-invasive characterization of modules that are in situ. Together, these tools were used to diagnose the wide spread failure of epoxy connections between the carrier and the emitter of bypass diodes installed in sealed commercial modules. Next, the optimization principals that are used to choose energy yield maximizing bandgap combinations for multi-junction solar cells are investigated. It is well understood that, due to differences in the solar resource in different geographical locations, this is fundamentally a local optimization problem. However, until now, a robust methodology for determining the influences of geography and atmospheric content on the ideal design point has not been developed. This analysis is presented and the influence of changing environment on the representative spectra that are used to optimize bandgap combinations is demonstrated. Calculations are confirmed with ground measurements in Ottawa, Canada and the global trends are refined for this particular location. Further, as cell designers begin to take advantage of more flexible manufacturing processes, it is critical to know if and how optimization criteria must change for solar cells with more junctions. This analysis is expanded to account for the differences between cells with up to 8 subcell bandgaps. A number of software tools were also developed for the Sunlab during this work. A multi-junction solar cell model calibration tool was developed to determine the parameters that describe each subcell. The tool fits a two diode model to temperature dependent measurements of each subcell and provides the fitting parameters so that the performance of multi-junction solar cells composed of those subcells can be modeled for real world conditions before they are put on-sun. A multi-junction bandgap optimization tool was developed to more quickly and robustly determine the ideal bandgap combinations for a set of input spectra. The optimization process outputs the current results during iteration so that they may be visualized. Finally, software tools that compute annual energy yield for input multi-junction cell parameters were developed. Both a brute force tool that computes energy harvested at each time step, and an accelerated tool that first bins time steps into discrete bins were developed. 
These tools will continue to be used by members of the Sunlab.
19

Guppy, Stuart N., Claire J. Brady, Yosuke Kotani, Michael H. Stone, Nikola Medic et Guy Gregory Haff. « The Effect of Altering Body Posture and Barbell Position on the Between-Session Reliability of Force-Time Curve Characteristics in the Isometric Mid-Thigh Pull ». Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/6286.

Full text
Abstract:
Seventeen strength and power athletes (n = 11 males, 6 females; height: 177.5 ± 7.0 cm, 165.8 ± 11.4 cm; body mass: 90.0 ± 14.1 kg, 66.4 ± 13.9 kg; age: 30.6 ± 10.4 years, 30.8 ± 8.7 years), who regularly performed weightlifting movements during their resistance training programs, were recruited to examine the effect of altering body posture and barbell position on the between-session reliability of force-time characteristics generated in the isometric mid-thigh pull (IMTP). After participants were familiarised with the testing protocol, they undertook two testing sessions separated by seven days. In each session, the participants performed three maximal IMTP trials in each of the four testing positions examined, with the order of testing randomized. In each position, no significant differences were found between sessions for any force-time characteristic (p > 0.05). Peak force (PF), time-specific force (F50, F90, F150, F200, F250) and IMP time-bands (0–50, 0–90, 0–150, 0–200, 0–250 ms) were reliable across each of the four testing positions (ICC ≥ 0.7, CV ≤ 15%). Time to peak force, peak RFD, RFD time-bands (0–50, 0–90, 0–150, 0–200, 0–250 ms) and peak IMP were unreliable regardless of the testing position used (ICC < 0.7, CV > 15%). Overall, the use of body postures and barbell positions during the IMTP that do not correspond to the second pull of the clean has no adverse effect on the reliability of the force-time characteristics generated.
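The reliability statistics quoted above can be computed as in the following hedged sketch, which assumes the two-way mixed, consistency, single-measures form ICC(3,1) and a mean within-subject coefficient of variation; the data are hypothetical:

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1) (Shrout & Fleiss: two-way mixed, consistency, single measures)
    for an (n_subjects, k_sessions) array."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def cv_percent(scores):
    """Mean within-subject coefficient of variation across sessions, in percent."""
    scores = np.asarray(scores, dtype=float)
    return float(np.mean(scores.std(axis=1, ddof=1) / scores.mean(axis=1)) * 100)

# Hypothetical peak-force values (N) for six athletes over two sessions.
pf = np.array([[2900, 2950], [2450, 2380], [3100, 3150],
               [2700, 2650], [2550, 2600], [2850, 2800]])
print(round(icc_3_1(pf), 3), round(cv_percent(pf), 2))
```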
20

Naeem, Muhammad Farhan. « Analysis of an Ill-posed Problem of Estimating the Trend Derivative Using Maximum Likelihood Estimation and the Cramér-Rao Lower Bound ». Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-95163.

Full text
Abstract:
The amount of carbon dioxide in the Earth's atmosphere has increased significantly in the last few decades compared with roughly the last 80,000 years. The increase in carbon dioxide levels is affecting the temperature and therefore needs to be understood better. In order to study the effects of global events on carbon dioxide levels, one needs to properly estimate the trends in carbon dioxide over the previous years. In this project, we estimate the trend in the carbon dioxide measurements taken at Mauna Loa over the last 46 years, also known as the Keeling Curve, using estimation techniques based on a Taylor and Fourier series model equation. To perform the estimation, we employ Maximum Likelihood Estimation (MLE) and the Cramér-Rao Lower Bound (CRLB), and review our results by comparing them with other estimation techniques. The estimation of the trend in the Keeling Curve is well-posed; however, the estimation of the first derivative of the trend is an ill-posed problem. We further check whether the estimation error is below a suitable limit and conduct statistical analyses of our estimated results.
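A simplified stand-in for the Taylor-plus-Fourier model mentioned above: ordinary least squares (the MLE under i.i.d. Gaussian noise) on a polynomial trend plus annual harmonics, applied here to synthetic Keeling-like data rather than the Mauna Loa record:

```python
import numpy as np

def fit_trend_and_seasonality(t, co2, n_poly=3, n_harmonics=2, period=1.0):
    """Least-squares fit of a low-order polynomial trend plus Fourier harmonics.
    t is time in years, co2 the concentration in ppm; returns coefficients and fit."""
    cols = [t ** p for p in range(n_poly + 1)]
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * t / period), np.cos(2 * np.pi * h * t / period)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, co2, rcond=None)
    return beta, X @ beta

t = np.arange(0, 46, 1 / 12)
co2 = 330 + 1.6 * t + 0.012 * t ** 2 + 3 * np.sin(2 * np.pi * t) \
      + np.random.default_rng(7).normal(0, 0.3, t.size)
beta, fitted = fit_trend_and_seasonality(t, co2)
trend_rate = beta[1] + 2 * beta[2] * t + 3 * beta[3] * t ** 2   # derivative of the polynomial trend
print(beta[:4], trend_rate[-1])
```

The fitted trend itself is stable, while its derivative amplifies the noise in the higher-order coefficients, which is the ill-posedness the thesis quantifies via the CRLB.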
21

Magalhães, Elisabete de Mello. « Aplicação do método de Newton desacoplado para o fluxo de carga continuado ». Ilha Solteira : [s.n.], 2010. http://hdl.handle.net/11449/87114.

Full text
Abstract:
Advisor: Dilson Amâncio Alves
Committee member: Anna Diva Plasencia Lotufo
Committee member: Edmárcio Antonio Belati
Abstract: This work presents the decoupled Newton method for continuation power flow. The method was improved by using a geometric parameterization technique that allows the complete tracing of P-V curves, and the computation of maximum loading point of a power system, without ill-conditioning problems. The goal is to present in a clear and didactic way the steps involved in the development of the improved decoupled Newton method obtained from the observation of the geometrical behavior of power flow solutions. The geometric parameterization technique that consists of the addition of a line equation, which passes through a point in the plane determined by the bus voltage magnitude and loading factor variables, can eliminate the ill-conditioning problems of matrices used by the method and can enlarge the set of voltage variables that can be used as continuation parameter to P-V curve tracing. The method is applied to the IEEE systems (14, 30, 57, 118 and 300 buses) and two large real systems: the south-southeast Brazilian system (638 buses) and the 904-bus southwestern American system. The results show that the best characteristics of the conventional decoupled Newton's method are improved in the vicinity of the maximum loading point and therefore the region of convergence around it is enlarged. Several tests are presented with the purpose of providing a complete understanding of the behavior of the proposed method and also to evaluate its performance
Master's
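As background for the P-V curves and the maximum loading point discussed above, the following toy sketch traces the upper branch of the P-V curve of a two-bus system (slack bus at 1.0 pu feeding a load through a purely reactive line). It is not the continuation method of the thesis, only the curve such methods are designed to trace:

```python
import numpy as np

def pv_curve(P0=1.0, Q0=0.3, X=0.25, dlam=0.01):
    """Upper branch of the P-V curve of a two-bus system: slack bus at 1.0 pu feeding
    the load lam*(P0 + jQ0) through a lossless line of reactance X. The receiving-end
    voltage satisfies V^4 + (2*Q*X - 1)*V^2 + X^2*(P^2 + Q^2) = 0."""
    lams, volts = [], []
    lam = 0.0
    while True:
        P, Q = lam * P0, lam * Q0
        b = 2 * Q * X - 1.0
        disc = b * b - 4 * X ** 2 * (P ** 2 + Q ** 2)
        if disc < 0:                                        # past the nose point: no real solution
            break
        lams.append(lam)
        volts.append(np.sqrt((-b + np.sqrt(disc)) / 2.0))   # upper (high-voltage) branch
        lam += dlam
    return np.array(lams), np.array(volts)

lam, V = pv_curve()
print(f"maximum loading factor ~ {lam[-1]:.2f}, voltage at the nose ~ {V[-1]:.3f} pu")
```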
22

Ebert, Anthony C. « Dynamic queueing networks : Simulation, estimation and prediction ». Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/180771/1/Anthony_Ebert_Thesis.pdf.

Full text
Abstract:
Inspired by the problem of managing passenger flow in airport terminals, novel statistical approaches to simulation, estimation and prediction of these systems were developed. A simulation algorithm was developed with computational speed-ups of more than one hundred-fold. The computational improvement was leveraged to infer parameters governing a dynamic queueing system for the first time. Motivated by the original application, contributions to both functional data analysis as well as combined parameter and state inference were made.
23

Pontelli, Charles Bolson. « CARACTERIZAÇÃO DA VARIABILIDADE ESPACIAL DAS CARACTERÍSTICAS QUÍMICAS DO SOLO E DA PRODUTIVIDADE DAS CULTURAS, UTILIZANDO AS FERRAMENTAS DA AGRICULTURA DE PRECISÃO ». Universidade Federal de Santa Maria, 2006. http://repositorio.ufsm.br/handle/1/7543.

Full text
Abstract:
In this work, the spatial variability of the soil attributes used for soil fertility evaluation was investigated, together with its degree of participation in the variability of crop yield. An experiment was conducted over five years (2000 to 2005) in an area of 57 ha in the municipality of Palmeira das Missões on a Latossolo Vermelho distrófico típico (EMBRAPA, 1999). Geopositioned soil samples were collected in May 2002 on a regular 100 x 100 m grid at a depth of 0 to 10 cm. Yield data for soybean 2000/01, corn 2001/02, wheat 2002, soybean 2002/03, wheat 2003, soybean 2003/04 and corn 2004/05 were analyzed. The yield data were collected with a machine equipped with a system that records georeferenced data. Yield averages for each soil sample point were calculated using yield data collected within a radius of 30 meters around the point. The yield and soil data were analyzed using the Pearson correlation matrix. Average quadratic polynomial equations were fitted for the soybean yields of 2000/01 and 2002/03, where pH, organic matter (OM), phosphorus (P) and average yield were divided into five categories: very low (VL), low (L), medium (M), high (H) and very high (VH). For P, the categories proposed by Schlindwein (2003) were used; for pH and OM, categories adapted from Comissão (2004) were used. The maximum technical efficiency (MTE) and the maximum economic efficiency (MEE), the latter obtained by considering 90% of relative yield, were calculated from the adjusted equations. Small correlations were found between the soil chemical attributes and yield. Negative correlations of 0.25 to 0.46 were found between clay content and corn yield 2005 and soybean yield 2004, respectively. The average soybean yield response curves to the soil attributes give the values of maximum technical efficiency of the attributes in the soil. The MTE for P, pH and OM is 14.4 mg dm-3, 5.9 and 4.1%, respectively; values above these can reduce crop yield. The MEE for P, pH and OM is 4.4 mg dm-3, 5.5 and 3.2%, respectively.
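The MTE/MEE calculation described above reduces to elementary operations on a fitted quadratic response curve. A hedged sketch with invented coefficients, taking the MEE as the smaller dose that gives 90% of the maximum yield:

```python
import numpy as np

def mte_and_mee(a, b, c, relative=0.90):
    """For a quadratic yield response y = a + b*x + c*x**2 with c < 0, return the
    dose of maximum technical efficiency (vertex) and the smaller dose giving
    `relative` times the maximum yield (used here as the economic optimum)."""
    x_mte = -b / (2 * c)
    y_max = a + b * x_mte + c * x_mte ** 2
    roots = np.roots([c, b, a - relative * y_max])
    x_mee = min(r.real for r in roots if abs(r.imag) < 1e-9)
    return x_mte, x_mee

# Hypothetical soybean response to soil phosphorus (mg dm-3); coefficients are illustrative only.
print(mte_and_mee(a=2000.0, b=200.0, c=-7.0))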
24

Callisesi, Giulia. « Simplified worldline path integrals for p-forms and type-A trace anomalies ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17060/.

Full text
Abstract:
In this work we study a simplified version of the path integral for a particle on a sphere, and more generally on maximally symmetric spaces, in the case of N=2 supersymmetries on the worldline. This quantum mechanics is generically that of a nonlinear sigma model in one dimension with two supersymmetries (N=2 supersymmetric quantum mechanics), and it is mostly used for describing spin 1 fields and p-forms in first quantisation. Here, we conjecture a simplified path integral defined in terms of a linear sigma model, rather than a nonlinear one. The use of a quadratic kinetic term in the bosonic part of the particle action should be allowed by the use of Riemann normal coordinates, while a scalar effective potential is expected to reproduce the effects of the curvature. Such simplifications have already been proven to be possible for the cases of N=0 and N=1 supersymmetric quantum mechanics. As a particular application, we employ our construction to give a simplified worldline representation of the one-loop effective action of gauge p-forms on maximally symmetric spaces. We use it to compute the first three Seeley-DeWitt coefficients, denoted by a_(p+1)(d;p), namely a_1(2;0), a_2(4;1) and a_3(6;2), that appear in the calculation of the type-A trace anomalies of conformally invariant p-form gauge potentials in d=2p+2 dimensions. The simplified model describes correctly the first two coefficients, while it seems to fail to reproduce the third one. One possible reason could be that the model is based on a conjecture about the effective potential that has been oversimplified in our analysis. Future work could improve our construction, in order to give a correct description to all orders, or alternatively disprove the possibility of having such a simplification in the full N=2 quantum mechanics.
25

Magalhães, Elisabete de Mello [UNESP]. « Aplicação do método de Newton desacoplado para o fluxo de carga continuado ». Universidade Estadual Paulista (UNESP), 2010. http://hdl.handle.net/11449/87114.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This work presents the decoupled Newton method for continuation power flow. The method was improved by using a geometric parameterization technique that allows the complete tracing of P-V curves, and the computation of maximum loading point of a power system, without ill-conditioning problems. The goal is to present in a clear and didactic way the steps involved in the development of the improved decoupled Newton method obtained from the observation of the geometrical behavior of power flow solutions. The geometric parameterization technique that consists of the addition of a line equation, which passes through a point in the plane determined by the bus voltage magnitude and loading factor variables, can eliminate the ill-conditioning problems of matrices used by the method and can enlarge the set of voltage variables that can be used as continuation parameter to P-V curve tracing. The method is applied to the IEEE systems (14, 30, 57, 118 and 300 buses) and two large real systems: the south-southeast Brazilian system (638 buses) and the 904-bus southwestern American system. The results show that the best characteristics of the conventional decoupled Newton’s method are improved in the vicinity of the maximum loading point and therefore the region of convergence around it is enlarged. Several tests are presented with the purpose of providing a complete understanding of the behavior of the proposed method and also to evaluate its performance
26

Vrána, Michal. « Měřicí systém pro sledování efektivity fotovoltaického panelu ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219652.

Full text
Abstract:
This thesis describes the design of the active load for adjusting the maximum power point of PV module and the module loaded with the defined parameters for measuring the effectiveness and identifying the characteristics of the PV module.
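The maximum power point itself is simply the argmax of P = V*I along the measured I-V sweep; a minimal sketch with an illustrative (not measured) curve:

```python
import numpy as np

def maximum_power_point(v, i):
    """Maximum power point of a PV module from measured I-V pairs: (V_mpp, I_mpp, P_mpp)."""
    v = np.asarray(v, dtype=float)
    i = np.asarray(i, dtype=float)
    p = v * i
    k = int(np.argmax(p))
    return v[k], i[k], p[k]

# Crude single-diode-like sweep for a made-up module (Isc about 5.2 A, Voc about 21.5 V).
v = np.linspace(0.0, 22.0, 221)
i = np.clip(5.2 - 1e-8 * (np.exp(v / 1.07) - 1.0), 0.0, None)
print(maximum_power_point(v, i))
```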
27

Souza, Tiago de Jesus. « Previsão da curva tensão-recalque em solos tropicais arenosos a partir de ensaios de cone sísmico ». Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18132/tde-25042012-163755/.

Full text
Abstract:
This dissertation presents the use of a method for predicting the stress-settlement curve of shallow foundations on tropical sandy soils from seismic cone (SCPT) test results. The studied sites were the experimental research sites of USP - São Carlos and UNESP - Bauru, Brazil, where results from plate load tests conducted at various depths are available, as well as SCPT results. The stress-settlement curve predictions showed good results after adjusting the parameters f and g, since the estimated curves were close to those obtained from plate load tests for depths greater than 1.5 meters. The applicability of the method, after its adjustment, to reproduce the stress-settlement curve for this type of soil was thus verified, employing a more rational approach with less reliance on empirical correlations. The research also highlights a variability in the SCPT and plate load test results that is related to changes in soil suction. For the USP São Carlos site it was also possible to assess the variability of the predictions, since a greater number of in situ tests and plate load tests is available for this site.
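As a rough illustration of how an SCPT-based prediction of this kind can be organised, the sketch below derives the small-strain stiffness from the shear-wave velocity measured by the seismic cone, degrades it with a Fahey-Carter-type law (which conventionally uses parameters named f and g), and evaluates a rigid-plate elastic settlement at each load step. The formulation and every numerical value are assumptions for illustration; this is not the author's calibrated method or site data.

```python
# Hedged sketch of an SCPT-based stress-settlement prediction (illustrative only).
import numpy as np

rho, vs = 1800.0, 220.0            # soil density [kg/m3], shear-wave velocity [m/s]
G0 = rho * vs**2                   # small-strain shear modulus [Pa]
nu, B = 0.3, 0.8                   # Poisson's ratio, plate diameter [m]
q_ult = 900e3                      # assumed ultimate bearing pressure [Pa]
f, g = 0.85, 0.3                   # degradation parameters (to be calibrated)

def settlement(q):
    """Settlement [m] of a rigid circular plate under pressure q [Pa]."""
    G = G0 * (1.0 - f * (q / q_ult) ** g)               # degraded secant shear modulus
    E = 2.0 * G * (1.0 + nu)                            # secant Young's modulus
    return q * B * (1.0 - nu**2) * (np.pi / 4.0) / E    # rigid-plate elastic solution

for q in np.linspace(50e3, 600e3, 12):                  # build the stress-settlement curve
    print(f"q = {q/1e3:6.0f} kPa   s = {settlement(q)*1000:6.2f} mm")
```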
Styles APA, Harvard, Vancouver, ISO, etc.
28

Ciomaga, Adina. « Analytical properties of viscosity solutions for integro-differential equations : image visualization and restoration by curvature motions ». Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2011. http://tel.archives-ouvertes.fr/tel-00624378.

Texte intégral
Résumé :
The manuscript consists of two independent parts. Properties of viscosity solutions of integro-differential equations. We consider nonlinear elliptic and parabolic integro-differential equations (IDEs) whose nonlocal terms are associated with Lévy processes. This work is motivated by the study of the long-time behaviour of viscosity solutions of IDEs in the periodic case. The classical result states that the solution u(., t) of the Dirichlet problem for an IDE behaves like λt + v(x) + o(1) as t tends to infinity, where v solves the stationary ergodic problem associated with a unique ergodic constant λ. In general, the study of the asymptotic behaviour rests on two arguments: the regularity of solutions and the strong maximum principle. We first study the strong maximum principle for semicontinuous viscosity solutions of nonlinear integro-differential equations, and then use it to derive a strong comparison result between sub- and supersolutions, which ensures uniqueness, up to an additive constant, of solutions of the ergodic problem. Moreover, for superquadratic equations the strong maximum principle, and consequently the long-time behaviour, requires Lipschitz regularity. In a second part, we establish new Hölder and Lipschitz estimates for viscosity solutions of a large class of nonlinear integro-differential equations, by the classical Ishii-Lions method. The regularity results also help to solve the ergodic problem and are used to provide existence of periodic solutions of IDEs. Our results apply to a new class of nonlocal equations that we call mixed integro-differential equations. These equations are particularly interesting because they are degenerate both in the local and in the nonlocal term, while their global behaviour is driven by the local/nonlocal interaction: for instance, fractional diffusion may provide ellipticity in one direction and classical diffusion in the orthogonal direction. Image visualization and restoration by curvature motions. The role of curvature in visual perception goes back to Attneave (1954). Neurological arguments explain that the human brain could not possibly use all the information provided by the stimulation states; what is actually recorded are the regions where colour changes abruptly (contours), together with corners and curvature extrema. Yet a direct computation of curvatures on an image is impossible. We show how curvatures can be evaluated accurately, at subpixel resolution, by a computation on the level lines after their independent smoothing. To this end we build an algorithm, called Level Lines (Affine) Shortening, simulating a subpixel evolution of an image by mean curvature or affine curvature motion. Both in the analytical and in the numerical framework, LLS (respectively LLAS) extracts all the level lines of an image, smooths all of them independently and simultaneously by Curve Shortening (CS) (respectively Affine Shortening (AS)), and reconstructs a new image.
We show that LL(A)S explicitly computes a viscosity solution of Mean Curvature Motion (respectively Affine Curvature Motion), which gives an equivalence with the geometric motion. Based on this simultaneous level-line shortening, we provide an accurate visualization tool for the curvatures of an image, which we call an Image Curvature Microscope. As an application, we give some illustrative examples of image visualization and restoration: noise, JPEG artifacts and aliasing are attenuated by a subpixel curvature motion.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Iop, Rodrigo da Rosa. « Análise dos parâmetros da curva de força de preensão manual isométrica máxima em mulheres com artrite reumatoide e a sua relação com atividade da doença ». Universidade do Estado de Santa Catarina, 2013. http://tede.udesc.br/handle/handle/257.

Texte intégral
Résumé :
This study analysed the parameters of the maximal isometric handgrip force curve in women with rheumatoid arthritis and their relation to disease activity. Nine women with rheumatoid arthritis and ten healthy women, matched by age, took part; mean age was 56.66±11.81 years in the arthritis group and 56.0±11.42 years in the healthy group. An assessment form, the Graffar scale (to determine socioeconomic level) and the Edinburgh inventory (to determine hand dominance) were used. Disease activity was assessed with the Disease Activity Score based on C-reactive protein. The parameters of the handgrip force-time curve were measured with a digital dynamometer built by the UDESC Instrumentation Laboratory, using time windows (0-30 ms; 0-50 ms; 0-100 ms). The parameters analysed were maximal grip force, time to reach maximal grip force, rate of force development and peak rate of force development, for the dominant and non-dominant sides. The independent-samples t test was used to compare the mean curve parameters between groups. The relation between the grip force curve parameters and the Disease Activity Score, as well as C-reactive protein, in the women with arthritis was examined with Pearson correlation; the relation with the number of painful and swollen joints and with the general health perception was examined with Spearman's test. Maximal force and peak rate of force development differed significantly between groups. A linear association was found between the Disease Activity Score and the time to reach maximal force on the non-dominant side and the rate of force development (0-100 ms) on the dominant side, as well as between C-reactive protein and maximal force, time to reach maximal force on the dominant side, rate of force development (0-100 ms) on the dominant side and peak rate of force development on both sides. Information on the parameters of the force-time curve during maximal isometric contraction can contribute to the assessment of muscle weakness and of the disability caused by the inflammatory process in patients with arthritis, making it a useful tool for preventive and rehabilitation purposes.
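The curve parameters named in the abstract can be computed directly from a sampled force-time signal. The snippet below is a minimal sketch of that computation, not the laboratory's software; the synthetic force signal and sampling rate are assumptions.

```python
# Minimal sketch: maximal force, time to peak, windowed RFD and peak RFD
# from a handgrip dynamometer force-time signal (synthetic data).
import numpy as np

fs = 1000.0                                        # sampling rate [Hz]
t = np.arange(0.0, 3.0, 1.0 / fs)                  # 3-second trial
force = 350.0 * (1.0 - np.exp(-t / 0.35))          # toy force rise [N]

f_max = force.max()
t_peak = t[np.argmax(force)]
rfd = {w: (force[int(w * fs / 1000)] - force[0]) / (w / 1000.0)   # [N/s]
       for w in (30, 50, 100)}                     # 0-30, 0-50, 0-100 ms windows
peak_rfd = np.gradient(force, t).max()

print(f"Fmax={f_max:.0f} N at {t_peak:.2f} s, RFD windows={rfd}, peak RFD={peak_rfd:.0f} N/s")
```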
Styles APA, Harvard, Vancouver, ISO, etc.
30

Rêgo, Thiago Luiz de Oliveira do. « Sobre o número máximo de retas em superfícies não singular de grau 4 em P3 ». Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9302.

Texte intégral
Résumé :
In 1943 Beniamino Segre believed to have shown that the maximum number of lines contained in a smooth quartic surface in P3 is 64 ([16]). But recently there was a major overturn on that theme, when the mathematicians Rams and Schütt found that Segre had made a mistake in his work by forgetting the quartics of the family Z ([14]), which essentially correspond to those quartics containing a line that can be incident to more than 18 lines contained in the surface. In this work, based on [14], we show that every smooth quartic surface which does not belong to the family Z contains a maximum of 64 lines. One of the most important tools to show this result is the study of the fibrations induced by a line l contained in the surface, and the relationship between the Euler characteristics of the base (P1 in our case), of the fibers and of the surface concerned.
Em 1943, Beniamino Segre acreditou ter demonstrado que o número máximo de retas contidas numa superfície quártica não singular em P3 é 64 ([16]). Mas recentemente houve uma reviravolta nesse tema, quando os matemáticos Sławomir Rams e Matthias Schütt constataram que Segre tinha cometido um erro em seu trabalho ao esquecer as quárticas da família Z ([14]), que correspondem essencialmente às quárticas que possuem retas que podem ser incidentes a mais de 18 retas contidas na superfície. Neste trabalho, tendo como base [14], mostramos que toda quártica não singular, que não pertence à família Z, contém no máximo 64 retas. Uma das ferramentas mais importantes, para mostrar esse resultado, é o estudo das fibrações induzidas por uma reta l contida na superfície, e a relação que existe entre a característica de Euler da base (em nosso caso P1), das fibras singulares e a da superfície em questão.
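As context (a standard fact, not a claim from the thesis): the bound of 64 is sharp, the classical Schur quartic being the usual example of a smooth quartic surface attaining it.

```latex
% The classical example attaining the maximum number of lines:
\[
  X :\; x^4 - x y^3 \;=\; z^4 - z w^3 \;\subset\; \mathbb{P}^3
\]
% the Schur quartic is a smooth quartic surface containing exactly 64 lines,
% so the bound claimed by Segre and proved by Rams and Schutt is attained.
```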
Styles APA, Harvard, Vancouver, ISO, etc.
31

Čížek, Ondřej. « Makroekonometrický model měnové politiky ». Doctoral thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-165290.

Texte intégral
Résumé :
First of all, general principles of contemporary macroeconometric models are described in this dissertation, together with a brief sketch of alternative approaches. A macroeconomic model of monetary policy is then formulated in order to describe the fundamental relationships between the real and the nominal economy. The model originated from a linear one by making some of the parameters endogenous. Despite this nonlinearity, I express the model in a state-space form with time-varying coefficients, which can be handled by a standard Kalman filter. Using the output of this algorithm, the likelihood function is calculated and maximized in order to obtain estimates of the parameters. The theory of identifiability of a parametric structure is also described. Finally, the presented theory is applied to the formulated model of the euro area, in which the European Central Bank is assumed to behave according to the Taylor rule. The econometric estimation, however, showed that this common assumption of macroeconomic modeling is not adequate in this case. The results of the econometric estimation and of the identifiability analysis also indicate that the interest rate policy of the European Central Bank has only a very limited effect on the real economic activity of the European Union. Both results are influential, as monetary policy in the last two decades has been modeled as an interest rate policy following the Taylor rule in most macroeconometric models.
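The estimation strategy described above (a Kalman filter to evaluate the likelihood, followed by numerical maximization) can be sketched on a much simpler model. The snippet below does this for a toy local-level state-space model; the model, the simulated data and the optimizer settings are illustrative assumptions, not the dissertation's monetary-policy model.

```python
# Sketch: Kalman-filter log-likelihood of a local-level model, maximized numerically.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 200
true_q, true_r = 0.05, 0.5
state = np.cumsum(rng.normal(0, np.sqrt(true_q), T))     # random-walk state
y = state + rng.normal(0, np.sqrt(true_r), T)            # noisy observations

def neg_loglik(params):
    q, r = np.exp(params)            # variances kept positive via log-parameterization
    a, p = 0.0, 1e6                  # diffuse-ish prior for the initial state
    ll = 0.0
    for obs in y:
        p = p + q                    # prediction step (state transition is identity)
        f = p + r                    # innovation variance
        v = obs - a                  # innovation
        ll += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f                    # Kalman gain
        a = a + k * v                # update step
        p = (1 - k) * p
    return -ll

res = minimize(neg_loglik, x0=np.log([0.1, 0.1]), method="Nelder-Mead")
print("estimated (q, r):", np.exp(res.x))
```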
Styles APA, Harvard, Vancouver, ISO, etc.
32

Nourbakhsh, Ghavameddin. « Reliability analysis and economic equipment replacement appraisal for substation and sub-transmission systems with explicit inclusion of non-repairable failures ». Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/40848/1/Ghavameddin_Nourbakhsh_Thesis.pdf.

Texte intégral
Résumé :
Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can have serious consequences for the continuity of the electricity supply. As the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase spares and store them in a warehouse for extended periods of time; on the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and it is of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model the operation of substation and sub-transmission equipment in detail using network flow evaluation and to consider multiple levels of component failure. In this thesis a new model associated with aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the reliability of bulk supply loads and of consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information that helps utilities better understand network performance and the weak links in the system. Algorithms are developed to measure the contribution of individual pieces of equipment to the power system risk indices, as part of the novel risk analysis tool. A new cost-worth approach is also developed that supports early decisions in planning the replacement of non-repairable aging components, in order to maintain a level of system reliability that is economically acceptable. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
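One common way to combine random and aging failure modes, shown below purely as an illustrative sketch (the thesis' actual model and parameters are not reproduced), is to multiply a constant-rate survival function with a Weibull aging survival function and evaluate the conditional probability of a non-repairable failure in each planning year.

```python
# Illustrative sketch: random (constant-rate) plus aging (Weibull) failure modes.
import numpy as np

lam = 0.01                 # assumed random-failure rate [1/yr]
beta, eta = 4.0, 60.0      # assumed Weibull shape and scale [yr] for the aging mode
age_now, horizon = 35.0, 15

def survival(t):
    """Probability of surviving to absolute age t under both failure modes."""
    return np.exp(-lam * t) * np.exp(-(t / eta) ** beta)

for k in range(1, horizon + 1):
    t1 = age_now + k
    p_fail = 1.0 - survival(t1) / survival(age_now)   # conditional on surviving to now
    print(f"year {k:2d}: P(non-repairable failure within period) = {p_fail:.3f}")
```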
Styles APA, Harvard, Vancouver, ISO, etc.
33

Kratz, Marie. « Some contributions in probability and statistics of extremes ». Habilitation à diriger des recherches, Université Panthéon-Sorbonne - Paris I, 2005. http://tel.archives-ouvertes.fr/tel-00239329.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
34

ZINI, GIOVANNI. « Maximal curves over finite fields and related objects ». Doctoral thesis, 2017. http://hdl.handle.net/2158/1088164.

Texte intégral
Résumé :
The thesis deals with maximal curves over finite fields, that is, algebraic curves X of genus g over a finite field GF(q^2) of cardinality q^2 attaining the Hasse-Weil upper bound q^2+1+2gq on the number of GF(q^2)-rational places. We construct many quotient curves of the GK maximal curves and give explicit equations; in this way we obtain many new genera of maximal curves, and new maximal curves which are not covered, or not Galois covered, by the Hermitian curve. We also show that other important maximal curves are not Galois covered by the Hermitian curve over their field of maximality: the curves of Garcia-Güneri-Stichtenoth and of Garcia-Stichtenoth, one Suzuki curve and one Ree curve, and the covers of the Suzuki and Ree curves introduced by Skabelund. We give applications of (maximal) curves over finite fields in several areas. In finite geometry, we construct new small families of complete (k,3)- and (k,4)-arcs in Galois planes; in coding theory, we construct Goppa codes from separable Kummer covers of the projective line and from the GK curve; for permutation polynomials, we explicitly classify a large class of complete permutation polynomials of monomial type.
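For readers unfamiliar with the terminology, the defining bound and its standard example are recalled below; these are well-known facts stated only as context for the abstract.

```latex
% A curve X of genus g over GF(q^2) is maximal when it attains the Hasse-Weil bound
\[
  \#X(\mathbb{F}_{q^2}) \;\le\; q^2 + 1 + 2gq .
\]
% The Hermitian curve  y^q z + y z^q = x^{q+1}  over GF(q^2), of genus g = q(q-1)/2,
% attains it with exactly q^3 + 1 rational points.
```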
Styles APA, Harvard, Vancouver, ISO, etc.
35

Kim, Joonil. « Hilbert transform and maximal function along curves in the Heisenberg Group ». 1998. http://catalog.hathitrust.org/api/volumes/oclc/40411868.html.

Texte intégral
Résumé :
Thesis (Ph. D.)--University of Wisconsin--Madison, 1998.
Typescript. Description based on print version record. Includes bibliographical references (leaves 51-53).
Styles APA, Harvard, Vancouver, ISO, etc.
36

Lin, Yu-Ping, et 林鈺苹. « A study of the Maximum Smoothness Yield Curves ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/46490498292453390331.

Texte intégral
Résumé :
Master's thesis
I-Shou University
Master's Program, Department of Finance
Academic year 95 (ROC calendar)
Lim and Xiao (2002) use zero-coupon bond data and fit the forward rate curve under the maximum smoothness criterion, showing that the maximum-smoothness forward rate function is a fourth-degree (quartic) polynomial spline; the yield curve is then obtained from the fitted forward rate curve. We apply the maximum smoothness idea to the yield curve directly and show that the resulting yield function is a third-degree (cubic) polynomial spline. We then compare, by simulation, the fit obtained by deriving the yield curve from the forward rate curve with the fit obtained by smoothing the yield curve directly; in both cases, the larger the sample, the better the fit. Finally, we apply the direct yield-curve approach to coupon bonds and examine, by simulation, the fitting performance of the resulting coupon yield curve, which again improves as the sample grows.
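To make the smoothness criterion concrete, the sketch below fits a forward curve numerically: it discretizes the curve, minimizes the discrete integral of the squared second derivative, and constrains the fitted curve to reproduce a handful of zero-coupon yields. This is only a numerical analogue of the analytical spline derivations discussed in the abstract; the grid, the sample yields and the continuous-compounding convention are assumptions for illustration.

```python
# Sketch: maximum-smoothness forward curve as an equality-constrained quadratic program.
import numpy as np

h = 0.25
grid = np.arange(0.0, 10.0 + h, h)                 # quarterly grid to 10 years
n = len(grid)

maturities = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
zero_yields = np.array([0.021, 0.024, 0.027, 0.031, 0.033, 0.034])  # made-up, cont. comp.

# Constraints: trapezoid integral of f over [0, T_i] must equal y_i * T_i
A = np.zeros((len(maturities), n))
for row, T in enumerate(maturities):
    m = int(round(T / h))
    w = np.full(m + 1, h)
    w[0] = w[-1] = h / 2.0
    A[row, : m + 1] = w
b = zero_yields * maturities

# Smoothness objective: sum of squared second differences of f
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i : i + 3] = [1.0, -2.0, 1.0]
H = 2.0 * D.T @ D

# Solve the equality-constrained QP through its KKT system
KKT = np.block([[H, A.T], [A, np.zeros((len(b), len(b)))]])
rhs = np.concatenate([np.zeros(n), b])
f = np.linalg.solve(KKT, rhs)[:n]                  # fitted instantaneous forward curve

zero_curve = np.array([row @ f / T for row, T in zip(A, maturities)])
print("reprices the input yields:", np.allclose(zero_curve, zero_yields))
```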
Styles APA, Harvard, Vancouver, ISO, etc.
37

Chen, Ching-Hua, et 陳靜華. « A Study of the Maximum Flatness Forward Rate Curves ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/60492751735753306120.

Texte intégral
Résumé :
Master's thesis
I-Shou University
Master's Program, Department of Finance
Academic year 95 (ROC calendar)
The Lim and Xiao (2002) model takes zero-coupon bond prices as sample data and derives the result that quadratic polynomial spline functions yield the maximum-flatness estimate of the forward rate curve. Because coupon bonds are traded much more frequently in the domestic market, this thesis extends the Lim and Xiao (2002) model so that it applies directly to coupon bonds, and discusses the impact of coupon bond prices on the fitted yield curve. The simulation results show that the fit is best and most stable when the sample includes short-, medium- and long-term data; the analysis based on the zero-coupon bond model gives the same conclusion. When coupon bonds are used to estimate market yields, the fitted curve is closer to the market yield curve the more short-, medium- and long-term data the market provides.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Προκάκης, Χρήστος. « Ο ρόλος της τροποποιημένης μεγίστης θυμεκτομής στην έκβαση των ασθενών με βαρεία μυασθένεια ». Thesis, 2010. http://nemertes.lis.upatras.gr/jspui/handle/10889/4177.

Texte intégral
Résumé :
Σκοπός: Η θυμεκτομή αποτελεί κοινώς αποδεκτή θεραπεία της μυασθένειας με τις διάφορες προσπελάσεις να αναφέρονται ως ανάλογης αξίας για την επίτευξη ύφεσης της νόσου. Έχοντας πλέον την μόνιμη σταθερή ύφεση ως καθαρή και μετρήσιμη νευρολογική έκβαση των μυασθενικών ασθενών μετά θυμεκτομή και γνωρίζοντας ότι η ύφεση της νόσου αποτελεί χρόνο-εξαρτώμενο γεγονός, πραγματοποιήσαμε μια αναδρομική μελέτη των ασθενών με μυασθένεια που αντιμετωπίστηκαν χειρουργικά με σκοπό τον πιο αξιόπιστο καθορισμό του ρόλου των μεγίστων θυμικών εκτομών και την ταυτοποίηση προγνωστικών παραγόντων για ύφεση της νόσου μετά θυμεκτομή. Υλικό και μέθοδος. Η μελέτη περιλαμβάνει 78 ασθενείς που υποβλήθηκαν σε τροποποιημένη μέγιστη θυμεκτομή από το 1990 έως το 2007. Οι ενδείξεις θυμεκτομής περιελάμβαναν: οφθαλμική μυασθένεια ανθιστάμενη στη φαρμακευτική αγωγή, γενικευμένη μυασθένεια και μυασθένεια με θύμωμα. Τα στοιχεία που συλλέχθηκαν αφορούσαν τη βαρύτητα της νόσου (τροποποιημένη Osserman ταξινόμηση), την προεγχειρητική φαρμακευτική αγωγή, την ηλικία έναρξης της νόσου (≤ 40/ > 40 έτη), το χρονικό διάστημα που μεσολάβησε από τη διάγνωση στη θυμεκτομή (≤ 12/ > 12 μήνες), το φύλο, την ιστολογία του θύμου αδένα, τη θνητότητα και τις επιπλοκές. Στους ασθενείς με θύμωμα περαιτέρω στοιχεία που ελήφθησαν υπόψη αφορούσαν τον ιστολογικό τύπο του θυμώματος κατά την Παγκόσμια Οργάνωση Υγείας και το στάδιο του όγκου κατά Masaoka. Η εκτίμηση της νευρολογικής έκβασης στο τέλος του μετεγχειρητικού follow up έγινε βάση της νέας ταξινόμησης του Αμερικανικού Ιδρύματος για τη Βαρεία Μυασθένεια με την πλήρη σταθερή ύφεση να λαμβάνεται υπόψη για τον καθορισμό της επάρκειας της διενεργηθείσας εκτομής και για τη σύγκριση των αποτελεσμάτων μας με αυτά προηγουμένων μελετών. Η στατιστική ανάλυση των αποτελεσμάτων έγινε με το SPSS 17 και αφορούσε δύο ομάδες ασθενών ανάλογα με την παρουσία ή μη θυμώματος. Η μέθοδος Kaplan-Meier χρησιμοποιήθηκε για την εκτίμηση της επίπτωσης των υπό εκτίμηση προγνωστικών παραγόντων στην επίτευξη της πλήρους ύφεσης ενώ η Cox Regression ανάλυση αποτέλεσε το μοντέλο για την ανάλυση της ταυτόχρονης επίδρασης των υπό μελέτη παραμέτρων στην επίτευξη πλήρους σταθερής ύφεσης. Τιμές του p < 0.05 θεωρήθηκαν στατιστικά σημαντικές. Αποτελέσματα: 51 ασθενείς είχαν μυασθένεια χωρίς θύμωμα και 27 ασθενείς παρανεοπλασματική μυασθένεια. Δεν υπήρχαν στατιστικά σημαντικές διαφορές στα προεγχειρητικά κλινικά χαρακτηριστικά των ασθενών πλην της αναμενομένης εμφάνισης της νόσου σε απώτερη ηλικία στους ασθενείς με θύμωμα. Η θνητότητα ήταν μηδενική ενώ η χειρουργική νοσηρότητα, ανάλογη προηγουμένων μελετών θυμεκτομής με διαφορετικού τύπου προσπέλασεις, ανήλθε στο 7,7% και ήταν ως επί το πλείστον ήσσονος σημασίας. Το ποσοστό μετεγχειρητικής μυασθενικής κρίσης ήταν μόλις 3,8%. Οι ασθενείς με μυασθένεια και θύμωμα βίωσαν όψιμη νευρολογική έκβαση ανάλογη αυτής των ασθενών χωρίς θύμωμα (πιθανότητα ύφεσης 74,5% vs 85,7%, p= 0.632). Η μη χρήση στεροειδών στην προεγχειρητική φαρμακευτική αγωγή, ως έμμεσος δείκτης της βαρύτητας της νόσου, σχετίστηκε με στατιστικά καλύτερη πιθανότητα για πλήρη ύφεση των συμπτωμάτων τόσο στους ασθενείς με θύμωμα (95% CI 2.687-339.182, p= 0.006) όσο και σε αυτούς χωρίς θύμωμα (CI 95% 1.607-19.183, P= 0.007) στην πολυπαραγοντική ανάλυση. Αξιόλογη διαφορά, αν και στατιστικά μη σημαντική, για τη έκβαση της νόσου είχε η πρώιμη σε σχέση με την απώτερη χειρουργική αντιμετώπιση των ασθενών. 
Στη σύγκριση των 27 ασθενών με μυασθένεια και θύμωμα με 12 επιπλέον ασθενείς που υποβλήθηκαν στην ίδια επέμβαση για θύμωμα άνευ μυασθένειας η παρουσία των συμπτωμάτων μυϊκής αδυναμίας συνδυάστηκε με στατιστικά σημαντική βελτίωση της επιβίωσης των ασθενών (100% vs 38,8% στη 10ετία, p< 0.001). Στους ασθενείς με μυασθένεια χωρίς θύμωμα και απώτερης ηλικιακά έναρξης της νόσου το ποσοστό σημαντικής βελτίωσης των μυασθενικών συμπτωμάτων, εξαιρουμένης της πλήρους ύφεσης, ήταν 70%. Στους ασθενείς με μυασθένεια και θύμωμα η ιστολογική ταυτοποίηση των θυμωμάτων κατά την Παγκόσμια Οργάνωση Υγείας προέκυψε στατιστικά σημαντική τόσο στην μονοπαραγοντική όσο και στην πολυπαραγοντική ανάλυση με τα θυμώματα τύπου Β2, Α και Β3 να επιτυγχάνουν από πολύ καλή έως άριστη πιθανότητα πλήρους ύφεσης και τα θυμώματα τύπου ΑΒ, Β1 και C να έχουν απογοητευτική έκβαση όσον αφορά την ίαση. Συμπεράσματα: Η παρούσα μελέτη δείχνει ότι η τροποποιημένη μεγίστη θυμεκτομή είναι ασφαλής και σχετίζεται με υψηλή πιθανότητα για ίαση των μυασθενικών ασθενών με και χωρίς θύμωμα. Οι ασθενείς πρέπει να αντιμετωπίζονται χειρουργικά πρώιμα μετά τη διάγνωση με κυριότερο προγνωστικό παράγοντα για το απώτερο νευρολογικό αποτέλεσμα την προεγχειρητική βαρύτητα της νόσου. Η ασφαλής και πιο αξιόπιστη εκτίμηση της τελευταίας απαιτεί πιο αντικειμενικά κριτήρια όπως αυτά που θεσπίστηκαν από το Αμερικανικό Ίδρυμα για τη Βαρεία Μυασθένεια. Η ενσωμάτωση σε αυτά τα κριτήρια μοριακών παραμέτρων που φαίνεται να επηρεάζουν την πρόγνωση της νόσου, ενδεχόμενα να βελτιώσουν την αξιοπιστία της κλινικής σταδιοποίησης του MGFA και να αναδείξουν υποομάδες ασθενών με διαφορετική νευρολογική πρόγνωση μετά από θυμεκτομή. Επίσης η πρώιμη διάγνωση των θυμωμάτων εξαιτίας των συνυπαρχόντων μυασθενικών συμπτωμάτων μπορεί να οδηγήσει σε καλύτερη επιβίωση τους συγκεκριμένους ασθενείς. Τέλος η νευρολογική έκβαση των ασθενών με θυμωματώδη μυασθένεια σχετίζεται με τον ιστολογικό τύπο των θυμωμάτων, αλλά όχι αναγκαία και με την κακοήθη συμπεριφορά τους.
Objective: Thymectomy represents a widely accepted treatment for myasthenia gravis, with different surgical approaches reported as comparably efficient in achieving remission of the disease. With complete stable remission now accepted as a clear, measurable outcome for patients with myasthenia undergoing surgical treatment, and with the knowledge that remission should be evaluated as a time-dependent event, we carried out a retrospective analysis of our experience in the surgical management of myasthenic patients. The objective was to assess the effect of maximal resection on the neurological outcome and to identify predictors of disease remission. Materials and methods: The study group consisted of 78 patients who underwent modified maximal thymectomy for myasthenia from 1990 to 2007. Indications for thymectomy included ocular myasthenia refractory to medical treatment, generalized myasthenia and thymomatous myasthenia. The data collected included preoperative disease severity (modified Osserman classification), preoperative medical treatment, age at onset of the disease (≤ 40 / > 40 years), time elapsed between diagnosis and thymectomy (≤ 12 / > 12 months), gender, thymus gland histology, mortality and morbidity. In thymoma patients, further analysis was carried out according to the World Health Organization histological classification and the Masaoka stage of the tumors. The neurological outcome at the end of follow-up was evaluated according to the Myasthenia Gravis Foundation of America classification. Both the assessment of the effectiveness of the resection performed and the comparison of our results with those of previous studies used complete stable remission as the end point of the study. The statistical analysis was carried out using SPSS 17. Kaplan-Meier life-table analysis was performed, and the log-rank test was used to evaluate the effect of the examined variables on the distribution of disease remission over time. The Cox proportional hazards model was applied to verify the concurrent effect of the evaluated factors on the achievement of complete stable remission. P values < 0.05 were considered statistically significant. Results: 51 patients suffered from non-thymomatous myasthenia, while 27 patients had myasthenia with thymoma. The two groups were comparable with respect to the clinical features of the patients, apart from the more advanced age at diagnosis of the thymoma patients. There was no perioperative mortality, while the surgical morbidity, 7.7%, was comparable to that reported in other series of patients operated through different surgical approaches. The rate of postoperative myasthenic crisis was only 3.8%. Thymoma and non-thymoma patients had a comparable predicted rate of complete stable remission (74.5% vs 85.7% at 15 years, p= 0.632). The absence of steroids from the preoperative medical regimen was statistically associated with the achievement of complete stable remission in both thymoma (95% CI 2.687-339.182, p= 0.006) and non-thymoma patients (95% CI 1.607-19.183, p= 0.007) in multivariate analysis. There was an important difference, although not statistically significant, in the neurological outcome between early and late surgical treatment.
When the 27 patients with myasthenia and thymoma were compared with another 12 patients who underwent the same operation for thymoma without symptoms or signs of muscular weakness, the presence of myasthenia was statistically associated with improved survival (100% vs 38.8% at 10 years, p< 0.001). Non-thymoma patients presenting with late-onset myasthenia experienced a high improvement rate (complete stable remission excluded), reaching 70% at the end of follow-up. Among patients with thymomatous myasthenia gravis, the World Health Organization histological classification was statistically associated with the late neurological outcome. Thymoma types A, B3 and B2 reached a high to excellent prediction of disease remission, while types AB, B1 and C had a disappointing neurological outcome. Conclusions: The present study demonstrates that the modified maximal thymectomy is a safe procedure associated with an excellent neurological outcome in both thymomatous and non-thymomatous myasthenia. Patients should be operated on early after the diagnosis is made, the preoperative severity of the disease being the prime determinant of the probability of achieving complete remission of myasthenic symptoms. The evaluation of disease severity requires objective criteria such as those proposed by the Myasthenia Gravis Foundation of America. The inclusion in these criteria of molecular markers related to the prognosis of myasthenia and to its neurological outcome after thymectomy may further enhance their validity and may allow the identification of subgroups of patients with different prognoses after thymectomy. The presence of muscular weakness may lead to early diagnosis and surgical treatment of thymomas, with improved survival. Finally, the neurological outcome of thymoma patients after thymectomy may be statistically associated with the World Health Organization classification subtypes, but not necessarily with the aggressiveness of these tumors.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Lin, Mei-Chun, et 林玫君. « Fitting Forward Rate Curve with Maximum Smoothness ». Thesis, 2001. http://ndltd.ncl.edu.tw/handle/87646425105173499762.

Texte intégral
Résumé :
Master's thesis
National Taiwan University
Graduate Institute of Finance
Academic year 89 (ROC calendar)
Two approaches to fitting forward rate curves are explored in this thesis: the maximum smoothness approach proposed by Adams and Deventer, and the derivation of implied forward rates from the current term structure of commercial paper prices. Two questions are studied: (1) which forward rate curve is smoother and more reasonable? (2) which curve's forward rates forecast future spot rates more effectively? Because the bill yield curve in the Adams and Deventer model does not approach a flat yield curve at long maturities, a revised Adams and Deventer model is proposed for fitting the forward rate curve of the Taiwan bill market. The empirical evidence supports the following conclusions: (1) both approaches generate, on average, the same forward rates and do not produce implausible values; the forward rate curve fitted by the revised Adams and Deventer model is smoother than that of the original model; (2) the generated forward rates have no forecasting power for future spot rates.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Schweitzer, Chad. « Impedance curves with nonharmonic maxima analysis and design of multiple column air resonators / ». 1995. http://catalog.hathitrust.org/api/volumes/oclc/33183676.html.

Texte intégral
Résumé :
Thesis (M.S.)--University of Wisconsin--Madison, 1995.
Typescript. Description based on print version record. Includes bibliographical references (leaf 36).
Styles APA, Harvard, Vancouver, ISO, etc.
41

Tsai, Wen-Huei, et 蔡玟蕙. « CHARACTERS AND NEW ANALYTIC METHOD OF MAXIMAL EXPIRATORY FLOW VOLUME CURVE (MEFVC) ». Thesis, 1990. http://ndltd.ncl.edu.tw/handle/79526453408694976980.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Castro, Ana Beatriz da Cunha Valença de. « Implantes ultra curtos na zona posterior da maxila ». Master's thesis, 2019. http://hdl.handle.net/10284/8350.

Texte intégral
Résumé :
A reabilitação da maxila posterior através de implantes apresenta vários obstáculos, nomeadamente pouca altura óssea residual, pneumatização do seio maxilar e baixa densidade óssea. Implantes de menor comprimento foram desenvolvidos a fim de dar solução a estas situações. Para compensar as reduzidas dimensões, melhoras na macro e microgeometria se fizeram necessárias. O objetivo deste trabalho foi verificar a viabilidade do uso de implantes ultra curtos (≤ 6,5 mm) na zona posterior maxilar. Foi realizada uma revisão bibliográfica de dados recentes da literatura a respeito de fatores mecânicos, biológicos, protéticos e taxas de sucesso. Os implantes ultra curtos podem ser utilizados como uma alternativa às cirurgias de aumento ósseo associadas a implantes longos, com desfechos semelhantes. Estes representam uma opção minimamente invasiva, com menores custos e tempo global de tratamento, para além de menor morbilidade. Porém, ainda há poucos dados de acompanhamento a longo prazo.
Rehabilitation of the posterior maxilla with implants presents many obstacles, namely low residual bone height, pneumatization of the maxillary sinus and low bone density. Implants of shorter length have been designed to address these situations; to compensate for the reduced dimensions, improvements in macro- and microgeometry became necessary. The objective of this study was to verify the feasibility of using ultra-short implants (≤ 6.5 mm) in the posterior maxillary zone. A bibliographic review of recent literature on mechanical, biological and prosthetic factors and on success rates was carried out. Ultra-short implants can be used as an alternative to bone augmentation surgeries associated with long implants, with similar outcomes. They represent a minimally invasive option, with lower costs and shorter overall treatment time, in addition to lower morbidity. However, there are still few long-term follow-up data.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Chen, Qi Chuan, et 陳淇釧. « Leaf growth curve, venation and hydathodes of ficus formosana maxim ». Thesis, 1994. http://ndltd.ncl.edu.tw/handle/75333583442623821144.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Feng, Shih-Yao, et 馮士耀. « A Non-parametric Method for Fitting Forward Interest Rate Curve with Maximum Smoothness ». Thesis, 1999. http://ndltd.ncl.edu.tw/handle/06978438449984112834.

Texte intégral
Résumé :
Master's thesis
National Taiwan University
Graduate Institute of Business Administration
Academic year 87 (ROC calendar)
Abstract: After Adams & Deventer (1994) presented a model for fitting forward rate curves under a maximum smoothness criterion, Frishling & Yamamura (1996) also used a maximum smoothness criterion, together with a non-parametric technique, to fit forward rate curves. The differences between the two models lie in the fitting criterion, the required input data and the non-parametric technique used by Frishling & Yamamura. The Adams & Deventer model requires zero-coupon bonds as inputs and fits the forward rate curve by minimizing its total curvature; the Frishling & Yamamura model requires coupon bonds as inputs and fits the forward rate curve by minimizing its total slope. The main purpose of this thesis is to investigate the Frishling & Yamamura model and apply it to the Taiwan government bond market. Besides the fitting criterion proposed in the original Frishling & Yamamura model, we also investigate the fitting results obtained under different criteria. Another important contribution of this thesis is the parameterization of the discrete forward rate points solved by the Frishling & Yamamura model. This creates an opportunity to combine the Adams & Deventer and Frishling & Yamamura models; the advantages of this combination are not only that the discrete forward rate points are parameterized, but also that the range of situations in which the Adams & Deventer model can be applied is enlarged. Chapter four presents empirical results on forecasting the theoretical prices of out-of-sample bonds and on the level and direction of the biases relative to representative market prices.
Styles APA, Harvard, Vancouver, ISO, etc.
45

Mpholwane, Matome Lieghtone. « The determinants of running performance in middle distance female athletes ». Thesis, 2008. http://hdl.handle.net/10539/5426.

Texte intégral
Résumé :
Male subjects are invariably used to study the physiological determinants of middle distance running performance, and studies that do include females have examined only the aerobic contribution. The aim of the present study was to investigate aerobic, anaerobic and muscle function factors that could be used to predict middle distance running performance in female runners. The study was performed at an altitude of 1800 m. Eleven middle distance female runners aged 18-20 were selected. Aerobic capacity was assessed by measuring the maximal oxygen consumption (VO2max), the running velocity at maximal oxygen consumption (vVO2max), running economy (RE) and the onset of blood lactate accumulation (OBLA). The blood lactate curve of each subject was constructed by relating oxygen consumption to plasma lactate concentration. Anaerobic capacity was determined by measuring the maximum accumulated oxygen deficit (MAOD) on a treadmill. Muscle function was assessed by having the subjects cycle as fast as possible against brake weights ranging from heavy to light on a Monark cycle ergometer, relating the brake force (kg) to the pedalling velocity (rpm).
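Two of the quantities named above lend themselves to a short numerical illustration: OBLA (the speed at a 4 mmol/L lactate concentration) and MAOD (the gap between extrapolated oxygen demand and measured uptake during a supramaximal run). The sketch below uses invented numbers, not study data, and is not the study's analysis protocol.

```python
# Illustrative sketch: OBLA by interpolation and MAOD from a linear VO2-speed regression.
import numpy as np

speed = np.array([10.0, 12.0, 14.0, 16.0])          # submaximal speeds [km/h]
lactate = np.array([1.2, 1.8, 3.1, 5.6])            # blood lactate [mmol/L]
vo2 = np.array([33.0, 39.0, 45.0, 51.0])            # measured VO2 [ml/kg/min]

obla_speed = np.interp(4.0, lactate, speed)          # speed at 4 mmol/L
slope, intercept = np.polyfit(speed, vo2, 1)         # VO2 demand model

supra_speed, run_time = 19.0, 2.0                    # supramaximal test [km/h], [min]
demand = slope * supra_speed + intercept             # extrapolated O2 demand [ml/kg/min]
accumulated_vo2 = 54.0 * run_time                    # measured O2 uptake during the test
maod = demand * run_time - accumulated_vo2           # oxygen deficit [ml/kg]
print(f"OBLA speed = {obla_speed:.1f} km/h, MAOD = {maod:.1f} ml/kg")
```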
Styles APA, Harvard, Vancouver, ISO, etc.
46

Hsieh, Meng-Hsun, et 謝孟勳. « Using Grid-based Clustering Maximum Likelihood Estimate in Establishing Building Fragility Curves and Their Application in Selection of Emergency Earthquake Routes ». Thesis, 2013. http://ndltd.ncl.edu.tw/handle/11607631125215840264.

Texte intégral
Résumé :
Doctoral dissertation
Feng Chia University
Ph.D. Program in Civil and Hydraulic Engineering
Academic year 101 (ROC calendar)
In this study, typological building fragility curves are developed based on the complete building damage records collected after the 1999 Chi-Chi Earthquake in Taiwan. These fragility curves are further applied to route selection and network planning for urban earthquake emergency response. Regarding the building fragility curves, a grid-based clustering maximum likelihood estimate (grid-based method), combining a grid-based cluster analysis procedure with a novel maximum likelihood estimate, is proposed to derive fragility curves for 16 building typologies in Taiwan. This grid-based method generates lower-deviation vulnerability data and therefore reduces the dispersion of the datasets compared with the traditional district-based method. The proposed grid-based method has three analysis models: a binomial distribution, a multinomial distribution (Method 1), and a multinomial distribution with a common log-standard deviation (Method 2). The results show that: (1) the fragility curves are more stable, less susceptible to outliers and more convergent than those from the district-based method; (2) the fragility curves can reasonably express the vulnerability of buildings and are thus applicable to the development of building fragility curves from wide-regional damage records; (3) Method 2 provides a more reasonable description of building vulnerability, so the common log-standard deviation is the better choice for deriving the empirical fragility curves; (4) the fragility curves have acceptable prediction performance even though only two damage levels were recorded in the 1999 Chi-Chi Earthquake. These results demonstrate that the developed fragility curves can reasonably be used for estimating earthquake losses and assessing seismic risk in the future. Regarding emergency earthquake routes, a road seismic vulnerability curve analysis is proposed to express the exceedance probability of road-section blockage as a function of a specific earthquake intensity measure. The road seismic vulnerability curves are then used in an analysis of low disruption risk for emergency earthquake routes (Low-DREER), which combines road-section blockage risk analysis with network analysis. The results show that: (1) road sections lined with different buildings have different blockage vulnerabilities, related to the number of buildings and their typologies; (2) a larger number of buildings induces a higher and more rapidly increasing blockage vulnerability. Regarding Low-DREER, the proposed accumulated route risk values appropriately describe the route disruption risk caused by probable road-section blockage induced by earthquake-triggered building collapse. The results show that a disaster prevention region should have a primary Low-DREER route, with the lowest accumulated route risk value, connecting to access road intersections outside the region. Finally, using the former Taichung metropolitan area as a case study, a bi-stage selection method for the earthquake emergency network is proposed. The method can be used to select primary earthquake emergency routes and to assess the suitability of various types of disaster prevention facilities for future urban earthquake emergency networks.
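The maximum likelihood step can be illustrated for the simplest of the three models: a single damage state with a lognormal fragility function fitted to grouped damage counts. The sketch below does exactly that; the grid-cell counts and intensity values are invented, and the thesis' grid-based clustering and multinomial variants are not reproduced here.

```python
# Sketch: lognormal fragility curve fitted to grouped damage counts by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, binom

pga = np.array([0.12, 0.20, 0.28, 0.36, 0.45, 0.60])    # intensity measure [g]
n_bldg = np.array([180, 160, 140, 120, 90, 60])          # buildings per grid cell
n_dmg = np.array([4, 11, 21, 32, 38, 41])                # damaged buildings

def neg_loglik(params):
    theta, beta = np.exp(params)                          # median and log-std kept > 0
    p = norm.cdf(np.log(pga / theta) / beta)              # lognormal fragility function
    return -np.sum(binom.logpmf(n_dmg, n_bldg, p))

res = minimize(neg_loglik, x0=np.log([0.4, 0.5]), method="Nelder-Mead")
theta, beta = np.exp(res.x)
print(f"fitted fragility: median = {theta:.2f} g, log-standard deviation = {beta:.2f}")
```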
Styles APA, Harvard, Vancouver, ISO, etc.
47

Ma, Liangzhuang. « Optimization of trawlnet codend mesh size to allow for maximal undersized fish release and a model consideration of towing time to the effects of the selection curve / ». 2005.

Trouver le texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
48

Chen, Hsien-Chi, et 陳憲琦. « Combining HSPF model and Load Duration Curve (LDC) method for developing variable Total Maximum Daily Load (TMDL) in PeiShi creek watershed ». Thesis, 2011. http://ndltd.ncl.edu.tw/handle/7hxph9.

Texte intégral
Résumé :
Master's thesis
National Taipei University of Technology
Graduate Institute of Civil and Disaster Prevention Engineering
Academic year 99 (ROC calendar)
Over the past decade, the HSPF model has been used to estimate non-point source pollution in the PeiShi Creek watershed. However, the pollution estimates and the corresponding load reduction scenarios differed under different hydrological conditions. Moreover, the traditional TMDL strategy, based on the low flow condition (Q75), is considered overly conservative for areas dominated by non-point source pollution such as the studied watershed. The present research therefore combines the HSPF (Hydrological Simulation Program - FORTRAN) model with the Load Duration Curve (LDC) method to estimate pollution loads and to develop control strategies for different flow conditions in the PeiShi Creek watershed. The purpose of this methodology is to control the pollution loads on the basis of flow regimes derived from a flow frequency analysis. The results show that the total phosphorus load reductions for the high flow range and the middle flow range were 40.36 kg/day (30.46%) and 33.36 kg/day (56.70%) respectively, and that the total phosphorus load to be reduced was 5123 kg/yr. By contrast, the load reduction estimated by the traditional Q75 control strategy was 10566 kg/yr, which is 5443 kg/yr (+100.62%) more than the estimate of the present research. The proposed methodology, combining the HSPF model and the LDC method, is therefore considered much more economical for designing TMDL control scenarios than the traditional Q75 method in areas dominated by non-point source pollution such as the PeiShi Creek watershed, although the Q75 method remains suitable for areas with mainly point-source loads. Finally, this study suggests using Q25, within the middle flow range, as the design flow for non-point source pollution management.
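The load duration curve itself is a simple construction: rank the daily flows, convert each exceedance percentile into an allowable load by multiplying the flow by the water-quality target and a unit conversion. The sketch below shows that step only; the synthetic flows, the total phosphorus target and the flow-regime boundaries are placeholder assumptions, not the HSPF/LDC setup of the thesis.

```python
# Simplified sketch of a total phosphorus load duration curve.
import numpy as np

rng = np.random.default_rng(1)
flow = rng.lognormal(mean=1.0, sigma=0.9, size=3650)    # daily flow [m3/s], synthetic
tp_standard = 0.05                                       # total phosphorus target [mg/L]
to_kg_per_day = 86400.0 * 1000.0 / 1e6                   # s/day * L/m3 / (mg/kg) = 86.4

flow_sorted = np.sort(flow)[::-1]
exceedance = np.arange(1, len(flow_sorted) + 1) / (len(flow_sorted) + 1) * 100.0
allowable_load = flow_sorted * tp_standard * to_kg_per_day   # load duration curve [kg/day]

# Allowable load at the boundaries of the usual flow regimes
for pct in (10, 40, 60, 90):
    load = np.interp(pct, exceedance, allowable_load)
    print(f"flow exceeded {pct:2d}% of the time -> allowable TP load {load:7.1f} kg/day")
```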
Styles APA, Harvard, Vancouver, ISO, etc.
49

Wang, Jiang-Tom, et 王建棠. « The study of modified formula for the maximum unit weight on the compaction curve of the large grain size soil aggregates ». Thesis, 2015. http://ndltd.ncl.edu.tw/handle/28845069051515975392.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
50

Taha, Esraa. « Determination of plasma concentrations using LC/MS and pharmacokinetics of ofloxacin in patients with multi-drug resistant tuberculosis and in patients with multi-drug resistant tuberculosis coinfected with hiv ». Thesis, 2009. http://hdl.handle.net/11394/3413.

Texte intégral
Résumé :
Magister Pharmaceuticae - MPharm
Many studies have investigated the pharmacokinetics of anti-tuberculosis drugs in patients infected with tuberculosis, but little is known about the pharmacokinetics of the drugs used in the treatment of multi-drug resistant tuberculosis (MDR-TB). The objective of the present study was therefore to investigate the steady-state concentrations and the pharmacokinetics of ofloxacin, one of the drugs used in the treatment of MDR-TB, in patients with MDR-TB and in patients with MDR-TB co-infected with HIV. Plasma samples were drawn at different times over 24 hours after oral administration of ofloxacin. Ofloxacin plasma concentrations were determined by liquid chromatography coupled with mass spectrometry. The method was validated over a concentration range of 0.1-10 μg/ml; the lower limit of detection was 0.05 μg/ml and the lower limit of quantification was 0.1 μg/ml. The response was linear over the range used, with a mean recovery of 97.6%, and the ofloxacin peak was well separated at a retention time of 9.6 minutes. The pharmacokinetic parameters obtained are presented as mean ± standard deviation (SD). The peak concentration of ofloxacin (Cmax) was 4.71 ± 2.27 μg/ml, occurring at a Tmax of 3 ± 1.29 hours after oral administration. The mean (±SD) area under the concentration-time curve AUC0-24 was 68.8 ± 42.61 μg/ml.hr and AUC0-∞ was 91.93 ± 76.86 μg/ml.hr. Ofloxacin distributed widely, with a mean (±SD) volume of distribution (Vd) of 2.77 ± 1.16 L/kg, and was eliminated with a mean (±SD) total clearance of 0.27 ± 0.25 L/hr/kg. The mean (±SD) ofloxacin half-life was 9.55 ± 4.69 hours and the mean (±SD) mean residence time (MRT) was 1512 ± 6.59 hours. In summary, compared with previous findings in the literature, ofloxacin pharmacokinetics were altered in MDR-TB patients with or without HIV co-infection: the AUC and Cmax were reduced, while the half-life and the time to reach the peak concentration were prolonged, suggesting that both the rate and the extent of ofloxacin absorption were decreased. Furthermore, ofloxacin was extensively eliminated in these patients, which may be related to altered liver function in this group. Further studies investigating the effect of HIV and of liver and kidney dysfunction on ofloxacin pharmacokinetics are recommended in a larger number of patients infected with MDR-TB, in addition to therapeutic drug monitoring to maintain the desired ofloxacin concentrations.
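The reported parameters are standard non-compartmental quantities that can be computed directly from a concentration-time profile. The sketch below shows one common way to do so (linear trapezoidal AUC, terminal log-linear slope for the half-life); the profile is synthetic, not patient data from the study.

```python
# Illustrative non-compartmental pharmacokinetic summary of one concentration profile.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 24.0])   # time [h]
c = np.array([0.0, 1.1, 2.3, 3.9, 4.7, 4.4, 3.5, 2.7, 1.6, 0.45])    # conc. [ug/mL]

cmax, tmax = c.max(), t[np.argmax(c)]
auc_0_t = np.trapz(c, t)                                  # linear trapezoidal rule

# Terminal slope from a log-linear fit of the last three points
k_el = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
half_life = np.log(2.0) / k_el
auc_0_inf = auc_0_t + c[-1] / k_el                        # extrapolated tail

print(f"Cmax={cmax:.2f} ug/mL at Tmax={tmax:.1f} h, "
      f"AUC0-t={auc_0_t:.1f}, AUC0-inf={auc_0_inf:.1f} ug*h/mL, t1/2={half_life:.1f} h")
```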
Styles APA, Harvard, Vancouver, ISO, etc.