Follow this link to see other types of publications on the topic: Matrice de Fisher.

Theses on the topic "Matrice de Fisher"

Consult the top 50 theses for your research on the topic "Matrice de Fisher".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Nguyen, Thu Thuy. "Developpement de la matrice d'information de Fisher pour des modèles non linéaires à effets mixtes : application à la pharmacocinétique des antibiotiques et l'impact sur l'émergence de la résistance". Paris 7, 2013. http://www.theses.fr/2013PA077029.

Full text
Abstract
Nonlinear mixed effects models (NLMEM) can be used to analyse longitudinal data, for example in pharmacokinetic/pharmacodynamic studies, with fewer samples per patient than the classical non-compartmental approach. A method for designing these studies is to use the expected Fisher information matrix (FIM), approximated by first-order linearization of the model. We extended this expression of the FIM to take into account within-subject variability and discrete covariates in crossover trials. These developments were evaluated by simulation and implemented in PFIM 3.2, a software tool dedicated to design evaluation and optimisation. We applied PFIM to design a crossover study, showing the absence of interaction of a compound on the pharmacokinetics of amoxicillin. We then proposed and evaluated by simulation an alternative approach to compute the FIM without linearization, based on Gaussian quadrature and stochastic integration. This approach gives more accurate predictions than linearization when the model is strongly nonlinear, but it is computationally expensive; consequently its use is limited to NLMEM with only a few random effects. Next, we studied the emergence of resistance to fluoroquinolones among enterobacteria in the intestinal flora. In a trial in piglets, we found, by a non-compartmental approach, a significant correlation between fecal concentrations of ciprofloxacin and counts of resistant enterobacteria. We then developed a mechanistic model to characterize more precisely the pharmacokinetics of fecal ciprofloxacin as well as the kinetics of susceptible and resistant enterobacteria. To our knowledge, this is the first in vivo modelling study of bacterial resistance to fluoroquinolones in the intestinal flora.
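As a rough sketch of the first-order (FO) approximation described above, the following minimal numpy code computes the expected FIM block for the fixed effects of a toy one-compartment model; the model, parameter values and sampling times are illustrative assumptions (not taken from the thesis or from PFIM), and the variance-parameter block of the FIM is omitted.

```python
import numpy as np

def num_jac(g, x, eps=1e-6):
    """Forward-difference Jacobian of g: R^k -> R^n."""
    g0 = np.asarray(g(x))
    J = np.zeros((g0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(g(x + dx)) - g0) / eps
    return J

def fo_fim_fixed_effects(f, theta, omega, sigma2, t):
    """Expected Fisher information for the fixed effects theta of a
    nonlinear mixed effects model y = f(theta * exp(eta), t) + e,
    linearized at eta = 0 (classical FO approximation); the block for
    the variance parameters (omega, sigma2) is omitted for brevity."""
    theta = np.asarray(theta, dtype=float)
    eta0 = np.zeros_like(theta)
    J_theta = num_jac(lambda th: f(th, t), theta)             # d mean / d theta
    J_eta = num_jac(lambda e: f(theta * np.exp(e), t), eta0)  # d mean / d eta
    V = J_eta @ omega @ J_eta.T + sigma2 * np.eye(len(t))     # marginal variance
    return J_theta.T @ np.linalg.solve(V, J_theta)

# hypothetical one-compartment model: conc = dose/V * exp(-CL/V * t)
pk = lambda th, t: 100.0 / th[1] * np.exp(-th[0] / th[1] * t)
times = np.array([0.5, 1.0, 2.0, 6.0, 12.0])   # candidate sampling design
F = fo_fim_fixed_effects(pk, [2.0, 10.0], omega=0.09 * np.eye(2),
                         sigma2=0.5, t=times)
print(np.sqrt(np.diag(np.linalg.inv(F))))      # predicted SEs for CL and V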
2

Koroko, Abdoulaye. "Natural gradient-based optimization methods for deep neural networks". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG068.

Full text
Abstract
The stochastic gradient method is currently the prevailing technology for training neural networks. Compared to a classical descent, the computation of the true gradient as an average over the data is replaced by a random element of the sum. When dealing with massive data, this bold approximation decreases the number of elementary gradient evaluations and lightens the cost of each iteration. The price to pay is the appearance of oscillations and a convergence that is often excessively slow in terms of the number of iterations. The aim of this thesis is to design an approach that is both: (i) more robust, drawing on fundamental methods that have proven themselves in classical optimization, i.e., outside the learning framework; and (ii) faster in terms of convergence speed. We are especially interested in second-order methods, known for their stability and speed of convergence. To circumvent the bottleneck of these methods, namely the prohibitive cost of an iteration involving a linear system with a full matrix, we attempt to improve a recently introduced approximation of the Fisher matrix, Kronecker-Factored Approximate Curvature (KFAC), which replaces the Hessian matrix in this context. More specifically, our work focuses on: (i) building new Kronecker factorizations based on a more rigorous mathematical justification than in KFAC; (ii) taking into account the information from the off-diagonal blocks of the Fisher matrix, which represent the interactions between the different layers; (iii) generalizing KFAC to network architectures other than those for which it was initially developed.
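As a toy illustration of the Kronecker factorization the abstract refers to (the baseline KFAC idea, not the thesis's refinements), the sketch below builds the two factors for a single dense layer and applies the preconditioner without forming the full Fisher block; the random arrays stand in for real activations and backpropagated gradients, and the damping value is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 512, 20, 10
a = rng.normal(size=(n, d_in))    # layer inputs (activations), stand-in data
g = rng.normal(size=(n, d_out))   # backpropagated output gradients, stand-in

# KFAC approximates the Fisher block of a dense layer W (d_out x d_in),
# F = E[vec(g a^T) vec(g a^T)^T], by the Kronecker product A (x) G with
A = a.T @ a / n                   # second moment of the inputs
G = g.T @ g / n                   # second moment of the gradients

# Preconditioning then never forms the (d_in*d_out)^2 matrix, since
# (A (x) G)^{-1} vec(dW) = vec(G^{-1} dW A^{-1})  (A, G symmetric).
lam = 1e-3                        # damping, arbitrary illustrative value
dW = rng.normal(size=(d_out, d_in))              # stand-in gradient of W
step = np.linalg.solve(G + lam * np.eye(d_out), dW) @ \
       np.linalg.inv(A + lam * np.eye(d_in))
print(step.shape)                 # natural-gradient-like update for W
```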
3

Roy, Prateep Kumar. "Analysis & design of control for distributed embedded systems under communication constraints". Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00534012.

Full text
Abstract
Distributed Embedded Control Systems (DECS) use communication networks in their feedback loops. Since DECS have limited battery power, communication bandwidth and computing power, the rates of the transmitted data or information are bounded, and this can affect their stability. This leads us to broaden the scope of our study and to include an investigation of the relation between control theory on one side and information theory on the other. The data-rate constraint induces quantization of the signals, while the real-time computation and communication aspects induce asynchronous events that are no longer regular or periodic. These two phenomena give DECS a dual nature, continuous and discrete, and make them specific objects of study. In this thesis, we analyze the stability and performance of DECS from the point of view of information and control theory. For linear systems, we show the importance of the trade-off between the quantity of communicated information and the control objectives, such as stability, controllability/observability and performance. A joint control/communication design approach (in terms of information rate in the Shannon sense) for DECS is studied. The main results of this work are the following: we proved that the entropy reduction (which corresponds to the reduction of uncertainty) depends on the controllability Gramian. This reduction is also related to Shannon mutual information. We showed that the controllability Gramian constitutes an information-theoretic entropy metric with respect to the noise induced by quantization. Reducing the influence of this noise is equivalent to reducing the norm of the controllability Gramian. We established a new relation between the Fisher Information Matrix (FIM) and the Controllability Gramian (CG) based on estimation theory and information theory. We propose an algorithm that optimally distributes the network communication capacities among a number "n" of competing actuators and/or subsystems, based on the reduction of the norm of the Controllability Gramian.
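The FIM-Gramian relation established here is specific to the thesis, but the central object is standard; as background, a minimal scipy sketch computes the controllability Gramian of an illustrative stable system by solving the continuous-time Lyapunov equation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Controllability Gramian W of a stable LTI system dx/dt = A x + B u,
# i.e. the solution of A W + W A^T + B B^T = 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # illustrative stable system matrix
B = np.array([[0.0], [1.0]])
W = solve_continuous_lyapunov(A, -B @ B.T)

# The thesis ties norms of W to entropy reduction and to the FIM; an
# allocation algorithm would then compare candidate channels through a
# scalar measure of W, such as its trace or largest eigenvalue.
print(np.trace(W), np.linalg.eigvalsh(W))
```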
4

Roy, Prateep Kumar. "Analyse et conception de la commande des systèmes embarqués distribués sous des contraintes de communication". Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00532883.

Full text
5

Ley, Christophe. "Univariate and multivariate symmetry: statistical inference and distributional aspects". Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210029.

Full text
Abstract
This thesis deals with several statistical and probabilistic aspects of symmetry and asymmetry, both in univariate and multivariate contexts, and is divided into three distinct parts.

The first part, composed of Chapters 1, 2 and 3 of the thesis, solves two conjectures associated with multivariate skew-symmetric distributions. Since the introduction in 1985 by Adelchi Azzalini of the most famous representative of that class of distributions, namely the skew-normal distribution, it has been well known that, in the vicinity of symmetry, the Fisher information matrix is singular and the profile log-likelihood function for skewness admits a stationary point whatever the sample under consideration. Since then, researchers have tried to determine the subclasses of skew-symmetric distributions that suffer from each of those problems, which has led to the aforementioned two conjectures. This thesis completely solves these two problems.

The second part of the thesis, namely Chapters 4 and 5, aims at applying and constructing extremely general skewing mechanisms. As such, in Chapter 4, we make use of the univariate mechanism of Ferreira and Steel (2006) to build optimal (in the Le Cam sense) tests for univariate symmetry which are very flexible. Since their mechanism allows a given symmetric distribution to be turned into any asymmetric distribution, the alternatives to the null hypothesis of symmetry can take any possible shape. These univariate mechanisms, besides that surjectivity property, enjoy numerous good properties, but cannot be extended to higher dimensions in a satisfactory way. For this reason, we propose in Chapter 5 different general mechanisms, sharing all the nice properties of their competitors in Ferreira and Steel (2006), but which moreover can be extended to any dimension. We formally prove that the surjectivity property holds in dimensions k>1 and we study the principal characteristics of these new multivariate mechanisms.

Finally, the third part of this thesis, composed of Chapter 6, proposes a test for multivariate central symmetry by having recourse to the concepts of statistical depth and runs. This test extends the celebrated univariate runs test of McWilliams (1990) to higher dimensions. We analyze its asymptotic behavior (especially in dimension k=2) under the null hypothesis, as well as its invariance and robustness properties. We conclude with an overview of possible modifications of these new tests.
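As a quick numerical illustration of the singularity that motivates the first part, the sketch below estimates the 3×3 Fisher information matrix of the skew-normal at skewness zero by Monte Carlo with finite-difference scores; the sample size and step size are arbitrary choices.

```python
import numpy as np
from scipy.stats import skewnorm

# At the symmetric point alpha = 0, the score for skewness is collinear
# with the score for location, so the Fisher information is singular.
rng = np.random.default_rng(1)
x = rng.normal(size=200_000)               # skew-normal sample with alpha = 0
theta = np.array([0.0, 1.0, 0.0])          # (location, scale, alpha)

def scores(x, th, eps=1e-5):
    s = []
    for j in range(3):                     # central differences in each parameter
        tp, tm = th.copy(), th.copy()
        tp[j] += eps
        tm[j] -= eps
        lp = skewnorm.logpdf(x, tp[2], loc=tp[0], scale=tp[1])
        lm = skewnorm.logpdf(x, tm[2], loc=tm[0], scale=tm[1])
        s.append((lp - lm) / (2 * eps))
    return np.stack(s)                     # 3 x n matrix of score values

S = scores(x, theta)
F = S @ S.T / x.size                       # Monte Carlo Fisher matrix
print(np.linalg.eigvalsh(F))               # smallest eigenvalue is ~ 0
```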

Doctorate in Sciences

6

Zaïdi, Abdelhamid. "Séparation aveugle d'un mélange instantané de sources autorégressives gaussiennes par la méthode du maximum de vraissemblance exact". Université Joseph Fourier (Grenoble), 2000. http://www.theses.fr/2000GRE10233.

Full text
Abstract
This thesis is devoted to the problem of blind separation of an instantaneous mixture of autoregressive Gaussian sources, without additive noise, by the exact maximum likelihood method. The maximization of the likelihood is decomposed, by relaxation, into two optimization subproblems, which are also treated by relaxation techniques. The first consists in estimating the separation matrix when the autoregressive structure of the sources is fixed. The second is to estimate this structure when the separation matrix is fixed. The first problem is equivalent to maximizing the determinant of the separation matrix under nonlinear constraints. We give an algorithm for computing the solution of this problem, for which we state the convergence conditions. We show the existence of the maximum likelihood estimator and prove its consistency. We also determine the Fisher information matrix relative to the global parameter, and we propose an index to measure the performance of separation methods. We then analyze, by simulation, the performance of the estimator thus defined and show the improvement it brings over the quasi-maximum likelihood procedure as well as over other second-order methods.
7

Achanta, Hema Kumari. "Optimal sensing matrices". Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1421.

Full text
Abstract
Location information is of extreme importance in every walk of life, ranging from commercial applications such as location-based advertising and location-aware next-generation communication networks (e.g., 5G) to security-based applications like threat localization and E-911 calling. In indoor and dense urban environments plagued by multipath effects there is usually a Non-Line-of-Sight (NLOS) scenario that prevents GPS-based localization. Wireless localization using sensor networks provides a cost-effective and accurate solution to the wireless source localization problem. Certain sensor geometries show significantly poor performance, even in low-noise scenarios, when triangulation-based localization methods are used. This creates the need for an optimal sensor placement scheme for better performance in the source localization process. The optimal sensor placement is the one that optimizes the underlying Fisher Information Matrix (FIM). This thesis presents a class of canonical optimal sensor placements that produce the optimal FIM for N-dimensional source localization (N greater than or equal to 2) in the case where the source location has a radially symmetric probability density function within an N-dimensional sphere and the sensors all lie on or outside the surface of a concentric outer N-dimensional sphere. While the canonical solution designed for the 2D problem represents optimal spherical codes, the study of designs in three or more dimensions provides great insight into the design of measurement matrices with equal-norm columns that have the smallest possible condition number. Such matrices are of importance in compressed-sensing applications. This thesis also presents an optimal sensing-matrix design for energy-efficient source localization in 2D. Specifically, the results relate to the worst-case scenario, in which the minimum number of sensors is active in the sensor network. We also propose a distributed control law that guides the motion of the sensors on the circumference of the outer circle so that they achieve the optimal sensor placement with minimum communication overhead. The design of equal-norm-column sensing matrices has a variety of other applications apart from optimal sensor placement for N-dimensional source localization. One such application is Fourier analysis in Magnetic Resonance Imaging (MRI). Depending on the method used to acquire the MR image, one can choose an appropriate transform domain that transforms the MR image into a sparse image that is compressible. Such transform domains include the wavelet transform and the Fourier transform. The inherent sparsity of MR images in an appropriately chosen transform domain motivates one of the objectives of this thesis, which is to provide a method for designing a compressive-sensing measurement matrix by choosing a subset of rows from the Discrete Fourier Transform (DFT) matrix. This thesis uses the spark of the matrix as the design criterion. The spark of a matrix is defined as the smallest number of linearly dependent columns of the matrix. The objective is to select a subset of rows from the DFT matrix in order to achieve maximum spark. The design procedure leads to an interesting study of coprimality conditions between the chosen row indices and the size of the DFT matrix.
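As a toy check of the placement criterion described above, under the simplifying assumptions of a source at the origin and range-only measurements, the snippet below compares det(FIM) for evenly spread versus clustered sensors on a circle.

```python
import numpy as np

# FIM for 2D source localization from range measurements with i.i.d.
# Gaussian errors: F = (1/sigma^2) * sum_i u_i u_i^T, where u_i is the
# unit vector from the source to sensor i. Uniformly spread sensors
# maximize det(F); clustered sensors give a nearly singular FIM.
def fim(angles, sigma2=1.0):
    U = np.stack([np.cos(angles), np.sin(angles)])   # 2 x m unit vectors
    return U @ U.T / sigma2

uniform = np.linspace(0, 2 * np.pi, 4, endpoint=False)
clustered = np.array([0.0, 0.05, 0.10, 0.15])
print(np.linalg.det(fim(uniform)),     # = 4.0, the optimal value here
      np.linalg.det(fim(clustered)))   # ~ 0, poor geometry
```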
8

Monaledi, R. L. "Character tables of some selected groups of extension type using Fischer-Clifford matrices". Thesis, University of the Western Cape, 2015. http://hdl.handle.net/11394/5026.

Full text
Abstract
Magister Scientiae - MSc
The aim of this dissertation is to calculate character tables of group extensions. There are several well-developed methods for calculating the character tables of some selected group extensions. The method we study in this dissertation is a standard application of Clifford theory, made efficient by the use of Fischer-Clifford matrices, as introduced by Fischer. We consider only extensions Ḡ of the normal subgroup N by the subgroup G with the property that every irreducible character of N can be extended to an irreducible character of its inertia group in Ḡ, if N is abelian. This is indeed the case if Ḡ is a split extension, by a well-known theorem of Mackey. A brief outline of the classical theory of characters pertinent to this study is followed by a discussion on the calculation of the conjugacy classes of extension groups by the method of coset analysis. The Clifford theory which provides the basis for the theory of Fischer-Clifford matrices is discussed in detail. Some of the properties of these Fischer-Clifford matrices which make their calculation much easier are also given. We restrict ourselves to split extension groups Ḡ = N:G in which N is always an elementary abelian 2-group. In this thesis we are concerned with the construction of the character tables (by means of the technique of Fischer-Clifford matrices) of certain extension groups which are associated with the orthogonal group O⁺₁₀(2), the automorphism groups U₆(2):2 and U₆(2):3 of the unitary group U₆(2), and the smallest Fischer sporadic simple group Fi₂₂. These groups are of the type 2⁸:(U₄(2):2), (2⁹:L₃(4)):2, (2⁹:L₃(4)):3 and 2⁶:(2⁵:S₆).
9

Benaych-Georges, Florent. "Matrices aléatoires et probabilités libres". Paris 6, 2005. http://www.theses.fr/2005PA066566.

Full text
10

Porto, Julianna Pinele Santos 1990. "Geometria da informação : métrica de Fisher". [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307256.

Full text
Abstract
Advisor: João Eloir Strapasson
Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Information Geometry is an area of mathematics that uses geometric tools in the study of statistical models. In 1945, Rao introduced a Riemannian metric on the space of probability distributions using the information matrix given by Ronald Fisher in 1921. With the metric associated with this matrix, one defines a distance between two probability distributions (Rao's distance), geodesics, curvatures and other properties of the space. Since then, many authors have been studying this subject, which is naturally connected with various applications such as statistical inference, stochastic processes, information theory and image distortion. In this work we provide a brief introduction to Differential and Riemannian Geometry and a survey of some results obtained in Information Geometry. We present Rao's distance between some probability distributions, with special attention to the study of this distance in the space of multivariate normal distributions. In this space, since no closed form is yet known for the distance or for the geodesic curve, we focus on computing bounds for Rao's distance. In some cases, we improve the upper bound given by Calvo and Oller in 1990.
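For the univariate normal family, unlike the multivariate case that the dissertation focuses on, Rao's distance has a classical closed form arising from the hyperbolic geometry of the Fisher metric; a minimal sketch:

```python
import math

def rao_distance_normal(mu1, s1, mu2, s2):
    """Rao (Fisher-Rao) distance between N(mu1, s1^2) and N(mu2, s2^2).
    The Fisher metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2 makes the
    (mu, sigma) half-plane hyperbolic, which yields the closed form
    below; no such closed form is known for general multivariate
    normals, which is why the dissertation studies bounds instead."""
    num = (mu1 - mu2) ** 2 / 2 + (s1 - s2) ** 2
    return math.sqrt(2) * math.acosh(1 + num / (2 * s1 * s2))

print(rao_distance_normal(0.0, 1.0, 1.0, 2.0))
```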
Master's degree in Applied Mathematics
11

Fouodjio, René. "Dépendance et mesure de liaison entre les composantes d'un vecteur aléatoire basées sur la distance entre les matrices d'information de Fisher". Mémoire, Université de Sherbrooke, 2004. http://savoirs.usherbrooke.ca/handle/11143/4592.

Full text
Abstract
Positive association between the components of a random vector means that the variables all tend to take large values, or all small values, at the same time, whereas negative association means that one sub-vector tends to take large values while the complementary sub-vector tends to take small values, and conversely. When these components are independent, knowing that some of the variables take predetermined values does not modify the probability of variation of the others; one therefore gains no additional information about their variation. It is thus important to determine the strength, or intensity, of an association between these variables when one exists. This thesis deals with K. Zografos's measure of dependence. The first part is devoted to the study of forms of dependence between the components of a random vector; the second part is an approach aimed at detecting dependence or independence using the Fisher information matrix; the third part presents the results obtained by K. Zografos; and the simulation results constitute the fourth part.
12

Fouodjio, René. "Dépendance et mesure de liaison entre les composantes d'un vecteur aléatoire basées sur la distance entre les matrices d'information de Fisher". Sherbrooke : Université de Sherbrooke, 2004.

Search for full text
13

Maltauro, Tamara Cantú. "Algoritmo genético aplicado à determinação da melhor configuração e do menor tamanho amostral na análise da variabilidade espacial de atributos químicos do solo". Universidade Estadual do Oeste do Paraná, 2018. http://tede.unioeste.br/handle/tede/3920.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
It is essential to determine a sampling design with a size that minimizes operating costs and maximizes the quality of the results when setting up a trial that involves the study of the spatial variability of soil chemical attributes. Thus, this work aimed at resizing a sample configuration with the smallest possible number of points for a commercial area composed of 102 points, using the information on the spatial variability of soil chemical attributes to optimize the process. Initially, Monte Carlo simulations were carried out, assuming stationary Gaussian isotropic variables, an exponential model for the semivariance function, and three initial sampling configurations: systematic, simple random, and lattice plus close pairs. A genetic algorithm (GA) was applied to both the simulated data and the soil chemical attributes in order to resize the optimized sample, considering two objective functions. These are based on the efficiency of spatial prediction and of geostatistical model estimation, respectively: maximization of the global accuracy measure and minimization of functions based on the Fisher information matrix. The simulated data showed that, for both objective functions, when the nugget effect and the range varied, the samplings usually attained the lowest objective-function values when the nugget effect was 0 and the practical range was 0.9. Increasing the practical range slightly reduced the number of optimized sampling points in most cases. Regarding the soil chemical attributes, the GA was efficient in reducing the sample size with both objective functions. When maximizing global accuracy, the sample size varied from 30 to 35 points, corresponding to 29.41% to 34.31% of the initial grid, with a minimum spatial-prediction similarity to the original configuration equal to or greater than 85%. These results carried over to the optimization process, with similarity between the maps built from the original and the optimized sample configurations. The optimized sample size varied from 30 to 40 points when minimizing the function based on the Fisher information matrix, corresponding to 29.41% and 39.22% of the original grid, respectively; however, there was no similarity between the maps built from the initial and the optimized sample configurations. For both objective functions, the soil chemical attributes showed moderate spatial dependence for the original sample configuration, and most attributes showed moderate or strong spatial dependence for the optimized configuration. Thus, the optimization process was efficient when applied both to the simulated data and to the soil chemical attributes.
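As a generic sketch of a FIM-based design criterion of the kind minimized above (not the thesis's exact objective functions), the code below computes the Fisher information of the covariance parameters of a Gaussian field with an exponential model for a candidate set of points; all parameter values are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

# Fisher information for the covariance parameters of a Gaussian random
# field with exponential model C(h) = sill * exp(-h / range) + nugget:
# F_ij = 0.5 * tr(S^{-1} dS/dth_i S^{-1} dS/dth_j). A genetic algorithm
# would score each candidate subset of points with, e.g., -log det F.
def fim_cov(points, sill, rng_par, nugget, eps=1e-5):
    def S(th):
        h = cdist(points, points)
        return th[0] * np.exp(-h / th[1]) + th[2] * np.eye(len(points))
    th0 = np.array([sill, rng_par, nugget])
    S0inv = np.linalg.inv(S(th0))
    dS = []
    for j in range(3):                   # central-difference derivatives of S
        tp, tm = th0.copy(), th0.copy()
        tp[j] += eps
        tm[j] -= eps
        dS.append((S(tp) - S(tm)) / (2 * eps))
    return np.array([[0.5 * np.trace(S0inv @ dS[i] @ S0inv @ dS[j])
                      for j in range(3)] for i in range(3)])

pts = np.random.default_rng(2).uniform(0, 1, size=(30, 2))  # candidate design
F = fim_cov(pts, sill=1.0, rng_par=0.3, nugget=0.1)
print(-np.log(np.linalg.det(F)))         # criterion a GA would minimize
```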
14

Prins, A. L. "Fischer-clifford matrices and character tables of inertia groups of maximal subgroups of finite simple groups of extension type". University of the Western Cape, 2011. http://hdl.handle.net/11394/5430.

Full text
Abstract
Philosophiae Doctor - PhD
The aim of this dissertation is to calculate character tables of group extensions. There are several well-developed methods for calculating the character tables of group extensions. In this dissertation we study the method developed by Bernd Fischer, the so-called Fischer-Clifford matrices method, which derives its fundamentals from Clifford theory. We consider only extensions G of the normal subgroup K by the subgroup Q with the property that every irreducible character of K can be extended to an irreducible character of its inertia group in G, if K is abelian. This is indeed the case if G is a split extension, by a well-known theorem of Mackey. A brief outline of the classical theory of characters pertinent to this study is followed by a discussion on the calculation of the conjugacy classes of extension groups by the method of coset analysis. The Clifford theory which provides the basis for the theory of Fischer-Clifford matrices is discussed in detail. Some of the properties of these Fischer-Clifford matrices which make their calculation much easier are also given. As mentioned earlier, we restrict ourselves to split extension groups G in which K is always elementary abelian. In this thesis we are concerned with the construction of the character tables of certain groups which are associated with Fi₂₂ and Sp₈(2). Both of these groups have a maximal subgroup of the form 2⁷:Sp₆(2), but they are not isomorphic to each other. In particular we are interested in the inertia groups of these maximal subgroups, which are split extensions. We use the technique of the Fischer-Clifford matrices to construct the character tables of these inertia groups. The inertia groups of 2⁷:Sp₆(2), the maximal subgroup of Fi₂₂, are 2⁷:S₈, 2⁷:O⁻₆(2) and 2⁷:(2⁵:S₆). The inertia group of 2⁷:Sp₆(2), the affine subgroup of Sp₈(2), is 2⁷:(2⁵:S₆), which is not isomorphic to the group of the same form mentioned earlier.
15

Larsson, William. "New approaches to moisture determination in complex matrices based on the Karl Fischer Reaction in methanolic and non-alcoholic media". Doctoral thesis, Umeå universitet, Kemi, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1918.

Full text
Abstract
Moisture determination is of great importance in the production and use of many substances. For example, the moisture content can affect the efficiency of a chemical reaction or determine the shelf life of pharmaceuticals or foods. The standard method for moisture determination is Karl Fischer (KF) titration, based on either volumetry or coulometry. This thesis concerns new approaches to trace determination in complex sample matrices and is focused on oils and on substances that interfere with alcoholic KF reagents. Moisture is frequently separated from oil matrices before titration by means of evaporation techniques. In connection with the preparation of new reference materials for moisture in oil, the National Institute of Standards and Technology (NIST) questioned the efficiency of such evaporation techniques. NIST claimed that some of the moisture was sequestered in the oil phase and that it could only be released and detected by using a modified volumetric KF method with a reagent containing at least 65% chloroform. In this thesis, an alternative KF method that meets the proposed requirement for a complete dissolution of the oil sample is presented. With this method it is shown that there is no reason to question the efficiency of the evaporation techniques and that the criticized volumetric method used by NIST is biased high. Ever since its introduction, diaphragm-free coulometry has gained popularity due to its ease of use, with a single reagent and short conditioning times. Trace determination with this technique sets great demands on the reagent due to the resulting low current densities at the generator cathode. The performance of several commercial reagents is evaluated under such unfavorable conditions and critical titration parameters are identified. It is also shown that decanol has a favorable effect on the cathode process when using reagents modified with xylene according to standard methods for moisture determination in oils. For samples that are incompatible with the alcohol component in ordinary KF reagents, a new reagent based on N-methylformamide is presented. It is shown that it works well for determinations of moisture in a conductive salt used in lithium-ion batteries. The concept of alcohol-free KF reagents is taken a step further in a systematic investigation, also including formamide and dimethylformamide. Advantages and disadvantages of these solvents are discussed and possible reaction paths are surveyed. It is shown that the position of the sulfur dioxide/hydrogen sulfite equilibrium is the main explanation for the large differences in the KF reaction rates in these solvents.
16

Florez, Guillermo Domingo Martinez. "Extensões do modelo -potência". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-07072011-154259/.

Full text
Abstract
In data analysis where the data present a certain degree of asymmetry, the assumption of normality can be unrealistic, and applying this model can hide important characteristics of the true model. Situations of this type have given strength to the use of asymmetric models, with special emphasis on the skew-symmetric family of distributions developed by Azzalini (1985). In this work we present an alternative for data analysis in the presence of significant asymmetry and/or kurtosis, compared with the normal distribution. We present and study properties of the alpha-power and log-alpha-power distributions, where we also study the estimation problem, the observed and expected Fisher information matrices, and the degree of bias of the estimators using simulation procedures. A more flexible version of the alpha-power model is proposed, followed by an extension to a bimodal version, introducing the symmetric and asymmetric bimodal alpha-power models. We then extend the alpha-power distribution to the Birnbaum-Saunders model, study the properties of this new model, develop estimators for its parameters, and propose bias-corrected estimators. We also introduce alpha-power regression models for censored and uncensored data, as well as the log-linear Birnbaum-Saunders regression model, for which we derive parameter estimators and study some model-validation techniques. Finally, we present a multivariate extension of the alpha-power model and study some estimation procedures for its parameters. All the cases studied are illustrated with data sets previously analysed under other distributional assumptions.
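Assuming the standard definition of the alpha-power family (a baseline cdf raised to the power alpha, here with a standard normal baseline), a minimal sketch of its density:

```python
import numpy as np
from scipy.stats import norm

# Alpha-power family: cdf F(z)^alpha for a baseline symmetric cdf F,
# hence density alpha * f(z) * F(z)**(alpha - 1); alpha != 1 induces
# asymmetry, and alpha = 1 recovers the baseline (here the normal).
def alpha_power_pdf(z, alpha):
    return alpha * norm.pdf(z) * norm.cdf(z) ** (alpha - 1)

z = np.linspace(-4, 4, 9)
print(alpha_power_pdf(z, alpha=2.0))   # right-skewed variant of the normal
```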
17

PIRAS, CRISTINA. "Metabolomic investigation of food matrices by ¹H NMR spectroscopy". Doctoral thesis, Università degli Studi di Cagliari, 2012. http://hdl.handle.net/11584/266182.

Full text
Abstract
The present Ph.D. work shows some applications of the NMR-based metabolomic approach in food science. The investigated food matrices are widely different, ranging from a manufactured product that undergoes only physical treatments (bottarga), to a manufactured product where biochemical transformations take place (Fiore Sardo cheese), and, finally, a raw food (Argentina sphyraena). None of these food matrices was chosen by chance: each represents an important piece of the economy of the island of Sardinia, or might be further valorized, gaining more importance in the near future. Indeed, bottarga and Fiore Sardo are typical products exported all over the world, while Argentina sphyraena is a fish of low economic interest, finding, at the moment, no appreciation on the market. The results of this Ph.D. study have contributed new insights and a deeper understanding of the potential of the combined NMR/multivariate-methods approach in food science, showing the great versatility of NMR spectroscopy and the strong synergetic relation between NMR and chemometrics. NMR revealed its extraordinary potential when applied to natural samples and products, while chemometric analysis proved to be an essential tool for obtaining information on the properties of interest (e.g., geographical origin for bottarga) from the knowledge of other, easily obtained properties (i.e., NMR spectra). The investigation performed on bottarga demonstrated that an NMR-based metabolomics technique can be a powerful tool for detecting novel biomarkers and establishing quality-control parameters for bottarga. The work presented in this study evidenced the effectiveness of metabolite fingerprinting as a tool to distinguish samples according both to the geographical origin of the fish and to the manufacturing process. The results for Fiore Sardo showed the potential of the combination of NMR spectroscopy and chemometrics as a promising partnership for detailed cheese analysis, providing knowledge that can facilitate better monitoring of the food production chain and create new opportunities for targeted processing strategies. Such analysis may be performed at any stage of cheese manufacturing, allowing for thorough evaluation of every step in the process. Finally, the preliminary results of the metabolomic investigation of Argentina sphyraena should serve as a basis for implementing a research tool able to provide deeper insights into the biology of this fish species, with all the advantages offered by the metabolomics approach.
18

Randrianjanahary, Liantsoa Finaritra. "Cosmology with HI intensity mapping: effect of higher order corrections". University of the Western Cape, 2020. http://hdl.handle.net/11394/7248.

Full text
Abstract
Master of Science
One of the main challenges of cosmology is to unveil the nature of dark energy and dark matter. They can be constrained with baryonic acoustic oscillations (BAO) and redshift-space distortions, amongst others. Both have characteristic signatures in the dark matter power spectrum. Biased tracers of dark matter, such as neutral hydrogen, are used to quantify the underlying dark matter density field. It is generally assumed that on large scales the bias of the tracer is linear. However, there is a coupling between the small and large scales of the biased tracer which gives rise to a significant non-linear contribution on linear scales in the power spectrum of the biased tracer. The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) will map the brightness temperature of neutral hydrogen (HI) over BAO scales using the intensity mapping technique. We forecasted cosmological parameters for HIRAX taking into account non-linear corrections to the HI power spectrum and compared them to the linear case, using methods based on Fisher matrices. We found values of the bias-to-error ratio of the cosmological parameters as high as 1 or 7, depending on the noise level. We also investigated the change in the location of the peaks of the baryonic acoustic oscillation signal. The shift reaches Δk = 10⁻² h/Mpc, with a reduction of the amplitude of the BAO features from 16.33% to 0.33%, depending on the scales.
19

Strömberg, Eric. "Faster Optimal Design Calculations for Practical Applications". Thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-150802.

Full text
Abstract
PopED is a software tool developed by the Pharmacometrics Research Group at the Department of Pharmaceutical Biosciences, Uppsala University, written mainly in MATLAB. It uses pharmacometric population models to describe the pharmacokinetics and pharmacodynamics of a drug and then estimates an optimal design of a trial for that drug. With optimization calculations on average taking a very long time, it was desirable to increase the calculation speed of the software by parallelizing the serial calculation script. The goal of this project was to investigate different methods of parallelization and implement the method which seemed best for the circumstances. The parallelization was implemented in C/C++ using Open MPI and tested on the UPPMAX Kalkyl high-performance computation cluster. Some alterations were made to the original MATLAB script to adapt PopED to the new parallel code. The methods which were parallelized included the Random Search and the Line Search algorithms. The testing showed a significant performance increase, with effectiveness per active core ranging from 55% to 89% depending on the model and the number of evaluated designs.
20

Panas, Dagmara. "Model-based analysis of stability in networks of neurons". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28883.

Full text
Abstract
Neurons, the building blocks of the brain, are an astonishingly capable type of cell. Collectively they can store, manipulate and retrieve biologically important information, allowing animals to learn and adapt to environmental changes. This universal adaptability is widely believed to be due to plasticity: the readiness of neurons to manipulate and adjust their intrinsic properties and strengths of connections to other cells. It is through such modifications that associations between neurons can be made, giving rise to memory representations; for example, linking a neuron responding to the smell of pancakes with neurons encoding sweet taste and general gustatory pleasure. However, this malleability inherent to neuronal cells poses a dilemma from the point of view of stability: how is the brain able to maintain stable operation while in the state of constant flux? First of all, won’t there occur purely technical problems akin to short-circuiting or runaway activity? And second of all, if the neurons are so easily plastic and changeable, how can they provide a reliable description of the environment? Of course, evidence abounds to testify to the robustness of brains, both from everyday experience and scientific experiments. How does this robustness come about? Firstly, many control feedback mechanisms are in place to ensure that neurons do not enter wild regimes of behaviour. These mechanisms are collectively known as homeostatic plasticity, since they ensure functional homeostasis through plastic changes. One well-known example is synaptic scaling, a type of plasticity ensuring that a single neuron does not get overexcited by its inputs: whenever learning occurs and connections between cells get strengthened, subsequently all the neurons’ inputs get downscaled to maintain a stable level of net incoming signals. And secondly, as hinted by other researchers and directly explored in this work, networks of neurons exhibit a property present in many complex systems called sloppiness. That is, they produce very similar behaviour under a wide range of parameters. This principle appears to operate on many scales and is highly useful (perhaps even unavoidable), as it permits for variation between individuals and for robustness to mutations and developmental perturbations: since there are many combinations of parameters resulting in similar operational behaviour, a disturbance of a single, or even several, parameters does not need to lead to dysfunction. It is also that same property that permits networks of neurons to flexibly reorganize and learn without becoming unstable. As an illustrative example, consider encountering maple syrup for the first time and associating it with pancakes; thanks to sloppiness, this new link can be added without causing the network to fire excessively. As has been found in previous experimental studies, consistent multi-neuron activity patterns arise across organisms, despite the interindividual differences in firing profiles of single cells and precise values of connection strengths. Such activity patterns, as has been furthermore shown, can be maintained despite pharmacological perturbation, as neurons compensate for the perturbed parameters by adjusting others; however, not all pharmacological perturbations can be thus amended. 
In the present work, it is directly demonstrated for the first time that groups of neurons are, as a rule, sloppy; their collective parameter space is mapped to reveal the sensitive and insensitive parameter combinations; and it is shown that the majority of spontaneous fluctuations over time primarily affect the insensitive parameters. To demonstrate the above, hippocampal neurons of the rat were grown in culture over multi-electrode arrays and recorded from for several days. Statistical models were then fit to the activity patterns of groups of neurons to obtain a mathematically tractable description of their collective behaviour at each time point. These models provide robust fits to the data and allow for a principled sensitivity analysis using information-theoretic tools. This analysis revealed that groups of neurons tend to be governed by a few leader units. Furthermore, it appears that it was the stability of these key neurons and their connections that ensured the stability of collective firing patterns across time. The remaining units, in turn, were free to undergo plastic changes without risking destabilizing the collective behaviour. Together with what has been observed by other researchers, the findings of the present work suggest that the impressively adaptable yet robust functioning of the brain is made possible by the interplay between feedback control of a few crucial neuronal properties and the generally sloppy design of networks. It has, in fact, been hypothesised that any complex system subject to evolution is bound to rely on such a design: in order to cope with natural selection under changing environmental circumstances, it would be difficult for a system to rely on tightly controlled parameters. It might be, therefore, that all life is just, by nature, sloppy.
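The sensitivity analysis described above rests on eigendecomposing an information matrix: stiff parameter combinations have large eigenvalues, sloppy ones small. A minimal illustrative sketch, with a made-up Jacobian rather than the thesis's fitted network models:

```python
# Sketch: identifying "stiff" and "sloppy" parameter combinations via the
# eigenspectrum of a Fisher information matrix. Model and data are made up.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Jacobian of model predictions w.r.t. 6 parameters at 50
# design points; correlated columns create near-flat (sloppy) directions.
J = rng.normal(size=(50, 6))
J[:, 3] = J[:, 0] + 0.01 * rng.normal(size=50)            # nearly redundant
J[:, 4] = J[:, 1] - J[:, 2] + 0.01 * rng.normal(size=50)  # another one

fim = J.T @ J                         # (scaled) Fisher information matrix
eigvals, eigvecs = np.linalg.eigh(fim)

# Eigenvalues spanning many decades are the signature of sloppiness:
# perturbations along small-eigenvalue eigenvectors barely change behaviour.
for lam, v in zip(eigvals, eigvecs.T):
    kind = "sloppy" if lam < 1e-2 * eigvals[-1] else "stiff"
    print(f"lambda = {lam:10.4f} ({kind}), direction = {np.round(v, 2)}")
```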
APA, Harvard, Vancouver, ISO, etc. styles
21

Perez-Ramirez, Javier. "Relay Selection for Multiple Source Communications and Localization". International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579585.

Full text
Abstract
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV
Relay selection for optimal communication as well as multiple source localization is studied. We consider the use of dual-role nodes that can work both as relays and as anchors. The dual-role nodes and multiple sources are placed at fixed locations in a two-dimensional space. Each dual-role node estimates its distance to all the sources within its radius of action. Dual-role node selection is then performed considering all the measured distances and the total SNR of all source-to-destination channels, jointly optimizing communication and multiple-source localization. Bit error rate performance and the mean squared error of the proposed optimal dual-role node selection scheme are presented.
APA, Harvard, Vancouver, ISO, etc. styles
22

Perez-Ramirez, Javier. "An Opportunistic Relaying Scheme for Optimal Communications and Source Localization". International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581448.

Full text
Abstract
ITC/USA 2012 Conference Proceedings / The Forty-Eighth Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2012 / Town and Country Resort & Convention Center, San Diego, California
The selection of relay nodes (RNs) for optimal communication and source location estimation is studied. The RNs are randomly placed at fixed and known locations over a geographical area. A mobile source senses and collects data at various locations over the area and transmits the data to a destination node with the help of the RNs. The destination node needs to collect not only the sensed data but also the location of the source where the data was collected. Hence, both high-quality data collection and the correct source location are needed. Using the measured distances between the relays and the source, the destination estimates the location of the source. The selected RNs must be optimal for joint communication and source location estimation. We show in this paper how this joint optimization can be achieved. For practical decentralized selection, an opportunistic RN selection algorithm is used. Bit error rate performance as well as mean squared error in location estimation are presented and compared to the optimal relay selection results.
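The link between relay/anchor geometry and localization accuracy is commonly quantified through the Fisher information matrix of the range measurements. A minimal sketch under standard assumptions (independent Gaussian range noise; not necessarily these papers' exact formulation):

```python
# Sketch: FIM and Cramer-Rao lower bound (CRLB) for 2-D source localization
# from noisy range measurements to known anchor positions. Numbers are made up.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 4.0])
sigma = 0.5  # range-measurement noise std (m), hypothetical

fim = np.zeros((2, 2))
for a in anchors:
    d = source - a
    u = d / np.linalg.norm(d)          # unit vector anchor -> source
    fim += np.outer(u, u) / sigma**2   # each range adds rank-1 information

crlb = np.linalg.inv(fim)              # lower bound on estimator covariance
print("CRLB on position MSE (m^2):", np.trace(crlb))
```

Selecting the relay subset that minimizes such a trace (or maximizes det(FIM)) while keeping the total SNR high is one way to pose the joint communication/localization selection problem.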
APA, Harvard, Vancouver, ISO, etc. styles
23

Urbinati, Eduardo Criscuolo [UNESP]. "Ovulação de matrizes de pacu Piaractus mesopotamicus e o papel da prostaglandina F2α". Universidade Estadual Paulista (UNESP), 2012. http://hdl.handle.net/11449/86671.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Reports from producers and researchers of problems in the ovulation phase of pacu Piaractus mesopotamicus during artificial reproduction led us to evaluate the use of prostaglandin F2α. The study was conducted over two spawning seasons (2009/2010 and 2010/2011), with two experiments in the 2009/2010 season (T1 and T2) and one experiment in the 2010/2011 season (T3); a total of 45 females were sampled. Females in the control group were induced with carp pituitary extract alone (CE, 6 mg/kg), while those in the treated group received prostaglandin F2α (PGF; 2 ml/female in the 2009/2010 season and 5 ml/female in the 2010/2011 season) in addition to the CE dose. In both seasons, 100% of the PGF-treated females spawned, whereas in the control group spawning occurred in 52.94% and 83.33% of females in the first and second experiments of the 2009/2010 season, respectively. In 2010/2011, only 25% of control females spawned. Fecundity, fertilization and hatching rates did not differ (p>0.05) between groups. Analysis of the volume frequency of oocytes at different developmental stages in post-spawning ovaries showed significantly higher values (p<0.05) of vitellogenic oocytes with germinal vesicle breakdown but not ovulated in the control group, and a higher occurrence (p<0.05) of pre-vitellogenic oocytes in the PGF-treated group. Regardless of treatment, the highest fertilization rates occurred at around 275 ATU, while most females spawned between 276 and 323 ATU. Likewise, regardless of treatment, there was a significant decrease (p<0.05) in fertilization and hatching rates between the two 2009/2010 samplings. The data obtained in this study suggest that prostaglandin F2α can increase spawning rates in hormonally induced pacu females, with a clear effect on the release of oocytes from the follicles.
APA, Harvard, Vancouver, ISO, etc. styles
24

Pereira, Mayara de Moura [UNESP]. "Altas concentrações de proteínas em dietas de matrizes de tilápia-do-Nilo e seus efeitos na atividade de enzimas digestivas durante o desenvolvimento inicial". Universidade Estadual Paulista (UNESP), 2015. http://hdl.handle.net/11449/143033.

Full text
Abstract
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
The present study assessed the activity of gastric (pepsin), intestinal (aminopeptidase and alkaline phosphatase) and pancreatic (trypsin, amylase, lipase, protease) enzymes during the embryonic and larval development of Nile tilapia, Oreochromis niloticus, obtained from broodstock fed diets with four levels of crude protein. The experiment was carried out from January to June 2014, using 144 females and 48 males (3:1) distributed in 16 hapas (12 fish/hapa). Four treatments were used, composed of the following levels of crude protein (CP): 32, 38, 44 and 50%, with four replications. The eggs were weighed (mg), counted, kept in incubators (2.0 L) and separated according to treatment. Twenty-four samples (300.0 mg) per treatment, four per stage of embryonic and larval development [S0: cleavage, S1: blastula, S2: gastrula, S3: hatching, S4: 7 days post-hatch and S5: 10 days post-hatch], were collected, kept in cryogenic tubes and placed in liquid nitrogen (-196.0 °C) until the digestive enzymes were analyzed. There were no differences (P>0.05) in the values of pepsin, aminopeptidase, trypsin and amylase. However, different values were observed (P<0.05) for alkaline phosphatase (7 days post-hatch), lipase (blastula) and protease (blastula and hatching) across the four treatments. The results showed that the levels of crude protein in diets offered to Nile tilapia broodstock influenced the activity of digestive enzymes during the embryonic and larval periods, showing that nutrients ingested by the broodfish were transferred to the progeny. Thus, further studies on broodstock diets should be conducted to provide additional information, not only on protein levels but also on energy, vitamins and minerals, as well as the interactions between them, and on the use of physiological biomarkers for successful fish farming.
FAPESP: 2013/22570-3
APA, Harvard, Vancouver, ISO, etc. styles
25

Pereira, Mayara de Moura. "Altas concentrações de proteínas em dietas de matrizes de tilápia-do-Nilo e seus efeitos na atividade de enzimas digestivas durante o desenvolvimento inicial /". Jaboticabal, 2015. http://hdl.handle.net/11449/143033.

Full text
Abstract
Advisor: Elizabeth Romagosa
Committee: Maria Célia Portella
Committee: Jaqueline Dalbello Biller-Takahashi
Abstract: The present study assessed the activity of gastric (pepsin), intestinal (aminopeptidase and alkaline phosphatase) and pancreatic (trypsin, amylase, lipase, protease) enzymes during the embryonic and larval development of Nile tilapia, Oreochromis niloticus, obtained from broodstock fed diets with four levels of crude protein. The experiment was carried out from January to June 2014, using 144 females and 48 males (3:1) distributed in 16 hapas (12 fish/hapa). Four treatments were used, composed of the following levels of crude protein (CP): 32, 38, 44 and 50%, with four replications. The eggs were weighed (mg), counted, kept in incubators (2.0 L) and separated according to treatment. Twenty-four samples (300.0 mg) per treatment, four per stage of embryonic and larval development [S0: cleavage, S1: blastula, S2: gastrula, S3: hatching, S4: 7 days post-hatch and S5: 10 days post-hatch], were collected, kept in cryogenic tubes and placed in liquid nitrogen (-196.0 °C) until the digestive enzymes were analyzed. There were no differences (P>0.05) in the values of pepsin, aminopeptidase, trypsin and amylase. However, different values were observed (P<0.05) for alkaline phosphatase (7 days post-hatch), lipase (blastula) and protease (blastula and hatching) across the four treatments. The results showed that the levels of crude protein in diets offered to Nile tilapia broodstock influenced the activity of digestive enzymes during the embryonic and larval periods, showing that nutrients ingested by the broodfish were transferred to the progeny. Thus, further studies on broodstock diets should be conducted to provide additional information, not only on protein levels but also on energy, vitamins and minerals, as well as the interactions between them, and on the use of physiological biomarkers for successful fish farming.
Master's
APA, Harvard, Vancouver, ISO, etc. styles
26

Urbinati, Eduardo Criscuolo. "Ovulação de matrizes de pacu Piaractus mesopotamicus e o papel da prostaglandina F2α /". Jaboticabal : [s.n.], 2012. http://hdl.handle.net/11449/86671.

Full text
Abstract
Advisor: Sérgio Ricardo Batlouni
Committee: Renata Guimarães Moreira
Committee: José Augusto Senhorini
Abstract: Reports from producers and researchers of problems in the ovulation phase of pacu Piaractus mesopotamicus during artificial reproduction led us to evaluate the use of prostaglandin F2α. The study was conducted over two spawning seasons (2009/2010 and 2010/2011), with two experiments in the 2009/2010 season (T1 and T2) and one experiment in the 2010/2011 season (T3); a total of 45 females were sampled. Females in the control group were induced with carp pituitary extract alone (CE, 6 mg/kg), while those in the treated group received prostaglandin F2α (PGF; 2 ml/female in the 2009/2010 season and 5 ml/female in the 2010/2011 season) in addition to the CE dose. In both seasons, 100% of the PGF-treated females spawned, whereas in the control group spawning occurred in 52.94% and 83.33% of females in the first and second experiments of the 2009/2010 season, respectively. In 2010/2011, only 25% of control females spawned. Fecundity, fertilization and hatching rates did not differ (p>0.05) between groups. Analysis of the volume frequency of oocytes at different developmental stages in post-spawning ovaries showed significantly higher values (p<0.05) of vitellogenic oocytes with germinal vesicle breakdown but not ovulated in the control group, and a higher occurrence (p<0.05) of pre-vitellogenic oocytes in the PGF-treated group. Regardless of treatment, the highest fertilization rates occurred at around 275 ATU, while most females spawned between 276 and 323 ATU. Likewise, regardless of treatment, there was a significant decrease (p<0.05) in fertilization and hatching rates between the two 2009/2010 samplings. The data obtained in this study suggest that prostaglandin F2α can increase spawning rates in hormonally induced pacu females, with a clear effect on the release of oocytes from the follicles.
Master's
APA, Harvard, Vancouver, ISO, etc. styles
27

Salah, Aghiles. "Von Mises-Fisher based (co-)clustering for high-dimensional sparse data : application to text and collaborative filtering data". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB093/document.

Full text
Abstract
Cluster analysis, or clustering, which aims to group together similar objects, is undoubtedly one of the most useful unsupervised learning techniques in the context of Big Data. With the growing amount of available data, notably on the web, clustering is gaining importance in data science for tasks such as automatic summarization, dimensionality reduction, visualization, outlier detection, speeding up search engines, and organizing huge data sets. Existing clustering approaches are, however, severely challenged by the high dimensionality and extreme sparsity of the data sets arising in some current areas of interest, such as Collaborative Filtering (CF) and text mining. Such data, often represented as matrices, consist of thousands of features with more than 95% zero entries. In addition to being high-dimensional and sparse, the data sets encountered in these domains are also directional in nature. Several previous studies have empirically demonstrated that directional measures, which compare objects by the angle between them, such as the cosine similarity, are substantially superior to measures such as the Euclidean distance for clustering text documents or assessing the similarities between users/items in CF. This suggests that in such contexts only the direction of a data vector (e.g., a text document), not its magnitude, is relevant. It is worth noting that the cosine similarity is exactly the scalar product between unit-length (L2-normalized) vectors. Thus, from a probabilistic perspective, using the cosine similarity is equivalent to assuming that the data are directional and distributed on the surface of a unit hypersphere. Despite the substantial empirical evidence that certain high-dimensional sparse data sets are better modeled as directional data, most existing models in text mining and CF rely on popular assumptions such as Gaussian, Multinomial or Bernoulli distributions, which are inadequate for L2-normalized data. In this thesis, we focus on two challenging tasks that continue to attract attention in text mining and CF, respectively: text document clustering and item recommendation. To address the above limitations, we propose a suite of new models and algorithms based on the von Mises-Fisher (vMF) distribution, which arises naturally for directional data lying on a unit hypersphere.
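As a concrete illustration of the directional-data view, spherical k-means, the hard-assignment limit of a von Mises-Fisher mixture with shared concentration, clusters L2-normalized vectors by cosine similarity. A minimal sketch on synthetic data (not the thesis's own (co-)clustering algorithms):

```python
# Sketch: spherical k-means on L2-normalized data. Data are synthetic.
import numpy as np

def l2_normalize(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = l2_normalize(rng.random((200, 50)))   # tf-idf-like rows on the sphere
k = 3
centroids = l2_normalize(rng.random((k, 50)))

for _ in range(20):
    sim = X @ centroids.T                 # cosine similarity (unit vectors)
    labels = sim.argmax(axis=1)           # assign to most similar direction
    for j in range(k):                    # centroid = renormalized mean
        members = X[labels == j]
        if len(members):
            s = members.sum(axis=0)
            centroids[j] = s / np.linalg.norm(s)

print("cluster sizes:", np.bincount(labels, minlength=k))
```

The renormalized-mean centroid is exactly the maximum-likelihood mean direction of a vMF component, which is why cosine-based clustering and vMF modelling coincide in this limit.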
APA, Harvard, Vancouver, ISO, etc. styles
28

POMA, GIULIA. "Evaluation of bioaccumulation processes of brominated flame retardants in biotic matrices". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50902.

Full text
Abstract
The global reduction in the use of PBDEs and HBCD as flame retardants has opened the way for the introduction of "Novel" BFRs (NBFRs) in place of the banned formulations, the term denoting BFRs that are new to the market or only recently observed in the environment, compared with PBDEs and HBCD. Consequently, consumption and production of these NBFRs will keep rising, and increasing environmental levels of these chemicals are expected in the near future. Important representatives of this group are decabromodiphenyl ethane (DBDPE), 1,2-bis(2,4,6-tribromophenoxy)ethane (BTBPE), hexabromobenzene (HBB), and pentabromoethylbenzene (PBEB). In Italy, previous studies have measured high concentrations of some BFRs (PBDEs) in the Varese province, which hosts a large number of textile and plastics industries, and particularly in the sediments of Lake Maggiore, where wastewater from those facilities is ultimately collected, mainly through two lake tributaries (Bardello and Boesio). For these reasons, the present thesis aims to evaluate the presence, and the potential bioaccumulation and biomagnification processes, of six different classes of BFRs (PBDEs, HBCD, DBDPE, BTBPE, HBB and PBEB) in the Lake Maggiore ecosystem, with particular regard to zebra mussels (Dreissena polymorpha), zooplankton, one littoral fish species (common roach, Rutilus rutilus), and two pelagic species (twaite shad, Alosa agone, and European whitefish, Coregonus lavaretus). Finally, the study also considered BFR contamination in the lake sediments, with the aim of characterizing possible temporal trends and/or identifying potential sources of contamination. Moreover, it is plausible that BFR uptake by benthic organisms, followed by fish predation, is a significant route of bioaccumulation.
APA, Harvard, Vancouver, ISO, etc. styles
29

Salah, Aghiles. "Von Mises-Fisher based (co-)clustering for high-dimensional sparse data : application to text and collaborative filtering data". Electronic Thesis or Diss., Sorbonne Paris Cité, 2016. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=1858&f=11557.

Full text
Abstract
Cluster analysis, or clustering, which aims to group together similar objects, is undoubtedly one of the most useful unsupervised learning techniques in the context of Big Data. With the growing amount of available data, notably on the web, clustering is gaining importance in data science for tasks such as automatic summarization, dimensionality reduction, visualization, outlier detection, speeding up search engines, and organizing huge data sets. Existing clustering approaches are, however, severely challenged by the high dimensionality and extreme sparsity of the data sets arising in some current areas of interest, such as Collaborative Filtering (CF) and text mining. Such data, often represented as matrices, consist of thousands of features with more than 95% zero entries. In addition to being high-dimensional and sparse, the data sets encountered in these domains are also directional in nature. Several previous studies have empirically demonstrated that directional measures, which compare objects by the angle between them, such as the cosine similarity, are substantially superior to measures such as the Euclidean distance for clustering text documents or assessing the similarities between users/items in CF. This suggests that in such contexts only the direction of a data vector (e.g., a text document), not its magnitude, is relevant. It is worth noting that the cosine similarity is exactly the scalar product between unit-length (L2-normalized) vectors. Thus, from a probabilistic perspective, using the cosine similarity is equivalent to assuming that the data are directional and distributed on the surface of a unit hypersphere. Despite the substantial empirical evidence that certain high-dimensional sparse data sets are better modeled as directional data, most existing models in text mining and CF rely on popular assumptions such as Gaussian, Multinomial or Bernoulli distributions, which are inadequate for L2-normalized data. In this thesis, we focus on two challenging tasks that continue to attract attention in text mining and CF, respectively: text document clustering and item recommendation. To address the above limitations, we propose a suite of new models and algorithms based on the von Mises-Fisher (vMF) distribution, which arises naturally for directional data lying on a unit hypersphere.
APA, Harvard, Vancouver, ISO, etc. styles
30

Blahová, Eliška. "Stanovení vybraných "Musk" sloučenin v biotických vzorcích". Master's thesis, Vysoké učení technické v Brně. Fakulta chemická, 2008. http://www.nusl.cz/ntk/nusl-216382.

Full text
Abstract
Musk compounds (musks), or synthetic fragrances, are organic substances commonly used as fragrant constituents of detergents, soaps, cosmetics, personal care products, industrial and household cleaning agents, industrial plasticizers, chewing tobacco and fresheners. Considerable attention is currently devoted to studying these compounds, their characteristics and their fate in different parts of ecosystems, because, through their wide application and their persistence, musks infiltrate many environmental compartments, particularly aquatic and marine ecosystems. This diploma thesis focused on four "classical" synthetic fragrances used worldwide. The aim of the study was to optimize a method for the determination of the selected fragrances in biotic matrices. The ability of a selected wastewater treatment plant to remove musks from water was evaluated, and the results were used to assess the contamination of the aquatic ecosystem. Identification and quantification of the analytes were carried out by high-resolution gas chromatography coupled with mass spectrometry (HRGC/MS).
APA, Harvard, Vancouver, ISO, etc. styles
31

bin, Ahmad Khairuddin Taufiq. "Characterization of objects by fitting the polarization tensor". Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/characterization-of-objects-by-fitting-the-polarization-tensor(1ee0de67-fdd4-4fae-ba00-3f2e4f3987a8).html.

Full text
Abstract
This thesis focuses on some mathematical aspects and a few recent applications of the polarization tensor (PT). The main concern of the study is to characterize objects presented in electrical or electromagnetic fields using only the PT. This is possible since the PT contains significant information about the object, such as shape, orientation and material properties. Two main applications are considered: electrosensing fish and metal detection. For each application, we present a mathematical formulation of the PT and briefly discuss its properties. The PT relevant to electrosensing fish is based on the first-order generalized polarization tensor (GPT), while the GPT itself generalizes the classical PT known as the Pólya-Szegő PT. In order to investigate the role of the PT in electrosensing fish, we propose two numerical methods to compute the first-order PT. The first method is based directly on the quadrature method of numerical integration, while the second adapts elements of the boundary element method (BEM). A code implementing the first method is developed, and an interface script is written for the newly developed BEM code. Comparing the two methods, our numerical results show that the first-order PT is more accurate, with faster convergence, when computed with the BEM-based code. We also give a strategy for determining an ellipsoid from a given first-order PT; this is motivated by a proposed experiment to test whether electrosensing fish can discriminate between a pair of different objects with the same first-order PT, where the pair could be an ellipsoid and some other object. In addition, the first-order PT (the Pólya-Szegő PT) with complex conductivity (or complex permittivity), which is similar to the PT for Maxwell's equations, is also investigated. Furthermore, following the recent mathematical foundation of the PT derived from the eddy current model, we use the newly proposed explicit formula to compute the rank-2 PT for a few metallic targets relevant to metal detection. We show that the PT for the targets computed from the explicit formula agrees, to some degree of accuracy, with the PT obtained from metal detectors in experimental work and simulations conducted by engineers. This suggests using the explicit formula, which depends only on the geometry and material properties of the target and requires lower computational effort than performing measurements with metal detectors, as an alternative way to obtain the PT. Using the explicit formula for the rank-2 PT, we also numerically investigate some of its properties; the information obtained could be useful for improving metal detection and for other potential applications of eddy currents. In the case of a magnetic but non-conducting target, the rank-2 PT can also be computed using the explicit formula for the first-order PT.
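As a small illustration of how a PT encodes orientation: a measured symmetric first-order PT can be eigendecomposed, with eigenvectors indicating the object's principal axes. A hypothetical example (tensor values made up, not taken from the thesis):

```python
# Sketch: reading orientation out of a symmetric first-order polarization
# tensor via eigendecomposition. The tensor below is hypothetical.
import numpy as np

M = np.array([[2.0, 0.6, 0.0],
              [0.6, 1.0, 0.0],
              [0.0, 0.0, 0.8]])          # made-up measured PT (symmetric)

eigvals, eigvecs = np.linalg.eigh(M)
# For an ellipsoid-like object, eigenvectors align with the principal axes,
# and distinct eigenvalues indicate elongation along those axes.
print("principal values:", eigvals)
print("principal axes (columns):\n", eigvecs)
```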
APA, Harvard, Vancouver, ISO, etc. styles
32

Bastian, Michael R. "Neural Networks and the Natural Gradient". DigitalCommons@USU, 2010. https://digitalcommons.usu.edu/etd/539.

Full text
Abstract
Neural network training algorithms have always suffered from the problem of local minima. The advent of natural gradient algorithms promised to overcome this shortcoming by finding better local minima. However, they require additional training parameters and computational overhead. By using a new formulation for the natural gradient, an algorithm is described that uses less memory and processing time than previous algorithms with comparable performance.
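For reference, a natural-gradient step preconditions the ordinary gradient with the inverse Fisher information matrix. A minimal sketch on a toy logistic-regression model (illustrative only; the thesis's memory-saving reformulation is not reproduced here):

```python
# Sketch: damped natural-gradient descent on L2-penalized logistic regression.
# Data and hyperparameters are made up.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

theta = np.zeros(3)
lr, ridge = 0.5, 0.1          # small L2 penalty keeps the optimum finite
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    grad = X.T @ (p - y) / len(y) + ridge * theta       # penalized gradient
    # Fisher information for logistic regression: X^T diag(p(1-p)) X / n,
    # plus the penalty's curvature, which also acts as damping.
    F = (X * (p * (1 - p))[:, None]).T @ X / len(y) + ridge * np.eye(3)
    theta -= lr * np.linalg.solve(F, grad)              # natural-gradient step

print("fitted direction:", np.round(theta / np.linalg.norm(theta), 2))
```

Because F rescales the step along stiff and sloppy parameter directions differently, the update follows the geometry of the model rather than of the raw parameter space, which is what helps such methods reach better local minima.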
APA, Harvard, Vancouver, ISO, etc. styles
33

Strömberg, Eric. "Applied Adaptive Optimal Design and Novel Optimization Algorithms for Practical Use". Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-308452.

Full text
Abstract
The costs of developing new pharmaceuticals have increased dramatically during the past decades. Contributing to these increased expenses are the increasingly extensive and more complex clinical trials required to generate sufficient evidence regarding the safety and efficacy of the drugs. It is therefore of great importance to improve the effectiveness of the clinical phases by increasing the information gained throughout the process, so that the correct decision may be made as early as possible. Optimal Design (OD) methodology using the Fisher Information Matrix (FIM) based on Nonlinear Mixed Effect Models (NLMEM) has proven to be a useful tool for making more informed decisions throughout the clinical investigation. The calculation of the FIM for NLMEM, however, lacks an analytic solution and is commonly approximated by linearization of the NLMEM. Furthermore, two structural assumptions for the FIM are available: a full FIM, and a block-diagonal FIM which assumes that the fixed effects are independent of the random effects in the NLMEM. Once the FIM has been derived, it can be transformed into a scalar optimality criterion for comparing designs. The optimality criterion may be local, if based on single point values of the parameters, or global (robust), if formed over a prior distribution of the parameters. Regardless of design criterion, FIM approximation or structural assumption, the design will be based on prior information regarding the model and parameters, and is thus sensitive to misspecification at the design stage. Model-based adaptive optimal design (MBAOD), however, has been shown to be less sensitive to misspecification at the design stage. The aim of this thesis is to further the understanding and practicality of standard and model-based adaptive optimal design. This is achieved by: (i) investigating how two common FIM approximations and the structural assumptions may affect the optimized design, (ii) reducing the runtimes of complex design optimizations by implementing a low-level parallelization of the FIM calculation, (iii) further developing and demonstrating a framework for performing MBAOD, and (iv) investigating the potential advantages of using a global optimality criterion in the already robust MBAOD.
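A minimal sketch of two ingredients mentioned above, the block-diagonal FIM assumption and a scalar D-optimality criterion, using a made-up FIM rather than one derived from an NLMEM:

```python
# Sketch: full vs block-diagonal FIM and the D-optimality criterion.
# The FIM and the parameter partition below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
fim_full = A @ A.T + 5 * np.eye(5)   # made-up positive-definite full FIM
p_fixed = 3                          # first 3 params: fixed effects

# Block-diagonal assumption: zero the fixed/random cross-information.
fim_block = fim_full.copy()
fim_block[:p_fixed, p_fixed:] = 0.0
fim_block[p_fixed:, :p_fixed] = 0.0

def d_criterion(fim):
    # det(FIM)^(1/p): a scale-comparable D-optimality criterion.
    return np.exp(np.linalg.slogdet(fim)[1] / fim.shape[0])

print("D-criterion, full FIM:          ", d_criterion(fim_full))
print("D-criterion, block-diagonal FIM:", d_criterion(fim_block))
```

Comparing the two criterion values on the same candidate design illustrates how the structural assumption alone can change which design an optimizer prefers.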
APA, Harvard, Vancouver, ISO, etc. styles
34

Veloso, Ana C. A. "Optimização de estratégias de alimentação para identificação de parâmetros de um modelo de E. coli. utilização do modelo em monitorização e controlo". Doctoral thesis, Universidade do Minho, 2007. http://hdl.handle.net/10198/1049.

Full text
Abstract
The main objectives of this thesis are: the optimal design of experiments for yield coefficient estimation in an unstructured growth model of an Escherichia coli fed-batch fermentation; the experimental validation of the simulated feed trajectories; the development of advanced monitoring strategies for the on-line estimation of state variables and kinetic parameters; and, finally, the development of an adaptive control law for the specific growth rate, based on optimal substrate feed strategies aimed at maximizing growth and/or production. Methodologies for optimal experiment design are presented that maximize the informative content of the experiments, quantified by indices based on the Fisher Information Matrix. Although the model used to describe the E. coli fed-batch fermentation is not yet optimized in kinetic terms, and despite some difficulties encountered in the practical implementation of the simulated optimal designs, the quality of the parameter estimates, especially those of the oxidative regime, is promising. Estimation uncertainty was evaluated by means of indices related to the multiple linear regression model, indices related to the Fisher matrix, and the construction of the corresponding deviation ellipses. The deviations associated with each coefficient show that the best values have not yet been found. The role of the general dynamical model in the design of state observers, also called software sensors, was also investigated. Three observers were applied to estimate biomass concentration, and their performance and flexibility were evaluated and compared: the extended Kalman observer, the asymptotic observer and the interval observer. The observers studied proved robust and complementary to one another. Asymptotic observers showed, in general, better performance than extended Kalman observers. Interval observers present advantages for practical implementation and appear quite promising, although experimental validation is still needed. A model-reference adaptive control law is presented for controlling the specific growth rate; it can be interpreted as a feedforward/feedback controller with a PI-type feedback action. The robustness of the control algorithm was studied by numerical simulation with "pseudo-real" data, applying white noise to the on-line measured variables, changing the set-point, changing the glucose concentration of the feed, and varying the nominal model parameter values. The study shows that the controller response is generally satisfactory, keeping the specific growth rate close to the desired set-point and below the value that leads to acetate formation, which is of great importance in real settings, especially in fermentations aimed at production, notably of recombinant proteins. Different methods for tuning the controller parameters were also analysed; in general, the automatic tuning method using a parameter-adaptation rule driven by the controller's relative error showed the best overall performance. This automatic tuning mechanism proved able to improve controller performance by continuously adjusting the controller parameters.
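A minimal sketch of the feedback part of such a controller: a discrete PI loop driving a toy, Monod-like growth response toward a specific-growth-rate set-point (hypothetical numbers and plant; not the thesis's full feedforward/feedback law):

```python
# Sketch: discrete positional PI control of specific growth rate via the
# substrate feed rate. The "plant" is a crude static toy, not the E. coli model.
import numpy as np

mu_set = 0.25                 # desired specific growth rate (1/h), made up
Kp, Ki, dt = 2.0, 0.5, 0.1    # PI gains and sampling interval, made up
F0, F, integral = 0.05, 0.05, 0.0   # feed rate F is the manipulated input

def plant(F):
    """Toy static response: growth rate saturates with feed (Monod-like)."""
    return 0.5 * F / (F + 0.1)

for step in range(200):
    mu = plant(F)
    error = mu_set - mu
    integral += error * dt
    F = max(0.0, F0 + Kp * error + Ki * integral)   # PI update on feed rate

print(f"final mu = {plant(F):.3f} (set-point {mu_set})")
```

Keeping the set-point below the growth rate at which acetate formation begins, as the thesis emphasizes, then amounts to choosing mu_set conservatively for the real process.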
APA, Harvard, Vancouver, ISO, etc. styles
35

Sousa, Sília Maria de Negreiros. "Relação entre energia e proteína digestíveis para matrizes de tilápia do Nilo (Oreochromis niloticus)". Universidade Estadual do Oeste do Paraná, 2012. http://tede.unioeste.br:8080/tede/handle/tede/1651.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The goal of this study was to evaluate the effects of different levels of digestible protein (DP) and digestible energy (DE) on growth, breeding performance and offspring quality in females of Nile tilapia (Oreochromis niloticus). Nine diets were evaluated, combining three DP levels (28, 34 and 40%) and three DE levels (2,800, 3,400 and 4,000 kcal.kg-1), with three replicates. Fish were kept in net cages distributed in an earthen pond under natural conditions. Temperature was measured daily, and pH and dissolved oxygen weekly. Breeding management was carried out over 260 days (September 2010 to April 2011) at a sex ratio of 3 females to 1 male, with ten days of rest and four days of mating. Females were evaluated every 14 days for mean weight, standard length, weight gain, condition factor, specific growth rate, feed conversion and survival. For breeding performance, the analyzed parameters were mean egg weight, egg diameter, absolute fecundity, relative fecundity and mean larval weight at hatching; for that, eggs were collected from the oral cavity after mating for subsequent artificial incubation. In January 2011, offspring samples were collected and raised to 30 days of age to evaluate growth parameters during the sex-reversal stage. A broodstock sample was dissected to measure the viscerosomatic, hepatosomatic and gonadosomatic indexes. Water quality remained adequate for this species, but temperature was below that recommended for broodstock. The tested diets influenced female growth (p<0.05) throughout the experimental period, as well as the viscerosomatic index in the sampling month (p<0.05). Regarding reproductive traits, the treatments had no effect on egg production (p>0.05); nonetheless, energy levels affected relative fecundity (p<0.05), and protein levels influenced both egg and larval weight at hatching (p<0.05). No differences were detected in the growth of offspring derived from broodstock fed the different diets (p>0.05). Thus, diets containing 28% DP and 2,800 kcal DE.kg-1 are indicated for Nile tilapia broodstock, since they ensure a higher egg production per gram of female without affecting offspring performance.
APA, Harvard, Vancouver, ISO, etc. styles
36

Marina, Kalić. "Fizičko-hemijska i reološka karakterizacija mikrokapsula ribljeg ulja inkorporiranih u čokoladni matriks". Phd thesis, Univerzitet u Novom Sadu, Medicinski fakultet u Novom Sadu, 2020. https://www.cris.uns.ac.rs/record.jsf?recordId=114094&source=NDLTD&language=en.

Full text
Abstract
Omega-3 masne kiseline su uslovne za zdravlje ljudi i imaju značajne fiziološke uloge. Dijetetski proizvodi na bazi omega-3 masnih kiselina predstavljaju značajan izvor omega-3 masnih kiselina. Riblje ulje je dobar izvor polinezasićenih masnih kiselina (PUFA). Dnevni unos omega-3 polinezasićenih masnih kiselina je u većini delova sveta ispod preporučenog, uglavnom usled nedovoljne zastupljenosti ribe u ishrani. Zbog toga se danas riblje ulje nalazi u obliku različitih dijetetskih proizvoda i ponuda ovih preparata na tržištu je veoma široka. Problem sa unosom ribljeg ulja kao dodatka ishrani je njegov intenzivan i neprijatan ukus i miris, što može da dovede do neadekvatne suplementacije. Sušenje raspršivanjem (engl. spray drying) predstavlja tehniku koja omogućava trenutno sušenje rastvora, suspenzija ili emulzija. U pitanju je metoda koja ima široku primenu u farmaceutskoj industriji, a između ostalog se primenjuje i u cilju maskiranja neprijatnog ukusa lekova. Kao omotač mikrokapsula dobijenih sušenjem raspršivanjem je moguće koristiti proteine, ali je neophodno dobro ispitati i poznavati njihove fizičko-hemijske osobine i funkcionalnost. Inkorporiranjem mikrokapsula ribljeg ulja u čokoladu bi se dobila funkcionalna ili obogaćena hrana, što predstavlja i finalnu formulaciju u ovom radu. Obogaćivanjem čokolade sa visokim sadržajem kakao delova mikrokapsulama ribljeg ulja kreirao bi se višestruko funkcionalan proizvod. Odabir čokolade kao matriksa za obogaćivanje uslovljen je činjenicom da je ona široko konzumiran proizvod. Ciljevi ovog rada bili su da se ispita uticaj metode sušenja raspršivanjem na stabilnost preformulacije ribljeg ulja, da se utvrde karakteristike mikrokapsula dobijenih sušenjem raspršivanjem (prinos i efikasnost mikrokapsulacije, oksidativnu stabilnost ulja, morfološke osobine i veličinu mikrokapsula), zatim da se utvrdi uticaj veličine čestica na kristalizaciju u masnoj fazi suspenzije koja se koristi za izradu konditorskih proizvoda i da se utvrde fizičko-hemijske karakteristike (teksturu, boju, reološke osobine) čokolade koja sadrži inkorporirane mikrokapsule ribljeg ulja u odnosu na čokoladu bez dodatka mikrokapsula. Metode su obuhvatale karakterizaciju proteina dobijenih iz soje, graška, krompira, pirinča i surutke, njihovih rastvora, kao i emulzija ribljeg ulja u vodenim rastvorima tih proteina, određivanje prinosa i efikasnosti mikrokapsulacije i karakterizaciju dobijenih mikrokapsula. Prilikom ispitivanja uticaja veličine čestica na kristalizaciju u masnoj fazi suspenzije koja se koristi za izradu konditorskih proizvoda i fizičko-hemijskih osobina čokolade koja sadrži inkorporirane mikrokapsule ribljeg ulja primenjivane su metode za određivanje teksture, reoloških karakteristika, sadržaja čvrstih masti i boje dobijenih formulacija. Dobijeni rezultati su pokazali da se proteini ponašaju kao dobri emulgatori i da sušenje raspršivanjem predstavlja efikasan način za dobijanje mikrokapsula ribljeg ulja sa proteinima kao omotačima mikrokapsula. Kristalizacija masne faze u suspenziji koja predstavlja model čokolade zavisi od veličine čvrstih čestica. Kada je u pitanju proizvodnja čokolade sa inkorporiranim mikrokapsulama ribljeg ulja kod kojih su kao omotači korišćeni proteini soje, surutke i krompira, dodatak tih mikrokapsula ne utiče na karakteristike čokolade u meri dovoljnoj da bi se narušio proizvodni proces izrade. Sve navedeno upućuje na zaključak da bi proizvodnja čokolade sa inkorporiranim mikrokapsulama ribljeg ulja, tehnološki bila moguća.
Omega-3 fatty acids are essential for human health and have significant physiological roles, and dietary products based on them are an important source of these acids. Fish oil is a good source of polyunsaturated fatty acids (PUFA). The daily intake of omega-3 polyunsaturated fatty acids is below the recommended level in most parts of the world, mainly due to the lack of fish in the diet. For this reason, fish oil is now offered in the form of various dietary products, and the range of such preparations on the market is very wide. The problem with taking fish oil as a dietary supplement is its intense and unpleasant taste and odor, which can lead to inadequate supplementation. Spray drying is a technique that allows instantaneous drying of solutions, suspensions or emulsions. It is widely used in the pharmaceutical industry, among other things to mask the unpleasant taste of medicines. Proteins can be used as the shell of spray-dried microcapsules, but their physicochemical properties and functionality must be well characterized. Incorporating fish oil microcapsules into chocolate would yield a functional or enriched food, which is the final formulation in this work. Enriching chocolate with a high cocoa content with fish oil microcapsules would create a multiply functional product. The choice of chocolate as the carrier matrix is motivated by the fact that it is a widely consumed product. The aims of this study were to investigate the effect of the spray-drying method on the stability of the fish oil pre-formulation; to determine the characteristics of the spray-dried microcapsules (yield and efficiency of microencapsulation, oxidative stability of the oil, morphological properties and microcapsule size); to determine the effect of particle size on crystallization in the fat phase of the suspension used for the manufacture of confectionery products; and to determine the physicochemical characteristics (texture, color, rheological properties) of chocolate containing incorporated fish oil microcapsules in comparison with chocolate without added microcapsules. The methods included the characterization of proteins obtained from soybeans, peas, potatoes, rice and whey, of their solutions, and of fish oil emulsions in aqueous solutions of these proteins; the determination of the yield and efficiency of microencapsulation; and the characterization of the microcapsules obtained. In examining the effect of particle size on crystallization in the fat phase of the suspension used for the manufacture of confectionery products, and the physicochemical properties of chocolate containing incorporated fish oil microcapsules, methods were used to determine the texture, rheological characteristics, solid fat content and color of the formulations obtained. The results showed that the proteins act as good emulsifiers and that spray drying is an effective way to obtain fish oil microcapsules with proteins as microcapsule shells. The crystallization of the fat phase in the suspension representing the chocolate model depends on the size of the solid particles. In the production of chocolate with incorporated fish oil microcapsules using soybean, whey and potato proteins as shell materials, the addition of these microcapsules does not affect the characteristics of the chocolate to a degree sufficient to impair the manufacturing process. All of the above points to the conclusion that the production of chocolate with incorporated fish oil microcapsules would be technologically feasible.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Veloso, Ana C. A. "Optimização de estratégias de alimentação para identificação de parâmetros de um modelo de E. coli. utilização do modelo em monitorização e controlo". Doctoral thesis, Universidade do Minho, 2007. http://hdl.handle.net/1822/6289.

Texto completo
Resumen
Doctorate in Chemical and Biological Engineering
Os principais objectivos desta tese são: o desenho óptimo de experiências para a identificação de coeficientes de rendimento de um modelo não estruturado de um processo de fermentação semicontínua de Escherichia coli; a verificação experimental das trajectórias de alimentação obtidas por simulação; o desenvolvimento de estratégias de monitorização avançada para a estimação em linha de variáveis de estado e parâmetros cinéticos; e por fim o desenvolvimento de uma lei de controlo adaptativo para controlar a taxa específica de crescimento, com base em estratégias de alimentação de substrato com vista à maximização do crescimento e/ou produção. São apresentadas metodologias para o desenho óptimo de experiências, que visam a optimização da riqueza informativa das mesmas, quantificada por índices relativos à Matriz de Informação de Fisher. Embora, o modelo utilizado para descrever a fermentação semi-contínua de E. coli não esteja ainda optimizado em termos cinéticos e de algumas dificuldades encontradas na implementação prática dos resultados obtidos por simulação para o desenho óptimo de experiências, a qualidade da estimativa dos parâmetros, especialmente os do regime respirativo, é promissora. A incerteza das estimativas foi avaliada através de índices relacionados com o modelo de regressão linear múltipla, índices relativos à matriz de Fisher e pelo desenho das correspondentes elipses dos desvios. Os desvios associados a cada coeficiente mostram que ainda não foram encontrados os melhores valores. Procedeu-se também à investigação do papel do modelo dinâmico geral no desenho de sensores por programação. Foram aplicados três observadores – observador estendido de Kalman, observador assimptótico e observador por intervalo – para estimar a concentração de biomassa, tendo sido avaliado e comparado o seu desempenho bem como a sua flexibilidade. Os observadores estudados mostraram-se robustos, apresentando comportamentos complementares. Os observadores assimptóticos apresentam, em geral, um melhor desempenho que os observadores estendidos de Kalman. Os observadores por intervalo apresentam vantagens em termos de implementação prática, apresentando-se bastante promissores embora a sua validação experimental seja necessária. É apresentada uma lei de controlo adaptativo com modelo de referência que se traduz num controlo por antecipação/retroacção cuja acção de retroacção é do tipo PI, para controlar a taxa específica de crescimento. A robustez do algoritmo de controlo foi estudada por simulação numérica gerando dados “pseudo reais”, por aplicação de um ruído branco às variáveis medidas em linha, por alteração do valor de referência, por alteração do valor da concentração da glucose na alimentação e variando os valores nominais dos parâmetros do modelo. O estudo realizado permite concluir que a resposta do controlador é em geral satisfatória, sendo capaz de manter o valor da taxa específica de crescimento na vizinhança do valor de referência pretendido e inferior a um valor que conduz à formação de acetato, revestindo-se este facto de grande importância numa situação real, em especial, numa fermentação cujo objectivo seja a produção, nomeadamente de proteínas recombinadas. Foram ainda, analisados diferentes métodos de sintonização dos parâmetros do controlador, podendo concluir-se que, em geral, o método de sintonização automática com recurso à regra de adaptação dos parâmetros em função do erro relativo do controlador foi o que apresentou um melhor desempenho global. 
Este mecanismo de sintonização automática demonstrou capacidade para melhorar o desempenho do controlador ajustando continuamente os seus parâmetros.
The main objectives of this thesis are: the optimal design of experiments for yield coefficient estimation in an unstructured growth model of Escherichia coli fed-batch fermentation; the experimental validation of the simulated feed trajectories; the development of advanced monitoring strategies for the on-line estimation of state variables and kinetic parameters; and, finally, the development of an adaptive control law, based on optimal substrate feeding strategies, to control the specific growth rate in order to maximize growth and/or production. Methodologies for optimal experimental design are presented that maximize the informative richness of the experiments, quantified by indexes based on the Fisher Information Matrix. Although the model used to describe the E. coli fed-batch fermentation is not yet optimised from the kinetic point of view, and although some difficulties were encountered in the practical implementation of the simulated optimal experimental designs, the quality of the parameter estimates, especially for the oxidative regime, is promising. The estimation uncertainty was evaluated by means of indexes related to the multiple linear regression model, indexes related to the Fisher matrix, and the construction of the corresponding deviation ellipses. The deviations associated with each coefficient show that the best values have not yet been found. The role of the general dynamical model in the design of state observers, also called software sensors, was also investigated. The performance and flexibility of three observer classes were compared: the extended Kalman observer, the asymptotic observer and the interval observer. The studied observers showed good performance and robustness, with complementary behaviours. Asymptotic observers showed, in general, a better performance than the extended Kalman observer. Interval observers presented advantages concerning practical implementation and showed promising behaviour, although their experimental validation is still needed. A model-reference adaptive control law is presented, which can be interpreted as a feedforward/feedback controller with a PI-like feedback action, for specific growth rate control. The robustness of the algorithm was studied using "pseudo-real" data obtained by numerical simulation: applying white noise to the on-line measured variables, modifying the set-point value, changing the glucose concentration of the feed, and varying the nominal model parameter values. The study allows the conclusion that the controller response is generally satisfactory, keeping the specific growth rate close to the desired set-point and below the value that leads to acetate formation, which is of major importance in real applications, especially in fermentations whose objective is production, namely of recombinant proteins. Different tuning methods for the controller parameters were analysed, with the best overall performance achieved by the automatic tuning method that adapts the parameters as a function of the controller's relative error. This automatic tuning mechanism was able to improve the controller performance by continuously adjusting its parameters.
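The Fisher Information Matrix machinery behind this kind of optimal experiment design can be illustrated compactly. The sketch below is not the thesis's E. coli model: it accumulates a FIM from output sensitivities of an invented exponential-growth model with an assumed measurement variance, and scores a sampling schedule by the D-optimality criterion (maximize det F).

```python
import numpy as np

def fisher_information(sensitivities, meas_variance):
    """Accumulate F = sum_k S_k^T S_k / sigma^2 from output sensitivities
    S_k = dy(t_k)/dtheta evaluated at each sampling time t_k."""
    p = np.atleast_2d(sensitivities[0]).shape[-1]
    F = np.zeros((p, p))
    for S in sensitivities:
        S = np.atleast_2d(S)
        F += S.T @ S / meas_variance
    return F

def d_optimality(F):
    # D-optimality: maximize det(F), i.e. minimize the volume
    # of the asymptotic parameter confidence ellipsoid
    return np.linalg.det(F)

# toy example: exponential growth y = x0 * exp(mu * t), theta = (x0, mu)
x0, mu = 1.0, 0.4
times = np.array([1.0, 2.0, 4.0, 8.0])        # candidate sampling schedule
sens = [np.array([[np.exp(mu*t), x0*t*np.exp(mu*t)]]) for t in times]
F = fisher_information(sens, meas_variance=0.01)
print(d_optimality(F))
```

Comparing this score across candidate schedules is, in miniature, what the simulation-based design optimisation does.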
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Figueiredo, Cléber da Costa. "Calibração linear assimétrica". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-08032013-141153/.

Texto completo
Resumen
A presente tese aborda aspectos teóricos e aplicados da estimação dos parâmetros do modelo de calibração linear com erros distribuídos conforme a distribuição normal-assimétrica (Azzalini, 1985) e t-normal-assimétrica (Gómez, Venegas e Bolfarine, 2007). Aplicando um modelo assimétrico, não é necessário transformar as variáveis a fim de obter erros simétricos. A estimação dos parâmetros e das variâncias dos estimadores do modelo de calibração foram estudadas através da visão freqüentista e bayesiana, desenvolvendo algoritmos tipo EM e amostradores de Gibbs, respectivamente. Um dos pontos relevantes do trabalho, na óptica freqüentista, é a apresentação de uma reparametrização para evitar a singularidade da matriz de informação de Fisher sob o modelo de calibração normal-assimétrico na vizinhança de lambda = 0. Outro interessante aspecto é que a reparametrização não modifica o parâmetro de interesse. Já na óptica bayesiana, o ponto forte do trabalho está no desenvolvimento de medidas para verificar a qualidade do ajuste e que levam em consideração a assimetria do conjunto de dados. São propostas duas medidas para medir a qualidade do ajuste: o ADIC (Asymmetric Deviance Information Criterion) e o EDIC (Evident Deviance Information Criterion), que são extensões da ideia de Spiegelhalter et al. (2002) que propôs o DIC ordinário que só deve ser usado em modelos simétricos.
This thesis focuses on theoretical and applied aspects of the estimation of the linear calibration model with skew-normal (Azzalini, 1985) and skew-t-normal (Gómez, Venegas and Bolfarine, 2007) error distributions. With an asymmetric model, it is not necessary to transform the variables in order to obtain symmetric errors. Both frequentist and Bayesian solutions are presented: the parameters and the variances of their estimators were studied using EM-type algorithms and Gibbs samplers, respectively. The main point in the frequentist approach is the presentation of a reparameterization that avoids the singularity of the information matrix under the skew-normal calibration model in a neighborhood of lambda = 0. Another interesting aspect is that this reparameterization, developed to make the information matrix nonsingular when the skewness parameter is near zero, leaves the parameter of interest unchanged. The main point in the Bayesian framework is the development of goodness-of-fit measures that take the asymmetry of the data into account: the ADIC (Asymmetric Deviance Information Criterion) and the EDIC (Evident Deviance Information Criterion), natural extensions of the ordinary DIC proposed by Spiegelhalter et al. (2002), which should only be used for symmetric models.
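For readers unfamiliar with the singularity issue mentioned above: under the direct parameterization of the skew-normal, the expected Fisher information is singular at lambda = 0 because the shape score becomes collinear with the location score. A minimal sketch with scipy.stats.skewnorm (simulated data and all values illustrative; this is not the thesis's EM/Gibbs machinery) profiles the log-likelihood over the shape parameter:

```python
import numpy as np
from scipy.stats import skewnorm

# simulate calibration-style errors with skewness (illustrative values)
rng = np.random.default_rng(0)
lam_true, loc_true, scale_true = 2.0, 0.0, 1.5
errors = skewnorm.rvs(a=lam_true, loc=loc_true, scale=scale_true,
                      size=200, random_state=rng)

def profile_loglik(lam, data):
    # profile log-likelihood over the shape parameter lambda,
    # refitting location and scale with the shape held fixed (f0=lam)
    loc_hat, scale_hat = skewnorm.fit(data, f0=lam)[1:]
    return skewnorm.logpdf(data, a=lam, loc=loc_hat, scale=scale_hat).sum()

# near lam = 0 the information for the shape parameter degenerates,
# which is what motivates the reparameterization studied in the thesis
for lam in (-0.5, 0.0, 0.5, 2.0):
    print(lam, round(profile_loglik(lam, errors), 2))
```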
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Silva, Michel Aguena da. "Cosmologia usando aglomerados de galáxias no Dark Energy Survey". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-22102017-163407/.

Texto completo
Resumen
Aglomerados de galáxias são as maiores estruturas no Universo. Sua distribuição mapeia os halos de matéria escura formados nos potenciais profundos do campo de matéria escura. Consequentemente, a abundância de aglomerados é altamente sensível à expansão do Universo, assim como ao crescimento das perturbações de matéria escura, constituindo uma poderosa ferramenta para fins cosmológicos. Na era atual de grandes levantamentos observacionais que produzem uma quantidade gigantesca de dados, as propriedades estatísticas dos objetos observados (galáxias, aglomerados, supernovas, quasares, etc.) podem ser usadas para extrair informações cosmológicas. Para isso, é necessário o estudo da formação de halos de matéria escura, da detecção dos halos e aglomerados, das ferramentas estatísticas usadas para o vínculo de parâmetros e, finalmente, dos efeitos das detecções ópticas. No contexto da formulação da predição teórica da contagem de halos, foi analisada a influência de cada parâmetro cosmológico na abundância dos halos, a importância do uso da covariância dos halos e a eficácia da utilização dos halos para vincular cosmologia. Também foram analisados em detalhes os intervalos de redshift e o uso de conhecimento prévio dos parâmetros (priors). A predição teórica foi testada em uma simulação de matéria escura, onde a cosmologia era conhecida e os halos de matéria escura já haviam sido detectados. Nessa análise, foi atestado que é possível obter bons vínculos cosmológicos para alguns parâmetros (Ωm, w, σ8, ns), enquanto outros parâmetros (h, Ωb) necessitavam de conhecimento prévio de outros testes cosmológicos. Na seção dos métodos estatísticos, foram discutidos os conceitos de likelihood, priors e posterior distribution. O formalismo da Matriz de Fisher, bem como sua aplicação em aglomerados de galáxias, foi apresentado e usado para a realização de predições dos vínculos em levantamentos atuais e futuros. Para a análise de dados, foram apresentados métodos de Cadeias de Markov de Monte Carlo (MCMC), que diferentemente da Matriz de Fisher não assumem Gaussianidade entre os parâmetros vinculados, porém possuem um custo computacional muito mais alto. Os efeitos observacionais também foram estudados em detalhes. Usando uma abordagem com a Matriz de Fisher, os efeitos de completeza e pureza foram extensivamente explorados. Como resultado, foi determinado em quais casos é vantajoso incluir uma modelagem adicional para que o limite mínimo de massa possa ser diminuído. Um dos principais resultados foi o fato de que a inclusão dos efeitos de completeza e pureza na modelagem não degrada os vínculos de energia escura, se alguns outros efeitos já estão sendo incluídos. Também foi verificado que o uso de priors nos parâmetros não cosmológicos só afeta os vínculos de energia escura se forem melhores que 1%. O cluster finder (código para detecção de aglomerados) WaZp foi usado na simulação, produzindo um catálogo de aglomerados. Comparando-se esse catálogo com os halos de matéria escura da simulação, foi possível investigar e medir os efeitos observacionais. A partir dessas medidas, pôde-se incluir correções na predição da abundância de aglomerados, que resultou em boa concordância com os aglomerados detectados. Os resultados e as ferramentas desenvolvidos ao longo desta tese podem fornecer uma estrutura para a análise de aglomerados com fins cosmológicos.
Durante esse trabalho, diversos códigos foram desenvolvidos; dentre eles estão um código eficiente para computar a predição teórica da abundância e covariância de halos de matéria escura, um código para estimar a abundância e covariância dos aglomerados de galáxias incluindo os efeitos observacionais e um código para comparar diferentes catálogos de halos e aglomerados. Esse último foi integrado ao portal científico do Laboratório Interinstitucional de e-Astronomia (LIneA) e está sendo usado para avaliar a qualidade de catálogos de aglomerados produzidos pela colaboração do Dark Energy Survey (DES), assim como será usado em levantamentos futuros.
Galaxy clusters are the largest bound structures in the Universe. Their distribution maps the dark matter halos formed in the deep potential wells of the dark matter field. As a result, the abundance of galaxy clusters is highly sensitive to the expansion of the Universe as well as to the growth of dark matter perturbations, representing a powerful tool for cosmological purposes. In the current era of large-scale surveys with enormous volumes of data, statistical quantities of the surveyed objects (galaxies, clusters, supernovae, quasars, etc.) can be used to extract cosmological information. The main goal of this thesis is to explore the potential of galaxy clusters for constraining cosmology. To that end, we study halo formation theory, the detection of halos and clusters, the statistical tools required to extract cosmological information from the detected clusters and, finally, the effects of optical detection. In constructing the theoretical prediction for the halo number counts, we analyze how each cosmological parameter of interest affects the halo abundance, the importance of using the halo covariance, and the effectiveness of halos in constraining cosmology. The redshift range and the use of prior knowledge of parameters are also investigated in detail. The theoretical prediction is tested on a dark matter simulation, where the cosmology is known and a dark matter halo catalog is available. In the analysis of the simulation we find that it is possible to obtain good constraints for some parameters, such as (Ωm, w, σ8, ns), while other parameters (h, Ωb) require external priors from different cosmological probes. Regarding statistical methods, we discuss the concepts of likelihood, priors and the posterior distribution. The Fisher Matrix formalism and its application to galaxy clusters are presented and used to make forecasts for ongoing and future surveys. For the analysis of real data we introduce Monte Carlo Markov Chain (MCMC) methods, which do not assume Gaussianity of the parameter distribution but have a much higher computational cost relative to the Fisher Matrix. The observational effects are studied in detail. Using the Fisher Matrix approach, we carefully explore the effects of completeness and purity. We find in which cases it is worthwhile to include extra parameters in order to lower the mass threshold. An interesting finding is that including completeness and purity parameters along with the cosmological parameters does not degrade dark energy constraints if other observational effects are already being considered. The use of priors on the nuisance parameters does not affect the dark energy constraints unless these priors are better than 1%. The WaZp cluster finder was run on a cosmological simulation, producing a cluster catalog. Comparing the detected galaxy clusters to the dark matter halos, the observational effects were investigated and measured. Using these measurements, we were able to include corrections in the prediction of cluster counts, resulting in good agreement with the detected cluster abundance. The results and tools developed in this thesis can provide a framework for the analysis of galaxy clusters for cosmological purposes.
Several codes were created and tested in the course of this work; among them are an efficient code to compute theoretical predictions of halo abundance and covariance, a code to estimate the abundance and covariance of galaxy clusters including multiple observational effects, and a pipeline to match and compare halo/cluster catalogs. This pipeline has been integrated into the Science Portal of the Laboratório Interinstitucional de e-Astronomia (LIneA) and is being used to automatically assess the quality of cluster catalogs produced by the Dark Energy Survey (DES) collaboration; it will also be used in future surveys.
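As a compact illustration of the Fisher-matrix forecasting used throughout, the sketch below builds the standard Poisson Fisher matrix for binned number counts, F_ij = Σ_b (∂N_b/∂θ_i)(∂N_b/∂θ_j)/N_b, from numerical derivatives of a toy counts model. The toy scalings with (Ωm, σ8) are invented for illustration and are not the thesis's halo mass function.

```python
import numpy as np

def fisher_counts(counts_model, theta0, step=1e-4):
    """Poisson Fisher matrix for binned number counts:
    F_ij = sum_b dN_b/dtheta_i * dN_b/dtheta_j / N_b."""
    theta0 = np.asarray(theta0, float)
    N0 = counts_model(theta0)
    grads = []
    for i in range(theta0.size):
        dt = np.zeros_like(theta0); dt[i] = step
        grads.append((counts_model(theta0 + dt) - counts_model(theta0 - dt)) / (2*step))
    G = np.array(grads)                 # shape (n_params, n_bins)
    return G @ np.diag(1.0 / N0) @ G.T

# toy counts in mass bins, with hypothetical sensitivity to (Omega_m, sigma_8)
def toy_counts(theta):
    om, s8 = theta
    bins = np.arange(1, 6)
    return 1e4 * (om / 0.3)**1.5 * (s8 / 0.8)**(2 * bins) * np.exp(-bins)

F = fisher_counts(toy_counts, [0.3, 0.8])
cov = np.linalg.inv(F)                  # forecast parameter covariance
print(np.sqrt(np.diag(cov)))            # 1-sigma marginalized errors
```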
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Li, Zhonggai. "Objective Bayesian Analysis of Kullback-Liebler Divergence of two Multivariate Normal Distributions with Common Covariance Matrix and Star-shape Gaussian Graphical Model". Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28121.

Texto completo
Resumen
This dissertation consists of four independent but related parts, each in a chapter. The first part is introductory; it provides background and preparation for the later parts. The second part discusses two multivariate normal populations with a common covariance matrix. The goal of this part is to derive objective/non-informative priors for the parameterizations and to use these priors to build constructive random posteriors of the Kullback-Leibler (KL) divergence of the two multivariate normal populations, which is proportional to the squared distance between the two means weighted by the common precision matrix. We use the Cholesky decomposition to re-parameterize the precision matrix. The KL divergence is a true distance measure for the divergence between two multivariate normal populations with a common covariance matrix. Frequentist properties of the Bayesian procedure using these objective priors are studied through analytical and numerical tools. The third part considers the star-shape Gaussian graphical model, a special case of undirected Gaussian graphical models. It is a multivariate normal distribution in which the variables are grouped into one "global" variable set and several "local" variable sets; conditioned on the global variable set, the local variable sets are independent of each other. We adopt the Cholesky decomposition to re-parameterize the precision matrix and derive Jeffreys' prior, the reference prior, and invariant priors for the new parameterizations. The frequentist properties of the Bayesian procedure using these objective priors are also studied. The last part concentrates on objective Bayesian analysis of the partial correlation coefficient and its application to multivariate Gaussian models.
Ph. D.
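With a common covariance matrix, the KL divergence discussed above reduces to half the squared Mahalanobis distance between the means, since the trace and log-determinant terms cancel. A minimal NumPy sketch (illustrative values):

```python
import numpy as np

def kl_common_cov(mu1, mu2, sigma):
    """KL(N(mu1, Sigma) || N(mu2, Sigma)) = 0.5 * d^T Sigma^{-1} d,
    i.e. half the squared Mahalanobis distance between the means."""
    d = np.asarray(mu1, float) - np.asarray(mu2, float)
    return 0.5 * d @ np.linalg.solve(sigma, d)

mu1, mu2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
print(kl_common_cov(mu1, mu2, sigma))   # symmetric in mu1, mu2 in this case
```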
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Votavová, Helena. "Statistická analýza výběrů ze zobecněného exponenciálního rozdělení". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231401.

Texto completo
Resumen
This thesis deals with the generalized exponential distribution as an alternative to the Weibull and log-normal distributions. The basic characteristics of this distribution and methods of parameter estimation are described. A separate chapter is devoted to goodness-of-fit tests. The second part of the thesis deals with censored samples, and illustrative examples for the exponential distribution are given. The case of type I left censoring, which had not yet been published, is then studied. For this special case, simulations are performed with a detailed description of the properties and behaviour. The EM algorithm is derived for this distribution and its efficiency is compared with the maximum likelihood method. The developed theory is applied to the analysis of environmental data.
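For context, the generalized (exponentiated) exponential distribution has cdf F(x) = (1 − e^{−λx})^α for x > 0. The sketch below fits it by direct maximum likelihood on simulated complete data; this is a simplified stand-in for, not a reproduction of, the censored-data EM algorithm derived in the thesis, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    # generalized exponential density:
    # f(x) = a * l * exp(-l*x) * (1 - exp(-l*x))**(a - 1)
    a, l = params
    if a <= 0 or l <= 0:
        return np.inf
    return -(np.log(a) + np.log(l) - l*x + (a - 1)*np.log1p(-np.exp(-l*x))).sum()

rng = np.random.default_rng(1)
# inverse-CDF sampling: x = -log(1 - u**(1/a)) / l
a_true, l_true = 2.0, 0.5
u = rng.uniform(size=500)
x = -np.log1p(-u**(1.0 / a_true)) / l_true

res = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print(res.x)   # MLE of (alpha, lambda)
```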
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Trias, Cornellana Miquel. "Gravitational wave observation of compact binaries: Detection, parameter estimation and template accuracy". Doctoral thesis, Universitat de les Illes Balears, 2011. http://hdl.handle.net/10803/37402.

Texto completo
Resumen
La tesi tracta, des del punt de vista de l’anàlisi de dades, la possibilitat de detecció directa d’ones gravitatòries emeses per sistemes binaris d’objectes compactes de massa similar: forats negres, estels de neutrons, nanes blanques. En els capítols introductoris, a) es dóna una descripció detallada i exhaustiva de com passar dels patrons d’ona teòrics a la senyal detectada; b) s’introdueixen les eines més emprades en l’anàlisi de dades d’ones gravitatòries, amb especial menció a la discussió sobre les amplituds efectiva i característica. A més, els resultats originals de la tesi segueixen tres línies de recerca diferents: 1) S’ha predit la precisió amb la que el futur detector interferomètric espacial LISA, estimarà els paràmetres (posició, masses, velocitat de rotació, paràmetres cosmològics…) de les observacions de xocs entre dos forats negres supermassius en la fase “inspiral”. 2) S’ha desenvolupat un algorisme propi de cerca de senyals gravitatòries procedents de sistemes binaris estel•lars, basat en teories de probabilitat Bayesiana i MCMC. Aquest algorisme distingeix alhora milers de senyals superposades en una única sèrie temporal de dades, extraient paràmetres individuals de cadascuna d’elles. 3) S’ha definit de manera matemàtica rigorosa com determinar el rang de validesa (per a extracció de paràmetres i detecció) de models aproximats de patrons d’ones gravitatòries, aplicant-ho a un cas concret de models semi-analítics
La tesis trata, desde el punto de vista del análisis de datos, la posibilidad de detección directa de ondas gravitacionales emitidas por sistemas binarios de objetos compactos de masa similar: agujeros negros, estrellas de neutrones, enanas blancas. En los capítulos introductorios, a) se desarrolla una descripción detallada y exhaustiva de como pasar de los patrones de onda teóricos a la señal detectada; b) se introducen las herramientas más utilizadas en el análisis de datos de ondas gravitacionales, con especial mención a la discusión sobre las amplitudes efectiva y característica. Además, los resultados originales de la tesis siguen tres líneas de investigación diferentes: 1) Se ha predicho la precisión con la que el futuro detector interferométrico espacial LISA, estimará los parámetros (posición, masas, velocidad de rotación, parámetros cosmológicos…) de las observaciones de choques entre dos agujeros negros supermasivos en la fase “inspiral”. 2) Se ha desarrollado un algoritmo propio de búsqueda de señales gravitacionales procedentes de sistemas binarios estelares, basado en teorías de probabilidad Bayesiana y MCMC. Este algoritmo distingue a la vez miles de señales superpuestas en una única serie temporal de datos, extrayendo parámetros individuales de cada una de ellas. 3) Se ha definido de manera matemática rigurosa como determinar el rango de validez (para extracción de parámetros y detección) de modelos aproximados de patrones de ondas gravitacionales, aplicándolo a un caso concreto de modelos semi-analíticos.
This PhD thesis studies, from the data-analysis perspective, the possibility of direct detection of gravitational waves emitted by comparable-mass compact binary objects: black holes, neutron stars, white dwarfs. In the introductory chapters, a) a detailed and exhaustive description is given of how to derive the detected strain from the theoretically predicted waveforms; b) the most widely used gravitational-wave data-analysis tools are derived, with particular attention to the discussion of effective and characteristic amplitudes. The original results of the thesis follow three different research lines: 1) The parameter-estimation accuracy (position, masses, spin, cosmological parameters…) has been predicted for supermassive black hole binary inspiral signals observed with the future space-based interferometric detector LISA. 2) A new algorithm, based on Bayesian probability and MCMC techniques, has been developed to search for gravitational-wave signals from stellar-mass binary systems. The algorithm is able to distinguish thousands of overlapping signals in a single observed time series, allowing for individual parameter extraction. 3) It has been defined, mathematically and rigorously, how to compute the validity range (for parameter-estimation and detection purposes) of approximated gravitational-waveform models, applying it to the particular case of closed-form models.
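The Fisher-matrix forecasting used in research line 1 rests on the noise-weighted inner product (a|b) = 4 Re ∫ a(f) b*(f)/S_n(f) df and on waveform derivatives with respect to the parameters. The sketch below implements this with a toy frequency-domain waveform and a flat noise PSD; both are placeholders chosen for brevity, not the LISA response or the waveform models used in the thesis.

```python
import numpy as np

def inner(h1, h2, psd, df):
    """Noise-weighted inner product (h1|h2) = 4 Re sum h1 h2* / Sn df."""
    return 4.0 * np.real(np.sum(h1 * np.conj(h2) / psd)) * df

def fisher(waveform, theta0, psd, df, step=1e-6):
    """Gamma_ij = (dh/dtheta_i | dh/dtheta_j), the standard GW Fisher matrix,
    with derivatives taken by central finite differences."""
    theta0 = np.asarray(theta0, float)
    dh = []
    for i in range(theta0.size):
        dt = np.zeros_like(theta0); dt[i] = step
        dh.append((waveform(theta0 + dt) - waveform(theta0 - dt)) / (2*step))
    return np.array([[inner(a, b, psd, df) for b in dh] for a in dh])

# toy inspiral-like frequency-domain model (illustrative, not physical units)
f = np.linspace(20.0, 512.0, 2000)
df = f[1] - f[0]
def waveform(theta):
    amp, mc = theta     # an amplitude and a chirp-mass-like parameter
    return amp * f**(-7/6) * np.exp(1j * 3/128 * (np.pi * mc * f)**(-5/3))

psd = np.ones_like(f)   # flat noise for the sketch
G = fisher(waveform, [1.0, 1e-4], psd, df)
print(np.sqrt(np.diag(np.linalg.inv(G))))   # 1-sigma forecast errors
```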
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Moura, Ícaro Kennedy Francelino. "Testes cosmológicos aplicados a modelos de energia escura". Universidade Federal do Rio Grande do Norte, 2016. http://repositorio.ufrn.br/handle/123456789/21086.

Texto completo
Resumen
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Grandes esforços observacionais têm sido direcionados para investigar a natureza da chamada energia escura. Nesta dissertação derivamos vínculos sobre modelos de energia escura utilizando três diferentes observáveis: medidas da taxa de expansão H(z) (compiladas por Meng et al. em 2015); módulo de distância de 580 Supernovas do Tipo Ia (catálogo Union Compilation 2.1, 2011); e as observações do pico de oscilação de bárions (BAO) e a radiação cósmica de fundo (CMB) utilizando a chamada razão CMB/BAO, que relaciona 6 picos de BAO (um pico determinado através dos dados do Survey 6dFGS, dois através do SDSS e três através do WiggleZ). A análise estatística utilizada foi o método do χ² mínimo (marginalizado ou minimizado sobre h sempre que possível) para vincular os parâmetros cosmológicos: Ωm, ΩΛ, ω e ??0. Esses testes foram aplicados em duas parametrizações do parâmetro ω da equação de estado da energia escura, p = ωρ (aqui, p é a pressão e ρ é a densidade de energia da componente). Numa, ω é considerado constante e menor que −1/3, o que é conhecido como modelo XCDM; na outra parametrização, o parâmetro da equação de estado varia com o redshift, e a chamamos de Modelo GS. Esta última parametrização é baseada em argumentos que surgem da teoria da inflação cosmológica. Para efeitos de comparação também foi feita a análise do modelo ΛCDM. A comparação dos modelos cosmológicos com as diferentes observações leva a diferentes melhores ajustes. Assim, para classificar a viabilidade observacional dos diferentes modelos teóricos, utilizamos dois critérios de informação, ou seja, o critério de informação bayesiano (BIC) e o critério de informação de Akaike (AIC). A ferramenta matriz de Fisher foi incorporada aos nossos testes para nos fornecer a incerteza dos parâmetros de cada modelo teórico. Verificamos que a complementaridade dos testes é necessária para não termos espaços paramétricos degenerados. Fazendo o processo de minimização encontramos, dentro da região de 1σ (68%), que para o Modelo XCDM os melhores ajustes dos parâmetros são Ωm = 0,28 ± 0,012 e ωX = −1,01 ± 0,052, enquanto que para o Modelo GS os melhores ajustes são Ωm = 0,28 ± 0,011 e ??0 = 0,00 ± 0,059. E realizando uma marginalização encontramos, dentro da região de 1σ (68%), os mesmos valores: para o Modelo XCDM, Ωm = 0,28 ± 0,012 e ωX = −1,01 ± 0,052; para o Modelo GS, Ωm = 0,28 ± 0,011 e ??0 = 0,00 ± 0,059.
A significant observational effort has been directed to investigating the nature of the so-called dark energy. In this dissertation we derive constraints on dark energy models using three different observables: measurements of the expansion rate H(z) (compiled by Meng et al. in 2015); the distance modulus of 580 Type Ia Supernovae (Union 2.1 Compilation catalog, 2011); and the observations of baryon acoustic oscillations (BAO) and the cosmic microwave background (CMB), through the so-called CMB/BAO ratio built from six BAO peaks (one peak determined from the 6dFGS survey data, two from SDSS and three from WiggleZ). The statistical analysis used was the minimum χ² method (marginalized or minimized over h whenever possible) to constrain the cosmological parameters Ωm, ω and ??0. These tests were applied to two parameterizations of the parameter ω of the dark energy equation of state, p = ωρ (here, p is the pressure and ρ is the energy density of the component). In one, ω is considered constant and less than −1/3, known as the XCDM model; in the other, the equation-of-state parameter varies with redshift, and we call this the GS model. This latter parameterization is based on arguments arising from the theory of cosmological inflation. For comparison, the ΛCDM model was also analysed. The comparison of the cosmological models with the different observations leads to different best fits. Thus, to rank the observational viability of the different theoretical models, we use two information criteria, namely the Bayesian information criterion (BIC) and the Akaike information criterion (AIC). The Fisher matrix tool was incorporated into our tests to provide the parameter uncertainties of each theoretical model. We found that the complementarity of the tests is necessary in order to avoid degenerate parameter spaces. From the minimization procedure we found, within the 1σ (68%) region, that for the XCDM model the best-fit parameters are Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052, while for the GS model the best fits are Ωm = 0.28 ± 0.011 and ??0 = 0.00 ± 0.059. Performing a marginalization we found, within the 1σ (68%) region, the same values: for the XCDM model Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052, and for the GS model Ωm = 0.28 ± 0.011 and ??0 = 0.00 ± 0.059.
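The two information criteria used above are one-liners once the minimum χ² is known: AIC = χ²_min + 2k and BIC = χ²_min + k ln N, with k free parameters and N data points; lower values indicate a preferred model. A sketch with invented numbers:

```python
import numpy as np

def aic(chi2_min, k):
    # Akaike information criterion for a Gaussian likelihood
    return chi2_min + 2 * k

def bic(chi2_min, k, n):
    # Bayesian information criterion; penalizes extra parameters
    # more strongly as the number of data points n grows
    return chi2_min + k * np.log(n)

# illustrative comparison: a 1-parameter vs a 2-parameter dark-energy model
print(aic(562.1, 1), bic(562.1, 1, 580))
print(aic(561.8, 2), bic(561.8, 2, 580))
```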
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Karagiannis, Dionysios. "The bispectrum of Large Scale Structures: modelling, prediction and estimation". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3427289.

Texto completo
Resumen
In this thesis we study the higher-order statistics of Large Scale Structures (LSS). In particular, we examine the potential of the bispectrum (the Fourier transform of the three-point correlator) of galaxies both for probing the non-linear regime of structure growth and for setting constraints on primordial non-Gaussianity. The starting step is to construct accurate models for the power spectrum (the Fourier transform of the two-point correlator) and bispectrum of galaxies by using the predictions of perturbation methods. In addition, recent developments on the relation between the dark matter and galaxy distributions (i.e. bias) are discussed and incorporated into the modelling, in order to have an accurate theoretical formalism of galaxy formation. In order to build models that are as realistic as possible, we take into account additional non-linear effects, such as redshift-space distortions. The analysis is mainly restricted to large and intermediate scales, where the available perturbation theories have been heavily tested and give predictions in agreement with simulations and past LSS surveys. The reasoning for constructing accurate models of the non-linear gravitational evolution of galaxies is that it is crucial to distinguish the primordial non-Gaussian (PNG) signal from the late-time non-linearities. Furthermore, we investigate forecasted constraints on primordial non-Gaussianity and bias parameters from measurements of the galaxy power spectrum and bispectrum in future radio continuum (EMU and SKA) and optical surveys (Euclid, DESI, LSST and SPHEREx). In the galaxy bispectrum modelling, we consider the bias expansion for non-Gaussian initial conditions up to second order, including trispectrum (the Fourier transform of the four-point correlator) scale-dependent contributions originating from the galaxy bias expansion, where for the first time we extend such corrections to redshift space. We study the impact of uncertainties in the theoretical modelling of the bispectrum expansion and of redshift-space distortions (theoretical errors), showing that they can all affect the final predicted bounds. We find that the bispectrum generally has a strong constraining power and can lead to improvements of up to a factor of ~5 over bounds based on the power spectrum alone. Our results show that constraints for local-type PNG can be significantly improved compared to current limits: future radio (e.g. SKA) and photometric surveys could obtain a measurement error on f_NL^loc of σ(f_NL^loc) ≈ 0.2-0.3. More specifically, near-future optical spectroscopic surveys, such as Euclid, will also improve over Planck by a factor of a few, while LSST will provide constraints competitive with radio continuum. In the case of equilateral PNG, galaxy bispectrum constraints are very weak, and current constraints could be tightened only if significant improvements in the redshift determinations of large-volume surveys could be achieved. For orthogonal non-Gaussianity, expected constraints are comparable to the ones from Planck, e.g. σ(f_NL^ortho) ≈ 18 for radio surveys. In the last part of the thesis we develop a pipeline that measures the bispectrum from N-body simulations or galaxy surveys, based on the modal estimation formalism. This computationally demanding task is reduced from O(N^6) operations to O(N^3), where N is the number of modes per dimension inside the simulation box or survey.
The main idea of the modal estimator is to construct a suitable basis ("modes") on the domain defined by the triangle condition and to decompose the desired theoretical or observational bispectrum on it. This allows for massive data compression, making it an extremely useful tool for future LSS surveys. We show the results of tests performed to improve the performance of the pipeline and the convergence of the modal expansion. In addition, we present the measured bispectrum from a set of simulations with Gaussian initial conditions, where the small number of modes needed to accurately reconstruct the matter bispectrum shows the power of the modal expansion. The effective f_NL value, corresponding to the bispectrum of the non-linear gravitational evolution, comes at no computational cost. In order to further test the pipeline, we proceed to measure the bispectrum of a few realisations with non-Gaussian initial conditions of the local type. We show that the modal decomposition can accurately separate the primordial signal from the late-time non-Gaussianity and put tight constraints on its amplitude.
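A toy version of the modal idea can be written in a few lines: restrict to closed triangles, build a small separable basis from low-order Legendre polynomials (cyclically symmetrized), and project a bispectrum onto it. The grid, basis order and toy bispectrum shape below are all illustrative, not those of the thesis pipeline.

```python
import numpy as np
from numpy.polynomial import legendre

# grid of triangle configurations (k1 >= k2 >= k3, closed triangles only)
k = np.linspace(0.01, 0.2, 25)
K1, K2, K3 = np.meshgrid(k, k, k, indexing="ij")
tri = (K1 >= K2) & (K2 >= K3) & (K3 >= K1 - K2)   # triangle inequality
k1, k2, k3 = K1[tri], K2[tri], K3[tri]

def x(u):   # map wavenumbers onto [-1, 1] for the Legendre basis
    return 2 * (u - k.min()) / (k.max() - k.min()) - 1

def mode(i, j, l):
    # cyclically symmetrized separable product P_i(k1) P_j(k2) P_l(k3)
    P = lambda n, u: legendre.legval(x(u), np.eye(n + 1)[n])
    return (P(i,k1)*P(j,k2)*P(l,k3) + P(i,k2)*P(j,k3)*P(l,k1)
            + P(i,k3)*P(j,k1)*P(l,k2)) / 3.0

basis = np.array([mode(i, j, l) for i in range(3) for j in range(3)
                  for l in range(3)])
# toy "measured" bispectrum: a tree-level-like shape (illustrative only)
B = (k1*k2 + k2*k3 + k3*k1) / (k1*k2*k3)
coeffs, *_ = np.linalg.lstsq(basis.T, B, rcond=None)
print("compression:", B.size, "->", coeffs.size, "modal coefficients")
```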
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Purutcuoglu, Vilda. "Unit Root Problems In Time Series Analysis". Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12604701/index.pdf.

Texto completo
Resumen
In time series models, autoregressive processes are among the most popular stochastic processes, and they are stationary under certain conditions. In this study we consider nonstationary autoregressive models of order one with iid random errors. One of the important nonstationary time series models is the unit root process in AR(1), which simply implies that a shock to the system has a permanent effect through time. Therefore, testing for a unit root is a very important problem. However, under nonstationarity, no estimator of the autoregressive coefficient has a known exact distribution, and the usual t-statistic is not accurate even if the sample size is very large. Hence, the Wiener process is invoked to obtain the asymptotic distribution of the LSE under normality, and the first four moments of the LSE under normality have been worked out for large n. In 1998, Tiku and Wong proposed new test statistics whose type I error and power values are calculated by using three-moment chi-square or four-moment F approximations. The test statistics are based on the modified maximum likelihood estimators and the least squares estimators, respectively. They evaluated the type I errors and the power of these tests for a family of symmetric distributions (scaled Student's t). In this thesis, we have extended this work to skewed distributions, namely, gamma and generalized logistic.
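The nonstandard asymptotics can be seen directly by simulation: under the unit root, the LSE-based t-statistic does not approach N(0,1) but the Dickey-Fuller distribution (a functional of a Wiener process). A sketch assuming the no-intercept, no-trend case with iid normal errors:

```python
import numpy as np

rng = np.random.default_rng(42)

def df_tstat(y):
    """LSE of rho in y_t = rho*y_{t-1} + e_t, and the Dickey-Fuller
    t-statistic for H0: rho = 1 (no intercept, no trend)."""
    y0, y1 = y[:-1], y[1:]
    rho = (y0 @ y1) / (y0 @ y0)
    resid = y1 - rho * y0
    s2 = resid @ resid / (len(y1) - 1)
    se = np.sqrt(s2 / (y0 @ y0))
    return rho, (rho - 1.0) / se

# under H0 the series is a random walk; the quantiles below land near the
# Dickey-Fuller critical values, far from the normal -2.33, -1.64, -1.28
stats = [df_tstat(np.cumsum(rng.standard_normal(500)))[1] for _ in range(2000)]
print(np.quantile(stats, [0.01, 0.05, 0.10]))
```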
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Escorihuela, Roca Sara. "Novel gas-separation membranes for intensified catalytic reactors". Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/121139.

Texto completo
Resumen
[ES] La presente tesis doctoral se centra en el desarrollo de nuevas membranas de separación de gases, así como su empleo in-situ en reactores catalíticos de membrana para la intensificación de procesos. Para este propósito, se han sintetizado varios materiales, como polímeros para la fabricación de membranas, catalizadores tanto para la metanación del CO2 como para la reacción de síntesis de Fischer-Tropsch, y diversas partículas inorgánicas nanométricas para su uso en membranas de matriz mixta. En lo referente a la fabricación de las membranas, la tesis aborda principalmente dos tipos: orgánicas e inorgánicas. Con respecto a las membranas orgánicas, se han considerado diferentes materiales poliméricos, tanto para la capa selectiva de la membrana, así como soporte de la misma. Se ha trabajado con poliimidas, puesto que son materiales con temperaturas de transición vítrea muy alta, para su posterior uso en reacciones industriales que tienen lugar entre 250-300 ºC. Para conseguir membranas muy permeables, manteniendo una buena selectividad, es necesario obtener capas selectivas de menos de una micra. Usando como material de soporte otro tipo de polímero, no es necesario estudiar la compatibilidad entre ellos, siendo menos compleja la obtención de capas finas. En cambio, si el soporte es de tipo inorgánico, un exhaustivo estudio de la relación entre la concentración y la viscosidad de la solución polimérica es altamente necesario. Diversas partículas inorgánicas nanométricas se estudiaron para favorecer la permeación de agua a través de los materiales poliméricos. En segundo lugar, en cuanto a membranas inorgánicas, se realizó la funcionalización de una membrana de paladio para favorecer la permeación de hidrógeno y evitar así la contaminación por monóxido de carbono. El motivo por el cual se dopó con otro metal la capa selectiva de la membrana metálica fue para poder emplearla en un reactor de Fischer-Tropsch. Con relación al diseño y fabricación de los reactores, durante esta tesis, se desarrolló el prototipo de un microreactor para la metanación de CO2, donde una membrana polimérica de capa fina selectiva al agua se integró para evitar la desactivación del catalizador, y a su vez desplazar el equilibrio y aumentar la conversión de CO2. Por otro lado, se rediseñó un reactor de Fischer-Tropsch para poder introducir una membrana metálica selectiva a hidrogeno y poder inyectarlo de manera controlada. De esta manera, y siguiendo estudios previos, el objetivo fue mejorar la selectividad a los productos deseados mediante el hidrocraqueo y la hidroisomerización de olefinas y parafinas con la ayuda de la alta presión parcial de hidrógeno.
[CAT] La present tesi doctoral es centra en el desenvolupament de noves membranes de separació de gasos, així com el seu ús in-situ en reactors catalítics de membrana per a la intensificació de processos. Per a aquest propòsit, s'han sintetitzat diversos materials, com a polímers per a la fabricació de membranes, catalitzadors tant per a la metanació del CO2 com per a la reacció de síntesi de Fischer-Tropsch, i diverses partícules inorgàniques nanomètriques per al seu ús en membranes de matriu mixta. Referent a la fabricació de les membranes, la tesi aborda principalment dos tipus: orgàniques i inorgàniques. Respecte a les membranes orgàniques, diferents materials polimèrics s'han considerat com a candidats prometedors, tant per a la capa selectiva de la membrana com com a suport d'aquesta. S'ha treballat amb poliimides, ja que són materials amb temperatures de transició vítria molt alta, per al seu posterior ús en reaccions industrials que tenen lloc entre 250-300 °C. Per a aconseguir membranes molt permeables, mantenint una bona selectivitat, és necessari obtindre capes selectives de menys d'una micra. Emprant com a material de suport altre tipus de polímer, no és necessari estudiar la compatibilitat entre ells, sent menys complexa l'obtenció de capes fines. En canvi, si el suport és de tipus inorgànic, un exhaustiu estudi de la relació entre la concentració i la viscositat de la solució polimèrica és altament necessari. Diverses partícules inorgàniques nanomètriques es van estudiar per a afavorir la permeació d'aigua a través dels materials polimèrics. En segon lloc, quant a membranes inorgàniques, es va realitzar la funcionalització d'una membrana de pal·ladi per a afavorir la permeació d'hidrogen i evitar la contaminació per monòxid de carboni. El motiu pel qual es va dopar amb un altre metall la capa selectiva de la membrana metàl·lica va ser per a poder emprar-la en un reactor de Fischer-Tropsch. En relació amb el disseny i fabricació dels reactors, durant aquesta tesi, es va desenvolupar el prototip d'un microreactor per a la metanació de CO2, on una membrana polimèrica de capa fina selectiva a l'aigua es va integrar per a així evitar la desactivació del catalitzador i al seu torn desplaçar l'equilibri i augmentar la conversió de CO2. D'altra banda, un reactor de Fischer-Tropsch va ser redissenyat per a poder introduir una membrana metàl·lica selectiva a l'hidrogen i poder injectar-lo de manera controlada. D'aquesta manera, i seguint estudis previs, l'objectiu va ser millorar la selectivitat als productes desitjats mitjançant l'hidrocraqueig i la hidroisomerització d'olefines i parafines amb l'ajuda de l'alta pressió parcial d'hidrogen.
[EN] The present thesis focuses on the development of new gas-separation membranes, as well as their in-situ integration in catalytic membrane reactors for process intensification. For this purpose, several materials have been synthesized, such as polymers for membrane manufacture, catalysts for the CO2 methanation and Fischer-Tropsch synthesis reactions, and inorganic materials in the form of nanometer-sized particles for use in mixed-matrix membranes. Regarding membrane manufacture, this thesis deals mainly with two types: organic and inorganic. With regard to the organic membranes, different polymeric materials have been considered as promising candidates, both for the selective layer of the membrane and as its support. Polyimides have been selected since they are materials with very high glass transition temperatures, suitable for industrial reactions that take place at around 250-300 ºC. To obtain highly permeable membranes while maintaining a good selectivity, it is necessary to develop selective layers of less than one micron. When another polymer is used as the support material, it is not necessary to study the compatibility between membrane and support, which makes obtaining thin layers less complex. On the other hand, if the support is inorganic, an exhaustive study of the relation between the concentration and the viscosity of the polymer solution is highly necessary. In addition, various inorganic particles were studied to favor the permeation of water through the polymeric materials. Secondly, as regards inorganic membranes, the functionalization of a palladium membrane was carried out to favor the permeation of hydrogen and avoid carbon monoxide contamination. The selective layer of the metallic membrane was doped with another metal so that it could be used in a Fischer-Tropsch reactor. Regarding the design and manufacture of the reactors used during this thesis, a prototype microreactor for CO2 methanation was built, in which a water-selective thin-film polymer membrane was integrated to avoid the deactivation of the catalyst, displace the equilibrium and increase the CO2 conversion. On the other hand, a Fischer-Tropsch reactor was redesigned to introduce a hydrogen-selective metal membrane and to be able to inject hydrogen in a controlled manner. In this way, and following previous studies, the aim was to enhance the selectivity to the target products by hydrocracking and hydroisomerization of the olefins and paraffins, assisted by the presence of an elevated partial pressure of hydrogen.
I would like to acknowledge the Spanish Government, for funding my research with the Severo Ochoa scholarship.
Escorihuela Roca, S. (2019). Novel gas-separation membranes for intensified catalytic reactors [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/121139
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Ghanawi, Joly Karim. "Direct and indirect ecological interactions between aquaculture activities and marine fish communities in Scotland". Thesis, University of Stirling, 2018. http://hdl.handle.net/1893/27258.

Texto completo
Resumen
The presence of coastal aquaculture activities in marine landscapes is growing, yet knowledge of the resulting ecological interactions between these activities and marine fish communities remains insufficient. The overall aim of this thesis was to evaluate the direct and indirect ecological effects of aquaculture activities on marine fish communities in Scotland. A combination of empirical and modelling approaches was employed to gather evidence of how aquaculture activities affect marine fish communities at the individual, population and ecosystem levels around coastal sea cages. The two fish farms evaluated in this research provided the wild fish sampled near the sea cages with a habitat rich in food resources, which was reflected in an overall better biological condition. Stomach content analysis indicated that mackerel (Scomber scombrus), whiting (Merlangius merlangus) and saithe (Pollachius virens) sampled near the sea cages consumed wasted feed, which was also reflected in their modified fatty acid (FA) profiles. The overall effects of the two fish farms were more pronounced in young whiting and saithe than in mixed-age mackerel sampled near the sea cages. The phase space modelling approach indicated that the potential for fish farms to act at the extremes as either population sources (habitats rich in resources that lead to overall improved fitness) or ecological traps (habitats that appear rich in resources but are not, leading to overall poor fitness) is higher for juvenile whiting than for mackerel. Based on the empirical evidence and the literature, the two fish farms are more likely to be population sources for wild fishes. An ecosystem modelling approach indicated that fish farming affects the food web in a sea loch via nutrient loading. Mussel farming relies on natural food resources and has the potential to affect the food web in a sea loch by competing with zooplankton for resources, which can affect higher trophic levels. The presence of both activities can balance the overall impact on a sea loch compared with the impact induced if each activity were present on its own. Both activities have the potential to induce direct and indirect effects on wild fish and the entire sea loch system. The results of this PhD identified several gaps in the data and could thus be used to improve future sampling designs. It is important to evaluate the cumulative effect of aquaculture activities in terms of nutrient loading and physical structure in the environment. A combination of empirical and modelling approaches is recommended to gain further insight into the ecological impacts of aquaculture activities on wild fish communities. The results of this PhD study could lead to more informed decisions in managing coastal aquaculture activities. Establishing coastal fish farms as aquatic sanctuaries could help increase fish production and conserve endangered species, provided that no commercial or recreational fishing is allowed nearby. Long-term monitoring of fish stocks around the cages, and of whether there is any net production at the regional level, would be useful. Additionally, information on behaviour and migration patterns should be collected to understand the impacts of aquaculture activities on fish stocks. From an aquaculture perspective, ecologically engineered fish farms, together with careful site selection for new aquaculture developments, may mitigate nutrient loading into the ecosystem.
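The source-versus-trap distinction above hinges on whether a habitat's apparent attractiveness is matched by the realised fitness of the fish using it. Purely as a toy illustration of that dichotomy (not the thesis's phase space model; all scores are hypothetical):

```python
# Toy illustration only (not the thesis's phase-space model): classify a
# habitat as a population "source" or an "ecological trap" by comparing its
# apparent attractiveness with the realised fitness of fish using it,
# relative to a reference habitat. All scores are hypothetical.
def classify_habitat(attractiveness, fitness, ref_attractiveness, ref_fitness):
    looks_rich = attractiveness > ref_attractiveness
    does_better = fitness > ref_fitness
    if looks_rich and does_better:
        return "population source"      # rich habitat, improved fitness
    if looks_rich and not does_better:
        return "ecological trap"        # looks rich, fitness actually worse
    return "neutral or avoided habitat"

# Hypothetical farm site versus reference site.
print(classify_habitat(attractiveness=0.9, fitness=1.2,
                       ref_attractiveness=0.5, ref_fitness=1.0))
```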
APA, Harvard, Vancouver, ISO, etc. styles
48

CAMPOS, Joelson da Cruz. "Modelos de regressão log-Birnbaum-Saunders generalizados para dados com censura intervalar". Universidade Federal de Campina Grande, 2011. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1332.

Full text
Abstract
CAPES
In this work, we propose the generalized log-Birnbaum-Saunders regression model for analysing interval-censored data. The score functions and the observed Fisher information matrix are derived, and the estimation of the model parameters is discussed. As an influence measure, we consider the likelihood displacement under several perturbation schemes. We derive the matrices needed to assess local influence on the estimated model parameters and carry out a residual analysis based on the adjusted Cox-Snell, martingale and modified deviance residuals. A Monte Carlo simulation study is also presented to investigate the behaviour of the empirical distribution of the proposed residuals. Finally, an application to real data is presented.
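The observed Fisher information mentioned above is the negative Hessian of the log-likelihood evaluated at the maximum-likelihood estimate. As a loose sketch of that computation (not the thesis's generalized interval-censored model; the simulated data and starting values are hypothetical), one can fit an ordinary Birnbaum-Saunders distribution, which SciPy exposes as `fatiguelife`, and differentiate numerically:

```python
# Hedged sketch: observed Fisher information as the numerical Hessian of the
# negative log-likelihood of a Birnbaum-Saunders (fatiguelife) fit.
# Simulated data and parameter values are hypothetical.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = stats.fatiguelife.rvs(1.2, scale=2.0, size=200, random_state=rng)

def negloglik(theta):
    alpha, beta = theta                      # shape and scale parameters
    if alpha <= 0 or beta <= 0:
        return np.inf
    return -np.sum(stats.fatiguelife.logpdf(data, alpha, scale=beta))

def hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

mle = minimize(negloglik, x0=[1.0, 1.0], method="Nelder-Mead").x
obs_info = hessian(negloglik, mle)              # observed Fisher information
se = np.sqrt(np.diag(np.linalg.inv(obs_info)))  # asymptotic standard errors
print("MLE:", mle, "SE:", se)
```

Inverting the observed information gives the usual asymptotic covariance matrix of the estimates; an interval-censored likelihood such as the one studied here would presumably replace `logpdf` with log-differences of the CDF evaluated at the interval endpoints.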
APA, Harvard, Vancouver, ISO, etc. styles
49

Хуень, Ву Хань. "Управління збутовою діяльністю підприємства". Thesis, Одеський національний економічний університет, 2021. http://local.lib/diploma/Vu.pdf.

Full text
Abstract
Access to this work is available only on the premises of the ONEU library; follow the link below to proceed
This work considers the theoretical aspects of managing an enterprise's sales activities. The Ukrainian fish market and the practical organisation of the sales activities of Nika LLC are analysed, and measures for developing the sales activities of Nika LLC, namely the opening of a retail outlet, are proposed.
APA, Harvard, Vancouver, ISO, etc. styles
50

Čížek, Ondřej. "Makroekonometrický model měnové politiky". Doctoral thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-165290.

Full text
Abstract
First, this dissertation describes the general principles of contemporary macroeconometric models, together with a brief sketch of alternative approaches. A macroeconometric model of monetary policy is then formulated to describe the fundamental relationships between the real and nominal economy. The model is derived from a linear one by making some of the parameters endogenous. Despite this nonlinearity, the model can be expressed in state-space form with time-varying coefficients, which a standard Kalman filter can handle. The filter's output is then used to evaluate and maximize the likelihood function in order to estimate the parameters. The theory of identifiability of a parametric structure is also described. Finally, the presented theory is applied to a model of the euro area in which the European Central Bank is assumed to follow the Taylor rule. The econometric estimation, however, showed that this common assumption in macroeconomic modelling is not adequate in this case. The results of the estimation and of the identifiability analysis also indicated that the interest rate policy of the European Central Bank has only a very limited effect on the real economic activity of the European Union. Both results are consequential, since in most macroeconometric models of the last two decades monetary policy has been modelled as an interest rate policy following the Taylor rule.
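The estimation route described here, filtering followed by likelihood maximisation, rests on the prediction-error decomposition of the Gaussian likelihood. A minimal sketch, assuming a univariate observation and time-invariant system matrices rather than the time-varying coefficients of the actual model:

```python
# Hedged sketch: Gaussian log-likelihood of a linear state-space model via
# the Kalman filter prediction-error decomposition. All matrices here are
# illustrative and time-invariant, unlike the model in the abstract.
import numpy as np

def kalman_loglik(y, T, Z, Q, H, a0, P0):
    """x_{t+1} = T x_t + w_t,  w_t ~ N(0, Q)
       y_t     = Z x_t + v_t,  v_t ~ N(0, H)   (y_t scalar)"""
    a, P, ll = a0.copy(), P0.copy(), 0.0
    for yt in y:
        v = yt - (Z @ a).item()              # one-step-ahead forecast error
        F = (Z @ P @ Z.T).item() + H         # forecast error variance
        ll += -0.5 * (np.log(2.0 * np.pi * F) + v * v / F)
        K = T @ P @ Z.T / F                  # Kalman gain
        a = T @ a + K * v                    # predicted state for t+1
        P = T @ P @ T.T + Q - F * (K @ K.T)  # predicted state covariance
    return ll

# Hypothetical AR(1) state observed with noise.
rng = np.random.default_rng(1)
T = np.array([[0.9]]); Z = np.array([[1.0]])
Q = np.array([[0.1]]); H = 0.5
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.normal(scale=np.sqrt(0.1))
    ys.append(x + rng.normal(scale=np.sqrt(0.5)))
print(kalman_loglik(ys, T, Z, Q, H, a0=np.zeros((1, 1)), P0=np.eye(1)))
```

Maximising this log-likelihood over the free parameters, for example by passing its negative to `scipy.optimize.minimize`, yields the maximum-likelihood estimates in the manner the abstract describes.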
APA, Harvard, Vancouver, ISO, etc. styles