To see other types of publications on this topic, follow the link: PreCICE.

Theses on the topic "PreCICE"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Consult the top 50 theses for your research on the topic "PreCICE".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Lemaire, Olivier. « Numeration des reticulocytes en cytofluorometrie : une methode pratique et precice ». Lille 2, 1990. http://www.theses.fr/1990LIL2M227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ménez, Lucas. « Développement de méthodes des frontières immergées pour l’étude de problèmes couplés fluide-structure ». Electronic Thesis or Diss., Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2024. http://www.theses.fr/2024ESMA0025.

Abstract:
Cavitation erosion is a major issue for hydraulic and marine applications. This phenomenon results from the accumulation of cavitation pits on the surface of the material, formed by the collapse of bubbles near the wall. To provide a better understanding of the phenomenon and of the challenges related to its numerical resolution, a strong fluid-structure coupling strategy, based on a partitioned approach, has been developed. It relies on the in-house code SCB (finite volumes), the open-source solver FEniCS (finite elements), and the coupling library preCICE. The originality of the approach lies in the use of an Immersed Boundary Method (IBM) to model the deformable interface, in order to reduce computational cost compared to a classical body-fitted approach. After a comparative study of two IBMs, the penalization method was chosen for its accuracy and efficiency. It was then extended to a diffuse-interface two-phase flow model to study the formation and evolution of a cavitating flow, followed by the collapse of a bubble near a rigid wall, revealing a possible acceleration of cavitation erosion when the wall is already damaged. Finally, the SCB-FEniCS-preCICE strategy was applied to the study of cavitation pit formation. The original version of the penalization method generates an error of about 30% in the predicted material damage. A variant of the method was developed to overcome this issue by controlling the acoustic impedance of the penalized region. Two cross-parametric studies showed that the deepest pits form after the collapse of a bubble initially very close to a material with a low yield strength. Weak coupling overestimates the impact pressure (by 20%) and the material damage (by 50%), and the effect of the coupling is locally amplified where plasticity is greater.
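The abstract above describes a partitioned fluid-structure coupling in which preCICE orchestrates the exchange between the SCB fluid solver and the FEniCS structural solver. In preCICE, such a setup is declared in an XML configuration file. The sketch below is only illustrative: the participant, mesh, and data names are invented, and the thesis's actual configuration is not reproduced here. It shows the ingredients the abstract implies, namely an implicit (strong) coupling scheme with quasi-Newton acceleration, as opposed to the weak, single-pass coupling the author compares against.

```xml
<precice-configuration>
  <data:vector name="Force" />
  <data:vector name="Displacement" />

  <mesh name="Fluid-Mesh" dimensions="2">
    <use-data name="Force" />
    <use-data name="Displacement" />
  </mesh>
  <mesh name="Solid-Mesh" dimensions="2">
    <use-data name="Force" />
    <use-data name="Displacement" />
  </mesh>

  <participant name="Fluid">
    <provide-mesh name="Fluid-Mesh" />
    <receive-mesh name="Solid-Mesh" from="Solid" />
    <write-data name="Force" mesh="Fluid-Mesh" />
    <read-data name="Displacement" mesh="Fluid-Mesh" />
    <mapping:nearest-neighbor direction="write" from="Fluid-Mesh"
                              to="Solid-Mesh" constraint="conservative" />
    <mapping:nearest-neighbor direction="read" from="Solid-Mesh"
                              to="Fluid-Mesh" constraint="consistent" />
  </participant>
  <participant name="Solid">
    <provide-mesh name="Solid-Mesh" />
    <read-data name="Force" mesh="Solid-Mesh" />
    <write-data name="Displacement" mesh="Solid-Mesh" />
  </participant>

  <m2n:sockets acceptor="Fluid" connector="Solid" />

  <!-- implicit scheme: sub-iterate each time window until convergence -->
  <coupling-scheme:serial-implicit>
    <participants first="Fluid" second="Solid" />
    <max-time value="1.0" />
    <time-window-size value="1e-4" />
    <max-iterations value="50" />
    <exchange data="Force" mesh="Solid-Mesh" from="Fluid" to="Solid" />
    <exchange data="Displacement" mesh="Solid-Mesh" from="Solid" to="Fluid" />
    <relative-convergence-measure data="Displacement" mesh="Solid-Mesh" limit="1e-4" />
    <acceleration:IQN-ILS>
      <data name="Displacement" mesh="Solid-Mesh" />
      <initial-relaxation value="0.1" />
    </acceleration:IQN-ILS>
  </coupling-scheme:serial-implicit>
</precice-configuration>
```

With such a file in place, each solver creates a preCICE participant ("Fluid" or "Solid"), advances its own time steps, and repeats each coupling window until the convergence measure is met; that sub-iteration is what distinguishes the strong coupling from the weak variant.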
3

Sara, Z. M. Francisca. « Micro-prices and aggregate stickiness : evidence for Chile ». Tesis, Universidad de Chile, 2016. http://repositorio.uchile.cl/handle/2250/138665.

Abstract:
Thesis submitted for the degree of Master in Economics
This research describes price-setting over time and across items in Chile, an emerging market economy. The microeconomic database underlying the consumer price index (CPI) is used to characterize microeconomic pricing behavior and to study its implications for the transmission of monetary policy to the real economy. Prices are found to be relatively flexible at a microeconomic level, in contrast to macroeconomic findings. Price changes are also mainly small and quite synchronized, and display a decreasing hazard rate. An evaluation of the relevance of microeconomic price-data moments for forecasting aggregate inflation finds that the frequency of price increases and decreases, and their respective absolute magnitudes, which can only be computed from disaggregated data, can significantly improve on inflation forecasting based solely on aggregate variables.
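The moments highlighted above, the frequency of price increases and decreases and their absolute magnitudes, are straightforward to compute once item-level price spells are available. A minimal sketch in Python, with invented data and field layout (the paper's CPI micro-database is not reproduced here):

```python
import math

def price_change_moments(records):
    """Compute the frequency and mean absolute log-size of price increases
    and decreases from item-level price spells.

    `records` maps an item id to a chronological list of observed prices.
    Frequencies are per observed price-change opportunity (consecutive pair).
    """
    ups, downs, opportunities = [], [], 0
    for prices in records.values():
        for prev, cur in zip(prices, prices[1:]):
            opportunities += 1
            if cur > prev:
                ups.append(math.log(cur / prev))
            elif cur < prev:
                downs.append(abs(math.log(cur / prev)))
    freq_up = len(ups) / opportunities
    freq_down = len(downs) / opportunities
    mean_up = sum(ups) / len(ups) if ups else 0.0
    mean_down = sum(downs) / len(downs) if downs else 0.0
    return freq_up, freq_down, mean_up, mean_down

# Toy micro-data: two items with monthly prices
data = {"bread": [100, 100, 105, 105, 103],
        "milk":  [50, 52, 52, 52, 52]}
f_up, f_down, m_up, m_down = price_change_moments(data)
print(f_up, f_down)  # fraction of months with an increase / a decrease
```

These four moments, computed per month across the whole item universe, are exactly the kind of disaggregated regressors the abstract says improve aggregate inflation forecasts.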
4

Julio, Lindaura dos Santos. « Desmame precoce ». reponame:Repositório Institucional da UFSC, 2012. http://repositorio.ufsc.br/xmlui/handle/123456789/84465.

Abstract:
Dissertation (master's) - Universidade Federal de Santa Catarina, Centro de Ciências da Saúde, Graduate Program in Nursing, Florianópolis, 2002.
In this dissertation I present the results of a study on the aspects that permeate the educational process with nursing workers, which aimed to support care in situations of early weaning by maternal choice, in a rooming-in unit of the maternity ward of a university hospital in Florianópolis. This educational proposal was carried out from May to July 2001 and complemented in March 2002; it was grounded in Paulo Freire's liberating and problematizing pedagogy and guided by the problematization methodology, using the five steps of Charles Maguerez's arc of problematization, adapted for this study as the methodological process. Twenty-five workers from the university hospital's maternity ward took part in the educational process: twenty-one work in the rooming-in unit, three in the breastfeeding center, and one in the maternity ward's nutrition service. To operationalize the process, six meetings were held: one workshop and five discussion groups. Data were collected through observation, participant observation, and the material gathered in the discussion groups. From the dialogical relationship established between the facilitator and the nursing workers, two central themes emerged. The first concerns the prejudgment that hinders care when the mother opts for early weaning; the debates raised around this theme led the participants to rethink ethical issues and continuing education in nursing work. The second theme addressed the client's reactions versus the nursing worker's reactions as an obstacle to care, prompting reflection on the feelings of the nursing mother who opts for early weaning, and on the workers' beliefs, values, and feelings, which influence care.
In this way, the dialogical-dialectical relationship established among the group members, which permeated the entire educational process, allowed the problematization of the situation to be broadened through discussion. Moreover, the nursing workers' growing awareness contributed to the search for solutions, and alternative forms of care were developed with the aim of providing humanized, quality care to the nursing mother and her family.
5

Yáñez, Lagos Tomás Humberto. « "Levantamiento de reglas de precios, integrando la sensibilidad al precio, para una categoría de productos" ». Tesis, Universidad de Chile, 2014. http://repositorio.uchile.cl/handle/2250/131467.

Abstract:
Thesis submitted for the professional degree of Ingeniero Civil Industrial
This thesis analyzes the transactional data of a consumer packaged goods supplier in order to improve the contribution performance of a product category. The goal is to establish relationships between products that benefit contribution. These relationships can be hard to find given the large amount of transactional data. To deal with this, a Price Rules methodology is defined and applied, which makes it possible to compare the existing relationships by extracting indicators of how often they occur and how beneficial, or not, they are. SQL and Perl programming is used to assist rule extraction. The rules extracted include: the Order Rule (to establish which product improves contribution when priced above another), the Product Cannibalization Rule (to establish the price range within which two products should sit), the Aggregate Order Rule (to establish how product attributes relate), and Elasticity Rules (which relate customers' price response to what is observed transactionally). Support and Confidence indicators were generated for each rule in order to compare performance. The KDD methodology is used, adapted for extracting Price Rules using the principles of Association Rules. Transactional data are extracted for the Household Oils category, identifying weekly product prices. The rules are filtered by their indicators, and the benefit of having maintained the rules found, excluding the Elasticity Rules, is estimated at USD 6,096,405. For the Elasticity Rules, price response is estimated through linear regressions using the double-log method, and it is determined whether they are correctly reflected in the transactional information.
Under this analysis, the best price range for each product was determined according to its elasticity level; compliance with this range would generate potential benefits of USD 1,540,289. The rules found serve several purposes, including: establishing price ranges, observing cannibalization effects, verifying that price segmentation within the category is being respected, and evaluating products' price sensitivity. The benefit of having maintained the price relationships found in this work amounts to USD 7,636,694. Finally, general guidelines are established for building a decision support system, and a prototype for rule extraction is built using web programming languages.
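As an illustration of the Support and Confidence indicators described above, the sketch below evaluates a hypothetical Order Rule on invented weekly data. The field names and the margin-based notion of "beneficial" are assumptions for illustration, not the thesis's SQL/Perl implementation:

```python
def order_rule_stats(weeks):
    """Evaluate the Order Rule 'price(A) > price(B)' over weekly data.

    `weeks` is a list of dicts with keys 'price_a', 'price_b', 'margin'.
    Support    = share of weeks where the rule's condition holds.
    Confidence = share of those weeks where the category margin beat
                 the overall weekly average margin (our proxy for
                 'beneficial to contribution').
    """
    n = len(weeks)
    holds = [w for w in weeks if w["price_a"] > w["price_b"]]
    support = len(holds) / n
    avg_margin = sum(w["margin"] for w in weeks) / n
    good = [w for w in holds if w["margin"] > avg_margin]
    confidence = len(good) / len(holds) if holds else 0.0
    return support, confidence

# Invented weekly observations for two products in the category
weeks = [
    {"price_a": 1190, "price_b": 990,  "margin": 120},
    {"price_a": 1190, "price_b": 1290, "margin": 80},
    {"price_a": 1090, "price_b": 990,  "margin": 110},
    {"price_a": 990,  "price_b": 990,  "margin": 90},
]
s, c = order_rule_stats(weeks)
```

Filtering rules by thresholds on these two indicators, as the thesis does, keeps only relationships that both occur often and tend to pay off.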
6

Maia, Filho Nelson Lourenço. « A adolescente precoce : aspectos relacionados ao parto, puerperio imediato e recem-nascido, comparativamente as não-precoces e as gestantes adultas ». [s.n.], 1993. http://repositorio.unicamp.br/jspui/handle/REPOSIP/309542.

Abstract:
Advisor: Gustavo A. de Souza
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Ciencias Medicas
Eight hundred and forty-six inpatients at the maternity ward of "Hospital de Clínicas Especializadas" in Franco da Rocha (now the teaching hospital of the Faculdade de Medicina de Jundiaí), between March 1989 and December 1991, were divided into three groups: a study group of 204 very young adolescents, aged 11 to 15; a control group of 320 older adolescents, aged 16 to 19; and a third group of 322 adult pregnant women, aged 20 to 25. All patients were primigravidas, in order to obtain a homogeneous sample, while other variables such as social, economic, cultural, educational, and housing conditions were the same. The three groups were compared on the following parameters: type of fetal presentation, antepartum hospitalization period, duration of the first stage of labor, duration of the second stage, type of delivery, indications for forceps, indications for cesarean section, maternal complications of labor and the immediate puerperium, Apgar scores at the first and fifth minutes, infant weight, fetal age, the relation between infant age and weight, and infant complications. The very young adolescent group behaved differently from the other groups in the following variables: a lower rate of cesarean delivery; more cesarean indications for contracted pelvis and eclampsia; a significantly higher rate of infants with Apgar scores of six or less at the first minute; more infants weighing less than 2,500 g and with gestational age at birth of 36 weeks or less by the Capurro method; and a higher rate of infants with complications at birth and with neonatal jaundice. Pregnancy in very young adolescents is thus recognized as high risk, and the author suggests creating more multidisciplinary institutions to assist this population, which is already severely burdened by socioeconomic factors in developing countries.
Doctorate in Tocogynecology
7

Fernandes, Maria de Fátima Valente Martins. « Que intervenção precoce (?¿) : satisfação das famílias em intervenção precoce ». Master's thesis, [s.n.], 2008. http://hdl.handle.net/10284/1564.

Abstract:
Dissertation presented to Universidade Fernando Pessoa in partial fulfillment of the requirements for the degree of Master in Psychology, specialization in Educational Psychology and Community Intervention.
This study aims to analyze and discuss the satisfaction of the families served by the Early Intervention Team of Estarreja, within the multi-municipality Early Intervention Project of Aveiro. An exploratory study was designed to obtain results showing that, although families share common problems, each has its own specificities and their needs differ according to various characteristics and factors, and it is this that should shape the support offered by professionals. In our view, such support must be grounded in reality and aim to meet family needs, in pursuit of a better quality of life for the child at risk and for the family as a whole. The results obtained in this study were generally satisfactory, as this was the first work carried out directly with families in the municipality. Overall, the families are "satisfied" (50%) with the Early Intervention service, although only 34% feel "very satisfied"; 10% of the sample is "somewhat satisfied", and a minority of 3%, still significant to us, reports being "not satisfied".
8

Castellón, Peña José Manuel. « Efectos de la seguridad ciudadana en el precio de las viviendas : un análisis de precios hedónicos ». Tesis, Universidad de Chile, 2005. http://www.repositorio.uchile.cl/handle/2250/108324.

Abstract:
Degree seminar submitted for the professional title of Ingeniero Comercial, concentration in Economics
The construction business is recognized as one of the most relevant sectors of the country's economic activity. The sector's direct influence on job creation and its close relationship with the business cycle mean that its results are at times used as important indicators of the economy's dynamism. It is therefore not surprising that both public authorities and private agents pay significant attention to the industry, knowing it is one of the engines of the country's development. One of the most important sub-sectors within construction is housing, which accounts for roughly 2.8% of the sector. Residential investment not only influences the economy's dynamism directly, through its share of aggregate investment, but also strongly affects household wealth. According to the 2003 Casen survey, 69.9% of Chileans own the home they live in, which represents significant wealth and a potential income stream through rent. Variations in housing values therefore have important repercussions on Chileans' wealth and on economic activity. The evolution of housing prices has generally been analyzed by constructing indices that track their aggregate value. These indices are built using hedonic price methodologies, which identify the relative importance of a series of attributes in each dwelling's final market value. The individual characteristics of each dwelling make it a unique good, placing us in a market where the traded good is highly heterogeneous.
For example, suppose two identical dwellings differ only in being located in different parts of the city: location can be expected to matter in determining their market price. The observed market price of each dwelling summarizes its many characteristics, its surroundings, and the interaction between them. These characteristics are valued jointly, without the relative importance of each being specified in detail and in isolation. The present work values these characteristics for four types of dwellings: new and used houses, and new and used apartments, all drawn from transactions in the municipalities of the city of Santiago. One of the most important and novel aspects of this study is the valuation of security in housing prices. To that end, indicators of the relative frequency of criminal activity by municipality were included in the estimations; higher levels of criminal activity are expected to lower final housing values. The methodology is based on estimating a hedonic price model through non-linear regressions, which can assign prices to characteristics that usually come "bundled" and deliver better fits than the linear models generally used in this kind of work. Although similar local studies exist, this work has three main strengths. The first concerns the database used: all previous studies relied on data collected from the media.
Collecting information that way biases the results by including dwelling characteristics with measurement error: the characteristics published by sellers represent only the dwelling's positive attributes, omitting possible defects or features that matter to buyers. A second, perhaps more important, shortcoming of those databases is the use of the asking price as the dependent variable rather than the effective transaction price. These problems are minimized here by using a database from the Conservador de Bienes Raíces de Santiago, which records all effective transactions from 2001 to 2004. The second strength is the use of a non-linear specification through the Box-Cox methodology, an important element that distinguishes this work from previous ones and provides technical grounds for identifying the best model fit. Finally, the third strength is the identification of the importance of public safety in determining housing prices, achieved by including in the estimations the month-to-month differences in crime report rates, using statistics from the Public Safety Division of the Ministry of the Interior. The conclusions indicate that criminal activity has a negative impact on housing prices; the results are robust and significant across different specifications. Increases in the crime report rate reduce housing values, more strongly in the market for houses than for apartments, possibly reflecting the greater security of apartment buildings. The work is organized in six sections.
The next section provides a thorough review of the existing literature; the third part describes and analyzes, variable by variable, the data used in the estimations; the fourth section discusses the estimation methodology; the estimations are presented in the fifth section; and the main conclusions are given in section 6.
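The Box-Cox methodology mentioned above nests the common functional forms: transforming the dependent variable as (p^λ − 1)/λ gives the linear model at λ = 1 and the log model as λ → 0, and λ can be chosen on goodness of fit. A stylized single-attribute illustration in Python with fabricated prices (the actual study estimates a full hedonic model over many attributes):

```python
import math

def box_cox(y, lam):
    """Box-Cox transform: lam = 1 ~ linear model, lam -> 0 -> log model."""
    return math.log(y) if abs(lam) < 1e-12 else (y**lam - 1.0) / lam

def ols_r2(x, y):
    """R^2 of a simple one-regressor OLS fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Fabricated data: dwelling size (m2) vs. transaction price
size = [40, 55, 70, 90, 120, 150]
price = [1200, 1700, 2300, 3100, 4400, 5800]

# Grid-search the lambda whose transform of price fits size best
best = max((ols_r2(size, [box_cox(p, lam) for p in price]), lam)
           for lam in [0.0, 0.25, 0.5, 0.75, 1.0])
r2, lam_star = best
```

In practice λ is estimated by maximum likelihood rather than a coarse grid, but the idea is the same: let the data, not an a priori linear or log choice, pick the functional form.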
9

Moura, Carina Helena Barros. « Intervenção precoce na primeira infância ». Master's thesis, [s.n.], 2011. http://hdl.handle.net/10400.26/15983.

Abstract:
Master's degree, Mental Health and Psychiatric Nursing, 2011, Escola Superior de Enfermagem de Lisboa
Mental Health and Psychiatric Nursing, focusing on the interventions of the Mental Health and Psychiatric Nurse Specialist within an Early Childhood (0-6 years) Early Intervention team. The hospital internship in a psychiatry service took place at the Child Psychiatry Inpatient Unit of Hospital Dona Estefânia (HDE), and the community mental health services chosen were the Early Childhood Unit (UPI) of the HDE and the Early Intervention (IP) Team of the Amora Health Center. This report outlines the competencies and objectives the trainee set out to achieve, the activities carried out, and critical reflections grounded in conceptual terms, taking the intended results into account. It highlights the fundamental role a Mental Health and Psychiatric Nurse Specialist would play in an Early Childhood IP team, which currently does not happen, and also addresses a perspective of continuity for the problem presented. At the different internship sites, the trainee was able to identify attachment as a decisive factor in child and adolescent development, to identify the community interventions within the competence of the Mental Health and Psychiatric Nurse Specialist, and to reflect on the role such a specialist would play if part of an Early Intervention team, presenting proposals for action with the child and family.
10

Giurgea, Constantin. « Precise motion with piezoelectric actuator ». Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6184.

Abstract:
The purpose of this study is to determine the displacement performance of a high-accuracy positioning device equipped with three piezoceramic actuators. The displacement performance of each individual actuator is investigated in order to perform controlled motion over a very small range. A nanometric-precision, three-degree-of-freedom positioner was designed and fabricated. In order to design a proper closed-loop controller, the open-loop characteristics of the nanopositioner were experimentally investigated.
11

Pravia, Marco Antonio (Pravia Hernandez) 1975. « Precise control of quantum information ». Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29999.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 2002.
Includes bibliographical references (p. 93-101).
Theoretical discoveries in the nascent field of quantum information processing hold great promise, suggesting the means for increased computational power and unconditionally secure communications. To achieve these advances in practice, however, quantum information must be stored and manipulated with high fidelity. Here, we describe how quantum information stored in a nuclear spin system can be controlled accurately. We describe a method for creating strongly modulating single-spin gates that faithfully produce the desired unitary transformations. The simulated fidelity of the best gate (under ideal conditions) reaches close to 0.99999, a value close to estimates of the fault-tolerant threshold. In addition, we show how knowledge of experimental errors can be used to correct or compensate the gates. The experimental demonstration of these methods yields estimated single-spin and coupling gate fidelities close to 0.99. The methods are applicable to a variety of experimental studies in quantum information processing. We used the gates to implement strategies for combating decoherence, including the realization of a noiseless subsystem and the concatenation of quantum error correction with dynamical decoupling. The gates were also used to demonstrate the quantum Fourier transform, the disentanglement eraser, and an entanglement swap. Finally, we describe a nuclear magnetic resonance (NMR) implementation of a quantum lattice gas (QLG) algorithm. Recently, it has been suggested that an array of small quantum information processors sharing classical information can be used to solve selected computational problems. The concrete implementation demonstrated here solves the diffusion equation, and it provides a test example from which to probe the strengths and limitations of this new computation paradigm. The NMR experiment consists of encoding a mass density onto an array of 16 two-qubit quantum information processors and then following the computation through 7 time steps of the algorithm. The results show good agreement with the analytic solution for diffusive dynamics.
by Marco Antonio Pravia.
Ph.D.
Styles APA, Harvard, Vancouver, ISO, etc.
12

Monteiro, Rui Miguel de Jesus. « Desfolha precoce na casta Aragonez ». Master's thesis, ISA, 2014. http://hdl.handle.net/10400.5/6802.

Texte intégral
Résumé :
Master's in Viticulture and Enology – Instituto Superior de Agronomia / Faculdade de Ciências, Universidade do Porto
This study aimed to assess the influence of two techniques, early defoliation and cluster thinning, on the agronomic behavior of the Aragonez variety and their effects on yield, health and quality of the grapes. The trial was conducted in 2013 in a vineyard of Quinta do Pinto, in the Lisbon wine region, Portugal. The treatments studied were defoliation at flowering (ED), defoliation at the end of pea size combined with cluster thinning at veraison (D&T), and a non-defoliated, non-thinned treatment used as a control. Early defoliation consisted of the removal of six basal leaves from all shoots one week before flowering, while D&T consisted of removing about three basal leaves, only on the east side of the canopy, at the end of pea size, plus thinning of all second-order clusters at veraison. Early leaf removal caused significant changes in canopy structure, reducing leaf area and exposed leaf surface and increasing cluster exposure, thus improving the cluster-zone microclimate. This improvement was reflected in a lower incidence of botrytis. The treatment also caused a 57 % decrease in yield compared to the control, yet no significant differences in grape quality at harvest.
Styles APA, Harvard, Vancouver, ISO, etc.
13

Nosek, Jakub. « Testování metody Precise Point Positioning ». Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-414313.

Texte intégral
Résumé :
This diploma thesis deals with the Precise Point Positioning (PPP) method in its various variants. It describes the theoretical foundations of the PPP method and the most important systematic errors that affect its accuracy. The accuracy of the PPP method was evaluated using data from the permanent GNSS station CADM, part of the AdMaS research center, covering the period 2018–2019. The results of combinations of different GNSS constellations and of different observation periods were compared. Finally, the accuracy was verified at 299 IGS GNSS stations.
Styles APA, Harvard, Vancouver, ISO, etc.
14

LALAUX, VINCENT. « Readaptation precoce des valvules operes ». Amiens, 1991. http://www.theses.fr/1991AMIEM121.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
15

Wang, Yu. « Localization Precise in Urban Area ». Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0045.

Texte intégral
Résumé :
Nowadays, the accuracy obtained with stand-alone GNSS positioning is unsatisfactory for a growing number of land users. Sub-meter or even centimeter-level accuracy has become crucial for many applications. Especially for vehicles in urban environments, the final accuracy is considerably worse, since GNSS signals there are often missing or contaminated. To achieve more precise positioning, carrier phase measurements are indispensable: the errors associated with these measurements are about a hundred times smaller than those of pseudorange measurements. However, an integer-valued parameter called the ambiguity prevents the phase measurements from behaving like 'precise' pseudoranges. While carrier phase measurements are widely used in applications located in open environments, this thesis focuses on exploiting them in urban environments. To this end, the RTK methodology is applied, which relies mainly on the fact that the errors on pseudorange and phase measurements are spatially correlated. In addition, this thesis takes advantage of a dual GNSS constellation, GPS and GLONASS, to strengthen the position solution and the reliable use of carrier phase measurements. Finally, a low-cost MEMS is also integrated to compensate for the drawbacks of GNSS in urban environments. Regarding the phase measurements, a modified version of Partial Integer Ambiguity Resolution (Partial-IAR) is proposed so that the phase measurements behave as reliably as possible like pseudoranges. Moreover, cycle slips are more frequent in urban environments and introduce discontinuities into the phase measurements; a new mechanism for detecting and correcting cycle slips is therefore put in place to benefit from the high precision of the phase measurements.
Tests based on data collected around Toulouse are carried out to demonstrate the performance.
Nowadays, stand-alone Global Navigation Satellite System (GNSS) positioning accuracy is not sufficient for a growing number of land users. Sub-meter or even centimeter accuracy is becoming more and more crucial in many applications. Especially when navigating rovers in the urban environment, the final positioning accuracy can be considerably worse because of the dramatic lack, and contamination, of GNSS measurements. To achieve more accurate positioning, GNSS carrier phase measurements appear mandatory: their tracking error is smaller by a factor of about a hundred than that of the usual code pseudorange measurements. However, they are also less robust and include a so-called integer ambiguity that prevents them from being used directly for positioning. While carrier phase measurements are widely used in applications located in open environments, this thesis focuses on using them in the much more challenging urban environment. To do so, the Real-Time Kinematic (RTK) methodology is used, which takes advantage of the spatially correlated nature of most code and carrier phase measurement errors. Besides, the thesis also takes advantage of a dual GNSS constellation, GPS and GLONASS, to strengthen the position solution and the reliable use of carrier phase measurements. Finally, to make up for the disadvantages of GNSS in urban areas, a low-cost MEMS is also integrated into the final solution. Regarding the use of carrier phase measurements, a modified version of Partial Integer Ambiguity Resolution (Partial-IAR) is proposed to convert carrier phase measurements as reliably as possible into absolute pseudoranges. Moreover, carrier phase cycle slips (CS) are quite frequent in urban areas, creating discontinuities in the measured carrier phases; a new detection and repair mechanism for CSs is therefore proposed to benefit continuously from the high precision of carrier phases.
Finally, tests based on real data collected around Toulouse are used to assess the performance of the whole methodology.
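As a small illustration of the cycle-slip problem discussed in this abstract (a minimal sketch of our own, not the detection and repair mechanism proposed in the thesis), an integer jump in the carrier phase can be flagged by double time-differencing: the smooth receiver-satellite geometry nearly cancels in the second difference, while a slip leaves a spike:

```python
def detect_cycle_slips(phase, threshold=0.5):
    """Flag epochs where the twice time-differenced carrier phase (in cycles)
    exceeds `threshold` in magnitude. The smooth receiver-satellite geometry
    nearly cancels in the second difference, while an integer cycle slip
    leaves a spike at the epochs around the jump."""
    d2 = [phase[i] - 2 * phase[i + 1] + phase[i + 2] for i in range(len(phase) - 2)]
    return [i + 2 for i, v in enumerate(d2) if abs(v) > threshold]

# Synthetic single-satellite example: smooth phase ramp (cycles) with a 3-cycle
# slip introduced at epoch 60.
phase = [0.8 * t + 1e-4 * t ** 2 for t in range(100)]
phase = [p + 3.0 if t >= 60 else p for t, p in enumerate(phase)]
print(detect_cycle_slips(phase))  # -> [60, 61]
```

Real detectors work on combinations of dual-frequency observables and must separate slips from receiver clock jumps and measurement noise; the 0.5-cycle threshold here is purely illustrative.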
Styles APA, Harvard, Vancouver, ISO, etc.
16

Lewis, Matt. « Precise verification of C programs ». Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:34b5ed5a-160b-4e2c-8dac-eab62a24f78c.

Texte intégral
Résumé :
Most current approaches to software verification are one-sided -- a safety prover will try to prove that a program is safe, while a bug-finding tool will try to find bugs. It is rare to find an analyser that is optimised for both tasks, which is problematic since it is hard to know in advance whether a program you wish to analyse is safe or not. The result of taking a one-sided approach to verification is false alarms: safety provers will often claim that safe programs have errors, while bug-finders will often be unable to find errors in unsafe programs. Orthogonally, many software verifiers are designed for reasoning about idealised programming languages that may not have widespread use. A common assumption made by verification tools is that program variables can take arbitrary integer values, while programs in most common languages use fixed-width bitvectors for their variables. This can have a real impact on the verification, leading to incorrect claims by the verifier. In this thesis we will show that it is possible to analyse C programs without generating false alarms, even if they contain unbounded loops, use non-linear arithmetic and have integer overflows. To do this, we will present two classes of analysis based on underapproximate loop acceleration and second-order satisfiability respectively. Underapproximate loop acceleration addresses the problem of finding deep bugs. By finding closed forms for loops, we show that deep bugs can be detected without unwinding the program and that this can be done without introducing false positives or masking errors. We then show that programs accelerated in this way can be optimised by inlining trace automata to reduce their reachability diameter. This inlining allows acceleration to be used as a viable technique for proving safety, as well as finding bugs. In the second part of the thesis, we focus on using second-order logic for program analysis. 
We begin by defining second-order SAT: an extension of propositional SAT that allows quantification over functions. We show that this problem is NEXPTIME-complete, and that it is polynomial time reducible to finite-state program synthesis. We then present a fully automatic, sound and complete algorithm for synthesising C programs from a specification written in C. Our approach uses a combination of bounded model checking, explicit-state model checking and genetic programming to achieve surprisingly good performance for a problem with such high complexity. We conclude by using second-order SAT to precisely and directly encode several program analysis problems including superoptimisation, de-obfuscation, safety and termination for programs using bitvector arithmetic and dynamically allocated lists.
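The effect of loop acceleration described above can be illustrated with a toy example (our own sketch, not the analyser developed in the thesis): for a loop whose body is `x += step` over 32-bit bitvectors, the closed form x_k = x0 + k·step (mod 2^32) turns a deep assertion check into a linear congruence, so the bug depth is found without unwinding:

```python
import math

def find_bug_by_unwinding(x0, target, step, bound):
    """Naive bounded search: execute the loop up to `bound` iterations and
    report the first iteration at which x hits the 'bad' value."""
    x = x0
    for k in range(bound):
        if x == target:            # the assertion 'x != target' would fail here
            return k
        x = (x + step) % 2**32     # fixed-width (32-bit) bitvector semantics
    return None                    # bound too small: a deep bug is missed

def find_bug_by_acceleration(x0, target, step):
    """Accelerated check: the loop body x += step has the closed form
    x_k = x0 + k*step (mod 2^32), so we solve x0 + k*step = target (mod 2^32)
    for k directly, without executing a single iteration."""
    m = 2**32
    rhs = (target - x0) % m
    g = math.gcd(step, m)
    if rhs % g:
        return None                # congruence unsolvable: the assertion is safe
    # smallest k solving (step/g)*k = rhs/g (mod m/g), via a modular inverse
    return (rhs // g) * pow(step // g, -1, m // g) % (m // g)
```

With x0 = 0, step = 3 and target = 300000000, the accelerated check returns k = 100000000 immediately, a depth far beyond any practical unwinding bound.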
Styles APA, Harvard, Vancouver, ISO, etc.
17

Adami, Fernando. « Obesidade e maturação sexual precoce ». Florianópolis, SC, 2007. http://repositorio.ufsc.br/xmlui/handle/123456789/90614.

Texte intégral
Résumé :
Dissertation (Master's) – Universidade Federal de Santa Catarina, Centro de Ciências da Saúde, Programa de Pós-Graduação em Nutrição.
The aim of this study was to investigate the association between early sexual maturation and obesity in boys and girls aged 10 to 14 in Florianópolis. The study included 629 schoolchildren between 10 and 14 years of age (277 boys and 352 girls), recruited from two public and two private schools in the central region of the municipality. Body mass index (BMI) was used to determine overweight and obesity, with the cut-off points and the sex- and age-specific LMS values of Conde & Monteiro (2006). The LMS values for weight and height were obtained from the Centers for Disease Control (CDC 2000) and were used to compute Z-scores. Sexual maturation was assessed according to Tanner stages (1962). Subjects were grouped into age tertiles within each stage and sex; the 1st tertile was considered early sexual maturation and the 2nd tertile served as the reference group. Girls with early sexual maturation were heavier and taller, had higher BMI-for-age values and showed a higher prevalence of overweight including obesity than girls in the reference group. Boys with early sexual maturation were taller for their age but did not show a higher prevalence of overweight including obesity. These findings corroborate reports in the literature that girls with early sexual maturation have higher prevalences of obesity than girls without it. For boys, the present study found no association between early sexual maturation and obesity.
Styles APA, Harvard, Vancouver, ISO, etc.
18

Kostrygin, Anatolii. « Precise Analysis of Epidemic Algorithms ». Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX042/document.

Texte intégral
Résumé :
Collaboratively disseminating a piece of information from one agent to all other agents of a distributed system is a fundamental problem, particularly important when one seeks distributed algorithms that are both robust and work in an anonymous setting, i.e. without assuming that the agents have known distinct identifiers. This problem, known as the rumor spreading problem, underlies many communication algorithms on wireless sensor networks [Dimakis et al. (2010)] and mobile ad-hoc networks, and is a central building block for many advanced distributed algorithms [Mosk-Aoyama and Shah (2008)]. The best-known methods for overcoming the challenges of robustness and anonymity are gossip-based algorithms, built on the paradigm that agents randomly contact other agents to send or retrieve the information. We propose a general method for analyzing the performance of gossip-based algorithms on complete graphs. Unlike previous results, which rely on the precise structure of the processes under study, our analysis is based only on the probability and the covariance of the events that an uninformed agent becomes informed. This universality allows us to reproduce the basic results for the classic push, pull and push-pull protocols, and to analyze variations such as communication failures or multiple simultaneous communications performed by each agent. Moreover, we can analyze certain dynamic models in which the network is a random graph resampled at every round [Clementi et al. (ESA 2013)]. Despite its generality, our method is simple and precise.
It allows us to determine the expected spreading time up to an additive constant, which is more precise than most previous results. We also show that the spreading time deviates from its expectation by more than an additive constant r with probability at most exp(−Ω(r)). Finally, we discuss the classical assumption that agents can answer several incoming calls. We observe that restricting each agent to a single incoming call causes a significant slowdown of the push-pull protocol. In particular, the final phase of the process takes logarithmic instead of doubly logarithmic time, which also increases the number of messages sent from Θ(n log log n) (optimal according to [Karp et al. (FOCS 2000)]) to Θ(n log n). We propose a simple variation of the push-pull protocol that restores the doubly logarithmic final phase and hence the optimal message complexity.
Epidemic algorithms are distributed algorithms in which the agents in the network involve their peers, similarly to the spread of epidemics. In this work, we focus on randomized rumor spreading -- a class of epidemic algorithms based on the paradigm that nodes call random neighbors and exchange information with these contacts. Randomized rumor spreading has found numerous applications, from the consistency maintenance of replicated databases to news spreading in social networks. Numerous mathematical analyses of different rumor spreading algorithms can be found in the literature. Some of them provide extremely sharp estimates for the performance of such processes, but most of them are based on the inherent properties of concrete algorithms. We develop a new, simple and generic method to analyze randomized rumor spreading processes in fully connected networks. In contrast to all previous works, which heavily exploit the precise definition of the process under investigation, we only need to understand the probability and the covariance of the events that uninformed nodes become informed. This universality allows us to easily analyze the classic push, pull, and push-pull protocols both in their pure version and in several variations, such as when messages fail with constant probability or when nodes call a random number of others each round. Some dynamic models can be analyzed as well, e.g., when the network is a random graph sampled independently each round [Clementi et al. (ESA 2013)]. Despite this generality, our method determines the expected rumor spreading time precisely apart from additive constants, which is more precise than almost all previous works. We also prove tail bounds showing that a deviation from the expectation by more than an additive number of r rounds occurs with probability at most exp(−Ω(r)). We further use our method to discuss the common assumption that nodes can answer any number of incoming calls.
We observe that the restriction that only one call can be answered leads to a significant increase of the runtime of the push-pull protocol. In particular, the double logarithmic end phase of the process now takes logarithmic time. This also increases the message complexity from the asymptotically optimal Θ(n log log n) [Karp, Shenker, Schindelhauer, Vöcking (FOCS 2000)] to Θ(n log n). We propose a simple variation of the push-pull protocol that reverts back to the double logarithmic end phase and thus to the Θ(n log log n) message complexity
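The push-pull dynamics analyzed here are easy to reproduce empirically; the following sketch (our own illustration, using the unlimited-answer assumption discussed in the abstract) simulates the synchronous protocol on a complete graph and counts the rounds until the rumor reaches everyone:

```python
import random

def push_pull_rounds(n, seed=0):
    """Simulate synchronous push-pull rumor spreading on the complete graph K_n.

    Each round, every node calls a uniformly random other node; if either
    party knows the rumor at the start of the round, both know it afterwards.
    Returns the number of rounds until all n nodes are informed."""
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True             # one initially informed node
    rounds = 0
    while not all(informed):
        state = informed[:]        # start-of-round knowledge (synchronous model)
        for u in range(n):
            v = rng.randrange(n - 1)
            if v >= u:
                v += 1             # uniform contact over all nodes except u
            if state[u] or state[v]:   # push (u knows) or pull (v knows)
                informed[u] = informed[v] = True
        rounds += 1
    return rounds

print(push_pull_rounds(10_000))    # a small, O(log n) number of rounds
```

For n = 10^4 the simulation typically terminates within a couple of dozen rounds, consistent with the logarithmic-order spreading times analyzed in the thesis.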
Styles APA, Harvard, Vancouver, ISO, etc.
19

Bordi, John Joseph. « The precise range and range-rate equipment (PRARE) and its application to precise orbit determination / ». Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
20

Bemmerer, Daniel. « Precise nuclear physics for the Sun ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-95439.

Texte intégral
Résumé :
For many centuries, the study of the Sun has been an important testbed for understanding stars that are further away. One of the first astronomical observations Galileo Galilei made in 1612 with the newly invented telescope concerned the sunspots, and in 1814, Joseph von Fraunhofer employed his new spectroscope to discover the absorption lines in the solar spectrum that are now named after him. Even though more refined and new modes of observation are now available than in the days of Galileo and Fraunhofer, the study of the Sun is still high on the agenda of contemporary science, due to three guiding interests. The first is connected to the ages-old human striving to understand the structure of the larger world surrounding us. Modern telescopes, some of them even based outside the Earth’s atmosphere in space, have succeeded in observing astronomical objects that are billions of light-years away. However, for practical reasons precision data that are important for understanding stars can still only be gained from the Sun. In a sense, the observations of far-away astronomical objects thus call for a more precise study of the close-by, of the Sun, for their interpretation. The second interest stems from the human desire to understand the essence of the world, in particular the elementary particles of which it consists. Large accelerators have been constructed to produce and collide these particles. However, man-made machines can never be as luminous as the Sun when it comes to producing particles. Solar neutrinos have thus served not only as an astronomical tool to understand the Sun’s inner workings, but their behavior on the way from the Sun to the Earth is also being studied with the aim to understand their nature and interactions. The third interest is strictly connected to life on Earth.
A multitude of research has shown that even relatively slight changes in the Earth’s climate may strongly affect the living conditions in a number of densely populated areas, mainly near the ocean shore and in arid regions. Thus, great effort is expended on the study of greenhouse gases in the Earth’s atmosphere. Also the Sun, via the solar irradiance and via the effects of the so-called solar wind of magnetic particles on the Earth’s atmosphere, may affect the climate. There is no proof linking solar effects to short-term changes in the Earth’s climate. However, such effects cannot be excluded, either, making it necessary to study the Sun. The experiments summarized in the present work contribute to the present-day study of our Sun by repeating, in the laboratory, some of the nuclear processes that take place in the core of the Sun. They aim to improve the precision of the nuclear cross section data that lay the foundation of the model of the nuclear reactions generating energy and producing neutrinos in the Sun. In order to reach this goal, low-energy nuclear physics experiments are performed. Wherever possible, the data are taken in a low-background, underground environment. There is only one underground accelerator facility in the world, the Laboratory Underground for Nuclear Astrophysics (LUNA) 0.4 MV accelerator in the Gran Sasso laboratory in Italy. Much of the research described here is based on experiments at LUNA. Background and feasibility studies shown here lay the base for future, higher-energy underground accelerators. Finally, it is shown that such a device can even be placed in a shallow-underground facility such as the Dresden Felsenkeller without great loss of sensitivity.
Styles APA, Harvard, Vancouver, ISO, etc.
21

Morgan, Charlotte Ann, Biotechnology & Biomolecular Sciences, Faculty of Science, UNSW. « Development of precise microbiological reference materials ». Publisher: University of New South Wales, Biotechnology & Biomolecular Sciences, 2008. http://handle.unsw.edu.au/1959.4/41848.

Texte intégral
Résumé :
Quality Control (QC) reference materials are widely used in microbiology to demonstrate the efficacy of testing methods and culture media. The current method for preparation of QC materials is serial dilution of a microbial broth culture to obtain a suspension that contains an estimated number of colony forming units (cfu). Commercial reference material products are available with dried microbial cells; however, the numbers of cells vary between batches, as the production processes rely on cell suspensions of estimated cell number. This study developed a method to produce precise microbial reference materials with an accurate number of viable cells. Flow cytometry was used to count and dispense precise numbers of cells into a single droplet of fluid. The droplets were then mixed with a lyoprotectant solution and freeze-dried. The resultant freeze-dried pellets showed consistent average cfu counts of 28–33 cfu with a standard deviation < 3 cfu. The freeze-drying methodology and the developed cell-growth conditions enabled > 90% of the cells to survive freeze-drying and remain viable for one year at a storage temperature below −18 °C. The freeze-dried pellet methodology was applied to a range of genera, including different E. coli strains, Gram-positive bacteria such as Listeria and Staphylococcus, the yeast Candida albicans and the spore-producing Bacillus cereus. The precision of cell numbers was comparable between different microbial genera and strains, and a consistent standard deviation below 3 cfu was achieved. The same freeze-dried pellet method was used for the different micro-organisms, except for changes to the preparation of cell suspensions. Different methods of broth culture were developed to ensure freeze-dried cell survival. A measurement of method reproducibility was obtained when 99 batches of pellets were produced, and within-batch and between-batch variation was determined.
Styles APA, Harvard, Vancouver, ISO, etc.
22

Palmer, Ronald J. « Precise positioning with AM radio stations ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23650.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
23

Palmer, Jason. « Precise pressure sensor temperature compensation algorithms ». Diss., Online access via UMI:, 2007.

Trouver le texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
24

Bemmerer, Daniel. « Precise nuclear physics for the sun ». Forschungszentrum Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-97364.

Texte intégral
Résumé :
For many centuries, the study of the Sun has been an important testbed for understanding stars that are further away. One of the first astronomical observations Galileo Galilei made in 1612 with the newly invented telescope concerned the sunspots, and in 1814, Joseph von Fraunhofer employed his new spectroscope to discover the absorption lines in the solar spectrum that are now named after him. Even though more refined and new modes of observation are now available than in the days of Galileo and Fraunhofer, the study of the Sun is still high on the agenda of contemporary science, due to three guiding interests. The first is connected to the ages-old human striving to understand the structure of the larger world surrounding us. Modern telescopes, some of them even based outside the Earth’s atmosphere in space, have succeeded in observing astronomical objects that are billions of light-years away. However, for practical reasons precision data that are important for understanding stars can still only be gained from the Sun. In a sense, the observations of far-away astronomical objects thus call for a more precise study of the close-by, of the Sun, for their interpretation. The second interest stems from the human desire to understand the essence of the world, in particular the elementary particles of which it consists. Large accelerators have been constructed to produce and collide these particles. However, man-made machines can never be as luminous as the Sun when it comes to producing particles. Solar neutrinos have thus served not only as an astronomical tool to understand the Sun’s inner workings, but their behavior on the way from the Sun to the Earth is also being studied with the aim to understand their nature and interactions. The third interest is strictly connected to life on Earth.
A multitude of research has shown that even relatively slight changes in the Earth’s climate may strongly affect the living conditions in a number of densely populated areas, mainly near the ocean shore and in arid regions. Thus, great effort is expended on the study of greenhouse gases in the Earth’s atmosphere. Also the Sun, via the solar irradiance and via the effects of the so-called solar wind of magnetic particles on the Earth’s atmosphere, may affect the climate. There is no proof linking solar effects to short-term changes in the Earth’s climate. However, such effects cannot be excluded, either, making it necessary to study the Sun. The experiments summarized in the present work contribute to the present-day study of our Sun by repeating, in the laboratory, some of the nuclear processes that take place in the core of the Sun. They aim to improve the precision of the nuclear cross section data that lay the foundation of the model of the nuclear reactions generating energy and producing neutrinos in the Sun. In order to reach this goal, low-energy nuclear physics experiments are performed. Wherever possible, the data are taken in a low-background, underground environment. There is only one underground accelerator facility in the world, the Laboratory Underground for Nuclear Astrophysics (LUNA) 0.4 MV accelerator in the Gran Sasso laboratory in Italy. Much of the research described here is based on experiments at LUNA. Background and feasibility studies shown here lay the base for future, higher-energy underground accelerators. Finally, it is shown that such a device can even be placed in a shallow-underground facility such as the Dresden Felsenkeller without great loss of sensitivity.
Styles APA, Harvard, Vancouver, ISO, etc.
25

Woodman, George Henry. « Precise laser spectroscopy of atomic hydrogen ». Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316894.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
26

Whalley, Stephen. « Precise orbit determination for GPS satellites ». Thesis, University of Nottingham, 1990. http://eprints.nottingham.ac.uk/14554/.

Texte intégral
Résumé :
The NAVSTAR Global Positioning System (GPS) has been under development by the US Department of Defense since 1973. Although GPS was developed for precise instantaneous position and velocity determination, it can be used for high precision relative positioning, with numerous applications for both surveyors and geodesists. The high resolution of the satellite's carrier phase has enabled relative positioning accuracies of the order of one part per million to be routinely obtained, from only one or two hours of data. These accuracies are obtained using the broadcast ephemeris, which is the orbit data that is broadcast in the satellite's radio transmission. However, the broadcast ephemeris is estimated to be in error by up to twenty five metres and this error is one of the principal limitations for precise relative positioning with GPS. An alternative to the broadcast ephemeris is to determine the satellite orbits using the carrier phase measurements obtained from a network of GPS tracking stations. This thesis describes the algorithms and processing techniques used for the determination of GPS satellite orbits using double differenced carrier phase measurements. The data from three different GPS campaigns have been analysed, which demonstrate a GPS orbital accuracy of between two and four metres, giving baseline accuracies of the order of one or two parts in ten million.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Spiegel, Philip [Verfasser]. « Precise Surface Aided Navigation / Philip Spiegel ». München : Verlag Dr. Hut, 2018. http://d-nb.info/1161250719/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
28

Shardlow, Peter John. « Propagation effects on precise GPS heighting ». Thesis, University of Nottingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239483.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
29

Davison, M. « Refraction effects in precise surveying measurements ». Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378767.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

Pass, Rafael (Rafael Nat Josef). « A precise computational approach to knowledge ». Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38303.

Texte intégral
Résumé :
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 100-103).
The seminal work of Goldwasser, Micali and Rackoff put forward a computational approach to knowledge in interactive systems, providing the foundation of modern Cryptography. Their notion bounds the knowledge of a player in terms of his potential computational power (technically defined as polynomial-time computation). In this thesis, we put forward a stronger notion that precisely bounds the knowledge gained by a player in an interaction in terms of the actual computation he has performed (which can be considerably less than any arbitrary polynomial-time computation). Our approach not only remains valid even if P = NP, but is most meaningful when modeling knowledge of computationally easy properties. As such, it broadens the applicability of Cryptography and weakens the complexity theoretic assumptions on which Cryptography can be based.
by Rafael Pass.
Ph.D.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Ortiz, Mauricio, Sabine Reffert, Trifon Trifonov, Andreas Quirrenbach, David S. Mitchell, Grzegorz Nowak, Esther Buenzli et al. « Precise radial velocities of giant stars ». EDP SCIENCES S A, 2016. http://hdl.handle.net/10150/622444.

Texte intégral
Résumé :
Context. For over 12 yr, we have carried out a precise radial velocity (RV) survey of a sample of 373 G- and K-giant stars using the Hamilton Echelle Spectrograph at the Lick Observatory. There are, among others, a number of multiple planetary systems in our sample as well as several planetary candidates in stellar binaries. Aims. We aim at detecting and characterizing substellar and stellar companions to the giant star HD 59686 A (HR 2877, HIP 36616). Methods. We obtained high-precision RV measurements of the star HD 59686 A. By fitting a Keplerian model to the periodic changes in the RVs, we can assess the nature of companions in the system. To distinguish between RV variations that are due to non-radial pulsation or stellar spots, we used infrared RVs taken with the CRIRES spectrograph at the Very Large Telescope. Additionally, to characterize the system in more detail, we obtained high-resolution images with LMIRCam at the Large Binocular Telescope. Results. We report the probable discovery of a giant planet with a mass of m_p sin i = 6.92 (-0.24/+0.18) M_Jup orbiting at a_p = 1.0860 (-0.0007/+0.0006) au from the giant star HD 59686 A. In addition to the planetary signal, we discovered an eccentric (e_B = 0.729 (-0.003/+0.004)) binary companion with a mass of m_B sin i = 0.5296 (-0.0008/+0.0011) M_Sun orbiting at a close separation from the giant primary with a semi-major axis of a_B = 13.56 (-0.14/+0.18) au. Conclusions. The existence of the planet HD 59686 Ab in a tight eccentric binary system severely challenges standard giant planet formation theories and requires substantial improvements to such theories in tight binaries. Otherwise, alternative planet formation scenarios such as second-generation planets or dynamical interactions in an early phase of the system's lifetime need to be seriously considered to better understand the origin of this enigmatic planet.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Pascalicchio, Francisco Vanin. « O acidentar-se no trabalho precoce ». [s.n.], 2002. http://repositorio.unicamp.br/jspui/handle/REPOSIP/313236.

Texte intégral
Résumé :
Advisor: Heleno Rodrigues Correa Filho
Master's dissertation, Universidade Estadual de Campinas, Faculdade de Ciências Médicas
Abstract: not provided (in either Portuguese or English)
Master's degree in Collective Health (Saúde Coletiva)
Styles APA, Harvard, Vancouver, ISO, etc.
33

Viana, Cândida Adelaide Fernandes. « Diagnóstico Precoce do Carcinoma da Próstata ». Bachelor's thesis, [s.n], 2010. http://hdl.handle.net/10284/1605.

Texte intégral
Résumé :
Monograph presented to Universidade Fernando Pessoa for the degree of Licentiate in Pharmaceutical Sciences.
The prostate is a gland of the male genital system. It is located below the bladder, in front of the rectum, surrounding the initial portion of the urethra. It has the size and appearance of a chestnut and weighs about 15-30 grams. This study was developed with the objective of characterizing a population with respect to levels of PSA (prostate-specific antigen), an indicator of prostate disease. It is an exploratory descriptive study with a quantitative approach, targeting a small male population of the parish of Fonte de Aldeia, in the municipality of Miranda do Douro, district of Bragança. A questionnaire administered to the participants was used to characterize the sample. From the initial population of 51 individuals, after screening and validation, a final sample of 36 participants remained, from whom a blood sample was taken for analysis of PSA levels. The blood sample was obtained by venipuncture. After serum separation, total PSA analyses were performed on VIDAS equipment using a sandwich enzyme immunoassay with final fluorescence detection (ELFA, enzyme-linked fluorescent assay). Normal values were taken as PSA below 3.5 ng/ml for ages 50 to 59, below 4.5 ng/ml for ages 60 to 69, and up to 6.5 ng/ml for ages over 70, following Madeira (2007). Analyzing the results by age group, among the 7 individuals aged 50 to 59 one borderline case was detected in a 52-year-old (PSA of 3.51 ng/ml against the established maximum of 3.50 ng/ml), corresponding to 14.28%.
In the 60-69 age group, two positive cases were detected in 68-year-old individuals (values of 5.99 ng/ml and 6.50 ng/ml against the acceptable maximum of 4.50 ng/ml), corresponding to 12.5% of the sample analyzed in this age group (16 individuals). Finally, in the over-70 group, six positive cases were detected in individuals aged 74, 77, 79, 80, 81 and 84, with PSA values of 6.55, 7.77, 10.70, 26.8 and 61.12 ng/ml respectively (the acceptable maximum for this age group being 6.50 ng/ml), corresponding to 46.15% of those surveyed in this group (13 individuals). The elevated values were subsequently confirmed at Laboratório de análises clínicas Vale Do Sousa, Lda, a certified laboratory in Penafiel, by a second technique using chemiluminescence. All values were confirmed, and the results were provided both to the patients and to their family doctors after contact. Clinically, any of these cases is compatible with BPH (benign prostatic hyperplasia) or even prostate cancer, depending on the free PSA/total PSA ratio, on clinical examination with prostate assessment by digital rectal examination and/or ultrasound, and finally on biopsy. That step falls outside the scope of this study but will be pursued by the respective clinicians. In summary, the results of this study show that although most participants (75%) present values considered normal, these values, even within the normal range, increase with age, indicating that PSA rises moderately as age increases. This corroborates what specialists in this field, such as McNeal and Stenman, have described.
Regarding the pathological values, we conclude that they occur more often and with greater severity in the older age groups, so this screening examination is essential and should become more frequent as a man's age advances.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Fonseca, Elvira Fernanda Rodrigues Pereira da. « Assimetria mandibular : diagnóstico precoce em ortodontia ». Master's thesis, [s.n.], 2015. http://hdl.handle.net/10284/5115.

Texte intégral
Résumé :
Postgraduate project/dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Dental Medicine.
Facial asymmetry, and mandibular asymmetry in particular, is related to several etiological factors, which may be congenital and developmental or acquired. This work reviews the possible etiologies, with the early diagnosis of the anomaly as its central theme. Mandibular asymmetry combined with progressive occlusal changes is not frequent but is clinically relevant, since it alters the posture of the mandible and may be associated with different pathological processes. In this context, diagnosis should be directed at the following possible causes: hyperplasia, hypoplasia, traumatic factors, pathological factors and functional factors.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Tamura, Yoshiaki. « Study on Precise Tidal Data Processing ». 京都大学 (Kyoto University), 2000. http://hdl.handle.net/2433/157208.

Texte intégral
Résumé :
The full-text data is a PDF conversion of image files created in the National Diet Library's FY2010 (Heisei 22) digitization of doctoral theses.
Kyoto University (京都大学)
0048
Thesis doctorate (new system)
Doctor of Science
乙第10362号
論理博第1378号
新制||理||1180(附属図書館)
UT51-2000-F428
(Chief examiner) Prof. 竹本 修三; Assoc. Prof. 福田 洋一; Prof. 古澤 保
Qualified under Article 4, Paragraph 2 of the Degree Regulations
Styles APA, Harvard, Vancouver, ISO, etc.
36

BLASZKA, THIERRY. « Approches par modeles en vision precoce ». Nice, 1997. http://www.theses.fr/1997NICE5071.

Texte intégral
Résumé :
This thesis examines feature extraction as presented in the literature and develops a solution to the limitations of classical approaches. It first considers edges, corners and triple junctions, image features that are very useful for processing images of our environment. A comparative study of classical corner detectors is presented to highlight their strengths and weaknesses. Building on this study, a model-based method is developed to precisely characterize these important features: edges, corners and triple junctions. The key idea of this approach is to define a model for each feature and then to characterize it robustly, directly from the images. The models include a large number of radiometric and geometric parameters, as well as an important parameter associated with the smoothing effect introduced by the acquisition system. The important problem of initializing the iterative characterization process is also considered, and an original and efficient solution is proposed. To test and compare the validity, robustness and efficiency of the various approaches presented, a large number of experiments were carried out on noisy synthetic images and real images. As the results were very satisfactory, the approach was extended to curved features based on ellipses and closed B-spline curves. Two applications illustrate the quality of the features extracted with the model-based approach: projective 3-D reconstruction and image mosaicking.
Styles APA, Harvard, Vancouver, ISO, etc.
37

Khare, Vinod. « Precise Image Registration and Occlusion Detection ». The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308246730.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

BOURRIERES, CATHERINE. « Histoire de la syphilis maligne precoce ». Aix-Marseille 2, 1994. http://www.theses.fr/1994AIX20090.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
39

Zhang, Xiangyu. « Fault Location via Precise Dynamic Slicing ». Diss., The University of Arizona, 2006. http://hdl.handle.net/10150/195292.

Texte intégral
Résumé :
Developing automated techniques for identifying a fault candidate set (i.e., the subset of executed statements that contains the faulty code responsible for the failure during a program run) can greatly reduce the effort of debugging. Over 15 years ago, precise dynamic slicing was proposed to identify a fault candidate set consisting of all executed statements that influence the computation of an incorrect value through a chain of data and/or control dependences. However, the challenge of making precise dynamic slicing practical has not been addressed. This dissertation addresses this challenge and makes precise dynamic slicing useful for debugging realistic applications. First, the cost of computing precise dynamic slices is greatly reduced. Second, innovative ways of using precise dynamic slicing are identified to produce small fault candidate sets. The key cause of the high space and time cost of precise dynamic slicing is the very large size of the dynamic dependence graphs that are constructed and traversed to compute dynamic slices. A novel series of optimizations greatly reduces the size of the dynamic dependence graph, leading to a compact representation that can be rapidly traversed. The average space needed is reduced from 2 Gigabytes to 94 Megabytes for dynamic dependence graphs corresponding to executions with average lengths of 130 million instructions. The precise dynamic slicing time is reduced from up to 20 minutes for a demand-driven algorithm to 16 seconds. A compression algorithm is developed to further reduce dependence graph sizes. The resulting representation is space-efficient enough that the dynamic execution history of a couple of billion executed instructions can be held in a Gigabyte of memory. To further scale precise dynamic slicing to longer program runs, a novel approach is proposed that uses checkpointing/logging to enable collection of the dynamic history of only the relevant window of execution.
Classical backward dynamic slicing can often produce fault candidate sets that contain thousands of statements making the task of identifying faulty code very time consuming for the programmer. Novel techniques are proposed to improve effectiveness of dynamic slicing for fault location. The merit of these techniques lies in identifying multiple forms of dynamic slices in a failed run and then intersecting them to produce smaller fault candidate sets. Using these techniques, the fault candidate set size corresponding to the backward dynamic slice is reduced by nearly a factor of 3. A fine-grained statistical pruning technique based on value profiles is also developed and this technique reduces the sizes of backward dynamic slices by a factor of 2.5. In conclusion, this dissertation greatly reduces the cost of precise dynamic slicing and presents techniques to improve its effectiveness for fault location.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Mukherjee, Rajdeep. « Precise abstract interpretation of hardware designs ». Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:680f0093-0405-4a0b-88dc-c4d7177d840f.

Texte intégral
Résumé :
This dissertation shows that the bounded property verification of hardware Register Transfer Level (RTL) designs can be efficiently performed by precise abstract interpretation of a software representation of the RTL. The first part of this dissertation presents a novel framework for RTL verification using native software analyzers. To this end, we first present a translation of the hardware circuit expressed in Verilog RTL into software in C, called the software netlist. We then present the application of native software analyzers based on SAT/SMT-based decision procedures, as well as abstraction-based techniques such as abstract interpretation, to the formal verification of the software netlist design generated from the hardware RTL. In particular, we show that path-based symbolic execution techniques, commonly used for automatic test-case generation in system software, are also effective for proving bounded safety as well as detecting bugs in software netlist designs. Furthermore, by means of experiments, we show that abstract interpretation techniques, commonly used for static program analysis, can also be used for bounded as well as unbounded safety property verification of software netlist designs. However, the analysis using abstract interpretation shows a high degree of imprecision on our benchmarks, which is handled by manually guiding the analysis with various trace partitioning directives. The second part of this dissertation presents a new theoretical framework and a practical instantiation for automatically refining the precision of abstract interpretation using Conflict Driven Clause Learning (CDCL)-style analysis. The theoretical contribution is an abstract interpretation framework that generalizes CDCL to precise safety verification with automatic transformer refinement, called Abstract Conflict Driven Learning for Safety (ACDLS).
The practical contribution instantiates ACDLS over a template polyhedra abstract domain for bounded safety verification of the software netlist designs. We experimentally show that ACDLS is more efficient than a SAT-based analysis as well as sufficiently more precise than a commercial abstract interpreter.
Styles APA, Harvard, Vancouver, ISO, etc.
41

ARENARE, FRANCESCA. « Precoce attivazione neuroadrenergica nell'insufficienza renale cronica ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/20413.

Texte intégral
Résumé :
Direct and indirect evidence has shown that chronic renal failure is characterized by severe adrenergic overactivation. It is not known, however, whether this phenomenon is peculiar to advanced renal failure or whether it is also present in the early stages of the disease. This study was conducted in 73 hypertensive patients, of whom 43 (age: 60.7±1.8 years, mean±SEM) had stable moderate chronic renal failure (mean estimated glomerular filtration rate: 40.7 ml/min/1.73 m2, MDRD formula), together with 31 age-matched control subjects with preserved renal function. Measurements included anthropometric variables, sphygmomanometric and beat-to-beat blood pressure, heart rate (ECG), plasma noradrenaline (high-performance liquid chromatography) and sympathetic nerve traffic (microneurography, peroneal nerve). Despite similar anthropometric and haemodynamic values, patients with renal failure showed markedly and significantly higher sympathetic nerve traffic than controls (60.0±2.1 vs 45.7±2.0 bursts/100 heartbeats; P<0.001). Sympathetic nerve traffic increased gradually and significantly from the first to the fourth quartile of estimated glomerular filtration rate (first: 41.0±2.7; second: 51.9±1.7; third: 59.8±3.0; fourth: 61.9±3.3 bursts per 100 heartbeats), with statistical significance (P<0.05) maintained after adjustment for potential confounders. In the population as a whole, sympathetic nerve traffic showed a significant inverse correlation with estimated glomerular filtration rate (r=-0.59; P<0.0001).
Sympathetic overactivation is therefore not confined to advanced renal failure but is already detectable in the early stages of the disease and proceeds in parallel with it, contributing, together with other factors, to its progression.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Revolledo Montalvo, Claudia Karina, et Quevedo Santacruz, Brenda Lizeth. « Determinación del impacto económico del ruido en el precio de las viviendas de la ciudad de Chiclayo : una aplicación de precios hedónicos ». Bachelor's thesis, Universidad Católica Santo Toribio de Mogrovejo, 2015. http://tesis.usat.edu.pe/jspui/handle/123456789/589.

Texte intégral
Résumé :
Professional sufficiency report (trabajo de suficiencia profesional)
This research analyzes the influence of noise pollution on housing prices in Chiclayo, produced mainly by productive and transport activities, especially the vehicle fleet. Excess traffic caused by the various transport units produces excessive horn use, and this has affected economic activity in Chiclayo in other ways as well, for example real estate companies. The aim is therefore to determine the importance of noise pollution in determining the market value of housing in the city of Chiclayo. This research used the hedonic pricing method to determine which environmental characteristics influence real estate prices. This method allows us to find the housing value associated with the presence of noise. To carry out this research, data were gathered from the various construction companies in the city of Chiclayo, from which the market prices of the dwellings were obtained. Noise levels were measured in various areas of the city using instruments and software. The result obtained was that in our city people do not yet value environmental quality; rather, they prefer to acquire housing with better structural and neighbourhood characteristics, and are not placing greater value on their physical and environmental well-being.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Quevedo Santacruz, Brenda Lizeth, et Revolledo Montalvo, Claudia Karina. « Determinación del impacto económico del ruido en el precio de las viviendas de la ciudad de Chiclayo : una aplicación de precios hedónicos ». Bachelor's thesis, Universidad Católica Santo Toribio de Mogrovejo, 2015. http://hdl.handle.net/20.500.12423/18.

Texte intégral
Résumé :
This research analyzes the influence of noise pollution on housing prices in Chiclayo, produced mainly by productive and transport activities, especially the vehicle fleet. Excess traffic caused by the various transport units produces excessive horn use, and this has affected economic activity in Chiclayo in other ways as well, for example real estate companies. The aim is therefore to determine the importance of noise pollution in determining the market value of housing in the city of Chiclayo. This research used the hedonic pricing method to determine which environmental characteristics influence real estate prices. This method allows us to find the housing value associated with the presence of noise. To carry out this research, data were gathered from the various construction companies in the city of Chiclayo, from which the market prices of the dwellings were obtained. Noise levels were measured in various areas of the city using instruments and software. The result obtained was that in our city people do not yet value environmental quality; rather, they prefer to acquire housing with better structural and neighbourhood characteristics, and are not placing greater value on their physical and environmental well-being.
Thesis
Styles APA, Harvard, Vancouver, ISO, etc.
44

DUBOIS, GUILBERT MARTINE. « Teratome malin retroperitoneal et pseudo-puberte precoce : a propos d'une observation et revue de la litterature ». Lille 2, 1991. http://www.theses.fr/1991LIL2M283.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

Cerezetti, Fernando Valvano. « Arbitragem nos mercados financeiros : uma proposta bayesiana de verificação ». Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-27042014-171844/.

Texte intégral
Résumé :
Precise hypotheses are natural characteristics of economic theories for determining the value or price of financial assets. Within these theories, precision is expressed in terms of equilibrium and non-arbitrage hypotheses. The latter concept plays an essential role in the theories of finance. Under certain conditions, the Fundamental Theorem of Asset Pricing establishes a coherent and unique asset pricing framework in non-arbitraged markets, grounded on martingale processes. Accordingly, the analysis of the statistical distributions of financial assets can assist in understanding how participants behave in the markets, and may or may not engender conditions to arbitrage. In this regard, the dissertation proposes that the study of the non-arbitrage hypothesis has a scientific counterpart, both theoretically and empirically. Using a Variance Gamma stochastic model for prices, the Bayesian test FBST is conducted to verify the presence of arbitrage potentially incorporated in the parameters of these densities. Specifically, the Bovespa Index distribution is investigated, with risk-neutral parameters estimated based on options traded in the Equities Segment and the Derivatives Segment of the BM&FBovespa Exchange. Results seem to indicate significant statistical differences at some periods of time. To what extent this evidence is actually the expression of a perennial arbitrage between these markets is still an open question.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Castro, Lavinia Barros de. « História precoce das idéias do Plano Real ». Universidade Federal do Rio de Janeiro, 1999. http://hdl.handle.net/11422/2759.

Texte intégral
Résumé :
CAPES
The Plano Real was conceived in three phases. The first was to promote a fiscal adjustment leading to "the establishment of balance in the Government's accounts, with the objective of eliminating the main cause of Brazilian inflation"; the second aimed at "the creation of a stable standard of value called the Unidade Real de Valor (URV)"; finally, the third granted legal-tender status to the unit of account and established "the rules for issuing and backing the new currency (REAL) so as to guarantee its stability". These stages were defined by the government itself in Exposição de Motivos (E.M.) no. 205 of 30 June 1994. This dissertation discusses the theoretical conception of each of these phases in turn.
Styles APA, Harvard, Vancouver, ISO, etc.
47

Contti, Mariana Moraes [UNESP]. « Ultrassom seriado no pós-transplante renal precoce ». Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/110541.

Texte intégral
Résumé :
Introduction: Ultrasound (US) is an important method for diagnosing the early causes of renal graft dysfunction. No reliable parameters have been defined for distinguishing between these causes of dysfunction: acute tubular necrosis (ATN) and rejection. Improvement of complementary diagnostic methods is therefore needed. Objectives: The primary objective was to define the ultrasound parameters of serial examination in the early post-renal-transplant period and to identify predictors of normal evolution, delayed graft function (ATN) and rejection. The secondary objectives were to evaluate the resistive index (RI), renal perfusion by power Doppler (PD), and the systolic index (SI). Materials and methods: Between June 2012 and August 2013, 79 patients who received a kidney transplant from a living or deceased donor underwent two US examinations: between the 1st and 3rd postoperative day (POD) and between the 7th and 10th POD. Both examinations assessed RI (in the three segmental arteries), SI, PD and RI+PD. Patients were divided into three groups: normal, ATN and rejection. Ultrasound findings were correlated with clinical outcome. Results: At the first US, RI in the upper and middle segmental arteries and PD were higher in the ATN group than in the normal group. At the second US, RI in all three segmental arteries and PD were higher in the ATN group than in the normal group. SI did not differ between groups in either examination. Repeating the US examination provided no information beyond the parameters already analyzed. Based on receiver operating characteristic (ROC) curve analysis for the likelihood of ATN, RI in the middle segmental artery was the best single index (cut-off 0.73) and RI+PD the best composite index (cut-off 0.84). Delayed graft function time and the number of dialysis sessions were higher in the group with elevated RI+PD ...
Styles APA, Harvard, Vancouver, ISO, etc.
48

Shirazian, Masoud. « Quality description in GPS precise point positioning ». Doctoral thesis, KTH, Geodesi och geoinformatik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-118349.

Texte intégral
Résumé :
GPS processing, like every processing method for geodetic applications, relies upon least-squares estimation. Quality measures must be defined to assure that the estimates are close to reality. These quality measures are reliable provided that, first, the covariance matrix of the observations (the stochastic model) is well defined and, second, the systematic effects are completely removed (i.e., the functional model is good). In GPS precise point positioning (PPP), the stochastic and functional models are not as complicated as in differential GPS processing. This thesis assesses the quality of GPS precise point positioning by attempting to define more realistic standard deviations for the station position estimates. To free the functional model from systematic errors, we have 1) used the phase observations to avoid introducing any hardware bias into the observation equations, 2) corrected the observations for all systematic effects with amplitudes of more than 1 cm, and 3) used undifferenced observations to avoid complications (e.g. linearly related parameters) in the system of observation equations. To obtain a realistic covariance matrix for the observations, we have incorporated the ephemeris uncertainties into the system of observation equations. Based on the above-mentioned issues, a PPP processing method is designed and numerically tested on real data from some of the International GNSS Service stations. The results confirm that undifferenced stochastic-related properties (e.g. degrees of freedom) can be a reliable means to recognize the parameterization problem in differenced observation equations. These results also imply that incorporating the satellite ephemeris uncertainties might improve the estimates of the station positions. The effect of the troposphere on GPS data is also examined in this thesis. Of particular importance is the parameterization problem of the wet troposphere in the observation equations.


Styles: APA, Harvard, Vancouver, ISO, etc.
49

Kawrykow, David. « Enabling precise interpretations of software change data ». Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106419.

Full text
Abstract:
Numerous techniques mine change data captured in software archives to assist software engineering efforts. These change-based approaches typically analyze change sets -- groups of co-committed changes -- under the assumption that the development work represented by a change set is both meaningful and related to a single change task. However, we have found that change sets often violate this assumption by containing changes that we consider non-essential, that is, less likely to represent the kind of meaningful software development effort that is most interesting to typical change-based approaches. Furthermore, we have found many change sets addressing multiple subtasks -- groups of isolated changes that are related to each other, but not to other changes within the change set. Information mined from such change sets can interfere with the analyses of various change-based approaches. We propose a catalog of non-essential changes and describe an automated technique for detecting such changes within version histories. We used our technique to conduct an empirical investigation of over 30,000 change sets capturing over 25 years of cumulative development activity in ten open-source Java systems. Our investigation found that between 3% and 26% of all modified code lines and between 2% and 16% of all method updates consisted entirely of non-essential modifications. We further found that eliminating such modifications reduces the number of false-positive recommendations that would be made by an existing association rule miner. These findings are supported by a manual evaluation of our detection technique, in which we found that it falsely identifies non-essential method updates in only 0.2% of all cases. These observations should be kept in mind when interpreting insights derived from version repositories. We also propose a formal definition of "subtasks" and present an automated technique for detecting subtasks within change sets.
We describe a new benchmark containing over 1,800 manually classified change sets drawn from seven open-source Java systems. We evaluated our technique on the benchmark and found that it classifies single- and multi-task change sets with a precision of 80% and a recall of 24%. In contrast, the current "default strategy" of assuming all change sets are single-task classifies them with a precision of 95% and a recall of 0%. We further characterized the performance of our technique by manually assessing its false classifications. We found that in most cases (78%), false classifications made by our technique can be further refined to produce useful recommendations for change-based approaches. Our observations should aid future change-based approaches seeking to derive more precise representations of the changes they analyze.
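The precision/recall comparison quoted above can be illustrated with a toy computation. The labels and predictions below are invented for illustration only -- they are not the thesis's benchmark data; "positive" means a change set is classified as multi-task.

```python
def precision_recall(predicted, actual):
    """Precision and recall for the positive (multi-task) class."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    precision = tp / (tp + fp) if (tp + fp) else 1.0  # convention: no predictions, no false alarms
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

actual   = [True, False, False, True, True, False]   # hand-labelled change sets
detector = [True, False, False, False, True, True]   # hypothetical detector output
baseline = [False] * 6                                # "all single-task" default

print(precision_recall(detector, actual))  # trades some precision for nonzero recall
print(precision_recall(baseline, actual))  # never flags multi-task: recall is 0.0
```

The baseline mirrors the "default strategy" in the abstract: by never flagging a multi-task change set it makes no false alarms, but its recall for that class is zero.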
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Good, P. « Inferring precise tracer transport from stratospheric measurements ». Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599493.

Full text
Abstract:
This thesis examines tracer transport at mid- and high northern latitudes in the lower stratosphere during early 1992, using lidar observations of aerosol from the eruption of Mount Pinatubo and coordinate fields based on meteorological analyses. The errors and resolution of the measurement, the spatial and temporal coverage of the observations, and the specific behaviour of the constituent being measured are all important. The lidar observations used are of relatively high resolution and offer good coverage of the mid- and high-latitude stratosphere during early 1992. The suitability of the aerosol lidar data for tracer transport studies is discussed, in terms of aerosol microphysics and the properties of the specific measurement. The random error in the measurement is assessed as part of the analysis. In all the analyses, care is taken to avoid spatial averaging of the data, since that can lead to an overestimate of the mixing in the real atmosphere. The northern midlatitude aerosol distribution is found to be relatively stable by early 1992. Transport to high latitude is investigated, with some variability demonstrated as a function of time and potential temperature, covering the region from 520 K down to 350 K. This is the first time that such a large number of reasonably high-resolution, mid- and high-latitude observations have been analysed without averaging, to demonstrate irreversible winter-time poleward transport of a tracer to northern high latitude. The vortex edge data near 500 K are examined further, using a novel statistical approach. The random error in the equivalent latitude coordinate is quantified for this region, and detailed aerosol tracer transport inferred.
Styles: APA, Harvard, Vancouver, ISO, etc.
