Theses on the topic "Propagation risk"


Consult the 37 best theses for your research on the topic "Propagation risk".


You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Garg, Tushar. "Estimating change propagation risk using TRLs and system architecture". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/110134.

Full text
Abstract
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 27-28).
Risk estimation is a key activity for product development and technology integration programs. There are a number of decision support tools that help project managers identify and mitigate risks in a project; however, few explicitly consider the effects of architecture on risk. We propose a novel risk estimation framework that includes considerations of the system architecture. Starting from traditional project management literature, we define risk as a combination of likelihood and impact. We use Technology Readiness Levels as our measure of likelihood, and, given that change propagates through interfaces, we use metrics that relate to connectivity to estimate impact. To analyze the connectivity, we model systems as networks of nodes and edges and calculate centrality metrics. This framework is applied to an industry example, and we visualize the data in different formats to aid in analysis. The insights gained from this analysis are discussed, and we conclude that the risk estimation framework provides estimates that are in line with the experience of engineers at the company.
by Tushar Garg.
S.M. in Engineering and Management
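As a rough illustration of the likelihood-times-impact idea described in this abstract, the sketch below scores change-propagation risk by mapping a TRL to a likelihood and using the betweenness centrality of each component in the architecture graph as an impact proxy. The component names, TRL values and interfaces are invented, and networkx is assumed to be available; this is not the thesis' actual tool.

```python
# Illustrative sketch (not the thesis implementation): change-propagation risk
# as likelihood (from TRL) x impact (network centrality of the component).
import networkx as nx

# Hypothetical system architecture: components connected by interfaces
G = nx.Graph()
G.add_edges_from([
    ("battery", "power_bus"), ("power_bus", "controller"),
    ("controller", "sensor"), ("controller", "actuator"),
])

trl = {"battery": 4, "power_bus": 7, "controller": 5, "sensor": 8, "actuator": 6}

def likelihood_from_trl(t, max_trl=9):
    """Lower technology readiness -> higher likelihood of change (0..1)."""
    return (max_trl - t) / (max_trl - 1)

centrality = nx.betweenness_centrality(G)  # proxy for propagation impact

risk = {c: likelihood_from_trl(trl[c]) * centrality[c] for c in G.nodes}
for comp, score in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{comp:12s} likelihood={likelihood_from_trl(trl[comp]):.2f} "
          f"impact={centrality[comp]:.2f} risk={score:.3f}")
```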
2

Konukcu, Selda. "Application Of Risk Management Process On Wave Propagation In Aerospace Medium". Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607606/index.pdf.

Full text
Abstract
In this thesis, risk management methods are investigated in order to integrate risk management practices into the Turkish aerospace industry. The research presents the sequence of risk management processes: risk identification, risk analysis, risk planning, etc. The risk analysis methods known as Risk Ranking and the Analytical Hierarchy Process (AHP) are investigated in order to improve the reliability and safety of systems and processes in the aerospace industry. The main aim of using risk ranking and AHP together is to translate the knowledge in the Turkish aviation industry into a tangible form with a quantitative approach and to prepare a basis for probabilistic risk analysis. The Instrument Landing System (ILS) is considered only in order to demonstrate how risk management can be done in this context. The study investigates, and seeks to create awareness of, risk management practices within the Turkish aviation industry.
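For readers unfamiliar with AHP, the short sketch below shows the standard principal-eigenvector weight derivation and consistency check on a made-up 3x3 pairwise comparison matrix; it illustrates generic AHP mechanics only and does not reproduce the thesis' data or implementation.

```python
# Minimal AHP sketch (illustrative): criterion weights from a pairwise
# comparison matrix via the principal eigenvector, plus a consistency check.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],   # made-up judgements: criterion 1 vs (1, 2, 3)
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalised priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
print("weights:", np.round(w, 3), "CR:", round(ci / ri, 3))
```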
3

Ghadge, Abhijeet. "A systems thinking approach for modelling supply chain risk propagation". Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/13561.

Full text
Abstract
Supply Chain Risk Management (SCRM) is rapidly becoming one of the most sought-after research areas due to the influence of recent supply chain disruptions on the global economy. The thesis begins with a systematic literature review of the developments within the broad domain of SCRM over the past decade. Thematic and descriptive analysis supported with modern knowledge management techniques brings forward seven distinctive research gaps for future research in SCRM. Overlapping research findings from an industry perspective, coupled with SCRM research gaps from the systematic literature review, helped to define the research problem for this study. The thesis focuses on a holistic and systematic approach to modelling risks within supply chain and logistics networks. The systems thinking approach followed conceptualises the phenomenon of risk propagation utilising several recent case studies, workshop findings and focus studies. Risk propagation is multidimensional and propagates beyond goods, finance and information resources; it cascades into technology, human resource and socio-ecological dimensions. Three risk propagation zones are identified that build the fundamentals for modelling risk behaviour in terms of cost and delay. The development of a structured framework for SCRM, a holistic supply chain risk model and a quantitative research design for risk assessment are the major contributions of this research. The developed risk assessment platform has the ability to capture the fracture points and cascading impact within a supply chain and logistics network. A reputed aerospace and defence organisation in the UK was used to test the experimental modelling set-up for its viability and for bridging the gap between theory and practice. The combined statistical and simulation modelling approach provides a new perspective on assessing the complex behavioural performance of risks during multiple interactions within the network.
4

Kumar, Vikas. "Soft computing approaches to uncertainty propagation in environmental risk management". Doctoral thesis, Universitat Rovira i Virgili, 2008. http://hdl.handle.net/10803/8558.

Full text
Abstract
Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components having non-linear coupling. It turns out that in dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches like probabilistic reasoning, fuzzy logic and artificial neural networks have characteristics of approximation and dispositionality. Although in hard computing imprecision and uncertainty are undesirable properties, in soft computing the tolerance for imprecision and uncertainty is exploited to achieve tractability, lower cost of computation, effective communication and a high Machine Intelligence Quotient (MIQ). This thesis explores the use of different soft computing approaches to handle uncertainty in environmental risk management. The work is divided into three parts comprising five papers.
In the first part of this thesis, different uncertainty propagation methods have been investigated. The first methodology is a generalized fuzzy α-cut based on the concept of the transformation method. A case study of uncertainty analysis of pollutant transport in the subsurface has been used to show the utility of this approach, which shows superiority over conventional methods of uncertainty modelling. A second method is proposed to manage uncertainty and variability together in risk models. The new hybrid approach combining probabilistic and fuzzy set theory is called Fuzzy Latin Hypercube Sampling (FLHS). An important property of this method is its ability to separate randomness and imprecision to increase the quality of information. A fuzzified statistical summary of the model results gives indices of sensitivity and uncertainty that relate the effects of variability and uncertainty of input variables to model predictions. The feasibility of the method is validated by analyzing the total variance in the calculation of incremental lifetime risks due to polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/F) for the residents living in the surroundings of a municipal solid waste incinerator (MSWI) in the Basque Country, Spain.
The second part of this thesis deals with the use of artificial intelligence techniques for generating environmental indices. The first paper focused on the development of a Hazard Index (HI) using the persistence, bioaccumulation and toxicity properties of a large number of organic and inorganic pollutants. For deriving this index, Self-Organizing Maps (SOM) were used, which provided a hazard ranking for each compound. Subsequently, an Integral Risk Index was developed taking into account the HI and the concentrations of all pollutants in soil samples collected in the target area. Finally, a risk map was elaborated by representing the spatial distribution of the Integral Risk Index with a Geographic Information System (GIS). The second paper is an improvement on the first work. A new approach called the Neuro-Probabilistic HI was developed by combining SOM and Monte Carlo analysis. It considers the uncertainty associated with contaminants' characteristic values. This new index seems to be an adequate tool to be taken into account in risk assessment processes. In both studies, the methods have been validated through their implementation in the industrial chemical/petrochemical area of Tarragona.
The third part of this thesis deals with a decision-making framework for environmental risk management. In this study, an integrated fuzzy relation analysis (IFRA) model is proposed for risk assessment involving multiple criteria. The fuzzy risk-analysis model is proposed to comprehensively evaluate all risks associated with contaminated systems resulting from more than one toxic chemical. The model is an integrated view on uncertainty techniques based on multi-valued mappings, fuzzy relations and the fuzzy analytical hierarchical process. Integration of system simulation and risk analysis using the fuzzy approach allowed system modelling uncertainty and subjective risk criteria to be incorporated. In this study, it has been shown that a broad integration of fuzzy system simulation and fuzzy risk analysis is possible.
In conclusion, this study has broadly demonstrated the usefulness of soft computing approaches in environmental risk analysis. The proposed methods could significantly advance the practice of risk analysis by effectively addressing critical issues of the uncertainty propagation problem.
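As a minimal illustration of α-cut based uncertainty propagation of the kind mentioned above, the sketch below pushes a triangular fuzzy parameter through a toy monotonic decay model by evaluating each α-level interval at its endpoints (vertex method). The model and numbers are hypothetical, not the generalized transformation-method implementation of the thesis.

```python
# Sketch of alpha-cut uncertainty propagation (illustrative only): a triangular
# fuzzy input is cut at several alpha levels, and each interval is pushed
# through a toy monotonic model by evaluating at its endpoints.
import numpy as np

def tri_alpha_cut(a, m, b, alpha):
    """Interval [lo, hi] of the triangular fuzzy number (a, m, b) at level alpha."""
    return a + alpha * (m - a), b - alpha * (b - m)

def model(k):
    """Toy transport model: concentration decays with hypothetical rate k."""
    return 100.0 * np.exp(-k * 2.0)

for alpha in np.linspace(0.0, 1.0, 5):
    k_lo, k_hi = tri_alpha_cut(0.1, 0.3, 0.6, alpha)      # fuzzy decay rate
    out = sorted([model(k_lo), model(k_hi)])               # model monotone in k
    print(f"alpha={alpha:.2f}  concentration in [{out[0]:6.2f}, {out[1]:6.2f}]")
```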
5

Fang, Chao. "Modeling and Analysing Propagation Behavior in Complex Risk Network : A Decision Support System for Project Risk Management". PhD thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-01018574.

Full text
Abstract
Project risk management is a crucial activity in project management. Nowadays, projects are facing a growing complexity and are thus exposed to numerous and interdependent risks. However, existing classical methods have limitations for modeling the real complexity of project risks. For example, some phenomena like chain reactions and loops are not properly taken into account. This Ph.D. thesis aims at analyzing propagation behavior in the project risk network through modelling risks and risk interactions. An integrated decision support system framework is presented with a series of proposed methods. The construction of the project risk network requires the involvement of the project manager and the team of experts using the Design Structure Matrix (DSM) method. Simulation techniques are used and several network theory-based methods are developed for analyzing and prioritizing project risks, with respect to their role and importance in the risk network in terms of various indicators. The proposed approach serves as a powerful complement to classical project risk analysis. These novel analyses provide project managers with improved insights on risks and risk interactions under complexity and help them to design more effective response actions. Considering resource constraints, a greedy algorithm and a genetic algorithm are developed to optimize the risk response plan and the allocation of budget reserves dedicated to risk management. Two examples of application, 1) to a real musical staging project in the entertainment industry and 2) to a real urban transportation system implementation project, are presented to illustrate the utility of the proposed decision support system.
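A minimal sketch of Monte Carlo propagation over a risk interaction network, in the spirit of the approach described above (not the thesis' actual algorithm): each risk has a spontaneous occurrence probability, a transition matrix encodes how an occurred risk can trigger others (including loops), and repeated simulation yields propagated occurrence probabilities. All values are hypothetical.

```python
# Toy Monte Carlo propagation in a risk interaction network (illustrative).
import numpy as np

rng = np.random.default_rng(0)
p_spont = np.array([0.30, 0.10, 0.05, 0.02])   # spontaneous occurrence probabilities
T = np.array([                                 # T[i, j]: P(risk i triggers risk j)
    [0.0, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.4, 0.2],
    [0.0, 0.3, 0.0, 0.6],                      # note the loop between risks 1 and 2
    [0.0, 0.0, 0.0, 0.0],
])

def one_run():
    occurred = rng.random(p_spont.size) < p_spont
    edge_active = rng.random(T.shape) < T      # each causal link sampled once per run
    changed = True
    while changed:                             # propagate chains/loops until stable
        new = occurred | (edge_active & occurred[:, None]).any(axis=0)
        changed = bool((new & ~occurred).any())
        occurred = new
    return occurred

runs = np.array([one_run() for _ in range(20_000)])
print("propagated occurrence probabilities:", runs.mean(axis=0).round(3))
```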
6

Esperon Miguez, Manuel. "Financial and risk assessment and selection of health monitoring system design options for legacy aircraft". Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/8062.

Full text
Abstract
Aircraft operators demand ever-increasing availability of their fleets with constant reduction of their operational costs. With the age of many fleets measured in decades, the options to face these challenges are limited. Integrated Vehicle Health Management (IVHM) uses data gathered through sensors in the aircraft to assess the condition of components, to detect and isolate faults, or even to estimate their Remaining Useful Life (RUL). This information can then be used to improve the planning of maintenance operations and even logistics and operational planning, resulting in shorter maintenance stops and lower cost. Retrofitting health monitoring technology onto legacy aircraft has the capability to deliver what operators and maintainers demand, but working on aging platforms presents numerous challenges. This thesis presents a novel methodology to select the combination of diagnostic and prognostic tools for legacy aircraft that best suits the stakeholders' needs based on economic return and financial risk. The methodology comprises different steps in which a series of quantitative analyses are carried out to reach an objective solution. Beginning with the identification of which components could bring a higher reduction of maintenance cost and time if monitored, the methodology also provides a method to define the requirements for diagnostic and prognostic tools capable of monitoring these components. It then continues to analyse how combining these tools affects the economic return and financial risk. Each possible combination is analysed to identify which of them should be retrofitted. Whilst computer models of maintenance operations can be used to analyse the effect of retrofitting IVHM technology on a legacy fleet, the number of possible combinations of diagnostic and prognostic tools is too big for this approach to be practicable. Nevertheless, computer models can go beyond the economic analysis performed thus far, and simulations are used as part of the methodology to gain insight into other effects of retrofitting the chosen toolset.
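To illustrate the kind of return-versus-risk screening of tool combinations described above, the sketch below enumerates combinations of notional health-monitoring retrofits and ranks them by expected net benefit and by a simple financial-risk proxy (probability of a net loss). Tool names, costs and savings are invented, and the model is far simpler than the thesis methodology.

```python
# Hedged sketch: screen combinations of health-monitoring retrofits by expected
# net benefit and probability of loss over a fixed horizon (all values notional).
import itertools
import numpy as np

rng = np.random.default_rng(1)
years = 10
tools = {                      # acquisition cost, (mean, std) of annual saving
    "hyd_pump_prognostic":  (120_000, (25_000, 10_000)),
    "gearbox_diagnostic":   (60_000,  (12_000, 6_000)),
    "avionics_bit_upgrade": (40_000,  (5_000, 4_000)),
}

for combo in itertools.chain.from_iterable(
        itertools.combinations(tools, r) for r in range(1, len(tools) + 1)):
    cost = sum(tools[t][0] for t in combo)
    savings = sum(rng.normal(*tools[t][1], size=(10_000, years)).sum(axis=1)
                  for t in combo)
    net = savings - cost
    print(f"{'+'.join(combo):60s} E[net]={net.mean():12,.0f} "
          f"P(loss)={np.mean(net < 0):.2f}")
```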
7

Xuan, Yunqing. "Uncertainty propagation in complex coupled flood risk models using numerical weather prediction and weather radars". Thesis, University of Bristol, 2007. http://hdl.handle.net/1983/c76c4eb0-9c9e-4ddc-866c-9bbdbfa4ec25.

Full text
Abstract
The role of flood forecasting is becoming increasingly important as the concept of a risk-based approach is accepted in flood risk management. The risk-based approach not only requires efficient and abundant information for decision making in a risk framework, but needs the uncertainty appropriately accounted for and expressed. The rapid development in numerical weather prediction and weather radar technology makes it feasible to provide precipitation predictions and observations for flood warning and forecasting that benefit from the extended lead-time. Although the uncertainty issues related to standalone models have been addressed, little attention has been focused on the complex behaviour of coupled modelling systems when the uncertainty-bearing information propagates through the model cascade. The work presented in this thesis focuses on the issue of uncertainty propagation in this complex coupled modelling environment. A prototype system that integrates high-resolution numerical weather prediction, weather radar, and distributed hydrological models was developed to facilitate the study. The uncertainty propagation and interactions were then analysed, covering the uncertainty associated with the data, model structures, chaotic dynamics and coupling processes. The ensemble method was concluded to be the choice for the coupled system to produce forecasts able to account for the uncertainty cascaded from the precipitation prediction to the hydrological and hydraulic models. Finally, recommendations are made in relation to the exploration of complex coupled systems for uncertainty propagation in flood risk management.
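A toy sketch of ensemble-based uncertainty cascading through a coupled rainfall-runoff chain, loosely in the spirit of the ensemble conclusion above (not the prototype system itself): perturbed precipitation members are each routed through a simple linear-reservoir model, and the spread of simulated peak flows expresses the propagated uncertainty. All parameters are invented.

```python
# Toy ensemble cascade: perturbed rainfall -> simple hydrological model -> peak flow spread.
import numpy as np

rng = np.random.default_rng(2)
hours = 48
base_rain = rng.gamma(0.6, 2.0, size=hours)            # control precipitation forecast

def linear_reservoir(rain, k=0.15, area=10.0):
    """Very simple rainfall-runoff model: storage gains area*rain, drains at rate k."""
    s, flow = 0.0, []
    for r in rain:
        s += area * r - k * s
        flow.append(k * s)
    return np.array(flow)

peaks = []
for _ in range(50):                                     # 50 ensemble members
    member = base_rain * rng.lognormal(0.0, 0.3, size=hours)   # perturbed input
    peaks.append(linear_reservoir(member).max())

peaks = np.array(peaks)
print(f"peak flow: median={np.median(peaks):.1f}, "
      f"90% interval=[{np.percentile(peaks, 5):.1f}, {np.percentile(peaks, 95):.1f}]")
```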
8

Lari, Serena. "Multi scale heuristic and quantitative multi-risk assessment in the Lombardy region, with uncertainty propagation". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2009. http://hdl.handle.net/10281/7550.

Full text
Abstract
In this thesis, methodologies for multi-risk assessment are presented that can be applied at the regional or local scale. At the local scale, the problem of uncertainty propagation in risk assessment is treated, testing different calculation methodologies. The work is organised in four parts: 1. Multi-risk analysis at the regional scale in Lombardy (PRIM project, 2007). The methodology integrates information with different degrees of accuracy into an indicator-based approach, in order to develop a regional-scale multi-risk assessment and to identify "hot spot" risk areas for more detailed analysis. Eventually, the sensitivity of the weights is investigated, as well as the effect on risk assessment of different individual attitudes and perceptions (i.e., expert, social, political, risk aversion). 2. Quantitative multi-risk assessment (QRA) at the local scale on the hot spots, for lower Valtellina and the area of Brescia and lower Val Trompia, Val Sabbia, and Valcamonica. The methodology is based on the use of historical data and modelling to assess, for each threat, the expected number of casualties and the expected economic damage. 3. Quantitative risk assessment (QRA) for floods, earthquakes and industrial accidents in the area of Brescia (420 km2), with uncertainty propagation analysis. Frequency-damage curves were calculated. Three methods were used and compared to calculate the uncertainty of the expected economic losses: Monte Carlo Simulation, the First Order Second Moment approach, and the Point Estimate method. 4. Realization of a tool based on a system of indicators aimed at assigning a priority for the realization of new mitigation works, at the evaluation of the efficacy of existing works, and at the comparison of different alternatives for the same risk scenario. Indicators refer to the risk scenario, to the most recent and most significant event that occurred in the analysed area, to the planning stage of the work, and to the technical characteristics of realization and maintenance of the work itself.
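The sketch below contrasts two of the uncertainty-propagation methods named in part 3, Monte Carlo simulation and the First Order Second Moment (FOSM) approximation, on a toy expected-annual-loss model; the distributions and values are illustrative only and are not taken from the thesis.

```python
# Toy comparison: Monte Carlo vs FOSM for the uncertainty of an economic loss.
import numpy as np

# Toy model: annual loss = event frequency * exposed value * damage ratio
mu = np.array([0.02, 5.0e7, 0.15])     # means of (frequency, value, damage ratio)
sd = np.array([0.005, 1.0e7, 0.05])    # standard deviations, assumed independent

def loss(x):
    return x[..., 0] * x[..., 1] * x[..., 2]

# Monte Carlo propagation
rng = np.random.default_rng(3)
samples = rng.normal(mu, sd, size=(200_000, 3))
mc = loss(samples)

# FOSM: first-order Taylor expansion of the loss around the mean point
grad = np.array([mu[1] * mu[2], mu[0] * mu[2], mu[0] * mu[1]])
fosm_mean = loss(mu)
fosm_std = np.sqrt(np.sum((grad * sd) ** 2))

print(f"Monte Carlo: mean={mc.mean():,.0f}  std={mc.std():,.0f}")
print(f"FOSM:        mean={fosm_mean:,.0f}  std={fosm_std:,.0f}")
```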
9

Akiode, Olukemi Adejoke. "Examination and management of human African Trypanosomiasis propagation using geospatial techniques". Thesis, Abertay University, 2014. https://rke.abertay.ac.uk/en/studentTheses/9419b401-6604-4530-9938-57ab03234e67.

Full text
Abstract
Human African Trypanosomiasis (HAT) is a vector-borne disease transmitted by the bite of the tsetse fly that results in high human morbidity and mortality. The propagation of the disease has been linked to environmental factors, and understanding the vector's habitat is vital to its control. There is no HAT vaccine, but biological control of the vector has been successful in reducing HAT incidence. However, in recent years the disease has re-emerged and spread. Due to insufficient knowledge of HAT endemic foci, the disease management remains challenging. To achieve effective deployment of control strategies, accurate knowledge of the spatial distribution of the HAT vector is vital. The current study is based in Nigeria, and looks at part of Delta State and part of Jigawa State, in which HAT has been identified. The work utilizes remote sensing satellite imaging and fuzzy logic to develop a HAT vector habitat classification scheme, to explore the dynamics of HAT propagation. The goal was to develop a surveillance methodology to identify factors that influence HAT epidemiology. Land cover and ancillary data were integrated to classify HAT vector habitat using geospatial-fuzzy multicriteria analysis. The work highlights the significance of geospatial techniques where epidemiological data are limited, for improving understanding of HAT. This study helped distinguish HAT vector habitat into different zones (breed, feed and rest), which allowed the direction and magnitude of HAT, and the factors influencing propagation, to be determined. This helped identify 'HAT priority intervention areas'. The study findings suggested that propagation of HAT resulted from the suitability of water bodies, shrub and less-dense forest for the HAT vector, and continued exposure of human populations to these land cover classes. Overlapping of HAT vector habitat zones within built-up areas was also a cause. The study also found that HAT propagation was multidirectional, and that this may have been influenced by landscape characteristics. This novel approach can also be used in other parts of Nigeria, as well as adapted to investigate other diseases. In conclusion, the HAT vector habitat classification scheme is a transparent tool for policy makers for identifying vulnerable and at-risk areas.
10

Aksen, Ernest, Jacek Cukrowski and Manfred M. Fischer. "Propagation of Crises Across Countries: Trade Roots of Contagion Effects". WU Vienna University of Economics and Business, 2001. http://epub.wu.ac.at/4235/1/WGI_DP_7801.pdf.

Full text
Abstract
The paper provides an explanation of the mechanisms underlying the trade roots of the contagion effects emanating from recent turmoils. It is argued that under demand uncertainty, the risk-averse behavior of firms provides a basis for international trade. The paper shows, by means of a simple two-country model, that risk-averse firms operating in perfectly competitive markets with demand uncertainty tend to diversify markets, which gives a basis for international trade in identical commodities even between identical countries. It is shown that such trade may be welfare improving despite efficiency losses due to cross-hauling and transportation costs. The analysis reveals that a change in expectations concerning market conditions caused by turmoil in the neighboring country (i.e., a shift in the perception of market conditions) may lead to macroeconomic destabilization (an increase in the price level and unemployment, worsening of the terms of trade, and deterioration of the trade balance).
Series: Discussion Papers of the Institute for Economic Geography and GIScience
11

Okhulkova, Tatiana. "Integration of uncertainty and definition of critical thresholds for CO2 storage risk assessment". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLC021/document.

Full text
Abstract
The main goal of the thesis is to define how the uncertainty can be accounted for in the process of risk assessment for CO2 storage and to quantify, by means of numerical models, the scenarios of leakage by lateral migration and through the caprock. The chosen scenarios are quantified using the system modeling approach, for which ad-hoc predictive numerical models are developed. A probabilistic parametric uncertainty propagation study using polynomial chaos expansion is performed. Matters of spatial variability are also discussed, and a comparison between homogeneous and heterogeneous representations of permeability is provided.
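As a minimal illustration of polynomial chaos based uncertainty propagation of the kind mentioned above, the sketch below fits a one-dimensional Hermite chaos expansion to a hypothetical leakage-rate model by least squares and reads the output mean and variance off the coefficients; it is a generic textbook construction, not the thesis' models.

```python
# Minimal 1-D polynomial chaos sketch (illustrative): fit Hermite chaos
# coefficients by least squares, then recover mean and variance analytically.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def model(xi):
    """Hypothetical leakage-rate model driven by a standard normal variable."""
    return np.exp(0.4 * xi) + 0.1 * xi**2

rng = np.random.default_rng(4)
xi = rng.standard_normal(500)                     # training samples
degree = 5
Psi = hermevander(xi, degree)                     # probabilists' Hermite basis He_k
coeffs, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# For He_k under a standard normal, E[He_k^2] = k!, so moments follow directly
factorials = np.array([math.factorial(k) for k in range(degree + 1)])
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 * factorials[1:])
print(f"PCE mean={mean:.3f}, PCE std={np.sqrt(var):.3f}")
```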
12

Zimmerman, Lora Leigh. "Propagation of Juvenile Freshwater Mussels (Bivalvia: Unionidae) and Assessment of Habitat Suitability for Restoration of Mussels in the Clinch River, Virginia". Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/10055.

Full text
Abstract
Freshwater mussel propagation techniques were tested at the Virginia Department of Game and Inland Fisheries Aquatic Wildlife Conservation Center through a series of three experiments. Experiment 1 tested the suitability of a pond and raceway for rearing juvenile oyster mussels (Epioblasma capsaeformis) and wavy-rayed lampmussels (Lampsilis fasciola). This experiment was prematurely terminated due to predation on mussels by fathead minnows (Pimephales promelas). Experiment 2 evaluated growth and survival of juvenile rainbow mussels in outdoor troughs and indoor aquaria. There was no significant difference in survival or growth between the two systems. Experiment 3 used troughs similar to those in Experiment 2 to rear E. capsaeformis and L. fasciola under two silt regimes. Survival in Experiment 3 was significantly greater for L. fasciola. The comparison between silt regimes indicated that individuals in the high-silt treatment had better survival than those in the low-silt treatment. The difference between these two treatments may be a reflection of increased escapement in the low-silt treatment, which may have resulted from more frequent disturbance during sampling. Growth of L. fasciola was significantly greater than that of E. capsaeformis, and was greater in the low-silt treatment. A habitat survey of the Clinch River, Virginia was conducted from Blackford, Clinch River Kilometer (CRK) 478, to the Tennessee border, CRK 325. Physical characteristics identified in the survey were combined with water quality and impact source data to develop a habitat suitability index for freshwater mussels within this study reach. Model parameters were indexed and weighted to give a final suitability ranking. Habitat units having the highest overall ranking included: Nash Ford (CRK 449), Artrip (CRK 442), several riffles and runs below Carterton (CRK 417), upstream of Mill Island (CRK 389.5), Pendleton Island (CRK 365), and Speers Ferry (CRK 333-325). Potential locations for habitat restoration projects and additional monitoring were also identified.
Master of Science
13

Perrin, Guillaume. "Random fields and associated statistical inverse problems for uncertainty quantification : application to railway track geometries for high-speed trains dynamical responses and risk assessment". PhD thesis, Université Paris-Est, 2013. http://pastel.archives-ouvertes.fr/pastel-01001045.

Full text
Abstract
Expectations for new high-speed trains are numerous: they should be faster, more comfortable and more stable, while consuming less energy, being less aggressive to the track, and quieter. In order to optimize the design of these future trains, it is necessary to rely on precise knowledge of the whole set of operating conditions they are likely to encounter during their life cycle. Simulation has a major role to play in meeting these challenges. For simulation to be used for design, certification and maintenance optimization, it must be fully representative of all the physical behaviours involved. The models of the train and of the wheel-rail contact must therefore be carefully validated, and the simulations must be run on sets of excitations that are realistic and representative of track geometry defects. Regarding the dynamics, the track geometry, and more particularly its defects, represents one of the main sources of excitation of the train, which is a strongly nonlinear mechanical system. Starting from measurements of the geometry of a railway network, a complete parametrization of the track geometry and of its variability therefore seems necessary in order to analyze the link between the dynamic response of the train and the physical and statistical properties of the track geometry. In this context, a relevant approach to modelling this track geometry is to consider it as a multivariate random field, whose properties are a priori unknown. Because of the specific interactions between the train and the track, this random field turns out to be neither Gaussian nor stationary. This thesis therefore focused on the development of numerical methods allowing the inverse identification, from experimental measurements, of non-Gaussian and non-stationary random fields. Since the behaviour of the train is highly nonlinear and very sensitive to the track geometry, the characterization of the random field corresponding to the geometry defects must be extremely fine, from both the frequency and the statistical points of view. The dimension of the statistical spaces considered is then very large. Particular attention has therefore been paid in this work to statistical reduction methods and to methods that can be generalized to very high dimension. Once the variability of the track geometry has been characterized from experimental data, it must then be propagated through the railway numerical model. To this end, the mechanical properties of a numerical model of a high-speed train were identified from experimental measurements. The stochastic dynamic response of this train, subjected to a very large number of realistic and representative operating conditions generated from the stochastic track model, was then evaluated. Finally, to illustrate the possibilities offered by such a coupling between the variability of the track geometry and the dynamic response of the train, this thesis addresses three applications.
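As an illustration of the statistical reduction step mentioned above, the sketch below compresses a set of synthetic track-irregularity records with a Karhunen-Loève / POD decomposition computed via the SVD; the data are random stand-ins for real measurements, and the decomposition shown is a generic one, not the non-Gaussian, non-stationary identification method developed in the thesis.

```python
# Generic Karhunen-Loeve / POD reduction of synthetic "track irregularity" records.
import numpy as np

rng = np.random.default_rng(5)
n_records, n_points = 200, 512                  # 200 measured sections, 512 abscissae
x = np.linspace(0.0, 1.0, n_points)

# Synthetic correlated records (placeholder for real geometry measurements)
records = sum(rng.standard_normal((n_records, 1)) / (k + 1) *
              np.sin((k + 1) * np.pi * x) for k in range(20))

mean = records.mean(axis=0)
U, s, Vt = np.linalg.svd(records - mean, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 0.99) + 1)
print(f"{n_modes} modes capture 99% of the variance")

# Reduced-order representation: project one record on the retained modes
coeffs = (records[0] - mean) @ Vt[:n_modes].T
reconstruction = mean + coeffs @ Vt[:n_modes]
print("max reconstruction error:", float(np.max(np.abs(reconstruction - records[0]))))
```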
14

Jaber, Hadi. "Modeling and analysis of propagation risks in complex projects : application to the development of new vehicles". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC022/document.

Full text
Abstract
The management of complex projects requires orchestrating the cooperation of hundreds of individuals from various companies, professions and backgrounds, working on thousands of activities, deliverables, and risks. As well, these numerous project elements are more and more interconnected, and no decision or action is independent. This growing complexity is one of the greatest challenges of project management and one of the causes of project failure in terms of cost overruns and time delays. For instance, in the automotive industry, increasing market orientation and the growing complexity of the automotive product have changed the management structure of vehicle development projects from a hierarchical to a networked structure, including the manufacturer but also numerous suppliers. Dependencies between project elements increase risks, since problems in one element may propagate to other directly or indirectly dependent elements. Complexity generates a number of phenomena, positive or negative, isolated or in chains, local or global, that will more or less interfere with the convergence of the project towards its goals. The aim of the thesis is thus to reduce the risks associated with the complexity of vehicle development projects by increasing the understanding of this complexity and the coordination of project actors. To do so, a first research question is how to prioritize actions to mitigate complexity-related risks. A second research question is how to organize and coordinate actors in order to cope efficiently with the previously identified complexity-related phenomena. The first question is addressed by modeling project complexity and by analyzing complexity-related phenomena within the project, at two levels. First, a high-level factor-based descriptive modeling is proposed. It permits measuring and prioritizing project areas where complexity may have the most impact. Second, a low-level graph-based modeling is proposed, based on the finer modeling of project elements and interdependencies. Contributions have been made on the complete modeling process, including the automation of some data-gathering steps, in order to increase performance and decrease effort and error risk. These two models can be used consequently; a first high-level measure can permit focusing on some areas of the project, where the low-level modeling will be applied, with a gain of global efficiency and impact. Based on these models, some contributions are made to anticipate the potential behavior of the project. Topological and propagation analyses are proposed to detect and prioritize critical elements and critical interdependencies, while enlarging the sense of the polysemous word "critical". The second research question is addressed by introducing a clustering methodology to propose groups of actors in new product development projects, especially for the actors involved in many deliverable-related interdependencies in different phases of the project life cycle. This permits increasing coordination between interdependent actors who are not always formally connected via the hierarchical structure of the project organization. This allows the project organization to be actually closer to what a networked structure should be. The automotive-based industrial application has shown promising results for the contributions to both research questions. Finally, the proposed methodology is discussed in terms of genericity and seems to be applicable to a wide set of complex projects for decision support.
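A small sketch of the kind of actor clustering described above (illustrative only, with invented actors and interdependency counts): actors are grouped by hierarchical clustering on a distance derived from how many deliverable-related interdependencies connect them, so strongly coupled actors land in the same coordination group. SciPy is assumed to be available.

```python
# Illustrative actor clustering from deliverable-related interdependency counts.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

actors = ["body_eng", "powertrain", "supplier_A", "supplier_B", "electronics", "purchasing"]
# Symmetric count of shared deliverable interdependencies between actors (made up)
W = np.array([
    [0, 5, 4, 0, 1, 0],
    [5, 0, 1, 3, 2, 0],
    [4, 1, 0, 0, 0, 2],
    [0, 3, 0, 0, 4, 1],
    [1, 2, 0, 4, 0, 0],
    [0, 0, 2, 1, 0, 0],
])

dist = 1.0 / (1.0 + W)                # stronger interdependence -> smaller distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")

for cluster in sorted(set(labels)):
    members = [a for a, c in zip(actors, labels) if c == cluster]
    print(f"group {cluster}: {', '.join(members)}")
```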
15

Favier, Philomène. "Une approche intégrée du risque avalanche : quantification de la vulnérabilité physique et humaine et optimisation des structures de protection". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENU051/document.

Full text
Abstract
Long term avalanche risk quantification for mapping and the design of defense structures is done in most countries on the basis of high magnitude events. Such return period/level approaches, purely hazard-oriented, do not consider elements at risk (buildings, people inside, etc.) explicitly, and neglect possible budgetary constraints. To overcome these limitations, risk based zoning methods and cost-benefit analyses have emerged recently. They combine the hazard distribution and vulnerability relations for the elements at risk. Hence, the systematic vulnerability assessment of buildings can lead to better quantify the risk in avalanche paths. However, in practice, available vulnerability relations remain mostly limited to scarce empirical estimates derived from the analysis of a few catastrophic events. Besides, existing risk-based methods remain computationally intensive, and based on discussable assumptions regarding hazard modelling (choice of few scenarios, little consideration of extreme values, etc.). In this thesis, we tackle these problems by building reliability-based fragility relations to snow avalanches for several building types and people inside them, and incorporating these relations in a risk quantification and defense structure optimal design framework. So, we enrich the avalanche vulnerability and risk toolboxes with approaches of various complexity, usable in practice in different conditions, depending on the case study and on the time available to conduct the study. The developments made are detailed in four papers/chapters.
In paper one, we derive fragility curves associated to different limit states for various reinforced concrete (RC) buildings loaded by an avalanche-like uniform pressure. Numerical methods to describe the RC behaviour consist in civil engineering abacus and a yield line theory model, to make the computations as fast as possible. Different uncertainty propagation techniques enable to quantify fragility relations linking pressure to failure probabilities, study the weight of the different parameters and the different assumptions regarding the probabilistic modelling of the joint input distribution. In paper two, the approach is extended to more complex numerical building models, namely a mass-spring and a finite elements one. Hence, much more realistic descriptions of RC walls are obtained, which are useful for complex case studies for which detailed investigations are required. However, the idea is still to derive fragility curves with the simpler, faster to run, but well validated mass-spring model, in a "physically-based meta-modelling" spirit. In paper three, we have various fragility relations for RC buildings at hand, thus we propose new relations relating the death probability of people inside them to avalanche load. Second, these two sets of fragility curves for buildings and humans are exploited in a comprehensive risk sensitivity analysis. By this way, we highlight the gap that can exist between return period based zoning methods and acceptable risk thresholds. We also show the higher robustness to vulnerability relations of optimal design approaches on a typical dam design case. In paper four, we propose simplified analytical risk formulas based on extreme value statistics to quantify risk and perform the optimal design of an avalanche dam in an efficient way. A sensitivity study is conducted to assess the influence of the chosen statistical distributions and flow-obstacle interaction law, highlighting the need for precise risk evaluations to well characterise the tail behaviour of extreme runouts and the predominant patterns in avalanche-structure interactions.
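As a minimal illustration of a reliability-based fragility relation of the kind derived in the thesis, the sketch below estimates P(failure | avalanche pressure) by Monte Carlo for a single limit state with a hypothetical lognormal wall resistance; the numbers are invented and the structural models are far simpler than those used in the papers.

```python
# Toy Monte Carlo fragility curve: probability of failure vs avalanche pressure.
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical ultimate resistance of an RC wall, in kPa (lognormal scatter)
resistance = rng.lognormal(mean=np.log(120.0), sigma=0.25, size=100_000)

pressures = np.arange(20, 301, 20)                       # avalanche pressures, kPa
fragility = [(resistance < p).mean() for p in pressures]

for p, pf in zip(pressures, fragility):
    print(f"p = {p:3d} kPa   P(failure) = {pf:.3f}")
```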
16

Yang, Guanglin. "Geothermal Development Opportunity and Risk of Using Abandoned Oil-Gas Wells and Mines with MRI tests". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25836/.

Full text
Abstract
Much work in geothermal energy is focused on technologies that make deeper drilling more economical, regardless of the type of geothermal power plant. The most important of these problems are the well drilling investment cost, the energy conversion efficiency, and the earthquakes that can be induced by using enhanced geothermal systems (EGS). Drilling cost and energy efficiency are driven by drilling depth, so finding solutions to reduce drilling depth and risk is essential for geothermal development. In this study, we analyze one possible geothermal power plant development method that recycles abandoned coal mines, which could save well drilling cost and reduce geological surveying risk; the geothermal wells would be drilled at the bottom of the abandoned coal mines. We then summarize current projects recycling abandoned oil and gas wells for geothermal power plant development. Based on the Paris Agreement and the IPCC special report on 1.5°C, we analyze the situation of renewable and conventional energy to explain why many abandoned coal mines and abandoned oil and gas wells are available for recycling, and we then calculate the geothermal power plant capacity obtainable from abandoned coal mines and from abandoned oil and gas wells. In the second part of this study, we analyze the geothermal risk of using EGS: as long as the reservoir rock is stimulated by hydraulic fracturing, the main risk is induced earthquakes caused by fracture propagation in the reservoir rock mass. MRI (magnetic resonance imaging) experiments were conducted to obtain the morphology of the fracture propagation process in rock samples under uniaxial load stress, in order to simulate the reservoir rock failure process and measure the fracture propagation velocity. The results show that MRI can provide high-quality images of the rock fracture propagation process, and that the fracture propagation velocity increases from low to high speed when approaching the maximum stress of the rock.
17

Anilkumar, A. K. "New Perspectives for Analyzing the Breakup, Environment, Evolution, Collision Risk and Reentry of Space Debris Objects". Thesis, Indian Institute of Science, 2004. https://etd.iisc.ac.in/handle/2005/80.

Full text
Abstract
In the space surrounding the earth there are two major regions where orbital debris causes concern. They are the Low Earth Orbits (LEO) up to about 2000 km, and Geosynchronous Orbits (GEO) at an altitude of around 36000 km. The impact of the debris accumulations is in principle the same in the two regions; nevertheless they require different approaches and solutions, because the dominant perturbations differ: atmospheric drag governs orbital decay in LEO, whereas gravitational forces, including the earth's oblateness and luni-solar effects, dominate in GEO. In LEO it is generally known that the debris population dominates even the natural meteoroid population for object sizes 1 mm and larger. This thesis focuses the study mainly on the LEO region. Between the first satellite breakup in 1961 and 01 January 2003, more than 180 spacecraft and rocket bodies are known to have fragmented in orbit. The resulting debris fragments constitute nearly 40% of the 9000 or more objects presently tracked and catalogued by USSPACECOM. The catalogued fragment count does not include the much more numerous fragments, which are too small to be detected from the ground. Hence, in order to describe the trackable orbital debris environment, it is important to develop mathematical models to simulate the trackable fragments and later expand them to untrackable objects. Apart from the need to better characterize the orbital debris environment down to sub-millimeter particles, there is also a pressing need for simulation tools able to model in a realistic way the long-term evolution of space debris, to highlight areas which require further investigation, and to study the actual mitigation effects of space policy measures. The present thesis provides new perspectives on five major issues in space debris modeling studies. The issues are (i) breakup modeling, (ii) environment modeling, (iii) evolution of the debris environment, (iv) collision probability analysis and (v) reentry prediction. Chapter 1 gives a brief overview of the space debris environment and the issues associated with the growing space debris population. A literature survey of important earlier work regarding the above mentioned five issues is provided in Chapter 2. The new contributions of the thesis commence from Chapter 3. Chapter 3 proposes a new breakup model to simulate the creation of debris objects by explosion in LEO, named "A Semi Stochastic Environment Modeling for Breakup in LEO" (ASSEMBLE). This model is based on a study of the characteristics of the fragments from on-orbit breakups as provided in the TLE sets for the Indian PSLV-TES mission spent upper stage breakup. It turned out that, based on the physical mechanisms in the breakup process, the apogee and perigee heights (limited by the breakup altitude) closely fit suitable Laplace distributions and the eccentricity follows a lognormal distribution. The location parameters of these depend on the orbit of the parent body at the time of breakup and their scale parameters on the intensity of the explosion. The distribution of the ballistic coefficient in the catalogue was also found to follow a lognormal distribution. These observations were used to arrive at the proper physical, aerodynamic, and orbital characteristics of the fragments. Subsequently it was applied as an inverse problem to simulate, and further validate the model against, some typical, well-known historical on-orbit fragmentation events.
All the simulated results compare quite well with the observations both at the time of breakup and at a later epoch. This model is called semi stochastic in nature since the size and mass characteristics have to be obtained from empirical relations and is capable of simulating the complete scenario of the breakup. A new stochastic environment model of the debris scenario in LEO that is simple and impressionistic in nature named SIMPLE is proposed in Chapter 4. Firstly among the orbital debris, the distribution of the orbital elements namely altitude, perigee height, eccentricity and the ballistic coefficient values for TLE sets of data in each of the years were analyzed to arrive at their characteristic probability distributions. It is observed that the altitude distribution for the number of fragments exhibits peaks and it turned out that such a feature can be best modeled with a tertiary mixture of Laplace distributions with eight parameters. It was noticed that no statistically significant variations could be observed for the parameters across the years. Hence it is concluded that the probability density function of the altitude distribution of the debris objects has some kind of equilibrium and it follows a three component mixture of Laplace distributions. For the eccentricity ‘e’ and the ballistic parameter ‘B’ values the present analysis showed that they could be acceptably quite well fitted by Lognormal distributions with two parameters. In the case of eccentricity also the describing parameter values do not vary much across the years. But for the parameters of the B distribution there is some trend across the years which perhaps may be attributed to causes such as decay effect, miniaturization of space systems and even the uncertainty in the measurement data of B. However in the absence of definitive cause that can be attributed for the variation across the years, it turns out to be best to have the most recent value as the model value. Lastly the same kind of analysis has also been carried out with respect to the various inclination bands. Here the orbital parameters are analyzed with respect to the inclination bands as is done in ORDEM (Kessler et al 1997, Liou et al 2001) for near circular orbits in LEO. The five inclination bands considered here are 0-36 deg (in ORDEM they consider 19-36 deg, and did not consider 0-19 deg), 36-61 deg, 61-73 deg, 73-91 deg and 91- 180 deg, and corresponding to each band, the altitude, eccentricity and B values were modeled. It is found that the third band shows the models with single Laplace distribution for altitude and Lognormal for eccentricity and B fit quite well. The altitude of other bands is modeled using tertiary mixture of Laplace distributions, with the ‘e’ and ‘B’ following once again a Lognormal distribution. The number of parameter values in SIMPLE is, in general, just 8 for each description of altitude or perigee distributions whereas in ORDEM96 it is more. The present SIMPLE model captures closely all the peak densities without losing the accuracy at other altitudes. The Chapter 5 treats the evolution of the debris objects generated by on orbit breakup. A novel innovative approach based on the propagation of an equivalent fragment in a three dimensional bin of semi major axis, eccentricity, and the ballistic coefficient (a, e, B) together with a constant gain Kalman filter technique is described in this chapter. 
This new approach propagates the number density in a bin of ‘a’ and ‘e’ rapidly and accurately without propagating each and every of the space debris objects in the above bin. It is able to assimilate the information from other breakups as well with the passage of time. Further this approach expands the scenario to provide suitable equivalent ballistic coefficient values for the conglomeration of the fragments in the various bins. The heart of the technique is to use a constant Kalman gain filter, which is optimal to track the dynamically evolving fragment scenario and further expand the scenario to provide time varying equivalent ballistic coefficients for the various bins. In the next chapter 6 a new approach for the collision probability assessment utilizing the closed form solution of Wiesel (1989) by the way of a three dimensional look up table, which takes only air drag effect and an exponential model of the atmosphere, is presented. This approach can serve as a reference collision probability assessment tool for LEO debris cloud environment. This approach takes into account the dynamical behavior of the debris objects propagation and the model utilizes a simple propagation for quick assessment of collision probability. This chapter also brings out a comparison of presently available collision probability assessment algorithms based on their complexities, application areas and sample space on which they operate. Further the quantitative assessment of the collision probability estimates between different presently available methods is carried out and the obtained collision probabilities are match qualitatively. The Chapter 7 utilizes once again the efficient and robust constant Kalman gain filter approach that is able to handle the many uncertain, variable, and complex features existing in the scenario to predict the reentry time of the risk objects. The constant gain obtained by using only a simple orbit propagator by considering drag alone is capable of handling the other modeling errors in a real life situation. A detailed validation of the approach was carried out based on a few recently reentered objects and comparison of the results with the predictions of other agencies during IADC reentry campaigns are also presented. The final Chapter 8 provides the conclusions based on the present work carried together with suggestions for future efforts needed in the study of space debris. Also the application of the techniques evolved in the present work to other areas such as atmospheric data assimilation and forecasting have also been suggested.
Vikram Sarabhai Space Centre,Trivandrum
Los estilos APA, Harvard, Vancouver, ISO, etc.
18

Anilkumar, A. K. "NEW PERSPECTIVES FOR ANALYZING THE BREAKUP, ENVIRONMENT, EVOLUTION, COLLISION RISK AND REENTRY OF SPACE DEBRIS OBJECTS". Thesis, Indian Institute of Science, 2004. http://hdl.handle.net/2005/80.

Texto completo
Resumen
Vikram Sarabhai Space Centre, Trivandrum
Two regions of the space surrounding the Earth raise particular concern about orbital debris: Low Earth Orbit (LEO), up to about 2000 km, and Geosynchronous Orbit (GEO), at an altitude of around 36000 km. The consequences of debris accumulation are in principle the same in both regions, but they call for different approaches and solutions because the dominant perturbations differ: orbital decay driven by atmospheric drag predominates in LEO, whereas gravitational effects, including the Earth's oblateness and luni-solar perturbations, dominate in GEO. In LEO the debris population is known to exceed even the natural meteoroid population for object sizes of 1 mm and larger. This thesis focuses mainly on the LEO region. From the first satellite breakup in 1961 up to 1 January 2003, more than 180 spacecraft and rocket bodies are known to have fragmented in orbit. The resulting fragments constitute nearly 40% of the 9000 or more objects currently tracked and catalogued by USSPACECOM. The catalogued fragment count does not include the far more numerous fragments that are too small to be detected from the ground. To describe the trackable orbital debris environment it is therefore important to develop mathematical models that simulate the trackable fragments and can later be extended to untrackable objects. Beyond the need to characterize the debris environment down to sub-millimetre particles, there is also a pressing need for simulation tools that model the long-term evolution of space debris realistically, highlight areas requiring further investigation, and assess the actual mitigation effects of space policy measures. The thesis provides new perspectives on five major issues in space debris modelling: (i) breakup modelling, (ii) environment modelling, (iii) evolution of the debris environment, (iv) collision probability analysis and (v) reentry prediction. Chapter 1 gives an overview of the space debris environment and the issues associated with its growth. Chapter 2 surveys the important earlier work on the five issues listed above. The new contributions of the thesis begin in Chapter 3, which proposes a new breakup model, "A Semi Stochastic Environment Modeling for Breakup in LEO" (ASSEMBLE), to simulate the creation of debris objects by explosions in LEO. The model is based on a study of the fragment characteristics provided in the TLE sets for the Indian PSLV-TES spent upper stage breakup. Based on the physical mechanisms of the breakup process, the apogee and perigee heights (limited by the breakup altitude) closely fit Laplace distributions and the eccentricity follows a lognormal distribution; the location parameters of these distributions depend on the orbit of the parent body at the time of breakup and their scale parameters on the intensity of the explosion. The ballistic coefficients in the catalogue were also found to follow a lognormal distribution. These observations were used to assign appropriate physical, aerodynamic and orbital characteristics to the fragments. The model was subsequently applied as an inverse problem and further validated against several well-known historical on-orbit fragmentation events.
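As an illustration of the distributional assumptions just described (a minimal sketch with placeholder parameters, not the ASSEMBLE code), fragment apogee and perigee heights can be drawn from Laplace distributions and eccentricities and ballistic coefficients from lognormal distributions:
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_fragments(n, h_breakup_km=550.0, spread_km=40.0,
                         e_mu=-3.5, e_sigma=0.6, b_mu=-3.0, b_sigma=0.8):
        # Apogee/perigee heights ~ Laplace (location tied to the parent orbit,
        # scale to the explosion intensity); eccentricity and ballistic
        # coefficient ~ lognormal. All numbers here are illustrative only.
        apogee = rng.laplace(loc=h_breakup_km + 30.0, scale=spread_km, size=n)
        perigee = rng.laplace(loc=h_breakup_km, scale=spread_km, size=n)
        perigee = np.minimum(perigee, apogee)   # keep perigee below apogee
        perigee = np.maximum(perigee, 120.0)    # crude lower bound in km
        ecc = rng.lognormal(mean=e_mu, sigma=e_sigma, size=n)
        bstar = rng.lognormal(mean=b_mu, sigma=b_sigma, size=n)
        return apogee, perigee, ecc, bstar

    apo, per, ecc, b = sample_fragments(1000)
    print(round(apo.mean(), 1), round(per.mean(), 1), round(float(np.median(ecc)), 4))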
All the simulated results compare well with the observations, both at the time of breakup and at a later epoch. The model is termed semi-stochastic because the size and mass characteristics have to be obtained from empirical relations; it is capable of simulating the complete breakup scenario. Chapter 4 proposes SIMPLE, a new stochastic environment model of the LEO debris scenario that is deliberately simple and impressionistic. The distributions of the orbital elements of catalogued debris, namely altitude, perigee height, eccentricity and ballistic coefficient, were analysed from yearly TLE data sets to arrive at characteristic probability distributions. The altitude distribution of the number of fragments exhibits peaks, and this feature is best modelled by a three-component (tertiary) mixture of Laplace distributions with eight parameters. No statistically significant variation of these parameters was observed across the years, so it is concluded that the probability density function of the debris altitude distribution is in a kind of equilibrium and follows a three-component Laplace mixture. The eccentricity 'e' and ballistic parameter 'B' are acceptably well fitted by two-parameter lognormal distributions. For eccentricity the parameters likewise vary little across the years; the parameters of the B distribution show some trend over time, which may be attributed to causes such as decay effects, miniaturization of space systems and uncertainty in the B measurements. In the absence of a definitive cause for this variation, the most recent value is taken as the model value. The same analysis was also carried out for different inclination bands, following the approach of ORDEM (Kessler et al. 1997, Liou et al. 2001) for near-circular orbits in LEO. The five bands considered are 0-36 deg (ORDEM considers 19-36 deg and omits 0-19 deg), 36-61 deg, 61-73 deg, 73-91 deg and 91-180 deg, and for each band the altitude, eccentricity and B values were modelled. For the third band a single Laplace distribution for altitude and lognormal distributions for eccentricity and B fit well; the altitude in the other bands is modelled with a tertiary mixture of Laplace distributions, with 'e' and 'B' again lognormal. The number of parameters in SIMPLE is, in general, just eight for each altitude or perigee distribution, whereas ORDEM96 requires more. SIMPLE captures all the peak densities closely without losing accuracy at other altitudes. Chapter 5 treats the evolution of the debris objects generated by on-orbit breakups. It describes a novel approach based on propagating an equivalent fragment in a three-dimensional bin of semi-major axis, eccentricity and ballistic coefficient (a, e, B), combined with a constant-gain Kalman filter.
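A sketch of the kind of density SIMPLE assigns to the altitude distribution, a three-component Laplace mixture with eight free parameters (three locations, three scales, two independent weights); the numbers below are illustrative placeholders rather than the fitted model:
    import numpy as np

    def laplace_pdf(x, loc, scale):
        return np.exp(-np.abs(x - loc) / scale) / (2.0 * scale)

    def altitude_mixture_pdf(h_km, locs, scales, weights):
        # Three-component Laplace mixture; the weights must sum to one.
        return sum(w * laplace_pdf(h_km, m, s)
                   for m, s, w in zip(locs, scales, weights))

    h = np.linspace(200.0, 2000.0, 1801)
    pdf = altitude_mixture_pdf(h, locs=[800.0, 1000.0, 1400.0],
                               scales=[60.0, 80.0, 120.0],
                               weights=[0.45, 0.35, 0.20])
    print(round(float(np.sum(pdf) * (h[1] - h[0])), 3))  # integrates to roughly 1 over LEO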
This approach propagates the number density in a bin of 'a' and 'e' rapidly and accurately without propagating each individual debris object in the bin, and it can assimilate information from further breakups as time passes. It also provides suitable equivalent ballistic coefficient values for the conglomeration of fragments in the various bins. The heart of the technique is a constant-gain Kalman filter, which is well suited to tracking the dynamically evolving fragment scenario and to providing time-varying equivalent ballistic coefficients for the bins. Chapter 6 presents a new approach to collision probability assessment that uses the closed-form solution of Wiesel (1989) through a three-dimensional look-up table, accounting only for air drag with an exponential atmosphere model. The approach can serve as a reference collision probability assessment tool for the LEO debris cloud environment: it accounts for the dynamical behaviour of the propagating debris objects while using a simple propagation for quick assessment of collision probability. The chapter also compares currently available collision probability assessment algorithms in terms of their complexity, application areas and the sample spaces on which they operate; a quantitative comparison of the collision probability estimates from the different methods shows that they match qualitatively. Chapter 7 again uses the efficient and robust constant-gain Kalman filter approach, which can handle the many uncertain, variable and complex features of the scenario, to predict the reentry time of risk objects. The constant gain, obtained using only a simple orbit propagator that considers drag alone, is able to absorb the other modelling errors in a real-life situation. The approach is validated in detail on several recently reentered objects, and the results are compared with the predictions of other agencies during IADC reentry campaigns. Chapter 8 presents the conclusions of the work together with suggestions for future efforts in the study of space debris, and also suggests applying the techniques developed here to other areas such as atmospheric data assimilation and forecasting.
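The constant-gain Kalman filter idea used in Chapters 5 and 7 can be shown schematically: a simple model predicts the state (here a drag-driven decay of semi-major axis), and a fixed gain blends each observation into the prediction. This is a generic sketch with an assumed gain value, not the thesis code:
    import random

    def constant_gain_filter(observations, predict, x0, gain=0.3):
        # predict(x) advances the state with a simple model (e.g. drag-only decay);
        # the fixed gain replaces the usual time-varying covariance update.
        x, estimates = x0, []
        for z in observations:
            x_pred = predict(x)
            x = x_pred + gain * (z - x_pred)
            estimates.append(x)
        return estimates

    random.seed(1)
    decay_per_step = 0.8                       # km of semi-major axis lost per step
    truth = [7000.0 - decay_per_step * k for k in range(20)]
    observations = [t + random.gauss(0.0, 2.0) for t in truth]
    estimates = constant_gain_filter(observations, lambda a: a - decay_per_step, 7002.0)
    print(round(estimates[-1], 1), round(truth[-1], 1))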
Los estilos APA, Harvard, Vancouver, ISO, etc.
19

Bartlett, Alastair Ian. "Auto-extinction of engineered timber". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31052.

Texto completo
Resumen
Engineered timber products are becoming increasingly popular in the construction industry due to their attractive aesthetic and sustainability credentials. Cross-laminated timber (CLT) is one such engineered timber product, formed of multiple layers of timber planks glued together with adjacent layers perpendicular to each other. Unlike traditional building materials such as steel and concrete, the timber structural elements can ignite and burn when exposed to fire, and thus this risk must be explicitly addressed during design. Current design guidance focusses on the structural response of engineered timber, with the flammability risk typically addressed by encapsulation of any structural timber elements with the intention of preventing their involvement in a fire. Exposed structural timber elements may act as an additional fuel load, and this risk must be adequately quantified to satisfy the intent of the building regulations in that the structure does not continue burning. This can be achieved through timber’s natural capacity to auto-extinguish when the external heat source is removed or sufficiently reduced. To address these issues, a fundamental understanding of auto-extinction and the conditions necessary to achieve it in real fire scenarios is needed. Bench-scale flammability studies were undertaken in the Fire Propagation Apparatus to explore the conditions under which auto-extinction will occur. Critical conditions were determined experimentally as a mass loss rate of 3.48 ± 0.31 g/m2s, or an incident heat flux of ~30 kW/m2. Mass loss rate was identified as the better criterion, as critical heat flux was shown by comparison with literature data to be heavily dependent on apparatus. Subsequently, full-scale compartment fire experiments with exposed timber surfaces were performed to determine if auto-extinction could be achieved in real fire scenarios. It was demonstrated that auto-extinction could be achieved in a compartment fire scenario, but only if significant delamination of the engineered timber product could be prevented. A full-scale compartment fire experiment with an exposed back wall and ceiling achieved auto-extinction after around 21 minutes, at which point no significant delamination of the first lamella had been observed. Experiments with an exposed back and side wall, and experiments with an exposed back wall, side wall, and ceiling underwent sustained burning due to repeated delamination, and an increased quantity of exposed timber respectively. Firepoint theory was used to predict the mass loss rate as a function of external heat flux and heat losses, and was successfully applied to the bench-scale experiments. This approach was then extended to the full-scale compartment fire experiment which achieved auto-extinction. A simplified approach based on experimentally obtained internal temperature fields was able to predict auto-extinction if delamination had not occurred – predicting an extinction time of 20-21 minutes. This demonstrates that the critical mass loss rate of 3.48 ± 0.31 g/m2s determined from bench-scale experiments was valid for application to full-scale compartment fire experiments. This was further explored through a series of reduced-scale compartment fire experiments, demonstrating that auto-extinction can only reliably be achieved if burnout of the compartment fuel load is achieved before significant delamination of the outer lamella takes place. 
The quantification of the auto-extinction phenomena and their applicability to full-scale compartment fires explored herein thus allows greater understanding of the effects of exposed timber surfaces on compartment fire dynamics.
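A hedged sketch of how the critical mass-loss-rate criterion quoted above might be applied: a firepoint-style energy balance estimates the burning rate from the net heat flux at the exposed surface, and auto-extinction is flagged when the estimate drops below roughly 3.48 g/m2s. The heat of gasification, surface temperature and loss terms are placeholder values, not figures from the thesis:
    CRITICAL_MLR = 3.48          # g/m^2/s, bench-scale critical mass loss rate
    SIGMA = 5.67e-11             # kW/m^2/K^4, Stefan-Boltzmann constant

    def mass_loss_rate(q_ext_kw, t_surface_k, t_ambient_k=293.0, emissivity=0.9,
                       h_conv_kw=0.012, heat_of_gasification_kj_per_g=6.5):
        # Firepoint-style balance: (external flux - radiative - convective losses) / L_g.
        q_rad = emissivity * SIGMA * (t_surface_k**4 - t_ambient_k**4)
        q_conv = h_conv_kw * (t_surface_k - t_ambient_k)
        q_net = max(q_ext_kw - q_rad - q_conv, 0.0)
        return q_net / heat_of_gasification_kj_per_g   # g/m^2/s

    def auto_extinguishes(q_ext_kw, t_surface_k=800.0):
        return mass_loss_rate(q_ext_kw, t_surface_k) < CRITICAL_MLR

    for q in (60.0, 40.0, 25.0):                       # external heat fluxes, kW/m^2
        print(q, round(mass_loss_rate(q, 800.0), 2), auto_extinguishes(q))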
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Chandra, Johanes. "Analyses expérimentales de la réponse sismique non-linéaire du système sol-structure". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENU023/document.

Texto completo
Resumen
La concentration de plus en plus importante de la population dans les milieux urbains exposés à une forte sismicité peut générer de plus en plus de dommages et de pertes. La réponse sismique en milieu urbain dépend des effets du site (direct amplification et non-linéarité du sol) et du couplage entre le sol et les structures (interaction sol-structure et site-ville). Par conséquent, la compréhension de la sismologie urbaine, c'est-à-dire le mouvement du sol intégrant l'environnement urbain, est critique pour réduire les dommages. Cela passe par la prédiction du mouvement du sol dans le milieu urbain, ingrédient fondamental à l'évaluation de l'aléa sismique. La prise en compte de l'amplification provoquée par la présence de sédiments est largement étudiée. Au contraire, la réponse non-linéarité du sol et du couplage entre le sol et la structure est rarement intégrée à la prédiction du mouvement du sol. A cause de leur complexité, ces problèmes ont toujours été abordés séparément. Dans ce contexte, cette thèse analyse la réponse non-linéaire du système sol-structure en intégrant la non-linéarité du sol et de l'interaction sol-structure. Deux travaux expérimentaux ont été conduits, avec comme but de proposer un proxy, rendant compte de la non-linéarité du sol. Le premier est l'essai en centrifugeuse qui reproduit à échelle réduite la réponse du sol et des structures. L'état de contrainte et de déformation est conservé en appliquant une accélération artificielle au modèle. Cet essai a été effectué à IFSTTAR Nantes dans le cadre de l'ANR ARVISE. Différentes configurations ont été testées, avec et sans bâtiments, sous différents niveaux de sollicitation, pour analyser la réponse du sol et des structures. Le deuxième utilise les enregistrements des réseaux accélérométriques verticaux de deux sites tests californiens : Garner Valley Downhole Arrat (GVDA) et Wildlife Liquefaction Array (WLA), gérés tout deux par l'Université de Californie, Santa Barbara (UCSB), Etats-Unis. La réponse in-situ est importante car elle décrit le comportement réel du site. Plusieurs informations décrivant les conditions de sites sont disponibles et les séismes enregistrés ont permis de tester plusieurs niveaux de déformations pour reconstruire la réponse globale de chaque site. De plus, le site GVDA est équipé d'une structure Soil-Foundation-Structure-Interaction (SFSI) qui a comme objectif d'étudier les problèmes d'interaction sol-structure. Dans les deux expériences, grace au réseau accélérométrique vertical dans le sol et la structure, on peut appliquer la méthode de propagation d'ondes 1D pour extraire la réponse de ces systèmes. Les ondes sont considérées comme des ondes SH qui se propage horizontalement dans une couche 1D. La méthode interférométrie sismique par déconvolution est appliquée pour extraire l'Impulse Response Function (IRF) du système 1D. On analyse ainsi la variation de Vs en fonction de la solliictation et à différente position dans le sol ainsi que la variation des éléments expliquant la réponse dynamique du système sol-structure. On propose au final un proxy de déformation permettant de rendre compte mais aussi de prédire la nonlinéarité des sols en fonction des niveaux sismiques subits
The concentration of population in urban areas located in seismic-prone regions can generate ever greater damage and losses. The seismic response in urban areas depends on site effects (direct amplification and soil non-linearity) and on the coupling between the soil and structures (soil-structure and site-city interaction). Understanding urban seismology, that is, ground motion incorporating the urban environment, is therefore critical to reducing damage. This requires predicting ground motion in urban areas, a fundamental element in the evaluation of seismic hazard. The amplification caused by the presence of sediments has been widely studied; however, soil non-linearity and the coupling between the ground and structures are seldom integrated into ground motion prediction, and because of their complexity these problems have usually been addressed separately. In this context, this dissertation analyses the non-linear response of the soil-structure system by integrating soil non-linearity and soil-structure interaction. Two experimental studies were performed, with the aim of providing a proxy that reflects the non-linearity of the soil. The first is a centrifuge test that reproduces the response of soil and structures at reduced scale; the state of stress and strain is conserved by applying an artificial acceleration to the model. This test was performed at IFSTTAR Nantes in the framework of the ANR ARVISE project. Different configurations were tested, with and without buildings and under different levels of shaking, to analyse the response of the soil and the structures. The second study uses the vertical accelerometric arrays of two test sites in California, the Garner Valley Downhole Array (GVDA) and the Wildlife Liquefaction Array (WLA), both managed by the University of California, Santa Barbara (UCSB), USA. The in-situ response is important since it describes the actual behaviour of each site. Extensive information describing the site conditions is available, and the recorded earthquakes were used to test several levels of shaking and reconstruct the overall response of each site. In addition, the GVDA site is equipped with a Soil-Foundation-Structure-Interaction (SFSI) structure intended for studying soil-structure interaction problems. In both experiments, thanks to the vertical accelerometer arrays in the ground and in the structure, the 1D wave propagation method can be applied to extract the response of these systems: the waves are treated as SH waves propagating in a 1D horizontal layer, and seismic interferometry by deconvolution is applied to extract the Impulse Response Function (IRF) of the 1D system. The variation of the elastic properties of the soil and the structure is thus analysed under several levels of shaking, including their variation with depth and the elements of the total response of the structure, soil-structure interaction included. Finally, a deformation proxy is proposed to evaluate, and also to predict, the non-linear response of the soil, the structure and the soil-structure interaction
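A minimal sketch of the seismic interferometry by deconvolution described above: the record at one sensor is deconvolved by a reference record through regularized spectral division (a water-level stabilizer), yielding an impulse response function of the 1D SH system. The synthetic traces and the water-level value are assumptions for illustration:
    import numpy as np

    def deconvolve(trace, reference, dt, water_level=0.01):
        # Impulse response of trace relative to reference via regularized spectral division.
        n = len(trace)
        T, R = np.fft.rfft(trace), np.fft.rfft(reference)
        power = np.abs(R) ** 2
        irf_spectrum = T * np.conj(R) / (power + water_level * power.max())
        irf = np.fft.fftshift(np.fft.irfft(irf_spectrum, n))
        lags = (np.arange(n) - n // 2) * dt
        return irf, lags

    dt = 0.01
    rng = np.random.default_rng(0)
    surface = rng.standard_normal(2048)
    downhole = 0.6 * np.roll(surface, 25) + 0.05 * rng.standard_normal(2048)  # 0.25 s delay
    irf, lags = deconvolve(downhole, surface, dt)
    print(round(float(lags[np.argmax(irf)]), 2))   # recovers the ~0.25 s travel time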
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Darmet, Natacha. "Prise en compte du risque de propagation de l'emballement thermique dans le développement des modules de batteries. Approche expérimentale tenant compte de la génération de gaz". Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALI024.

Texto completo
Resumen
Dans le cadre de la transition énergétique, les batteries au lithium-ion prennent une place prépondérante, notamment dans l’optique du stockage et du transport. L’analyse des modes de défaillance montre que même si la probabilité reste faible, l’auto-emballement d’une batterie présente des conséquences à maitriser. Il est donc indispensable d’étudier les phénomènes menant à la propagation de l’emballement d’une cellule aux cellules voisines. Ainsi, cette thèse propose une évaluation fine des paramètres moteurs de la propagation de l’ dans un module de batteries. L’état actuel de l’état de l’art propose différentes approches basées sur une pluralité de méthodes de caractérisation. Cependant, certains mécanismes restent encore à approfondir via des mesures particulières à chaque étape de l’évènement.Tout d’abord, lors d’incidents, la matière présente dans la cellule réagit en produisant du gaz, sollicitant ainsi mécaniquement les parois de la cellule. Un dispositif expérimental de mesure de pression interne de la cellule a ainsi été développé, permettant un suivi operando de cette génération de gaz. Le couplage de cette mesure aux déformations des parois a confirmé par ailleurs l’impact non négligeable des phénomènes de gonflement des électrodes sur la tenue mécanique du godet.A partir d’une température critique, l’emballement thermique de la cellule lithium-ion se déclenche, suivi de l’éjection de matière (gaz, flamme, particules) à haute température. La dynamique de ce jet a été analysée, de sa genèse, dans la cellule, à son éjection par l’utilisation de capteurs de pression et caméras rapides à différents spectres.Des transferts thermiques s’opèrent ensuite entre les éjectas et les cellules proches, pouvant provoquer la propagation de l’emballement thermique. Afin de prévoir ces échanges thermiques, l’énergie libérée par la 1ere cellule a été caractérisée par calorimétrie selon différentes configurations. Puis, à partir de la connaissance de ce paramètre clef et de la dynamique d’éjection, un calcul de flux thermique émis a alors été proposé. Par ailleurs, l’observation aux rayons X de la matière active pendant l’évènement a mis en évidence le lien direct entre la cinétique de dégradation et la génération de gaz, tout en soulevant l’importance des particules de la cellule gâchette dans l’étude thermique des risques de propagation.Enfin, la propagation de l’emballement thermique entre cellules tout solide reconstituées a été évaluée expérimentalement, pour la première fois. Ces travaux ont démontré que la propagation est possible sur ces nouvelles technologies de batterie et que cette étude doit être approfondie
Lithium-ion batteries are playing a prominent role in the energy transition, particularly for storage and transportation. The analysis of failure modes reveals that, despite its low probability, battery thermal runaway has consequences that require mitigation. Studying the phenomena that lead to the propagation of thermal runaway from one cell to neighbouring cells is therefore essential. This thesis evaluates in detail the key parameters involved in thermal runaway propagation within a battery module. The state of the art offers different approaches based on a variety of characterisation methods, but some mechanisms still need further investigation through measurements specific to each stage of the event. When an incident occurs, the cell's materials react by producing gas, which mechanically loads the cell casing. An experimental device was developed to measure the internal pressure of the cell, allowing operando monitoring of this gas generation. Correlating this measurement with the casing deformations also confirmed the significant impact of electrode swelling on the casing's mechanical strength. Above a critical temperature, the lithium-ion cell enters thermal runaway, which leads to the ejection of material (gas, flames, particles) at high temperature. Pressure sensors and high-speed cameras operating in different spectral ranges were used to analyse the jet dynamics, from its genesis in the cell to its ejection. Heat transfer then occurs between the ejecta and the surrounding cells, which can result in thermal runaway propagation. Calorimetry was used to characterise the energy released by the first cell under different configurations so that its thermal exchanges can be predicted; from this energy release and the jet dynamics, a calculation of the emitted heat flow is proposed. X-ray observation of the active material during the event revealed the direct link between degradation kinetics and gas generation, and highlighted the importance of the particles ejected by the trigger cell in the thermal analysis of propagation risks. Lastly, thermal runaway propagation between rebuilt charged all-solid-state cells was evaluated experimentally for the first time. This work demonstrates that propagation is possible with these new battery technologies and that the study must be pursued further
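As a rough illustration of the kind of estimate described above (released energy and jet dynamics converted into an emitted heat flow and a propagation check), the sketch below divides an assumed energy release by the venting duration and the exposed area of a neighbouring cell, then compares the resulting lumped temperature rise with an assumed trigger temperature; every number is a placeholder, not a measurement from the thesis:
    def propagation_check(energy_released_kj, vent_duration_s, exposed_area_m2,
                          fraction_to_neighbour, neighbour_mass_kg,
                          neighbour_cp_kj_per_kg_k, t_initial_c, t_trigger_c):
        # Crude lumped estimate of whether an adjacent cell could be driven to runaway.
        heat_flux_kw_m2 = (energy_released_kj / vent_duration_s) / exposed_area_m2
        delta_t = (fraction_to_neighbour * energy_released_kj
                   / (neighbour_mass_kg * neighbour_cp_kj_per_kg_k))
        return heat_flux_kw_m2, t_initial_c + delta_t >= t_trigger_c

    flux, propagates = propagation_check(
        energy_released_kj=50.0, vent_duration_s=5.0, exposed_area_m2=0.02,
        fraction_to_neighbour=0.3, neighbour_mass_kg=0.35,
        neighbour_cp_kj_per_kg_k=1.0, t_initial_c=25.0, t_trigger_c=180.0)
    print(round(flux, 1), "kW/m2, propagation expected:", propagates)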
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Salines, Morgane. "Modélisation de la propagation du virus de l'hépatite E dans la filière porcine et évaluation de stratégies de réduction du risque d'exposition humaine". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1B039/document.

Texto completo
Resumen
Le virus de l’hépatite E (HEV) est un agent zoonotique dont les porcs représentent le principal réservoir dans les pays industrialisés. Le présent projet de recherche a combiné études épidémiologiques, modélisation mathématique et sciences sociales pour proposer des leviers de réduction du risque d’exposition humaine au HEV par consommation de produits à base de porc. Deux essais expérimentaux et une étude en conditions naturelles ont mis en évidence le rôle majeur des co-infections immunomodulatrices dans la dynamique de l’infection par le HEV chez le porc, ces pathogènes intercurrents conduisant à une infection chronique par le HEV et à un risque augmenté de présence du virus dans le foie, le sang et les muscles des animaux abattus. Le développement d’un modèle intra-élevage, stochastique, individu-centré et multi-pathogènes, a permis de dégager des pistes de maîtrise à la fois zootechniques et sanitaires pour réduire la prévalence du virus en élevage. En complément, la conception d’un modèle inter-troupeaux a rendu possible l’analyse des facteurs de diffusion du virus dans un réseau d’élevages français. L’ensemble de ces mesures de gestion du HEV a été soumis à l’avis des organisations publiques et privées et des acteurs individuels de la filière porcine (éleveurs, conseillers, vétérinaires) par des approches de sciences humaines et sociales. Finalement, ce projet transversal et multi-disciplinaire a permis de définir des axes d’action tangibles et réalisables de gestion du HEV dans la filière porcine tout en apportant des contributions méthodologiques significatives en épidémiologie et en modélisation
Hepatitis E virus (HEV) is a zoonotic pathogen whose main reservoir in industrialised countries is pigs. This research project combined epidemiological studies, mathematical modelling and social sciences to propose levers for reducing the risk of human exposure to HEV through the consumption of pork products. Two experimental trials and one study under natural conditions highlighted the major role of immunomodulating co-infections in the dynamics of HEV infection in pigs: these intercurrent pathogens led to chronic HEV infection and an increased risk of the virus being present in the liver, blood and muscles of slaughtered animals. The development of a within-herd, stochastic, individual-based and multi-pathogen model made it possible to identify both zootechnical and sanitary control measures to reduce the prevalence of the virus on farms. In addition, the design of a between-herd model enabled the analysis of the factors responsible for the spread of the virus in a network of French farms. All these HEV control measures were submitted to public and private organisations and to individual players in the pig sector (farmers, farming advisors, veterinarians) for their opinion, using social science approaches. Finally, this transversal and multidisciplinary project made it possible to define tangible and achievable lines of action for the management of HEV in the pig sector, while making significant methodological contributions to epidemiology and modelling
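A minimal sketch of the stochastic, individual-based transmission step such within-herd models build on (susceptible pigs become infected with a probability driven by the prevalence of shedders, and co-infected pigs shed for longer); the compartments, rates and co-infection multiplier are illustrative assumptions, not the parameters of the published model:
    import math, random

    def daily_step(states, coinfected=frozenset(), beta=0.3, recovery_p=0.05,
                   coinfection_factor=2.0, rng=random.Random(42)):
        # One day of a toy S/I/R update for a pen of pigs; pigs listed in
        # `coinfected` recover more slowly, mimicking immunomodulating co-infections.
        prevalence = states.count('I') / len(states)
        p_infection = 1.0 - math.exp(-beta * prevalence)
        new_states = list(states)
        for i, s in enumerate(states):
            if s == 'S' and rng.random() < p_infection:
                new_states[i] = 'I'
            elif s == 'I':
                p_rec = recovery_p / (coinfection_factor if i in coinfected else 1.0)
                if rng.random() < p_rec:
                    new_states[i] = 'R'
        return new_states

    pen = ['I'] * 2 + ['S'] * 28
    for _ in range(90):
        pen = daily_step(pen, coinfected={0, 1})
    print(pen.count('S'), pen.count('I'), pen.count('R'))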
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Boulanger, Xavier. "Modélisation du canal de propagation Terre-Espace en bandes Ka et Q/V : synthèse de séries temporelles, variabilité statistique et estimation de risque". Thesis, Toulouse, ISAE, 2013. http://www.theses.fr/2013ESAE0009/document.

Texto completo
Resumen
Les bandes de fréquences utilisées conventionnellement pour les systèmes fixes de télécommunication par satellites (bandes C et Ku i.e. 4-15 GHz) sont congestionnées. Néanmoins, le marché des télécommunications civil et de défense accuse une demande de plus en plus importante en services multimédia haut-débit. Par conséquent, l'augmentation de la fréquence porteuse vers les bandes Ka et Q/V (20-40/50 GHz)est activement étudiée. Pour des fréquences supérieures à 5 GHz, la propagation des signaux radioélectriques souffre de l'atténuation troposphérique. Parmi les différents contributeurs à l'affaiblissement troposphérique total(atténuation, scintillation, dépolarisation, température de bruit du ciel), les précipitations jouent un rôle prépondérant. Pour compenser la détérioration des conditions de propagation, des techniques de compensation des affaiblissements (FMT: Fade Mitigation Technique) permettant d'adapter en temps réel les caractéristiques du système en fonction de l'état du canal de propagation doivent être employées. Une alternative à l'utilisation de séries temporelles expérimentales peu nombreuses est la génération de séries temporelles synthétiques d'atténuation due à la pluie et d'atténuation totale représentatives d'une liaison donnée.Le manuscrit est organisé autour de cinq articles. La première contribution est dédiée à la modélisation temporelle de l'affaiblissement troposphérique total. Le deuxième article porte sur des améliorations significatives du modèle de génération de séries temporelles d'atténuation due à la pluie recommandé par l'UITR.Les trois contributions suivantes constituent une analyse critique et une modélisation de la variabilité des statistiques du 1er ordre utilisées lors des tests des modèles de canal. La variance de l'estimateur statistique des distributions cumulatives complémentaires de l'atténuation due à la pluie et de l'intensité de précipitation est alors mise en évidence. Un modèle à application mondiale paramétré au moyen de données expérimentales est proposé. Celui-ci permet, d'une part, d'estimer les intervalles de confiance associés aux mesures de propagation et d'autre part, de quantifier le risque en termes de disponibilité annuelle associée à la prédiction d'une marge de propagation donnée. Cette approche est étendue aux variabilités des statistiques jointes. Elle permet alors une évaluation statistique de l'impact des techniques de diversité de site sur les performances systèmes, tant à microéchelle(quelques kms) qu'à macro-échelle (quelques centaines de kms)
Nowadays, the C and Ku bands used for fixed SATCOM systems are heavily congested, while end-user demand for high data rate multimedia services keeps increasing. Consequently, the use of higher frequency bands (Ka: 20 GHz and Q/V: 40/50 GHz) is under investigation. For frequencies higher than 5 GHz, radiowave propagation is strongly affected by tropospheric attenuation, and among the different contributors rain is the most significant. To compensate for the deterioration of the propagation channel, Fade Mitigation Techniques (FMT) are used. The lack of experimental data needed to optimise the real-time control loops of FMT leads to the use of rain attenuation and total attenuation time series synthesizers. The manuscript is a compilation of five articles. The first contribution is dedicated to the temporal modelling of total impairments. The second article provides significant improvements to the rain attenuation time series synthesizer recommended by ITU-R. The last three contributions are a critical analysis and a modelling of the variability observed in the first-order statistics used to validate propagation channel models. The variance of the statistical estimator of the complementary cumulative distribution functions of rainfall rate and rain attenuation is highlighted, and a worldwide model parameterised in compliance with propagation measurements is proposed. It allows confidence intervals to be estimated and the risk on a required availability associated with a given propagation margin prediction to be quantified. This approach is extended to the variability of joint statistics, allowing the impact of site diversity techniques on system performance to be evaluated at small scale (a few km) and at large scale (a few hundred km)
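A small sketch of the first-order statistic discussed above, the complementary cumulative distribution function (CCDF) of rain attenuation estimated from a time series, together with the margin read off for a target availability; the synthetic series and the 99.9% availability target are illustrative assumptions:
    import numpy as np

    def ccdf(attenuation_db, thresholds_db):
        # Fraction of time each attenuation threshold is exceeded.
        att = np.asarray(attenuation_db)
        return {t: float((att > t).mean()) for t in thresholds_db}

    def exceedance_margin(attenuation_db, availability=0.999):
        # Attenuation exceeded for at most (1 - availability) of the time.
        return float(np.quantile(attenuation_db, availability))

    rng = np.random.default_rng(7)
    rainy = rng.random(100_000) < 0.05                       # ~5% of samples affected by rain
    series = np.where(rainy, rng.lognormal(1.0, 0.8, 100_000), 0.05)

    print(ccdf(series, [1, 3, 5, 10, 15]))
    print("margin for 99.9% availability:", round(exceedance_margin(series), 1), "dB")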
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Boukraa, Lotfi. "Simulation of wireless propagation in a high-rise building". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FBoukraa.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Kaya, Yildirim. "Simulation of wireless propagation and jamming in a high-rise building". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Sep%5FKaya.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Nekkab, Narimane. "Spread of hospital-acquired infections and emerging multidrug resistant enterobacteriaceae in healthcare networks : assessment of the role of interfacility patient transfers on infection risks and control measures". Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1180/document.

Texto completo
Resumen
The spread of healthcare-associated infections (HAIs) and multi-drug resistance in healthcare networks is a major public health issue. Evaluating the role of the inter-facility patient transfers that form the structure of these networks may provide insights into novel infection control measures. Identifying novel infection control strategies is especially important for multi-drug resistant pathogens such as carbapenemase-producing Enterobacteriaceae (CPE), for which treatment options are limited. The increasing use of inter-individual contact and inter-facility transfer network data in mathematical modelling of HAI spread has helped these models become more realistic; however, they remain limited to a few settings and pathogens. The main objectives of this thesis were two-fold: 1) to better understand the structure of the healthcare networks of France and their impact on HAI spread dynamics; and 2) to assess the role of transfers in the spread of CPE in France over the 2012-2015 period. The French healthcare networks are characterised by centralised patient flows towards hub hospitals and a two-tier community clustering structure. We also found that networks of patients with HAIs have the same underlying structure as that of the general patient population. The number of CPE episodes has increased over time in France, and projections estimate that the number of monthly episodes could continue to increase, with seasonal peaks in October. The general patient network was used to show that, since 2012, patient transfers have played an increasingly important role in the spread of CPE in France, and multiple CPE spreading events linked to patient transfers were observed. Despite subtle differences between the flows of patients with an HAI and those of the general patient population, the general patient network may best inform novel infection control measures against pathogen spread. The structure of healthcare networks may thus serve as a basis for novel infection control strategies to tackle HAIs in general and CPE in particular. Key healthcare hubs in large metropoles and the key patient flows connecting hospital communities at local and regional level should be considered in the development of coordinated regional strategies to control pathogen spread in healthcare systems
La propagation des infections nosocomiales (IN), notamment liées aux bactéries multi-résistantes, au sein du réseau des hôpitaux, est un grand enjeu de santé publique. L’évaluation du rôle joué par les transferts inter-établissements des patients sur cette propagation pourrait permettre l’élaboration de nouvelles mesures de contrôle. L’identification de nouvelles mesures de contrôle est particulièrement importante pour les bactéries résistantes aux antibiotiques comme les entérobactéries productrices de carbapenemase (EPC) pour lesquelles les possibilités de traitement sont très limitées. L’utilisation des données de réseaux de contact inter-individus et de transferts inter-établissement dans la modélisation mathématique ont rendu ces modèles plus proches de la réalité. Toutefois, ces derniers restent limités à quelques milieux hospitaliers et quelques pathogènes. La thèse a eu pour objectifs de 1) mieux comprendre la structure des réseaux hospitaliers français et leur impact sur la propagation des IN ; et 2) évaluer le rôle des transferts sur la propagation des EPC.Les réseaux hospitaliers français sont caractérisés par des flux de patients vers des hubs et par deux niveaux de communautés des hôpitaux. La structure du réseau de transfert des patients présentant une IN n’est pas différente de celle du réseau général de transfert des patients. Au cours des dernières années, le nombre d’épisode d’EPC a augmenté en France et les prédictions prévoient une poursuite de cette augmentation, avec des pics de saisonnalité en octobre. Ce travail a également montré que, depuis 2012, les transferts de patients jouent avec les années un rôle de plus en plus important sur la diffusion des EPC en France. Des évènements de propagation multiple liée aux transferts sont également de plus en plus souvent observés.En conséquence, la structure du réseau des hôpitaux pourrait servir de base pour la proposition des nouvelles stratégies de contrôles des IN en général, et des EPC en particulier. Les hôpitaux très connectés des grandes métropoles et les flux des patients entre les communautés locale et régionale doivent être considérés pour le développement de mesures de contrôle coordonnées entre établissements de santé
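The hub-and-community structure described above can be probed with standard network tools; the sketch below builds a toy directed transfer network with networkx (an assumed choice of tooling, not the software used in the thesis), ranks facilities by weighted transfer volume and extracts communities on the undirected projection:
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    transfers = [                      # (origin, destination, number of transfers), toy data
        ("univ_hospital_A", "clinic_1", 120), ("clinic_1", "univ_hospital_A", 90),
        ("univ_hospital_A", "rehab_1", 60),   ("clinic_2", "univ_hospital_A", 75),
        ("univ_hospital_B", "clinic_3", 110), ("clinic_3", "univ_hospital_B", 95),
        ("univ_hospital_B", "rehab_2", 40),   ("univ_hospital_A", "univ_hospital_B", 15),
    ]
    G = nx.DiGraph()
    G.add_weighted_edges_from(transfers)

    volume = {n: G.degree(n, weight="weight") for n in G}          # in + out transfer volume
    print("hubs:", sorted(volume, key=volume.get, reverse=True)[:2])

    communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
    print([sorted(c) for c in communities])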
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Dutozia, Jérôme. "Espaces à enjeux et effets de réseaux dans les systèmes de risques". Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00785772.

Texto completo
Resumen
Territories are threaded by a multitude of networks on which modern ways of life and the functioning of society as a whole have been built; this dependence of society on networks is compounded by an interdependence between territories themselves, since, by their very structure, energy, telecommunication, sanitation and transport networks bind distinct territories into a common system. One can therefore hypothesise that the development of networks goes hand in hand with the development of territorial vulnerability. This multi-level interdependence of territories and networks tends to make risk situations more complex and places the anticipation of risk systems and the detection of high-stakes areas in a context of very strong uncertainty. The incompleteness, imprecision and uncertainty of the retrospective, spatial and technical knowledge of these risk systems constrain this approach all the more. Nevertheless, to reach a satisfactory level of predictability and anticipation, the spatial modelling of risk systems must be able to integrate different types of domino effects and to consider the many possible trajectories of the spatial spread of a hazard's impacts. In this possibilistic framework, the spatial modelling of risk systems and the detection of high-stakes areas rely on an original articulation of four concepts related to risk analysis and system stability: susceptibility, criticality, resilience and dependence, grouped here under the acronym SCReD. These territorial properties are estimated through geomatic processing that accounts for uncertainty and imprecision, and leads to a delimitation of risk areas influenced in particular by the concept of fuzzy geographical spaces. The approach, oriented towards both theory and practice, rests on understanding the links between the vulnerability of territories, of networks and of populations. The methods are applied retrospectively to the cases of Barcelona and the Var department, and in an anticipatory, high-stakes-area-detection mode to the case of Marseille. The risk systems associated with these applications vary in complexity and are triggered by different types of hazard: in the Barcelona case the initiating event is a power outage; in the Var case the interactions of the territory-network system with wildfire and snowstorm hazards are also considered; and in the Marseille case the risk systems are initiated by a flood. Progressively, across the proposed examples, the complexity of the methods for spatialising risk systems increases as uncertainty and imprecision are taken into account ever more explicitly. The results then appear more nuanced: the spatial boundaries of the risk systems become blurred, losing in sharpness what they gain in accuracy.
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Hedin, Svante. "Data Propagation and Self-Configuring Directory Services in a Distributed Environment". Thesis, Linköping University, Department of Science and Technology, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1105.

Texto completo
Resumen

The Swedish field of digital X-ray imaging has for several years relied heavily on distributed information systems and digital storage containers.
To ensure accurate and safe radiological reporting, the Swedish software firm eCare AB delivers a system called Feedback—the first and only quality assurance IT support product of its kind. This thesis covers several aspects of the design and implementation of future versions of this software platform.
The focus lies on distributed directory services and models for secure and robust data propagation in TCP/IP networks. For data propagation, a new application, InfoBroker, has been designed and implemented to facilitate integration between Feedback and other medical IT support systems. The directory services, introduced in this thesis as the Feedback Directory Services, have been designed at the architectural level. A combination of CORBA and Java Enterprise Edition is suggested as the implementation platform.

Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Amitrano, Davide. "Emission acoustique des roches et endommagement : approches experimentale et numerique, application a la sismicite miniere". Grenoble 1, 1999. http://www.theses.fr/1999GRE10002.

Texto completo
Resumen
When rocks are mechanically loaded, crack propagation produces an acoustic emission (AE) as well as a change in the elastic properties of the material. AE is therefore a direct means of studying the evolution of damage, from the diffuse stage up to the stage at which damage localises to form a macroscopic discontinuity. AE is also regarded as a small-scale model of the seismicity induced by underground works and of that of the Earth's crust: at these different scales, power-law distributions (exponent b) are observed, indicating scale invariance. The first part of this work is devoted to an experimental study of the AE of Sidobre granite under triaxial compression. We show that AE is linked to non-linear macroscopic behaviour and can be used as an estimator of damage, and that increasing the confining pressure makes the mechanical behaviour more ductile and lowers the b exponent. In the second part we propose a numerical model based on elastic damage and on the finite element method. The model reproduces the main observations concerning AE and rock behaviour; in particular, it simulates macroscopic behaviour ranging from brittle, with localised damage, to ductile, with diffuse damage, using a single control parameter: the internal friction angle. Using experimental values of this parameter, we reproduce the brittle-ductile and localised-diffuse transitions observed when the confining pressure increases. The third part concerns the study of induced seismicity in a mine. The critical character of the earthquake size distribution is proposed as a criterion for monitoring seismic risk, and the relation between the b exponent and the spatial correlation dimension of the seismic sources is studied.
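The power-law size distribution (b exponent) mentioned above is commonly estimated with the Aki maximum-likelihood formula; the sketch below applies it to a list of event magnitudes. The catalogue is synthetic and the completeness threshold is an assumption:
    import math, random

    def b_value(magnitudes, m_min):
        # Aki (1965) maximum-likelihood estimator above the completeness magnitude m_min.
        sample = [m for m in magnitudes if m >= m_min]
        mean_excess = sum(m - m_min for m in sample) / len(sample)
        return math.log10(math.e) / mean_excess

    rng = random.Random(3)
    true_b = 1.0
    mags = [rng.expovariate(true_b * math.log(10)) for _ in range(5000)]  # Gutenberg-Richter-like
    print(round(b_value(mags, m_min=0.0), 2))   # close to the true value of 1.0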
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Picot, Alexis. "2P optogenetics : simulation and modeling for optimized thermal dissipation and current integration Temperature rise under two-photon optogenetics brain stimulation". Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCB227.

Texto completo
Resumen
Depuis maintenant quinze ans, l’optogénétique a bouleversé la recherche en neurosciences en permettant de contrôler les circuits neuronaux. Le développement récent de plusieurs approches d’illumination, combinées à de nouvelles protéines photosensibles, les opsines, ont permis d’ouvrir une voie vers le contrôle neuronale avec la précision de la cellule unique. L’ambition nouvelle d’utiliser ces approches afin d’activer des dizaines, centaines, milliers de cellules in vivo a soulevé de nombreuses questions, notamment concernant les possibles dégâts photoinduits et l’optimisation du choix du couple illumination/opsine. Lors de mon doctorat, j’ai conçu une simulation vérifiée expérimentalement qui permet de calculer, dans toutes les conditions actuelles d’illumination, quel sera l’échauffement au sein du tissus cérébral dû à l’absorption de la lumière par le cerveau. Parallèlement, j’ai paramétré à partir de données expérimentales des modèles de dynamique des populations, à partir d’enregistrements d’électrophysiologie, qui permettent de simuler les courants intracellulaires observés lors de ces photostimulations, pour trois protéines différentes. Ces modèles permettront les chercheurs d’optimiser leurs protocoles d’illumination afin de garantir l’échauffement le plus faible possible dans l’échantillon, tout en favorisant des dynamiques de photocourant adaptées aux besoins expérimentaux
Over the past fifteen years, optogenetics has revolutionized neuroscience research by enabling control of neuronal circuits. The recent development of several illumination approaches, combined with new photosensitive proteins, the opsins, has paved the way to neuronal control with single-cell precision. The ambition to use these approaches to activate tens, hundreds or thousands of cells in vivo has raised many questions, in particular concerning possible photoinduced damage and the optimal choice of the illumination/opsin pair. During my PhD, I developed an experimentally verified simulation that calculates, for all illumination protocols currently in use, the temperature rise in brain tissue due to the absorption of light. In parallel, I used electrophysiology recordings to parameterize models of the intracellular currents observed during these photostimulations for three different opsins, making it possible to simulate them. These models will allow researchers to optimize their illumination protocols so as to keep heating in the sample as low as possible, while generating photocurrent dynamics suited to the experimental requirements
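As a rough order-of-magnitude illustration of the quantity such heating simulations produce, the sketch below uses the steady-state point-source solution of the heat equation, delta-T(r) = P_abs / (4 * pi * k * r), to estimate the temperature rise at a distance r from an absorbing focal volume; the absorbed power, tissue conductivity and distances are assumed values, and the simulation in the thesis is far more detailed (time-dependent, spatially extended sources):
    import math

    def steady_state_rise_k(absorbed_power_w, distance_m, conductivity_w_per_m_k=0.5):
        # Steady-state temperature rise of a point heat source in an infinite medium.
        return absorbed_power_w / (4.0 * math.pi * conductivity_w_per_m_k * distance_m)

    # Assumed numbers: 1 mW absorbed near the focal volume, brain conductivity ~0.5 W/m/K.
    for r_um in (100, 300, 1000):
        print(r_um, "um:", round(steady_state_rise_k(1e-3, r_um * 1e-6), 2), "K")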
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Bzduch, Pavel. "Využití počítačové grafiky v silnoproudé elektrotechnice". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217593.

Texto completo
Resumen
The aim of this work is to describe the new features of Autodesk Inventor Professional 2008. The second part of the work describes the possibilities for technical drawing in Autodesk Inventor. The third part presents basic information about ANSYS Workbench and includes a thermal simulation of an asynchronous motor.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

May, Daniel. "Transiente Methoden der Infrarot-Thermografie zur zerstörungsfreien Fehleranalytik in der mikroelektronischen Aufbau- und Verbindungstechnik". Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-163082.

Texto completo
Resumen
In this work, a new failure-analysis method for industrial application to new electronic packaging technologies has been developed. The method is based on the interaction of thermal waves with defects. Its distinguishing features are that it is non-destructive and fast, offers high spatial resolution, and achieves high temperature sensitivity thanks to the latest IR detectors. Fundamental studies of resolution and of parasitic effects were carried out under industrial boundary conditions, following a systematic approach with respect to complexity. This now enables, among other things, a prediction of the test duration needed to resolve buried defects, of the limit on the maximum excitation pulse width (for a given defect depth), and a quantitative determination of the influence of a paint or lacquer coating. Methodologically, simulations and comparative experiments were used throughout, and dedicated samples were employed to isolate and clarify parasitic effects. Finally, the measurement system was successfully demonstrated on industrial problems. The developed measurement system is characterized by high flexibility: different problem-adapted excitation sources (internal and external excitation through numerous physical effects) can be used. The system consists of four main modules: the difference-image method, pulse thermography, and two variants of lock-in thermography. Together, these allow voids, delaminations, and cracks to be detected reliably in various areas of modern electronic packaging, reaching temperature resolutions down to 5 mK and lateral resolutions down to 17 µm. This work lays a foundation for the industrial application of thermal failure analysis by identifying and characterizing the limits of IR measurement techniques.
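The lock-in thermography variants mentioned above derive amplitude and phase images by correlating the recorded IR frame sequence with sine/cosine references at the excitation frequency. The sketch below is a generic illustration of that demodulation step and of the thermal-diffusion-length rule of thumb linking lock-in frequency to probed depth; it is not the author's measurement software, and the material parameter is an assumed textbook value.

```python
# Generic lock-in thermography demodulation sketch (assumed, not the thesis's software):
# per-pixel correlation of an IR frame sequence with sine/cosine references at the
# excitation frequency yields amplitude and phase maps.
import numpy as np

def lockin_demodulate(frames, frame_rate, f_lockin):
    """frames: array of shape (n_frames, height, width); returns amplitude and phase maps."""
    n = frames.shape[0]
    t = np.arange(n) / frame_rate
    ref_sin = np.sin(2 * np.pi * f_lockin * t)
    ref_cos = np.cos(2 * np.pi * f_lockin * t)
    # correlate every pixel's time trace with the two references
    s = np.tensordot(ref_sin, frames, axes=(0, 0)) * 2.0 / n
    c = np.tensordot(ref_cos, frames, axes=(0, 0)) * 2.0 / n
    amplitude = np.hypot(s, c)
    phase = np.arctan2(s, c)          # phase images are largely emissivity-independent
    return amplitude, phase

# Thermal diffusion length links the lock-in frequency to the probed depth:
# mu = sqrt(alpha / (pi * f)); lower frequency -> deeper (but slower) inspection.
alpha_si = 88e-6                      # thermal diffusivity of silicon [m^2/s] (textbook value)
for f in (0.1, 1.0, 10.0):
    print(f"f = {f:5.1f} Hz  ->  diffusion length ≈ {np.sqrt(alpha_si/(np.pi*f))*1e3:.2f} mm")
```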
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Deobarro, Mikaël. "Etude de l'immunité des circuits intégrés face aux agressions électromagnétiques : proposition d'une méthode de prédiction des couplages des perturbations en mode conduit". Thesis, Toulouse, INSA, 2011. http://www.theses.fr/2011ISAT0002/document.

Texto completo
Resumen
With the technological advances of recent decades, the complexity and operating speeds of integrated circuits have greatly increased. While these developments have reduced circuit dimensions and supply voltages, the electromagnetic compatibility (EMC) of components has been strongly degraded. Identified as a technological bottleneck, EMC is now one of the main causes of circuit re-design, because the issues related to noise generation and coupling mechanisms are not sufficiently studied during design. This manuscript therefore introduces a methodology for studying the propagation of electromagnetic disturbances through integrated circuits by measurement and simulation. To improve our knowledge of the propagation of electromagnetic interference (EMI) and of coupling mechanisms through circuits, we designed a test vehicle in Freescale Semiconductor's SMOS8MV® 0.25 µm technology. In this circuit, several elementary functions such as an I/O bus and digital blocks were implemented. Asynchronous on-chip voltage sensors were also integrated on different supplies of the chip to analyze the propagation of disturbances injected on the component's pins (DPI injection) and on the conductors supplying it (BCI injection). In addition, we propose various tools to facilitate the modeling and simulation of integrated-circuit immunity: PCB model extraction, modeling approaches for the injection systems, an innovative method to predict and correlate the voltage/power levels injected during conducted-immunity measurements, and a modeling flow. Each proposed tool and modeling method is evaluated on different test cases. Finally, to assess the modeling approach, we apply it to a digital block of the test vehicle and compare simulation results with the various internal and external measurements performed on the circuit.
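As a rough illustration of the kind of lumped modeling used to predict injected voltage levels in conducted-immunity (DPI) setups, the sketch below estimates the voltage reaching an IC pin from a given forward power through a coupling capacitor and a short PCB trace; the component values and pin impedance are assumptions, and this is not the prediction method developed in the thesis.

```python
# Hedged lumped-element sketch of a DPI injection path (illustrative only):
# 50-ohm source -> coupling capacitor -> PCB trace inductance -> IC pin (R parallel C).
import numpy as np

Z0 = 50.0                  # generator/source impedance [ohm]
C_couple = 6.8e-9          # DPI coupling capacitor [F] (typical value, assumed)
L_trace = 10e-9            # PCB trace inductance [H] (assumed)
R_pin, C_pin = 1e3, 5e-12  # simplified IC pin input impedance (assumed)
P_fwd = 1.0                # forward power of the injection [W]

f = np.logspace(6, 9, 301)                    # 1 MHz .. 1 GHz
w = 2 * np.pi * f
Zc = 1 / (1j * w * C_couple)                  # coupling capacitor
Zl = 1j * w * L_trace                         # trace inductance
Zpin = 1 / (1 / R_pin + 1j * w * C_pin)       # pin: R parallel C

V_src = 2 * np.sqrt(Z0 * P_fwd)               # rms open-circuit source voltage for P_fwd
V_pin = V_src * Zpin / (Z0 + Zc + Zl + Zpin)  # simple series voltage divider

for fx in (1e6, 1e7, 1e8, 1e9):
    i = np.argmin(np.abs(f - fx))
    print(f"{f[i]/1e6:7.1f} MHz : |V_pin| ≈ {abs(V_pin[i]):6.2f} V rms")
```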
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Jhao, Wun-Jie y 趙文傑. "Using Malware for Threat Risk Analysis based on a Dynamic Taint Propagation Approach". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/49513444934381853966.

Texto completo
Resumen
Master's thesis
崑山科技大學 (Kun Shan University)
資訊管理研究所 (Institute of Information Management)
103
Due to the popularity of 3G networks and the rapid growth of mobile devices, people have become increasingly dependent on the network. At the same time, the number of malware attacks targeting smartphones has increased severalfold over the past three years. To guarantee the mobile security of cloud computing services, cloud service providers (CSPs) use taint checking (TC) approaches to examine how a cloud app under development uses sensitive information before deploying it. Accordingly, this paper proposes a taint propagation analysis model incorporating a weighted spanning tree analysis scheme to track data with taint markings using several taint checking tools. In performing malware threat analysis against unspecified malware attacks, CSPs can use a TC approach to track the spread and scope of information flows between attack sources (malware) and to detect vulnerabilities of targeted network applications. To verify the proposed model, we analyze Android programs using dynamic taint propagation to assess the spread of, and risks posed by, suspicious apps connected to the cloud computing server. For probabilistic analysis of taint propagation, a risk value and a defence capability are assigned to each taint path, assisting a defender in recognising the outcomes of attacks caused by malware infection and in estimating the losses associated with each taint source.
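To make the taint-path risk idea concrete, the following minimal sketch (with invented nodes, edge probabilities, and defence values, not the thesis's tool or data) propagates taint from a malware source through a small data-flow graph and scores each source-to-sink path.

```python
# Minimal taint-propagation sketch: enumerate taint paths from a malware source to a
# sink and score each path as the product of per-edge propagation probabilities,
# discounted by a node-level defence capability. All values are invented examples.

# directed data-flow edges with assumed propagation probabilities
edges = {
    ("malware_sdk", "contacts_api"): 0.9,
    ("contacts_api", "http_client"): 0.8,
    ("malware_sdk", "sms_reader"):  0.7,
    ("sms_reader", "http_client"):  0.6,
    ("http_client", "cloud_server"): 0.95,
}
defence = {"http_client": 0.3, "cloud_server": 0.5}   # assumed mitigation strength per node

graph = {}
for (src, dst), p in edges.items():
    graph.setdefault(src, []).append((dst, p))

def taint_paths(source, sink, path=None, prob=1.0):
    """Yield every taint path source -> sink together with its accumulated risk."""
    path = (path or []) + [source]
    if source == sink:
        yield path, prob
        return
    for nxt, p in graph.get(source, []):
        if nxt not in path:                            # avoid cycles
            residual = p * (1.0 - defence.get(nxt, 0.0))
            yield from taint_paths(nxt, sink, path, prob * residual)

for path, risk in sorted(taint_paths("malware_sdk", "cloud_server"), key=lambda x: -x[1]):
    print(f"risk {risk:.3f} : " + " -> ".join(path))
```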
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

LIN, ZHI-TING y 林智婷. "A novel portfolio diversification and risk reduction strategy based on affinity propagation clustering algorithm". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/61185905112765890814.

Texto completo
Resumen
Master's thesis
國立交通大學 (National Chiao Tung University)
資訊管理研究所 (Institute of Information Management)
104
In this paper, an intelligent portfolio selection method based on the Affinity Propagation clustering algorithm is proposed to address the stable-investment problem. The goal of this work is to minimize the volatility of a portfolio selected from the component stocks of the S&P 500 index. Each stock is viewed as a node in a graph, and similarity measurements between companies are used as edge weights. The Affinity Propagation clustering algorithm solves this graph problem by repeatedly updating the responsibility and availability message-passing matrices. This research seeks the most representative and discriminative features for modeling stock similarity; the candidate features fall into four major categories: time-series covariance, technical indicators, previous return information, and paired return values. Historical price and trading-volume data are used to simulate portfolio selection and to measure volatility. After grouping the investment targets into a small set of clusters, the selection process chooses a fixed number of stocks from different clusters to form the portfolio. The experimental results show that the proposed system, using Affinity Propagation clustering with appropriate similarity features, generates more stable portfolios than average cases with similar settings.
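A hedged sketch of the pipeline described above, using synthetic prices in place of S&P 500 data and a single correlation-based similarity feature rather than the thesis's full feature set, might look as follows with scikit-learn's AffinityPropagation.

```python
# Sketch of cluster-based portfolio diversification: compute returns, build a
# similarity matrix, cluster stocks with Affinity Propagation, pick one stock per
# cluster, and report the resulting portfolio volatility. Data here are synthetic.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
n_stocks, n_days = 40, 250
prices = np.cumprod(1 + 0.01 * rng.standard_normal((n_days, n_stocks)), axis=0)  # stand-in for S&P 500 members

returns = np.diff(np.log(prices), axis=0)                 # daily log returns
corr = np.corrcoef(returns.T)                             # one simple similarity feature: return correlation
similarity = -np.sqrt(2.0 * (1.0 - corr))                 # higher (less negative) = more similar

ap = AffinityPropagation(affinity="precomputed", damping=0.9,
                         max_iter=500, random_state=0).fit(similarity)
labels = ap.labels_

# choose the lowest-volatility stock from each cluster, then equal-weight them
portfolio = [min(np.flatnonzero(labels == c), key=lambda i: returns[:, i].std())
             for c in np.unique(labels)]
port_returns = returns[:, portfolio].mean(axis=1)
print(f"{len(portfolio)} clusters; annualised portfolio volatility ≈ "
      f"{port_returns.std() * np.sqrt(252):.2%}")
```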
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Rist, Stefan [Verfasser]. "Light propagation in ultracold atomic gases / vorgelegt von Stefan Rist". 2010. http://d-nb.info/1010685287/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

GENTILI, FILIPPO. "Multi-physics modelling for the safety assessment of complex structural systems under fire. The case of high-rise buildings". Doctoral thesis, 2013. http://hdl.handle.net/11573/918045.

Texto completo
Resumen
Among all structures, high-rise buildings pose specific design challenges with respect to fire safety, in particular for the evaluation of the fire development (the fire action) and of the response of the structural system to fire (the structural behaviour). Regarding the fire action, the large compartments and open hallways often present in modern high-rise buildings do not lend themselves to design in compliance with current codes and standards; a comprehensive analysis of the fire environment is required to understand the fire dynamics in these cases. A Computational Fluid Dynamics (CFD) model allows a reasonably accurate representation of realistic fire scenarios, because it takes into account the distribution of fuel, the geometry, the occupancy of individual compartments, and the temperature rise in structural elements located outside the tributary area of the fire scenario. Regarding the structural behaviour under fire, the passive fire resistance of structural elements and the intrinsic robustness of the system are the only measures that can be relied on to maintain the structural integrity of the building during and after the fire and to avoid major economic losses due to structural failures and prolonged inoperability of the premises. Disproportionate fire-induced damage can be avoided with a proper design of the structure, aimed at reducing the vulnerability of the elements to fire (i.e. their sensitivity to fire) or at increasing the robustness of the structural system (i.e. its sensitivity to local damage). The topic of this thesis is the evaluation of structural safety in case of fire by means of advanced multi-physics analyses, with direct reference to the modern Performance-Based Fire Design (PBFD) framework. A fundamental aspect is how basic failure mechanisms can be triggered or modified by the presence of fire in part of a structural system, such as the three-hinge mechanism, bowing effects, catenary action, thermal buckling and snap-through, and sway and non-sway collapse. High-rise buildings, which are expected to be susceptible to fire-induced progressive collapse, are investigated; critical elements are identified in the system and countermeasures for enhancing structural integrity are suggested. Investigating the response of such complex structures subjected to fire scenarios requires the use of CFD and Finite Element (FE) models for a realistic evaluation of the fire action and of the structural response, respectively.
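For context only (the thesis relies on CFD-based fire scenarios rather than nominal curves), the standard ISO 834 temperature-time curve that prescriptive codes use as the fire action is straightforward to evaluate, as in the small sketch below.

```python
# Worked example of the ISO 834 nominal temperature-time curve, the prescriptive
# baseline against which performance-based CFD fire scenarios are typically compared.
import math

def iso834_gas_temperature(t_minutes, ambient_c=20.0):
    """Nominal gas temperature [deg C] after t_minutes of standard fire exposure."""
    return ambient_c + 345.0 * math.log10(8.0 * t_minutes + 1.0)

for t in (15, 30, 60, 90, 120):
    print(f"t = {t:3d} min : {iso834_gas_temperature(t):6.0f} °C")
```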
Los estilos APA, Harvard, Vancouver, ISO, etc.