To see the other types of publications on this topic, follow the link: Large-Scale fire.

Dissertations / Theses on the topic 'Large-Scale fire'

Consult the top 50 dissertations / theses for your research on the topic 'Large-Scale fire.'

1. McKenzie, Donald. "Modeling large-scale fire effects: concepts and applications." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/5602.

2. Gales, John Adam Brian. "Unbonded post-tensioned concrete structures in fire." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8083.

Abstract:
To achieve thinner and longer floor slabs, rapid construction, and tight control of in-service deflections, modern concrete structures increasingly use high-strength, post-tensioned prestressing steel as reinforcement. The resulting structures are called post-tensioned (PT) concrete. Post-tensioned concrete slabs are widely believed to benefit from ‘inherent fire endurance.’ This belief is based largely on results from a series of standard fire tests performed on simply-supported specimens some five decades ago. Such tests are of debatable credibility; they do not capture the true structural behaviour of real buildings in real fires, nor do they reflect modern PT concrete construction materials or optimization methods. This thesis seeks to develop a more complete understanding of the structural and thermal response of modern prestressing steel and PT concrete slabs, particularly those with unbonded prestressing steel conditions, to high temperature, in an effort to steer current practice and future research towards the development of defensible, performance-based, safe fire designs. An exhaustive literature review of previous experimentation and real case studies of fire exposed PT concrete structures is presented to address whether current code guidance is adequate. Both bonded and unbonded prestressing steel configurations are considered, and research needs are identified. For unbonded prestressing steel in a localised fire, the review shows that the interaction between thermal relaxation and plastic deformation could result in tendon failure and loss of tensile reinforcement to the concrete, earlier than predicted by available design guidance. Since prestressing steel runs continuously in unbonded PT slabs, local damage to prestressing steel will affect the integrity of adjacent bays in a building. In the event that no bonded steel reinforcement is provided (as permitted by some design codes) a PT slab could lose tensile reinforcement across multiple bays; even those remote from fire. Using existing literature and design guidance, preliminary simplified modelling is presented to illustrate the stress-temperature-time interactions for stressed, unbonded prestressing steel under localised heating. This exercise showed that the observed behaviour cannot be rationally described by the existing design guidance. The high temperature mechanical properties of modern prestressing steel are subsequently considered in detail, both experimentally and analytically. Tests are presented on prestressing steel specimens under constant axial stress at high temperature using a high resolution digital image correlation (DIC) technique to accurately measure deformations. A novel, accurate analytical model of the stress-temperature-time dependent deformation of prestressing steel is developed and validated for both transient and steady-state conditions. Modern prestressing steel behaviour is then compared to its historical prestressing steel counterparts, showing significant differences at high temperature. Attention then turns to other structural actions of a real PT concrete structure (e.g. thermal bowing, restraint, concrete stiffness loss, continuity, spalling, slab splitting etc.) all of which also play inter-related roles influencing a PT slab’s response in fire.
A series of three non-standard structural fire experiments on heavily instrumented, continuous, restrained PT concrete slabs under representative sustained service loads were conducted in an effort to better understand the response of PT concrete structures to localised heating. To the author’s knowledge this is the first time a continuous PT slab which includes axial, vertical and rotational restraint has been studied at high temperature, particularly under localised heating. The structural response of all three tests indicates a complex deflection trend in heating and in cooling which differs considerably from the response of a simply supported slab in a standard fire test. Deflection trends in the continuous slab tests were due to a combination of thermal expansion and plastic damage. The test data will enable future efforts to validate computational models which account for the requisite complexities. Overall, the research presented herein shows that some of the design guidance for modern PT concrete slabs is inadequate and should be updated. The high temperature deformation of prestressing steel under localised heating, as would be expected in a real fire, should be considered, since uniform heating of simply-supported elements is both unrealistic and unconservative with respect to tensile rupture of prestressing steel tendons. The most obvious impact of this finding would be to increase the minimum concrete covers required for unbonded PT construction, and to require adequate amounts of bonded steel reinforcement to allow load shedding to the bonded steel at high temperature in the event that the prestressing steel fails or is severely damaged by fire.
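The averaging effect that makes localised heating so punishing for unbonded tendons can be made concrete with a toy calculation. The sketch below is not the validated model from the thesis: it assumes a linear-elastic tendon with fixed total elongation and illustrative material constants, and it ignores the thermal creep and relaxation behaviour the thesis actually measures.

```python
# Toy illustration: stress relief in an unbonded tendon when a short
# segment is heated. Because strain averages over the whole free length,
# a local temperature rise relaxes prestress everywhere along the tendon.
# Illustrative values only -- not the validated model from the thesis.

E20 = 195e3        # MPa, elastic modulus of prestressing steel at 20 C (assumed)
alpha = 1.2e-5     # 1/C, thermal expansion coefficient (assumed)
sigma0 = 1100.0    # MPa, initial effective prestress (assumed)
L_total = 20.0     # m, free tendon length between anchorages
L_hot = 2.0        # m, locally heated length

def stress_after_heating(dT, k_E=1.0):
    """Uniform stress in the tendon after the hot segment expands by
    alpha*dT, with total elongation held fixed by the anchorages.
    k_E is an optional stiffness-reduction factor at temperature."""
    # Thermal expansion of the hot segment is absorbed as a uniform
    # elastic strain reduction over the full free length.
    d_eps = alpha * dT * (L_hot / L_total)
    return max(sigma0 - k_E * E20 * d_eps, 0.0)

for dT in (100, 300, 500):
    print(f"dT = {dT:3d} C -> stress ~ {stress_after_heating(dT):7.1f} MPa")
```

Even in this crude form, the full-length strain averaging shows why a short hot segment changes the prestress of the entire tendon, including bays remote from the fire.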
3. Klinck, Amanda. "An Experimental Investigation of the Fire Characteristics of the University of Waterloo Burn House Structure." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/774.

Abstract:
This thesis reports on the procedure, results and analysis of four full-scale fire tests performed at the University of Waterloo's Live Fire Research Facility. The purpose of these tests was to investigate the thermal characteristics of one room of the Burn House structure. The Burn House experimental data were compared with previous residential fire studies undertaken by researchers from the University of Waterloo; this analysis showed similar growth-rate characteristics, illustrating that fire behaviour in the Burn House is typical of residential structure fires. The experimental data were also compared with predictions from a fire model, CFAST. Recommendations were made for future work on further investigation of the fire characteristics of the Burn House.
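Growth-rate comparisons of the kind reported here are commonly framed against an idealised t-squared design fire, Q = α·t². A minimal sketch of that fit, using invented data points rather than anything from the Burn House tests:

```python
# Fit the growth coefficient of an idealised t-squared fire, Q = a*t^2,
# to (time, heat-release-rate) samples. Data below are invented for
# illustration; they are not measurements from the Burn House tests.
import numpy as np

t = np.array([30.0, 60.0, 90.0, 120.0])     # s
Q = np.array([42.0, 169.0, 380.0, 675.0])   # kW

a = float(np.sum(Q * t**2) / np.sum(t**4))  # least squares for Q = a*t^2
print(f"alpha ~ {a:.4f} kW/s^2")            # ~0.0469 -> 'fast' growth class
# Standard t-squared coefficients for comparison:
# slow 0.00293, medium 0.01172, fast 0.0469, ultrafast 0.1876 kW/s^2
```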
4. Henderson, Erik. "Metal Thermoelectrics: An Economical Solution to Large Scale Waste Heat Recovery." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1512038554977884.

5. Foster, Andrew. "Understanding, predicting and improving the performance of foam filled sandwich panels in large scale fire resistance tests." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/understanding-predicting-and-improving-the-performance-of-foam-filled-sandwich-panels-in-large-scale-fire-resistance-tests(3dc4bf07-82f0-4e3e-9cab-37e9244fe2a2).html.

Abstract:
This thesis presents the results of research on sandwich panel construction, with the aims of developing tools for modelling sandwich panel fire performance and of using those tools to develop sandwich panel construction with improved fire resistance. The research focuses on sandwich panels made of thin steel sheeting and a polyisocyanurate (PIR) foam core. For non-loadbearing sandwich panel construction, fire resistance is measured in terms of thermal insulation and integrity only. However, these two parameters are affected by the mechanical performance of sandwich panel construction, owing to the high distortion and large deformation of sandwich panel construction under fire attack. Therefore, it is necessary to consider both the thermal and mechanical performance of sandwich panels under fire conditions. The work in this thesis includes development of a thermal conductivity model for PIR foam, as this thermal property is one of the key values in determining heat transfer through sandwich panels; this thermal conductivity model is based on the effective thermal conductivity of porous foams proposed by Glicksman (1994) and includes the effects of polymer decomposition and increases in foam cell size. It is validated against fire tests carried out on PIR sandwich panels 80 mm and 100 mm thick with steel facings of thickness 0.5 mm. A large 3D sequentially coupled thermal-stress model of a full-scale fire test has been developed in the commercial finite element analysis (FEA) software ABAQUS to provide insight into the way sandwich panels behave in a fire resistance test and also to assess different modelling techniques. Aspects and stages of the simulation that agree well with test data are explained. Limitations of the ABAQUS software for simulating sandwich panel fire tests are highlighted; namely, it is not possible to simulate the correct radiation heat transfer through panel joints, as cavity radiation cannot be specified in a fully coupled thermal-stress analysis. Joints are key components of sandwich panel construction. In order to obtain temperature development data for modelling joints, a number of fire tests have been carried out. These fire tests were conducted with different joint configurations and panel thicknesses under realistic fire conditions using timber cribs. The joint fire tests revealed significant ablation of the foam core within the joints of sandwich panels at high temperatures. At the beginning of fire exposure, the joint temperature on the unexposed surface was lower than that on the panel due to the better insulation property of air compared to the foam. However, as the joint gap increased due to ablation of the foam, the joint temperatures became higher than in the panel. A numerical simulation model has been created to investigate this behaviour. Using the aforementioned thermal model, numerical simulations have been carried out to examine the influence of possible changes to sandwich panel design on fire performance. It was suggested that if the maximum gap in the joints can be limited to 5 mm, for example by applying intumescent coating strips within the sandwich panel joints to counter the increasing gap formed due to core ablation, then the joint temperature on the unexposed surface would not exceed that of the panel surface, and hence the joint would cease to be the weak link.
To increase the panel fire resistance, the use of graphite particles in the PIR foam formulation may be considered to lower the contribution of radiative heat transfer within the foam cells by reducing the transmissivity of the cell walls. Graphite particles may offer considerable increases in the thermal resistance of PIR foam at high temperatures by limiting the radiation contribution, which dominates heat transfer above 300 °C.
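The structure of such an effective-conductivity model can be sketched as gas and solid conduction terms plus a Rosseland-type radiative term; the function below is a generic illustration with placeholder coefficients, not Foster's calibrated PIR model.

```python
# Sketch of an effective thermal conductivity for a closed-cell foam:
# conduction through the cell gas and the polymer struts/walls, plus a
# Rosseland radiative term that grows with T^3 and falls as the
# extinction coefficient rises. Placeholder coefficients only.
SIGMA = 5.670e-8  # W/(m^2 K^4), Stefan-Boltzmann constant

def k_effective(T, k_gas, k_solid, phi_solid, K_ext):
    """T in kelvin; phi_solid = polymer volume fraction; K_ext in 1/m.
    The 2/3 weighting of the solid-phase term (Glicksman-style strut/
    wall factor) is an assumption of this sketch."""
    k_cond = (1.0 - phi_solid) * k_gas + (2.0 / 3.0) * phi_solid * k_solid
    k_rad = 16.0 * SIGMA * T**3 / (3.0 * K_ext)   # Rosseland approximation
    return k_cond + k_rad

# Radiation dominance at high temperature (illustrative numbers):
for T in (300.0, 600.0, 900.0):
    print(T, round(k_effective(T, k_gas=0.026, k_solid=0.25,
                               phi_solid=0.03, K_ext=5000.0), 4))
```

Raising the extinction coefficient K_ext, which is what opacifying the cell walls with graphite particles does, directly suppresses the T³ radiative term — the mechanism invoked above for temperatures beyond 300 °C.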
6. Del Valle, Marcelo. "Benchmark sensitivity of the container analysis fire environment (CAFE) computer code using a rail-cask-size pipe calorimeter in large-scale pool fires." Abstract and full text PDF (UNR users only), 2008. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1460792.

7. Horvath, István. "Extreme PIV Applications: Simultaneous and Instantaneous Velocity and Concentration Measurements on Model and Real Scale Car Park Fire Scenarios." Doctoral thesis, Université Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209641.

Abstract:
This study presents an instantaneous and simultaneous velocity and concentration measurement technique and its applications to car park fire scenarios.

Chapter 1 gives a general introduction to each chapter. Chapter 2 presents a detailed description of the instantaneous and simultaneous velocity and concentration measurement technique and its associated error assessment methodology. The name of the new technique is derived from the acquired parameters (VELocity and COncentration); it is hereafter referred to as VELCO. After validation and error assessment of this technique, it is applied in chapter 3 to full-scale car park fire cases (30 m x 30 m x 2.6 m – Gent / WFRGENT). The measurements were carried out with the financial support of the IWT-SBO program. In the full-scale measurements only the velocity part of VELCO is applied, yet this still constitutes an application of the technique, since the dedicated data treatment was developed and implemented in the Rabon program (see §2.1.2), the software of the new technique, alongside Tucsok (see §2.1.1); both are discussed in the related chapter. It suffices to note here that the concentration and velocity information can also be obtained independently. During the full-scale measurements, in addition to VELCO, smoke back-layering distances (SBL) are derived from temperature values measured by thermocouples under the ceiling along the midline of the car park. The critical velocity, an important measure of fire safety, can be obtained from the SBL results. In chapter 4, isothermal fire modelling is surveyed to show how full-scale fires are modelled at small scale; the theory of fire-related formulae and an isothermal model are described. Fire modelling is not directly related to the VELCO technique, but it connects the full-scale measurements to the small-scale measurements on which the technique is applied. Chapter 5 discusses small-scale measurements (1:25 – Rhode Saint Genèse / VKI) on the car park introduced in chapter 3 and their validation. After validation, more complex car park scenarios are also investigated, since the layout of the small-scale model is easy to change compared with the full-scale car park. In this chapter the smoke back-layering distances are obtained by VELCO. Finally, in chapter 6 conclusions are drawn with the objective of increasing fire safety.
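The critical velocity mentioned above is commonly estimated from a Froude-scaling correlation; a minimal sketch using the classic Thomas-type form, with illustrative inputs rather than values from the thesis:

```python
# Froude-scaling estimate of the critical ventilation velocity needed
# to prevent smoke back-layering (Thomas-type correlation). Inputs are
# illustrative; the thesis derives SBL from thermocouple data instead.
G = 9.81        # m/s^2
RHO = 1.2       # kg/m^3, ambient air density
CP = 1.0e3      # J/(kg K), specific heat of air
T_AMB = 293.0   # K, ambient temperature

def critical_velocity(Q_conv, width, k=0.8):
    """Q_conv: convective heat release rate in W; width: ceiling width
    in m; k: empirical constant (assumed here)."""
    return k * (G * Q_conv / (RHO * CP * T_AMB * width)) ** (1.0 / 3.0)

# e.g. a ~2 MW car fire under the 30 m wide car park ceiling
print(f"{critical_velocity(Q_conv=2.0e6, width=30.0):.2f} m/s")
```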


Doctorat en Sciences de l'ingénieur

8. Betting, Benjamin. "Etudes Expérimentales et lois prédictives des foyers d'incendies." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR054/document.

Abstract:
Each year in France, compartment fires give rise to more than 88,000 interventions involving more than 15,000 people, including several hundred deaths and serious injuries. Today, during compartment fires, the decision-making of the rescue teams is based mainly on human judgement, the fruit of accumulated experience. However, a perfect knowledge of the situation, of its evolution over time and of the dangers that may appear is impossible. Studying the smoke is therefore of major interest: smoke conveys valuable information, notably on the onset of the thermal phenomena feared by firefighters. To carry out this study, an experimental cell made up of two maritime containers was installed on the fire training site of the Seine-Maritime fire brigade. This platform produces hot smoke in a so-called "real fire" configuration by means of a propane burner. The smoke dynamics in this large-scale experimental setup are analysed using a non-intrusive measurement technique, PIV (Particle Image Velocimetry). Wide-field PIV measurements are compared with LES (Large Eddy Simulation) simulations of the experiment using the Fire Dynamics Simulator (FDS). This dual numerical/experimental expertise is essential in this type of study, where the experimental data suffer from a lack of spatial and temporal resolution but nevertheless represent an important source of information necessary for validating the codes.
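At the core of any PIV measurement is the cross-correlation of interrogation windows between two successive frames; the generic FFT-based sketch below illustrates the principle and is not the processing chain used in the thesis.

```python
# Generic PIV displacement estimate: FFT cross-correlation of one
# interrogation window between two successive frames. Textbook
# algorithm only -- not the specific processing used in the thesis.
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel (dy, dx) shift that best aligns win_b with win_a,
    taken from the peak of their circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b),
                         s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold peaks in the upper half of each axis to negative shifts.
    return tuple(int(p - n) if p > n // 2 else int(p)
                 for p, n in zip(peak, a.shape))

rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))  # known motion
print(window_displacement(frame_a, frame_b))            # -> (3, -2)
```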
9. Covi, Patrick. "Multi-hazard analysis of steel structures subjected to fire following earthquake." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/313383.

Abstract:
Fires following earthquake (FFE) have historically produced enormous post-earthquake damage and losses in terms of lives, buildings and economic costs, as in the San Francisco earthquake (1906), the Kobe earthquake (1995), the Turkey earthquake (2011), the Tohoku earthquake (2011) and the Christchurch earthquakes (2011). Structural fire performance can worsen significantly because the fire acts on a structure already damaged by the seismic event. On these premises, the purpose of this work is the investigation of the experimental and numerical response of structural and non-structural components of steel structures subjected to fire following earthquake, to increase knowledge and provide a robust framework for hybrid fire testing and hybrid fire-following-earthquake testing. A partitioned algorithm to test a real case study with substructuring techniques was developed. The framework is developed in MATLAB and is based on the implementation of nonlinear finite elements to model the effects of earthquake forces and post-earthquake effects such as fire and thermal loads on structures. These elements should be able to capture geometrical and mechanical non-linearities to deal with large displacements. Two numerical validations of the partitioned algorithm were carried out, simulating two virtual hybrid fire tests and one virtual hybrid seismic test. Two sets of experimental tests in two different laboratories were performed to provide valuable data for the calibration and comparison of numerical finite element case studies reproducing the conditions used in the tests. Another goal of this thesis is to develop a fire-following-earthquake numerical framework, based on a modified version of the OpenSees software and several scripts developed in MATLAB, to perform probabilistic analyses of structures subjected to FFE. A new material class, namely SteelFFEThermal, was implemented to simulate steel behaviour under FFE events.
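The partitioned, substructured testing idea — a numerical substructure advancing the equations of motion while the tested substructure returns interface restoring forces — can be pictured with a generic staggered loop. The sketch below is purely conceptual: a surrogate bilinear spring stands in for the specimen, and nothing here reproduces the thesis's MATLAB framework.

```python
# Conceptual staggered coupling loop for hybrid testing: a numerical
# substructure advances the equation of motion while a second
# substructure (in a real test, the specimen; here a surrogate spring)
# returns the restoring force at the shared interface. Generic sketch.

def specimen_force(u):
    """Surrogate for the tested substructure: a bilinear spring that
    softens beyond a yield displacement (assumed behaviour)."""
    k_el, u_y, k_pl = 2.0e6, 0.01, 2.0e5      # N/m, m, N/m
    if abs(u) <= u_y:
        return k_el * u
    sign = 1.0 if u > 0 else -1.0
    return sign * (k_el * u_y + k_pl * (abs(u) - u_y))

m, c, dt = 1.0e3, 5.0e2, 1.0e-3               # mass, damping, time step
u_prev, u = 0.0, 0.0
for step in range(1000):
    p = 1.0e4 if step * dt < 0.1 else 0.0     # short force pulse (assumed)
    r = specimen_force(u)                      # the "measurement" exchange
    # explicit central-difference update of the numerical substructure
    u_next = (p - r - c * (u - u_prev) / dt) * dt**2 / m + 2 * u - u_prev
    u_prev, u = u, u_next
print(f"final interface displacement: {u:.5f} m")
```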
10. Alcasena Urdíroz, Fermín J. "Wildfire risk management in southern European landscapes: Towards a long-term comprehensive strategy." Doctoral thesis, Universitat de Lleida, 2019. http://hdl.handle.net/10803/667939.

Abstract:
Few large and destructive fires account for most negative impacts on socioeconomic and natural values in Mediterranean areas. As a result of increasing biomass accumulation on previously fine-grained cultural landscapes, these uncharacteristic events occurring under extreme weather conditions are resistant to suppression efforts due to massive showering embers, overwhelming fire intensities, and very high spread rates. Moreover, increasing wildland-urban interface areas represent a conditioning factor demanding protection and substantially increasing emergency management complexity. Ignition prevention and fire suppression policies alone prove ineffective in mitigating losses from contemporary fires. In this thesis I implemented a multiple-scale analytical framework to inform the decision-making of a wildfire risk management strategy aimed at creating fire-resilient landscapes, restoring the cultural fire regime, supporting safe and efficient fire suppression, and creating fire-adapted communities. By decomposing wildfire risk into the main causative factors, at scales related to the management capabilities of the different agents, from individual homeowners to regional governments, this dissertation attempts to provide a comprehensive solution for achieving those core goals over the mid-term in southern European Union regions. A fire simulation modeling approach was implemented to obtain the required risk causative factors or exposure metrics. Fire spread and behavior in large areas were modeled accounting for variable fire regimes in terms of seasonality, large-fire number, and spatial distribution. Expert-defined susceptibility relations or mortality models were then used to assess fire effects as potential economic losses to values at risk. Moreover, we used a transmission analysis to delineate community firesheds and assess fire exchange among neighboring municipalities. Fuels management is the main wildfire risk mitigation strategy at the landscape scale, and spatial optimization models were used to help in strategic landscape treatment design and to explore collocation opportunities under budgetary restrictions. Results were provided at appropriate operational scales to inform different wildfire management strategies. Exposure profiles and fine-scale risk assessment for individual housing structures and timber-stand forest values attempt to promote homeowners' involvement and demand forest managers' good practices, aiming to mitigate losses from fires ignited on the same site (treatment units) and on neighboring lands. Management efforts within planning areas, articulated as collaborative planning projects among various socioeconomic agents, include landscape fuel treatments at strategic locations reducing overall wildfire likelihood and fire intensity, landscape planning to exclude hazardous areas from urban development, community preparedness reducing social vulnerability, and municipal ordinances to reduce housing vulnerability. Treatment joint-production represents an opportunity in multi-functional Mediterranean forest ecosystems to arrange complex solutions. Regional-scale policy-making prioritizes, at the municipality level, the different management strategies such as ignition prevention programs, suppression resource pre-positioning, assignment of subsidies for fuel treatments, and law enforcement for managing fuels in the wildland-urban interface communities at highest risk.
The different papers were developed in various Mediterranean areas to highlight the applicability of the framework elsewhere.
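The risk decomposition underlying such frameworks is the standard expected-loss arithmetic: burn probability by intensity level, times the susceptibility (loss response) of the value at risk. A minimal sketch with invented numbers:

```python
# Expected net value change (a standard wildfire risk calculation):
# sum over fire-intensity levels of burn probability x fraction of the
# value lost at that intensity. Numbers are invented for illustration.
burn_prob = {"low": 0.020, "moderate": 0.008, "high": 0.002}    # annual
loss_fraction = {"low": 0.05, "moderate": 0.40, "high": 0.90}   # assumed response
value = 150_000.0   # e.g., a timber stand value in EUR (assumed)

expected_loss = sum(burn_prob[i] * loss_fraction[i] * value
                    for i in burn_prob)
print(f"expected annual loss ~ EUR {expected_loss:,.0f}")
```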
11. Launay, Emilie. "Modélisation inverse pour la dispersion atmosphérique de polluants suite à un incendie de grande ampleur à l'échelle urbaine." Electronic Thesis or Diss., Marne-la-vallée, ENPC, 2023. http://www.theses.fr/2023ENPC0047.

Abstract:
Past large-scale fires in urban areas, such as the Lubrizol warehouse fire or the Notre-Dame de Paris cathedral fire in France in 2019, have highlighted the need to develop means of assessing the risks posed by smoke plumes to the population and the environment. One of the challenges is to quickly provide the authorities with information on the areas impacted by the plume and the pollutant concentration levels to which people may have been exposed. Atmospheric dispersion modelling is a method used to assess the spread of pollutant concentrations in the atmosphere. By simulating the dispersion of toxic substances from accidental releases, it is possible to guide sampling strategies. For fires, the characteristics of the pollutant source can be determined using correlations depending on the thermokinetic properties of the fire. However, in the event of an accidental release, the emissions are a priori unknown, and simulations designed to analyse the behaviour of the smoke plume are then carried out with significant assumptions and uncertainties. If atmospheric concentration measurements are available, it is worth using an inverse modelling approach based on the joint use of these measurements and a dispersion model. Two methods based on the Bayesian inverse modelling framework are developed to find the source term of a large-scale fire by assimilating in-situ pollutant concentration measurements: a semi-Bayesian method and a Markov chain Monte Carlo Bayesian method are considered for the characterisation of the release. The source to be retrieved is described by a time-varying emission rate and an emission height. The latter, linked to the phenomenon of plume rise, is an important parameter for assessing the pollution impact in the vicinity of the fire. Two emission-height parametrisation strategies are studied. The first consists of finding the distributions associated with the release rates for all the predefined emission heights from forward modelling. The second is an inversion proposal, which involves inverting the emission height to obtain an associated release intensity. Moreover, several adjustments to the inverse methods are proposed to make them more robust, particularly with regard to the characterisation of ambient pollution levels. These inverse methods are applied in an Observing System Simulation Experiment (OSSE) corresponding to the Notre-Dame Cathedral fire in 2019 and in a real case study corresponding to a large warehouse fire in Aubervilliers, near Paris, in 2021.
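The Bayesian source-term estimation described above can be caricatured with a linear source-receptor model y = Hs + ε and a random-walk Metropolis sampler over windowed emission rates. The toy below is synthetic end to end — it stands in for, but is not, the semi-Bayesian and MCMC implementations of the thesis.

```python
# Toy Bayesian source-term inversion: receptors observe y = H @ s + noise,
# where H maps time-varying emission rates s (one per time window) to
# concentrations. A random-walk Metropolis sampler draws from the
# posterior under a Gaussian likelihood and a flat positivity prior.
import numpy as np

rng = np.random.default_rng(1)
n_win, n_obs = 4, 40
H = rng.uniform(0.0, 1.0, size=(n_obs, n_win))   # synthetic dispersion operator
s_true = np.array([5.0, 20.0, 10.0, 2.0])        # "unknown" emission rates
sigma = 0.5
y = H @ s_true + rng.normal(0.0, sigma, n_obs)

def log_post(s):
    if np.any(s < 0):                             # positivity prior
        return -np.inf
    r = y - H @ s
    return -0.5 * np.dot(r, r) / sigma**2

s = np.full(n_win, 10.0)                          # initial guess
samples = []
for it in range(20000):
    prop = s + rng.normal(0.0, 0.5, n_win)        # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(s):
        s = prop
    if it > 5000:                                 # discard burn-in
        samples.append(s)
print("posterior mean:", np.mean(samples, axis=0).round(2))
```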
12. Stender, Jan. "Snapshots in large-scale distributed file systems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16660.

Abstract:
Snapshots are present in many modern file systems, where they allow consistent on-line backups to be created, corruptions or inadvertent changes of files to be rolled back, and a record of changes to files and directories to be kept. While most previous work on file system snapshots refers to local file systems, modern trends like cloud and cluster computing have shifted the focus towards distributed storage infrastructures. Such infrastructures often comprise large numbers of storage servers, which presents particular challenges in terms of scalability, availability and failure tolerance. This thesis describes a snapshot algorithm for large-scale distributed file systems and its integration in XtreemFS, a scalable object-based file system for grid and cloud computing environments. The two building blocks of the algorithm are a version management scheme, which efficiently records versions of file content and metadata, and a scalable, failure-tolerant mechanism that aggregates specific versions in a snapshot. To overcome the lack of a global time in a distributed system, the algorithm implements a relaxed consistency model for snapshots, based on timestamps assigned by loosely synchronized server clocks. The main contributions of the thesis are: 1) a formal model of snapshots and snapshot consistency in distributed file systems; 2) the description of efficient schemes for the management of metadata and file content versions in object-based file systems; 3) the formal presentation of a scalable, fault-tolerant snapshot algorithm for large-scale object-based file systems; 4) a detailed description of the implementation of the algorithm as part of XtreemFS. An extensive evaluation shows that the proposed algorithm has no severe impact on user I/O, and that it scales to large numbers of snapshots and versions.
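The timestamp-based consistency model can be illustrated with a toy version store: every write carries a server-clock timestamp, and a snapshot at time t binds each file to its newest version no later than t − ε, where ε bounds the clock skew between loosely synchronized servers. A generic sketch, not XtreemFS code:

```python
# Toy illustration of relaxed snapshot consistency with loosely
# synchronized clocks: each file keeps timestamped versions, and a
# snapshot at time t sees, per file, the newest version with
# timestamp <= t - EPSILON. Generic sketch only.
from bisect import bisect_right

EPSILON = 0.050  # seconds of assumed maximum clock skew

class VersionedFile:
    def __init__(self):
        self._times, self._data = [], []   # kept sorted by timestamp

    def write(self, timestamp, content):
        i = bisect_right(self._times, timestamp)
        self._times.insert(i, timestamp)
        self._data.insert(i, content)

    def read_snapshot(self, t):
        """Newest version that is guaranteed older than snapshot t."""
        i = bisect_right(self._times, t - EPSILON)
        return self._data[i - 1] if i else None

f = VersionedFile()
f.write(10.000, "v1")
f.write(10.990, "v2")         # within EPSILON of the snapshot below
print(f.read_snapshot(11.0))  # -> 'v1': v2 may postdate the snapshot
```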
13. Leung, Andrew W. "Organizing, indexing and searching large-scale file systems." Diss., Digital Dissertations Database. Restricted to UC campuses, 2009. http://uclibs.org/PID/11984.

14. Cranch, Geoffrey Alan. "Large-scale remotely interrogated arrays of fibre-optic interferometric sensors and fibre lasers." Thesis, Heriot-Watt University, 2001. http://hdl.handle.net/10399/1197.

Abstract:
The development of fibre-optic interferometric sensor arrays for application in underwater acoustics has been an area of active research since the late 1970s. The technology has reached a level whereby prototype arrays have been successfully demonstrated in sea trials. However, the recent development of several new technologies may significantly increase the size and performance of these arrays. We demonstrate the potential increase in multiplexed array sizes using architectures that combine dense wavelength division multiplexing and time division multiplexing. These architectures also include erbium-doped fibre amplifiers for post-, pre-, inline and remote amplification, in order to increase the standoff distance between the array and the electronics unit. We also theoretically investigate the limitations imposed on the number of sensors that can be multiplexed by nonlinear transmission effects in the link fibre in the presence of high optical powers and multiple wavelengths. We also demonstrate novel DFB erbium-doped fibre lasers as optical sources. These sources exhibit linewidths significantly narrower than the semiconductor DFB lasers currently used in many sensor arrays, and thus may provide a significant improvement in sensor resolution. We investigate the intensity and frequency noise properties of these lasers and their modulation properties, and successfully develop intensity-noise and frequency-noise reduction techniques. We also investigate the potential of fibre-optic acoustic vector sensors and demonstrate fibre-optic flexural disk accelerometers. Finally, we demonstrate polymer-coated in-fibre Bragg gratings as pressure and temperature sensors and investigate polymer coatings as a means to increase the acoustic responsivity of fibre laser acoustic sensors.
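The multiplexing gain from combining DWDM and TDM is essentially multiplicative — wavelength channels times time slots. A back-of-envelope sizing sketch with assumed parameters (not figures from the thesis):

```python
# Back-of-envelope sensor count for a hybrid DWDM/TDM interferometric
# array: wavelength channels multiply time slots. Parameters assumed.
pulse_width_ns = 100          # interrogation pulse width
guard_ns = 50                 # dead time between returns (assumed)
rep_rate_khz = 50             # pulse repetition rate
wavelengths = 16              # DWDM channels (assumed)

frame_ns = 1e6 / rep_rate_khz                 # time between pulses, ns
time_slots = int(frame_ns // (pulse_width_ns + guard_ns))
print(f"{time_slots} TDM slots x {wavelengths} wavelengths "
      f"= {time_slots * wavelengths} sensors")
```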
15. Blom Västberg, Oskar. "Five papers on large scale dynamic discrete choice models of transportation." Doctoral thesis, KTH, Systemanalys och ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219882.

Abstract:
Travel demand models have long been used as tools by decision makers and researchers to analyse the effects of policies and infrastructure investments. The purpose of this thesis is to develop a travel demand model which: is sensitive to policies affecting the timing of trips and time-space constraints; is consistent with microeconomics; and consistently treats the joint choice of the number of trips to perform during a day as well as departure time, destination and mode for all trips. This is achieved using a dynamic discrete choice model (DDCM) of travel demand. The model further allows a joint treatment of within-day travelling and between-day activity scheduling, assuming that individuals are influenced by the past and consider the future when deciding what to do on a certain day. Paper I develops, and provides estimation techniques for, the daily component of the proposed travel demand model, and presents simulation results providing within-sample validation of the model. Paper II extends the model to allow for correlation in preferences over the course of a day using a mixed-logit specification. Paper III introduces a day-to-day connection by using an infinite-horizon DDCM. To allow estimation of the combined model, Paper III develops conditions under which sequential estimation can be used to estimate very large-scale DDCM models in situations where: the discrete state variable is partly latent but transitions are observed; the model repeatedly returns to a small set of states; and between these states there is no discounting, random error terms are i.i.d. Gumbel, and transitions in the discrete state variable are deterministic given a decision. Paper IV develops a dynamic discrete-continuous choice model for a household deciding on the number of cars to own, their fuel type and the yearly mileage of each car. It thus contributes to bridging the gap between discrete-continuous choice models and DDCMs of car ownership. Infinite-horizon DDCMs are commonly found in the literature and are used in, e.g., Papers III and IV of this thesis. It has been well established that the discount factor must be strictly less than one for such models to be well defined. Paper V shows that it is possible to extend the framework to discount factors greater than one, allowing DDCMs to describe agents that: maximize the average utility per stage (when there is no discounting); or value the future more than the present and thus prefer improving sequences of outcomes, implying that they take high costs early and reach a potential terminal state sooner than optimal.
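The workhorse of such DDCMs is the logsum recursion that i.i.d. Gumbel errors make closed-form: EV(s) = log Σₐ exp(u(s,a) + βEV(s′)). A minimal value-iteration sketch on a tiny synthetic instance — deterministic transitions given the decision, as in Paper III's conditions — and not the thesis's travel demand model:

```python
# Value iteration for a generic dynamic discrete choice model with
# i.i.d. Gumbel utility shocks: EV(s) = log(sum_a exp(u(s,a) + beta*EV(s'))).
# Tiny synthetic instance (3 states, 2 actions), invented numbers.
import numpy as np

n_s, beta = 3, 0.95
u = np.array([[1.0, 0.2], [0.5, 0.8], [0.0, 1.5]])   # u[s, a], assumed
nxt = np.array([[1, 2], [2, 0], [0, 1]])             # deterministic s' = nxt[s, a]

EV = np.zeros(n_s)
for _ in range(1000):                                 # fixed-point iteration
    v = u + beta * EV[nxt]                            # choice-specific values
    EV_new = np.log(np.exp(v).sum(axis=1))            # logsum over actions
    if np.max(np.abs(EV_new - EV)) < 1e-10:
        break
    EV = EV_new

probs = np.exp(u + beta * EV[nxt])
probs /= probs.sum(axis=1, keepdims=True)             # logit choice probabilities
print(EV.round(3), probs.round(3))
```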
16. Iwashita, Takeshi. "Study on Stabilization of Large-Scale Coal-Fired Linear MHD Generators." Kyoto University, 1997. http://hdl.handle.net/2433/77867.

17. Kettimuthu, Rajkumar. "Type- and Workload-Aware Scheduling of Large-Scale Wide-Area Data Transfers." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1437747493.

18. Battat, Jonathan. "A fine-grained geospatial representation and framework for large-scale indoor environments." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61278.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Includes bibliographical references (p. 110-112).
This thesis describes a system and method for extending the current paradigm of geographic information systems (GIS) to support indoor environments. It introduces features and properties of indoor multi-building environments that do not exist in other geographic environments or are not characterized in existing geospatial models, and proposes a comprehensive representation for describing such spatial environments. Specifically, it presents enhanced notions of spatial containment and graph topology for indoor environments, and extends existing geometric and semantic constructs. Furthermore, it describes a framework to: automatically extract indoor spatial features from a corpus of semi-structured digital floor plans; populate the aforementioned indoor spatial representation with these features; store the spatial data in a descriptive yet extensible data model; and provide mechanisms for dynamically accessing, mutating, augmenting, and distributing the resulting large-scale dataset. Lastly, it showcases an array of applications, and proposes others, which utilize the representation and dataset to provide rich location-based services within indoor environments.
by Jonathan Battat.
M.Eng.
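The graph topology described above can be pictured as spaces-as-nodes with portals (doors, stairways) as edges, over which routing is ordinary graph search. A hand-rolled sketch with invented room identifiers, not the MIT dataset:

```python
# Minimal indoor topology graph: spaces are nodes, portals (doors,
# stairways) are edges, and routing is plain breadth-first search.
# Rooms and connections are invented for illustration.
from collections import deque

portals = {
    "32-044": ["corridor-0"],                # space -> adjacent spaces
    "corridor-0": ["32-044", "stair-A", "lobby"],
    "stair-A": ["corridor-0", "corridor-1"],
    "corridor-1": ["stair-A", "32-144"],
    "lobby": ["corridor-0"],
    "32-144": ["corridor-1"],
}

def route(start, goal):
    """Shortest portal-count path between two spaces (BFS)."""
    prev, queue = {start: None}, deque([start])
    while queue:
        here = queue.popleft()
        if here == goal:
            path = []
            while here is not None:
                path.append(here)
                here = prev[here]
            return path[::-1]
        for nbr in portals[here]:
            if nbr not in prev:
                prev[nbr] = here
                queue.append(nbr)
    return None

print(route("32-044", "32-144"))
```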
19. Schnerch, David Alan. "Shear behavior of large-scale concrete beams strengthened with Fibre Reinforced Polymer, FRP, sheets." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ62842.pdf.

20. Chabbi, Charef. "VLSI NMOS hardware design of a linear phase FIR low pass digital filter." Ohio University / OhioLINK, 1985. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1183749814.

21. Xiao, Shucai. "Generalizing the Utility of Graphics Processing Units in Large-Scale Heterogeneous Computing Systems." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51845.

Full text
Abstract:
Today, heterogeneous computing systems are widely used to meet the increasing demand for high-performance computing. These systems commonly use powerful and energy-efficient accelerators to augment general-purpose processors (i.e., CPUs). The graphics processing unit (GPU) is one such accelerator. Originally designed solely for graphics processing, GPUs have evolved into programmable processors that can deliver massive parallel processing power for general-purpose applications. Using SIMD (Single Instruction, Multiple Data) components as building units, the current GPU architecture is well suited for data-parallel applications where the execution of each task is independent. With the delivery of programming models such as Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), programming GPUs has become much easier than before. However, developing and optimizing an application on a GPU is still a challenging task, even for well-trained computing experts. Such programming tasks will be even more challenging in large-scale heterogeneous systems, particularly in the context of utility computing, where GPU resources are used as a service. These challenges are largely due to the limitations in the current programming models: (1) there are no intra- and inter-GPU cooperative mechanisms that are natively supported; (2) current programming models only support the utilization of GPUs installed locally; and (3) to use GPUs on another node, application programs need to explicitly call application programming interface (API) functions for data communication. To reduce the mapping efforts and to better utilize the GPU resources, we investigate generalizing the utility of GPUs in large-scale heterogeneous systems with GPUs as accelerators. We generalize the utility of GPUs through the transparent virtualization of GPUs, which can enable applications to view all GPUs in the system as if they were installed locally. As a result, all GPUs in the system can be used as local GPUs. Moreover, GPU virtualization is a key capability to support the notion of "GPU as a service." Specifically, we propose the virtual OpenCL (or VOCL) framework for the transparent virtualization of GPUs. To achieve good performance, we optimize and extend the framework in three aspects: (1) optimize VOCL by reducing the data transfer overhead between the local node and remote node; (2) propose GPU synchronization to reduce the overhead of switching back and forth if multiple kernel launches are needed for data communication across different compute units on a GPU; and (3) extend VOCL to support live virtual GPU migration for quick system maintenance and load rebalancing across GPUs. With the above optimizations and extensions, we thoroughly evaluate VOCL along three dimensions: (1) show the performance improvement for each of our optimization strategies; (2) evaluate the overhead of using remote GPUs via several microbenchmark suites as well as a few real-world applications; and (3) demonstrate the overhead as well as the benefit of live virtual GPU migration. Our experimental results indicate that VOCL can generalize the utility of GPUs in large-scale systems at a reasonable virtualization and migration cost.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
23

Corrales, Duque Carolina. "Population Genetic Structure of Black Grouse (Tetrao tetrix) : From a Large to a Fine Scale Perspective." Doctoral thesis, Uppsala universitet, Institutionen för ekologi och genetik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-150117.

Full text
Abstract:
Black grouse (Tetrao tetrix) is a bird species with a lek mating system found in the Palearctic boreal taiga. It is assumed to have a continuous distribution across Scandinavia and Siberia, whereas in Central Europe it has declined during the last decades. The primary objective of this thesis was to obtain a deeper understanding of the history, systematic classification and genetic structure of black grouse on different geographical scales using microsatellites and control region (CR) mtDNA sequences. I determined how much the mating system, habitat fragmentation and historical population processes have influenced the partitioning of genetic diversity in this species. Phylogeographical results are consistent with a demographic population expansion, and the patterns of postglacial dispersal suggest that a glacial refugium was located somewhere in central Asia, from which black grouse spread to Europe following the retreat of the glacial ice sheets. I suggest that the two European black grouse subspecies, T. t. tetrix and T. t. britannicus, correspond to only one subspecies, T. t. tetrix, and that this lineage has diverged from T. t. viridanus, a subspecies found in Kazakhstan. The British population is significantly divergent from the remaining Eurasian samples for microsatellites but not for mtDNA. Therefore, it should be regarded as a separate Management Unit and not as a subspecies. Furthermore, British black grouse occur in three independent genetic units, corresponding to Wales, northern England/southern Scotland and northern Scotland. There was also genetic structure within Sweden. Habitat fragmentation is the main cause of population genetic structure in southern Swedish black grouse. In contrast, low levels of genetic differentiation and high connectivity were found in northern Sweden due to female-biased dispersal. On a finer geographical scale, I found genetic differences between leks due to a mixture of related and unrelated individuals within leks. However, mean relatedness values hardly differed from zero. Some leks were similar to one another, and I interpret this as a result of variation in local reproductive success and philopatry. These factors would cause genetic structuring, but this by itself would not reveal that kin selection is operating within black grouse leks.
APA, Harvard, Vancouver, ISO, and other styles
24

Stender, Jan [Verfasser], Alexander [Akademischer Betreuer] Reinefeld, Miroslaw [Akademischer Betreuer] Malek, and Guillaume [Akademischer Betreuer] Pierre. "Snapshots in large-scale distributed file systems / Jan Stender. Gutachter: Alexander Reinefeld ; Miroslaw Malek ; Guillaume Pierre." Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://d-nb.info/1030313644/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Barrias, António. "Development of optical fibre distributed sensing for the structural health monitoring of bridges and large scale structures." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/666240.

Full text
Abstract:
In this doctoral thesis it is proposed to research and assess the performance of distributed optical fiber sensors (DOFS), more specifically an optical backscattered reflectometry (OBR) based system, for the structural health monitoring (SHM) of bridges and large-scale structures. This is a relatively recent technology that has demonstrated great promise for monitoring applications in a wide range of fields but, due to its novelty, still presents several uncertainties which prevent its use in a more systematic and efficient way in civil engineering infrastructures. This is even more evident and relevant in the case of the application of this sensing technique to concrete structures. In this way, this thesis aims to continue and further analyse this topic, following the initial applications of the OBR system as a possible alternative/complementary monitoring tool in concrete structures. Therefore, in the present thesis, after an initial and thorough literature review on the use of DOFS in civil engineering applications, a set of experiments and analyses is planned and carried out. Firstly, different laboratory experimental campaigns are devised in which multiple aspects of the instrumentation of DOFS technology in civil engineering applications are assessed and scrutinized. Consequently, the study of new implementation methods and the comparison and performance analysis of different bonding adhesives and spatial resolutions are performed by conducting load tests on reinforced concrete beam elements instrumented with OBR DOFS technology. Moreover, the long-term reliability of this sensing typology is also assessed by conducting a fatigue load test on two additional reinforced concrete beams. Afterwards, the use of the OBR system is assessed for application in two real-world structures in Barcelona, Spain. The first application corresponds to previous monitoring work conducted in a historical masonry building and UNESCO World Heritage Site (the Hospital de Sant Pau), which was subjected to rehabilitation works and where the collected data was analysed and interpreted in this thesis. The second real-world application is an urban prestressed concrete viaduct that was exposed to major renovation actions, which included the widening of its deck and the introduction of new steel elements on the improved pedestrian sidewalks. This second application was conducted over a relatively extended period of time, spanning from early summer to deep winter, so that important thermal variations affected the performance of the instrumented OBR system and had to be compensated for. Finally, taking into account the previous points, several conclusions are obtained related to the capabilities and limitations of this particular type of optical sensing system in concrete structures. The advantages and disadvantages of using different types of bonding adhesives, implementation methodologies and spatial resolutions are described. Additionally, the performance of this technology in real-world conditions is studied and characterized.
APA, Harvard, Vancouver, ISO, and other styles
26

Erez, Giacomo. "Modélisation du terme source d'incendie : montée en échelle à partir d'essais de comportement au feu vers l'échelle réelle : approche "modèle", "numérique" et "expérimentale"." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0189.

Full text
Abstract:
Numerical simulations can provide valuable information to fire investigators, but only if the fire source is precisely defined. This can be done through full- or small-scale testing. The latter is often preferred because these tests are easier to perform, but their results have to be extrapolated in order to represent full-scale fire behaviour. Various approaches have been proposed to perform this upscaling. An example is pyrolysis models, which involve a detailed description of condensed-phase reactions. However, these models are not ready yet for investigation applications. This is why another approach was chosen for the work presented here, employing a heat transfer model: the prediction of the mass loss rate (MLR) for a material is determined based on a heat balance. This principle explains the two-part structure of this study: first, a detailed characterisation of heat transfers is performed; then, the influence of these heat transfers on thermal decomposition is studied. The first part focuses on thermal radiation because it is the leading mechanism of flame spread. Flame radiation was characterised for several fuels (kerosene, diesel, heptane, polyurethane foam and wood) and many fire sizes (from 0.3 m up to 3.5 m wide). Measurements included visible video recordings, multispectral opacimetry and infrared spectrometry, which allowed the determination of a simplified flame shape as well as its emissive power. These data were then used in a model (Monte-Carlo method) to predict incident heat fluxes at various locations. These values were compared to the measurements and showed a good agreement, thus proving that the main phenomena governing flame radiation were captured and reproduced, for all fire sizes. Because the final objective of this work is to provide a comprehensive fire simulation tool, an existing software package, namely Fire Dynamics Simulator (FDS), was evaluated regarding its ability to model radiative heat transfers. This was done using the data and knowledge gathered before, and showed that the code could predict incident heat fluxes reasonably well. It was thus chosen to use FDS and its radiation model for the rest of this work. The second part aims at correlating thermal decomposition to thermal radiation. This was done by performing cone calorimeter tests on polyurethane foam and using the results to build a model which allows the prediction of MLR as a function of time and incident heat flux. Larger tests were also performed to study flame spread on top of and inside foam samples, through various measurements: video processing, temperature analysis and photogrammetry. The results suggest that using small-scale data to predict full-scale fire behaviour is a reasonable approach for the scenarios being investigated. It was thus put into practice using FDS, by modifying the source code to allow for the use of a thermal model, in other words defining the fire source based on the model predicting MLR as a function of time and incident heat flux. The results of the first simulations are promising, and predictions for more complex geometries will be evaluated to validate this method.
APA, Harvard, Vancouver, ISO, and other styles
27

Verrecht, Bart. "Optimisation of a hollow fibre membrane bioreactor for water reuse." Thesis, Cranfield University, 2010. http://dspace.lib.cranfield.ac.uk/handle/1826/6779.

Full text
Abstract:
Over the last two decades, implementation of membrane bioreactors (MBRs) has increased due to their superior effluent quality and low plant footprint. However, they are still viewed as a high-cost option, both with regard to capital and operating expenditure (capex and opex). The present thesis extends the understanding of the impact of design and operational parameters of membrane bioreactors on energy demand, and ultimately whole-life cost. A simple heuristic aeration model based on a general algorithm for flux vs. aeration shows the benefits of adjusting the membrane aeration intensity to the hydraulic load. It is experimentally demonstrated that a lower aeration demand is required for sustainable operation when comparing 10:30 to continuous aeration, with associated energy savings of up to 75%, without being penalised in terms of the fouling rate. The applicability of activated sludge modelling (ASM) to MBRs is verified on a community-scale MBR, resulting in accurate predictions of the dynamic nutrient profile. Lastly, a methodology is proposed to optimise the energy consumption by linking the biological model with empirical correlations for energy demand, taking into account the impact of high MLSS concentrations on oxygen transfer. The determining factors for costing of MBRs differ significantly depending on the size of the plant. Operational cost reduction in small MBRs relies on process robustness with minimal manual intervention to suppress labour costs, while energy consumption, mainly for aeration, is the major contributor to opex for a large MBR. A cost sensitivity analysis shows that the other main factors influencing the cost of a large MBR, both in terms of capex and opex, are membrane costs and replacement interval, future trends in energy prices, sustainable flux, and the average plant utilisation, which depends on the amount of contingency built in to cope with changes in the feed flow.
APA, Harvard, Vancouver, ISO, and other styles
28

Plaza, Piotr. "The development of a slagging and fouling predictive methodology for large scale pulverised boilers fired with coal/biomass blends." Thesis, Cardiff University, 2013. http://orca.cf.ac.uk/58453/.

Full text
Abstract:
This dissertation deals with the development of a co-firing advisory tool capable of predicting the effects of biomass co-firing with coal on the ash deposition and thermal performance of pulverised fuel (pf) boilers. The developed predictive methodology integrates a one-dimensional zone model of a pf boiler, to determine the heat transfer conditions and midsection temperature profile throughout the boiler, with a phase equilibrium–based ash deposition mechanistic model that utilises FactSage™ thermo-chemical data. The designed model enables advanced thermal analysis of a boiler for investigating the impact of fuel switching on boiler performance, including the ash deposition effects. With respect to the ash deposition predictive model, an improved phase equilibrium approach, adjusted to pf boiler conditions, was proposed that allows the assessment of the slagging and high-temperature fouling severity caused by the deposition of sticky ash, as well as of low-temperature fouling due to salt condensation. An additional ash interaction phase equilibrium module was designed in order to estimate the interactions occurring in the furnace between alumino-silicate fly ash and alkali metals originating from biomass. Based on the developed model, new slagging/fouling indices were defined which take into account the ash burden, the slag ratio in the fly ash approaching the tube banks, and the slag viscosity corresponding to the conditions within the pf boiler. The developed model was validated against field observation data derived from a semi-industrial pf coal-fired furnace as well as a large-scale 518 MWe pf boiler fired with a blend of imported bituminous coals and a biomass mix composed of various-quality biomass/residues, such as meat and bone meal, wood pellets and biomass mix pellets produced on-site; the power plant typically fired up to 20wt% coal substitution. Good agreement has been found in the comparison between predictions and slagging/fouling observations. Based on the validated model, fuel blend optimisation was performed for co-firing shares of up to 30wt%, revealing highly non-additive ash behaviour of the investigated fuel blends.
APA, Harvard, Vancouver, ISO, and other styles
29

Leveau, Valentin. "Représentations d'images basées sur un principe de voisins partagés pour la classification fine." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT257/document.

Full text
Abstract:
This thesis focuses on the issue of fine-grained classification, which is a particular classification task where classes may be visually distinguishable only from subtle localized details and where background often acts as a source of noise. This work is mainly motivated by the need to devise finer image representations to address such fine-grained classification tasks by encoding enough localized discriminant information, such as the spatial arrangement of local features. To this aim, the main research line we investigate in this work relies on spatially localized similarities between images computed thanks to efficient approximate nearest neighbor search techniques and localized parametric geometry. The main originality of our approach is to embed such spatially consistent localized similarities into a high-dimensional global image representation that preserves the spatial arrangement of the fine-grained visual patterns (contrary to traditional encoding methods such as BoW, Fisher or VLAD vectors). In a nutshell, this is done by considering all raw patches of the training set as a large visual vocabulary and by explicitly encoding their similarity to the query image. In more detail: the first contribution proposed in this work is a classification scheme based on a spatially consistent k-nn classifier that relies on pooling similarity scores between local features of the query and those of the similar retrieved images in the vocabulary set. As this set can be composed of many local descriptors, we propose to scale up our approach by using approximate k-nearest-neighbors search methods. Then, the main contribution of this work is a new aggregation-based explicit embedding derived from a newly introduced match kernel based on shared nearest neighbors of localized feature vectors combined with local geometric constraints. The originality of this new similarity-based representation space is that it directly integrates spatially localized geometric information in the aggregation process. Finally, as a third contribution, we proposed a strategy to drastically reduce, by up to two orders of magnitude, the high dimensionality of the previously introduced over-complete image representation while still providing competitive image classification performance. We validated our approaches by conducting a series of experiments on several classification tasks involving rigid objects such as FlickrsLogos32 or Vehicles29, but also on tasks involving finer visual knowledge such as FGVC-Aircrafts, Oxford-Flower102 or CUB-Birds200. We also demonstrated significant results on fine-grained audio classification tasks, such as the LifeCLEF 2015 bird species identification challenge, by proposing a temporal extension of our image representation. Finally, we notably showed that our dimensionality reduction technique, used on top of our representation, resulted in a highly interpretable visual vocabulary composed of the most representative image regions for the different visual concepts of the training set.
APA, Harvard, Vancouver, ISO, and other styles
30

Michel, Dian. "Implementation and optimization of a large-scale ENU mouse mutagenesis screen and characterisation of five ENU-induced limb mutant lines." kostenfrei, 2009. http://mediatum2.ub.tum.de/node?id=737365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Coulston, John Wesley Riitters Kurt Smith Gretchen Cole. "Large-scale analysis of sustainable forest management indicators assessments of air pollution, forest disturbance, and biodiviersity [sic] /." Connect to this title online, 2004. http://www.lib.ncsu.edu/theses/available/etd-03282004-103433/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Jones, Simon Richard. "Investigation into the wafer-scale integration of fine-grain parallel processing computer systems." Thesis, Brunel University, 1986. http://bura.brunel.ac.uk/handle/2438/11135.

Full text
Abstract:
This thesis investigates the potential of wafer-scale integration (WSI) for the implementation of low-cost fine-grain parallel processing computer systems. As WSI is a relatively new subject, there was little work on which to base investigations. Indeed, most WSI architectures existed only as untried and sometimes vague proposals. Accordingly, the research strategy approached this problem by identifying a representative WSI structure and architecture on which to base investigations. An analysis of architectural proposals identified associative memory as a general-purpose parallel processing component used in a wide range of WSI architectures. Furthermore, this analysis provided a set of WSI-level design requirements to evaluate the suitability of different architectures as research vehicles. The WSI-ASP (WASP) device, which has a large associative memory as its main component, is shown to meet these requirements and hence was chosen as the research vehicle. Consequently, this thesis addresses WSI potential through an in-depth investigation into the feasibility of implementing a large associative memory for the WASP device that meets the demanding technological constraints of WSI. Overall, the thesis concludes that WSI offers significant potential for the implementation of low-cost fine-grain parallel processing computer systems. However, due to the dual constraints of thermal management and the area required for the power distribution network, power density is a major design constraint in WSI. Indeed, it is shown that WSI power densities need to be an order of magnitude lower than VLSI power densities. The thesis demonstrates that, for associative memories at least, VLSI designs are unsuited to implementation in WSI. Rather, it is shown that WSI circuits must be closely matched to the operational environment to assure suitable power densities. These circuits are significantly larger than their VLSI equivalents. Nonetheless, the thesis demonstrates that by concentrating on the most power-intensive circuits, it is possible to achieve acceptable power densities with only a modest increase in area overheads.
APA, Harvard, Vancouver, ISO, and other styles
33

Michel, David Daniel. "Linear-cavity tunable fibre lasers employing an Opto-VLSI processor and a MEMS-based device." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2012. https://ro.ecu.edu.au/theses/520.

Full text
Abstract:
This thesis proposes and demonstrates experimentally two novel linear-cavity tunable fibre lasers employing an erbium-doped fibre (EDF) in conjunction with an Opto-VLSI processor and a MEMS-based device for wavelength selection. The Opto-VLSI processor and the MEMS-based device, along with an optical collimator, a Bragg grating plate and an optical lens, enable the realisation of an optical filter for continuous tuning of wavelengths over the amplified spontaneous emission (ASE) range of the EDF. We also propose the use of a section of un-pumped EDF as a saturable absorber (SA), which suppresses noise spikes caused by the high optical pumping power. Experimental results show that, by optimising the length of the SA, a single-wavelength, high-power laser signal can be achieved. In addition, we experimentally demonstrate that the performance of the proposed linear-cavity tunable fibre lasers is better than that of their ring-cavity counterparts. Specifically, we show that linear-cavity tunable fibre lasers can achieve higher output power, a larger side mode rejection ratio (SMRR) and narrower laser linewidth than ring-cavity tunable fibre lasers.
APA, Harvard, Vancouver, ISO, and other styles
34

Vankeuren, Jody L. "Parasites Predators and Symbionts." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1619475426952694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Donnelly, Andrea. "Mind, Body, and Handwoven Cloth." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2104.

Full text
Abstract:
My work explores the nature of individual perception, and the side of our lives lived entirely within our minds. I do this through the lens of self-reflection, examining the images of my own mental life and translating them into delicately handwoven cloth. These images and their structures become sensory experiences of the intangible, and a meeting place for my internal life and that of my viewer. The cloth I weave is simultaneously familiar and strange. Through woven surface and imbedded imagery, I attempt to illuminate the deep emotions that necessarily isolate us from each other, and the shared experiences of our physical beings, which connect us. The quiet, ritualistic act of weaving expresses an overlapping of mental and physical space: the resulting cloth bears within each line of warp and weft the metaphor of that process.
APA, Harvard, Vancouver, ISO, and other styles
36

Corrêa, da Silva Rodrigo [Verfasser], and Hans Joachim [Akademischer Betreuer] Krautz. "Investigation of pulverized, pre-dried lignite combustion under oxy-fired conditions in a large-scale laboratory furnace / Rodrigo Corrêa da Silva. Betreuer: Hans Joachim Krautz." Cottbus : Universitätsbibliothek der BTU Cottbus, 2013. http://d-nb.info/1032171162/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Gashgari, Reema. "Exploring the implications of corporate governance practices and frameworks for large-scale business organisations : a case study on the Kingdom of Saudi Arabia." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/14347.

Full text
Abstract:
In 2006 the Kingdom of Saudi Arabia (KSA) introduced new legislation related to corporate governance (CG). Initial evaluation by the World Bank three years later showed relatively modest implementation of the regulations. This thesis investigates the extent to which this has been adopted over the past ten years. Saudi business has become more globalized, and a more standardised approach to CG is naturally expected by international partners and investors who must themselves justify investment. This research expands the existing literature on CG by examining the progress of countries with developing economies and relatively weak or new histories of regulated CG. This thesis explores the extent and form of the uptake of the newest generation of CG regulations, the existing roadblocks and the general current attitudes to corporate governance in KSA, examining the extent of KSA company compliance with KSA corporate governance regulations, the reasons for non-compliance where it exists, and any relevant deficits in the 2006 legislation with respect to international best practice. This is investigated through a series of interviews and surveys with major Saudi organizations, as well as analysis of secondary information. The mixed-methods approach of quantitative and qualitative data analysis was selected as providing a means to generate both benchmarking data (i.e. quantitative) and further insight into obstacles to further adoption (i.e. qualitative). As the basis for the investigation, questions are structured around four basic pillars of corporate governance: transparency, stakeholder value, responsibility and fairness. The linkage of these factors with organisational structure, decision-making and the overall image of the firm within the industry is combined with an examination of how CG affects Saudi business expansion and investments, particularly in relation to how parties from other countries perceive the governance of a company. This perception of governance may condition their views concerning, for example, partnering with and investing in that company. The secondary data relates to the Saudi Arabian Monetary Agency (SAMA), Sanabil Investments and the Saudi Arabia Basic Industries Corporation (SABIC). The qualitative data were taken from interviews conducted with fifteen top managers of large-scale organisations. The quantitative data were collected from three organisations: Almarai, Saudi Aramco and Albaik. The overall results of the qualitative and secondary analyses showed that CG plays a vital role in business development. Quantitative analysis supported the idea that transparency, stakeholder value and corporate image are the main attributes of CG in a Saudi context, with statistical analysis indicating that they are essential to company access to private investment and market liquidity. The overall findings indicate KSA's need to improve its CG standards further, and that, whilst benchmarking of government-supported institutions such as SAMA and SABIC would be of assistance, the KSA government could play a pro-active role in encouraging businesses to adopt best international corporate governance practices.
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Ligang [Verfasser], George [Akademischer Betreuer] Tsatsaronis, André [Akademischer Betreuer] Bardow, Yongping [Akademischer Betreuer] Yang, George [Gutachter] Tsatsaronis, and André [Gutachter] Bardow. "Thermo-economic evaluation, optimization and synthesis of large-scale coal-fired power plants / Ligang Wang ; Gutachter: George Tsatsaronis, André Bardow ; George Tsatsaronis, André Bardow, Yongping Yang." Berlin : Technische Universität Berlin, 2016. http://d-nb.info/1156179602/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Gómez-Navarro, Laura. "Techniques de débruitage d'image pour améliorer l'observabilité de la fine échelle océanique par SWOT." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALU024.

Full text
Abstract:
Sea Surface Height (SSH) observations describing scales in the range 10 - 100 km are crucial to better understand energy transfers across scales in the open ocean and to quantify vertical exchanges of heat and biogeochemical tracers. The Surface Water Ocean Topography (SWOT) mission is a new wide-swath altimetric satellite which is planned to be launched in 2022. SWOT will provide information on SSH at a kilometric resolution, but uncertainties due to various sources of errors will challenge our capacity to extract the physical signal of structures below a few tens of kilometers. Filtering SWOT noise and errors is a key step towards an optimal interpretation of the data. The aim of this study is to explore image de-noising techniques to assess the capabilities of the future SWOT data to resolve the oceanic fine scales. Pseudo-SWOT data are generated with the SWOT simulator for Ocean Science, which uses as input the SSH outputs from high-resolution Ocean General Circulation Models (OGCMs). Several de-noising techniques are tested, to find the one that renders the most accurate SSH and SSH-derivative fields while preserving the magnitude and shape of the oceanic features present. The techniques are evaluated based on the root mean square error, spectra and other diagnostics. In Chapter 3, the pseudo-SWOT data for the Science phase are analyzed to assess the capabilities of SWOT to resolve the meso- and submesoscale in the western Mediterranean. A Laplacian diffusion de-noising technique is implemented, allowing the recovery of SSH, geostrophic velocity and relative vorticity down to 40 - 60 km. This first step allowed the mesoscale to be adequately observed, but room is left for improvement at the submesoscale, especially in better preserving the intensity of the SSH signal. In Chapter 4, another de-noising technique is explored and implemented in the same region for the satellite's fast-sampling phase. This technique is motivated by recent advances in data assimilation techniques to remove spatially correlated errors based on SSH and its derivatives. It aims at retrieving accurate SSH derivatives, by recovering their structure and preserving their magnitude. A variational method is implemented which can penalize the SSH derivatives of first, second or third order, or a combination of them. We find that the best parameterization is based on a second-order penalization, and we find the optimal parameters of this setup. Thanks to this technique the wavelengths resolved by SWOT in this region are reduced by a factor of 2, whilst preserving the magnitude of the SSH fields and their derivatives. In Chapter 5, we investigate the finest spatial scale that SWOT could resolve after de-noising in several regions and seasons and using different OGCMs, in order to document the variety of regimes that SWOT will sample. The de-noising algorithm performs well even in the presence of intense unbalanced motions, and it systematically reduces the smallest resolvable wavelength. Advanced de-noising algorithms also allow the reliable reconstruction of SSH gradients (related to geostrophic velocities) and second-order derivatives (related to geostrophic vorticity).
Our results also show that a significant uncertainty remains about SWOT's finest resolved scale in a given region and season, because of the large spread in the level of variance predicted among our high-resolution ocean model simulations. The de-noising technique developed, implemented and tested in this doctoral thesis allows the recovery, in some cases, of SWOT spatial scales as low as 15 km. This method is a very useful contribution to achieving the objectives of the SWOT mission, and the results will help better understand the ocean's dynamics and oceanic features and their role in the climate system.
APA, Harvard, Vancouver, ISO, and other styles
40

Mohebbi-Kalhori, Davod. "Le développement et la modélisation numérique d'un bioréacteur pour l'ingénierie des tissus de grande masse." Thèse, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/1909.

Full text
Abstract:
The present thesis comprises two major parts, an experimental and a numerical study, conducted in four distinct steps: (1) design, construction and evaluation of the control and hydrodynamics of a bioreactor system; (2) visualization of fluid flow perfusion in the hollow fibre membrane bioreactor (HFMB) using a noninvasive biomedical imaging technique, i.e. positron emission tomography (PET); (3) development of a mathematical model for analyzing a hybrid hollow fibre membrane bioreactor (hHFMB); and (4) development of a dynamic, two-porous-media model for analyzing the HFMB with the aid of computational fluid dynamics (CFD), specifically for bone tissue engineering applications. The experimental part covers steps 1 and 2. In step 1, the flow perfusion bioreactor system was designed and constructed, and experimental evaluations of its hydrodynamics and control were performed. In this system, the mean pressure, mean flow rate, and the frequency and waveform of the pulsatile pressure and flow rate can be modulated and controlled over time to simulate both physiological and non-physiological conditions. The temperature, dissolved oxygen and pH can also be controlled. This bioreactor system can be applied to a variety of scaffold configurations, geometries and sizes, as the cell/tissue culture chamber is adjustable in length. The system is autoclavable and compatible with noninvasive medical imaging techniques. The inlet and outlet manifolds of the bioreactor were designed according to data obtained from CFD simulation of the flow distribution, in order to achieve highly uniform flow perfusion. In the second step, PET was proposed for the very first time, and a small-animal PET system was used to obtain new information about steady and pulsatile flow patterns in the HFMB for tissue engineering applications. The non-homogeneous tracer distribution found with PET imaging implies the occurrence of inefficient regions with respect to mass transfer. Under steady inlet flow conditions, a non-uniform distribution of radioactive tracer was obtained. In contrast, pulsatile inlet flow generated more uniform perfusion than steady flow. Further, it was found that, in the case of pulsatile flow, the accumulation of the tracer within the bioreactor was markedly less than for steady inlet flow under the same conditions. Therefore, on the one hand these findings have the potential to improve bioreactor design, and on the other hand they open a very important route to employing PET in developing bioreactors for tissue engineering applications. The numerical part comprises steps 3 and 4, in which the numerical study was performed for 3-D bone tissue growth in the HFMB as a case study for large-scale tissue culture. In step 3, the feasibility of utilizing the newly proposed hHFMB for the growth of mesenchymal stem cells (MSCs) to form bone tissue was investigated using numerical simulations. To this aim, a mathematical model using a CFD code was developed to optimize the design and operating parameters of the hHFMB for the growth of MSCs. The volume averaging method was used to formulate mass balances for the nutrients and the cells in the porous extracapillary space (ECS) of the hHFMB. The cell-scaffold construct in the ECS of the hollow fibres and the membrane wall were treated as porous media. Cell-volume-fraction-dependent porosity, permeability and mass diffusivity were used in the model.
The simulations allowed the simultaneous prediction of the nutrient distribution and the nutrient-dependent cell volume fraction. In addition, the model was used to study the effects of the operating and design parameters on nutrient distribution and cell growth within the bioreactor. The modeling results demonstrated that the fluid dynamics within the ECS, together with the transport properties and uptake rates in the hHFMB, were sufficient to support the MSCs required for clinical-scale bone tissue growth in vitro, and made it possible to overcome the nutrient-supply difficulties caused by high cell density and scaffold size. In step 4, the new dynamic, two-porous-media model was used to analyze nutrient-dependent MSC growth for bone tissue formation in the HFMB. In this model, the hollow fibre scaffold within the bioreactor was treated as a porous domain consisting of the porous lumen region available for fluid flow and the porous ECS region, filled with a collagen gel containing cells, for the growing tissue mass. Furthermore, the contributions of several design and process parameters that enhance the performance of the bioreactor were studied, and the dynamics of cell growth and of the oxygen and glucose distributions were quantitatively analyzed. The information obtained can be used for better bioreactor design, for determining suitable operating conditions and for scaling up the bioreactor for the engineering of clinical-scale bone tissue.--Abstract shortened by UMI.
APA, Harvard, Vancouver, ISO, and other styles
41

Andrae, Jannis. "The handling of an extraordinary economic and financial effort in a backward country The Russian Empire's efforts to sustain the economic and financial struggle during the First World War and the importance of the preceding large-scale reforms in economy and society /." St. Gallen, 2009. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/04607917101/$FILE/04607917101.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Chao Yuan. "Analyse quantitative des propriétés mécaniques de fraises dentaires rotatives en NiTi et étude de la fabrication de larges microstructures par polymérisation induite à deux photons." Phd thesis, Université de Grenoble, 2014. http://tel.archives-ouvertes.fr/tel-01071805.

Full text
Abstract:
One third of dental emergencies and a high percentage of toothaches are endodontics-related. Rotary instruments used in endodontic treatment can break inside the root canal due to material fatigue. Once broken, extracting the fractured fragment from the canal is a difficult and tedious task for both patient and dentist. Therefore, warning of an imminent fracture during clinical use, or developing good strategies to increase the instruments' mechanical properties, would greatly help avoid medical/legal complications. The research comprises two parts. The first part established a standard test platform simulating several root-canal parameters and proposes a series of strategies to improve the fatigue life and mechanical properties of the material. A monitoring system using fibre Bragg grating (FBG) sensors was also attempted; the reason for using FBGs is their small size, which makes them very promising for integration into the handpiece of endodontic equipment. In the present work, by picking up the stress wave and analyzing it with a fast Fourier transform (FFT), we can reveal the energy variation and the frequency-shifting phenomenon at certain characteristic frequencies. It is hoped that, with this information, we can avoid or mitigate the occurrence of unexpected fractures. As for the fatigue tests, the data showed that fatigue resistance can be improved by certain heat treatments or by applying a reciprocating rotation method. This phenomenon may be closely related to the phase composition of the Ni-Ti alloy, and the maximum tensile stress is reduced when reciprocating motion is applied. Studies have shown that the higher the martensite phase content in the file, the longer the fatigue life that can be achieved; however, a trade-off with the cutting efficiency of the file may be necessary. For this issue, cryogenic treatment and heat treatment can be combined to obtain better fatigue resistance without compromising cutting efficiency. The second part is the fabrication of high-resolution, large-size, new-type endodontic files using the two-photon polymerization (TPP) technique. The work was done at the LiPhy laboratory of Université Joseph Fourier, France. Unlike traditional TPP fabrication, whose product size was limited by low laser power, repetition rate, and piezo-driven stages, we use Ormocer resin and a powerful 130 kHz, 1 W, 532 nm laser with a motorized XY stage to fabricate a large 800 µm biocompatible cell scaffold and a 1.2 cm tall file. Also, to improve the quality of the TPP product, a laser-power correction approach was attempted. During TPP fabrication, the shape of the laser focus changes as the fabrication surface moves in the z direction, so more power is needed to keep the voxel size identical at different z. To correct this defect, a method for laser power correction and a formula for the corrected power are proposed; the formula is derived from the concept of maintaining identical exposure conditions.
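The FFT-based monitoring idea in this abstract can be sketched simply: pick up a stress-wave signal from the FBG sensor and track the dominant characteristic frequency, whose drift or energy change would flag fatigue damage. In the sketch below, the signal, sampling rate, and frequencies are invented for illustration and are not the thesis's measurements.

```python
import numpy as np

fs = 10_000                      # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
healthy = np.sin(2 * np.pi * 300 * t)        # nominal 300 Hz peak (assumed)
worn = 0.7 * np.sin(2 * np.pi * 285 * t)     # shifted, lower-energy peak

def dominant_frequency(signal, fs):
    """Return (peak frequency in Hz, peak amplitude) via an FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return freqs[np.argmax(spectrum)], spectrum.max()

for name, sig in [("healthy", healthy), ("worn", worn)]:
    f, amp = dominant_frequency(sig, fs)
    print(f"{name}: peak at {f:.0f} Hz, amplitude {amp:.0f}")
```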
APA, Harvard, Vancouver, ISO, and other styles
43

Tsai, Ming-ju, and 蔡銘儒. "Development and Application of a Large Scale Fire Test Facility." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/08529963678468580003.

Full text
Abstract:
PhD
National Taiwan University of Science and Technology
Department of Construction Engineering
95 (ROC academic year; 2006)
Through participating in the establishment of the fire experimental facility of the Architecture and Building Research Institute (ABRI), Ministry of the Interior, this study had the opportunity to set up and tune the cone calorimeter (ISO 5660), the room fire test apparatus (ISO 9705), and the 10 MW full-scale calorimeter, and then to apply them in real-scale experiments. This study opens a new chapter in fire research in Taiwan. The first part deals with testing of the 10 MW full-scale calorimeter. The test results agree with oil-tray burning theory: with the fan speed and the distance to the hood of the 10 MW calorimeter controlled, the measured and theoretical values correlate with R² = 0.986 and a comparative slope of 1.006, close to 1. Secondly, the motorcycle burning tests found that most of the heat comes from the body-cover parts made of PP, which contribute up to 62% of the total. To measure the heat release rate (HRR), we utilize the gas-temperature-rise rule and the oxygen consumption calculation. Convective heat accounts for about 40-50% of the total HRR, with 50-60% coming from radiation and conduction. For arrays of one, two, or three motorcycles, the fire growth curve matches the NFPA 92B ultra-fast curve; fully developed HRR values were obtained for one, two, and three motorcycles, from which a relation between HRR and the number of motorcycles was concluded. Thirdly, in the furniture burning tests, upholstered furniture was the main test object. The frame of the upholstered furniture is made of wood and accounts for about 60-70% of the total weight, and the wooden frame supplies about 56-64% of the total heat. After the burning test, the residual weight of the burnt sofa is about 20%, all attributable to the wooden frame. Compared with a calculation assuming 80% of the heat of the upholstered furniture, the test data deviate by less than 3%. Therefore, the per-unit heat release of materials measured in the cone calorimeter can be used to estimate fire loads. In the room fire test research and applications, this study used three types of incombustible boards, class A, class B, and class C, to relate the peak HRR and the FIGRA, yielding a good correlation of R² = 0.97. Comparing the total weight loss in the cone calorimeter and in the room test shows that the class C boards still have combustible character; in the room test the class C boards burn out completely. Finally, using the Wickström and Göransson model with cone calorimeter data to estimate the fire growth of three different sizes of wall finishing materials, the predictions are almost identical before the fires grow to 1 MW; room tests of the three board types were also conducted under a fire load of 12.5 kg/m². The construction, correction, and verification of these testing facilities, together with the application studies, show that Taiwan has taken one more step forward in fire engineering research; many efforts are still needed to raise our technological level of fire safety.
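Two standard relations behind this abstract can be made concrete: the NFPA 92B t-squared design fire (an ultra-fast fire reaches about 1,055 kW at 75 s) and oxygen consumption calorimetry (roughly 13.1 MJ of heat per kg of O2 consumed, Huggett's constant). The sketch below illustrates both; the exhaust flow and oxygen fractions are invented examples, not the thesis's measurements, and the oxygen formula is deliberately simplified relative to a full ISO 9705 analysis.

```python
E_O2 = 13100.0                       # kJ released per kg O2 (Huggett)
ALPHA_ULTRAFAST = 1055.0 / 75.0**2   # kW/s^2: NFPA 92B ultra-fast t^2 fire

def hrr_t_squared(t, alpha=ALPHA_ULTRAFAST):
    """HRR (kW) of a t-squared design fire at time t (s)."""
    return alpha * t * t

def hrr_oxygen_consumption(m_dot_exhaust, y_o2_in=0.232, y_o2_out=0.20):
    """HRR (kW) from O2 depletion in the exhaust duct, using O2 mass
    fractions; simplified (no CO/CO2/H2O corrections)."""
    return E_O2 * m_dot_exhaust * (y_o2_in - y_o2_out)

print(f"ultra-fast fire at 60 s: {hrr_t_squared(60):.0f} kW")   # ~675 kW
print(f"2 kg/s duct flow, 20% O2 out: "
      f"{hrr_oxygen_consumption(2.0):.0f} kW")                  # ~838 kW
```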
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Jung-Tsung, and 陳榮宗. "Research on the fire extinguishing system for large-scale exhibition." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/61553454987196574308.

Full text
Abstract:
Master's
WuFeng University of Science and Technology
Graduate Institute of Fire Science
104 (ROC academic year; 2015)
The main purpose of this paper is to analyze the design of the fire extinguishing systems used in large-scale exhibition halls in southern Taiwan, and then to consider which fire-fighting system is better suited to museums, libraries, and similar venues. We examine several kinds of systems, including fire sprinkler systems, water mist systems, chemical-gas and inert-gas systems, and others, to determine which is most appropriate for exhibition areas holding artworks, books, and historical relics. In particular, we analyze and compare the applicability of automatic fire-fighting systems, the fire load of historical relics, and microenvironmental factors from several angles. We also systematically review the relevant laws, regulations, and technical specifications for museums in Taiwan and abroad. Finally, using a painting exhibition area as a case study, we use FDS (Fire Dynamics Simulator) to simulate an automatic sprinkler system and show its effect in controlling the initial fire. Keywords: large-scale exhibition hall, fire extinguishing system, automatic sprinkler system.
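As context for the sprinkler simulation, the standard lumped RTI model of sprinkler-link heating can be sketched as below. The gas temperature and velocity histories are placeholders standing in for what an FDS run of the exhibition hall would supply, and the RTI and activation temperature are assumed typical values, not taken from the thesis.

```python
# Lumped-element sprinkler link: dT_link/dt = sqrt(u)/RTI * (T_gas - T_link)
RTI = 50.0          # response time index (m*s)^0.5, quick-response range
T_ACT = 68.0        # activation temperature (deg C), a common rating

def link_activation_time(gas_T, gas_u, dt=0.1, T0=20.0):
    """Integrate link temperature; gas_T(t), gas_u(t) are callables
    giving ceiling-jet temperature (C) and velocity (m/s)."""
    T, t = T0, 0.0
    while T < T_ACT:
        t += dt
        T += dt * (gas_u(t) ** 0.5 / RTI) * (gas_T(t) - T)
        if t > 600:
            return None          # no activation within 10 minutes
    return t

# Placeholder histories for a growing fire (illustrative only):
t_act = link_activation_time(gas_T=lambda t: 20 + 0.5 * t,
                             gas_u=lambda t: min(0.1 + 0.02 * t, 3.0))
print(f"link activates at ~{t_act:.0f} s" if t_act else "no activation")
```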
APA, Harvard, Vancouver, ISO, and other styles
45

Chiu, Chao-Ho, and 邱朝和. "Fire Safety and Risk Management in Large Scale Department Stores." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/35566055352428188177.

Full text
Abstract:
Master's
Shih Hsin University
Graduate Institute of Business Administration (including in-service master's program)
103 (ROC academic year; 2014)
The primary objective of risk management is to identify potential hazards and prevent irreparable losses from major disasters. This paper discusses risk management from the perspective of fire safety in large-scale department stores. The research process was segmented into four phases: risk identification and crisis prevention, risk evaluation and mitigation, risk monitoring and simulation, and result evaluation and documentation. Through in-depth interviews and participant observation, this paper finds that while fire safety and risk prevention are a high priority for large department stores, most behaviors and processes are driven by bureaucracy and are formalistic in nature. In particular, the selection of training participants, methods, and processes has been limited by the enterprises' focus on profits and by the existing laws and regulations. Consequently, fire safety training overlooks the perspectives and interests of enterprises, site representatives, and customers. This analysis therefore offers suggestions for developing a proactive and comprehensive risk prevention and simulation program to effectively identify, manage, and mitigate risks. Department stores can take greater responsibility for fire safety and risk management; in addition, regulatory authorities should refine fire safety laws relating to facility inspections, operational security, and public awareness and participation among consumers.
APA, Harvard, Vancouver, ISO, and other styles
46

Lam, Cecilia. "Thermal Characterization of a Pool Fire in Crosswind With and Without a Large Downwind Blocking Object." Thesis, 2009. http://hdl.handle.net/10012/4522.

Full text
Abstract:
Experiments were conducted to investigate the macroscopic thermal behaviour of 2m diameter Jet A fires in crosswinds of 3m/s to 13m/s. Two scenarios were considered: with and without a 2.7m diameter, 10.8m long, blocking object situated 3.4m downwind of the fire. These scenarios simulated transportation accidents with the fire representing a burning pool of aviation fuel and the object simulating an aircraft fuselage. To date, the limited number of experiments that have been conducted to examine wind effects on fire behaviour have been performed at small scale, which does not fully simulate the physics of large fires, or in outdoor facilities, with poorly controlled wind conditions. This thesis presents the first systematic characterization of the thermal environment in a large, turbulent fire under controlled wind conditions, with and without a large downwind blocking object. In experiments without the object, flame geometry was measured using temperature contour plots and video images, and the results compared to values predicted using published correlations. Results were greatly affected by the method used to measure flame geometry and by differences in boundary conditions between experiments. Although the presence of the blocking object prevented direct measurement of flame geometry due to interaction between the fire plume and object, temperature and heat flux measurements were analyzed to describe overall effects of the object on fire plume development. The fire impinged on the blocking object at wind speeds below 7m/s and interacted with the low-pressure wake region behind the object. Laboratory-scale experiments were also conducted to examine the responses of different heat flux gauges to controlled heating conditions simulating those found in wind-blown fires. Schmidt-Boelter, Gardon and Hemispherical Heat Flux gauges and a Directional Flame Thermometer were exposed to a convective flow and to radiation from a cone calorimeter heater. Measurements were influenced by differences between the calibration and measurement environments, differences in sensor surface temperature, and unaccounted thermal losses from the sensor plate. Heat flux results from the fires were consistent with those from the cone calorimeter, but were additionally affected by differences in location relative to the hot central core of the fire.
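As an illustration of the kind of published correlations the measured flame geometry was compared against, the sketch below evaluates Thomas's pool-fire flame-length correlations in still air and in wind. The Jet A burning-rate value is an assumed, illustrative figure, and the correlation constants are quoted from memory of the standard fire-safety literature rather than from this thesis.

```python
import math

g, rho_air = 9.81, 1.2          # m/s^2, kg/m^3

def flame_length_still(D, m_dot):
    """Thomas (no wind): L/D = 42 * [m''/(rho_a * sqrt(g*D))]**0.61"""
    return D * 42.0 * (m_dot / (rho_air * math.sqrt(g * D))) ** 0.61

def flame_length_wind(D, m_dot, u_wind):
    """Thomas (wind): L/D = 55 * X**0.67 * u_star**-0.21,
    with u_star = u_wind / (g * m'' * D / rho_a)**(1/3), clamped >= 1."""
    x = m_dot / (rho_air * math.sqrt(g * D))
    u_star = max(u_wind / (g * m_dot * D / rho_air) ** (1 / 3), 1.0)
    return D * 55.0 * x ** 0.67 * u_star ** -0.21

D, m_dot = 2.0, 0.05            # 2 m pool; ~0.05 kg/m^2/s assumed for Jet A
print(f"still air: {flame_length_still(D, m_dot):.1f} m")     # ~4.9 m
print(f"5 m/s wind: {flame_length_wind(D, m_dot, 5.0):.1f} m")  # shorter
```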
APA, Harvard, Vancouver, ISO, and other styles
47

YuChang, Lin, and 林育璋. "Large-scale underground fire-fighting management and rescue effort: A case study of the Taipei Train Station." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/71284379283727468574.

Full text
Abstract:
Master's
China University of Science and Technology
Graduate Institute of Civil Engineering and Disaster Prevention
99 (ROC academic year; 2010)
Taipei City is the capital of Taiwan, and the demarcated zone around the Taipei Main Station serves as the hub of the nation's transportation network. Disaster prevention and response measures for the Taipei Main Station demarcated zone therefore demand genuine contingency and response expertise; this is especially essential in a zone containing the shared infrastructure of the Mass Rapid Transit (MRT), the High Speed Rail (HSR), and the Taiwan Railway (TR) systems, where travelers from all corners of the country converge or pass through. Disaster prevention and preparedness in these areas is of great importance. The demarcated zone is a complex composite space, with massive moving crowds, a semi-enclosed interior, and flammable merchandise in shops, which makes rescue and disaster-control operations quite difficult in the event of a fire outbreak. Moreover, given the particularities of interior utilization, management, organizational structure, and zone features, it is imperative to establish a safety management and fire rescue system for the demarcated zone. This paper studies firefighters' rescue operations and the internal units' safety management in the demarcated zone. The research was performed through on-the-spot investigation of real drill procedures by way of theoretical maneuvers. A questionnaire was designed via expert interviews, and the survey results were induced and statistically analyzed through SWOT analysis. The findings are expected to serve as a reference for government officials implementing a safety management model. For minimizing casualties and property losses in an underground fire, the results attribute 54% to specialized equipment, 75% to regular training, 78% to proper handling of event updates, and 88% to fully realized training. It is necessary to raise public awareness through fire safety drill training so that prevention plans and measures become familiar, thereby ensuring readiness in the event of disaster. Keywords: Disaster Prevention and Rescue, SWOT, Fire Safety Management, Theoretical Maneuver
APA, Harvard, Vancouver, ISO, and other styles
48

CHEN, HUNG-KANG, and 陳弘康. "A Study of Fire Rescue Sophistication for Compound Use Buildings – Taking a Large Scale Market Place in Wan-Hua District as an Example." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/58207416363177678194.

Full text
Abstract:
Master's
China University of Science and Technology
Master's Program in Civil Engineering, Disaster Prevention and Management
105 (ROC academic year; 2016)
With the economic development of industry and commerce, people's incomes have increased, and so has the demand for comfortable and convenient shopping environments. More and more large-scale market places have opened in Taiwan recently. Buildings of this kind are big and complicated, so their fire risks are relatively high, and how to provide a safe consumer environment is a very important issue. This research examines the issue from the angles of fire management, building fire safety equipment, and fire-fighting strategies, in the hope of making firefighters' rescue measures for large shopping mall fires more sophisticated. The study focuses on a large-scale market place in Wanhua District. Through literature review, case analysis, and field investigation, it explores the internal spatial characteristics of the building and the market place's fire safety equipment; interviews gather fire professionals' experience and advice; and fire drills conducted by fire units together with the mall's self-defense fire marshals are used to study how rescue should proceed when fire occurs in a compound-use large-scale market place. We found that at the beginning of a fire, the shortage of firefighter manpower does not allow a Rapid Intervention Team to be established. Using an occupant-count control system, emergency evacuation in the mall can be managed. Daily check-ups of fire safety equipment and fire prevention facilities should be implemented to keep them workable. When fire occurs, the response of the self-defense fire protection team is critical. Combining all the above strategies can reduce the hazard and damage of fire.
APA, Harvard, Vancouver, ISO, and other styles
49

Cheng, Cher-Sheng, and 鄭哲聖. "Inverted File Design for Large-Scale Information Retrieval System." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/35663252020207351257.

Full text
Abstract:
PhD
National Chiao Tung University
Department of Computer Science and Information Engineering
93 (ROC academic year; 2004)
This dissertation investigates a variety of techniques to improve efficiency in information retrieval (IR). Information retrieval systems (IRSs) are widely used in many applications, such as search engines, digital libraries, and genomic sequence analyses. To search vast amounts of data efficiently, an IRS uses a compressed inverted file to locate the desired data quickly. An inverted file contains a posting list for each distinct term in the collection. The query processing time of a large-scale IRS is dominated by the time needed to read and decompress the posting list of each query term. Moreover, adding a document to the collection appends one document identifier to the posting list of every term appearing in that document, so the length of a posting list grows with the size of the collection, and with it the time needed to process posting lists. Efficient approaches to reducing the time needed to read, decompress, and merge posting lists are therefore the key issues in designing a large-scale IRS. The research topics studied in this dissertation are:

(1) Efficient coding method for inverted file size reduction. The first topic proposes a novel size reduction method for compressing inverted files. Compressing an inverted file can greatly improve query performance by reducing disk I/Os, but adds decompression time. The objective is to develop a method that combines a good compression ratio with fast decompression. The foundation is interpolative coding, which compresses the document identifiers with a recursive process that exploits their clustering property and yields superior compression. However, interpolative coding is computationally expensive due to the stack required in its implementation. The key idea of the proposed method is to speed up the coding and decoding processes of interpolative coding through recursion elimination and loop unwinding. Experimental results show that the method provides fast decoding speed and excellent compression.

(2) Two-level skipped inverted file for redundant decoding elimination. The second topic proposes a two-level skipped inverted file, in which a two-level skipped index is created on each compressed posting list, to reduce decompression time. A two-level skipped index can greatly reduce decompression time by skipping over unnecessary portions of the list, but well-known skipping mechanisms cannot implement it efficiently because of their high storage overheads. The objective is to develop a space-economical two-level skipped inverted file that eliminates redundant decoding and allows fast query evaluation. To this end, a novel skipping mechanism based on block size calculation is proposed, which can create a skipped index on each compressed posting list with very little or no storage overhead, particularly when the posting list is divided into very small blocks. Combining this mechanism with well-known skipping mechanisms implements a two-level skipped index with very little storage overhead. Experimental results showed that such a two-level skipped index simultaneously allows extremely fast processing of both conjunctive Boolean queries and ranked queries.
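As a point of reference for topics (1) and (2), the sketch below shows the textbook recursive form of binary interpolative coding; the dissertation's contribution is a faster variant using recursion elimination and loop unwinding, which is not reproduced here. The sketch only counts code widths rather than emitting a bitstream, which is enough to show why clustered document identifiers compress so well: when the feasible range for a value narrows to a single integer, zero bits are needed.

```python
import math

def interpolative_code_widths(docids, lo, hi, out):
    """Textbook binary interpolative coding, reduced to counting the code
    width (in bits) of each value. `docids` must be sorted and lie in
    [lo, hi]; `out` collects (value, bits) pairs in encoding order."""
    if not docids:
        return
    m = len(docids) // 2
    v = docids[m]
    # m identifiers must fit below v and len(docids)-1-m above it,
    # so v is confined to [lo + m, hi - (len(docids) - 1 - m)].
    v_lo, v_hi = lo + m, hi - (len(docids) - 1 - m)
    width = v_hi - v_lo + 1
    out.append((v, 0 if width == 1 else math.ceil(math.log2(width))))
    interpolative_code_widths(docids[:m], lo, v - 1, out)
    interpolative_code_widths(docids[m + 1:], v + 1, hi, out)

# A clustered posting list in a 20-document collection:
codes = []
interpolative_code_widths([3, 8, 9, 10, 11, 17], 1, 20, codes)
print(codes)                         # id 9 costs 0 bits: its range is forced
print(sum(b for _, b in codes), "bits total")
```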
(3) Document identifier assignment algorithm design for inverted file optimization. The third topic proposes a document identifier assignment (DIA) algorithm for fast query evaluation. A good DIA makes the document identifiers in the posting lists more clustered, which results in better compression as well as shorter query processing time. The objective is to develop a fast algorithm that finds a DIA minimizing the average query processing time of an IRS. In a typical IRS the distribution of query terms is skewed; based on this fact, a partition-based DIA (PBDIA) algorithm is proposed, which can efficiently assign consecutive document identifiers to documents containing frequently used query terms. The posting lists of frequently used query terms can then be compressed better without increasing the complexity of the decoding process, reducing query processing time.

(4) Inverted file partitioning for parallel IR. The fourth topic proposes an inverted file partitioning approach for parallel IR. In an IRS running on a cluster of workstations, the inverted file is generally partitioned into disjoint sub-files, one per workstation; when processing a query, each workstation consults only its own sub-file, in parallel. The objective is to develop a partitioning approach that minimizes the average processing time of parallel query evaluation. The foundation is the interleaving partitioning scheme, which generates a partitioned inverted file using an interleaved mapping rule and produces a near-ideal speedup. The key idea of the proposed approach is to use the PBDIA algorithm to enhance the clustering property of the posting lists of frequently used query terms before applying the interleaving partitioning scheme, which helps the scheme deliver superior query performance.

The results of this dissertation include:
• For inverted file size reduction, the proposed coding method allows a query throughput rate approximately 30% higher than the well-known Golomb coding while still providing superior compression.
• For redundant decoding elimination, the proposed two-level skipped inverted file improves query speed by up to 16% for conjunctive Boolean queries and by up to 44% for ranked queries, compared with the conventional one-level skipped inverted file.
• For inverted file optimization, the PBDIA algorithm takes only a few seconds to generate a DIA for a 1 GB collection and improves query speed by up to 25%.
• For parallel IR, the proposed approach further improves the parallel query speed of the interleaving partitioning scheme by 14% to 17%, regardless of how many workstations are in the cluster.
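To make topics (3) and (4) concrete, the sketch below shows a reassignment in the spirit of the PBDIA algorithm (grouping documents by the frequent query terms they contain, then numbering them consecutively group by group, so the posting lists of frequent terms become dense runs), together with the interleaved mapping rule for splitting a posting list across workstations. The first-matching-term grouping heuristic is a simplification of the actual algorithm, not the dissertation's method.

```python
def pbdia_reassign(doc_terms, frequent_terms):
    """Simplified PBDIA-style reassignment: bucket documents by the first
    frequent query term they contain, then hand out new ids consecutively
    bucket by bucket."""
    groups = {t: [] for t in frequent_terms}
    others = []
    for doc, terms in sorted(doc_terms.items()):
        for t in frequent_terms:
            if t in terms:
                groups[t].append(doc)
                break
        else:
            others.append(doc)
    mapping, next_id = {}, 1
    for bucket in [groups[t] for t in frequent_terms] + [others]:
        for doc in bucket:
            mapping[doc] = next_id
            next_id += 1
    return mapping

def interleave_partition(posting_list, k):
    """Interleaved mapping rule: posting i goes to workstation i mod k."""
    return [posting_list[i::k] for i in range(k)]

docs = {"d1": {"fire"}, "d2": {"smoke"}, "d3": {"fire", "smoke"}, "d4": {"ash"}}
print(pbdia_reassign(docs, ["fire", "smoke"]))   # d1->1, d3->2, d2->3, d4->4
print(interleave_partition([1, 2, 3, 4, 5, 6, 7], 3))
```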
APA, Harvard, Vancouver, ISO, and other styles
50

Cruz, João Rodrigo Romão Marinho Pinto da. "Deep learning for large-scale fine-grained recognition of cars." Master's thesis, 2018. http://hdl.handle.net/10071/17850.

Full text
Abstract:
Deep learning (DL) is widely used nowadays, with several applications in image classification and object detection. Many of these applications rely on Convolutional Neural Networks (CNNs), which, for a given input (image) and output (label/class), generate representations that define and allow distinguishing different kinds of objects. Neural networks are computationally demanding, taking hours to train; CNNs are even more demanding, since their input data are usually images, a rich data type holding a lot of information. The fast evolution of computer vision through deep learning techniques, together with growing computing power, has recently made it possible to train CNNs that classify images with high precision. On car classifieds websites, images are one of the most important types of content; however, until today, little knowledge/metadata has been produced from such images. To insert an advert on the platform, the user must upload an image of the car for sale and fill in a number of fields, among them the vehicle category, the color of the car, and its make, model, and version. In this dissertation, CNNs are used for the recognition of the make, model, and version of cars, with transfer learning and fine-tuning as the two approaches used for transferring the knowledge learned in one task and adapting it to another. We extend the work to also validate the efficacy of these neural networks on the tasks of vehicle category and car color recognition, intending to validate how CNNs behave on these different tasks. Approaches such as background removal and data augmentation are explored to reduce overfitting. We collected one of the largest datasets to date for the task of make, model, and version recognition of cars, composed of 1.2 million images belonging to 790 labels. The results obtained in the scope of this dissertation set a new state-of-the-art performance for this type of task (accuracy of 92.7% with an ensemble method) considering the number of classes to classify and the number of images used. They demonstrate the efficacy of recent advances in CNN architectures for fine-grained classification, where intra-class variation is small and viewpoint variation is high, when a large-scale dataset is used.
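A minimal sketch of the transfer-learning and fine-tuning recipe the abstract describes is given below, using a ResNet-50 backbone as an assumed stand-in; the dissertation's exact architectures and hyperparameters are not specified here, and only the class count of 790 comes from the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 790  # make-model-version labels, per the abstract

# Transfer learning: start from ImageNet-pretrained weights and freeze
# the feature extractor (backbone choice is an assumption).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False

# Replace the classifier head for the new label space.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Fine-tuning: unfreeze the last residual block at a smaller learning
# rate than the fresh head (illustrative rates).
for p in model.layer4.parameters():
    p.requires_grad = True

optimizer = torch.optim.SGD(
    [{"params": model.fc.parameters(), "lr": 1e-2},
     {"params": model.layer4.parameters(), "lr": 1e-3}],
    momentum=0.9)
```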
APA, Harvard, Vancouver, ISO, and other styles
