Academic literature on the topic 'Natural Hazards not elsewhere classified'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Natural Hazards not elsewhere classified.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Natural Hazards not elsewhere classified"

1

Kelly, Matthew, and Yuriy Kuleshov. "Flood Hazard Assessment and Mapping: A Case Study from Australia’s Hawkesbury-Nepean Catchment." Sensors 22, no. 16 (August 19, 2022): 6251. http://dx.doi.org/10.3390/s22166251.

Abstract:
Floods are among the costliest natural hazards in Australia and globally. In this study, we used an indicator-based method to assess flood hazard risk in Australia’s Hawkesbury-Nepean catchment (HNC). Australian flood risk assessments are typically spatially constrained through the common use of resource-intensive flood modelling. The large spatial scale of this study area is the primary element of novelty in this research. The indicators of maximum 3-day precipitation (M3DP); distance to river, elevation weighted (DREW); and soil moisture (SM) were used to create the final Flood Hazard Index (FHI). The 17–26 March 2021 flood event in the HNC was used as a case study. Almost 85% of the HNC was classified by the FHI at the ‘severe’ or ‘extreme’ level, illustrating the extremity of the studied event. The urbanised floodplain area in the central-east of the HNC had the highest FHI values. Conversely, regions along the western border of the catchment had the lowest flood hazard risk. The DREW indicator strongly correlated with the FHI. The M3DP indicator displayed strong trends of extreme rainfall totals increasing towards the eastern catchment border. The SM indicator was highly variable but featured extreme values in conservation areas of the HNC. This study introduces a method of large-scale proxy flood hazard assessment that is novel in the Australian context. The proof-of-concept methodology developed for the HNC is replicable and could be applied to other flood-prone areas.
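As a minimal sketch of the indicator-based approach described above, assuming min-max scaled indicators and equal weights (the paper's actual normalisation and weighting are not given in this listing):

```python
import numpy as np

def minmax(x):
    """Scale an indicator array to [0, 1]."""
    return (x - np.nanmin(x)) / (np.nanmax(x) - np.nanmin(x))

def flood_hazard_index(m3dp, drew, sm, weights=(1/3, 1/3, 1/3)):
    """Combine normalised indicators into a composite index (equal weights assumed)."""
    indicators = [minmax(m3dp), minmax(drew), minmax(sm)]
    fhi = sum(w * ind for w, ind in zip(weights, indicators))
    # Bin the continuous index into five hazard classes, 0 = 'very low' .. 4 = 'extreme'.
    classes = np.digitize(fhi, bins=[0.2, 0.4, 0.6, 0.8])
    return fhi, classes

# Toy 2x2 "rasters" standing in for gridded catchment data.
m3dp = np.array([[120.0, 300.0], [80.0, 250.0]])   # mm per 3 days
drew = np.array([[0.9, 0.2], [0.5, 0.8]])          # dimensionless
sm   = np.array([[0.3, 0.7], [0.4, 0.9]])          # fraction of saturation
fhi, classes = flood_hazard_index(m3dp, drew, sm)
print(fhi, classes, sep="\n")
```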
2

Chaudhary, Muhammad T., and Awais Piracha. "Natural Disasters—Origins, Impacts, Management." Encyclopedia 1, no. 4 (October 30, 2021): 1101–31. http://dx.doi.org/10.3390/encyclopedia1040084.

Abstract:
Natural hazards are processes that serve as triggers for natural disasters. Natural hazards can be classified into six categories. Geophysical or geological hazards relate to movement in the solid earth. Their examples include earthquakes and volcanic activity. Hydrological hazards relate to the movement of water and include floods, landslides, and wave action. Meteorological hazards are storms, extreme temperatures, and fog. Climatological hazards are increasingly related to climate change and include droughts and wildfires. Biological hazards are caused by exposure to living organisms and/or their toxic substances. The COVID-19 virus is an example of a biological hazard. Extraterrestrial hazards are caused by asteroids, meteoroids, and comets as they pass near the earth or strike the earth. In addition to local damage, they can change interplanetary conditions that affect the Earth’s magnetosphere, ionosphere, and thermosphere. This entry presents an overview of the origins, impacts, and management of natural disasters. It describes processes that have the potential to cause natural disasters. It outlines a brief history of the impacts of natural hazards on the human-built environment and the common techniques adopted for natural disaster preparedness. It also lays out challenges in dealing with disasters caused by natural hazards and points to new directions in warding off the adverse impact of such disasters.
3

Read, Laura K., and Richard M. Vogel. "Hazard function theory for nonstationary natural hazards." Natural Hazards and Earth System Sciences 16, no. 4 (April 11, 2016): 915–25. http://dx.doi.org/10.5194/nhess-16-915-2016.

Abstract:
Abstract. Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
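As a worked illustration of the generalized Pareto case named above (generic notation, not necessarily the authors'): for POT exceedances X with scale σ > 0 and shape ξ, the survival and density functions are

```latex
S(x) = \left(1 + \frac{\xi x}{\sigma}\right)^{-1/\xi}, \qquad
f(x) = \frac{1}{\sigma}\left(1 + \frac{\xi x}{\sigma}\right)^{-1/\xi - 1},
```

so the hazard function reduces to

```latex
h(x) = \frac{f(x)}{S(x)} = \frac{1}{\sigma + \xi x},
```

which decreases in x for ξ > 0 and tends to the constant exponential hazard 1/σ as ξ → 0; nonstationarity enters by letting σ and ξ vary with time.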
4

Read, L. K., and R. M. Vogel. "Hazard function theory for nonstationary natural hazards." Natural Hazards and Earth System Sciences Discussions 3, no. 11 (November 13, 2015): 6883–915. http://dx.doi.org/10.5194/nhessd-3-6883-2015.

Abstract:
Abstract. Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied Generalized Pareto (GP) model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series X, with corresponding failure time series T, should have application to a wide class of natural hazards with rich opportunities for future extensions.
5

Britton, Neil R. "Uncommon Hazards and Orthodox Emergency Management: Toward a Reconciliation." International Journal of Mass Emergencies & Disasters 10, no. 2 (August 1992): 329–48. http://dx.doi.org/10.1177/028072709201000206.

Abstract:
Effective emergency management requires a close fit between the state of risk and the state of hazard management. If these components get out of phase, a marked increase in societal vulnerability is likely to prevail. Recognizing that the major burden for developed societies has shifted from risks associated with natural processes to those arising from technological development and application, disaster-relevant organizational networks have adopted a Comprehensive Emergency Management (CEM) “all-hazards” approach. However, in Australia, as elsewhere, technological hazards present major problems for emergency managers because they pose different and often more difficult predicaments than do the more familiar natural hazards. While CEM is a good “in principle” strategy, the practices needed to protect society from a diversity of disaster-producing agents are more difficult to achieve. Two explanations are given for this: misperceptions about common features of hazard types, and differential progress between social components. The concept of cultural lag provides an explanatory framework as to why predicaments like this occur, and the concept of disaster subculture may provide a solution.
6

Llasat-Botija, M., M. C. Llasat, and L. López. "Natural Hazards and the press in the western Mediterranean region." Advances in Geosciences 12 (July 30, 2007): 81–85. http://dx.doi.org/10.5194/adgeo-12-81-2007.

Abstract:
Abstract. This study analyses press articles published between 1982 and 2005 in an attempt to describe the social perception of natural hazards in Catalonia. The articles included in the database have been classified according to different types of risk. In addition, the study examines the evolution of each type of risk in the press coverage during the study period. Finally, the results have been compared to data provided by insurance companies with respect to compensations paid out for damages. Conclusions show that floods are the most important natural hazard in the region, but that the number of headlines for each event is greater in the case of snowfalls and forest fires. Factors such as the season of the year, the proximity of the affected region to the capital, the topical issues at the time, and the presence of other important news must be considered when the impact in the press is analysed.
7

Hussain, Muhammad Awais, Shuai Zhang, Muhammad Muneer, Muhammad Aamir Moawwez, Muhammad Kamran, and Ejaz Ahmed. "Assessing and Mapping Spatial Variation Characteristics of Natural Hazards in Pakistan." Land 12, no. 1 (December 31, 2022): 140. http://dx.doi.org/10.3390/land12010140.

Abstract:
Pakistan is among the nations at highest risk of climate catastrophes. Its geography makes it susceptible to natural hazards, and the country faces strong regional differences in climate change. The frequency and intensity of natural hazards due to climate change vary from place to place, so there is an urgent need to recognize the spatial variations in natural hazards within the country. To address such problems, it might be useful to map out the areas that need resources to increase resilience and adaptability. Therefore, the main goal of this research was to create a district-level map that illustrates the multi-hazard zones of various regions in Pakistan. To comprehend the geographical differences in climate change and natural hazards across Pakistan, this study examines the relevant literature and the data currently available regarding the historical occurrence of natural hazards. Firstly, a district-level comprehensive database of Pakistan’s five natural hazards (floods, droughts, earthquakes, heatwaves, and landslides) was created. Through consultation with specialists in related areas, hazard and weighting factors for each hazard were specified based on the structured district-level historical disaster database of Pakistan. After that, individual and multi-hazard ratings were computed for each district. Then, using the estimated multi-hazard scores, the districts of Pakistan were classified into four zones. Finally, a map of Pakistan’s multi-hazard zones was created per district. The study results are essential for policymakers to consider when making decisions on disaster management techniques, that is, when organizing disaster preparedness, mitigation, and prevention plans.
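A minimal sketch of the scoring and zoning steps described above, with hypothetical districts and expert weights (the study's actual values are not given in this listing):

```python
import numpy as np

# Hypothetical hazard scores per district (0-10) for the five hazards studied.
hazards = ["flood", "drought", "earthquake", "heatwave", "landslide"]
weights = np.array([0.30, 0.20, 0.25, 0.15, 0.10])  # assumed expert weights, sum to 1

districts = {
    "District A": np.array([8, 3, 5, 6, 2]),
    "District B": np.array([2, 7, 3, 8, 1]),
    "District C": np.array([5, 4, 9, 3, 7]),
}

# Weighted multi-hazard score per district.
scores = {d: float(weights @ s) for d, s in districts.items()}

# Classify into four zones by fixed thresholds on the 0-10 scale.
def zone(score, bins=(2.5, 5.0, 7.5)):
    labels = ["low", "moderate", "high", "very high"]
    return labels[int(np.searchsorted(bins, score))]

for d, s in scores.items():
    print(f"{d}: score={s:.2f}, zone={zone(s)}")
```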
8

Paterson, Barbara, and Anthony Charles. "A global comparison of community-based responses to natural hazards." Natural Hazards and Earth System Sciences 19, no. 11 (November 7, 2019): 2465–75. http://dx.doi.org/10.5194/nhess-19-2465-2019.

Abstract:
Abstract. Community-based disaster preparedness is an important component of disaster management. Knowledge of interventions that communities utilize in response to hazards is important to develop local-level capacity and increase community resilience. This paper systematically examines empirical information about local-level responses to hazards based on peer-reviewed, published case studies. We developed a data set based on 188 articles providing information from 318 communities from all regions of the world. We classified response examples to address four key questions: (i) What kinds of responses are used by communities all over the world? (ii) Do communities in different parts of the world use different kinds of responses? (iii) Are communities using hazard-specific responses? (iv) Are communities using a multi-hazard approach? We found that within an extensive literature on hazards, there is relatively little empirical information about community-based responses to hazards. Across the world, responses aiming at securing basic human needs are the most frequently reported kinds of responses. Although the notion of community-based disaster preparedness is gaining importance, very few examples of responses that draw on the social fabric of communities are reported. Specific regions of the world are lacking in their use of certain hazard response classes. Although an all-hazard approach for disaster preparedness is increasingly recommended, there is a lack of multi-hazard response approaches on the local level.
9

Hasan Zaid, Hussein Ahmed, T. A. Jamaluddin, and Mohd Hariri Arifin. "Overview Of Slope Stability, Earthquakes, Flash Floods And Expansive Soil Hazards In The Republic Of Yemen." Bulletin of the Geological Society of Malaysia 71 (May 31, 2021): 71–78. http://dx.doi.org/10.7186/bgsm71202106.

Abstract:
Yemen has harsh natural conditions that intensify certain geological processes more than in other regions, leading to a variety of geological hazards. Yemen’s typical topography is distinguished by the coastal plains of the Red Sea and cliff foothills, followed by the mountains of the Arabian Shield. These geological hazards can be classified into slope stability, earthquakes, flash floods, and expansive soils. The current literature review presents a description, backed with examples, of certain geological hazards in Yemen. The obtained results indicate that further consideration and thought are required for semi-arid regions. National and foreign organizations have to collaborate, together with individuals, to maintain a balanced environmental system and reduce the potential geological hazards. Therefore, mitigation measures should be implemented to avoid and minimize these geological hazards.
10

Tuo, Fei, Xuan Peng, Qiang Zhou, and Jing Zhang. "ASSESSMENT OF NATURAL RADIOACTIVITY LEVELS AND RADIOLOGICAL HAZARDS IN BUILDING MATERIALS." Radiation Protection Dosimetry 188, no. 3 (January 20, 2020): 316–21. http://dx.doi.org/10.1093/rpd/ncz289.

Abstract:
Abstract. The radioactivity of 226Ra, 232Th, and 40K was measured in a total of 92 samples, including eight commonly used types of building materials that were obtained from local manufacturers and suppliers in Beijing. Concentrations were determined using high-purity germanium gamma-ray spectrometry. The 226Ra, 232Th, and 40K activity concentrations in all samples varied from 10.1 to 661, 3.3 to 555, and 3.2 to 2945 Bq per kg, with averages of 127.8, 114.8, and 701.5 Bq per kg, respectively. The potential radiological hazards were estimated by calculating the absorbed dose rate (D), radium equivalent activity (Raeq), external hazard (Hex), and internal hazard (Hin) indices. The investigated building materials were classified into different types according to the radioactivity levels. Results from this research will provide a reference for the acquisition, sales, and use of building materials. Attention should be paid to the use of coal cinder brick, ceramic, and granite in the construction of dwellings.
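For reference, the indices named in this abstract are conventionally defined as follows in the building-material radioactivity literature (standard formulas, not quoted from this paper), with C_Ra, C_Th, and C_K the activity concentrations in Bq per kg:

```latex
\mathrm{Ra_{eq}} = C_{Ra} + 1.43\,C_{Th} + 0.077\,C_{K}, \qquad
H_{ex} = \frac{C_{Ra}}{370} + \frac{C_{Th}}{259} + \frac{C_{K}}{4810}, \qquad
H_{in} = \frac{C_{Ra}}{185} + \frac{C_{Th}}{259} + \frac{C_{K}}{4810},
```

with Raeq ≤ 370 Bq per kg and hazard indices ≤ 1 as the usual safety criteria.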

Dissertations / Theses on the topic "Natural Hazards not elsewhere classified"

1

Jones, Christopher Charles Rawlinson. "A study of novel computing methods for solving large electromagnetic hazards problems." Thesis, University of Central Lancashire, 2002. http://clok.uclan.ac.uk/18842/.

Abstract:
The aim of this work is to explore means to improve the speed of the computational electromagnetics (CEM) processing in use for aircraft design and certification work by a factor of 1000 or so. The investigation addresses particularly the set of problems described as electromagnetic hazards, comprising lightning, EMC and the illumination of an aircraft by external radio sources or HIRF (high intensity radiated fields). These are areas which are very much aspects of the engineering of the aircraft, where the requirement for accuracy of simulations is of the order of 6 dB, as build and test repeatability cannot achieve better than this. Computer simulations of these interactions at the outset of this work were often taking 10 days and more on the largest parallel computers then available in the UK (Cray T3D, 40 GFLOPS nominal peak). Such run times made any form of optimisation impossibly lengthy. While the future offered the certain prospect of more powerful computers, the simulations had to become more comprehensive in their representation of materials and features, the geometry of the object, and particularly the representation of wires and cables had to improve radically, and turn-around times for analysis had to be improved for design assessment as well as to make design optimisation by trade-off studies feasible. All of these could easily consume all the advantage that the new computers would give. The investigation has centred around techniques that might be applied via alteration to the most widely used and usable numerical methods in CEM applied to the electromagnetic hazards, and to techniques that might be applied to the manner of their use. In one case, the investigation has explored a particular possibility for minimising the duration of computation and extrapolating the resulting data to the longest time-scales required. Future improvements in the capabilities of radiating boundary conditions to mimic the effect of an infinite boundary at close range will further improve the benefits already established in this work, but this is not yet realisable. However, it has been established that a combination of techniques with some processes devised through this work can and does deliver the performance improvement sought. It has further been shown that issues such as object resonance, which could have incurred significant error and distrust of computational results, can be satisfactorily overcome within the required accuracy. Four papers have been published arising from this work. Some of these techniques are now in use in routine analyses contributing to BAE SYSTEMS programmes. Plans are in place to incorporate all of the successful techniques and processes.
2

Grahn, Tonje. "A Nordic Perspective on Data Availability for Quantification of Losses due to Natural Hazards." Licentiate thesis, Karlstads universitet, Institutionen för miljö- och livsvetenskaper, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-41138.

Abstract:
Natural hazards cause enormous amounts of damage worldwide every year. Since 1994, more than 1.35 million people have lost their lives and more than 116 million homes have been damaged. Understanding disaster risk implies knowledge about vulnerability, capacity, exposure of persons and assets, hazard characteristics, and the environment. Quantitative damage assessments are a fundamental part of disaster risk management. There are, however, substantial challenges in quantifying damage, which stem from the diversity of hazards and the fact that one hazardous event can negatively impact a society in multiple ways. The overall aim of the thesis is to analyze the relationship between climate-related natural hazards and subsequent damage for the purpose of improving the prerequisites for quantitative risk assessments in the future. The thesis concentrates on two specific types of consequences due to two types of hazards: 1) damage to buildings caused by lake floods, and 2) loss of lives caused by quick clay landslides. Several causal relationships were established between risk factors and the extent of damage. Lake water levels increased the probability of structural building damage. Private damage-reducing measures decreased the probability of structural building damage. Extent of damage decreased with distance to the waterfront but increased with longer flood duration, while prewar houses suffered lower flood damage compared to others. Concerning landslides, the number of fatalities increased when the number of humans in the exposed population increased. The main challenges to further damage estimation are data scarcity, insufficient detail level, and the fact that the data are rarely systematically collected for scientific purposes. More efforts are needed to create structured, homogeneous and detailed damage databases with corresponding risk factors in order to further develop quantitative damage assessment of natural hazards in a Nordic perspective.
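The causal statements above are of the kind produced by regressing damage occurrence on risk factors; a minimal sketch with simulated, hypothetical data (not the thesis's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-building records (names illustrative, not the thesis's data).
water_level = rng.uniform(0.0, 3.0, n)   # lake water level above threshold (m)
measures = rng.integers(0, 2, n)         # 1 if private damage-reducing measures taken
distance = rng.uniform(5.0, 500.0, n)    # distance to waterfront (m)

# Simulated damage outcome consistent with the signs of the reported effects.
logit = -1.0 + 1.2 * water_level - 0.8 * measures - 0.004 * distance
damage = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([water_level, measures, distance])
model = LogisticRegression().fit(X, damage)
print(model.coef_)  # expected signs: +, -, - for water level, measures, distance
```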
3

Pool, Ursula. "The impact of water and anthropogenic objects on implicit evaluations of natural scenes : a restorative environments perspective." Thesis, University of Central Lancashire, 2017. http://clok.uclan.ac.uk/17669/.

Abstract:
Research has consistently demonstrated that exposure to nature, as opposed to urban environments, can be beneficial to health and wellbeing. Among natural landscapes, aquatic (blue space) scenes are among the most preferred and psychologically restorative. Since such landscapes face an increasing range of demands, there is a need to understand how their restorative qualities arise and might be preserved, both in terms of the content of a scene and the psychological processes involved in its interpretation. This thesis examines the cognitive impact of placing artificial (human-made) objects in natural landscapes with and without water. It reports new findings regarding the importance of specific scene content for the restorative potential of blue space. The research also explores some of the underlying psychological processes, addressing novel questions about implicit (subconscious) attitudes towards natural landscapes. It compares implicit and explicit attitudes for the first time in this context. In four studies, methods from social and experimental psychology were used to investigate attitudes towards blue and green space with and without artificial objects. To examine the issues of both artificial objects and implicit attitudes, Study 1 used the Affect Misattribution Procedure (which measures implicit attitudes towards images) to assess whether implicit affect (subconscious positive or negative emotion) differed when the same natural scene was viewed with and without artificial objects. Results showed that introducing objects into natural scenes had a negative impact on implicit affect, particularly when the scene contained water. In order to be able to compare implicit and explicit attitudes, Study 2 examined explicit affective reactions to the images from Study 1 using questions adapted from the Perceived Restorativeness Scale (a measure of the restorative potential of environments). Blue space scenes were rated more highly than green space scenes on all components except aesthetics. The presence of artificial objects resulted in lower ratings on all measures for both blue and green scenes. Study 3 was motivated by an indication in the results from Study 1 that implicit attitudes towards blue and green space may differ. The Affect Misattribution Procedure was used to investigate this for natural landscapes without artificial objects. The study also examined whether implicit attitudes differ according to the type of blue or green environment. Viewing blue space scenes resulted in more positive implicit affect than green space, with sea views generating the most positive implicit affect of all. Following the discovery that artificial objects had a more negative impact on implicit attitudes to blue space than green space, Study 4 tested the possibility that this could be due to such objects being more disruptive to the conceptual coherence of aquatic scenes. The conceptual-semantic congruence of artificial objects was assessed using a lexical decision task, in which participants reacted to object words superimposed on scenes. Results did not support the hypothesis that artificial objects are less congruent in blue space than green space. Overall, the studies provide evidence that placing artificial objects in natural landscapes, particularly aquatic landscapes, adversely affects both implicit and explicit attitudes towards the scenes and may reduce their restorative potential. 
By successfully combining methods from social and experimental psychology, this research validates novel ways of formulating and addressing questions about why some environments have a more positive psychological impact than others. The new results reported here are not easily explained by current restorative theory, therefore might contribute to refining the theoretical framework within which restorative environments are studied.
4

Pedder-Smith, Rachel. "The glow of significance : narrating stories using natural history specimens." Thesis, Royal College of Art, 2011. http://researchonline.rca.ac.uk/430/.

Abstract:
The subject of this project is natural history specimens and the exploration of their qualities in visual artwork. The first part is a 533 cm watercolour painting composed of an image of at least one specimen (or part thereof) to represent each flowering plant family, of which there are 505. The ‘Herbarium Specimen Painting’ was created using dried plant specimens from the herbarium collection at the Royal Botanic Gardens, Kew. The plant families are painted in systematic order following one of the recently developed DNA classification systems. The painting was produced with scientific rigor and under the constant supervision of Kew botanists. It aims not only to illustrate the chosen classification system but to explore the aesthetic beauty of herbarium specimens and celebrate many of the incredible and varied narratives contained within the Kew collection. The second element of this thesis constructs a context for the above artwork among similar projects. Natural history institutions worldwide were contacted for information about artists using natural history collections to produce art with a strong narrative element that ‘discussed’ the notion of the specimen. These artists were then contacted and many interviewed. In parallel, the literature review concentrated on theories developed in the field of material culture where the human relationships between groups of objects are analysed. These theories proved fundamental and on occasion inspirational in uncovering deeper meanings and narrative possibilities. The concluding section of this research discusses whether the findings of this project, which uses and develops material culture theory, can contribute to that field of research. It analyses the possibility that specimen-based artwork can benefit science and/or help revitalise museum collections, and comments on whether institutions can improve the public communicability of the objects in their care by treating them as a potential source for new art.
5

Caon, Maurizio. "Context-aware gestural interaction in the smart environments of the ubiquitous computing era." Thesis, University of Bedfordshire, 2014. http://hdl.handle.net/10547/344619.

Abstract:
Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach using functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies and to improve usability. To validate this framework, a proof-of-concept prototype was developed, implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, while the usability tests have yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status. It is intended as human activity, and a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique has achieved good activity recognition accuracy. The context is treated also as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
6

(7026707), Siddharth Saksena. "Integrated Flood Modeling for Improved Understanding of River-Floodplain Hydrodynamics: Moving beyond Traditional Flood Mapping." Thesis, 2019.

Abstract:
With increasing focus on large scale planning and allocation of resources for protection against future flood risk, it is necessary to analyze and improve the deficiencies in the conventional flood modeling approach through a better understanding of the interactions between river hydrodynamics and subsurface processes. Recent studies have shown that it is possible to improve the flood inundation modeling and mapping using physically-based integrated models that incorporate observable data through assimilation and simulate hydrologic fluxes using the fundamental laws of conservation of mass at multiple spatiotemporal scales. However, despite the significance of integrated modeling in hydrology, it has received relatively less attention within the context of flood hazard. The overall aim of this dissertation is to study the heterogeneity in complex physical processes that govern the watershed response during flooding and incorporate these effects in integrated models across large scales for improved flood risk estimation. Specifically, this dissertation addresses the following questions: (1) Can physical process incorporation using integrated models improve the characterization of antecedent conditions and increase the accuracy of the watershed response to flood events? (2) What factors need to be considered for characterizing scale-dependent physical processes in integrated models across large watersheds? (3) How can the computational efficiency and process representation be improved for modeling flood events at large scales? (4) Can the applicability of integrated models be improved for capturing the hydrodynamics of unprecedented flood events in complex urban systems?

To understand the combined effect of surface-subsurface hydrology and hydrodynamics on streamflow generation and subsequent inundation during floods, the first objective incorporates an integrated surface water-groundwater (SW-GW) modeling approach for simulating flood conditions. The results suggest that an integrated model provides a more realistic simulation of flood hydrodynamics for different antecedent soil conditions. Overall, the findings suggest that the current practice of simulating floods, which assumes an impervious surface, may not provide realistic estimates of flood inundation, and that an integrated approach incorporating all the hydrologic and hydraulic processes in the river system must be adopted.

The second objective focuses on providing solutions to better characterize scale-dependent processes in integrated models by comparing two model structures across two spatial scales and analyzing the changes in flood responses. The results indicate that since the characteristic length scales of GW processes are larger than those of SW processes, the intrinsic scale (or resolution) of GW in integrated models should be coarser than that of SW. The results also highlight the degradation of streamflow prediction using a single channel roughness when the stream length scales are increased. A distributed channel roughness variable along the stream length improves the modeled basin response. Further, the results highlight the ability of a dimensionless parameter 𝜂1, representing the ratio of the reach length in the study region to the maximum length of the single stream draining at that point, to identify which streams may require a distributed channel roughness.

The third objective presents a hybrid flood modeling approach that incorporates the advantages of both loosely-coupled (‘downward’) and integrated (‘upward’) modeling approaches by coupling empirically-based and physically-based approaches within a watershed. The computational efficiency and accuracy of the proposed hybrid modeling approach is tested across three watersheds in Indiana using multiple flood events and comparing the results with fully- integrated models. Overall, the hybrid modeling approach results in a performance comparable to a fully-integrated approach but at a much higher computational efficiency, while at the same time, providing objective-oriented flexibility to the modeler.

The fourth objective presents a physically-based but computationally-efficient approach for modeling unprecedented flood events at large scales in complex urban systems. The application of the proposed approach results in accurate simulation of large scale flood hydrodynamics which is shown using Hurricane Harvey as the test case. The results also suggest that the ability to control the mesh development using the proposed flexible model structure for incorporating important physical and hydraulic features is as important as integration of distributed hydrology and hydrodynamics.
7

(7484483), Soohyun Yang. "COUPLED ENGINEERED AND NATURAL DRAINAGE NETWORKS: DATA-MODEL SYNTHESIS IN URBANIZED RIVER BASINS." Thesis, 2019.

Abstract:

In urbanized river basins, sanitary wastewater and urban runoff (non-sanitary water) from urban agglomerations drain to complex engineered networks, are treated at centralized wastewater treatment plants (WWTPs) and discharged to river networks. Discharge from multiple WWTPs distributed in urbanized river basins contributes to impairments of river water-quality and aquatic ecosystem integrity. The size and location of WWTPs are determined by spatial patterns of population in urban agglomerations within a river basin. Economic and engineering constraints determine the combination of wastewater treatment technologies used to meet required environmental regulatory standards for treated wastewater discharged to river networks. Thus, it is necessary to understand the natural-human-engineered networks as coupled systems, to characterize their interrelations, and to understand emergent spatiotemporal patterns and scaling of geochemical and ecological responses.


My PhD research involved data-model synthesis, using publicly available data and application of well-established network analysis/modeling synthesis approaches. I present the scope and specific subjects of my PhD project by employing the Drivers-Pressures-Status-Impacts-Responses (DPSIR) framework. The defined research scope is organized as three main themes: (1) River network and urban drainage networks (Foundation-Pathway of Pressures); (2) River network, human population, and WWTPs (Foundation-Drivers-Pathway of Pressures); and (3) Nutrient loads and their impacts at reach- and basin-scales (Pressures-Impacts).


Three inter-related research topics are: (1) the similarities and differences in scaling and topology of engineered urban drainage networks (UDNs) in two cities, and UDN evolution over decades; (2) the scaling and spatial organization of three attributes: human population (POP), population equivalents (PE; the aggregated population served by each WWTP), and the number/sizes of WWTPs, using geo-referenced data for WWTPs in three large urbanized basins in Germany; and (3) the scaling of nutrient loads (P and N) discharged from ~845 WWTPs (five class-sizes) in the urbanized Weser River basin in Germany, and likely water-quality impacts from point and diffuse nutrient sources.


I investigate the UDN scaling using two power-law scaling characteristics widely employed for river networks: (1) Hack’s law (length-area power-law relationship), and (2) exceedance probability distribution of upstream contributing area. For the smallest UDNs, length-area scales linearly, but power-law scaling emerges as the UDNs grow. While area-exceedance plots for river networks are abruptly truncated, those for UDNs display exponential tempering. The tempering parameter decreases as the UDNs grow, implying that the distribution evolves in time to resemble those for river networks. However, the power-law exponent for mature UDNs tends to be larger than the range reported for river networks. Differences in generative processes and engineering design constraints contribute to observed differences in the evolution of UDNs and river networks, including subnet heterogeneity and non-random branching.
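In the generic notation commonly used for these scaling laws (symbols here are illustrative, not necessarily the thesis's), Hack's law and the area-exceedance distributions take the forms

```latex
L \propto A^{h}, \qquad
\Pr(A \ge a) \propto a^{-\varepsilon} \ \text{(river networks)}, \qquad
\Pr(A \ge a) \propto a^{-\varepsilon}\,e^{-\theta a} \ \text{(UDNs)},
```

where h is Hack's exponent (roughly 0.5-0.6 for river networks) and the tempering parameter θ decreases as the UDN matures, recovering the pure power law in the limit θ → 0.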


In this study, I also examine the spatial patterns of POP, PE, and WWTPs from two perspectives by employing fractal river networks as structural platforms: spatial hierarchy (stream order) and patterns along longitudinal flow paths (width function). I propose three dimensionless scaling indices to quantify: (1) human settlement preferences by stream order, (2) non-sanitary flow contribution to total wastewater treated at WWTPs, and (3) the degree of centralization in WWTP locations. I select as case studies three large urbanized river basins (Weser, Elbe, and Rhine), home to about 70% of the population in Germany. Across the three river basins, the study shows scale-invariant distributions for each of the three attributes with stream order, quantified using extended Horton scaling ratios, and a weak downstream clustering of POP in the three basins. Variations in PE clustering among different class-sizes of WWTPs reflect the size, number, and locations of urban agglomerations in these catchments.
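For reference, Horton-type scaling means a constant ratio between successive stream orders ω; in generic notation (not the thesis's symbols):

```latex
R_B = \frac{N_\omega}{N_{\omega+1}}, \qquad
R_X = \frac{X_{\omega+1}}{X_\omega} \approx \text{const.},
```

where N_ω is the number of streams of order ω (the bifurcation ratio R_B is typically 3-5 for river networks) and X_ω stands for an aggregated attribute such as POP or PE at order ω.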


WWTP effluents have impacts on the hydrologic attributes and water quality of receiving river bodies at the reach and basin scales. I analyze the adverse impacts of WWTP discharges for the Weser River basin (Germany) at two steady river discharge conditions (median flow; low flow). This study shows that significant variability in treated wastewater discharge within and among the five class-sizes of WWTPs, and variability of river discharge within streams of order <3, contribute to large variations in the capacity to dilute WWTP nutrient loads. For the median flow, reach-scale water quality impairment assessed by nutrient concentration is likely at 136 (~16%) locations for P and 15 locations (~2%) for N. About 90% of the impaired locations are of stream order <3. At the basin scale, considering in-stream uptake resulted in 225 (~27%) P-impaired streams, a ~5% reduction from considering dilution only. This result suggests the dominant role of dilution in the Weser River basin. Under low-flow conditions, the number of likely water-quality-impaired locations is roughly double that under median flow. This study for the Weser River basin reveals that the role of in-stream uptake diminishes along the flow paths, while dilution in larger streams (4 ≤ stream order ≤ 7) minimizes the impact of WWTP loads.


Furthermore, I investigate eutrophication risk from spatially heterogeneous diffuse- and point-source P loads in the Weser River basin, using the basin-scale network model with in-stream losses (nutrient uptake). Considering long-term shifts in P loads over three representative periods, my analysis shows that P loads from diffuse sources, mainly agricultural areas, have played a dominant role in eutrophication risk since the 2000s, because point-source P loads were reduced by ~87% relative to the 1980s through the implementation of the EU WFD. Nevertheless, point sources discharging to smaller streams (stream order <3) amplify water quality impairment, consistent with the reach-scale analyses of WWTP effluents alone. Comparing with long-term water quality monitoring data, I demonstrate that point-source loads are the primary contributors to eutrophication in smaller streams, whereas diffuse-source loads, mainly from agricultural areas, drive eutrophication in larger streams. The results reflect the spatial patterns of WWTPs and land cover in the Weser River basin.


Through data-model synthesis, I identify the characteristics of the coupled natural (rivers), human, and engineered (urban drainage infrastructure) systems (CNHES), inspired by analogy, coexistence, and causality across the coupled networks in urbanized river basins. The quantitative measures and the basin-scale network model presented in my PhD project could be extended to other large urbanized basins to better understand the spatial distribution patterns of the CNHES and the resultant impacts on river water-quality impairment.


8

(6594389), Mahsa Modiri-Gharehveran. "INDIRECT PHOTOCHEMICAL FORMATION OF COS AND CS2 IN NATURAL WATERS: KINETICS AND REACTION MECHANISMS." Thesis, 2019.

Abstract:

COS and CS2 are sulfur compounds that are formed in natural waters. These compounds are also volatile, which leads them to move into the atmosphere and serve as critical precursors to sulfate aerosols. Sulfate aerosols are known to counteract global warming by reflecting solar radiation. A major source of COS and CS2 is the ocean. While previous studies have linked COS and CS2 formation in these waters to the indirect photolysis of organic sulfur compounds, much of the chemistry behind how this occurs remains unclear. This study examined this chemistry by evaluating how different organic sulfur precursors, water quality constituents, and temperature affected COS and CS2 formation in natural waters.

In the first part of this thesis (chapters 2 and 3), nine natural waters ranging in salinity were spiked with various organic sulfur precursors (e.g., cysteine, cystine, dimethylsulfide (DMS), and methionine) and exposed to simulated sunlight over varying exposures. Other water quality conditions, including the presence of O2 and CO and the temperature, were also varied. Results indicated that COS and CS2 formation increased up to 11× and 4×, respectively, after 12 h of sunlight, while diurnal cycling exhibited varied effects. COS and CS2 formation were also strongly affected by the DOC concentration, organic sulfur precursor type, O2 concentration, and temperature, while salinity differences and CO addition did not play a significant role.

To then specifically evaluate the role of DOM in cleaner matrices, COS and CS2 formation was examined in synthetic waters (see chapters 4 and 5). In this case, synthetic waters were spiked with different types of DOM isolates, ranging from freshwater to ocean water, along with either cysteine or DMS, and exposed to simulated sunlight for up to 4 h. Surprisingly, CS2 was not formed under any of the tested conditions, indicating that other water quality constituents, aside from DOM, were responsible for its formation. However, COS formation was observed. Interestingly, COS formation with cysteine was fairly similar for all DOM types, but increasing the DOM concentration actually decreased formation. This is likely due to the dual role of DOM in simultaneously forming and quenching the reactive intermediates (RIs). Additional experiments with quenching agents for RIs (e.g., 3DOM* and ·OH) further indicated that ·OH was not involved in COS formation with cysteine but 3DOM* was. This result differed with DMS, in that ·OH and 3DOM* were both found to be involved. In addition, treating DOM isolates with sodium borohydride (NaBH4) to reduce ketones/aldehydes to their corresponding alcohols increased COS formation, which implied that the RIs formed by these functional groups in DOM were not involved. The alcohols formed by this process were not likely to act as quenching agents, since they have been shown to be low in reactivity. Since ketones are known to form high-energy triplet states of DOM while quinones are known to form low-energy triplet states of DOM, removing ketones from the system further supported the role of low-energy triplet states in COS formation, as initially hypothesized from the findings of the tests on DOM types. In the end, there are several major research contributions from this thesis. First, cysteine and DMS have different mechanisms for forming COS. Second, adding O2 decreased COS formation but did not stop it completely, which suggests that further research is required to evaluate the role of RIs in the presence of O2. Lastly, considering the low formation yields of COS and CS2 from the organic sulfur precursors tested in this study, it is believed that other organic sulfur precursors, likely to generate these compounds at higher levels, remain unidentified and need to be investigated in future research.


9

(5929508), Travis J. Beckett. "Selection and Characterization of Previously Plant-Variety-Protected Commercial Maize Inbreds." Thesis, 2019.

Find full text
Abstract:
The use of genotypic markers in plant breeding has greatly increased in the last few decades. In this dissertation, I report on three topics that illustrate how genotypic marker information can be applied in maize breeding to increase genetic gain. In the first chapter, I describe how genotypic and phenotypic data can be used to predict the mean, variance, and superior progeny mean of virtual biparental populations. I use these predictions to identify optimal breeding crosses out of a commercially relevant collection of North American dent inbreds. In the second chapter, within the context of early-generation maize inbred development, and using a hybrid testcross data set, I report on the change in genomic prediction accuracy as the size of the training set increases and compare the accuracy of different genomic selection models. In the third chapter, I used a multi-variable linear regression approach known as genome-wide association (GWA) analysis to identify particular genetic locations, known as quantitative trait loci (QTL), that are associated with maize inflorescence traits.
10

(6624152), Simran Kaur. "Interfacial Rheological Properties of Protein Emulsifiers, Development of Water Soluble b-Carotene Powder and Food Science Engagement (Emulsifier Exploration)." Thesis, 2019.

Abstract:

Interfacial rheology describes the functional relationship between the deformation of an interface, the stresses exerted in and on it, and the resulting flows in the adjacent fluid phases. These interfacial properties are purported to influence emulsion stability. Protein emulsifiers tend to adsorb to the interface of immiscible phases, reduce interfacial tension, and generate repulsive interactions. A magnetic interfacial shear rheometer was used to characterize the surface pressure-area isotherms as well as the interfacial rheological properties of two proteins: sodium caseinate and β-lactoglobulin. Sodium caseinate was then used as a carrier for β-carotene encapsulation.

β-carotene is a carotenoid that exhibits pro-vitamin A activity and antioxidant capacity and is widely used as a food colorant. It is, however, highly hydrophobic and sensitive to heat, oxygen, and light exposure. Thus, β-carotene as a food ingredient is mainly available as purified crystals or as oil-in-water emulsions. In this study, β-carotene stability and solubility in water for application as a natural colorant were improved by preparing a sodium caseinate/β-carotene powder using high-pressure homogenization, solvent evaporation, and spray drying. The powders thus prepared showed good solubility in water and yielded an orange coloration from β-carotene. The effects of medium-chain triglyceride concentration (1%, 10%) and incorporation of a natural antioxidant (Duralox, Kalsec) on powder stability were studied as a function of storage time and temperature. β-carotene stability was reduced at higher storage temperatures (4 °C > 21 °C > 50 °C) over 60 days and followed first-order degradation kinetics at all temperatures. Incorporation of the natural antioxidant improved β-carotene stability and resulted in a second first-order degradation period at 50 °C. As β-carotene content decreased, Hunter Lab color values denoting lightness increased, while those for redness and yellowness of the powder decreased. This sodium caseinate based β-carotene powder could be used as a food ingredient to deliver natural β-carotene to primarily aqueous food formulations.
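For reference, first-order degradation as invoked above means the retained β-carotene follows (generic form)

```latex
C(t) = C_0\,e^{-kt}, \qquad \ln\frac{C(t)}{C_0} = -kt, \qquad t_{1/2} = \frac{\ln 2}{k},
```

so a plot of ln(C/C0) against storage time is linear with slope -k at each temperature, and a "second first-order period" appears as a change in that slope.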

In the last part of this work, an engagement workshop was developed as a means to educate young consumers about the function of emulsifiers in foods. Food additives are important for food product development; however, for consumers, a discord between their objective purpose and subjective quality has led to confusion. Food emulsifiers, in particular, are associated with lower healthiness perception due to their unfamiliar names. In collaboration with the 4H Academy at Purdue, a workshop for high school students was conducted to develop an increased understanding of emulsions and emulsifiers. A survey was conducted with the participants, who self-evaluated their gain in knowledge and tendency to perform certain behaviors with regard to food ingredient labels. The participants reported a gain in knowledge in response to four key questions on emulsions and emulsifiers, as well as an increased likelihood to read ingredients on a food label and look up information on unfamiliar ingredients.


Book chapters on the topic "Natural Hazards not elsewhere classified"

1

Santos-Reyes, Jaime. "The Risk of Tsunamis in Mexico." In Natural Hazards - Impacts, Adjustments and Resilience [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.94201.

Abstract:
The paper reviews the risk of tsunamis in Mexico. It is highlighted that the Pacific coast of the country forms part of the so-called "Ring of Fire". Overall, the tsunami risks with the potential to affect communities along the Pacific coast of the country are twofold: (a) local tsunamis, i.e., those triggered by earthquakes originating from the Cocos, Rivera, and North American plates (high risk); and (b) remote tsunamis, those generated elsewhere (e.g., Alaska, Japan, Chile) (low risk). Further, a preliminary model for a "tsunami early warning" system for the case of Mexico is put forward.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhong, Ming. "Enhancing the Community Resilience with a Network Structuring Model." In Flood Impact Mitigation and Resilience Enhancement. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.92715.

Full text
Abstract:
Community resilience is a key index describing the response of the human habitat system to hazards. Enhancing community resilience to flood disasters requires indicator identification and the establishment of a measurement system, especially for flood risk management. In this study, an advanced index framework for measuring community resilience to flood disasters is proposed, integrating the fuzzy Delphi method (FDM) and interpretive structural modeling (ISM). Based on the definition of community resilience, the indicators are classified into six dimensions: environmental, social, economic, psychological, institutional, and information and communication factors. A simplified community resilience evaluation index system is established using FDM, and the hierarchical network structure of community resilience to flood disasters is confirmed, in which the direct-influence indicators and the root-influence indicators are analyzed. The framework proposed in this study contributes to the interdisciplinary understanding of community resilience to flood disasters and to building a more resilient community; it is also expected to be extended to risk reduction for other natural hazards.
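As a concrete illustration of the ISM step described above, a minimal Python sketch (the influence matrix is hypothetical, not the study's indicator data) computes the reachability matrix as the transitive closure of the influence relations and then partitions indicators into hierarchy levels:

```python
import numpy as np

# Hypothetical binary influence matrix: A[i, j] = 1 if indicator i influences j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

# Reachability matrix: Boolean transitive closure of (A + I), Warshall-style.
R = (A + np.eye(len(A), dtype=int)) > 0
for k in range(len(A)):
    R = R | (R[:, [k]] & R[[k], :])

# Level partition: an indicator is placed at the current level when its
# reachability set equals the intersection of its reachability and
# antecedent sets.
remaining, level = set(range(len(A))), 1
while remaining:
    reach = {i: {j for j in remaining if R[i, j]} for i in remaining}
    ante = {i: {j for j in remaining if R[j, i]} for i in remaining}
    top = {i for i in remaining if reach[i] == reach[i] & ante[i]}
    print(f"Level {level}: indicators {sorted(top)}")
    remaining -= top
    level += 1
```

Indicators isolated at the deepest levels play the role of root influence indicators; those extracted first sit closest to the resilience outcome, like the direct influence indicators in the abstract.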
APA, Harvard, Vancouver, ISO, and other styles
3

Jiménez-Ruano, Adrián, Pere Joan Gelabert, Victor Resco de Dios, Cristina Vega-García, Luis Torres, Jaime Ribalaygua, and Marcos Rodrigues. "Modeling daily natural-caused ignition probability in the Iberian Peninsula." In Advances in Forest Fire Research 2022, 1214–19. Imprensa da Universidade de Coimbra, 2022. http://dx.doi.org/10.14195/978-989-26-2298-9_184.

Full text
Abstract:
In the European Mediterranean region, natural-caused wildfires are a small fraction of total ignitions. Lightning strikes are the most common source of non-human fires and are strongly tied to specific synoptic conditions and patterns associated with atmospheric instability, such as dry thunderstorms. Likewise, lightning-related ignitions are often associated with dry fuels and dense vegetation layers. In the case of the Iberian Peninsula, the confluence of these factors favors recurrent lightning fires in the eastern Mediterranean mountain ranges. However, under appropriate conditions lightning fires can start elsewhere, holding the potential to propagate over vast distances. In this work, we assessed the likelihood of ignition by leveraging a large dataset of lightning strikes and historical fires available in Spain. We trained and tested a machine learning model to evaluate the probability of ignition given that a lightning strike reaches the ground. Our model was calibrated over the period 2009-2015 using data for mainland Spain plus the Balearic Islands. To build the binary response variable, we classified lightning strikes according to whether they triggered a fire event. For each lightning strike we extracted a set of covariates relating to fuel moisture conditions, the presence and density of the vegetation layer, and the shape of the relief. The final model was subsequently applied to forecast daily probabilities at 1x1 km resolution for the entire Iberian Peninsula. Although the model was originally calibrated in Spain, we extended the predictions to the entire Iberian Peninsula; by doing so, we were able to validate our outputs against the Portuguese dataset of recent natural-caused fires (larger than 1 ha) from 2001 to 2021. Overall, the model attained strong predictive performance, with a median AUC of 0.82. Natural-caused ignitions were triggered mainly under low dead fuel moisture conditions (dFMC 250). Lightning strikes with negative polarity seem to trigger fires more frequently when the mean density of discharges was greater than 5. Finally, natural wildfires usually started at higher elevations (above 500 m a.s.l.).
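As a sketch of the modeling pipeline this abstract describes, the following example trains a binary classifier on synthetic lightning-strike covariates and scores it with AUC. The feature names, values, and random-forest choice are illustrative assumptions; the authors' actual covariates and learner are not specified here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Hypothetical covariates per lightning strike.
X = np.column_stack([
    rng.uniform(5, 35, n),     # dead fuel moisture content (%)
    rng.uniform(0, 1, n),      # vegetation density (fraction of cover)
    rng.uniform(0, 2500, n),   # elevation (m a.s.l.)
    rng.integers(0, 2, n),     # polarity (0 = positive, 1 = negative)
])
# Synthetic response: drier fuels and higher elevation raise ignition odds.
logit = -1.0 - 0.15 * X[:, 0] + 2.0 * X[:, 1] + 0.001 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")  # the paper reports a median AUC of 0.82
```

Daily gridded forecasts would then amount to applying model.predict_proba to the covariates of every 1x1 km cell.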
APA, Harvard, Vancouver, ISO, and other styles
4

Naorem, Anandkumar, Shiva Kumar Udayana, Jaison Maverick, and Sachin Patel. "Soil Microbe-Mediated Bioremediation: Mechanisms and Applications in Soil Science." In Industrial Applications of Soil Microbes, 133–50. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/9789815039955122010013.

Full text
Abstract:
Bioremediation is a prominent and novel technology among decontamination approaches because of its economic practicability, enhanced efficiency, and environmental friendliness. The continuously deteriorating environment, burdened by pollutants, can be addressed through various sustainable microbial processes. Bioremediation uses microorganisms such as bacteria and fungi, green plants, or their enzymes to restore the natural environment altered by contaminants to its native condition. Contaminant compounds are altered by microorganisms through reactions that occur as part of their metabolic processes. Bioremediation technologies can be broadly classified as in situ or ex situ. In situ bioremediation involves treating the pollutants at the site, while ex situ bioremediation involves removing the pollutants for treatment elsewhere. This chapter deals with several aspects, including a detailed description of bioremediation, the factors affecting it, the role of microorganisms, the different microbial processes and mechanisms involved in the remediation of contaminants, and types of bioremediation technologies such as bioventing, land farming, bioreactors, composting, bioaugmentation, biofiltration, and biostimulation.
APA, Harvard, Vancouver, ISO, and other styles
5

McElroy, Michael B. "Power from the Sun Abundant But Expensive." In Energy and Climate. Oxford University Press, 2016. http://dx.doi.org/10.1093/oso/9780190490331.003.0015.

Full text
Abstract:
As discussed in the preceding chapter, wind resources available from nonforested, nonurban, land-based environments in the United States are more than sufficient to meet present and projected future US demand for electricity. Wind resources are comparably abundant elsewhere. As indicated in Table 10.2, a combination of onshore and offshore wind could accommodate prospective demand for electricity for all of the countries classified as top-10 emitters of CO2. Solar energy reaching the Earth's surface averages about 200 W m−2 (Fig. 4.1). If this power source could be converted to electricity with an efficiency of 20%, as little as 0.1% of the land area of the United States (3% of the area of Arizona) could supply the bulk of US demand for electricity. As discussed later in this chapter, the potential source of power from the sun is significant even for sun-deprived countries such as Germany. Wind and solar energy provide potentially complementary sources of electricity in the sense that when the supply from one is low, there is a good chance that it may be offset by a higher contribution from the other. Winds blow strongest typically at night and in winter. The potential supply of energy from the sun, in contrast, is highest during the day and in summer. The source from the sun is thus better matched than wind to respond to the seasonal pattern of demand for electricity, at least for the United States (as indicated in Fig. 10.5). There are two approaches available to convert energy from the sun to electricity. The first involves using photovoltaic (PV) cells, devices in which absorption of radiation results directly in production of electricity. The second is less direct. It requires solar energy to be captured and deployed first to produce heat, with the heat used subsequently to generate steam, the steam applied then to drive a turbine. The sequence in this case is similar to that used to generate electricity in conventional coal, oil, natural gas, and nuclear-powered systems. The difference is that the energy source is light from the sun rather than a carbon-based fossil fuel or fissionable uranium.
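The 0.1% land-area figure in this abstract can be reproduced with back-of-the-envelope arithmetic; the demand and land-area numbers below are approximate values inserted for illustration, not taken from the book:

```python
# Rough check of the land-area claim with approximate figures.
solar_flux = 200.0             # W/m^2, mean solar power at the surface
efficiency = 0.20              # assumed conversion efficiency
us_demand_twh_per_yr = 4000.0  # approximate annual US electricity use
us_land_m2 = 9.1e12            # approximate US land area in m^2

avg_power_w = us_demand_twh_per_yr * 1e12 / 8760           # average demand, W
area_m2 = avg_power_w / (solar_flux * efficiency)          # collector area
print(f"area needed: {area_m2 / 1e6:,.0f} km^2")           # ~11,000 km^2
print(f"fraction of US land: {area_m2 / us_land_m2:.2%}")  # ~0.13%
```

The result, on the order of 0.1% of US land area, is consistent with the claim in the text.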
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Natural Hazards not elsewhere classified"

1

Caputo, Antonio C., and Alessandro Vigna. "Numerical Simulation of Seismic Risk and Loss Propagation Effects in Process Plants: An Oil Refinery Case Study." In ASME 2017 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/pvp2017-65465.

Full text
Abstract:
Process plants are vulnerable to natural hazards and, in particular, to earthquakes. Nevertheless, the quantitative assessment of the seismic risk of process plants is a complex task because available methodologies developed in the fields of civil and nuclear engineering are not readily applicable to process plants, while technical standards and regulations do not establish any procedure for the overall seismic risk assessment of industrial process plants located in earthquake-prone areas. This paper details the results of a case study performing a seismic risk assessment of an Italian refinery with an 85,000-barrels-per-day production capacity and a storage capacity of over 1,500,000 m3. The analysis was carried out using a novel quantitative methodology developed in the framework of a European Union research program (INDUSE 2 SAFETY). The method is able to systematically generate potential starting scenarios, deriving from simultaneous interactions of the earthquake with each separate piece of equipment, and to account for the propagation of effects between distinct equipment (i.e., domino effects), keeping track of multiple simultaneous and possibly interacting chains of accidents. In the paper, the methodology, already described elsewhere, is briefly summarized, and numerical results are presented showing relevant accident chains and expected economic losses, demonstrating the capabilities of the developed tool.
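The scenario-generation and domino-propagation logic the abstract describes can be illustrated with a toy Monte Carlo sketch; all failure probabilities, escalation probabilities, and losses below are hypothetical placeholders, not the INDUSE 2 SAFETY methodology itself:

```python
import random

# Per-unit probability of direct seismic failure (hypothetical).
P_SEISMIC_FAIL = {"tank_A": 0.10, "tank_B": 0.08, "column_C": 0.05}
# P_ESCALATE[u][v]: probability that failure of u damages v (fire, blast, ...).
P_ESCALATE = {
    "tank_A": {"tank_B": 0.30, "column_C": 0.10},
    "tank_B": {"tank_A": 0.25},
    "column_C": {},
}
LOSS = {"tank_A": 5.0, "tank_B": 4.0, "column_C": 8.0}  # hypothetical, M EUR

def simulate_event(rng: random.Random) -> float:
    """One earthquake: direct failures, then domino chains to completion."""
    failed = {u for u, p in P_SEISMIC_FAIL.items() if rng.random() < p}
    frontier = set(failed)
    while frontier:
        new = {v for u in frontier for v, p in P_ESCALATE[u].items()
               if v not in failed and rng.random() < p}
        failed |= new
        frontier = new
    return sum(LOSS[u] for u in failed)

rng = random.Random(1)
losses = [simulate_event(rng) for _ in range(100_000)]
print(f"expected loss per seismic event: {sum(losses) / len(losses):.2f} M EUR")
```

A full assessment would condition the direct-failure probabilities on fragility curves and ground-motion intensity, which this toy model omits.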
APA, Harvard, Vancouver, ISO, and other styles
2

Marchenko, Nataliya. "Navigation in the Russian Arctic: Sea Ice Caused Difficulties and Accidents." In ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/omae2013-10546.

Full text
Abstract:
The five Russian Arctic seas have common features but differ significantly from each other in sea ice regime and navigation specifics. Navigation in the Arctic is a big challenge, especially during the winter season. It is nevertheless necessary, because natural resources that would be easier to exploit elsewhere on Earth are limited. Sea ice is therefore an important issue for future development. We foresee that the Arctic may become ice free in summer as a result of global warming, so that even light yachts will be able to pass through the Eastern Passage; there have been several such examples in recent years. But sea ice is an inherent feature of the Arctic seas in winter and a permanent feature of the Central Arctic Basin. That is why it is important to acquire appropriate knowledge about sea ice properties and operations in ice conditions. Four seas, the Kara, Laptev, East Siberian, and Chukchi, have been examined in the book "Russian Arctic Seas. Navigation Condition and Accidents", Marchenko, 2012 [1]. The book is devoted to the eastern sector of the Arctic, with a description of the seas and accidents caused by heavy ice conditions. The traditional physical-geographical characteristics, information about the navigation conditions and the main sea routes, and reports on accidents that occurred in the 20th century have been reviewed. An additional investigation has been performed for more recent accidents and for the Barents Sea. Considerable attention has been paid to problems associated with sea ice arising from the present development of the Arctic. Sea ice can significantly affect shipping, drilling, and the construction and operation of platforms and handling terminals. Sea ice is present in the main part of the eastern Arctic seas for most of the year. The Barents Sea, which is strongly influenced and warmed by the North Atlantic Current, has a natural environment that is dramatically different from those of the other Arctic seas. The main difficulties in the Barents Sea are produced by icing and storms and, in the north, icebergs. The ice jet is the most dangerous phenomenon in the main straits along the Northern Sea Route and in the Chukchi Sea. The accidents in the Arctic seas have been classified, described, and connected with weather and ice conditions. The behaviour of the crew is taken into consideration. The following types of ice-induced accidents are distinguished: forced drift, forced overwintering, shipwreck, and serious damage to the hull in which the crew, sometimes with the help of other crews, could still save the ship. The main causes of shipwrecks and damage are impacts with ice floes (often in rather calm ice conditions), ice nipping (compression), and drift. Such an investigation is important for safety in the Arctic.
APA, Harvard, Vancouver, ISO, and other styles
3

Wan, Ping K., and Alice C. Carson. "Design- and Operating-Bases Regional Meteorological Conditions for Permitting New Nuclear Power Plants in the United States." In 17th International Conference on Nuclear Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/icone17-75111.

Full text
Abstract:
Power generation is well recognized as a major prerequisite for a country’s economic development. Nuclear power has become an increasingly attractive alternative in the power market worldwide due to several factors: growing demand for electric power, increasing global competition for fossil fuels, concern over greenhouse gas emission impacts on global warming, and the desire for energy independence. Protecting people and the environment is of concern to nuclear power generators. Thus, sound engineering design that provides adequate protection against natural and man-made hazards is of utmost importance. Meteorological parameters related to structure design and system operation are the extreme and mean values for wind speed, temperature, humidity, and precipitation, as well as the seasonal and annual frequencies of severe weather conditions such as tornadoes and hurricanes, ice and snow accumulation, hail and lightning. Inherent in ascertaining values for these parameters is the need for reasonable assurance that the chosen values and frequencies will not be exceeded during the expected life of the plant. All regional meteorological and air quality conditions are classified as climate site characteristics for consideration in evaluating the design and operation of a nuclear power plant [1]. This paper discusses the regulatory requirements, methodology and sources of data for development of the design- and operating-basis regional meteorological conditions used in preparing a Combined License Permit Application (COLA) in the United States. Additionally, the differences in methodology for determination of these meteorological conditions by reactor type (i.e., Advanced Passive 1000–AP1000, Advanced Boiling Water Reactor–ABWR, Economic Simplified Boiling Water Reactor–ESBWR, U.S. Evolutionary Power Reactor–U.S. EPR, and Advanced Pressurized Water Reactor–APWR) are explored and summarized.
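Design-basis values of the kind discussed above are commonly obtained by fitting an extreme value distribution to annual maxima and reading off a return level. A minimal sketch with synthetic annual-maximum wind speeds (the Gumbel choice and all numbers are illustrative assumptions, not drawn from the paper):

```python
import numpy as np
from scipy import stats

# Synthetic annual-maximum wind speeds (m/s) at a hypothetical site.
rng = np.random.default_rng(7)
annual_max = stats.gumbel_r.rvs(loc=28.0, scale=4.0, size=50, random_state=rng)

# Fit a Gumbel distribution and estimate the 100-year return level,
# i.e., the speed exceeded with annual probability 1/100.
loc, scale = stats.gumbel_r.fit(annual_max)
v100 = stats.gumbel_r.ppf(1.0 - 1.0 / 100.0, loc=loc, scale=scale)
print(f"estimated 100-year design wind speed: {v100:.1f} m/s")
```

Regulatory practice adds further steps around this statistical core, such as requirements on the length and representativeness of the site data record.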
APA, Harvard, Vancouver, ISO, and other styles