
Dissertations / Theses on the topic 'Coal liquefaction Data processing'

Consult the top 15 dissertations / theses for your research on the topic 'Coal liquefaction Data processing.'

1

Leggett, Miles. "Crosshole seismic processing of physical model and coal measures data." Thesis, Durham University, 1992. http://etheses.dur.ac.uk/5623/.

Abstract:
Crosshole seismic techniques can be used to gain a large amount of information about the properties of the rock mass between two or more boreholes. The bulk of this thesis is concerned with two crosshole seismic processing techniques and their application to real data. The first part of this thesis describes the application of traveltime and amplitude tomographic processing in the monitoring of a simulated enhanced oil recovery (EOR) project. Two physical models were made, designed to simulate 'pre-flood' and 'post-flood' stages in an EOR project. The results of the tomography work indicate that it is beneficial to perform amplitude tomographic processing of cross-well data, as a complement to traveltime inversion, because of the different response of velocity and absorption to changes in liquid/gas saturations for real reservoir rocks. The velocity tomograms image the flood zone quite accurately. Amplitude tomography shows the flood zone as an area of higher absorption but does not image its boundaries as precisely, because multi-pathing and diffraction effects are not accounted for by the ray-based techniques used. Part two is concerned with the crosshole seismic reflection technique, using data acquired from a site in northern England. The processing of these data is complex and includes deconvolution, wavefield separation and migration to a depth section. The two surveys fail to pinpoint accurately the position of a large fault; the disappointing results, compared to earlier work in Yorkshire, are attributed to poorer generation of compressional body waves in harder Coal Measures strata. The final part of this thesis describes the results from a pilot seismic reflection test over the Tertiary igneous centre on the Isle of Skye, Scotland. The results indicate that the base of a large granite body consists of interlayered granites and basic rocks between 2.1 and 2.4 km below mean sea level.
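To illustrate the kind of ray-based traveltime inversion this abstract refers to, here is a minimal sketch in Python. It solves a toy two-cell, straight-ray problem by least squares; the grid, ray lengths and traveltimes are invented for illustration and are not the thesis' actual model.

```python
# Toy straight-ray traveltime tomography: solve G s = t for cell
# slownesses s, where G[i, j] is the length of ray i inside cell j.
import numpy as np

def travel_time_matrix(n_cells, rays):
    """rays: list of {cell_index: path_length_m} dicts, one per ray."""
    G = np.zeros((len(rays), n_cells))
    for i, segments in enumerate(rays):
        for j, length in segments.items():
            G[i, j] = length
    return G

# Two cells crossed by three hypothetical rays (lengths in metres).
rays = [{0: 100.0}, {1: 100.0}, {0: 70.0, 1: 70.0}]
t_obs = np.array([0.040, 0.050, 0.063])   # observed traveltimes (s)

G = travel_time_matrix(2, rays)
slowness, *_ = np.linalg.lstsq(G, t_obs, rcond=None)
print("cell velocities (m/s):", 1.0 / slowness)   # ~2500 and ~2000
```

Real crosshole tomography inverts thousands of rays over a fine grid, usually with regularization and iterative ray tracing, but the linear-algebra core is the same.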
2

Davenport, George Andrew, 1965-. "A process control system for biomass liquefaction." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/558114.

3

Jifon, Francis. "Processing and modelling of seismic reflection data acquired off the Durham coast." Thesis, Durham University, 1985. http://etheses.dur.ac.uk/9315/.

Abstract:
Off the Durham coast, the Permian succession above the Coal Measures contains limestones and anhydrite bands with high seismic velocities and reflection coefficients. The consequent reduction in penetration of seismic energy makes it difficult to determine Coal Measures structure by the seismic reflection method. Seismic data sets acquired from this region by the National Coal Board in 1979 and 1982 are used to illustrate that satisfactory results are difficult to achieve. Synthetic seismograms, generated for a simplified geological section of the region, are also used to study various aspects of the overall problem of applying the seismic technique in the area. Standard and non-standard processing sequences are applied to the seismic data to enhance the quality of the stacked sections and the results are discussed. This processing showed that in the 1979 survey, in which a watergun source and a 600m streamer were used, some penetration was achieved but Coal Measures resolution on the final sections is poor. The 1982 data set, shot along a segment of the 1979 line using a sleeve exploder source and a 150m streamer, showed no Coal Measures after processing. Synthetic seismograms, generated using the reflectivity method and a broadband source wavelet, are processed to confirm that a streamer with a length of 360 to 400m towed at a depth of 5-7.5m will be optimal for future data acquisition in the area. It is also shown that the erosion of the surface of the limestone lowers the horizontal resolution of the Coal Measures.
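The reflectivity-method synthetics used in the thesis model the full wavefield; the much simpler 1-D convolutional model below illustrates only the underlying idea (impedance contrasts produce reflection coefficients, convolved with a source wavelet). The layer properties are invented placeholders, not the Durham-coast model, though they show why a high-impedance limestone returns most of the energy.

```python
# 1-D convolutional synthetic seismogram: reflection coefficients from
# acoustic impedance contrasts, convolved with a Ricker wavelet.
import numpy as np

def ricker(freq_hz, dt, length_s=0.128):
    t = np.arange(-length_s / 2, length_s / 2, dt)
    return (1 - 2 * (np.pi * freq_hz * t) ** 2) * np.exp(-(np.pi * freq_hz * t) ** 2)

# Impedance = velocity (m/s) * density (kg/m3); values are invented.
imp = np.array([1500 * 1000.0,    # sea water
                4800 * 2700.0,    # Permian limestone/anhydrite band
                3000 * 2300.0])   # Coal Measures
rc = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1])
print("reflection coefficients:", rc)   # ~0.79 at the limestone top

dt = 0.001
series = np.zeros(500)                  # 0.5 s of two-way time
series[[120, 300]] = rc                 # reflectors at 120 ms and 300 ms
trace = np.convolve(series, ricker(40.0, dt), mode="same")
```

The near-0.8 coefficient at the first interface is the penetration problem the abstract describes: only a small fraction of the energy reaches, and returns from, the Coal Measures.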
4

Jośī, Dilīpa. "Real-time digital control for biomass liquefaction system (high pressure, temperature, microprocessor, autoclave)." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275423.

5

Liao, Tianfei. "Post processing of cone penetration data for assessing seismic ground hazards, with application to the New Madrid seismic zone." Diss., Georgia Institute of Technology, 2005. Available online: http://etd.gatech.edu/theses/available/etd-05042005-133640/.

Abstract:
Thesis (Ph. D.)--Civil and Environmental Engineering, Georgia Institute of Technology, 2006.
Mayne, Paul W., Committee Chair; Goldsman, David, Committee Member; Lai, James, Committee Member; Rix, Glenn J., Committee Member; Santamarina, J. Carlos, Committee Member.
6

Weisenburger, Kenneth William. "Reflection seismic data acquisition and processing for enhanced interpretation of high resolution objectives." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/74518.

Abstract:
Reflection seismic data were acquired (by CONOCO, Inc.) which targeted known channel interruption of an upper Pennsylvanian coal seam (Herrin #6) in the Illinois basin. The data were reprocessed and interpreted by the Regional Geophysics Laboratory, Virginia Tech. Conventional geophysical techniques involving field acquisition and data processing were modified to enhance and maintain high frequency content in the signal bandwidth. Single sweep processing was employed to increase spatial sampling density and reduce low pass filtering associated with the array response. Whitening of the signal bandwidth was accomplished using Vibroseis whitening (VSW) and stretched automatic gain control (SAGC). A zero-phase wavelet-shaping filter was used to optimize the waveform length, allowing a thinner depositional sequence to be resolved. The high resolution data acquisition and processing led to an interpreted section which shows cyclic deposition in a deltaic environment. Complex channel development interrupted underlying sediments including the Herrin coal seam complex. Contrary to previous interpretations of channel development in the study area by Chapman and others (1981) and Nelson (1983), the channel has been interpreted as having a bimodal structure leaving an "island" of undisturbed deposits. Channel activity affects the younger Pennsylvanian sediments and also the unconsolidated Pleistocene till. A limit to the eastern migration of channel development affecting the Pennsylvanian sediments considered in this study can be identified by the abrupt change in event characteristics.
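Of the whitening steps mentioned, automatic gain control is the simplest to show in code. The sketch below is a plain sliding-window AGC, not the stretched AGC (SAGC) or Vibroseis whitening used in the thesis; the window length and test trace are illustrative assumptions.

```python
# Sliding-window automatic gain control: divide each sample by the mean
# absolute amplitude in a window centred on it, balancing the trace.
import numpy as np

def agc(trace, window=101):
    kernel = np.ones(window) / window
    envelope = np.convolve(np.abs(trace), kernel, mode="same")
    return trace / np.maximum(envelope, 1e-12)   # avoid divide-by-zero

rng = np.random.default_rng(0)
decay = np.exp(-np.arange(1000) / 200.0)         # simulated amplitude loss
trace = rng.standard_normal(1000) * decay
balanced = agc(trace)                            # late arrivals restored
```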
Master of Science
7

Kwiatkowski, Terese Marie. "The miniature electrical cone penetrometer and data acquisition system." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/90934.

Abstract:
The static cone penetrometer is an in-situ testing tool which was originally developed to derive information on soil type and soil strength. More recently, it has found application in liquefaction assessment. Typical cone penetrometers are heavy duty devices which are operated with the assistance of a drill rig. However, this capacity is not necessary in the case of field studies of liquefaction, since liquefaction usually occurs at relatively shallow depths. This thesis is directed to the goal of the development of a miniature, lightweight cone penetrometer which can be used in earthquake reconnaissance studies related to liquefaction problems. The research for this thesis involved four principal objectives: 1. Develop procedures to automatically acquire and process measurements from a miniature electrical cone; 2. Develop and perform tests in a model soil-filled bin to calibrate the cone; 3. Evaluate the utility and accuracy of the cone results as a means to assess conventional soil properties; and 4. Conduct a preliminary evaluation of the cone results in the context of recently developed methods to predict liquefaction potential. The work on the first objective involved assembling, and writing software for, a microcomputer-based data acquisition system. Successful implementation of this system allowed data from the tests to be rapidly processed and displayed. Calibration tests with the cone were carried out in a four-foot-high model bin which was filled ten times with sand formed to a variety of densities. The sand used is Monterey No. 0/30, a standard material with well-known behavioral characteristics under static and dynamic loading. The test results showed the cone to produce consistent data and to be able to readily distinguish the varying density configurations of the sand. Using the results in conventional methods for converting cone data into soil parameters yielded values consistent with those expected. Liquefaction potential predictions were less satisfying, although not unreasonable. Further research is needed in this area, both to check the reliability of the prediction procedures and to confirm the ability to achieve the desired objectives.
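As a hint of the post-processing such a data-acquisition system feeds, the sketch below computes the friction ratio Rf = fs/qc, a standard first-pass indicator of soil type from cone data. The readings and the classification threshold are hypothetical, not the thesis' calibration results for Monterey No. 0/30 sand.

```python
# Friction ratio from cone tip resistance (qc) and sleeve friction (fs).
import numpy as np

def friction_ratio(qc_mpa, fs_mpa):
    return 100.0 * fs_mpa / qc_mpa   # expressed as a percentage

qc = np.array([12.0, 8.0, 1.5])      # hypothetical tip resistances, MPa
fs = np.array([0.06, 0.08, 0.06])    # hypothetical sleeve frictions, MPa

for rf in friction_ratio(qc, fs):
    soil = "clean sand" if rf < 1.0 else "silty or clayey soil"
    print(f"Rf = {rf:.2f}% -> {soil}")   # illustrative cutoff only
```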
M.S.
8

Chakraborty, Amal. "An integrated computer simulator for surface mine planning and design." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/90920.

Abstract:
In the increasingly competitive coal market, it is becoming more important for coal operators to develop mathematical models for surface mining which can estimate mining costs before the actual mining begins. The problem becomes even more acute with the new reclamation laws, as they affect surface coal mining methods, productivity, and costs. This study presents a computer simulator for a mountaintop removal type of surface mining operation. It will permit users to compare the costs associated with different overburden handling and reclamation plans. It may be used to minimize productivity losses and, perhaps, to increase productivity and consequently reduce operating costs through the design and implementation of modified mountaintop removal methods.
M.S.
9

Kumar, Arun. "Ground control ramifications and economic impact of retreat mining on room and pillar coal mines." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/49815.

Abstract:
As the coal reserves at shallow depths become exhausted, companies have to develop deeper deposits and increase percentage extraction to maintain production levels. Total extraction in room and pillar mines can only be achieved by pillar extraction. The extent of unsupported roof increases during pillar extraction, and hence the cost of ground control also increases. Nevertheless, pillar extraction, where possible, has many potential advantages, such as decreased operating cost, increased utilization of reserves, and extended life of the mine. Several variables, such as depth, mining height, rock strength, mining geometry, roof and floor conditions, and retreat mining methods, affect pillar extraction cost. Cost components of pillar extraction are classified as direct, indirect, fixed, and subsidence compensation costs. A discounted cash flow pillar extraction cost simulator has been developed and used to compute total pillar extraction cost for a variety of conditions and to explore the possibilities of optimizing ground control and retreat mining techniques to maximize the extraction ratio. The computer program computes the safe and optimum pillar dimensions and determines the suitable pillar extraction method for the computed pillar width. Pillar extraction cost components are generated and totalled using the net present value method by the simulator. The total extraction cost simulator evaluates the potential advantages of pillar extraction and tests individual variables for sensitivity to changes in other variables attributable to ground control and pillar extraction techniques. The cost of pillar extraction per ton of coal versus depth is presented by the simulator in the form of a simple nomogram. The simulator can be used to determine the economic feasibility of pillar extraction at a particular depth and in a given geologic and mining environment when the market price of mined coal is known.
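The discounted-cash-flow core of such a simulator reduces to a net-present-value calculation over the life of the pillar extraction. The sketch below shows only that core; the cash flows and the 10% discount rate are invented for illustration.

```python
# Net present value of yearly net cash flows (revenue minus direct,
# indirect, fixed and subsidence-compensation costs), discounted at `rate`.
def npv(cash_flows, rate):
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

yearly_net = [120_000, 110_000, 95_000, 80_000]   # hypothetical $/year
print(f"NPV at 10%: ${npv(yearly_net, 0.10):,.0f}")
```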
Ph. D.
10

Titus, Willard Sidney III. "Development and application of some quantitative stratigraphic techniques to the Coos Bay coalfield, a Tertiary fluvio-deltaic complex in southwestern Oregon." PDXScholar, 1987. https://pdxscholar.library.pdx.edu/open_access_etds/3730.

Abstract:
A computer technique for interpreting geophysical logs of drill-holes in quantitative lithologic terms has been developed and tested on the deposits of the late Eocene Coaledo Formation, a well-studied fluvio-deltaic complex in southwestern Oregon. The technique involves the use of induced and natural gamma logs for the separation of coal and claystone from coarse-grained detrital rocks, and the use of the ratio of resistivity and natural gamma responses (defined here as the "grain size index") to divide the coarse clastic rocks into a series of textural classes corresponding to the Udden-Wentworth particle size scale.
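A minimal sketch of the "grain size index" idea follows: divide the resistivity response by the natural-gamma response, then bin the ratio into texture classes. The class boundaries and log samples here are invented placeholders; the thesis calibrates the index against the Udden-Wentworth scale.

```python
# Grain size index (GSI) = resistivity response / natural gamma response,
# binned into coarse textural classes (illustrative boundaries).
import numpy as np

def texture_class(gsi):
    for upper, name in [(0.5, "coal/claystone"), (1.5, "siltstone"),
                        (3.0, "fine sandstone")]:
        if gsi < upper:
            return name
    return "coarse sandstone"

resistivity = np.array([20.0, 80.0, 200.0])   # hypothetical samples, ohm-m
gamma = np.array([90.0, 60.0, 50.0])          # hypothetical samples, API

for gsi in resistivity / gamma:
    print(f"GSI = {gsi:.2f} -> {texture_class(gsi)}")
```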
11

Craesmeyer, Gabriel R. "Tratamento de efluente contendo urânio com zeólita magnética [Treatment of uranium-bearing effluent using magnetic zeolite]." Repositório Institucional do IPEN, 2013. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10578.

Dissertação (Mestrado) [Master's dissertation], IPEN/D, Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP.
12

Naidoo, Simone. "Feasibility study for maize as a feedstock for liquid fuels production based on a simulation developed in Aspen Plus®." Thesis, 2018. https://hdl.handle.net/10539/25034.

Abstract:
A research report submitted in partial fulfilment of the requirements of the degree of Master of Science to the School of Chemical and Metallurgical Engineering, Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, South Africa, January 2018
South Africa's energy sector is vital to the development of its economy. Instability in the form of disruption in supply affects production costs, investments, and social and economic growth. Domestic sources are no longer able to meet the country's demands. South Africa must find a local alternative fuel source in order to reclaim stability and encourage social and economic development. Biomass is one of the most abundant renewable energy sources, and it has great potential as a fuel source. Currently biomass contributes 12% of the world's energy supply, while in some developing countries it is responsible for up to 50% of the energy supply. South Africa is the largest maize producer on the African continent. Many studies have indicated that maize and its residue contain valuable materials, and maize has the highest lower heating value in comparison to other agricultural crops. This indicates that maize can be a potential biomass for renewable energy generation in South Africa. One means of energy conversion for biomass is the process of gasification. Gasification yields the gaseous products H2, CO and CO2. Since biomass gasification involves a series of complex chemical reactions governed by a number of parameters, including flow, heat transfer and mass transfer, it is very difficult to study the process by relying on experimentation only. Numerical simulation was used to provide further insight into this process and to accelerate the development and application of maize gasification in a cost-effective and efficient manner. The objective of this study was therefore to verify and evaluate the feasibility of maize gasification and liquid fuels production in South Africa from an economic and energy perspective. The simulation model was developed in Aspen Plus® based on two thermodynamic models, the Soave-Redlich-Kwong and Peng-Robinson equations of state. All binary parameters required for this simulation were available in Aspen Plus®. The gasification unit was modelled based on a modified Gibbs free energy minimization model. Gasification of maize and downstream processing in the form of Fischer-Tropsch (FT) synthesis and gas-to-liquids (GTL) processing for liquid fuels production was modelled in Aspen Plus®. Sensitivity analyses were carried out on the process variables equivalence ratio (ER), steam to biomass ratio (SBR), temperature and pressure, to obtain the optimum gasification conditions. The optimum reactor conditions, which maximized syngas volume and quality, were found to be an ER of 0.22 and an SBR of 0.2 at a temperature of 611 °C. An increase in pressure was found to have a negative effect; therefore atmospheric conditions of 101.325 kPa were chosen in order to maximize CO and H2 molar volumes. Based on these conditions the produced syngas consisted of 35% H2, 16% CO, 24% CO2 and 3% CH4. The results obtained from gasification, based on a modified Gibbs free energy model, show closer agreement with experimental data than other simulations based on the assumption that equilibrium is reached and no tar is formed. However, these results were still idealistic, as they underpredicted the formation of CO and CH4. Although tar was accounted for as 5.5% of the total product from the gasifier (Barman et al., 2012), this may have been an insufficient estimate, resulting in the discrepancy in CO and CH4. The feasibility of maize as a feed for gasification was examined based on the quality of syngas produced in relation to the requirements for FT synthesis.
An H2/CO ratio of 2.20 was found, which is within the range of 2.1-2.56 found to support greater conversions of CO with deactivation of the FT catalyst (Lillebo et al., 2017). The syngas produced from maize was found to have a higher H2/CO ratio than conventional fossil fuel feeds, implying that maize can yield a syngas feed which is both renewable and richer in CO and H2 molar volumes. Liquid fuels generation was modelled based on experimental product distributions obtained from the literature for FT synthesis and hydrocracking. The liquid fuel production for a 1000 kg/hr maize feed was found to be 152 kg/hr LPG, 517 kg/hr petrol and 155 kg/hr diesel. The simulation of liquid fuels production via the Fischer-Tropsch synthesis and hydrocracking process showed fair agreement with the literature. Where significant deviations were found, they could be reasonably explained and supported. This simulation was found to be a suitable means to predict liquid fuels production from maize gasification and downstream processing. The feasibility of liquid fuels production from maize in South Africa was examined based on the country's resource capacity to support additional maize generation. It was found that, based on 450 000 hectares of underutilized land found in the Homelands, an additional 1.216 billion litres/annum of synthetic fuels in the form of diesel and petrol could be produced. This has the potential to supplement South African liquid fuels demand by 6% using a renewable fuel source. This fuel generation from maize will not impact food security, due to the use of underutilized arable land for maize cultivation, or impact water supply, as maize does not require irrigation. In addition, fuel generation in this manner supports the Biofuels Industry Strategy (2007) by targeting the use of underutilized land and ensuring minimal impact on food security, and it exceeds the strategy's primary objective of achieving a 2% blending rate from renewable sources. The economic feasibility of liquid fuels derived from maize was determined based on current economic conditions in 2016. Based on these conditions of 49 $/bbl Brent crude, 40 $/MT coal and 6.5 $/mmBTU natural gas at an exchange rate of R14.06 per U.S. dollar, it was found that coal, natural gas and oil processing are more economically viable feeds for fuel generation relative to maize. However, based on projected market conditions for South Africa, the R/$ exchange rate is expected to weaken further, the coal supply is expected to diminish, and the supply of natural gas is expected to be a continued issue. Based on this, maize should be considered as a feed for fuel generation to reduce the dependency on non-renewable fossil fuel sources. The energy feasibility of liquid fuels produced from maize was evaluated from a thermal energy perspective only. It was found that maize gasification and FT processing require 0.91 kg steam/kg feed. This 0.91 kg of steam accounts for the raw material feed, distillation and heating required for every 1 kg of maize processed. It was found that 2.56 kg steam/kg feed was generated from the reactor units. This was assumed to be in the form of 10 bar steam, as in this form it can be sent to steam turbines for electricity generation to assist with the overall energy efficiency of the process. In addition, the amount of CO2 (kg/kg feed) produced was examined for maize processing in comparison to the fossil fuel feeds natural gas and coal.
The CO2 production from liquid fuels processing based on a maize feed was found to be the highest, at 0.66 kg/kg feed. However, a coal feed has higher ash and fixed carbon content, indicating greater solid waste generation in the gasifier, while dry reforming of natural gas is a net consumer of CO2 but has significantly higher steam requirements in order to achieve the same H2/CO ratio as maize. This indicates that although maize results in more CO2/kg feed, it is 88% more energy efficient than dry methane reforming. Additional experimental work on FT processing using syngas derived from maize is recommended. This will assist in further verification of liquid fuels quantity, quality and process energy requirements.
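The syngas-quality check at the heart of the FT feasibility argument is simple arithmetic, sketched below using the optimum composition reported in the abstract (35% H2, 16% CO); it reproduces the quoted H2/CO ratio of 2.20 to within rounding of those percentages.

```python
# Molar H2/CO ratio of the reported optimum syngas, checked against the
# 2.1-2.56 window the study cites for Fischer-Tropsch synthesis.
composition = {"H2": 0.35, "CO": 0.16, "CO2": 0.24, "CH4": 0.03}

ratio = composition["H2"] / composition["CO"]
print(f"H2/CO = {ratio:.2f}")                      # ~2.19
print("within FT window:", 2.1 <= ratio <= 2.56)   # True
```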
13

Khesa, Neo. "Exergy analysis and heat integration of a pulverized coal oxy combustion power plant using ASPEN plus." Thesis, 2017. http://hdl.handle.net/10539/22961.

Abstract:
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in fulfillment of the requirements for the degree of Master of Science in Engineering. 21 November 2016
In this work a comprehensive exergy analysis and heat integration study was carried out on a coal-based oxy-combustion power plant simulated using ASPEN Plus. This is an extension of the work of Fu and Gundersen (2013). Several of the assumptions made in their work have been relaxed here; their impact was found to be negligible, with the results here closely matching those in the original work. The thermal efficiency penalty was found to be 9.24%, whilst that in the original work was 9.4%. The theoretical minimum efficiency penalty was determined to be 3%, whilst that in the original work was 3.4%. Integrating the compression processes and the steam cycle was determined to have the potential to increase net thermal efficiency by 0.679%. This was close to the 0.72% potential reported in the original work for the same action.
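The headline efficiency numbers combine by plain subtraction, as the sketch below shows. Only the 9.24% penalty and the 0.679% integration gain come from the abstract; the 44.0% air-fired reference efficiency is an assumed placeholder.

```python
# Oxy-combustion efficiency = air-fired reference minus the penalty;
# heat integration claws back part of the penalty.
reference_eff = 44.0      # assumed air-fired net thermal efficiency, %
penalty = 9.24            # reported thermal efficiency penalty, %
integration_gain = 0.679  # reported gain from compression/steam-cycle integration, %

oxy_eff = reference_eff - penalty
print(f"oxy-combustion: {oxy_eff:.2f}%")
print(f"with heat integration: {oxy_eff + integration_gain:.3f}%")
```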
14

Jooste, Chrisna. "Guidelines for the usability evaluation of a BI application within a coal mining organization." Diss., 2012. http://hdl.handle.net/10500/13329.

Abstract:
Business Intelligence (BI) applications are consulted by their users on a daily basis. The BI information obtained assists users in making business decisions and allows for a deeper understanding of the business and its driving forces. In a mining environment, companies need to derive maximum benefit from BI applications, and these applications therefore need to be used optimally. Optimal use depends on various factors, including the usability of the product. The documented lack of usability evaluation guidelines provides the rationale for this study. The purpose is to investigate the usability evaluation of BI applications in the context of a coal mining organization. The research is guided by the question: what guidelines should be used to evaluate the usability of BI applications? The research design included the identification of BI usability issues based on the observation of BI users at the coal mining organization. The usability criteria extracted from the usability issues were compared and then merged with general usability criteria from the literature to form an initial set of BI usability evaluation criteria. These criteria were used as the basis for a heuristic evaluation of the BI application used at the coal mining organization. The same application was also evaluated using the Software Usability Measurement Inventory (SUMI) standardised questionnaire. The results from the two evaluations were triangulated to provide a refined set of criteria. The main contribution of the study is the set of heuristic evaluation guidelines for BI applications based on these criteria. These guidelines are grouped into the following functional areas: visibility, flexibility, cognition, application behaviour, error control and help, affect, and BI elements.
Information Science
M.Sc. (Information Systems)
15

Kolathayar, Sreevalsa. "Comprehensive Seismic Hazard Analysis of India." Thesis, 2012. http://hdl.handle.net/2005/3170.

Abstract:
Planet Earth is restless, and its internal activity and the vibrations it produces, which lead to natural hazards, cannot be controlled. The earthquake is one such natural hazard, and it is the one that has affected mankind most. Most of the casualties due to earthquakes happened not because of the earthquakes as such, but because of poorly designed structures which could not withstand the earthquake forces. The improper building construction techniques adopted and the high population density are the major causes of the heavy damage due to earthquakes. The damage due to earthquakes can be reduced by following proper construction techniques, taking into consideration the appropriate forces on the structure that can be caused by future earthquakes. Seismic hazard evaluation is essential to estimate an optimal and reliable value of the possible earthquake ground motion during a specific time period. These predicted values can be an input to assess the seismic vulnerability of an area, based on which new construction and the restoration of existing structures can be carried out. A large number of devastating earthquakes have occurred in India in the past. The northern region of India, which lies along the boundary of the Indian plate with the Eurasian plate, is seismically very active. The north-eastward movement of the Indian plate has caused deformation in the Himalayan region, Tibet and North East India. Along the Himalayan belt, the Indian and Eurasian plates converge at a rate of about 50 mm/year (Bilham 2004; Jade 2004). The North East Indian (NEI) region is known as one of the most seismically active regions in the world. Peninsular India, by contrast, which is far away from the plate boundary, is a stable continental region considered to be of moderate seismic activity. Even though the activity is considered moderate in Peninsular India, one of the world's deadliest earthquakes occurred in this region (the Bhuj earthquake, 2001). The rapid drift of the Indian plate towards the Himalayas in the north-east direction at high velocity, along with its low plate thickness, might be the cause of the high seismicity of the Indian region. The Bureau of Indian Standards published a seismic zonation map in 1962 and revised it in 1966, 1970, 1984 and 2002. The latest version of the seismic zoning map of India assigns four levels of seismicity to the entire country in terms of different zone factors. The main drawback of the seismic zonation code of India (BIS-1893, 2002) is that it is based on past seismic activity and not on a scientific seismic hazard analysis. Several seismic hazard studies taken up in recent years have shown that the hazard values given by BIS-1893 (2002) need to be revised (Raghu Kanth and Iyengar 2006; Vipin et al. 2009; Mahajan et al. 2009, etc.). These facts necessitate a comprehensive study evaluating the seismic hazard of India and the development of a seismic zonation map of India based on Peak Ground Acceleration (PGA) values. The objective of this thesis is to estimate the seismic hazard of entire India using updated seismicity data and the latest methodologies. The major outcomes of the thesis can be summarized as follows. An updated earthquake catalog, uniform in moment magnitude, has been prepared for India and adjoining areas for the period up to 2010. Region-specific magnitude scaling relations have been established for the study region, which facilitated the generation of a homogeneous earthquake catalog.
By carefully converting the original magnitudes to unified MW magnitudes, a major obstacle to the consistent assessment of seismic hazards in India has been removed. The earthquake catalog was declustered to remove aftershocks and foreshocks. Out of 203448 events in the raw catalog, 75.3% were found to be dependent events, and the remaining 50317 events were identified as main shocks, of which 27146 events were of MW ≥ 4. A completeness analysis of the catalog was carried out to estimate the completeness periods of different magnitude ranges. The earthquake catalog containing the details of the earthquake events until 2010 is uploaded on the website. A quantitative study of the spatial distribution of the seismicity rate across India and its vicinity has been performed. The lower b values obtained in shield regions imply that the energy released in these regions is mostly from large magnitude events. The b value of northeast India and the Andaman-Nicobar region is around unity, which implies that the energy released is comparable for both smaller and larger events. The effect of aftershocks on the seismicity parameters was also studied. Maximum likelihood estimates of the b value from the raw and declustered earthquake catalogs show significant changes, since foreshocks and aftershocks contribute a larger proportion of low magnitude events. The inclusion of dependent events in the catalog affects the relative abundance of low and high magnitude earthquakes; greater inclusion of dependent events leads to higher b values and a higher activity rate. Hence, the seismicity parameters obtained from the declustered catalog are valid, as the declustered events tend to follow a Poisson distribution. Mmax does not significantly change, since it depends on the largest observed magnitude rather than on the inclusion of dependent events (foreshocks and aftershocks). The spatial variation of the seismicity parameters can be used as a basis to identify regions of similar characteristics and to delineate regional seismic source zones. Further, regions of similar seismicity characteristics were identified based on fault alignment, earthquake event distribution and the spatial variation of seismicity parameters. 104 regional seismic source zones were delineated, which are an essential input to seismic hazard analysis. Separate subsets of the catalog were created for each of these zones, and a seismicity analysis was done for each zone after estimating the cutoff magnitude. The frequency-magnitude distribution plots of all the source zones can be found at http://civil.iisc.ernet.in/~sitharam. There is considerable variation in the seismicity parameters and the magnitude of completeness across the study area. The b values for various regions vary from a lower value of 0.5 to a higher value of 1.5. The a values for different zones vary from a lower value of 2 to a higher value of 10. The analysis of seismicity parameters shows that there is considerable difference in the earthquake recurrence rate and Mmax across India.
The coordinates of these source zones and the estimated seismicity parameters a, b and Mmax can be directly input into probabilistic seismic hazard analysis. The seismic hazard evaluation of the Indian landmass, based on a state-of-the-art Probabilistic Seismic Hazard Analysis (PSHA) study, has been performed using the classical Cornell-McGuire approach with different source models and attenuation relations. The most recent knowledge of seismic activity in the region has been used to evaluate the hazard, incorporating uncertainty associated with different modeling parameters as well as spatial and temporal uncertainties. The PSHA has been performed with currently available data and their best possible scientific interpretation, using an appropriate instrument such as the logic tree to explicitly account for epistemic uncertainty by considering alternative models (source models, maximum magnitude in hazard computations, and ground-motion attenuation relationships). The hazard maps have been produced for horizontal ground motion at bedrock level (shear wave velocity ≥ 3.6 km/s) and compared with earlier studies such as Bhatia et al., 1999 (India and adjoining areas); Seeber et al., 1999 (Maharashtra state); Jaiswal and Sinha, 2007 (Peninsular India); Sitharam and Vipin, 2011 (South India); and Menon et al., 2010 (Tamilnadu). It was observed that the seismic hazard is moderate in the Peninsular shield (except the Kutch region of Gujarat), but the hazard in North and Northeast India and the Andaman-Nicobar region is very high. The ground motion predicted from the present study will not only give hazard values for the design of structures, but will also help in deciding the locations of important structures such as nuclear power plants. The evaluation of surface level PGA values is of very high importance in engineering design. The surface level PGA values were evaluated for the entire study area for four NEHRP site classes using appropriate amplification factors. If the site class at any location in the study area is known, then the ground level PGA values can be obtained from the respective map. In the absence of VS30 values, the site classes can be identified based on local geological conditions. Thus this method provides a simplified methodology for evaluating the surface level PGA values. PGA values for different site classes were evaluated based on the PGA values obtained from the DSHA and PSHA. This thesis also presents a VS30 characterization of the entire country based on the topographic gradient, using existing correlations. Further, a surface level PGA contour map was developed based on the same. Liquefaction is the conversion of formerly stable cohesionless soils to a fluid mass, due to an increase in pore pressure, and is prominent in areas that have groundwater near the surface and sandy soil. Soil liquefaction has been observed during earthquakes because the sudden dynamic earthquake load increases the pore pressure. The evaluation of liquefaction potential involves the evaluation of earthquake loading and of soil resistance to liquefaction. In the present work, the spatial variation of the SPT value required to prevent liquefaction has been estimated for entire India using a probabilistic methodology.
To summarize, the major contributions of this thesis are the development of region-specific magnitude correlations suitable for the Indian subcontinent and an updated homogeneous earthquake catalog for India that is uniform in the moment magnitude scale. The delineation and characterization of regional seismic source zones for a vast country like India is a unique contribution, which requires careful observation and engineering judgement. Considering the complex seismotectonic setup of the country, the present work employed multiple methodologies (DSHA and PSHA) in analyzing the seismic hazard, using an appropriate instrument such as the logic tree to explicitly account for epistemic uncertainties by considering alternative models (for the source model, Mmax estimation and ground motion prediction equations) to estimate the PGA value at bedrock level. Further, the VS30 characterization of India was done based on the topographic gradient, as a first level approach, which facilitated the development of a surface level PGA map for the entire country using appropriate amplification factors. The above factors make the present work unique and comprehensive, touching various aspects of seismic hazard. It is hoped that the methodology and outcomes presented in this thesis will be beneficial to practicing engineers and researchers working in the areas of seismology and geotechnical engineering in particular, and to society as a whole.
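One quantitative building block of the seismicity analysis described above is the maximum-likelihood b-value estimate. The sketch below implements the standard Aki-Utsu estimator with the usual half-bin correction for binned magnitudes; the catalog is synthetic test data with a true b of about 1.0, not the thesis catalog.

```python
# Aki-Utsu maximum-likelihood Gutenberg-Richter b value:
#   b = log10(e) / (mean(M) - (Mc - dm/2))  for magnitudes M >= Mc,
# where dm is the magnitude binning width.
import numpy as np

def b_value_aki_utsu(mags, m_c, dm=0.1):
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

rng = np.random.default_rng(1)
beta = 1.0 * np.log(10.0)                  # true b = 1.0
raw = 3.95 + rng.exponential(1.0 / beta, size=5000)
mags = np.round(raw, 1)                    # bin to 0.1 magnitude units
print(f"estimated b: {b_value_aki_utsu(mags, m_c=4.0):.2f}")   # ~1.0
```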