Dissertations / Theses on the topic 'GEOSCIENZE'

Consult the top 50 dissertations / theses for your research on the topic 'GEOSCIENZE.'

1

Cefis, Matteo <1976>. "LA CARTOTECA DELLA BIBLIOTECA DI GEOSCIENZE DI PADOVA NELL’ERA DIGITALE." Master's Degree Thesis, Università Ca' Foscari Venezia, 2016. http://hdl.handle.net/10579/9377.

Abstract:
This thesis analyses how cartographic material is handled within the library, both from a cataloguing perspective and in relation to the new technologies and innovations of the digital era. The first chapter attempts to answer some crucial questions about the future of the map: does it still make sense to acquire, catalogue and preserve maps in printed form? What novelties has the digital era brought to cartographic science? While the advent of the internet, of web 2.0 and of new technologies such as GIS has transformed cartography so deeply that a new term, "neogeography", has been coined, we still live in a hybrid age in which, especially in academia, the printed map retains its own usefulness and cannot be replaced by its digital version. The second chapter analyses the map cataloguing process, relating the various reference standards and their evolution over time. At the end of the chapter, with the help of data from a questionnaire administered to users of the Geosciences library of Padua, the limits of traditional OPACs are analysed both with respect to the peculiarities of searching for maps in a catalogue and with respect to new web trends (semantic web and linked data). The third chapter examines the concrete case of the Geosciences map library, highlighting its critical issues and opportunities: some solutions are proposed that could help the library integrate fully into the digital era.
2

Boccali, Chiara. "Caratterizzazione e modellazione di colate detritiche." Doctoral thesis, Università degli studi di Trieste, 2014. http://hdl.handle.net/10077/10126.

Abstract:
2012/2013
This research joins the many studies that for some decades now have addressed debris flows, highly dangerous and destructive phenomena that have shaped, and still shape, the morphology of the secondary drainage network, increasing hydrogeological risk in the mountain basins of the Alpine arc and beyond. The mountain basins of Friuli Venezia Giulia are particularly vulnerable from a hydrogeological point of view because of strong orographic, geological-structural and seismic constraints, which bring together all the negative factors that trigger debris-flow phenomena. This work focused on the easternmost sector of the Alpine arc in Friuli Venezia Giulia: the Fella river basin and, in detail, the Val Canale, which over the last century has been affected by about 20 "exceptional" flood events with consequent triggering of debris flows. The most vivid memory is certainly that of the last flood event, recorded at the end of August 2003, which from the Val Canale to the Canal del Ferro generated more than 1,100 landslide and flood phenomena, causing the loss of two human lives and damage estimated at around one billion euros. That disastrous event was followed by numerous specific studies by the regional Civil Protection, the Geological Survey and several expert teams, aimed at defining and delimiting the phenomena and, above all, the hazard areas, as well as at designing suitable mitigation works to prevent future damage. Precisely because of the abundance of existing data and, unfortunately, of the abundance and frequency of debris-flow phenomena in this area of the Alpine arc, the research set out to identify a sound methodology for characterizing debris-flow basins and their deposits. Such a method should allow a numerical modelling as close as possible to reality, because it is based on real data rather than only on back-analysis information, which is often deduced by applying literature models developed in entirely different areas of the Alps, Europe and the world. With these aims, six basins in the Val Canale and Val Pontebbana were selected, similar in the behaviour of their flood phenomena but different in lithology and morphology, especially in the light of the remediation works carried out over the last decade. The methodology first comprises a thorough bibliographic search, aimed at acquiring a complete knowledge of each basin and its "history". Field surveys were then carried out along the torrent channels, from the deposition zone to the triggering zone, as a function of the morphology of the channels and banks and of the stability of the deposits. During the field surveys, samples of detrital material were collected. The choice of the sampling points proved to be one of the key points of the research and is linked to the extreme heterogeneity of debris-flow deposits, which are typically poorly sorted and include material from very fine to metre-sized and beyond. To obtain as complete a picture as possible of the grain-size distribution of the deposits, the basins were sampled both in the triggering zone and in the deposition zone.
The sampling phase was followed by a laboratory phase comprising grain-size, mineralogical and rheological analyses. The grain-size analyses reconstruct the grain-size curve of the samples, identifying the percentage of each grain-size class, and are the first step in differentiating deposits collected at different positions along the torrent channel. The mineralogical analyses, aimed at recognizing the minerals present in the finest fraction of the sample (< 4 µm), make it possible to verify the genetic homogeneity of the deposits of a given basin and to detect the possible presence of clay minerals, which can affect the rheological behaviour of the water-sediment mixture underlying debris-flow phenomena. In parallel with the grain-size and mineralogical characterization, rheological analyses were performed. Their main purpose is the determination of the rheological parameters (yield stress and viscosity), which govern the behaviour of the fluids and underlie the subsequent numerical modelling. In the rheological approach, debris flows are treated as homogeneous viscoelastic fluids; that is, the flow behaviour is assumed to be controlled by the properties of the so-called matrix, the mixture of water and fine particles in which the coarser ones are dispersed. Given these premises, the rheological analyses on the samples collected during this doctorate were performed only on the finest fraction of the material (< 62 µm), using a rheometer with knurled parallel plates 35 mm in diameter (Rheostress Haake RS150, Haake GmbH, Germany). The samples were prepared at different solid volume concentrations, whose choice depends essentially on the stability of the sample on the lower plate of the instrument. The experimental data were then correlated by exponential regression to obtain the rheological coefficients needed for the subsequent numerical modelling. The last phase of the study was devoted to the numerical modelling of debris-flow phenomena. The physical-mathematical modelling of debris flows mainly aims at determining the possible propagation area of the phenomena, as well as their velocity and energy, with the final objective of delimiting hazard areas and providing information useful for the design of suitable mitigation works. In this work the simulations were performed with the two-dimensional hydraulic model FLO-2D, which can be used to simulate water flow in channels, including very wide ones, or to simulate non-Newtonian flows on alluvial fans. In line with the objective of identifying a methodology suited to characterizing debris-flow basins even in the absence of back-analysis data, the numerical simulations were performed with the twofold purpose of verifying the actual area of influence of the phenomena and of testing the values of the rheological parameters derived from the samples collected along the studied torrents. For each basin, the information obtained from the laboratory analyses and from the numerical modelling was compared in order to identify which characteristics of the deposits affect the simulation results and how they condition the behaviour of the debris flow.
In conclusion, it was possible to highlight the pros and cons of the experimental methodology and to define its applicability to the study of debris flows.
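The exponential regression mentioned in this abstract is the standard way of linking matrix rheology to solid volume concentration for FLO-2D-style modelling. The following minimal Python sketch illustrates the fit; the measurement values are entirely hypothetical, not data from the thesis.

```python
# Minimal sketch: exponential regression of yield stress and viscosity
# against solid volume concentration Cv, of the form y = a * exp(b * Cv),
# as used to derive rheological coefficients for debris-flow modelling.
import numpy as np

# Hypothetical rheometer measurements on the fine (< 62 um) matrix.
cv = np.array([0.20, 0.25, 0.30, 0.35, 0.40])          # solid volume conc. (-)
tau_y = np.array([1.8, 4.9, 13.5, 36.0, 98.0])         # yield stress (Pa)
eta = np.array([0.05, 0.12, 0.31, 0.78, 2.00])         # viscosity (Pa*s)

def exp_fit(x, y):
    """Fit y = a * exp(b * x) by linear least squares on log(y)."""
    b, log_a = np.polyfit(x, np.log(y), 1)
    return np.exp(log_a), b

a1, b1 = exp_fit(cv, tau_y)
a2, b2 = exp_fit(cv, eta)
print(f"tau_y = {a1:.3g} * exp({b1:.3g} * Cv)")
print(f"eta   = {a2:.3g} * exp({b2:.3g} * Cv)")
```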
XXVI Ciclo
3

Tiberi, Lara. "Tomografia crostale della Pianura Padana e calibrazione di procedure di localizzazione." Doctoral thesis, Università degli studi di Trieste, 2014. http://hdl.handle.net/10077/10125.

Abstract:
2012/2013
Earthquakes constitute a recurring natural disaster all over the Italian territory, and therefore focused civil defence interventions are extremely important. The rapidity of such interventions strongly depends on the production of fast and possibly real-time locations of the seismic events. Precise locations are also needed to identify seismogenic faults. For these two aspects, an upgrade of the existing monitoring systems is fundamental to improve the quality of automatic locations in quasi-real-time mode. The main purpose of this study is the production of a routine that accurately locates seismic events in real time. The quality of the locations strongly depends on the correct determination of the P- and S-phases. Sometimes it is hard to recognize the correct onset of a phase, since the signal can be blurred by various causes, such as the complexity of the generating fault mechanism and the presence of natural or man-made noise. For this reason we have studied, analyzed and compared different phase-picking and location methods. The picking algorithms that were evaluated are the Short Time Average over Long Time Average ratio (STA/LTA) and the Akaike Information Criterion (AIC) function. The first is a common technique used to distinguish the seismic signal from noise. It is based on the continuous calculation of the average absolute amplitude of a seismic signal in two moving time windows of different lengths: the short-time average and the long-time average. The STA/LTA ratio is compared with a threshold value; when the ratio is larger than this threshold, the onset of a seismic signal is detected. The main disadvantage of this method is its instability, due to the choice of parameters: a too-long STA window can cause local events to be missed, whereas a too-short STA window can cause the detection of man-made seismic noise. A high STA/LTA threshold records fewer events than have actually occurred but eliminates false triggers; a lower threshold detects more events but records false triggers more frequently. This algorithm is part of the detection procedure of Antelope (BRTT, Boulder), used in this study. The AIC function is a precise and sophisticated methodology, a revision of the classical maximum-likelihood estimation procedure (Akaike, 1974), designed for the statistical identification of model characteristics. Its most classical application consists in selecting the best among several competing models; the maximum-likelihood estimate of the model parameters gives the minimum of the AIC function. It is strictly tied to the correct choice of the time window in which to apply the function, so it must be combined with other techniques in order to choose a correct window automatically. This dependence on other methods makes the application of the AIC function to phase detection a complex methodology, which can be affected by errors in the parameter choices. The AIC function is used in the AutoPicker procedure (Turino et al., 2012). In a seismic signal the minimum of the AIC function identifies the P- or S-onset. In this automatic phase picker the time window in which to apply the function is chosen, for P-phases, by a combination of a band-pass filter and an envelope time function, used as an "energy" detector to select the event in the waveform; for S-phases, the selection of the window is guided by a preliminary location based on the P-phases.
Once the P- and S-phases are identified, they must be processed in order to locate the seismic event. In Antelope the location procedure is called orbassoc. It reads the picks, determined through the STA/LTA technique, and tries to produce an event location over three possible grids: teleseismic, regional and local. The solution that produces the minimum set of travel-time residuals (differences between synthetic and observed travel times) is considered the best one. In the AutoPicker the location algorithm is Hypoellipse (Lahr, 1979), in which the travel times are estimated from a horizontally layered velocity structure and the hypocenter is calculated using Geiger's method (Geiger, 1912) to minimize the root mean square (rms) of the travel-time residuals. In order to improve the location quality, in this work we also used location methodologies other than absolute ones such as Hypoellipse. HypoDD (Waldhauser et al., 2000) is a relative algorithm: the locations depend either on the location of a master event or on a station site. This method can be applied only when the hypocentral separation between two earthquakes is small compared to the event-station distance and to the scale length of the velocity heterogeneities; in such cases the ray paths between the source region and a common station are similar along almost the entire ray path. In order to test the performance of the AutoPicker, we applied it to a database of 250 events recorded in 2011 by C3ERN - the Central Eastern European Earthquake Research Network [Department of Mathematics and Geosciences (DMG), Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Agencija RS za okolje (ARSO) and Zentralanstalt für Meteorologie und Geodynamik (ZAMG)] - at the Alps-Dinarides contact. The proposed automatic picker appears to be a useful tool for automatically assigning P and S onset times to detected seismic signals for the purpose of rapid hypocenter calculations. These encouraging results allowed us to proceed to compare this new picking methodology with another one, tested and used daily by us in real time to detect and locate events: the Antelope software. The complexity of the tectonic environment influences ray tracing and consequently the event locations. In regions where many seismogenic structures are present, a precise location of a seismic sequence is essential in order to understand which fault is the generating one. In such cases a 1-D velocity model might not be sufficient, and a 3-D velocity model is a better solution to describe the studied area. Travel-time tomography is a common technique to obtain a 3-D velocity model from event locations. In this study we chose a local earthquake tomography (LET) (Aki, 1982). The travel-time tomography and the 3-D event location were performed, respectively, using the Computer Aided Tomography for 3D models (Cat3D) software (Cat3D manual, 2008) and the Non Linear Location (NonLinLoc) software (Lomax et al., 2000) through an iterative procedure. Cat3D is normally used in active seismics, but in this study it was applied to a seismological case. The main difference between active seismics and seismology is the unknowns in the tomographic system: in seismology the source location is an unknown parameter with a high uncertainty, while in active seismics the source locations are well defined.
In this study, the introduction of the source locations into the tomographic system introduces uncertainties in both the ray tracing and the travel-time estimation. In order to reduce this uncertainty, we used an iterative procedure composed of the application of tomography and the relocation of the events in the resulting 3-D velocity model. After the occurrence of the Emilia seismic sequence in May-June 2012, we decided to investigate it as an interesting study case. The sequence started on May 20 (02:03:53 UTC), with a ML 5.9 earthquake, preceded by a ML 4.1 foreshock three hours earlier (Scognamiglio et al., 2012). The aftershock sequence comprised thousands of earthquakes, six of them with ML ≥ 5.0. Among these, a ML 5.8 earthquake on May 29 (07:00:03 UTC) probably caused more damage than the first shock. Through the study of this seismic sequence we tested the performance of the automatic picking algorithms. To do so, we manually picked and located some of the major events of the sequence. These events are characterized by P-phases, and especially S-phases, that are really difficult to detect, probably because the fault system of the Emilia earthquake area is complex. Moreover, the complexity of the tectonic environment, along with the focal-depth uncertainty, makes the event locations problematic, because it is not always easy to assess which fault has moved. The Emilia sequence occurred in the central, roughly E-W trending sector of the Ferrara arc, belonging to the external fold-and-thrust system of the Northern Apennines belt. The Ferrara arc is structured into two major fold-and-thrust systems: the Ferrara system in the northeast and the Mirandola system, located in a more internal position to the southwest (Govoni et al., 2014). We processed the P- and S-onsets in order to locate the seismic sequence using different velocity models found in the literature: Bragato et al. (2011), Ciaccio et al. (2002), Costa et al. (1992), "Iside", Zollo et al. (1995), Malagnini et al. (2012), Massa (Rapporto DPC-INGV S1-2013) and four geological models proposed by Lavecchia et al. (in prep.). The idea is to produce a set of clustered event locations with the lowest residuals, in order to understand which fault is the generating one in this complex system of faults. This work is being performed in collaboration with the Università di Chieti and the Department of Civil Defence (DPC). From the hypocentral distribution, it seems that the Mirandola thrust was not involved during the Emilia sequence, whereas the internal and middle segments of the Ferrara thrust system were activated by the May 29 and May 20 seismic sequences, respectively. The complexity of the Emilia sequence area requires the calculation of a three-dimensional velocity model in order to locate the events more precisely. As already said, we elaborated an iterative procedure of travel-time tomography and 3-D event location, through the use of the Cat3D and NonLinLoc software, in collaboration with OGS. This was done to minimize the uncertainties introduced into the tomographic system by the unknown source locations. Since the seismic sequence covers only a small part of this region (about 30x30 km^2 wide and 0-20 km deep), the investigated area is limited to its upper crustal part. As initial velocity models we chose those - Costa et al. (1992), Massa et al. (2013) and NewModel1 (Lavecchia et al., in prep.) - that had vertical errors lower than one km.
The best velocity model is the one obtained using Massa et al. (2013) as initial model, which shows rms values lower than the others. The three resulting 3-D Vp models show similar characteristics: a surface layer (0-5 km) of low Vp, about 1.8 km/s, and a thick layer (5-20 km) of 5.5 km/s. The tomographic results for the Vs models present a common shallow layer (0-3 km) of low velocity (about 1 km/s) and a thick layer (3-13 km) characterized by a Vs value of about 3.0 km/s. The three sets of solutions, from the different velocity models, are comparable within the error range. The locations for the May 20, 2012 main shock are more scattered than those for the May 29 one. A possible reason could be the installation of temporary stations in the near field of the sequence after May 20, 2012: for the May 29 event, in fact, we have many more waveforms than for the previous main shock, and all of them in the near field. We calculated the rms for each event in order to discriminate one velocity model from another on the basis of location quality; we obtained three similar rms trends, so we were not able to single out a best velocity model. The event locations from the 3-D tomographic models are less scattered than those computed from the 1-D ones, although the locations of the two main shocks are quite similar, and in depth the two sets of solutions do not differ in a significant way. To improve the quality of the location procedure in our data centre, we would like to install a precise and rapid automatic procedure. Therefore, we compared the AutoPicker method with a more tested and solid one, the Antelope picking method, on the Emilia seismic sequence data, using the manual picks and locations as reference. This comparison is of fundamental importance to understand which of the two algorithms better detects phases and/or locates events. Our aim is, in fact, to merge these two techniques to obtain a better detector and locator. AutoPicker finds more phases, and more precise ones, than Antelope, for both P- and especially S-phases. Despite that, the associator process in Antelope is able to correctly associate the detections and to find the correct location. These results suggest implementing the AutoPicker algorithm in the Antelope procedure, using the AutoPicker to define the P- and S-onsets and Antelope to associate them and locate the events. With the improvement of seismic networks and the possibility of storing huge amounts of data, it is necessary to produce big databases, in order to have rapid access to the data and to re-process them in real-time or quasi-real-time mode. For big databases, manual picking is an onerous and time-consuming task. The possibility of having a good-quality automatic tool for phase recognition and picking, producing results similar to those obtained from the tomographic inversion using manually picked phases, is certainly convenient and useful. For this reason, we compared two travel-time tomographic inversions made with the same technique as the previous analysis, differing only in the input phase files: the first obtained from the manual picks, the second from the automatic AutoPicker picks of the Emilia sequence. The results indicate an increase of the average rms both in the locations and in the tomography.
Despite that, the three-dimensional velocity models (Vp, Vs and Vp/Vs) are comparable; it therefore seems that the location errors do not influence the tomographic results but rather the precision of the tomographic system. So for a large database it is possible to use automatic phases as input to a travel-time tomography, obtaining results similar to those obtained using manually picked phases.
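The two pickers compared in this abstract are both compact enough to illustrate in code. The following minimal Python sketch implements a basic trailing-window STA/LTA trigger and a Maeda-type AIC onset estimator on a synthetic trace; it is not the thesis' Fortran/Antelope implementation, and all window lengths, the threshold and the synthetic signal are illustrative assumptions.

```python
# Minimal sketch of the STA/LTA trigger and the AIC onset picker described
# above, on a synthetic trace. All parameters are illustrative assumptions.
import numpy as np

def sta_lta_trigger(signal, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
    """Return sample indices where the trailing STA/LTA ratio crosses the threshold."""
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    amp = np.abs(signal)
    sta = np.convolve(amp, np.ones(n_sta) / n_sta)[:amp.size]  # short trailing mean
    lta = np.convolve(amp, np.ones(n_lta) / n_lta)[:amp.size]  # long trailing mean
    ratio = sta / np.maximum(lta, 1e-12)
    ratio[:n_lta] = 0.0  # discard the start-up transient of the long window
    return np.flatnonzero((ratio[1:] >= threshold) & (ratio[:-1] < threshold)) + 1

def aic_onset(window):
    """Maeda-style AIC computed directly on a waveform window; its minimum marks the onset."""
    x = np.asarray(window, dtype=float)
    n = x.size
    k = np.arange(1, n - 1)
    aic = np.array([i * np.log(np.var(x[:i]) + 1e-12) +
                    (n - i - 1) * np.log(np.var(x[i:]) + 1e-12) for i in k])
    return k[np.argmin(aic)]

# Synthetic test: Gaussian noise with a 5 Hz burst starting at t = 30 s.
fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(t.size)
trace[int(30 * fs):int(32 * fs)] += np.sin(2 * np.pi * 5.0 * t[:int(2 * fs)])

print(sta_lta_trigger(trace, fs) / fs)          # trigger time(s), close to 30 s
window = trace[int(29.5 * fs):int(30.5 * fs)]   # window straddling the onset
print(29.5 + aic_onset(window) / fs)            # AIC onset estimate, close to 30 s
```

As the abstract notes, the AIC picker only works well when the analysis window actually straddles the onset, which is why it is combined with an envelope-based event detector in the AutoPicker.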
XXVI Ciclo
4

Bratus, Antonio. "MONITORAGGIO DI DISSESTI FRANOSI CON METODOLOGIA INTEGRATA BASATA SULL'USO DI SISTEMA RADAR INTERFEROMETRICO TERRESTRE (GBSAR)." Doctoral thesis, Università degli studi di Trieste, 2015. http://hdl.handle.net/10077/10925.

Abstract:
2013/2014
The critical analysis of landslide monitoring using ground-based radar interferometry was the aim of this doctoral thesis in geosciences. The project stems from the possibility of combining the needs of a body responsible for monitoring landslides, the availability of innovative non-invasive technologies, and their feasibility in the regional context. The idea of using and critically analysing the results of a series of monitoring campaigns was therefore taken as the guideline for this doctoral research in geosciences. As part of its natural-disaster prevention activities, the Geological Survey of the Autonomous Region of Friuli Venezia Giulia, of which the author is a member, decided to activate the monitoring of three landslides located in its territory, using surface measurements performed with ground-based interferometric radar, with the aim of identifying zones characterized by slope movements, so as to:
• integrate previous knowledge on the shape and extent of the moving mass, as well as on the distribution of pressures and stresses;
• determine the differential displacements of the landslide area;
• estimate the velocity field and its interrelation with external factors such as rainfall or temperature.
The sites selected for this monitoring plan are characterized by different types of instability and different boundary conditions, and were designated following this guiding criterion. Considering the heterogeneity of the regional territory, the chosen sites were:
• Ligosullo (UD): the urban centre of Ligosullo, characterized by generalized instability with deformation rates of the order of a few cm/year;
• Cimolais (PN): a rock wall characterized by localized rockfall phenomena;
• Erto e Casso, locality La Pineda (PN): part of the accumulation of a palaeo-landslide of Monte Salta, an evolving badland zone characterized by widespread shallow landslides.
The reasons for selecting ground-based interferometric radar as the main monitoring method lie in the key characteristics of the technique, namely:
• it is a remote system that measures displacements of the unstable front without, in principle, the need to install artificial reflectors and therefore to access the unstable zone directly;
• it provides displacement maps of the entire slope;
• it measures in near real time: the acquired data can be processed automatically and the results delivered almost in real time (with a few minutes' delay with respect to acquisition);
• it measures in any weather condition, by day and by night, thanks to the use of a radar system;
• it delivers high-accuracy measurements (between a tenth of a millimetre and a millimetre, depending on distance), an outcome of applied space research, determining the displacement of an object by comparing the phase of the electromagnetic waves reflected by the object at different instants in time.
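The last point is the standard radar-interferometry relation between phase and displacement. A minimal sketch of that relation follows; the Ku-band wavelength and the phase value are illustrative assumptions, not values from the monitored sites.

```python
# Minimal sketch of the GBSAR measurement principle described above:
# line-of-sight (LOS) displacement from the interferometric phase difference
# between two acquisitions, d = -(wavelength / (4*pi)) * delta_phi.
# The factor 4*pi accounts for the two-way travel path; the sign convention
# varies between processors.
import numpy as np

wavelength = 0.0174     # m, typical Ku-band GBSAR wavelength (assumed)
delta_phi = np.pi / 2   # rad, phase change measured between two scans (assumed)

d_los = -(wavelength / (4.0 * np.pi)) * delta_phi
print(f"line-of-sight displacement: {d_los * 1e3:.2f} mm")  # about -2.2 mm
```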
XXVI Ciclo
5

Bezzi, Annelore. "LE BARENE DELLA LAGUNA DI MARANO E GRADO: ANALISI DEGLI ASPETTI MORFO-EVOLUTIVI NELLA PROSPETTIVA GESTIONALE." Doctoral thesis, Università degli studi di Trieste, 2014. http://hdl.handle.net/10077/10375.

Abstract:
2012/2013
The salt marshes of the Grado and Marano lagoon were studied through two different approaches: 1) a collection of morphological and sedimentological data at 13 sample sites; 2) a macroscale investigation based on the comparison of aerial photographs (1954, 1990, 2006) and the application of different GIS analysis techniques. The integration of the two approaches first of all made it possible to propose a classification of the marshes according to their morphological and sedimentological characteristics: lagoon-margin, channel-margin, back-barrier, recent para-lagoonal basin, and isolated marshes. It also led to a definition and quantification of the evolutionary processes under way and of their causes. The significant decline in marsh area, amounting to 145 ha (16% of the 1954 extent), appears very small and limited to the second time interval once the quantitatively most important forcing (direct human action, 175 ha) is excluded. The areal variations found are in fact the result of losses and gains at different scales, often able to compensate one another, which were classified into several morpho-evolutionary types, each associated with a forcing. The proposed methodology is original and proves suitable for the analysis of areas with a certain scarcity of data. The topological analysis of the individual marshes shows that the most significant erosional phenomena are, in decreasing order: drowning (the combined effect of eustatism, regional subsidence and autocompaction), erosion by boat-induced waves, the action of wind waves, and processes linked to coastal dynamics. Accretion processes are instead attributable to fluvial inputs, to inputs linked to tidal currents, and to accretion in the recent para-lagoonal basins and in former fish farms. The lack of a unitary behaviour of the whole lagoon and the temporal differentiation suggest that short-term forcings predominate over long-term ones, first of all eustatism. The comparison with the hypsometric data obtained from the 1966-2011 bathymetric comparison nevertheless confirms the transgressive trend under way in the whole lagoon, with widespread deepening mainly affecting the intertidal bottoms, but also the differences existing between the basins. The sediment-budget data also show a relationship with the marsh area variation in each basin. An integral part of the work is a geodatabase containing all the data and information on the marshes; it can be a valuable support tool in management and territorial planning decision-making. To this end, a first set of management philosophies and intervention strategies was identified, associated with the different erosional/accretional types recognized.
XXVI Ciclo
6

Spagnul, Stefano. "Sviluppo e sperimentazione di metodi innovativi per l'analisi del segnale con applicazioni alla geofisica ed ai controlli non distruttivi." Doctoral thesis, Università degli studi di Trieste, 2015. http://hdl.handle.net/10077/10923.

Abstract:
2013/2014
Steel semi-finished products are manufactured in several stages, including melting, solidification and hot deformation. Solidification normally takes place in the continuous casting machine. In this process the liquid steel, held in a large vessel called a ladle, is poured into a buffer vessel, called a tundish, which distributes it to the casting lines. In the ladle the steel is covered with liquid slag to protect it from oxidation, to reduce heat loss and to capture floating impurities. As the ladle is nearly empty, the risk that slag is dragged towards the tundish increases. At present a human operator, relying on his own senses, decides autonomously when to interrupt the flow; it would instead be useful to have automatic systems able to replace the human operator reliably. This research explores experimentally the possibility of applying analysis techniques commonly used in geophysics to identify the catastrophic event represented by the passage of slag. To understand in greater detail the physical aspects involved in the phenomenon, the study of computational techniques typical of CFD was also undertaken. The research activity involved the development of a real-time hardware and software system that was used both in real plants and in a scaled water model. The results obtained, applying the wavelet transform and the instantaneous-attribute method based on the Hilbert transform, demonstrated the validity of the approach and open the way to future developments.
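The instantaneous attributes named in this abstract are a standard construct from seismic signal analysis. The minimal Python sketch below computes them from the analytic signal; the decaying 50 Hz burst is an illustrative assumption, not plant data.

```python
# Minimal sketch of the instantaneous attributes mentioned above, computed
# from the analytic signal via the Hilbert transform (synthetic data only).
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.exp(-3.0 * t) * np.sin(2 * np.pi * 50.0 * t)  # decaying 50 Hz burst

analytic = hilbert(x)                          # x + i * H{x}
envelope = np.abs(analytic)                    # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))          # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

print(f"median instantaneous frequency: {np.median(inst_freq):.1f} Hz")  # ~50 Hz
```

A sudden change in the envelope or in the instantaneous frequency of the acoustic/vibration signal is the kind of signature one would look for to flag the slag passage.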
XXVII Ciclo
7

Mazza, Isabella. "Sviluppo e sperimentazione metodi innovativi di imaging e caratterizzazione ad alta risoluzione basati su onde di Rayleigh." Doctoral thesis, Università degli studi di Trieste, 2015. http://hdl.handle.net/10077/10922.

Abstract:
2013/2014
This doctoral thesis continues and deepens the research I began in my Master's thesis in Physics, in which I studied numerically and experimentally the propagation of ultrasonic Rayleigh waves on metal surfaces and in layered media. In the present work I investigated deconvolution techniques based on Wiener filters, with the aim of extending their application to the processing of the ultrasonic echograms typical of non-destructive testing. In this context I developed an optimized version of the spiking deconvolution algorithm, which proved effective in reducing the reverberation of experimental echograms. The signal-processing research was complemented by an experimental study that allowed me to examine some fundamental aspects of ultrasound physics and technology. In particular, I devoted a substantial part of the research to the construction and optimization of ultrasonic probes for the generation of longitudinal and Rayleigh waves; I selected the optimal materials in a dedicated study conducted in collaboration with the research group of Prof. Cesàro at the Department of Life Sciences of the University of Trieste, which analysed the thermal properties of the various probe materials through heat-flux differential scanning calorimetry (DSC) measurements. After the construction, characterization and optimization phase, I used the ultrasonic probes to study experimentally the generation, propagation and reception of ultrasound in multiphase layered systems consisting of solid and liquid domains. In particular, I examined the interaction mechanism of ultrasonic bulk and Rayleigh waves with a low-melting alloy in the liquid state. The experimental study allowed me to acquire the operational and critical sensitivity needed for a correct interpretation of the measured signals, laying the groundwork for the subsequent signal-processing analysis and for the development of an algorithm for reverberation reduction. The algorithm was implemented in Matlab and validated on synthetic and experimental signals, demonstrating its effectiveness in reducing reverberation and increasing the temporal resolution of the signal. The results could be developed further in a future study, applying them to the tomographic reconstruction of complex interfaces of interest in non-destructive testing.
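The author's optimized Matlab algorithm is not reproduced here, but a minimal textbook version of Wiener spiking deconvolution (a least-squares inverse filter derived from the trace autocorrelation via the Toeplitz normal equations, stabilized by pre-whitening) can illustrate the idea. Filter length, pre-whitening level and the synthetic trace below are assumptions.

```python
# Minimal sketch of spiking (Wiener) deconvolution: an inverse filter is
# solved from the trace autocorrelation and applied to compress the wavelet
# and reduce reverberation. Parameters are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(trace, n_filter=40, prewhitening=0.01):
    """Least-squares spiking deconvolution with white-noise stabilization."""
    acf = np.correlate(trace, trace, mode="full")[trace.size - 1:]
    r = acf[:n_filter].copy()
    r[0] *= 1.0 + prewhitening        # pre-whitening stabilizes the inversion
    rhs = np.zeros(n_filter)
    rhs[0] = acf[0]                   # desired output: a spike at zero lag
    f = solve_toeplitz(r, rhs)        # symmetric Toeplitz normal equations
    return np.convolve(trace, f)[:trace.size]

# Synthetic reverberating trace: sparse reflectivity * ringing wavelet + noise.
rng = np.random.default_rng(1)
refl = np.zeros(500)
refl[[60, 200, 330]] = [1.0, -0.7, 0.5]
wavelet = np.exp(-0.08 * np.arange(80)) * np.cos(0.5 * np.arange(80))
trace = np.convolve(refl, wavelet)[:500] + 0.01 * rng.standard_normal(500)

out = spiking_decon(trace)  # should show sharper pulses near samples 60/200/330
```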
XXVII Ciclo
8

Zoppè, Giuliana. "Seismological contribution to Passo della Morte landslide characterization (North Eastern Italy)." Doctoral thesis, Università degli studi di Trieste, 2015. http://hdl.handle.net/10077/10924.

Abstract:
2013/2014
In the north-east of Italy, along the left side of the Tagliamento River, a large area named Passo della Morte is involved in several landslides. The western part is characterized by sub-vertical limestone layers and a dense system of discontinuities and fractures, which create ideal conditions for rapid rockfalls. The eastern part of this area is affected by several slides which differ in movement direction and velocity; their origin is linked to the reactivation of an ancient landslide, and the most active blocks move up to 5 cm/year. The main risks are the possible consequences of the landslides' interaction with the national road that crosses this area and with the Tagliamento River. A microseismicity and acoustic-emission monitoring system was installed in the western part of Passo della Morte, inside a tunnel, to monitor the signals generated by rock-slope deformation and to recognize precursory phenomena. The system consists of five sensors: two for seismic-signal identification and three for acoustic-emission detection. A surface seismometer and a borehole accelerometer make up the seismic station; three waveguides with their piezoelectric transducers make up the acoustic-emission stations. The orientation of the borehole accelerometer was evaluated indirectly: a Fortran90 program was developed that applies the maximum cross-correlation method, using the surface seismometer as a reference for the relative orientation estimate, and the analysis of several earthquakes allowed the reconstruction of the accelerometer orientation. The microseisms and acoustic emissions detected by the monitoring system were analyzed and correlated with rainfall in order to evaluate its effects on the rock mass; a direct relation between different rainfall events and sharp increases in microseisms and acoustic emissions was found. Site effects were investigated by analyzing seismic noise and earthquake recordings with the Horizontal to Vertical Spectral Ratio method. The seismic noise recorded at different sites along the reactivated landslide was analyzed in order to detect the landslide slip surface and reconstruct its geometry, but the data processing did not give the expected results. The analysis of the earthquakes recorded by the surface seismometer showed the absence of amplifications able to bring the unstable rock mass into resonance. The cooperation of our research group with the Civil Protection Department in Rome made it possible to apply to the San Giuliano di Puglia seismic stations the same methods used at Passo della Morte, in order to estimate the relative orientation of the accelerometers and to evaluate site effects. The correct orientation of the borehole accelerometers was verified using the program developed, with the surface accelerometer as reference. The Nakamura method was applied at each station to determine the respective resonant frequencies. The resonant frequency found for one site was linked to the bedrock depth. The H/V spectral ratio and reference-station methods were applied at each station in order to investigate which frequencies are strongly amplified; for one site the amplifications found were very high.
XXVII Ciclo
1986
APA, Harvard, Vancouver, ISO, and other styles
9

Aviani, Umberto. "Applicazione della sistematica isotopica dello Sr alla tracciabilità e alla qualificazione di prodotti vitivinicoli: studio sul Prosecco veneto." Doctoral thesis, Università degli studi di Trieste, 2014. http://hdl.handle.net/10077/10127.

Full text
Abstract:
2012/2013
The applicability of Sr isotope systematics to the qualification and geographic-origin traceability of wine products, in this case the Veneto Prosecco DOC, is investigated. Grape samples from ten different vineyards and from the 2010, 2011 and 2012 harvests are characterised, together with the corresponding soil samples. Chemical data for soils and grapes are also used in order to better understand the complex equilibria of the plant-water-soil system. The samples can be differentiated both by geographic location and by harvest year. The geographic variations are attributed to the different geology of the sites, the year-to-year variations to atmospheric and/or anthropogenic inputs of Sr. A positive correlation is observed between the isotopic composition of the grapes and that of the labile fraction of the corresponding soil, allowing a first-order model that predicts the grape value from the soil value. The different grape components (skins, juice, seeds and stalk) show overlapping isotopic compositions, confirming the absence of isotopic fractionation. Sr isotope systematics is confirmed as a good technique for characterisation and traceability problems, ideally within a multivariate framework. Nevertheless, the dynamics between soil and biosphere are highly complex and variable, which often translates into large margins of error that must be taken into account in modelling and when drawing conclusions.
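The soil-to-grape prediction mentioned in the abstract amounts to a simple regression between the 87Sr/86Sr ratio of the soil's labile fraction and that of the grapes. A minimal sketch, with invented numbers purely for illustration (the real data are in the thesis):

    import numpy as np

    # Hypothetical 87Sr/86Sr ratios: labile soil fraction vs. grapes, one pair per vineyard.
    soil = np.array([0.7085, 0.7091, 0.7102, 0.7110, 0.7124])
    grape = np.array([0.7087, 0.7090, 0.7104, 0.7112, 0.7121])

    # Ordinary least squares fit grape = a * soil + b, then a first-order prediction error.
    a, b = np.polyfit(soil, grape, 1)
    rmse = np.sqrt(np.mean((grape - (a * soil + b)) ** 2))
    print(f"slope={a:.3f}, intercept={b:.4f}, RMSE={rmse:.5f}")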
XXVI Ciclo
1985
APA, Harvard, Vancouver, ISO, and other styles
10

Ricchezza, Victor J. "Framing Geologic Numeracy for the Purpose of Geoscience Education: The Geoscience Quantitative Preparation Survey." Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7908.

Full text
Abstract:
The Geoscience Quantitative Preparation Survey (GQPS) was developed to address a deficiency in the available literature regarding the competency and preparation of early-career geologists in geoscience job-related quantitative skills – namely, geologic numeracy. The final version of the GQPS included self-confidence, usage, satisfaction, and demographic sections. The GQPS was expected to produce data that would allow for an evaluation of the geologic numeracy of early-career geologists and the success of approximately 20 years of increased focus on quantitatively literate geoscience graduates. The self-confidence section of the GQPS included quantitative methods and quantitative skills. The usage section asked whether participants used methods or skills from the confidence section in both work and non-work settings. Satisfaction items asked how satisfied participants were with the quantitative preparation they received as undergraduates, relative to career needs, and included items on quantitative problem solving, quantitative communication, and computers. Limited demographic information was collected, including time since bachelor's graduation, years of related experience, undergraduate alma mater, current job status and field, and highest level of education. Satisfaction values for quantitative problem solving and quantitative communication indicate that respondents were largely satisfied with their undergraduate preparation, with values slightly higher for the geoscience department than for the university as a whole. Satisfaction items related to the use of computers were nearly uniform across all response levels and were not indicative of satisfaction (or any other particular response). Demographic responses indicate it is reasonable to make some generalizations to the overall population of early-career geologists. Early-career geologists in the sample population showed indications of geologic numeracy. This result indicates that the educational trend of the last 20 years of focus on quantitatively literate geoscience graduates has had some success, although this focus cannot be compared to prior years due to lack of data. The GQPS was successful in answering its research questions, but requires validation as a complete scale before it is likely to be used by outside parties.
APA, Harvard, Vancouver, ISO, and other styles
11

Apel, Marcus. "A 3d geoscience information system framework." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2009. http://nbn-resolving.de/urn:nbn:de:swb:105-3300478.

Full text
Abstract:
Two-dimensional geographical information systems are extensively used in the geosciences to create and analyse maps. However, these systems are unable to represent the Earth's subsurface in three spatial dimensions. The objective of this thesis is to overcome this deficiency, to provide a general framework for a 3d geoscience information system (GIS), and to contribute to the public discussion about the development of an infrastructure for geological observation data, geomodels, and geoservices. Following the objective, the requirements for a 3d GIS are analysed. According to the requirements, new geologically sensible query functionality for geometrical, topological and geological properties has been developed, and the integration of 3d geological modeling and data management system components in a generic framework has been accomplished. The 3d geoscience information system framework presented here is characterized by the following features: - Storage of geological observation data and geomodels in an XML database server. According to a new data model, geological observation data can be referenced by a set of geomodels. - Functionality for querying observation data and 3d geomodels based on their 3d geometrical, topological, material, and geological properties was developed and implemented as a plug-in for a 3d geomodeling user application. - For database queries, the standard XML query language has been extended with 3d spatial operators. The spatial database query operations are computed using an XML application server which was developed for this specific purpose. This technology allows sophisticated 3d spatial and geological database queries. Using the developed methods, queries such as "Select all sandstone horizons which are intersected by the set of faults F" can be answered; this request combines a topological and a geological material parameter. The combination of queries with other GIS methods, such as visual and statistical analysis, allows geoscience investigations in a novel 3d GIS environment. More generally, a 3d GIS enables geologists to read and understand a 3d digital geomodel in the same way as they read a conventional 2d geological map.
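The example query quoted above combines a topological predicate (intersected by faults) with a material attribute (sandstone). A toy Python sketch of that combination, using axis-aligned bounding-box overlap as a crude stand-in for the framework's true 3d mesh-intersection operator (the Surface class and all names are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Surface:
        name: str
        material: str   # e.g. "sandstone" or "fault"
        bbox: tuple     # (xmin, ymin, zmin, xmax, ymax, zmax)

    def boxes_intersect(a, b):
        # Conservative proxy for a true 3d mesh-intersection test.
        return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))

    def sandstone_horizons_cut_by(faults, horizons):
        # "Select all sandstone horizons which are intersected by the set of faults F"
        return [h for h in horizons
                if h.material == "sandstone"
                and any(boxes_intersect(h.bbox, f.bbox) for f in faults)]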
APA, Harvard, Vancouver, ISO, and other styles
12

Apel, Marcus. "A 3d geoscience information system framework." Doctoral thesis, Vandoeuvre-les-Nancy, INPL, 2004. https://tubaf.qucosa.de/id/qucosa%3A22479.

Full text
Abstract:
Two-dimensional geographical information systems are extensively used in the geosciences to create and analyse maps. However, these systems are unable to represent the Earth's subsurface in three spatial dimensions. The objective of this thesis is to overcome this deficiency, to provide a general framework for a 3d geoscience information system (GIS), and to contribute to the public discussion about the development of an infrastructure for geological observation data, geomodels, and geoservices. Following the objective, the requirements for a 3d GIS are analysed. According to the requirements, new geologically sensible query functionality for geometrical, topological and geological properties has been developed, and the integration of 3d geological modeling and data management system components in a generic framework has been accomplished. The 3d geoscience information system framework presented here is characterized by the following features: - Storage of geological observation data and geomodels in an XML database server. According to a new data model, geological observation data can be referenced by a set of geomodels. - Functionality for querying observation data and 3d geomodels based on their 3d geometrical, topological, material, and geological properties was developed and implemented as a plug-in for a 3d geomodeling user application. - For database queries, the standard XML query language has been extended with 3d spatial operators. The spatial database query operations are computed using an XML application server which was developed for this specific purpose. This technology allows sophisticated 3d spatial and geological database queries. Using the developed methods, queries such as "Select all sandstone horizons which are intersected by the set of faults F" can be answered; this request combines a topological and a geological material parameter. The combination of queries with other GIS methods, such as visual and statistical analysis, allows geoscience investigations in a novel 3d GIS environment. More generally, a 3d GIS enables geologists to read and understand a 3d digital geomodel in the same way as they read a conventional 2d geological map.
APA, Harvard, Vancouver, ISO, and other styles
13

Stokes, Philip J., and Philip J. Stokes. "Diversity in Geoscience: Critical Incidents and Factors Affecting Choice of Major." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621290.

Full text
Abstract:
Geoscience attracts few African American and Hispanic/Latino students to the major and has historically not retained women at the same rate as men. Many factors have been proposed to explain these disparities, but no quantitative study had addressed geoscience diversity at the undergraduate level. To examine potential barriers to recruitment and retention, we interviewed geoscience majors from two large public universities in the U.S. and gathered 'critical incidents,' or life experiences that affected choice of a geoscience major. Critical incidents were classified by time period (when they occurred), grouped by outcome, sorted into categories, and compared by race/ethnicity and gender. Three manuscripts -- each involving different analyses of the critical incident dataset -- comprise this dissertation. Among many findings, our study showed that white, Hispanic/Latino, and African American students reported different types of experiences affecting major choice while growing up. For instance, 81% of white students reported outdoor experiences (e.g., camping, hiking) as children, whereas Hispanics (33%) and African Americans (22%) reported significantly fewer outdoor experiences from the same time period. Men and women geoscience majors also reported differences. In one example, men (92%) reported at least one positive experience involving career and economic factors; far fewer women (50%) reported the same. Our results can inform recruiting and retention practices. Geoscience programs can provide field trips for all prospective majors, target on-campus advertising towards diverse student groups, meet with academic advisors of incoming freshmen to encourage African American and Hispanic students to enroll in introductory geology courses, and provide major and career information to parents of prospective majors. To better recruit and retain women, geoscience programs can emphasize other, non-economic factors when advertising the degree, promoting internships, and developing field and academic experiences.
APA, Harvard, Vancouver, ISO, and other styles
14

Morgan, Ruth. "Forensic geoscience : sedimentary materials in forensic enquiry." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.439810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Croft, Debra Jane. "Forensic geoscience : development of techniques for soil analysis." Thesis, Royal Holloway, University of London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.406177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Hauck, Roslin V., Robin R. Sewell, Tobun Dorbin Ng, and Hsinchun Chen. "Concept-based searching and browsing: a geoscience experiment." Wiley Periodicals, Inc, 2001. http://hdl.handle.net/10150/105489.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
In the recent literature, we have seen the expansion of information retrieval techniques to include a variety of different collections of information. Collections can have certain characteristics that can lead to different results for the various classification techniques. In addition, the ways and reasons that users explore each collection can affect the success of the information retrieval technique. The focus of this research was to extend the application of our statistical and neural network techniques to the domain of geological science information retrieval. For this study, a test bed of 22,636 geoscience abstracts was obtained through the NSF/DARPA/NASA funded Alexandria Digital Library Initiative project at the University of California at Santa Barbara. This collection was analyzed using algorithms previously developed by our research group: concept space algorithm for searching and a Kohonen self-organizing map (SOM) algorithm for browsing. Included in this paper are discussions of our techniques, user evaluations and lessons learned.
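The browsing side of the experiment rests on the Kohonen self-organizing map, which places similar documents near each other on a 2-d grid. A compact sketch of the standard SOM training loop (NumPy; grid size, learning-rate schedule and all names are illustrative choices, not the authors' configuration):

    import numpy as np

    def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        # data: (n_documents, n_terms) term vectors to organise on a 2-d grid.
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)
            sigma = sigma0 * (1 - t / epochs) + 0.5
            for x in rng.permutation(data):
                # Best-matching unit: the prototype closest to the input vector.
                d = np.linalg.norm(weights - x, axis=2)
                bmu = np.unravel_index(np.argmin(d), d.shape)
                # A Gaussian neighbourhood pulls nearby prototypes toward x.
                dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
                weights += lr * np.exp(-dist2 / (2 * sigma ** 2))[..., None] * (x - weights)
        return weights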
APA, Harvard, Vancouver, ISO, and other styles
17

Burrell, Shondricka. "TOWARDS A GEOSCIENCE PEDAGOGY: A SOCIO-COGNITIVE MODEL." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/569610.

Full text
Abstract:
Teaching & Learning
Ph.D.
Students attending schools in poor and historically marginalized communities lack access to curricula that combine both relevant science content and investigative practices—components the National Research Council (2012) has identified as necessary for effective learning. This lack of curricular access is also problematic in that it: (1) undermines student interest and value of the discipline; (2) fails to educate students about science issues relevant to their lived experience; and (3) hinders student preparation to convert science content into actionable knowledge (Basu & Barton, 2007; Buxton, 2010; Brkich, 2014). I have designed a pedagogical model for geoscience learning as an attempt to address this educational opportunity gap. Geoscience as a content area is particularly important because students attending schools in poor and historically marginalized communities are more likely to be exposed to poor indoor and outdoor air quality (Pastor, Morello-Frosch & Sadd, 2006), have access to poor quality drinking water (Balazs, Morello-Frosch, Hubbard, & Ray, 2011; Balazs & Ray, 2014), and attend schools located near or on brownfields (areas of high exposure to environmental hazards) (Pastor, Sadd & Morello-Frosch, 2004). Given an overall concern for environmental justice (Pais, Crowder & Downey, 2014) and more specific concerns about recent cases of water quality in Flint, Michigan (BBC, 2016) and the greater Philadelphia area (Milman & Glenza, 2016; Rumpler & Schlegel, 2017), the topic of water quality has curricular relevance and potential to engage students in learning geoscience. Based on the pedagogical model, I designed both a water-quality themed transformative learning experience (intervention) and a comparison experience focused on exploration of geoscience careers. Each experience consisted of activities totaling 220 minutes of instruction that can be completed within 5-6 traditional class periods. I applied a mixed-methods approach to examine the student-generated data from both experiences. First, I used quantitative analyses to test the efficacy of the model with respect to pre- to post- and delayed post-instruction shifts in interest; self-efficacy; and perceived value, perceived relevance, and application of Earth science content. Secondly, I examined between-group comparisons on each measure. Results of repeated measures ANOVA indicated statistically significant and meaningful shifts in knowledge for students in the intervention group, F(1, 159) = 7.34, p = .007, η² = .044 (small effect size). Though the analysis did not detect statistically significant gains in interest, results revealed statistically significant and meaningful shifts in perceived value, perceived relevance, and application of Earth science content over time by grade for both the intervention and comparison groups, F(2, 155) = 7.13, p = .001, η² = 0.84 (large effect size; Tabachnick & Fidell, 2013). I confirmed these results using structural equation modeling (SEM) and path analysis. I also applied SEM and path analysis to the student-generated data in order to test the theoretical soundness of the model. Interest, Transformative Experience (TE, operationalized as perceived value, perceived relevance, and application of Earth science content), and pre-instruction knowledge were all identified as significant pathways contributing to post-instruction knowledge. Output statistics confirmed that the model is both viable and trustworthy and indicated that it explained 34.4% of the variance.
Lastly, iterative qualitative content analysis of student written responses during the intervention revealed elements of TE with respect to perceived value, perceived relevance and application of Earth science content confirming that the intervention was transformative. This work integrated knowledge from two disciplines—geoscience and education—to present an instructional model designed to support student interest, self-efficacy, TE, and knowledge. Results have implications for science teaching and learning, specifically that contextualizing science is an effective pedagogy. Additionally, embedding both science content and scientific practices in current socio-scientific issues, including issues of environmental injustice, supports knowledge gains, positive shifts in student perception of Earth science content as relevant, valuable, and useful for problem solving; and positive shifts in student application of science content to their lives outside of the classroom context.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
18

Geosciences, University of Arizona Department of. "UA Geosciences Newsletter, (Spring 1996)." Department of Geosciences, University of Arizona (Tucson, AZ), 1996. http://hdl.handle.net/10150/296635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Geosciences, University of Arizona Department of. "UA Geosciences Newsletter, (Fall 1996)." Department of Geosciences, University of Arizona (Tucson, AZ), 1996. http://hdl.handle.net/10150/296613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Baldwin, Tammy Katherine. "Spatial Ability Development in the Geosciences." Thesis, The University of Arizona, 2003. http://hdl.handle.net/10150/249233.

Full text
Abstract:
We designed an experiment to evaluate change in students' spatial skills as a result of specific interventions. Our test subjects included high school students in earth science classes, college-level non-science majors enrolled in large-enrollment introductory geoscience courses, and introductory-level geoscience students. All students completed spatial tests to measure their ability to mentally rotate three-dimensional objects and to construct a three-dimensional object from a two-dimensional representation. Results show a steady improvement in spatial skills for all groups. They also indicate that students choosing science majors typically have much higher spatial skills as they enter college. Specific interventions to improve spatial skills included having a subgroup of the non-science majors and high school students complete a suite of Geographic Information System (GIS) activities. The intervention at the high school level was more extensive and resulted in significant improvements in both categories of spatial ability. At the college level, the non-science majors that received the intervention showed no significant difference from those that did not, probably because the time spent on the intervention was too short. The geoscience majors had nearly three times the improvement of non-science majors in both categories of spatial ability, attributed to hands-on weekly laboratory experiences. These results reveal a wide range of abilities among all groups of students, and suggest that we evaluate teaching strategies in all courses to ensure that students can interpret and understand the visual imagery used in lectures.
APA, Harvard, Vancouver, ISO, and other styles
21

Ferreira, Bruno Morais. "The dinamica virtual machine for geosciences." Universidade Federal de Minas Gerais, 2015. http://hdl.handle.net/1843/ESBF-9WVNYW.

Full text
Abstract:
This work describes DinamicaVM, the virtual machine that runs applications developed in Dinamica EGO. Dinamica EGO is a framework used in the development of geomodeling applications. Behind its multitude of visual modes and graphic elements, Dinamica EGO runs on top of a virtual machine. This machine - DinamicaVM - offers developers a rich instruction set architecture, featuring elements such as map and reduce, which are typical in the functional/parallel world. Ensuring that these very expressive components work together efficiently is a challenging endeavour. Dinamica's runtime addresses this challenge through a suite of optimizations, which borrows ideas from functional programming languages and leverages specific behavior expected in geo-scientific programs. As we show in this work, some of these optimizations deliver speedups of almost 100x and are key to the industrial-quality performance of one of the world's most widely used geomodeling tools.
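The performance argument is easiest to see with operator fusion, one of the classic functional-programming optimizations alluded to above: composing per-cell map operators into a single traversal avoids materialising an intermediate raster after every operator. A toy Python sketch of the idea (not Dinamica's actual runtime; the names and reduce interface are assumptions):

    import numpy as np

    def fused_map_reduce(raster, map_fns, reduce_fn):
        # Apply the whole pipeline of per-cell map functions in one pass,
        # folding results with reduce_fn instead of storing temporaries.
        acc = None
        for v in raster.flat:
            v = float(v)
            for f in map_fns:   # fused map stage: no intermediate rasters
                v = f(v)
            acc = v if acc is None else reduce_fn(acc, v)
        return acc

    slope = np.random.default_rng(1).random((100, 100))
    total = fused_map_reduce(slope,
                             [lambda v: v * 100, lambda v: min(v, 45.0)],
                             lambda a, b: a + b)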
APA, Harvard, Vancouver, ISO, and other styles
22

Rucker, Justin Thomas. "3D wavelet-based algorithms for the compression of geoscience data." Master's thesis, Mississippi State : Mississippi State University, 2005. http://sun.library.msstate.edu/ETD-db/ETD-browse/browse.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Carabajal, Ivan G. "Understanding Field-Based Accessibility from the Perspective of Geoscience Departments." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1504802786772905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ribes, David. "Universal informatics building cyberinfrastructure, interoperating the geosciences /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC IP addresses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3237063.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed December 7, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 447-467).
APA, Harvard, Vancouver, ISO, and other styles
25

Stephenson, John Alexander. "Non-stationary spatial statistics in the geosciences." Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Le, Hai Ha. "Spatio-temporal information system for the geosciences." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2014. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-154510.

Full text
Abstract:
The development of spatio-temporal geoscience information systems (TGSIS) as the next generation of geographic information systems (GIS) and geoscience information systems (GSIS) was investigated with respect to the following four aspects: concepts, data models, software, and applications. These systems are capable of capturing, storing, managing, and querying data of geo-objects subject to dynamic processes, thereby causing the evolution of their geometry, topology and geoscience properties. In this study, five data models were proposed. The first data model represents static geo-objects whose geometries are in the 3-dimensional space. The second and third data models represent geological surfaces evolving in a discrete and continuous manner, respectively. The fourth data model is a general model that represents geo-objects whose geometries are n-dimensional embeddings in the m-dimensional space R^m, m >= 3. The topology and the properties of these geo-objects are also represented in the data model. In this model, time is represented as one dimension (valid time). Moreover, the valid time is an independent variable, whereas geometry, topology, and the properties are dependent (on time) variables. The fifth data model represents multiply indexed geoscience data in which time and other non-spatial dimensions are interpreted as additional spatial dimensions. To capture data in space and time, morphological interpolation methods were reviewed, and a new morphological interpolation method was proposed to model geological surfaces evolving continuously over a time interval. This algorithm is based on parameterisation techniques to locate cross-references and then compute trajectories complying with geometrical constraints. In addition, the long transaction feature was studied, and the data schema, functions, triggers, and views were proposed to implement the long transaction feature and database versioning in PostgreSQL. To implement database versioning tailored to geoscience applications, an algorithm comparing two triangulated meshes was also proposed. Therefore, TGSIS enable geologists to manage different versions of geoscience data for different geological paradigms, data, and authors. Finally, a prototype software system was built. This system uses a client/server architecture in which the server side uses the PostgreSQL database management system and the client side uses the gOcad geomodeling system. The system was also applied to certain sample applications.
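The mesh-comparison step used for database versioning can be illustrated with a set-based sketch: give each triangle an order-independent key and diff the two key sets. This is a simplification under the assumption that both mesh versions share exact vertex coordinates; the thesis' algorithm is more involved.

    def mesh_diff(tris_a, tris_b):
        # Each triangle is a tuple of three (x, y, z) vertex tuples; sorting
        # the vertices gives a canonical key independent of winding order.
        key = lambda tri: tuple(sorted(tri))
        a = {key(t) for t in tris_a}
        b = {key(t) for t in tris_b}
        return b - a, a - b   # (triangles added, triangles removed)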
APA, Harvard, Vancouver, ISO, and other styles
27

Cuddy, Steven John. "The development of genetic algorithms and fuzzy logic for geoscience applications." Thesis, University of Aberdeen, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275019.

Full text
Abstract:
This thesis describes how I have researched and developed new methods for the prediction of rock physical properties using genetic algorithms and fuzzy logic (GAFL). These techniques improve on conventional methods, providing two original but dissimilar tools for formation evaluation and reservoir characterisation. The premise behind the use of fuzzy logic in this context is that a reservoir can be broken down into several lithotypes, each having characteristic statistical distributions of electrical log values. Fuzzy logic attempts to uncover the relationships between these distributions. Genetic algorithms use a feedback technique that assumes a continuous functional relationship between the electrical log values and rock properties, generating and testing equations that fit predicted and observed responses. Complex non-linear equations are "evolved" until the best fit is obtained. Genetic algorithms provide the functional form of the equation as well as the constant parameters of the relationship. I have modified conventional GAFL techniques so that they can be more precisely calibrated and applied to geoscience problems more successfully. This research analysed the characteristics of large data sets from several North Sea and Middle Eastern fields, and led to the design of software that automatically calibrates GAFL in a way that is less sensitive to noise and data outliers. I describe the application of these new techniques to permeability, litho-facies, porosity and shear velocity prediction; the repair of poor electrical logs; and the modelling of shaly sand equations. Permeability governs the movement of fluids through reservoir rocks and is therefore a critical input into reservoir models. Permeability estimation is extremely challenging, as it is difficult to measure directly using current sub-surface logging technology. GAFL was applied to predict permeability in the Northern North Sea oil fields. The newly developed software provides an important and visual indication of the uncertainty associated with the predicted permeabilities.
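The genetic-algorithm side can be caricatured in a few lines: evolve the constants of a candidate permeability law against core measurements, keeping the best-fitting individuals each generation. A minimal Python sketch in the GAFL spirit (selection plus mutation only, crossover omitted for brevity; the exponential law and every parameter are illustrative assumptions, not the thesis' evolved equations):

    import numpy as np

    def genetic_fit(phi, perm, pop=60, gens=150, seed=0):
        # Evolve coefficients (a, b, c) of perm ~ a * exp(b * phi) + c
        # against measured core permeability `perm` and a porosity log `phi`.
        rng = np.random.default_rng(seed)
        population = rng.normal(0, 1, (pop, 3))
        misfit = lambda p: np.mean((p[0] * np.exp(p[1] * phi) + p[2] - perm) ** 2)
        for _ in range(gens):
            scores = np.array([misfit(p) for p in population])
            parents = population[np.argsort(scores)][: pop // 2]        # selection
            children = parents[rng.integers(0, len(parents), pop - len(parents))]
            children = children + rng.normal(0, 0.1, children.shape)    # mutation
            population = np.vstack([parents, children])
        return min(population, key=misfit)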
APA, Harvard, Vancouver, ISO, and other styles
28

Tomlin, Teagan L. "Using Geoscience Education Graduate Students to Help Faculty Transform Teaching Practice." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/2027.

Full text
Abstract:
Universities make claims about student learning that graduates don't often achieve and are under pressure to show improvement in teaching and learning in their undergraduate programs. This has been the constant focus of university-level professional development programs, but most teachers are still not using the most effective teaching methods. Individual departments need to find ways to help their instructors overcome three main challenges associated with adopting more effective student-centered teaching methods. No matter what strategy is adopted, instructors need considerable support to 1) change their beliefs about what constitutes effective teaching and learning, 2) learn to effectively implement new strategies, and 3) help their students change their beliefs about teaching and learning. We investigated whether M.S. Geoscience Education graduate students could offer the support instructors need to overcome the challenges listed above. We successfully piloted this approach during 2006 to 2008. Receiving consistent and individualized support from a Geoscience Education graduate student, the instructor changed his beliefs about teaching and learning and learned to effectively implement active learning strategies. His teaching satisfaction and student ratings also increased. Advantages of our approach include 1) the time the graduate student devoted to making course changes, 2) the consistent support the instructor received which allowed him to transfer research supported educational theory into his teaching practice, and 3) the instructor is now a departmental resource that other instructors can go to for guidance. Disadvantages include 1) the graduate student's lack of experience as a teaching consultant and 2) the difficulty of transforming a professor/student relationship into a client/consultant relationship.
APA, Harvard, Vancouver, ISO, and other styles
29

Gillan, Bonnie Jean. "Snow Accumulation and Melt Timing at High Elevations in Northwest Montana." The University of Montana, 2009. http://etd.lib.umt.edu/theses/available/etd-12162008-160418/.

Full text
Abstract:
The sensitivity of snowmelt-driven water supply to climate variability and change is difficult to assess in the mountain west, where strong climatic gradients coupled with complex topography are sampled by sparse ground measurements. We developed a snowmelt model which ingests daily satellite imagery and meteorological data and is suitable for application to areas greater than 1000 km², yet captures important spatial variability in steep mountain terrain. We applied the model to the Middle Fork of the Flathead Basin, a 2900 km² snowmelt-dominated watershed in northwest Montana. Time integration of the melt model yielded a history of snow water equivalent distribution for the years 2000-2008. We found that over 25% of the total annual snow falls above the highest measurement station in the basin, and over 70% falls above the mean elevation of the nine nearest SNOTEL stations. Furthermore, elevation lapse rates in snow water equivalent vary from year to year and are not described by the poorly distributed ground measurements. Consequently, scaling point measurements of snow water equivalent to describe basin conditions leads to significant misrepresentation. Numerical melt simulations performed on the basin's peak snow accumulation elucidated the control of temperature variability on snowmelt timing under modern climate and under future climate projected by downscaled GCMs. Natural temperature variability affects snowmelt timing on the order of 4 weeks, and plays an even larger role in a warmer climate. The timing of melt in a large snowpack year was found to be more susceptible to natural temperature variability than in a small snowpack year. On average, snowmelt timing occurs 24 days earlier in our projected future climate, but the range of variability is such that an overlap with today's conditions occurs as often as 50% of the time.
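The core of most snowmelt models of this kind is a temperature-index (degree-day) relation: daily melt is proportional to the excess of air temperature over a base temperature, capped by the snow still on the ground. A minimal sketch (the degree-day factor and base temperature are generic textbook values, not the calibrated parameters of this study):

    def degree_day_melt(temps_c, swe_mm, ddf=4.0, t_base=0.0):
        # ddf: melt per degree-day (mm / degC / day); swe_mm: snow water equivalent.
        melt = []
        for t in temps_c:
            m = min(max(ddf * (t - t_base), 0.0), swe_mm)
            swe_mm -= m
            melt.append(m)
        return melt   # daily melt series in mm

    print(degree_day_melt([-2.0, 1.5, 4.0, 6.5], swe_mm=20.0))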
APA, Harvard, Vancouver, ISO, and other styles
30

Kortz, Karen Melissa. "Alternative conceptions of introductory geoscience students and a method to decrease them /." View online ; access limited to URI, 2009. http://0-digitalcommons.uri.edu.helin.uri.edu/dissertations/AAI3367995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ramirez, Monica E. "Implementation of instructional processes and the examination of asynchronous complex geoscience communications." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1338875281&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Geosciences, University of Arizona Department of. "UA Geosciences Newsletter, Volume 4, Number 1 (Fall 1998)." Department of Geosciences, University of Arizona (Tucson, AZ), 1998. http://hdl.handle.net/10150/295172.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Geosciences, University of Arizona Department of. "UA Geosciences Newsletter, Volume 3, Number 1 (Fall 1997)." Department of Geosciences, University of Arizona (Tucson, AZ), 1997. http://hdl.handle.net/10150/295173.

Full text
Abstract:
Letter from the Chair / SESS Club Outreach / News Around the Department / From Zion to Bryce and Beyond / Alumni News / Viewing Live Seismic Data / R/V Gould Dedicated / Spring 1997 Degrees / A Field Trip c. 1910 / Ed McCullough and Denis Norton Retire / Faculty Awards and Honors / List of the Lost - Do You Know Where They Are? / Emeritus Profile: Joe Schreiber
APA, Harvard, Vancouver, ISO, and other styles
34

Geosciences, University of Arizona Department of. "UA Geosciences Newsletter, Volume 4, Number 2 (Spring 1999)." Department of Geosciences, University of Arizona (Tucson, AZ), 1999. http://hdl.handle.net/10150/295174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Geosciences, University of Arizona Department of. "UA Geosciences Newsletter, Volume 5, Number 1 (Fall 1999)." Department of Geosciences, University of Arizona (Tucson, AZ), 1999. http://hdl.handle.net/10150/295175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Geosciences, University of Arizona Department of. "UA Geosciences Newsletter, Volume 5, Number 2 (Spring 2000)." Department of Geosciences, University of Arizona (Tucson, AZ), 2000. http://hdl.handle.net/10150/295176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Geosciences, University of Arizona Department of. "UA Geosciences Newsletter, Volume 3, Number 2 (Spring 1997)." Department of Geosciences, University of Arizona (Tucson, AZ), 1997. http://hdl.handle.net/10150/296633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

White, Jeremy. "Computer Model Inversion and Uncertainty Quantification in the Geosciences." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5329.

Full text
Abstract:
The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley-fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely configuration is a low-enthalpy, liquid-dominated geothermal system.
APA, Harvard, Vancouver, ISO, and other styles
39

Ledoux, Hugo. "Modelling three-dimensional fields in geoscience with the Voronoi diagram and its dual." Thesis, University of South Wales, 2006. https://pure.southwales.ac.uk/en/studentthesis/modelling-threedimensional-fields-in-geoscience-with-the-voronoi-diagram-and-its-dual(0cbb5565-4493-4f07-bd6a-3e41c47c056e).html.

Full text
Abstract:
The objects studied in geoscience are often not man-made objects, but rather the spatial distributions of three-dimensional continuous geographical phenomena such as the salinity of a body of water, the humidity of the air or the percentage of gold in the rock (phenomena that tend to change over time). These are referred to as fields, and their modelling with geographical information systems is problematic because the structures of these systems are usually two-dimensional and static. Raster structures (voxels or octrees) are the most popular solutions, but, as I argue in this thesis, they have several shortcomings for geoscientific fields. As an alternative to using rasters for representing and modelling three-dimensional fields, I propose using a new spatial model based on the Voronoi diagram (VD) and its dual, the Delaunay tetrahedralization (DT). I argue that constructing the VD/DT of the samples that were collected to study the field can be beneficial for extracting meaningful information from it. Firstly, the tessellation of space obtained with the VD gives a clear and consistent definition of neighbourhood for unconnected points in three dimensions, which is useful since geoscientific datasets often have highly anisotropic distributions. Secondly, an efficient and robust reconstruction of the field can be obtained with natural neighbour interpolation, which is entirely based on the properties of the VD. Thirdly, the tessellations of the VD and the DT make possible, and even optimise, several spatial analysis and visualisation operations. A further important consideration is that the VD/DT is locally modifiable (insertion, deletion and movement of points), which permits us to model the temporal dimension, and also to interactively explore a dataset, thus gaining insight by observing on the fly the consequences of manipulations and spatial analysis operations. In this thesis, the development of this new spatial model is presented from an algorithmic point of view, i.e. I describe in detail algorithms to construct, manipulate, analyse and visualise fields represented with the VD/DT. A strong emphasis is put on the implementation of the spatial model and, for this reason, the many degeneracies that arise in three-dimensional geometric computing are described and handled. A new data structure, the augmented quad-edge, is also presented. It permits us to store simultaneously both the VD and the DT, and helps in the analysis of fields. Finally, the usefulness of this Voronoi-based spatial model is demonstrated with a series of potential applications in geoscience.
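For readers who want to experiment, the Delaunay-based side of this idea is available off the shelf: scipy builds the Delaunay tetrahedralization of scattered 3-d samples and interpolates linearly (barycentrically) inside each tetrahedron. Note this is plain linear interpolation, a simpler Delaunay-based stand-in for the smoother natural neighbour interpolation developed in the thesis; the field values below are invented.

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    rng = np.random.default_rng(0)
    pts = rng.random((200, 3))                 # irregular 3-d sample locations
    vals = np.sin(pts[:, 0] * 6) + pts[:, 2]   # e.g. salinity measured at each sample

    # Internally builds a Delaunay tetrahedralization of the samples.
    field = LinearNDInterpolator(pts, vals)
    print(field([[0.5, 0.5, 0.5]]))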
APA, Harvard, Vancouver, ISO, and other styles
40

Dohaney, Jacqueline Anne Marie. "Educational Theory & Practice for Skill Development in the Geosciences." Thesis, University of Canterbury. Geological Sciences, 2013. http://hdl.handle.net/10092/8045.

Full text
Abstract:
A movement from the traditional to the modern in geoscience education occurs through piecemeal application of educational theory to geology teaching. This dissertation developed and examined four traditional and innovative geoscience skills-based learning activities using qualitative, quantitative and mixed methods: A. Mineralogy laboratories were designed to improve learning gains (i.e., knowledge) and students' perceptions of mineralogy topics, primarily using group work. Groups of sizes 3 and 4 were most effective (compared with pairs, and with groups of 5 and 6) in improving student collaboration. B. An inquiry-style videogame was designed and tested in order to compare learning gains to those of a geological field trip. Though learning gains were slightly higher on the field trip, some aspects of the videogame were more successful at increasing the depth and awareness of the observation skills needed. C. Field notebooks were analysed for uniqueness and completeness to quantify differences among participants' note-taking. We found that previous geologic experience, gender, and lecturer teaching styles all contributed to the students' note-taking abilities and perceptions of note-taking. D. The design research of the Volcanic Hazards Simulation resulted in identification of critical pedagogical variables that encourage students' transferable skills: a) the pace of the simulation, b) the preparedness of the students, c) the role and team authenticity and d) communication best practices. Meaningful changes to the curriculum of labs, field and experiential teaching methods resulted in the improvement of content knowledge, perceptions and skills of geoscience students. Collectively, these results suggest practical and theory-based solutions, grounded in constructivist paradigms, for improved geoscience teaching at universities.
APA, Harvard, Vancouver, ISO, and other styles
41

Raanes, Patrick Nima. "Improvements to ensemble methods for data assimilation in the geosciences." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:9f9961f0-6906-4147-a8a9-ca9f2d0e4a12.

Full text
Abstract:
Data assimilation considers the problem of using a variety of data to calibrate model-based estimates of dynamic variables and static parameters. Geoscientific examples include (i) satellite observations and atmospheric models for weather forecasting, and (ii) well-log data and reservoir flow simulators for oil production optimization. Approximate solutions are provided by the set of techniques deriving from the ensemble Kalman filter (EnKF), which combines a Monte Carlo approach with assumptions of linearity and Gaussianity. This thesis proposes some improvements to the accuracy and understanding of such ensemble methods. Firstly, a new scheme is developed to account for model noise in the forecast step of the EnKF. The main aim is to eliminate the sampling errors of additive, simulated noise. The scheme is based on the previously developed "square root" schemes for the analysis step, but requires further consideration due to the limited subspace spanned by the ensemble. The properties of the square root scheme in general are surveyed. Secondly, the "finite size" ensemble Kalman filter (EnKF-N) is reviewed. The EnKF-N explicitly considers the uncertainty in the forecast moments (mean and covariance), thereby not requiring the multiplicative inflation commonly used to compensate for an intrinsic bias of the analysis step of the standard EnKF. Thus, in the perfect model setting, it avoids the process of tuning the inflation factor. This presentation consolidates the earlier literature on the EnKF-N, substantiates the scalar inflation perspective, and rectifies a deficiency. Thirdly, two ensemble "smoothers" expressed by different recursions, used in different applications, and hitherto thought to yield different results, are shown to be equivalent. The theory is revisited under practical considerations, where equivalence is broken due to inflation and localization, but the methods remain equally capable. In each case, the theory is tested and the accuracy performance is benchmarked against standard methods using numerical twin experiments.
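For orientation, the analysis step that all these methods build on can be written down in a dozen lines. The sketch below is the stochastic (perturbed-observation) EnKF update, whose simulated observation noise is exactly the sampling-error source that the square root schemes discussed in the thesis avoid; the shapes and names are conventional, not the thesis' code.

    import numpy as np

    def enkf_analysis(E, y, H, R, rng):
        # E: (n, N) state ensemble; y: (p,) observations;
        # H: (p, n) observation operator; R: (p, p) observation error covariance.
        n, N = E.shape
        A = E - E.mean(axis=1, keepdims=True)            # state anomalies
        Y = H @ A                                        # observation-space anomalies
        S = (Y @ Y.T) / (N - 1) + R                      # innovation covariance
        K = (A @ Y.T / (N - 1)) @ np.linalg.inv(S)       # Kalman gain
        D = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
        return E + K @ (D - H @ E)                       # updated (analysis) ensemble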
APA, Harvard, Vancouver, ISO, and other styles
42

Bruun, Johanne Margrethe. "Grounding territory : geoscience and the territorial ordering of Greenland during the early Cold War." Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12491/.

Full text
Abstract:
Following recent calls for a more ‘earthly’ geopolitics, this thesis contributes to the ongoing momentum within Political Geography to add depth, volume, and matter to the concept of territory. Merging insights from Science and Technology Studies with geographical studies of territory, this thesis asks how the sciences of the Earth may serve as technologies of territory. How, in other words, might states use science to forge a seemingly stable ordering of space which is extendable through time from a world defined by chaos, instability, and incessant change? To address this question, the thesis mobilises two instances of territory construction in Greenland during the early Cold War, when two differently motivated intruding powers, Denmark and the USA, both used Earth Science as a means of territorialising Greenlandic geographies. Firstly, the high-profile case of Danish uranium prospecting at Ilímaussaq exemplifies Danish attempts at casting Greenland as a space of extraction – as land upon which the nation might capitalise. Secondly, the practices of two interrelated US military scientific expeditionary outfits are used to show how the US sought to cast Greenlandic landscapes as a military terrain serving as an extra-sovereign extension of American state space. Despite the apparent differences between these two cases, the empirical findings of this thesis complicate simplistic distinctions between land and terrain, the voluminous and the horizontal, and also between bio- and geo-political orderings of state space. Reading across these two instances of territory formation, the thesis draws attention to the temporal and processual characteristics of territory by showing how territory’s formation in Greenland was informed by a complicated interplay between stability and flow rather than a rigid ‘logic of solids’. Building on Stuart Elden’s work on territory and Elizabeth Grosz’s philosophies of Earth, this thesis thus argues that territory is, in part, a geo-political technology which allows the state to attune to the rhythmic forcefulness of Earth and draw on and over its latent power.
APA, Harvard, Vancouver, ISO, and other styles
43

Powell, Wayne G. "Iterative application of case-study analysis to the design of a Web-based geoscience lab." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq65130.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ashworth, Andrew Lee. "Predicting Southeastern forest canopy heights and fire fuel models using Geoscience Laser Altimeter System data." Master's thesis, Mississippi State : Mississippi State University, 2008. http://library.msstate.edu/etd/show.asp?etd=etd-05102008-141659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Stanley, Michael Clare. "A quantitative estimation of the value of geoscience information in mineral exploration: Optimal search sequences." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186863.

Full text
Abstract:
This research provides quantitative estimates of the value of geoscience information in the exploration for porphyry copper deposits of Arizona. As part of the study, an expert system named DISCOVERY is designed to integrate geological models, statistical decision theory, and mineral economics within a Monte Carlo simulation framework. The system requires, for each exploration survey, a probability that a simulated deposit will be detected. These detection probabilities are estimated using expert opinion from a panel of experienced geoscientists. This dissertation pioneers the application of influence allocation processes in geoscience to alleviate criticisms associated with expert opinion. The work has five major focuses: (1) an adaptation of Grayson's (1960) exploration decision theory into a modified Bayesian framework; (2) the use of electronic brainstorming to define the principal recognition features that define exploration deposit models; (3) the use of influence allocation voting tools to estimate detection probability by survey type and sampling intensity; (4) a comprehensive engineering cost model to derive the net present value of deposits simulated within the system; and (5) a comprehensive drilling model to describe optimal sampling intensity in regional exploration. The system operates using two models: (1) an estimate of the value of information based upon 'static' estimates; and (2) a 'dynamic' simulation model that replicates the activities of the exploration industry. The static model provides value estimates on a survey-by-survey basis, consistent with prevailing economic conditions. The dynamic model loosens the economic constraints in order to simulate exploration practices and determine the optimal sequence of search surveys. Collectively, these two models provide estimates of the value of information derived from exploration surveys and determine the optimum search strategy for porphyry copper deposits in Arizona. The static model produces estimates of net gain displaying a high level of consistency for each survey technology and sampling intensity across many thousands of iterations. The dynamic model does not produce satisfying results, requiring additional modifications to the Bayesian structure in order to better simulate exploration.
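The value-of-information logic is easy to reproduce with a toy Monte Carlo experiment: compare the expected net gain of drilling blindly against surveying first and drilling only on a hit. Every number below (prior deposit probability, detection and false-alarm rates, costs, the NPV distribution) is invented for illustration and has nothing to do with the calibrated DISCOVERY inputs.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    deposit = rng.random(n) < 0.05                       # 5% of targets host a deposit
    npv = rng.lognormal(mean=17.0, sigma=1.0, size=n)    # NPV if a deposit is developed
    survey_cost, drill_cost = 2e5, 2e6
    # The survey detects a real deposit with p=0.7 and false-alarms with p=0.1.
    detect = np.where(deposit, rng.random(n) < 0.7, rng.random(n) < 0.1)

    gain_blind = np.where(deposit, npv, 0.0) - drill_cost
    gain_survey = -survey_cost + detect * (np.where(deposit, npv, 0.0) - drill_cost)
    print("value of the survey information:", gain_survey.mean() - gain_blind.mean())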
APA, Harvard, Vancouver, ISO, and other styles
46

Smith, Kerry. "Performance measurement of Australian geoscientific minerals researchers in the changing funding regimes." Murdoch University, 2003. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20040625.122025.

Full text
Abstract:
The thesis examines the performance of geoscience minerals researchers from three Australian geoscientific research centres. The study explores whether the changing funding regimes for geoscientific research in Australia have impacted on the research performance of these geoscientists, measured through analysis of activity and output. The context of the study is the literature outlining the settings for the general culture of geoscientific research and, in particular, the Australian scientific policy and research environment, including an evaluation of bibliometric methods. The case study of three geoscience minerals research centres and their researchers finds that journal and book publishing is only one component of the researchers' performance and that conferences, technical reports and teaching all have an important place in the dissemination of research results. The study also finds that the use of the Institute for Scientific Information (ISI) indices not only influences the policy directions for scientific and geoscientific research funding in Australia, but also directs the ways in which the geoscientists publish. It also tends to restrict publishing output: the tail wags the dog. The study recommends: that the various ways through which research outcomes are disseminated, as well as other components of the research continuum, including the processes of education and professional activity, receive wider acceptance and recognition in Australian government policy; that the Australian geoscientific community re-assess its educational and research directions through a considered auditing and strategic planning process; and that a more comprehensive approach to the dissemination of geoscientific research outcomes into the public domain be enacted.
APA, Harvard, Vancouver, ISO, and other styles
47

Gaulton, Rachel. "Remote sensing for continuous cover forestry : quantifying spatial structure and canopy gap distribution." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3419.

Full text
Abstract:
The conversion of UK even-aged conifer plantations to continuous cover forestry (CCF), a form of forest management that maintains forest cover over time and avoids clear-cutting, requires more frequent and spatially explicit monitoring of forest structure than traditional systems. Key aims of CCF management are to increase the spatial heterogeneity of forest stands and to make increased use of natural regeneration, but judging success in meeting these objectives and allowing an adaptive approach to management requires information on spatial structure at a within-stand scale. Airborne remote sensing provides an alternative approach to field survey and has potential to meet these monitoring needs over large areas. An integral part of CCF is the creation of canopy gaps, allowing regeneration by increasing understorey light levels. This study examined the use of airborne lidar and passive optical data for the identification and characterisation of canopy gaps within UK Sitka spruce (Picea sitchensis) plantations. The potential for using the distribution of canopy and gaps within a stand to quantify spatial heterogeneity and allow the detection of changes in spatial structure, between stands and over time, was assessed. Detailed field surveys of six study plots, located in three UK spruce plantations, allowed assessment of the accuracy of gap delineation from remotely sensed data. Airborne data (multispectral, hyperspectral and lidar) were acquired for all sites. A novel approach to the delineation of gaps from lidar data was developed, delineating gaps directly from the lidar point cloud, avoiding the interpolation errors (and associated under-estimation of gap area) resulting from conversion to a canopy height model. This method resulted in improved accuracy of delineation compared to past techniques (overall accuracy of 78% compared to field gap delineations), especially when applied to lidar data collected at relatively low point densities. However, lidar data can be costly to acquire and provides little information about the presence of natural regeneration or other understorey vegetation within gaps. For these reasons, the potential of passive optical (and in particular, hyperspectral) data for gap delineation was also considered. The use of spectral indices, based on shortwave infrared reflectance or hyperspectral characteristics of the red- edge and chlorophyll absorption well, were shown to enhance the discrimination of canopy and gap and reduce the influence of illumination conditions. An average overall accuracy of 71% was obtained using hyperspectral characteristics for gap delineation, suggesting the use of optical data compares reasonably to results from lidar. Methods based on shortwave infrared (SWIR) reflectance were shown to be sensitive to within gap vegetation type, with SWIR reflectance being lower in the presence of natural regeneration. Potential for using optical data to classify within gap vegetation type was also demonstrated. Methods of quantifying spatial structure through the use of indices describing variations in gap size, shape and distribution were found to allow the detection of structural differences between stands and changes over time. Gap distribution based indices were also found to be strongly related to alternative methods based on relative tree positions, suggesting significant potential for consistent monitoring of structural changes during conversion of plantations to CCF. 
Remotely sensed delineations of canopy gap distribution may also allow spatially explicit modelling of understorey light conditions and potential for regeneration, providing further information to aid the effective management of CCF forests.
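To make the lidar-based gap mapping concrete, a minimal Python sketch follows, assuming lidar returns supplied as flat x/y/z arrays. Note that the thesis delineates gaps directly from the point cloud, whereas this simplification rasterises maximum return heights; the cell size, the 2 m gap-height threshold and the minimum gap size are illustrative assumptions, not values from the study.

import numpy as np
from scipy import ndimage

def delineate_gaps(x, y, z, cell=2.0, gap_height=2.0, min_gap_cells=3):
    """Label contiguous canopy gaps on a grid of maximum lidar return heights.

    x, y, z       : 1-D arrays of return coordinates and heights (metres)
    cell          : grid resolution (m)
    gap_height    : cells whose tallest return is below this are non-canopy
    min_gap_cells : discard gap patches smaller than this many cells
    """
    rows = ((y - y.min()) / cell).astype(int)
    cols = ((x - x.min()) / cell).astype(int)
    grid = np.zeros((rows.max() + 1, cols.max() + 1))
    np.maximum.at(grid, (rows, cols), z)    # tallest return per cell; empty
                                            # cells stay 0 and count as gap here
    gap_mask = grid < gap_height
    labels, n = ndimage.label(gap_mask)     # contiguous gap patches
    sizes = ndimage.sum(gap_mask, labels, index=range(1, n + 1))
    keep = np.flatnonzero(sizes >= min_gap_cells) + 1   # surviving label ids
    return labels, keep, sizes[keep - 1] * cell ** 2    # gap areas in m^2

The resulting patch sizes and positions are the kind of raw material from which gap size, shape and distribution indices, as described in the abstract, could then be computed.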
APA, Harvard, Vancouver, ISO, and other styles
48

Price, David P. "Statistical correlation and modelling of carbonate heterogeneity." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3137.

Full text
Abstract:
In many carbonate reservoirs, much of the porosity is in the form of micropores (with diameter 1-10 microns). This porosity lies far below the resolution of any conventional wireline logging tools and can only be observed through the analysis of extracted core. Investigating the spatial distribution of the microporosity over a large range of length scales requires accurate depth matching of extracted core to wireline data. With such a correlation, up- and down-scaling relationships can be developed between the porosity patterns observed at different length scales. The scaling relationships can then be used to infer the distribution of microporosity in regions of the borehole without extracted core. This thesis presents a new, general method for the accurate correlation of extracted core to wireline logs using their statistical properties. The method was developed using an X-ray computed tomography (CT) scan of a section of extracted carbonate core and well log data from the Fullbore MicroImager (FMI) resistivity tool. Using geological marker features, the extracted core was constrained to correspond to a 2ft (609mm) section of FMI data. Using a set of statistics (mean, variance and the range from variograms of porosity) combined in a likelihood function, the correlation was reduced to an uncertainty of 0.72" (18.29mm). When applied to a second section of core, the technique reduced the uncertainty from 2ft (609mm) down to 0.3ft (91mm). With accurate correlation between core and wireline logs, the scaling relationships required to transfer porosity information between scales could be investigated. Using variogram scaling relationships, developed for the mining industry, variograms from the CT scan were up-scaled and compared with those calculated from associated FMI data. To simulate core samples in regions of the borehole without extracted core, two statistical simulation techniques were developed. Both techniques capture two-point spatial statistics from binarised, horizontal slices of FMI data. These statistics are combined to obtain multi-point statistics, using either neighbourhood averaging or least-squares estimation weighted by variance. The multi-point statistics were then used to simulate 2-D slices of 'virtual' core. Comparisons between the two techniques, using a variety of features, revealed that neighbourhood averaging produced the most reliable results. This thesis thus enables, for the first time, core-to-log depth matching to the resolution of the logging tools employed. Spatial statistics extracted from the core and up-scaled can then be compared with similar statistics from the precisely located log data sampling the volume of rock around the borehole wall. Finally, simulations of 'virtual' core can be created using the statistical properties of the logs in regions where no core is available.
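As a rough illustration of the statistical matching idea, the Python sketch below slides a core-length window along a log porosity profile and scores each offset with a Gaussian log-likelihood on mean and variance. The function name and the tolerances sig_mu and sig_var are hypothetical, and the variogram-range term used in the thesis is omitted for brevity.

import numpy as np

def best_depth_shift(core_phi, log_phi, sig_mu=0.01, sig_var=0.005):
    """Score every offset of the core window along the log porosity profile
    with a Gaussian log-likelihood on matching mean and variance."""
    n = len(core_phi)
    mu_c, var_c = core_phi.mean(), core_phi.var()
    log_like = np.empty(len(log_phi) - n + 1)
    for k in range(len(log_like)):
        w = log_phi[k:k + n]                      # candidate log window
        log_like[k] = (-((w.mean() - mu_c) / sig_mu) ** 2
                       - ((w.var() - var_c) / sig_var) ** 2)
    return int(np.argmax(log_like)), log_like     # best offset in log samples

Applied over a geologically constrained interval, such as the 2ft section mentioned above, the peak of the likelihood curve narrows the core's plausible depth range in exactly the spirit the abstract describes.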
APA, Harvard, Vancouver, ISO, and other styles
49

Merrell, Tia L. "DEVELOPMENT OF AN INTERACTIVE MODEL PREDICTING CLIMATOLOGICAL AND CULTURAL INFLUENCES ON ANNUAL GROUNDWATER VOLUME IN THE MISSISSIPPI DELTA SHALLOW ALLUVIAL AQUIFER." MSSTATE, 2009. http://sun.library.msstate.edu/ETD-db/theses/available/etd-01072009-124509/.

Full text
Abstract:
Water volume in the shallow alluvial aquifer in the Mississippi Delta region is subject to seasonal declines and annual fluctuations caused by both climatological variability and year-to-year variations in crop water use. The most recently documented water volume decline in the aquifer is estimated at 500,000 acre-feet. Available climate, crop acreage, irrigation water use, and groundwater decline data from Sunflower County, MS are used to evaluate the climate-groundwater interactions in the Mississippi Delta region. This research produced a model that simulates the effects of climatic variability, crop acreage changes, and specific irrigation methods on consequent variations in the water volume in the aquifer. Climatic variability is accounted for in the model by predictive equations that relate annual measured plant water use (irrigation) to growing-season precipitation amounts. This derived relationship allows the application of a long-term climatological record to simulate the cumulative impact of climate on groundwater used for irrigation.
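The core of such a model can be sketched in Python as follows: fit a linear predictive equation for irrigation against growing-season precipitation, then drive it with a long-term precipitation record. The function names, units and the assumed constant recharge figure are illustrative assumptions, not values from the thesis.

import numpy as np

def fit_irrigation_model(precip_in, irrigation_af):
    """Least-squares line: annual irrigation (acre-ft) vs growing-season
    precipitation (inches); the fitted slope is typically negative."""
    slope, intercept = np.polyfit(precip_in, irrigation_af, 1)
    return slope, intercept

def simulate_volume(v0_af, precip_record_in, slope, intercept,
                    recharge_af=450_000.0):  # assumed constant annual recharge
    """Step the aquifer volume year by year: volume + recharge - irrigation."""
    volumes = [v0_af]
    for p in precip_record_in:
        withdrawal = slope * p + intercept   # wetter seasons -> less pumping
        volumes.append(volumes[-1] + recharge_af - withdrawal)
    return np.array(volumes)

Running the simulation over a multi-decade precipitation record yields an annual volume trajectory whose cumulative decline can be compared against observed figures such as the 500,000 acre-feet cited above.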
APA, Harvard, Vancouver, ISO, and other styles
50

Call, Jennifer Michelle. "The Spatial and Temporal Distributions and Thermodynamic Characteristics of Tornadoes in Mississippi." MSSTATE, 2008. http://sun.library.msstate.edu/ETD-db/theses/available/etd-02112008-160036/.

Full text
Abstract:
The vast majority of severe storm and tornado research is conducted in the natural laboratory of the Great Plains region of the United States. As a result, much of the knowledge and technology applied to storm forecasting is developed in the Great Plains environment. However, it has been shown that there is a maximum of strong and violent tornadoes in the region extending from Arkansas eastward into Alabama. In addition, various researchers have found strong severe-storm thermodynamic signatures unique to regions such as the Northeast and Mid-Atlantic. This study analyzed five decades of tornado data for the state of Mississippi. Thermodynamic results indicate that Mississippi has a tornado environment distinctly different from that of the Great Plains. The spatial distribution of the tornado events also indicates that mesoscale processes between the Earth's surface and the lower troposphere may play a significant role in determining the genesis location of violent tornadoes in the historical Delta region of Mississippi. It is anticipated that an understanding of environments unique to Mississippi tornadoes will lead to better forecasts and more comprehensive storm analysis, which will ultimately save lives and property.
APA, Harvard, Vancouver, ISO, and other styles
