Doctoral dissertations on the topic "Trigger analysis"

Follow this link to see other types of publications on this topic: Trigger analysis.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles.


Consult the 50 best scholarly doctoral dissertations on the topic "Trigger analysis".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile an accurate bibliography.

1

Melek, Luiz Alberto Pasini. "Analysis and design of a subthreshold CMOS Schmitt trigger circuit". Repositório Institucional da UFSC, 2017. https://repositorio.ufsc.br/xmlui/handle/123456789/183242.

Abstract:
Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica, Florianópolis, 2017.
In this thesis, the classical CMOS Schmitt trigger (ST) operating in weak inversion is analyzed. The complete DC voltage transfer characteristic is determined, including analytical expressions for the internal node voltages. The resulting voltage transfer characteristic of the ST presents a continuous output behavior even when hysteresis is present. In this case, the output voltage characteristic between the hysteresis limits is formed by a metastable segment, which can be explained in terms of the negative resistance of the NMOS and PMOS subcircuits of the ST. The minimum supply voltage at which hysteresis appears is determined by carrying out a small-signal analysis, which is also used to estimate the hysteresis width. It is shown that hysteresis does not appear for supply voltages lower than 75 mV at 300 K. The analysis of the ST operating as a voltage amplifier was also carried out, and optimum transistor ratios were determined aiming at voltage gain maximization. The comparison of the ST with the standard CMOS inverter highlights the relative benefits and drawbacks of each one in ultra-low-voltage (ULV) applications. It is also shown that the ST is theoretically capable of operating (absolute voltage gain >= 1) at a supply voltage as low as 31.5 mV, which is lower than the well-known 36 mV limit for the standard CMOS inverter. As an amplifier, the ST shows considerably higher absolute voltage gains than those of the conventional inverter at the same supply voltages. Three test chips were designed and fabricated to study the operation of the ST at supply voltages between 50 mV and 1000 mV.
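The 36 mV figure quoted above for the conventional inverter is the classical weak-inversion limit for unity voltage gain, roughly 2(kT/q)ln2 at room temperature. As a quick illustrative check, not taken from the thesis, the following Python sketch evaluates that bound from physical constants:

    # Back-of-the-envelope check of the 36 mV limit quoted above: for an ideal
    # CMOS inverter in weak inversion, unity gain requires roughly
    # V_DD >= 2*(kT/q)*ln(2).
    from math import log

    k_B = 1.380649e-23     # Boltzmann constant, J/K
    q = 1.602176634e-19    # elementary charge, C
    T = 300.0              # temperature, K

    phi_t = k_B * T / q                # thermal voltage, about 25.9 mV at 300 K
    v_dd_min = 2.0 * phi_t * log(2.0)  # about 35.8 mV, the "36 mV" inverter limit

    print(f"thermal voltage: {phi_t * 1e3:.1f} mV")
    print(f"inverter limit 2*phi_t*ln(2): {v_dd_min * 1e3:.1f} mV")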
2

Pettigrew, John Robert. "Molecular analysis of the germination trigger mechanism of Bacillus megaterium KM". Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627239.

3

Ridolfi, Riccardo <1993>. "The FOOT experiment: Trigger and Data Acquisition (TDAQ) development and data analysis". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10323/1/Ridolfi_phd_thesis.pdf.

Abstract:
Hadrontherapy employs high-energy beams of charged particles (protons and heavier ions) to treat deep-seated tumours: these particles have a favourable depth-dose distribution in tissue, characterized by a low dose in the entrance channel and a sharp maximum (Bragg peak) near the end of their path. In these treatments nuclear interactions have to be considered: beam particles can fragment in the human body, releasing a non-zero dose beyond the Bragg peak, while fragments of the body's own nuclei can modify the dose released in healthy tissues. The size of these effects is still an open question, given the lack of data on the cross sections of interest. Space radioprotection can also profit from fragmentation cross-section measurements: interest in long-term crewed space missions beyond Low Earth Orbit is growing, but such missions have to cope with major health risks due to space radiation. To this end, risk models are under study; however, large gaps in fragmentation cross-section data currently prevent an accurate benchmark of deterministic and Monte Carlo codes. To fill these gaps, the FOOT (FragmentatiOn Of Target) experiment was proposed. It is composed of two independent and complementary setups: an Emulsion Cloud Chamber and an electronic setup made of several subdetectors providing redundant measurements of the kinematic properties of fragments produced in nuclear interactions between a beam and a target. FOOT aims to measure double-differential cross sections in both angle and kinetic energy, which is the most complete information for addressing the existing questions. In this Ph.D. thesis, the development of the Trigger and Data Acquisition (TDAQ) system for the FOOT electronic setup and a first analysis of 400 MeV/u 16O beam on carbon target data acquired in July 2021 at GSI (Darmstadt, Germany) are presented. When possible, a comparison with other available measurements is also reported.
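As a rough illustration of the quantity FOOT aims to measure, the sketch below shows the bookkeeping behind a double-differential fragmentation cross section per solid-angle and kinetic-energy bin. The counts, target thickness and efficiency are invented placeholders, not FOOT data:

    # Illustrative only: yield-to-cross-section conversion for one
    # (solid angle, kinetic energy) bin, d^2(sigma)/(dOmega dE).
    def double_diff_xsec(n_frag, n_beam, target_areal_density, d_omega, d_ekin, efficiency=1.0):
        """Cross section per bin, in cm^2 / (sr * MeV/u)."""
        return n_frag / (efficiency * n_beam * target_areal_density * d_omega * d_ekin)

    # Made-up example: 120 fragments in a (10 msr, 20 MeV/u) bin from 1e9 beam
    # ions on a target with 5e22 nuclei/cm^2 and an 80% detection efficiency.
    xs = double_diff_xsec(120, 1e9, 5e22, 0.010, 20.0, efficiency=0.8)
    print(f"{xs:.3e} cm^2 / (sr * MeV/u)")   # 1 mb = 1e-27 cm^2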
4

Jomaa, Diala. "The Optimal trigger speed of vehicle activated signs". Licentiate thesis, Högskolan Dalarna, Mikrodataanalys, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:du-17538.

Abstract:
The thesis aims to elaborate on the optimal trigger speed for vehicle activated signs (VAS) and to study the effectiveness of the VAS trigger speed on drivers' behaviour. Vehicle activated signs are speed warning signs that are activated by an individual vehicle when the driver exceeds a speed threshold. The threshold that triggers the VAS is commonly based on a driver speed and is accordingly called a trigger speed. At present, the trigger speed activating the VAS is usually set to a constant value and does not consider the fact that an optimal trigger speed might exist, even though the choice of trigger speed significantly impacts driver behaviour. In order to fulfil the aims of this thesis, systematic vehicle speed data were collected from field experiments that utilized Doppler radar. Calibration methods for the radar used in the experiment were further developed and evaluated to provide accurate data. The calibration had two parts, data cleaning and data reconstruction, and the calibration based on data cleaning performed better than the calibration based on the reconstructed data. To study the effectiveness of the trigger speed on driver behaviour, the collected data were analysed with both descriptive and inferential statistics. Both showed that a change in trigger speed had an effect on the mean vehicle speed and on the standard deviation of vehicle speed. When the trigger speed was set near the speed limit, the standard deviation was high. Therefore, the choice of trigger speed cannot be based solely on the speed limit at the proposed VAS location. Optimal trigger speeds for VAS were not considered in previous studies, and the relationship between the trigger value and its consequences under different conditions was not clearly stated. The finding of this thesis is that the optimal trigger speed should primarily be chosen to lower the standard deviation rather than the mean speed of vehicles, and that it should be set near the 85th percentile speed.
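To make the statistics concrete, the short sketch below computes the mean speed, the sample standard deviation and the 85th-percentile speed from a set of vehicle speeds; the values are invented and only illustrate how a percentile-based trigger speed could be derived:

    # Toy speed sample (km/h); a real study would use the radar measurements.
    import numpy as np

    speeds = np.array([42, 47, 51, 48, 55, 60, 44, 52, 58, 49, 46, 53, 61, 50, 45])

    mean_speed = speeds.mean()
    std_speed = speeds.std(ddof=1)          # sample standard deviation
    p85_speed = np.percentile(speeds, 85)   # candidate trigger speed

    print(f"mean {mean_speed:.1f} km/h, std {std_speed:.1f} km/h, "
          f"85th percentile {p85_speed:.1f} km/h")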
5

Feist, Josselin. "Finding the needle in the heap : combining binary analysis techniques to trigger use-after-free". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM016/document.

Abstract:
Security is becoming a major concern in software development, for software editors, end-users, and government agencies alike. A typical problem is vulnerability detection, which consists in finding bugs in code that allow an attacker to gain unforeseen privileges, such as reading or writing sensitive data, or even hijacking the program execution. This thesis proposes a practical approach to detect a specific kind of vulnerability, called use-after-free, occurring when a heap memory block is accessed after being freed. Such vulnerabilities have led to numerous exploits (in particular against web browsers), and they are difficult to detect since they may involve several distant events in the code (allocating, freeing and accessing a memory block). The proposed approach consists of two steps. First, a coarse-grained and unsound binary-level static analysis, called GUEB, tracks heap memory block operations (allocation, free, and use). This leads to a program slice containing potential use-after-free. Then, a dedicated guided dynamic symbolic execution engine, developed within the Binsec platform, is used to retrieve concrete program inputs aiming to trigger these use-after-free. This combination proved to be effective in practice and allowed the detection of several previously unknown vulnerabilities in real-life code. The implementation is available as an open-source tool-chain operating on x86 binary code.
6

May, Anna Michelle. "Individual Periodic Limb Movements with Arousal Trigger Non-sustained Ventricular Tachycardia: A Case-Crossover Analysis". Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case151263984890719.

7

Vascelli, Francesco. "Analysis of the performance of DT trigger algorithms for the phase-2 upgrade of the CMS detector". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/19890/.

Abstract:
This thesis studies the performance of two new algorithms for the local trigger of the drift-tube chambers of the CMS (Compact Muon Solenoid) experiment. The algorithms have been developed in view of the upgrade of the LHC (Large Hadron Collider), which will become the High-Luminosity LHC, and of the corresponding upgrade of CMS. In particular, the efficiency of the algorithms was studied, and the cases in which multiple trigger segments are produced for a single muon crossing a detector chamber were analysed. The performance of the new algorithms was also compared with that of the trigger system currently in use. This work was carried out by developing an analysis tool based on the ROOT software package. The processed data come from simulations in which muon pairs with energies between 2 and 100 GeV are generated isotropically; muon generations without pile-up and with an average pile-up of 200 collisions per event are also compared.
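A minimal sketch of the basic figure of merit in such a study, a trigger efficiency with its binomial uncertainty, is given below; the counts are invented placeholders rather than results from the simulations described above:

    # Per-chamber trigger efficiency with a simple binomial error estimate.
    def efficiency(n_triggered, n_generated):
        eff = n_triggered / n_generated
        err = (eff * (1.0 - eff) / n_generated) ** 0.5
        return eff, err

    eff, err = efficiency(n_triggered=9420, n_generated=10000)  # toy counts
    print(f"trigger efficiency = {eff:.3f} +/- {err:.3f}")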
8

Hill-Butler, C. "Evaluating the effect of large magnitude earthquakes on thermal volcanic activity : a comparative assessment of the parameters and mechanisms that trigger volcanic unrest and eruptions". Thesis, Coventry University, 2015. http://curve.coventry.ac.uk/open/items/5f612a7d-ebbf-4d38-90aa-89c4984a1c0f/1.

Abstract:
Volcanic eruptions and unrest have the potential to have large impacts on society, causing social, economic and environmental losses. One of the primary goals of volcanological studies is to understand a volcano's behaviour so that future instances of unrest or impending eruptions can be predicted. Despite this, our ability to predict the onset, location and size of future periods of unrest remains inadequate, and one of the main problems in forecasting is associated with the inherent complexity of volcanoes. In practice, most reliable forecasts have employed a probabilistic approach where knowledge of volcanic activity triggers has been incorporated into scenarios to indicate the probability of unrest. The proposed relationship between large earthquakes and volcanic activity may, therefore, indicate an important precursory signal for volcanic activity forecasting. There have been numerous reports of a spatial and temporal link between volcanic activity and high magnitude seismic events, and it has been suggested that significantly more periods of volcanic unrest occur in the months and years following an earthquake than expected by chance. Disparities between earthquake-volcano assessments and variability between responding volcanoes, however, have meant that the conditions that influence a volcano's response to earthquakes have not been determined. Using data from the MODVOLC algorithm, a proxy for volcanic activity, this research examined a globally comparable database of satellite-derived volcanic radiant flux to identify significant changes in volcanic activity following an earthquake. Cases of potentially triggered volcanic activity were then analysed to identify the earthquake and volcano parameters that influence the relationship and to evaluate the mechanisms proposed to trigger volcanic activity following an earthquake. At a global scale, this research identified that 57% [8 out of 14] of all large magnitude earthquakes were followed by increases in global volcanic activity. The most significant change in volcanic radiant flux, which demonstrates the potential of large earthquakes to influence volcanic activity at a global scale, occurred between December 2004 and April 2005. During this time, new thermal activity was detected at 10 volcanoes and the total daily volcanic radiant flux doubled within 52 days. Within a regional setting, this research also identified that instances of potentially triggered volcanic activity were statistically different from instances where no triggering was observed. In addition, assessments of earthquake and volcano parameters identified that earthquake fault characteristics increase the probability of triggered volcanic activity, while variable response proportions at individual volcanoes and at the regional scale demonstrated the critical role of the state of the volcanic system in determining whether a volcano will respond. Despite the identification of these factors, this research was not able to define a model for the prediction of volcanic activity following earthquakes and, alternatively, proposed a process for response. In doing so, this thesis confirmed the potential use of earthquakes as a precursory indicator of volcanic activity and identified the most likely mechanisms that lead to seismically triggered volcanic unrest.
9

Kwee, Regina. "Development and deployment of an Inner Detector Minimum Bias Trigger and analysis of minimum bias data of the ATLAS experiment at the Large Hadron Collider". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2012. http://dx.doi.org/10.18452/16549.

Abstract:
Soft inelastic QCD processes are the dominant proton-proton interaction type at the LHC. More than 20 such collisions pile up within a single bunch-crossing at ATLAS when the LHC is operated at its design luminosity of L = 10^34 cm^-2 s^-1, colliding proton bunches at a centre-of-mass energy of sqrt(s) = 14 TeV. Inelastic interactions are characterised by a small transverse momentum transfer and can only be approximated by phenomenological models that need experimental data as input. The initial phase of LHC beam operation in 2009, with luminosities ranging from L = 10^27 to 10^31 cm^-2 s^-1, offered an ideal period to select single proton-proton interactions and study general aspects of their properties. In the first part of this thesis, a Minimum Bias trigger was developed and used for data-taking in ATLAS. This trigger, mbSpTrk, processes signals of the silicon tracking detectors of ATLAS and was designed to efficiently reject empty events, while possible biases in the selection of proton-proton collisions are reduced to a minimum. The trigger is flexible enough to cope with changing background conditions, allowing low-pT events to be retained while machine background is highly suppressed. In the second part, measurements of inelastic charged particles were performed in two phase-space regions. Centrally produced charged particles were considered with a pseudorapidity smaller than 0.8 and a transverse momentum of pT > 0.5 or 1 GeV. Four characteristic distributions were measured at two centre-of-mass energies of sqrt(s) = 0.9 and 7 TeV. The results are presented with minimal model dependency in order to compare them to predictions of different Monte Carlo models for soft particle production. This analysis also represents the ATLAS contribution to the first common LHC analysis, to which the ATLAS, CMS and ALICE collaborations agreed. The pseudorapidity distributions for both energies and phase-space regions are compared to the respective results of ALICE and CMS.
10

Moesser, Travis J. "Guidance and Navigation Linear Covariance Analysis for Lunar Powered Descent". DigitalCommons@USU, 2010. https://digitalcommons.usu.edu/etd/654.

Abstract:
A linear covariance analysis is conducted to assess the closed-loop guidance, navigation, and control system (GN&C) performance of the Altair vehicle during lunar powered descent. Guidance algorithms designed for lunar landing are presented and incorporated into the closed-loop covariance equations. Navigation-based event triggering is also included in the covariance formulation to trigger maneuvers and control dispersions. Several navigation and guidance trade studies are presented, demonstrating the influence of triggering, guidance, and study parameters on the vehicle GN&C performance.
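For readers unfamiliar with the technique, the sketch below shows the core of a generic linear covariance step: a dispersion covariance is propagated through linearized dynamics with process noise and then updated at a navigation measurement with a Kalman-type gain. The two-state matrices are toy values, not Altair models:

    import numpy as np

    F = np.array([[1.0, 1.0],
                  [0.0, 1.0]])      # position/velocity transition over one step
    Q = np.diag([1e-4, 1e-3])       # process noise
    H = np.array([[1.0, 0.0]])      # position measurement sensitivity
    R = np.array([[0.5]])           # measurement noise

    P = np.diag([4.0, 0.25])        # initial dispersion covariance
    P = F @ P @ F.T + Q             # propagate dispersions
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # navigation update gain
    P = (np.eye(2) - K @ H) @ P     # covariance after the measurement

    print(np.sqrt(np.diag(P)))      # 1-sigma position/velocity dispersions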
11

Lucht, Sebastian [Verfasser]. "Installation, commissioning and performance of the trigger system of the Double Chooz experiment and the analysis of hydrogen capture neutrino events / Sebastian Lucht". Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2013. http://d-nb.info/1047324512/34.

12

Cumani, Paolo. "Analysis and estimation of the scientific performance of the GAMMA-400 experiment". Doctoral thesis, Università degli studi di Trieste, 2015. http://hdl.handle.net/10077/10888.

Abstract:
2013/2014
For a complete study ranging from dark matter to the origin and propagation of cosmic rays, the multi-channel approach is one of the best ways to address the open questions of astroparticle physics. GAMMA-400, thanks to its dual nature, devoted to the study of cosmic rays (electrons up to TeV energies, and protons and nuclei up to 10^15-10^16 eV) and of gamma rays (from 50 MeV up to a few TeV), will address these problems. The purpose of this thesis is the study of the performance of GAMMA-400 for gamma-ray observations. Two different detector geometries were studied: the "baseline" and the so-called "enhanced" configuration. The main differences between these two configurations lie in the tracker and in the calorimeter. The baseline tracker is composed of ten silicon planes, eight of which also include a tungsten layer of ~0.1 X_0; the tracker of the enhanced configuration is composed of 25 silicon planes interleaved with tungsten layers of ~0.03 X_0. The baseline calorimeter is divided into two sections: a part composed of two planes of caesium iodide and silicon (called the "pre-shower") and a second part composed of 28x28x12 caesium iodide cubes. The calorimeter of the enhanced configuration is instead composed only of 20x20x20 caesium iodide cubes. To estimate the performance, I developed an algorithm for reconstructing the direction of the incident gamma ray. The reconstruction can use the information from the tracker, from the pre-shower or from the calorimeter, either combined or separately. The directions obtained from the pre-shower alone or from the calorimeter alone, although of lower resolution, can be useful to increase the number of photons seen at high energy and to provide the information needed for the observation of transients with ground-based Cherenkov telescopes. The angular resolution obtained with the tracker is better for the enhanced configuration. At low energies this is due to the smaller amount of tungsten, and therefore less multiple scattering, inside the tracker; in addition, the smaller and deeper calorimeter, although it hampers the energy reconstruction of high-energy photons, produces fewer backsplash particles, which degrade the track reconstruction. The total effective area of the baseline, which can rely on a larger calorimeter and on the pre-shower, is larger than that of the enhanced configuration. The angular resolution, the effective area and the observation strategy of the instrument all contribute to the point-source sensitivity; the overall sensitivity of the instrument is better for the baseline at energies above 5 GeV. I implemented a preliminary set of trigger conditions for the study of gamma rays using the information from the tracker. The need to reject most charged particles comes from the large background present in orbit (~10^6 protons per gamma ray) and from a limited downlink capacity (~100 GB/day). Between the two configurations, a difference of less than 1% in the number of remaining protons is observed. Although promising, this result has to be improved, and possible improvements are described in the thesis.
The reconstruction and trigger algorithms are applied to an analysis of the possibility of studying gamma-ray bursts (GRBs) with the main instrumentation on board GAMMA-400. An estimate of the number of events that are not reconstructed, because they occur in the dead time between two triggers, is obtained by simulating a hypothetical GRB coupled with the photon arrival times taken from the real data of two GRBs observed by Fermi. In neither configuration is a significant fraction of pile-up visible; even when the GRB flux is increased, the fraction of non-reconstructed events never exceeds 6%. Despite this result, much will depend on the final design of the detector read-out electronics, which could increase the dead time of the instrument.
XXVII Cycle
13

Wu, Zhongyu. "Wide Area Analysis and Application in Power System". Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/36427.

Abstract:
The frequency monitoring network (FNET) is an Internet-based, GPS-synchronized wide-area frequency monitoring network deployed at the distribution level. In the first part of this thesis, the FNET structure and characteristics are introduced. After analysing and smoothing the FDR signals, the event trigger algorithm is presented, implemented as a Visual C++ DLL. A method for estimating the disturbance location based on the time delay of arrival (TDOA) is discussed in the second part of this work. In this section, the author presents multiple methods to calculate the event time, which is important when dealing with the pre-disturbance frequency in the TDOA analysis. Two kinds of events are classified by the change in frequency, and the linear relationship between the change in frequency and the imbalance between generation and load power is presented. Real cases demonstrate that TDOA is a good algorithm for estimating event locations. Finally, the interface of the DLL module and the keywords used to import and export DLL variables and functions are described.
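A rough illustration of TDOA-style location is sketched below: the disturbance is assumed to propagate from an unknown origin at a uniform speed, and the origin is found by a grid search that best matches the measured arrival-time differences at the monitoring sites. The coordinates, arrival times and propagation speed are invented placeholders, not FNET data:

    import numpy as np

    sensors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 400.0], [600.0, 500.0]])  # km
    t_arrival = np.array([0.90, 1.55, 1.42, 2.10])  # s, from GPS-synchronized clocks
    v = 500.0                                        # assumed propagation speed, km/s

    best, best_err = None, np.inf
    for x in np.linspace(-100, 700, 161):
        for y in np.linspace(-100, 600, 141):
            d = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
            dt_pred = (d - d[0]) / v            # predicted delays relative to sensor 0
            dt_meas = t_arrival - t_arrival[0]  # measured delays relative to sensor 0
            err = np.sum((dt_pred - dt_meas) ** 2)
            if err < best_err:
                best, best_err = (x, y), err

    print("estimated disturbance origin (km):", best)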

Finally, PSS compensation optimization with a set of nonlinear differential-algebraic equations (DAE) is introduced in detail. By combining the bifurcation theory of nonlinear systems with optimization theory, the optimal control of the small-signal stability of electric power systems is addressed. From the perspective of the stability margin, the global coordination of controller parameters is studied to ensure the stable operation of power grids. The main contents of this thesis include:

(1) Models of power systems and test power systems. The dynamic and static models of the elements of power systems, such as generators, AVRs, PSSs, loads and FACTS controllers, are presented. A method for power system linearization modeling is introduced. Three test power systems, the WSCC 9-bus system, the 2-area system, and the New England 39-bus system, are used in the thesis.

(2) Multi-objective optimization based on bifurcation theory. The optimization models, damping control with Hopf bifurcation control and voltage control with damping control, are presented. A Pareto approach combined with an evolutionary strategy (ES) is used to solve the multi-objective optimizations. Building on traditional PSS parameter optimization, the problem can be formulated as a multi-objective one in which two objectives should be taken into account and the minimum damping torque should be identified.
Master of Science

14

Fuentes, Guerrero César. "Grain size analysis of a short sediment core from the Lomonosov Ridge, central Arctic Ocean". Thesis, Stockholms universitet, Institutionen för geologiska vetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-118414.

Abstract:
Trigger core 07 is a 53 cm long sediment core that was collected during the Danish-Swedish expedition "Lomonosov Ridge off Greenland 2012" on the slope of the Lomonosov Ridge in the Arctic Ocean at a depth of 2522 m. This part of the world has experienced critical environmental changes during the Quaternary: ice sheets have advanced and retreated and deposited sediments throughout the Arctic Ocean. Glacial sediments contain coarser material and are gray, whereas interglacial sediments are brown, because of high amounts of manganese, and consist of fine-grained material. The aim of this project is to perform a grain size analysis on TC 07 with the purpose of interpreting the grain size data in relation to glaciation history and paleo-oceanography. For that, a correlation with piston core 07 has been made, as well as a correlation between piston core 07 and the Arctic Coring Expedition, ACEX. The results showed that fine-grained material is more abundant in the top brown unit down to 32 cm, suggesting an interglacial period. This is followed by a gray-beige unit that goes down to 49 cm and consists of coarser material, indicating glacial deposits. This unit can be linked to Marine Isotope Stage 2, MIS 2, which began approximately 29,000 years ago and ended about 14,000 years ago.
15

Krämer, Markus [Verfasser], Stephan [Akademischer Betreuer] [Gutachter] Paul, and Shawn [Gutachter] Bishop. "Evaluation and Optimization of a Digital Calorimetric Trigger and Analysis of Pion-Photon-Interactions in π-Ni→π-π0π0Ni Reactions at COMPASS at CERN / Markus Krämer. Betreuer: Paul Stephan. Gutachter: Shawn Bishop ; Stephan Paul". München : Universitätsbibliothek der TU München, 2016. http://d-nb.info/1104368153/34.

16

Kwee, Regina [Verfasser], Hermann [Akademischer Betreuer] Kolanoski, Nick [Akademischer Betreuer] Ellis, and Klaus [Akademischer Betreuer] Mönig. "Development and deployment of an Inner Detector Minimum Bias Trigger and analysis of minimum bias data of the ATLAS experiment at the Large Hadron Collider / Regina Kwee. Gutachter: Hermann Kolanoski ; Nick Ellis ; Klaus Mönig". Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2012. http://d-nb.info/1025112350/34.

17

Kwee, Regina Esther [Verfasser], Hermann [Akademischer Betreuer] Kolanoski, Nick [Akademischer Betreuer] Ellis, and Klaus [Akademischer Betreuer] Mönig. "Development and deployment of an Inner Detector Minimum Bias Trigger and analysis of minimum bias data of the ATLAS experiment at the Large Hadron Collider / Regina Kwee. Gutachter: Hermann Kolanoski ; Nick Ellis ; Klaus Mönig". Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2012. http://nbn-resolving.de/urn:nbn:de:kobv:11-100203445.

18

Ehrencrona, Kristina. "Experience-Based Co-Design ett användbart arbetssätt för psykiatrisk heldygnsvård? : Erfarenheter från ett förbättringsarbete inom psykiatrisk heldygnsvård i Stockholm". Thesis, Hälsohögskolan, Högskolan i Jönköping, HHJ. Kvalitetsförbättring och ledarskap inom hälsa och välfärd, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-35999.

Abstract:
Background: Patient involvement and patient participation in health care have become more and more important in recent years. One method for patient involvement that has been tested, mostly in somatic care, is Experience-Based Co-Design (EBCD). Local problem: The organization has structures for gathering experiences from patients, but there is no structure for gathering experiences from dependants or staff, and there is no forum where patients, dependants and staff can meet and work on improvements together. Aim: For the quality improvement project (QIP), to try methods from EBCD in the context of psychiatric in-patient care; for the study, to describe participants' experiences of being part of a QIP based on EBCD and to highlight what makes it difficult to engage patients in improvement work. Method: The main structure of the QIP is Nolan's model of change and PDSA. The study consists of a qualitative content analysis based on two semi-structured focus group interviews. Interventions: Methods from EBCD have been adjusted to the context and then tested. Result: Participating in a QIP based on EBCD has been appreciated and developing; the main difficulty has been the recruitment of patients. Conclusions: EBCD can be used in psychiatric in-patient care, but modifications are necessary; which modifications, and how, need to be examined further. EBCD affects both the individual and the organization. To achieve the desired outcomes and to engage participants, certain conditions regarding structures and the equalisation of power between patients, dependants and staff need to be fulfilled.
19

Úlehlová, Eva. "Návrh postupu kontroly vybraných součástí revolveru". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417743.

Abstract:
The goal of this master's thesis was to design an inspection procedure for the hammer and trigger of a specific revolver model. The thesis was developed in cooperation with the manufacturer of the revolvers. The theoretical part deals with the MSA methodology, which is used to assess the acceptability of measurement systems. The practical part describes the current measurement system and performs a gage repeatability and reproducibility (gage R&R) study, which confirmed that the current measurement system requires improvement. Subsequently, coordinate systems were designed based on the functional features of the hammer and trigger, and automated optical measurements based on these coordinate systems were performed. The results of these measurements were again assessed by a gage R&R study. The analysis confirmed an improvement in the acceptability of the designed measurement systems. Based on these results, it is recommended to apply the suggested procedures in practice. The results and recommendations of this master's thesis can contribute to the development of metrology in the company and improve the existing measurement system.
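As background on the acceptance criterion used in such studies, the sketch below combines toy repeatability and reproducibility components into a %GRR figure in the usual MSA fashion; the variance components are invented numbers, not measurements from the thesis:

    import math

    ev = 0.008   # equipment variation (repeatability), mm
    av = 0.005   # appraiser variation (reproducibility), mm
    pv = 0.040   # part-to-part variation, mm

    grr = math.hypot(ev, av)    # combined gage R&R
    tv = math.hypot(grr, pv)    # total variation
    pct_grr = 100.0 * grr / tv

    print(f"GRR = {grr:.4f} mm, %GRR = {pct_grr:.1f} %")
    # Common MSA guideline: below 10 % acceptable, 10-30 % marginal, above 30 % unacceptable.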
20

Keller, Andrew James. "Part I -- The Forgotten Child of Zeal; Part II -- Scriabin's Mysterium Dream: An Analysis of Alexander Nemtin's Realization of Prefatory Action: Part I - Universe". Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1556932468905459.

21

Prosser, Laura. "The backward inhibition effect in task switching : influences and triggers". Thesis, University of Aberdeen, 2018. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=237859.

Abstract:
It has been proposed that backward inhibition (BI) is a mechanism which facilitates task-switching by suppressing the previous task. One view is that BI is generated in response to conflict between tasks experienced during task performance. Across twelve experiments, this thesis investigated this proposition by addressing two questions: what affects the size/presence of BI, and when is BI triggered? The findings from Chapter 2 suggest that BI is increased when conflict stemming from shared target features is present, and that the expectation, as well as the experience, of conflict might increase BI. Chapter 3 suggests that BI is increased when target features are shared (and that no BI is present otherwise), but, contrary to previous findings, that BI is not increased when response features are shared. Chapter 4 provided indirect support for the view that BI can be present without between-task conflict (i.e., neither shared targets nor responses), and indicated that in such a context BI (at least at item level) requires trial-by-trial cuing. Chapter 5 indicates that BI is triggered prior to response execution and after the preparation stage of task processing, indicating that either the target processing stage or the response selection stage of task processing is responsible for triggering BI. Together, the results of the experiments in this thesis indicate that BI can be driven by conflict stemming from target sharing. However, there was no evidence that conflict stemming from response sharing drives BI. In addition, the data suggested that BI might be generated by the expectation of conflict and by task preparation. Therefore, BI might be applied in response to conflict at any stage of task processing, and the decision to apply BI might be made in advance of such conflict.
22

Nyberg, John-Levi. "Lightning Impulse Breakdown Tests : Triggered Spark Gap Analysis". Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-141172.

Abstract:
This project was carried out by a student from Umeå University at the request of the university ETH in Zürich, Switzerland. In this research project the electrical strength of different natural gases and mixtures was investigated, and the aim was to find a gas or gas mixture of natural origin, or strongly attaching gases, that could replace SF6 (sulfur hexafluoride). The gases were tested with breakdown experiments; one of those tests is called the lightning impulse breakdown test. The main part of this project was to investigate triggered spark gaps, which could be used in lightning impulse breakdown tests. These spark gaps were made in a previous thesis but have proved not to be reliable; therefore an investigation was needed. In the lab, a breakdown test setup made up of a rectifying circuit and a transformer was used, and voltages up to 140 kV were applied. The two main parts of the project were the spark gap unit and circuit analysis, and the spark gap characterization. These two parts contained tests to see whether the spark gap worked as it should or whether there were any problems with it. The results from the tests showed that there were problems with the spark gap, but these problems could be corrected or avoided through checks of the spark gap before use.
23

Konzal, Jan. "Analytický nástroj pro generování bicích triggerů z downmix záznamu". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413278.

Abstract:
This thesis deals with the design and implementation of a tool for generating drum triggers from a downmix record. The work describes the preprocessing of the input audio signal and methods for the classification of strokes. The drum classification is based on the similarity of the signals in the frequency domain. Principal component analysis (PCA) was used to reduce the number of dimensions and to find the characteristic properties of the input data. The support vector machine (SVM) method was used to classify the data into individual classes representing parts of the drum kit. The software was programmed in Matlab. The classification model was trained on a set of 728 drum samples covering seven categories (kick, snare, hi-hat, crash, ride, kick + hi-hat, snare + hi-hat). The classification accuracy of the system is 75 %.
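The thesis implements the classifier in Matlab; purely as an illustration of the same PCA-plus-SVM idea, the sketch below builds an equivalent pipeline in Python with scikit-learn. The feature matrix and labels are random placeholders standing in for the spectral features of the 728 drum samples and their seven classes:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(728, 256))    # placeholder spectra (728 samples, 256 features)
    y = rng.integers(0, 7, size=728)   # placeholder labels for the 7 drum classes

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))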
24

Gardner, Robert Matthew. "A Wide-Area Perspective on Power System Operation and Dynamics". Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/26779.

Abstract:
Classically, wide-area synchronized power system monitoring has been an expensive task requiring significant investment in utility communications infrastructures for the service of relatively few costly sensors. The purpose of this research is to demonstrate the viability of power system monitoring from very low voltage levels (120 V). Challenging the accepted norms in power system monitoring, the document will present the use of inexpensive GPS time synchronized sensors in mass numbers at the distribution level. In the past, such low level monitoring has been overlooked due to a perceived imbalance between the required investment and the usefulness of the resulting deluge of information. However, distribution level monitoring offers several advantages over bulk transmission system monitoring. First, practically everyone with access to electricity also has a measurement port into the electric power system. Second, internet access and GPS availability have become pedestrian commodities providing a communications and synchronization infrastructure for the transmission of low-voltage measurements. Third, these ubiquitous measurement points exist in an interconnected fashion irrespective of utility boundaries. This work offers insight into which parameters are meaningful to monitor at the distribution level and provides applications that add unprecedented value to the data extracted from this level. System models comprising the entire Eastern Interconnection are exploited in conjunction with a bounty of distribution level measurement data for the development of wide-area disturbance detection, classification, analysis, and location routines. The main contributions of this work are fivefold: the introduction of a novel power system disturbance detection algorithm; the development of a power system oscillation damping analysis methodology; the development of several parametric and non-parametric power system disturbance location methods; new methods of power system phenomena visualization; and the proposal and mapping of an online power system event reporting scheme.
Ph. D.
25

Jung, Aera. "JEM-EUSO prototypes for the detection of ultra-high-energy cosmic rays (UHECRs) : from the electronics of the photo-detection module (PDM) to the operation and data analysis of two pathfinders". Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCC108/document.

Abstract:
The JEM-EUSO (Extreme Universe Space Observatory on board the Japanese Experiment Module) international space mission is designed to observe UHECRs by detecting the UV fluorescence light emitted by the so-called Extensive Air Showers (EAS), which develop when UHECRs interact with the Earth's atmosphere. The showers consist of tens of billions or more secondary particles crossing the atmosphere at nearly the speed of light, which excite nitrogen molecules that then emit light in the UV range. While this so-called "fluorescence technique" is routinely used from the ground, by operating from space JEM-EUSO will, for the first time, provide high statistics on these events. Operating from space with a large field of view of ±30° allows JEM-EUSO to observe a much larger volume of atmosphere than is possible from the ground, collecting an unprecedented number of UHECR events at the highest energies. For the four pathfinder experiments built within the collaboration, we have been developing a common set of electronics, in particular the central data acquisition system, capable of operating from the ground, from high-altitude balloons, and from space. These pathfinder experiments all use a detector consisting of one Photo-Detection Module (PDM), identical to the 137 that will be present on the JEM-EUSO focal surface. UV light generated by high-energy particle air showers passes the UV filter and impacts the Multi-Anode Photomultiplier Tubes (MAPMTs). Here UV photons are converted into electrons, which are multiplied by the MAPMTs and fed into Elementary Cell Application-Specific Integrated Circuit (EC-ASIC) boards, which perform the photon counting and charge estimation. The PDM control board interfaces with these ASIC boards, providing power and configuration parameters, collecting data and performing the level-1 trigger. I was in charge of designing, developing, integrating, and testing the PDM control board for the EUSO-TA and EUSO-Balloon missions, as well as of the autonomous trigger algorithm testing, and I also performed some analysis of the EUSO-Balloon flight data and of the data from the EUSO-TA October 2015 run. In this thesis, I will give a short overview of high-energy cosmic rays, including their detection techniques and the leading experiments (Chapter 1), describe JEM-EUSO and its pathfinders, including a description of each instrument (Chapter 2), and present the details of the design and fabrication of the PDM (Chapter 3) and of the PDM control board (Chapter 4), as well as the EUSO-TA and EUSO-Balloon integration tests (Chapter 5). I will report on the EUSO-Balloon campaign (Chapter 6) and its results (Chapter 7), including a specific analysis developed to search for global variations of the ground UV emissivity, and apply a similar analysis to data collected at the site of Telescope Array (Chapter 8). Finally, I will present the implementation and testing of the first-level trigger (L1) within the FPGA of the PDM control board (Chapter 9). A short summary of the thesis is given in Chapter 10.
Style APA, Harvard, Vancouver, ISO itp.
26

Ricci, Federica. "Analysis of past accident triggered by natural events (NaTech)". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Znajdź pełny tekst źródła
Streszczenie:
In recent decades an increasing trend of NaTech accidents (Natural Hazards Triggering Technological Disasters) has been affecting industry, possibly leading to major accidents exacerbated by the occurrence of multiple simultaneous failures, cascading events (domino effects), and the disruption of utilities, safety systems, and lifelines. In this panorama, it becomes vital to collect and analyse data on past industrial accidents caused by natural events, as these are almost the only source of information on NaTech scenarios. Indeed, the main objective of the present work is to understand the dynamics of NaTech accidental events, in order to draw lessons aimed at minimizing and preventing their impact. Firstly, a database collecting 908 accidents that affected chemical and process industries was created. Secondly, the database was analysed in order to identify the trend of NaTech accidents compared to that of natural events, the geographical areas involved, the triggering natural events, the final scenarios that occur, the consequences for people, and the economic damage. Particular attention was given to the presence of safety barriers and their effectiveness in NaTech scenarios. Depending on the availability of information, specific analyses were carried out for individual natural events. Lastly, Multiple Correspondence Analysis was used to evaluate additional specific correlations between the categorical variables present in the database. Through this technique, the relationships between each natural event and the other characteristics of the accident are clarified, such as final scenarios, macro-sectors involved, and consequences for people and assets. Also in this case, the presence and effectiveness of barriers is evaluated according to the natural event.
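Multiple Correspondence Analysis, mentioned above, is essentially correspondence analysis applied to the one-hot (indicator) matrix of the categorical variables. The sketch below is a minimal, from-scratch illustration of that idea on made-up NaTech-style records; the variable names and categories are invented for the example and are not the fields of the actual database.

```python
import numpy as np
import pandas as pd

# Invented example records (natural event, final scenario, barrier effectiveness).
df = pd.DataFrame({
    "event":    ["flood", "earthquake", "flood", "lightning", "earthquake"],
    "scenario": ["release", "fire", "release", "fire", "release"],
    "barrier":  ["effective", "failed", "failed", "effective", "failed"],
})

dummies = pd.get_dummies(df)                   # indicator (one-hot) matrix
Z = dummies.to_numpy(dtype=float)
P = Z / Z.sum()                                # correspondence matrix
r = P.sum(axis=1)                              # row masses (accidents)
c = P.sum(axis=0)                              # column masses (categories)
# Standardized residuals; their SVD gives the MCA principal coordinates.
S = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))
U, s, Vt = np.linalg.svd(S, full_matrices=False)
col_coords = (np.diag(1 / np.sqrt(c)) @ Vt.T) * s
print(pd.DataFrame(col_coords[:, :2], index=dummies.columns,
                   columns=["dim1", "dim2"]))  # category positions on the first plane
```

Categories that plot close together on the first plane tend to co-occur in the records, which is the kind of association the thesis examines between natural events, scenarios and barrier performance.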
Style APA, Harvard, Vancouver, ISO itp.
27

Yusuf, Shamil. "Triggers and substrates in atrial fibrillation : an in-depth proteomic and metabolomic analysis". Thesis, University of London, 2009. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518118.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
28

Turel, Mesut. "Soft computing based spatial analysis of earthquake triggered coherent landslides". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45909.

Pełny tekst źródła
Streszczenie:
Earthquake triggered landslides cause loss of life, destroy structures, roads, powerlines, and pipelines and therefore they have a direct impact on the social and economic life of the hazard region. The damage and fatalities directly related to strong ground shaking and fault rupture are sometimes exceeded by the damage and fatalities caused by earthquake triggered landslides. Even though future earthquakes can hardly be predicted, the identification of areas that are highly susceptible to landslide hazards is possible. For geographical information systems (GIS) based deterministic slope stability and earthquake-induced landslide analysis, the grid-cell approach has been commonly used in conjunction with the relatively simple infinite slope model. The infinite slope model together with Newmark's displacement analysis has been widely used to create seismic landslide susceptibility maps. The infinite slope model gives reliable results in the case of surficial landslides with depth-length ratios smaller than 0.1. On the other hand, the infinite slope model cannot satisfactorily analyze deep-seated coherent landslides. In reality, coherent landslides are common and these types of landslides are a major cause of property damage and fatalities. In the case of coherent landslides, two- or three-dimensional models are required to accurately analyze both static and dynamic performance of slopes. These models are rarely used in GIS-based landslide hazard zonation because they are numerically expensive compared to one dimensional infinite slope models. Building metamodels based on data obtained from computer experiments and using computationally inexpensive predictions based on these metamodels has been widely used in several engineering applications. With these soft computing methods, design variables are carefully chosen using a design of experiments (DOE) methodology to cover a predetermined range of values and computer experiments are performed at these chosen points. The design variables and the responses from the computer simulations are then combined to construct functional relationships (metamodels) between the inputs and the outputs. In this study, Support Vector Machines (SVM) and Artificial Neural Networks (ANN) are used to predict the static and seismic responses of slopes. In order to integrate the soft computing methods with GIS for coherent landslide hazard analysis, an automatic slope profile delineation method from Digital Elevation Models is developed. The integrated framework is evaluated using a case study of the 1989 Loma Prieta, CA earthquake (Mw = 6.9). A seismic landslide hazard analysis is also performed for the same region for a future scenario earthquake (Mw = 7.03) on the San Andreas Fault.
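The infinite slope model and Newmark displacement analysis mentioned in the abstract reduce, in their simplest form, to a static factor of safety and a critical (yield) acceleration. The snippet below is a textbook-style sketch of those two quantities with illustrative parameter values; it is not the thesis's GIS implementation or its two- and three-dimensional metamodels.

```python
import math

def infinite_slope_fs(c, phi, gamma, z, beta, m=0.0, gamma_w=9.81):
    """Static factor of safety of an infinite slope.
    c: cohesion (kPa), phi: friction angle (deg), gamma: unit weight (kN/m3),
    z: failure depth (m), beta: slope angle (deg), m: saturated fraction of z."""
    beta, phi = math.radians(beta), math.radians(phi)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

def newmark_critical_acceleration(fs, beta, g=9.81):
    """Newmark yield acceleration a_c = (FS - 1) * g * sin(beta), in m/s2."""
    return (fs - 1.0) * g * math.sin(math.radians(beta))

# Illustrative soil and slope parameters, not site data.
fs = infinite_slope_fs(c=10.0, phi=32.0, gamma=19.0, z=2.0, beta=30.0, m=0.5)
print(round(fs, 2), round(newmark_critical_acceleration(fs, beta=30.0), 2))
```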
Style APA, Harvard, Vancouver, ISO itp.
29

Enns-Ruttan, Jennifer Sylvia. "Analysis of electrophysiological models of spontaneous secondary spiking and triggered activity". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0005/NQ34522.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
30

Tatard, Lucile. "Statistical analysis of triggered landslides: implications for earthquake and weathering controls". Thesis, University of Canterbury. Geological Sciences, 2010. http://hdl.handle.net/10092/4011.

Pełny tekst źródła
Streszczenie:
We first aim to review the external perturbations which can lead to landslide failure. We review the perturbations and associated processes in five sections: i) increase of slope angle, ii) increase of load applied on the slope, iii) rise of groundwater level and pore pressure, iv) frost weathering processes and v) earthquake loading. Second, we analyse the New Zealand landslide catalogue, integrating all landslides recorded in New Zealand between 1996 and 2004. We analyse the New Zealand landslide series in time and rate and find a strong correlation in landslide occurrences. The time correlation found between landslide occurrences for events occurring more than 10 days apart is not found to be driven by earthquake-landslide or landslide-landslide interactions. We suggest that climate-landslide interactions drive, non-linearly and beyond the empirically reported daily correlation, most of the New Zealand landslide dynamics. Third, we compare the landslide dynamics in time, space and rate of New Zealand, the Yosemite cliffs (California, USA), the Grenoble cliffs (Isère, French Alps), the Val d'Arly cliffs (Haute-Savoie, French Alps), Australia and Wollongong (New South Wales, Australia). Landslides are resolved as correlated to each other in time for all catalogues. The New Zealand, Yosemite, Australia and Wollongong landslide daily rates are well fitted by a power law for rates between 1 and 1000 events per day, suggesting that the same mechanism(s) drive both the large landslide daily crises and the single events. The joint analysis of the six catalogues permits deriving parameters that allow ranking the relative landslide dynamics in each of the six areas. From the most reactive landslide area (New Zealand) to the least reactive area (Grenoble), the global trend of the different parameters is: i) decreasing departure from randomness; ii) decreasing maximum daily rates and area over which the trigger operates; iii) decreasing landslide triggering for landslides occurring one day apart; iv) decreasing global interaction with earthquakes, rainfall and temperature. Fourth, we compare earthquake aftershock space distributions with landslide space distributions triggered by the Chi-Chi MW7.6 earthquake (Taiwan), the MW7.6 Kashmir earthquake (Pakistan), the MW7.2 Fiordland earthquake (New Zealand), the MW6.6 Northridge earthquake (California) and the MW5.6 Rotoehu earthquake (New Zealand). We resolve the seismic aftershock and landslide normalised numbers of events to display roughly similar patterns with distance, when comparing the landslide distributions to their aftershock distribution counterparts. When comparing the five landslide-aftershock distribution pairs for a given mainshock, we do not resolve a clear common pattern. Then we compare landslide and aftershock distance distributions to ground motion observations (Peak Ground Acceleration, Peak Ground Velocity and Peak Ground Displacement) and we find no linear scaling of the number of landslides or aftershocks with any of the ground motion variables. We suggest that landslides and aftershocks are driven by the same mechanisms, and we shed light on the role of Peak Ground Displacement and associated static stress changes in landslide triggering.
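The abstract reports that daily landslide rates are well fitted by a power law. A minimal way to check such a claim on a catalogue is to build the distribution of events per day and fit a power law on a log-log scale, as sketched below with a synthetic catalogue; the real catalogues are of course not reproduced here.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a landslide catalogue: one timestamp per event.
rng = np.random.default_rng(1)
daily_counts = rng.zipf(1.8, size=400).clip(max=1000)      # heavy-tailed counts per day
dates = pd.date_range("2000-01-01", periods=len(daily_counts), freq="D").repeat(daily_counts)
catalogue = pd.DataFrame({"date": dates})

# Daily-rate distribution: how often does a day with r events occur?
daily = catalogue.groupby("date").size()
rates, freq = np.unique(daily.values, return_counts=True)

# Power-law fit  freq ~ rates**(-b)  via least squares in log-log space.
mask = (rates >= 1) & (rates <= 1000)
slope, intercept = np.polyfit(np.log10(rates[mask]), np.log10(freq[mask]), 1)
print(f"power-law exponent ~ {-slope:.2f}")
```

For real catalogues a maximum-likelihood estimator (for instance the `powerlaw` package) is preferable to a log-log regression; the sketch only illustrates the shape of the analysis.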
Style APA, Harvard, Vancouver, ISO itp.
31

Oosthuizen, Lizle. "The impact of GnRH-agonist triggers on autologous in vitro fertilization outcomes: A retrospective analysis". Master's thesis, Faculty of Health Sciences, 2021. http://hdl.handle.net/11427/33934.

Pełny tekst źródła
Streszczenie:
BACKGROUND: In vitro fertilization in assisted reproduction requires controlled ovarian stimulation with exogenous gonadotrophins and oocyte maturation before ultrasound guided aspiration. GnRH-agonists have been utilized as an alternative to hCG for oocyte maturation prior to follicle aspiration. GnRH-agonist triggers are proven to lower ovarian hyperstimulation syndrome risk, a condition that can be life threatening. Lower pregnancy rates have been reported in the literature with the GnRH-agonist trigger, leading to recommendations of elective embryo cryopreservation, delayed transfer and increased costs to the patient. AIM: To determine if intensive luteal phase support of GnRH-agonist triggered cycles with intramuscular progesterone and oral oestrogen can result in similar pregnancy rates when comparing fresh embryo transfer outcomes with those of hCG triggered cycles. STUDY DESIGN, SIZE, DURATION: The study was a retrospective analysis of 279 fresh embryo transfers in autologous IVF cycles, which took place over the period of one year at Cape Fertility Clinic in Cape Town. RESULTS: Biochemical (49.40% vs 41.84%), clinical (43.37% vs 36.22%) and ongoing pregnancy rates (37.35% vs 33.16%) were higher in the GnRH-agonist triggered arm in comparison to the hCG triggered arm, respectively. Miscarriage rates were similar at 24.29% in the GnRH-agonist arm, versus 20.73% in the hCG triggered arm. None of the results were statistically significant. CONCLUSION: Similar pregnancy rates can be achieved with both hCG and GnRH-agonist triggered IVF cycles by supporting the GnRH-agonist triggered luteal phase with intensive intramuscular progesterone support.
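The abstract compares pregnancy rates between two trigger arms and reports that none of the differences reached statistical significance. A standard way to make such a comparison is a chi-square (or Fisher exact) test on the 2x2 outcome table, as in the sketch below; the counts used here are purely illustrative and are not taken from the study.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Illustrative 2x2 table: rows = trigger arm, columns = clinical pregnancy yes / no.
# These counts are invented for the example, not the study's data.
table = [[40, 60],    # GnRH-agonist arm
         [35, 65]]    # hCG arm

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square p-value: {p:.3f}")
print(f"Fisher exact p-value: {fisher_exact(table)[1]:.3f}")
```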
Style APA, Harvard, Vancouver, ISO itp.
32

Schwessinger, Benjamin. "Genetic analysis of signalling components of PAMP-triggered immunity (PTI) in plants". Thesis, University of East Anglia, 2010. https://ueaeprints.uea.ac.uk/25632/.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33

Aggleton, Robin Cameron. "Searches for exotic Higgs bosons at CMS : from Level-1 jet trigger calibration to data analyses and their interpretation". Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/410358/.

Pełny tekst źródła
Streszczenie:
This thesis covers several topics investigating the nature of exotic Higgs bosons, using data from the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC). A search was performed for invisible decays of a Higgs boson, using 19.5 fb−1 of data from proton-proton collisions collected during 2012. No significant excess was observed, and an observed (expected) upper limit on the invisible branching fraction of the 125 GeV Higgs boson was set at BR(h → Invis.) < 0.65 (0.49) at 95% confidence level (CL). A search for a pair of light Higgs bosons with masses 4–8 GeV, produced by the discovered Higgs boson and each decaying into a pair of tau leptons, was also performed using the collision data from 2012. No significant excess was observed in this search either, and upper limits were set on the total production cross-section for such processes as a function of the light boson mass. The observed limit at 95% CL ranges from 4.5 to 10.3 pb, with corresponding expected limits of 2.9 and 10.3 pb. The results of several light Higgs boson searches carried out at the LHC are interpreted within the context of the Next-to-Minimal Supersymmetric Standard Model. The limits from these searches are compared to model predictions, and their impact on model parameters is discussed. The upgrade of the CMS Level-1 trigger allows for more sophisticated object identification algorithms, including removal of contributions from overlapping collisions. The derivation of calibrations for the jet algorithm in both the interim and full upgrades is examined, along with their performance.
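Upper limits like those quoted above are set with the full CMS statistical machinery (CLs with profiled systematic uncertainties). As a much simpler illustration of the underlying idea, the sketch below computes a classical 95% CL Poisson upper limit on a signal yield for a background-free counting experiment; it is a toy, not the analysis's method.

```python
from scipy.stats import poisson
from scipy.optimize import brentq

def poisson_upper_limit(n_obs, cl=0.95):
    """Classical upper limit s_up on a Poisson mean with zero background:
    find s such that P(N <= n_obs | s) = 1 - cl."""
    f = lambda s: poisson.cdf(n_obs, s) - (1.0 - cl)
    return brentq(f, 0.0, 100.0 + 10.0 * n_obs)

print(poisson_upper_limit(0))   # ~3.0 events, the familiar background-free limit
```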
Style APA, Harvard, Vancouver, ISO itp.
34

Weber, Marlene. "Automotive emotions : a human-centred approach towards the measurement and understanding of drivers' emotions and their triggers". Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16647.

Pełny tekst źródła
Streszczenie:
The automotive industry is facing significant technological and sociological shifts, calling for an improved understanding of driver and passenger behaviours, emotions and needs, and a transformation of the traditional automotive design process. This research takes a human-centred approach to automotive research, investigating the users' emotional states during automobile driving, with the goal to develop a framework for automotive emotion research, thus enabling the integration of technological advances into the driving environment. A literature review of human emotion and emotion in an automotive context was conducted, followed by three driving studies investigating emotion through Facial-Expression Analysis (FEA): An exploratory study investigated whether emotion elicitation can be applied in driving simulators, and if FEA can detect the emotions triggered. The results allowed confidence in the applicability of emotion elicitation to a lab-based environment to trigger emotional responses, and FEA to detect those. An on-road driving study was conducted in a natural setting to investigate whether natures and frequencies of emotion events could be automatically measured. The possibility of assigning triggers to those was investigated. Overall, 730 emotion events were detected during a total driving time of 440 minutes, and event triggers were assigned to 92% of the emotion events. A similar second on-road study was conducted in a partially controlled setting on a planned road circuit. In 840 minutes, 1947 emotion events were measured, and triggers were successfully assigned to 94% of those. The differences in natures, frequencies and causes of emotions on different road types were investigated. Comparison of emotion events for different roads demonstrated substantial variances of natures, frequencies and triggers of emotions on different road types. The results showed that emotions play a significant role during automobile driving. The possibility of assigning triggers can be used to create a better understanding of causes of emotions in the automotive habitat. Both on-road studies were compared through statistical analysis to investigate influences of the different study settings. Certain conditions (e.g. driving setting, social interaction) showed significant influence on emotions during driving. This research establishes and validates a methodology for the study of emotions and their causes in the driving environment through which systems and factors causing positive and negative emotional effects can be identified. The methodology and results can be applied to design and research processes, allowing the identification of issues and opportunities in current automotive design to address challenges of future automotive design. Suggested future research includes the investigation of a wider variety of road types and situations, testing with different automobiles and the combination of multiple measurement techniques.
Style APA, Harvard, Vancouver, ISO itp.
35

Plambeck, Nils. "Triggers of entrepreneurial actions : an analysis of the relationships between managerial interpretation, slack resources, and product innovation /". Berlin : WiKu-Verl, 2005. http://www.gbv.de/dms/zbw/391826948.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
36

Shen, Qian-Hua. "Functional analysis of barley MLA-triggered disease resistance to the powdery mildew pathogen". [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972530398.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Yuan, Chun-Su. "ELECTROPHYSIOLOGICAL ANALYSIS OF THE RECURRENT RENSHAW CIRCUIT (MOTONEURON, INHIBITION, SPIKE-TRIGGERED AVERAGE, SPINAL CORD)". Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/188170.

Pełny tekst źródła
Streszczenie:
One goal of the neurophysiological approach to the study of nervous systems is to analyze neuronal circuitry in terms of the synaptic actions of one cell on another, particularly in instances in which both cells are functionally identifiable and components of a circuit whose overall structural and functional properties can be analyzed with experimental techniques. The present project contributed to this type of effort by providing an analysis of the recurrent Renshaw circuit, a prominent pathway in the mammalian spinal cord which includes recurrent motoneuronal collaterals, Renshaw cells and other interneurons, which, in turn, project to motoneurons. The project describes the use of a relatively new data processing technique, spike-triggered averaging, to study the effects of the single impulses of single motor axons on the postsynaptic activity of single motoneurons which were responsive to the test impulses by way of components of the recurrent Renshaw circuit. The experimental paradigm involved intracellular recording from single motoneurons in low-spinal cats, either anesthetized with chloralose-urethane or unanesthetized after their ischemic decapitation. The synaptic noise recorded in each motoneuron served as the input to a signal averager which was triggered by brief electrical shocks used to activate single antidromic impulses in single motor axons, either by way of an intra-axonally positioned microelectrode in the muscle nerve or by microstimulation of the muscle supplied by the axon. The resultant average revealed the motoneuron's response to each single antidromic impulse; a recurrent inhibitory postsynaptic potential, recorded for the first-ever time in this project and termed a single-axon RIPSP. The experimental results described in the report include: first, the measurement, incidence and characterization of single-axon RIPSPs; and second, their use to test a hypothesis concerned with the distribution of Renshaw-cell effects within the spinal cord. The single-axon RIPSP measurement was shown to be the clearest example yet provided in the neurophysiological literature that spike-triggered averaging can be used to detect synaptic activity crossing two or more synapses within the central nervous system. Furthermore, the hypothesis was confirmed that Renshaw-cell effects within a single spinal motor nucleus are distributed according to the principle of topographic specificity.
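Spike-triggered averaging, the core technique of this work, extracts a small postsynaptic response buried in synaptic noise by averaging signal segments time-locked to each trigger impulse. A minimal numpy sketch of the procedure is shown below on synthetic data; the sampling rate, window length and injected response are assumptions for illustration, not recordings from the study.

```python
import numpy as np

def spike_triggered_average(signal, trigger_idx, pre=50, post=200):
    """Average `signal` segments aligned to each trigger sample index."""
    segs = [signal[i - pre:i + post] for i in trigger_idx
            if i - pre >= 0 and i + post <= len(signal)]
    return np.mean(segs, axis=0)

# Synthetic intracellular record: noise plus a tiny IPSP-like deflection
# starting 5 ms (50 samples at 10 kHz) after each of 2000 trigger impulses.
rng = np.random.default_rng(0)
fs = 10_000
signal = rng.normal(0.0, 0.5, size=fs * 600)            # 10 min of synaptic noise (mV)
triggers = rng.choice(np.arange(1_000, len(signal) - 3_000), 2_000, replace=False)
ipsp = -0.05 * np.exp(-np.arange(300) / 80.0)            # ~50 uV hyperpolarization
for t in triggers:
    signal[t + 50:t + 350] += ipsp

sta = spike_triggered_average(signal, triggers)
print(sta.min())   # the averaged IPSP emerges although it is far below the noise
```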
Style APA, Harvard, Vancouver, ISO itp.
38

Blom, Magnus. "Light-Triggered Conformational Switches for Modulation of Molecular Recognition : Applications for Peptidomimetics and Supramolecular Systems". Doctoral thesis, Uppsala universitet, Syntetisk organisk kemi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-267845.

Pełny tekst źródła
Streszczenie:
The main focus of this thesis is on photochemical modulation of molecular recognition in various host-guest systems. This involves the design, synthesis and integration of light-triggered conformational switches into peptidomimetic guests and molecular tweezer hosts. The impact of the switches on guest and host structures has been assessed by spectroscopic and computational conformational analysis. Effects of photochemical structure modulation on molecular recognition in protein-ligand and supramolecular host-guest systems are discussed. Phototriggerable peptidomimetic inhibitors of the enzyme M. tuberculosis ribonucleotide reductase (RNR) were obtained by incorporation of a stilbene based amino acid moiety into oligopeptides between 3-9 residues long (Paper I). Interstrand hydrogen bond probability in the E and Z forms of the peptidomimetics was used as a tool for predicting conformational preferences. Considerable differences in inhibitory potency for the E and Z photoisomers were demonstrated in a binding assay. In order to advance the concept of photomodulable inhibitors, synthetic routes towards amino acid derivatives based on the more rigid stiff-stilbene chromophore were developed (Paper II).  The effect of E-Z isomerization on the conformational properties of peptidomimetic inhibitors incorporating the stiff-stilbene chromophore was also assessed computationally (Paper III). It was indicated that inhibitors with the more rigid amino acid derivative should display larger conformational divergence between photoisomers than corresponding stilbene derivatives. Bisporphyrin tweezers with enediyne and stiff-stilbene spacers have been synthesized, and the conformational characteristics imposed by the spacers have been studied and compared to a glycoluril linked tweezer. The effects of spacers on tweezer binding of diamine guests and helicity induction by chiral guests have been investigated (Paper IV). Connections between spacer flexibility and host-guest binding strength have been established. The structural properties of the stiff-stilbene spaced tweezer made it particularly susceptible to helicity induction by both monotopic and bitopic chiral guests. Finally, the possibility of photochemical bite-size variation of tweezers with photoswitchable spacers has been assessed. Initial studies have shown that photoisomerization of the tweezers is possible without photochemical decomposition. Conformational analyses indicate that isomerization should impact binding characteristics of the tweezers to a significant extent (Paper V).
Style APA, Harvard, Vancouver, ISO itp.
39

Asbayou, Omar. "L'identification des entités nommées en arabe en vue de leur extraction et classification automatiques : la construction d’un système à base de règles syntactico-sémantique". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE2136.

Pełny tekst źródła
Streszczenie:
This thesis explains and presents our approach to a rule-based system for Arabic named entity recognition and classification. This work involves two disciplines: linguistics and computer science. Computer tools and linguistic rules are merged to give birth to a new discipline, natural language processing, which operates at different levels (morphosyntactic, syntactic, semantic, syntactico-semantic…). So, in our particular case, we have put the necessary linguistic information and rules at the service of the software, which should be able to apply and implement them in order to recognise and classify, by syntactic and semantic annotations, the different named entity classes. This work is incorporated within the general domain of natural language processing, but it falls particularly within the continuity of the work accomplished on morphosyntactic analysis and the realisation of the lexical databases SAMIA and later DIINAR, as well as the accompanying scientific research. This task aims at lexical enrichment with simple and complex named entities and at establishing the transition from morphological analysis to syntactic and syntactico-semantic analysis. The ultimate objective is text analysis. To understand what this is about, it was important to start with the definition of the named entity. To carry out this task, we distinguished between two main named entity types: pure proper names and descriptive named entities. We have also established a referential classification on the basis of the different classes and sub-classes which constitute the reference for our semantic annotations. Nevertheless, we were confronted with two major difficulties: lexical ambiguity and the boundaries of complex named entities. Our system adopts a syntactico-semantic rule-based approach. After Level 0 of morphosyntactic analysis, the system is made up of five levels of syntactic and syntactico-semantic patterns based on the necessary linguistic information (i.e. morphosyntactic, syntactic, semantic and syntactico-semantic information). After evaluation on two corpora, this work has obtained very good results in terms of precision, recall and F-measure. The output of our system makes an interesting contribution to different applications of natural language processing, especially the two tasks of information retrieval and information extraction, in which we have concretely exploited it. In addition to this unique experience, we envisage in future work extending our system to sentence extraction and classification, in which the classified entities, mainly named entities and verbs, play respectively the roles of arguments and predicates. A second objective consists in the enrichment of different types of lexical resources such as ontologies.
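Precision, recall and F-measure for named entity recognition are usually computed by comparing predicted entity spans (with their classes) against gold annotations. The short sketch below shows a generic exact-match evaluation of this kind; the example spans and labels are made up and do not come from the thesis corpora.

```python
def ner_scores(gold, pred):
    """Exact-match precision/recall/F1 over (start, end, label) entity spans."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Made-up example: spans are (start_token, end_token, class).
gold = [(0, 2, "PERSON"), (5, 7, "ORG"), (9, 10, "LOC")]
pred = [(0, 2, "PERSON"), (5, 7, "LOC"), (9, 10, "LOC")]
print(ner_scores(gold, pred))   # (0.667, 0.667, 0.667) for this toy example
```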
Style APA, Harvard, Vancouver, ISO itp.
40

Darling, Ryan Daniel. "Single Cell Analysis of Hippocampal Neural Ensembles during Theta-Triggered Eyeblink Classical Conditioning in the Rabbit". Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1225460517.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Keskin, Ugur. "Time-triggered Controller Area Network (ttcan) Communication Scheduling: A Systematic Approach". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609877/index.pdf.

Pełny tekst źródła
Streszczenie:
Time-Triggered Controller Area Network (TTCAN) is a hybrid communication paradigm combining both time-triggered and event-triggered traffic scheduling. Unlike the standard Controller Area Network (CAN), communication in TTCAN is performed according to a pre-computed schedule, fixed during the system run, called the TTCAN system matrix. Thus, the communication performance of a TTCAN network is directly related to the structure of the system matrix, which makes the design of the system matrix a crucial process. This thesis presents extended work on the development of a systematic approach to system matrix construction. Methods for periodic message scheduling and an approach for aperiodic message scheduling are proposed with the aim of constructing a feasible system matrix, combining three important aspects: message properties, protocol constraints and system performance requirements in terms of designated performance metrics. System matrix design, analyses and performance evaluation are also performed on example message sets with the help of two developed software tools.
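A TTCAN system matrix is a grid of basic cycles (rows) and time windows (columns); a periodic message with a period of p basic cycles occupies the same column in every p-th row. The sketch below builds such a matrix for a few invented messages with a naive first-fit rule; it only illustrates the structure, not the scheduling methods developed in the thesis.

```python
# Naive first-fit construction of a TTCAN-style system matrix.
N_BASIC_CYCLES = 8          # rows; must cover the largest message period
N_COLUMNS = 6               # exclusive time windows per basic cycle

# Invented periodic messages: (name, period in basic cycles).
messages = [("engine", 1), ("brake", 2), ("door", 4), ("diag", 8)]

matrix = [[None] * N_COLUMNS for _ in range(N_BASIC_CYCLES)]

for name, period in messages:
    # Find the first column that is free in every row the message needs.
    for col in range(N_COLUMNS):
        rows = range(0, N_BASIC_CYCLES, period)
        if all(matrix[r][col] is None for r in rows):
            for r in rows:
                matrix[r][col] = name
            break
    else:
        raise RuntimeError(f"no feasible column for {name}")

for row in matrix:
    print([slot or "free" for slot in row])
```

Real TTCAN schedules must additionally respect offsets, window lengths, arbitration windows and jitter constraints, which is exactly what makes the systematic construction studied in the thesis non-trivial.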
Style APA, Harvard, Vancouver, ISO itp.
42

Hulbert, Sarah Marie. "Biophysical Approaches for the Multi-System Analysis of Neural Control of Movement and Neurologic Rehabilitation". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1534678369235538.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
43

Tintor, Nico [Verfasser], Paul [Akademischer Betreuer] Schulze-Lefert, Ute [Akademischer Betreuer] Höcker i Cyril [Akademischer Betreuer] Zipfel. "Genetic analysis of MAMP-triggered immunity in Arabidopsis / Nico Tintor. Gutachter: Paul Schulze-Lefert ; Ute Höcker ; Cyril Zipfel". Köln : Universitäts- und Stadtbibliothek Köln, 2012. http://d-nb.info/1038378834/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
44

Wan, Wei-Lin [Verfasser], i Thorsten [Akademischer Betreuer] Nürnberger. "Comparative Analysis of Signaling Pathways Triggered by Different Pattern-recognition Receptor-types / Wei-Lin Wan ; Betreuer: Thorsten Nürnberger". Tübingen : Universitätsbibliothek Tübingen, 2017. http://d-nb.info/1167311361/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

Aminifar, Amir. "Analysis, Design, and Optimization of Embedded Control Systems". Doctoral thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-124319.

Pełny tekst źródła
Streszczenie:
Today, many embedded or cyber-physical systems, e.g., in the automotive domain, comprise several control applications sharing the same platform. It is well known that such resource sharing leads to complex temporal behaviors that degrade the quality of control and, more importantly, may even jeopardize stability in the worst case, if not properly taken into account. In this thesis, we consider embedded control or cyber-physical systems, where several control applications share the same processing unit. The focus is on the control-scheduling co-design problem, where the controller and scheduling parameters are jointly optimized. The fundamental difference between control applications and traditional embedded applications motivates the need for novel methodologies for the design and optimization of embedded control systems. This thesis is one more step towards correct design and optimization of embedded control systems. Offline and online methodologies for embedded control systems are covered in this thesis. The importance of considering both the expected control performance and stability is discussed, and a control-scheduling co-design methodology is proposed to optimize control performance while guaranteeing stability. Orthogonal to this, bandwidth-efficient stabilizing control servers are proposed, which support compositionality, isolation, and resource-efficiency in design and co-design. Finally, we extend the scope of the proposed approach to non-periodic control schemes and address the challenges in sharing the platform with self-triggered controllers. In addition to the offline methodologies, a novel online scheduling policy to stabilize control applications is proposed.
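One of the non-periodic schemes mentioned above, event- or self-triggered control, only transmits a new control value when the state has drifted sufficiently far from the last sampled state. The loop below simulates that rule for a scalar plant; the plant, gain and triggering threshold are illustrative assumptions, not the schemes analysed in the thesis.

```python
import numpy as np

a, b, k = 1.2, 1.0, 1.0          # unstable scalar plant x+ = a*x + b*u, gain u = -k*x_held
sigma = 0.3                      # relative triggering threshold
n_steps = 200

rng = np.random.default_rng(0)
x, x_held = 1.0, 1.0             # current state and last sampled (transmitted) state
updates = 0
for _ in range(n_steps):
    if abs(x - x_held) > sigma * abs(x):       # event-triggering condition
        x_held = x                             # sample and transmit a new control value
        updates += 1
    u = -k * x_held
    x = a * x + b * u + rng.normal(0, 0.01)    # plant update with small disturbance

print(f"state after {n_steps} steps: {x:.3f}, control updates: {updates}")
```

With these numbers the closed loop stays bounded while transmitting far fewer than 200 control updates, which is the bandwidth argument for such schemes.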
Style APA, Harvard, Vancouver, ISO itp.
46

Howard, Eddie J. Jr. "Institutional Strategies of Identified Involvement Triggers that Increase Campus Engagement: A Longitudinal Analysis Based on an Individual National Survey of Student Engagement Responses". Youngstown State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1587745870664836.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Tamulis, Tomas. "Association between area socioeconomic status and hospital admissions for childhood and adult asthma". [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001134.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Anderson, Eric Ross. "Analysis of rainfall-triggered landslide hazards through the dynamic integration of remotely sensed, modeled and in situ environmental factors in El Salvador". Thesis, The University of Alabama in Huntsville, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=1543417.

Pełny tekst źródła
Streszczenie:

Landslides pose a persistent threat to El Salvador's population, economy and environment. Government officials share responsibility in managing this hazard by alerting populations when and where landslides may occur as well as developing and enforcing proper land use and zoning practices. This thesis addresses gaps in current knowledge between identifying precisely when and where slope failures may initiate and outlining the extent of the potential debris inundation areas. Improvements on hazard maps are achieved by considering a series of environmental variables to determine causal factors through spatial and temporal analysis techniques in Geographic Information Systems and remote sensing. The output is a more dynamic tool that links high resolution geomorphic and hydrological factors to daily precipitation. Directly incorporable into existing decision support systems, this allows for better disaster management and is transferable to other developing countries.
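Operational alerting for rainfall-triggered landslides often starts from an intensity-duration threshold of the form I = a * D^(-b) evaluated against observed or forecast precipitation (the coefficients below are the widely cited global Caine values, used here only for illustration; they are not the thresholds or the dynamic integration method developed in this thesis).

```python
import numpy as np

def exceeds_id_threshold(hourly_rain_mm, a=14.82, b=0.39):
    """Check a Caine-type intensity-duration threshold I = a * D**(-b)
    (I in mm/h, D in h) against a recent hourly rainfall series."""
    rain = np.asarray(hourly_rain_mm, dtype=float)
    for d in range(1, len(rain) + 1):
        intensity = rain[-d:].sum() / d          # mean intensity over the last d hours
        if intensity >= a * d ** (-b):
            return True, d, intensity
    return False, None, None

storm = [0, 2, 6, 12, 15, 11, 9, 7]              # made-up hourly rainfall (mm)
print(exceeds_id_threshold(storm))               # (True, 4, 10.5) for this toy storm
```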

Style APA, Harvard, Vancouver, ISO itp.
49

Gong, Peijie [Verfasser], i P. [Akademischer Betreuer] Nick. "Die to Survive - Functional Analysis of Grapevine Metacaspases Responsive to Effector - Triggered Immunity (ETI)-Related Cell Death / Peijie Gong ; Betreuer: P. Nick". Karlsruhe : KIT-Bibliothek, 2017. http://d-nb.info/1138708577/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Rajaguru, Mudiyanselage Thilanki Maneesha Dahigamuwa. "Enhancement of Rainfall-Triggered Shallow Landslide Hazard Assessment at Regional and Site Scales Using Remote Sensing and Slope Stability Analysis Coupled with Infiltration Modeling". Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7562.

Pełny tekst źródła
Streszczenie:
Landslides cause significant damage to property and human lives throughout the world. Rainfall is the most common triggering factor for the occurrence of landslides. This dissertation presents two novel methodologies for assessment of rainfall-triggered shallow landslide hazard. The first method focuses on using remotely sensed soil moisture and soil surface properties in developing a framework for real-time regional scale landslide hazard assessment while the second method is a deterministic approach to landslide hazard assessment of the specific sites identified during first assessment. In the latter approach, landslide inducing transient seepage in soil during rainfall and its effect on slope stability are modeled using numerical analysis. Traditionally, the prediction of rainfall-triggered landslides has been performed using pre-determined rainfall intensity-duration thresholds. However, it is the infiltration of rainwater into soil slopes which leads to an increase of porewater pressure and destruction of matric suction that causes a reduction in soil shear strength and slope instability. Hence, soil moisture, pore pressure and infiltration properties of soil must be direct inputs to reliable landslide hazard assessment methods. In-situ measurement of pore pressure for real-time landslide hazard assessment is an expensive endeavor and thus, the use of more practical remote sensing of soil moisture is constantly sought. In past studies, a statistical framework for regional scale landslide hazard assessment using remotely sensed soil moisture has not been developed. Thus, the first major objective of this study is to develop a framework for using downscaled remotely sensed soil moisture available on a daily basis to monitor locations that are highly susceptible to rainfall- triggered shallow landslides, using a well-structured assessment procedure. Downscaled soil moisture, the relevant geotechnical properties of saturated hydraulic conductivity and soil type, and the conditioning factors of elevation, slope, and distance to roads are used to develop an improved logistic regression model to predict the soil slide hazard of soil slopes using data from two geographically different regions. A soil moisture downscaling model with a proven superior prediction accuracy than the downscaling models that have been used in previous landslide studies is employed in this study. Furthermore, this model provides satisfactory classification accuracy and performs better than the alternative water drainage-based indices that are conventionally used to quantify the effect that elevated soil moisture has upon the soil sliding. Furthermore, the downscaling of soil moisture content is shown to improve the prediction accuracy. Finally, a technique that can determine the threshold probability for identifying locations with a high soil slide hazard is proposed. On the other hand, many deterministic methods based on analytical and numerical methodologies have been developed in the past to model the effects of infiltration and subsequent transient seepage during rainfall on the stability of natural and manmade slopes. However, the effects of continuous interplay between surface and subsurface water flows on slope stability is seldom considered in the above-mentioned numerical and analytical models. Furthermore, the existing seepage models are based on the Richards equation, which is derived using Darcy’s law, under a pseudo-steady state assumption. 
Thus, the inertial components of flow have not been incorporated typically in modeling the flow of water through the subsurface. Hence, the second objective of this study is to develop a numerical model which has the capability to model surface, subsurface and infiltration water flows based on a unified approach, employing fundamental fluid dynamics, to assess slope stability during rainfall-induced transient seepage conditions. The developed model is based on the Navier-Stokes equations, which possess the capability to model surface, subsurface and infiltration water flows in a unified manner. The extended Mohr-Coulomb criterion is used in evaluating the shear strength reduction due to infiltration. Finally, the effect of soil hydraulic conductivity on slope stability is examined. The interplay between surface and subsurface water flows is observed to have a significant impact on slope stability, especially at low hydraulic conductivity values. The developed numerical model facilitates site-specific calibration with respect to saturated hydraulic conductivity, remotely sensed soil moisture content and rainfall intensity to predict landslide inducing subsurface pore pressure variations in real time.
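The regional-scale model described above is a logistic regression on remotely sensed soil moisture, saturated hydraulic conductivity and terrain attributes. The sketch below shows the generic scikit-learn form of such a model on synthetic cells; the feature names, label rule and data are placeholders, not the study's datasets or coefficients.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for per-cell predictors and observed slide / no-slide labels.
rng = np.random.default_rng(0)
n = 2_000
X = pd.DataFrame({
    "soil_moisture": rng.uniform(0.05, 0.45, n),   # downscaled volumetric content
    "log_ksat": rng.normal(-5.0, 1.0, n),          # log saturated conductivity
    "slope_deg": rng.uniform(0.0, 45.0, n),
    "elevation_m": rng.uniform(0.0, 2_000.0, n),
    "dist_to_road_m": rng.uniform(0.0, 5_000.0, n),
})
# Toy label rule: wetter, steeper cells are more likely to slide.
logit = -6.0 + 10.0 * X["soil_moisture"] + 0.08 * X["slope_deg"]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]             # per-cell slide probability
print("AUC:", round(roc_auc_score(y_te, prob), 2))

# A threshold probability can then be chosen (e.g. from the ROC curve) to flag
# high-hazard cells, analogous to the threshold-selection step described above.
```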
Style APA, Harvard, Vancouver, ISO itp.
