To view the other types of publications on this topic, follow this link: Context Encoder.

Dissertations on the topic "Context Encoder"

Consult the top 25 dissertations for research on the topic "Context Encoder".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and your citation of the chosen work will be formatted automatically in the required style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile a correctly formatted bibliography.

1

Damecharla, Hima Bindu. „FPGA IMPLEMENTATION OF A PARALLEL EBCOT TIER-1 ENCODER THAT PRESERVES ENCODING EFFICIENCY“. University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1149703842.

2

Leufvén, Johan. „Integration of user generated content with an IPTV middleware“. Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-55029.

Annotation:

IPTV is a growing form of distribution for TV and media. Reports show that the market will grow from the current 20-30 million subscribers to almost 100 million by 2012. IPTV extends traditional TV viewing with new services such as renting movies from your TV. It can also be seen as a bridge between the traditional broadcast approach and the on-demand approach users are accustomed to from the Internet.

Since the many actors in the IPTV market all deliver the same basic functionality, companies must deliver products that set them apart from their competitors, either by doing things better than the others or by delivering functionality that others cannot.

This thesis project presents the development of a prototype system for serving user-generated content in the IPTV middleware Dreamgallery. The developed prototype is a fully working system that includes: (1) a fully automated system for transcoding video content; (2) a web portal that solves the problems of user content uploading and administration; and (3) seamless integration with the Dreamgallery middleware and end-user GUI, with two ways of viewing content: one for easy exploration of new content and a second, more structured way of browsing.

A study of three open-source encoding packages is also presented. The three encoders were tested for speed, agility (file format support), and how well they handle files with corrupted data.
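
Such an encoder comparison lends itself to a small scriptable harness. The sketch below times a transcode and records the exit status so that crashes on corrupted input are caught; ffmpeg is only a stand-in for the unnamed encoders tested in the thesis, and the file paths are illustrative.

```python
import subprocess
import time
from pathlib import Path

def benchmark_encoder(cmd_template, inputs):
    """Run an encoder over a list of input files, recording wall time
    and whether it survived (useful for deliberately corrupted files)."""
    results = []
    for src in inputs:
        dst = Path(src).with_name(Path(src).stem + "_out.mp4")
        cmd = [arg.format(src=src, dst=dst) for arg in cmd_template]
        start = time.perf_counter()
        proc = subprocess.run(cmd, capture_output=True)
        results.append({
            "input": src,
            "seconds": time.perf_counter() - start,
            "ok": proc.returncode == 0,  # non-zero: crash or rejected input
        })
    return results

# ffmpeg stands in here for the three encoders studied in the thesis.
report = benchmark_encoder(
    ["ffmpeg", "-y", "-i", "{src}", "-c:v", "libx264", "{dst}"],
    ["clip_clean.avi", "clip_corrupted.avi"],
)
for entry in report:
    print(entry)
```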

3

May, Richard John. „Perceptual content loss in bit rate constrained IFS encoded speech“. Thesis, University of Portsmouth, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396323.

4

Hsu, William. „Using knowledge encoded in graphical disease models to support context-sensitive visualization of medical data“. Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1925776141&sid=13&Fmt=2&clientId=1564&RQT=309&VName=PQD.

5

Anegekuh, Louis. „Video content-based QoE prediction for HEVC encoded videos delivered over IP networks“. Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3377.

Annotation:
The recently released High Efficiency Video Coding (HEVC) standard, which halves the transmission bandwidth required for encoded video at almost the same quality compared to H.264/AVC, and the availability of increased network bandwidth (e.g. from 2 Mbps for 3G networks to almost 100 Mbps for 4G/LTE) have led to the proliferation of video streaming services. Based on these major innovations, the prevalence and diversity of video applications are set to increase over the coming years. However, the popularity and success of current and future video applications will depend on the perceived quality of experience (QoE) of end users. How to measure or predict the QoE of delivered services becomes an important and inevitable task for both service and network providers. Video quality can be measured either subjectively or objectively. Subjective quality measurement is the most reliable method of determining the quality of multimedia applications because of its direct link to users' experience. However, this approach is time-consuming and expensive, hence the need for an objective method that can produce results comparable with those of subjective testing. In general, video quality is impacted by impairments caused by the encoder and by the transmission network. However, videos encoded and transmitted over an error-prone network show different quality measurements even under the same encoder settings and network quality of service (NQoS). This indicates that, in addition to encoder settings and network impairment, other key parameters may impact video quality. In this project, it is hypothesised that video content type is one of the key parameters that may impact the quality of streamed videos. Based on this assertion, parameters related to video content type are extracted and used to develop a single metric that quantifies the content type of different video sequences. The proposed content type metric is then used together with encoding parameter settings and NQoS to develop content-based video quality models that estimate the quality of different video sequences delivered over IP-based networks. This project led to the following main contributions: (1) a new metric for quantifying video content type based on the spatiotemporal features extracted from the encoded bitstream; (2) the development of a novel subjective test approach for video streaming services; (3) new content-based video quality prediction models for predicting the QoE of video sequences delivered over IP-based networks. The models have been evaluated using subjective and objective methods.
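
The thesis derives its content-type metric from the encoded bitstream; as a rough pixel-domain analogue, the spatial and temporal information measures of ITU-T P.910 capture the same intuition (how busy and how fast-moving a sequence is). A minimal sketch, assuming frames arrive as a list of 2-D grayscale numpy arrays:

```python
import numpy as np
from scipy import ndimage

def spatial_information(frame):
    """SI: std. dev. of the Sobel-filtered frame (ITU-T P.910)."""
    gx = ndimage.sobel(frame.astype(float), axis=0)
    gy = ndimage.sobel(frame.astype(float), axis=1)
    return np.hypot(gx, gy).std()

def si_ti(frames):
    """Return (SI, TI) for a sequence of grayscale frames.
    TI is the max std. dev. of successive frame differences."""
    si = max(spatial_information(f) for f in frames)
    ti = max(
        (np.asarray(b, float) - np.asarray(a, float)).std()
        for a, b in zip(frames, frames[1:])
    )
    return si, ti

# A high-SI, high-TI clip (e.g. sports) typically needs more bits than a
# low-motion head-and-shoulders clip for the same perceived quality.
```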
6

Sasko, Dominik. „Segmentace lézí roztroušené sklerózy pomocí hlubokých neuronových sítí“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442379.

Annotation:
The main aim of this master's thesis was the automatic segmentation of multiple sclerosis (MS) lesions on MRI scans. State-of-the-art segmentation methods based on deep neural networks were tested, and two approaches to initializing the network weights were compared: transfer learning and self-supervised learning. Automatic segmentation of MS lesions is a very challenging problem, primarily because of the highly imbalanced dataset (brain scans usually contain only a small amount of damaged tissue). Another challenge is the manual annotation of these lesions: two different physicians may label different parts of the brain as damaged, and the Dice coefficient between their annotations is approximately 0.86. Simplifying the annotation process through automation could improve lesion-load estimation, which in turn could improve the diagnosis of individual patients. Our goal was to propose two techniques that use transfer learning to pre-train the weights and could later improve the results of current segmentation models. The theoretical part covers artificial intelligence, machine learning, and deep neural networks and their use in image segmentation, followed by a description of multiple sclerosis, its types, symptoms, diagnosis, and treatment. The practical part begins with data preprocessing. First, the brain scans were resampled to the same resolution and voxel size, because three different datasets were used, acquired on scanners from different manufacturers. One dataset also included the skull, which had to be removed with the FSL tool to retain only the patient's brain. We used 3D scans (FLAIR, T1, and T2 modalities), which were split into individual 2D slices and fed to a neural network with an encoder-decoder architecture. The training dataset contained 6720 slices of 192 x 192 pixels (after removing slices whose masks were empty). The loss function was Combo loss (a combination of Dice loss and a modified cross-entropy). The first method used weights pre-trained on the ImageNet dataset for the encoder of a U-Net architecture, with the encoder weights either frozen or unfrozen, compared against random weight initialization; in this case only the FLAIR modality was used. Transfer learning raised the monitored metric from approximately 0.4 to 0.6; the difference between frozen and unfrozen encoder weights was around 0.02. The second proposed technique used a self-supervised context encoder with generative adversarial networks (GAN) to pre-train the weights. This network used all three modalities, including slices with empty masks (23040 images in total). The task of the GAN was to inpaint a brain scan occluded by a black checkerboard-shaped mask. The weights learned this way were then loaded into the encoder and applied to our segmentation problem. This experiment did not show better results, with DSC values of 0.29 and 0.09 (unfrozen and frozen encoder weights, respectively). The sharp drop in the metric may have been caused by pre-training the weights on distant tasks (segmentation versus the self-supervised context encoder), as well as by the difficulty of the task due to the imbalanced dataset.
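
The Combo loss mentioned above pairs an overlap term with a pixel-wise term so that the rare lesion class is not drowned out. A minimal numpy sketch of one common formulation (a beta-weighted cross-entropy plus a Dice term; the weights alpha and beta here are assumptions, not the thesis's values):

```python
import numpy as np

def combo_loss(pred, target, alpha=0.5, beta=0.7, eps=1e-7):
    """Combo loss: alpha * weighted BCE + (1 - alpha) * (1 - Dice).
    pred: predicted foreground probabilities in (0, 1); target: {0, 1} mask.
    beta > 0.5 penalizes false negatives more, which helps with the class
    imbalance typical of MS lesion masks (alpha, beta are illustrative)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    # Dice term: soft overlap between prediction and ground truth.
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    # Weighted binary cross-entropy term.
    bce = -(beta * target * np.log(pred)
            + (1.0 - beta) * (1.0 - target) * np.log(1.0 - pred)).mean()
    return alpha * bce + (1.0 - alpha) * (1.0 - dice)
```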
7

和基, 塩谷, and Kazuki Shiotani. „Olfactory cortex ventral tenia tecta neurons encode the distinct context-dependent behavioral states of goal-directed behaviors“. Thesis, 櫻井 芳雄, 2003. http://id.nii.ac.jp/1707/00028191/.

8

和基, 塩谷, and Kazuki Shiotani. „Olfactory cortex ventral tenia tecta neurons encode the distinct context-dependent behavioral states of goal-directed behaviors“. Thesis, 櫻井 芳雄, 2021. https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13158521/?lang=0.

9

Munoz, Joshua. „Application of Multifunctional Doppler LIDAR for Non-contact Track Speed, Distance, and Curvature Assessment“. Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/77876.

Annotation:
The primary focus of this research is evaluating the feasibility, applicability, and accuracy of Doppler Light Detection And Ranging (LIDAR) sensors as a non-contact means for measuring track speed, distance traveled, and curvature. Speed histories, currently measured with a rotary, wheel-mounted encoder, serve a number of useful purposes, one significant use being derailment investigations. Distance calculation provides a spatial reference system for operators to locate track sections of interest. Railroad curvature, measured with an IMU, is monitored to keep track infrastructure within regulations. Speed measured with high accuracy leads to high-fidelity distance and curvature data through utilization of the processor clock rate and of left- and right-rail speed differentials during curve navigation, respectively. Wheel-mounted encoders, or tachometers, provide a relatively low-resolution speed profile, exhibit increased noise with increasing speed, and are subject to the inertial behavior of the rail car, which affects output data. The IMU used to measure curvature depends on acceleration and yaw-rate sensitivity and struggles in low-speed conditions. Preliminary system tests onboard a 'Hy-Rail' utility vehicle capable of traveling on rail show that speed capture is possible using the rails as the reference moving target; furthermore, obtaining speed profiles from both rails allows calculation of speed differentials in curves to estimate degrees of curvature. Ground-truth distance calibration and curve measurement were also carried out. Distance calibration involved placement of spatial landmarks detected by a sensor to synchronize distance measurements as a pre-processing procedure. Curvature ground-truth measurements provided a reference system to confirm measurement results and observe alignment variation throughout a curve. Primary testing occurred onboard a track geometry rail car, measuring rail speed over substantial mileage in various weather conditions and providing high-accuracy data from which distance and curvature along the test routes were further calculated. Test results indicate the LIDAR system measures speed with higher accuracy than the encoder, free of the noise that grows with speed. Distance calculation is also highly accurate, with results showing high correlation with encoder and ground-truth data. Finally, curvature calculated from speed data shows good correlation with IMU measurements and a resolution capable of revealing localized track alignments. Further investigations involve a curve measurement algorithm and a speed calibration method independent of external reference systems, namely encoder and ground-truth data. The speed calibration results show a high correlation with speed data from the track geometry vehicle. It is recommended that the study be extended to assess the LIDAR's sensitivity to car-body motion in order to better isolate the embedded behavior in the speed and curvature profiles. Furthermore, in the interest of progressing the system toward a commercially viable unit, methods for self-calibration and pre-processing that allow fully independent operation are highly encouraged.
Ph. D.
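
The curvature estimate from left- and right-rail speeds follows from rigid-body kinematics: in a curve the outer rail path is longer, so the ratio of the two speeds fixes the radius. A sketch under the usual assumptions (measurements at the two railheads; the standard-gauge spacing and the 100-ft-chord degree-of-curvature conversion are assumptions, not the dissertation's exact procedure):

```python
def curve_from_speeds(v_outer, v_inner, gauge_ft=4.71):
    """Estimate curve radius (ft) and degree of curvature from the
    left-/right-rail speed differential measured by the two LIDAR beams.
    v_outer / v_inner = (R + g/2) / (R - g/2)
      =>  R = (g/2) * (v_outer + v_inner) / (v_outer - v_inner)
    gauge_ft: lateral spacing of the measurement points (standard gauge,
    56.5 in ~ 4.71 ft, is an assumption here)."""
    dv = v_outer - v_inner
    if abs(dv) < 1e-9:
        return float("inf"), 0.0          # tangent (straight) track
    radius = 0.5 * gauge_ft * (v_outer + v_inner) / dv
    degree = 5729.58 / radius             # 100-ft chord approximation
    return radius, degree

r, d = curve_from_speeds(44.12, 44.00)    # ft/s from each rail
print(f"radius ~ {r:.0f} ft, curvature ~ {d:.2f} degrees")
```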
10

Sultana, Tania. „L'influence du contexte génomique sur la sélection du site d'intégration par les rétrotransposons humains L1“. Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4133.

Annotation:
Retrotransposons are mobile genetic elements that employ an RNA intermediate and a reverse transcription step for their replication. Long INterspersed Elements-1 (LINE-1 or L1) form the only autonomously active retrotransposon family in humans. Although most copies are defective due to the accumulation of mutations, each individual genome contains an average of 100 retrotransposition-competent L1 copies, which contribute to the dynamics of contemporary human genomes. L1 integration sites in the host genome directly determine the genetic consequences of the integration and the fate of the integrated copy. Thus, where L1 integrates in the genome, and whether this process is random, is critical to our understanding of human genome evolution, somatic genome plasticity in cancer and aging, and host-parasite interactions. To characterize L1 insertion sites, rather than studying endogenous L1 copies that have been subjected to evolutionary selective pressure, we induced de novo L1 retrotransposition by transfecting a plasmid-borne active L1 element into HeLa S3 cells. We then mapped de novo insertions in the human genome at nucleotide resolution with a dedicated deep-sequencing approach, named ATLAS-seq. Finally, de novo insertions were examined for their proximity to a large number of genomic features. We found that L1 preferentially integrates into lowly-expressed and weak-enhancer chromatin segments. We also detected several hotspots of recurrent L1 integration. Our results indicate that the distribution of de novo L1 insertions is non-random both at local and regional scales, and pave the way to identifying potential cellular factors involved in the targeting of L1 insertions.
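
The proximity analysis in the last step reduces to, for each insertion site, finding the distance to the nearest annotated feature on the same chromosome. A minimal sketch using sorted coordinates and binary search (the coordinate layout is an assumption; real pipelines typically work on BED intervals with tools such as bedtools):

```python
import bisect

def nearest_feature_distance(insertions, features):
    """insertions, features: dicts mapping chromosome -> sorted list of
    positions (bp). Returns one distance per insertion (None when the
    chromosome has no annotated feature)."""
    distances = []
    for chrom, sites in insertions.items():
        starts = features.get(chrom, [])
        for pos in sites:
            if not starts:
                distances.append(None)
                continue
            i = bisect.bisect_left(starts, pos)
            flanking = starts[max(0, i - 1):i + 1]   # nearest on each side
            distances.append(min(abs(pos - c) for c in flanking))
    return distances

# Illustrative coordinates only:
ins = {"chr1": [1_000_200, 5_400_000]}
enh = {"chr1": [1_000_000, 2_000_000, 5_500_000]}
print(nearest_feature_distance(ins, enh))   # [200, 100000]
```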
11

Hilber, Susan Elizabeth. „Spatial and temporal patterns of feeding and food in three species of Mellitid sand dollars“. [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001652.

12

Swigart, James P. „Small Scale Distribution of the Sand Dollars Mellita tenuis and Encope spp. (Echinodermata)“. Scholar Commons, 2006. http://scholarcommons.usf.edu/etd/3930.

Annotation:
Small-scale distributions of Mellita tenuis and Encope spp. were quantified at Fort De Soto Park on Mullet Key, off Egmont Key, and off Captiva Island, Florida, during 2005. Off Captiva Island, Encope spp. were aggregated in 33.3% of plots in March. Off Egmont Key, M. tenuis were aggregated in 100% of plots in March but in no plots in September. At Fort De Soto Park, M. tenuis were aggregated in 37.5% of plots in May, 12.5% in July, and 50.0% in September. Sand dollars in 6.3% of the plots in September at Fort De Soto had a uniform distribution. Individuals in all other plots at all sites had random distributions. At Fort De Soto, each plot was revisited a few hours after the initial observation; 37.5% of plots had a different distribution at the second observation. Percent organic content of the smallest sediment grains (<105 μm) was not correlated with sand dollar distribution, except off Egmont Key, where there was a significant negative correlation between the nearest neighbor index and percent organic content. Mellita tenuis do aggregate on occasion; the cause of aggregation is not known. If localized differences in the percent organic content of the sediment influence distribution, then the homogeneity in percent organic content found in the majority of plots would suggest a random distribution of sand dollars.
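
The aggregation calls above rest on a nearest-neighbor index: the ratio of the observed mean nearest-neighbor distance to the value expected under complete spatial randomness. A sketch of the Clark-Evans form (plot coordinates in meters are an assumption):

```python
import numpy as np

def clark_evans_index(points, plot_area):
    """points: (n, 2) array of sand-dollar coordinates within one plot.
    R < 1 suggests aggregation, R ~ 1 randomness, R > 1 uniformity."""
    pts = np.asarray(points, float)
    n = len(pts)
    # Pairwise distances; mask the zero self-distance on the diagonal.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    observed = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(n / plot_area)   # CSR expectation
    return observed / expected

rng = np.random.default_rng(0)
print(clark_evans_index(rng.uniform(0, 2, (30, 2)), plot_area=4.0))
```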
13

de Cuetos, Philippe. „Streaming de Vidéos Encodées en Couches sur Internet avec Adaptation au Réseau et au Contenu“. PhD thesis, Télécom ParisTech, 2003. http://pastel.archives-ouvertes.fr/pastel-00000489.

Annotation:
In this thesis we propose new techniques and algorithms to improve the quality of video streaming applications over the Internet. We formulate optimization problems and derive control policies for transmission over today's Internet without quality of service. The thesis studies techniques that adapt the transmission both to varying network conditions (network adaptation) and to the characteristics of the transmitted videos (content adaptation). These techniques are combined with layered video coding and temporary buffering of the video at the client. We evaluate their performance through simulations with network traces (TCP connections) and with videos encoded in MPEG-4 FGS. We first consider videos stored on a server and transmitted over a lossless TCP-friendly connection. We compare add/drop layer mechanisms with version switching, and show that the flexibility of layered coding generally cannot compensate for its bandwidth overhead compared with conventional video coding. Second, we focus on a new layered coding technique, Fine Granularity Scalability (FGS), designed specifically for video streaming. We propose a new framework for streaming FGS videos and solve an optimization problem for a criterion that involves both image quality and quality variations during playback. Our optimization problem suggests a real-time heuristic whose performance is evaluated over different TCP-friendly protocols. We show that transmission over a highly variable TCP-friendly connection, such as TCP, yields quality comparable to transmission over less variable TCP-friendly connections. We present the implementation of our adaptation heuristic in an MPEG-4 video streaming system. Third, we consider the general framework of streaming optimized according to the rate-distortion characteristics of the video. We analyze rate-distortion traces of long MPEG-4 FGS videos and observe that semantic content has a significant impact on the properties of the encoded videos. Using our traces, we examine optimal streaming at different aggregation levels (frames, groups of pictures, scenes); we advocate optimal scene-by-scene adaptation, which provides good quality at low computational complexity. Finally, we propose a unified optimization framework for transmitting layered video over lossy channels. The proposed framework combines scheduling, error protection with FEC, and error concealment at the decoder. We use results on infinite-horizon average-reward Markov Decision Processes (MDPs) to find optimal transmission policies with low complexity and for a wide range of quality metrics. We show that it is crucial to take decoder error concealment into account when jointly optimizing scheduling and error protection in order to obtain optimal transmission.
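
The scene-level adaptation advocated in the third part can be pictured as a budgeted allocation: each scene offers a small menu of FGS cutting points (rate, quality), and the sender repeatedly spends bits where they buy the most quality. A greedy sketch under assumed per-scene rate-quality tables; it ignores the quality-variation penalty the thesis adds to the criterion, and assumes the menus are concave so greedy marginal gains are well behaved.

```python
import heapq

def allocate_scene_layers(scenes, budget_kbit):
    """scenes: list of [(rate_kbit, quality), ...] menus, each sorted by
    rate; returns the chosen menu index per scene. Greedy on marginal
    quality gain per extra kbit (one pending heap entry per scene)."""
    choice = [0] * len(scenes)
    spent = sum(menu[0][0] for menu in scenes)       # base layer is mandatory
    heap = []
    for i, menu in enumerate(scenes):
        if len(menu) > 1:
            dr = menu[1][0] - menu[0][0]
            dq = menu[1][1] - menu[0][1]
            heapq.heappush(heap, (-dq / dr, i))
    while heap:
        _, i = heapq.heappop(heap)
        menu, j = scenes[i], choice[i]
        dr = menu[j + 1][0] - menu[j][0]
        if spent + dr > budget_kbit:
            continue                                  # cannot afford this step
        spent += dr
        choice[i] = j + 1
        if choice[i] + 1 < len(menu):                 # queue the next step
            k = choice[i]
            dr2 = menu[k + 1][0] - menu[k][0]
            dq2 = menu[k + 1][1] - menu[k][1]
            heapq.heappush(heap, (-dq2 / dr2, i))
    return choice

# Two scenes with illustrative (rate, PSNR-like quality) menus:
print(allocate_scene_layers(
    [[(100, 30), (200, 34), (300, 36)], [(100, 32), (200, 33)]], 450))
```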
14

Hosseinipour, Milad. „Electromechanical Design and Development of the Virginia Tech Roller Rig Testing Facility for Wheel-rail Contact Mechanics and Dynamics“. Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/82542.

Annotation:
The electromechanical design and development of a sophisticated roller rig testing facility at the Railway Technologies Laboratory (RTL) of Virginia Polytechnic Institute and State University (VT) is presented. The VT Roller Rig is intended for studying the complex dynamics and mechanics at the wheel-rail interface of railway vehicles in a controlled laboratory environment. Such measurements require excellent powering and driving architecture, high-performance motion control, accurate measurements, and relatively noise-free data acquisition systems. It is critical to accurately control the relative dynamics and positioning of rotating bodies to emulate field conditions. To measure the contact forces and moments, special care must be taken to ensure that noise such as mechanical vibration, electrical crosstalk, and electromagnetic interference (EMI) is kept to a minimum. This document describes the steps toward the design and development of all electromechanical subsystems of the VT Roller Rig, including the powertrain, power electronics, motion control systems, sensors, data acquisition units, safety and monitoring circuits, and the general practices followed to satisfy local and international codes of practice. The VT Roller Rig comprises a wheel and a roller in a vertical configuration that simulate the single-wheel/rail interaction at one-fourth scale. The roller is five times larger than the scaled wheel to keep the contact patch distortion that is inevitable with a roller rig to a minimum. This setup is driven by two independent AC servo motors that control the velocity of the wheel and roller using state-of-the-art motion control technologies. Six linear actuators allow adjustment of the simulated load, wheel angle of attack, rail cant, and lateral position of the wheel on the rail. All motion control is performed by digital servo drives manufactured by Kollmorgen, VA, USA. A number of sensors measure the contact patch parameters, including force, torque, displacement, rotation, speed, acceleration, and contact patch geometry. A unified communication protocol between the actuators and sensors minimizes data conversion time, allowing servo update rates of up to 48 kHz. This provides an unmatched bandwidth for performing various dynamics, vibration, and transient tests, as well as static steady-state conditions. The VT Roller Rig has been debugged and commissioned successfully. The hardware and software components were tested both individually and within the system. The VT Roller Rig can control the creepage to within 0.3 RPM of the commanded value, while actively controlling the relative position of the rotating bodies with an unprecedented level of accuracy, to within 16 nm of the target location. The contact force measurement dynamometers can dynamically capture the contact forces with 13.6 N accuracy, for loads of up to 10 kN. The instantaneous torque in each driveline can be measured with better than 6.1 Nm resolution. The VT Roller Rig Motion Programming Interface (MPI) is highly flexible for both programmers and non-programmers. All common motion control algorithms in the servo motion industry have been successfully implemented on the Rig. The VT Roller Rig MPI accepts third-party motion algorithms in C, C++, and any .Net language. It successfully communicates with other design and analytics software such as Matlab, Simulink, and LabVIEW for performing custom-designed routines. It also provides the infrastructure for linking the Rig's hardware with commercial multibody dynamics software such as Simpack, NUCARS, and Vampire, which is a milestone for hardware-in-the-loop testing of railroad systems.
Ph. D.
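
Creepage, the quantity the rig controls to within 0.3 RPM, is the normalized slip between the wheel and roller surface speeds at the contact point. A minimal sketch of the usual longitudinal-creepage definition (the radii and speeds below are illustrative, not the rig's actual parameters):

```python
import math

def longitudinal_creepage(wheel_rpm, wheel_radius_m, roller_rpm, roller_radius_m):
    """Creepage = (v_wheel - v_roller) / mean surface speed, where
    v = omega * r is the tangential speed at the contact point."""
    v_wheel = wheel_rpm * 2.0 * math.pi / 60.0 * wheel_radius_m
    v_roller = roller_rpm * 2.0 * math.pi / 60.0 * roller_radius_m
    return (v_wheel - v_roller) / (0.5 * (v_wheel + v_roller))

# Roller five times the scaled wheel: equal surface speeds -> zero creepage.
print(longitudinal_creepage(500.0, 0.1, 100.0, 0.5))   # 0.0
print(longitudinal_creepage(501.5, 0.1, 100.0, 0.5))   # ~0.003 (0.3 %)
```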
15

Menegaz, Eugênio. „6 encores para piano de Luciano Berio : um estudo sobre a aprendizagem criativa em uma oficina de música para crianças“. Universidade do Estado de Santa Catarina, 2012. http://tede.udesc.br/handle/handle/1538.

Annotation:
This dissertation presents a study of Berio's 6 Encores for piano. Based on an understanding of the compositional aspects, the determination of systems, and their relationship with the pianistic writing, interpretive possibilities were outlined to optimize the process of instrumental performance. Berio belongs to the same generation as Cage, Stockhausen, and Boulez, and was a pioneer in the exploration of new musical frontiers, employing a myriad of languages and techniques during his career as a composer. Berio produced the collection 6 Encores, a series of six pieces, over 25 years (1965-1990), which makes them representative of the twentieth-century piano repertoire. Berio defines music as a process. This definition is reflected in his treatment of the sound material in his compositions: slow transformations, which also call the listener to participate actively in the musical experience because of his peculiarities of writing and sound. The first chapter presents the composer and the philosophical and aesthetic aspects that permeate his work. The second presents a study of the context of the 6 Encores for piano. The third chapter presents specific aspects of the pieces, their structural components, and study strategies for their instrumental realization. Understanding these philosophical premises can optimize the work of studying and of building a solid interpretation for an interpreter unfamiliar with the contemporary repertoire.
16

Dolz, Jose. „Vers la segmentation automatique des organes à risque dans le contexte de la prise en charge des tumeurs cérébrales par l’application des technologies de classification de deep learning“. Thesis, Lille 2, 2016. http://www.theses.fr/2016LIL2S059/document.

Annotation:
Brain cancer is a leading cause of death and disability worldwide, accounting for 14.1 million new cancer cases and 8.2 million deaths in 2012 alone. Radiotherapy and radiosurgery are among the arsenal of available techniques to treat it. Because both techniques involve the delivery of a very high dose of radiation, the tumor as well as the surrounding healthy tissues must be precisely delineated. In practice, delineation is performed manually by experts, with very little machine assistance. It is thus a highly time-consuming process with significant variation between the labels produced by different experts. Radiation oncologists, radiology technologists, and other medical specialists therefore spend a substantial portion of their time on medical image segmentation. If, by automating this process, it were possible to achieve a more repeatable set of contours that can be agreed upon by the majority of oncologists, this would improve the quality of treatment. Additionally, any method that reduces the time taken to perform this step will increase patient throughput and make more effective use of the skills of the oncologist. Nowadays, automatic segmentation techniques are rarely employed in clinical routine. When they are, they typically rely on registration approaches. In these techniques, anatomical information is exploited through images already annotated by experts, referred to as atlases, which are deformed and matched to the patient under examination. The quality of the deformed contours depends directly on the quality of the deformation. Nevertheless, registration techniques involve regularization models of the deformation field, whose parameters are complex to adjust and whose quality is difficult to evaluate. The integration of tools that assist in the segmentation task is therefore highly desirable in clinical practice. The main objective of this thesis is to provide radio-oncology specialists with automatic tools to delineate the organs at risk of patients undergoing brain radiotherapy or stereotactic radiosurgery. To achieve this goal, the main contributions of this thesis are presented along two major axes. First, we consider the use of one of the latest hot topics in artificial intelligence to tackle the segmentation problem, i.e. deep learning. This set of techniques presents advantages over classical machine learning methods, which are exploited throughout this thesis. The second axis is dedicated to proposed image features, mainly associated with the texture and contextual information of MR images. These features, which are not present in classical machine-learning-based methods for segmenting brain structures, led to improvements in segmentation performance. We therefore propose the inclusion of these features in a deep network. We demonstrate in this work the feasibility of using such a deep-learning-based classification scheme for this particular problem. We show that the proposed method leads to high performance, both in accuracy and efficiency, and that the automatic segmentations provided by our method lie within the variability of the experts. The results demonstrate that our method not only outperforms a state-of-the-art classifier, but also provides results that would be usable in radiation treatment planning.
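
The feature idea in the second axis can be sketched as stacking hand-crafted texture and context maps with the raw MR intensities as extra input channels of a network. A minimal PyTorch illustration; the specific features and the tiny network are assumptions for demonstration, not the thesis's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def add_feature_channels(mri):
    """mri: (B, 1, H, W) tensor. Stack gradient magnitude (texture) and a
    smoothed copy (coarse local context) with the raw intensities."""
    gx = mri[..., :, 1:] - mri[..., :, :-1]
    gy = mri[..., 1:, :] - mri[..., :-1, :]
    grad = torch.sqrt(F.pad(gx, (0, 1)) ** 2 + F.pad(gy, (0, 0, 0, 1)) ** 2)
    context = F.avg_pool2d(mri, 9, stride=1, padding=4)   # neighborhood mean
    return torch.cat([mri, grad, context], dim=1)         # (B, 3, H, W)

segmenter = nn.Sequential(                                # toy stand-in net
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                                  # organ vs background
)

x = add_feature_channels(torch.randn(1, 1, 64, 64))
logits = segmenter(x)          # per-pixel class scores
```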
17

Barbosa, Maria vanice Lacerda de Melo. „Modalização e polifonia no gênero resenha acadêmica:um olhar apreciativo sobre a voz da ciência“. Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/8410.

Annotation:
When constructing a spoken or written text, whatever the genre being realized, the speaker uses linguistic resources such as semantic-argumentative strategies to guide the interlocutor toward certain conclusions. Modalization and polyphony, accordingly, are phenomena that allow the speaker to imprint subjectivity on the content of the statements while acting upon the interlocutor. Focusing on this discussion, this investigation aims to show that modalization and polyphony linguistically reveal subjectivity in the academic review (resenha) genre, functioning, therefore, as markers of argumentation. It is a qualitative, descriptive, and interpretative study that adopts the theoretical and methodological principles of Argumentative Semantics. The corpus consists of ten reviews collected from six editions of the Jornal de Resenhas, of Discurso Editorial, ISSN 1984-6282, published in 2009, 2010, and 2012. The theoretical discussion of the Theory of Argumentation within Language is based on Ducrot (1994, 1987, 1988), Espíndola (2004), Nascimento (2005, 2009), Koch (2006a, 2006b), and others who discuss this approach. Modalization is discussed under the postulates of Castilho and Castilho (1993), Koch (2006b), Cervoni (1989), Nascimento (2009), Neves (2011a), Palmer (2011), and García Negroni (2011). In addition, Foucault (2011), Bakhtin (2010a, 2010b), Marcuschi (2008), and others provide the theoretical basis for the treatment of the review genre. The analysis reveals that reviewers use modalization and the polyphony of speakers as phenomena that ultimately disclose the reviewers' subjectivity in relation to the points of view voiced by other speakers, that is, as a discursive strategy that guides how the text of the review should be read. The review genre is thus defined as a place of interaction of voices and subjective impressions, through which the speaker summarizes, praises, criticizes, and evaluates the most diverse academic intellectual productions.
18

Chang, Chi-Chin, and 張其勤. „Efficient Design of JPEG2000 EBCOT TIER-I Context Formation Encoder“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/82514225034333603693.

Annotation:
Master's thesis
National Chiao Tung University
In-service master's program, College of Electrical Engineering and Computer Science
ROC academic year 94
JPEG2000 is a new still-image compression standard. Its most attractive feature is that it can reduce the bit rate significantly while preserving image quality. However, this feature requires more complex computation and higher hardware cost than other standards, and most of the computation time is spent in EBCOT. Many design techniques have been proposed for its efficient realization; the pass-parallel architecture is one of the most efficient. In this thesis, we propose methods to improve the computational efficiency and hardware utilization, and to reduce the hardware area, of the pass-parallel EBCOT context formation (CF) engine. The Sample-Parallel Pass-Type Detection (SPPD) method improves the performance of deciding the pass types of all four samples in the same column. The Column-Based Pass-Parallel Coding (CBPC) method codes all four samples in the same column concurrently. We designed a CF encoder to verify both new methods. The input samples to CF are processed in two steps, each optimized separately. In step one, SPPD shortens the time needed to determine pass types, improving overall performance. In step two, CBPC codes all four samples in the same column according to the pass types determined in step one, reducing hardware cost and improving hardware utilization. Our design is synthesized with Synopsys Design Compiler using the TSMC 0.15 μm CMOS process. The pre-layout synthesized area is 18127.31 μm². In our simulation, the clock frequency reaches 600 MHz under the WCCOM worst_case_tree conditions; at this frequency, encoding a 2304 x 1728 image takes 0.0116 seconds. Compared with the original pass-parallel CF, the proposed methods reduce the encoding time by 13.83%, the hardware cost by 18.28%, and the hardware utilization by 34.78%.
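
The pass-type decision that SPPD parallelizes is, per sample and bit-plane, a simple rule over the significance state of the sample and its eight neighbors. A minimal software sketch of the EBCOT tier-1 rule (the hardware evaluates it for all four samples of a stripe column at once):

```python
import numpy as np

SPP, MRP, CUP = 0, 1, 2   # significance propagation, magnitude refinement, cleanup

def pass_types(significant):
    """significant: 2-D boolean map of coefficients already significant in
    previous bit-planes. Returns the coding pass assigned to each sample
    for the current bit-plane."""
    sig = np.asarray(significant, bool)
    padded = np.pad(sig, 1, constant_values=False)
    # Count significant coefficients among the 8 neighbors of each sample.
    neigh = np.zeros(sig.shape, int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                neigh += padded[1 + dy:1 + dy + sig.shape[0],
                                1 + dx:1 + dx + sig.shape[1]]
    out = np.full(sig.shape, CUP)        # default: cleanup pass
    out[(~sig) & (neigh > 0)] = SPP      # insignificant, significant neighbor
    out[sig] = MRP                       # already significant: refinement
    return out

print(pass_types([[0, 1, 0],
                  [0, 0, 0],
                  [0, 0, 0]]))
```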
19

Wu, Li-Cian, and 武立千. „A High Throughput Context-Based Adaptive Binary Arithmetic Encoder for QFHD Resolution“. Thesis, 2008. http://ndltd.ncl.edu.tw/handle/51883139057907891983.

20

Liu, Po-Sheng, and 劉普昇. „A Hardware Context-Based Adaptive Binary Arithmetic Encoder for H.264 Advanced Video Coding“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/39127786299837173975.

Annotation:
Master's thesis
National Tsing Hua University
Department of Computer Science
ROC academic year 94
We propose a full hardware implementation of a context-based adaptive binary arithmetic encoder. Our architecture includes a 14-way context-pair generator composed of binarization and context modeling, a 3-stage pipelined circuit for fetching neighboring data, and a 3-mode, 4-stage pipelined arithmetic encoder with forwarding logic for context update. The arithmetic encoder can process one bin per cycle, and the whole encoder processes 0.77 bins per cycle on average.
21

Chen, Jian-Long, and 陳建隆. „Design of Context Adaptive Arithmetic Encoder and Decoder for H.264/AVC Video Coding“. Thesis, 2005. http://ndltd.ncl.edu.tw/handle/22542788201482743553.

Annotation:
Master's thesis
National Chiao Tung University
In-service master's program, College of Electrical Engineering and Computer Science
ROC academic year 93
H.264/AVC is the latest video compression standard. Research shows that, compared with MPEG-2 and MPEG-4, H.264/AVC greatly improves both compression ratio and video quality, which makes it well suited to multimedia streaming and mobile TV applications. This thesis focuses on fast, small-area CABAC encoding and decoding for H.264/AVC. CABAC is composed of three main function units: binarization, context modeling, and arithmetic coding. Processed in the traditional way, updating a bin requires 13 steps; by rearranging the steps (including parallel processing and pipelining), the procedure is reduced to 4 steps. For binarization, a combinational circuit implements unary coding, and table partitioning reduces the extra computational complexity of UEGk coding. In contrast to a software implementation, which must interrupt the microprocessor to service these operations, the hardware shares the load at a cost of only an additional 5k gate count. For context modeling, dual-port memory is adopted to read and write simultaneously and maximize throughput. In arithmetic coding, renormalization is required when the interval variables range and low fall below a quarter of the full interval; its two loops are traditionally processed bit by bit. In this work, we follow the one-slipping method and use a leading-zero detector (LZD) circuit to detect the number of loop iterations, then generate a bit-parallel mask to implement the second loop. Compared with the conventional method, the mask method saves about 10% of the arithmetic-coding execution time, processing a bin in 1.8 cycles on average. The thesis also integrates the three modes of arithmetic coding in one circuit to maximize hardware sharing. Whereas [3] uses a prefix adder for renormalization, we lower the cost by using a shifter to remove the MSB and an FSM to store the status, eliminating two 10-bit adder circuits; this saves about 50% of that circuit compared with [3]. Overall, the implemented encoder operates at 333 MHz with a gate count of 13.3k, taking about 90% of the operation time of the traditional approach, and the decoder also runs at 333 MHz with a 16.7k gate count. In short, this design achieves the key goals of low cost and high throughput.
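
The renormalization the thesis accelerates is the loop at the heart of any binary arithmetic coder: after the interval [low, low + range) is narrowed by one bin, it is rescaled bit by bit until it is wide enough again. Below is a software model of a generic binary coder with quarter-interval renormalization; it illustrates the loops being collapsed in the hardware, not the exact table-driven H.264 CABAC core (a final flush step is also omitted for brevity).

```python
HALF, QUARTER = 0x8000, 0x4000    # 16-bit interval model

class BinaryArithmeticEncoder:
    def __init__(self):
        self.low, self.high = 0, 0xFFFF
        self.pending = 0              # outstanding (carry-ambiguous) bits
        self.bits = []

    def _emit(self, bit):
        self.bits.append(bit)
        self.bits.extend([1 - bit] * self.pending)
        self.pending = 0

    def encode(self, bin_val, p1):
        """Encode one bin with estimated P(bin == 1) = p1."""
        span = self.high - self.low + 1
        split = self.low + max(1, min(span - 1, int(span * p1))) - 1
        if bin_val:
            self.high = split
        else:
            self.low = split + 1
        while True:                   # renormalization: bitwise here, done
            if self.high < HALF:      # in parallel in the thesis's hardware
                self._emit(0)
            elif self.low >= HALF:
                self._emit(1)
                self.low -= HALF
                self.high -= HALF
            elif self.low >= QUARTER and self.high < HALF + QUARTER:
                self.pending += 1     # middle straddle: defer the bit
                self.low -= QUARTER
                self.high -= QUARTER
            else:
                break
            self.low <<= 1
            self.high = (self.high << 1) | 1

enc = BinaryArithmeticEncoder()
for b in [1, 0, 0, 1, 1, 1]:
    enc.encode(b, p1=0.8)             # a skewed source compresses well
print(enc.bits)
```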
22

Chang, Yi-Meng, und 張義孟. „Design of a High-Speed and Small-Area Pass-Parallel Context Formation Encoder for JPEG2000“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/90873947851267563124.

Annotation:
Master's thesis
National Chiao Tung University
In-service master's program, College of Electrical Engineering, Electronics and Electro-Optics Group
ROC academic year 95
With the rapid development of the Internet and digital still cameras (DSC), still images are broadly used as storage and transmission content. JPEG2000 is a relatively new still-image compression standard. It offers better compression performance than the conventional JPEG standard and provides many useful features; however, these features require more complex computation and more hardware resources. The context formation encoder of EBCOT tier-1 is a highly complex part of a JPEG2000 encoder, and the pass-parallel architecture is one of the most efficient ways to improve its performance. In this thesis, a high-performance, low-power hardware architecture for a JPEG2000 context formation encoder is proposed. The new architecture is implemented with three speedup methods and a pipeline technique. The area of the context window of the pass-parallel column-based context formation encoder is reduced by 25% compared with existing techniques, using the proposed dual-column pass-1 generation method. The critical path of the overall system architecture is shortened by employing dual-column pass-1 generation, an all-coding-pass and significance-change generation method, and sample-parallel column-based coding. The new architecture improves the computational efficiency and reduces the hardware area of the pass coding operations. Finally, our design is described in Verilog HDL and synthesized with Synopsys Design Compiler using the TSMC 0.25 μm CMOS process. The pre-layout synthesized area is 40037 μm². In our simulation, the clock frequency reaches 330 MHz; at this frequency, encoding a 2304 x 1728 image takes 0.021 seconds.
23

Borges, Catarina Franco Quitério de Oliveira. „Assaying synaptic function using genetically-encoded optical reporters in the context of Alzheimer's disease“. Master's thesis, 2017. http://hdl.handle.net/10316/82883.

Annotation:
Master's dissertation in Cellular and Molecular Biology presented to the Faculdade de Ciências e Tecnologia
Alzheimer's disease (AD) is a devastating disease affecting mostly the elderly population. Current studies show that, importantly, not amyloid or Tau pathology per se, but synapse dysfunction/loss and the consequent loss of connectivity between brain regions correlate well with, and underlie, the cognitive and memory symptoms of AD. Since synaptic loss is a reversible process and can be detected in the early phases of disease progression, it is important to develop the tools necessary for assaying synaptic function in the context of AD pathology. In this project, we used genetically-encoded calcium indicators (GECIs) to assay synaptic function. Changes in pre- and post-synaptic Ca2+ were monitored using SyGCaMP6f, SyjRCaMP1b, and SyjRGECO1a as pre-synaptic calcium reporters and actin-GCaMP6f as a post-synaptic calcium reporter. Neuronal excitability was evaluated by characterizing a recently published somatic calcium reporter, H2B-GCaMP6f. Neuronal excitability can be considered a combination of intrinsic membrane properties, including membrane resting potential and input resistance, with synaptic plasticity events. Organic dyes such as Fluo-4 or Cal-520 can be used to measure neuronal excitability; however, such fluorophores do not show a sensitive voltage-dependent signal with single-cell resolution, because they are expressed along the entire neuron. To overcome this drawback we developed a new construct, AAV6:hSynapsin1:H2B-GCaMP6f, in which the fastest green genetically-encoded calcium reporter, GCaMP6f, was fused to an H2B sequence. We characterized H2B-GCaMP6f as a somatic calcium reporter and assessed its utility in preliminary pharmacology assays by testing three different compounds that modulate neuronal excitability: a KV1.3 channel blocker, LY450108 (an AMPA PAM), and a confidential compound from a phenotypic screening effort within J&J, called Compound X. Pharmacological tests were performed in primary neuronal cultures of rat hippocampus under two conditions: in the presence of 0.01 mM NBQX (AMPA antagonist) and 0.05 mM D-AP5 (NMDA antagonist), and in the absence of these synaptic blockers. Besides the H2B-GCaMP6f characterization, the dynamic range of all the pre-synaptic reporters mentioned above was characterized by delivering trains with different numbers of APs at 20 Hz. Multiplex imaging using SyjRGECO1a and H2B-GCaMP6f was also performed to analyze somatic and synaptic responses simultaneously. By combining a fully characterized somatic reporter, H2B-GCaMP6f, with green and red-shifted variants of sensitive calcium reporters, we are developing tools for a better understanding of neuronal excitability and synaptic function that can be further used in phenotypic or target-based drug screening efforts in the context of Alzheimer's disease or other neurological diseases.
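
Responses of reporters like GCaMP6f are conventionally quantified as ΔF/F0, the fluorescence change relative to a baseline estimate. A minimal sketch for traces recorded during AP-train stimulation (the percentile baseline and the window length are common choices assumed here, not the thesis's exact analysis):

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=10):
    """trace: 1-D fluorescence time series from one ROI (soma or bouton).
    F0 is a low percentile so stimulation transients do not inflate
    the baseline estimate."""
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

def peak_response(trace, stim_frame, window=20):
    """Peak dF/F in a window after stimulus onset, e.g. after each
    20 Hz AP train used to probe the reporter's dynamic range."""
    dff = delta_f_over_f(trace)
    return dff[stim_frame:stim_frame + window].max()

rng = np.random.default_rng(1)
f = 100 + rng.normal(0, 1, 200)
f[80:100] += np.linspace(30, 5, 20)      # synthetic calcium transient
print(round(peak_response(f, stim_frame=78), 2))
```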
24

和基, 塩谷. „Olfactory cortex ventral tenia tecta neurons encode the distinct context-dependent behavioral states of goal-directed behaviors“. Thesis, 2003. http://id.nii.ac.jp/1707/00028191/.

25

Lin, Szu-Ti, and 林思狄. „An event-related brain potential study of the retrieval orientation for objects encoded in different emotional contexts“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/40070561519658036311.

Annotation:
Master's thesis
National Central University
Institute of Cognitive Neuroscience
ROC academic year 104
Different forms of processing are applied to physically identical retrieval cues depending on the characteristics of the memory that people try to retrieve, a phenomenon called "retrieval orientation". The purpose of the present study is to investigate whether different retrieval orientations are adopted for objects encoded in different emotional contexts. Emotionally neutral object pictures superimposed on neutrally, negatively, or positively valenced background pictures were used as study materials, and object pictures without backgrounds were used as retrieval cues. In Experiment 1, subjects experienced one of the three emotional backgrounds in each study phase and then completed a recognition test. The results showed that although memory performance for the object pictures did not differ significantly, different emotional contexts during the study phase still led subjects to adopt different retrieval orientations when trying to retrieve those objects. Objects encoded against neutral backgrounds elicited more positive-going event-related potentials (ERPs) than those encoded against emotional backgrounds from 500 ms to 2000 ms over the right frontal scalp. In addition, objects encoded in positive contexts elicited more positive-going ERPs than those encoded in negative contexts from 800 ms to 1200 ms over the right-hemisphere scalp. The retrieval orientations observed in Experiment 1 were steady, maintained states. To investigate whether the brain can switch retrieval orientations frequently, subjects in Experiment 2 experienced two emotional contexts (neutral + positive or neutral + negative) during each study phase and were given expression cues during the test phase to perform exclusion tasks. The results showed that from the onset of the object pictures to 600 ms, objects encoded in emotional contexts elicited more positive-going ERPs over the whole scalp (excluding the frontal pole); the same pattern was observed over the frontal-pole scalp from 700 ms to 1400 ms. Taken together, the two experiments indicate that different retrieval orientations are adopted when subjects try to retrieve objects encoded in different emotional contexts. These orientations can be maintained states or can switch frequently in response to task demands. Their scalp distributions differ; the relevant factor may be valence, arousal, or positive versus negative feelings, and these factors influence retrieval orientations in distinct time windows.
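
The ERP contrasts described above come down to averaging time-locked epochs per condition, baseline-correcting, and comparing mean amplitudes within a time window at an electrode site. A minimal numpy sketch (the sampling rate, baseline, and analysis window below are illustrative, not the study's parameters):

```python
import numpy as np

def erp(epochs, baseline_samples):
    """epochs: (n_trials, n_samples) for one condition and electrode.
    Average across trials, then subtract the pre-stimulus baseline."""
    avg = epochs.mean(axis=0)
    return avg - avg[:baseline_samples].mean()

def mean_amplitude(erp_wave, sfreq, t_start, t_end, t_zero):
    """Mean amplitude (e.g. in uV) between t_start and t_end seconds
    post-stimulus, with t_zero seconds of baseline at the epoch start."""
    i0 = int((t_zero + t_start) * sfreq)
    i1 = int((t_zero + t_end) * sfreq)
    return erp_wave[i0:i1].mean()

rng = np.random.default_rng(2)
neutral = rng.normal(0, 2, (40, 1250))    # 40 trials, 2.5 s at 500 Hz
emotional = neutral.copy()
emotional[:, 350:] += 1.0                 # synthetic late positivity shift
for name, cond in [("neutral", neutral), ("emotional", emotional)]:
    wave = erp(cond, baseline_samples=100)            # 200 ms baseline
    print(name, round(mean_amplitude(wave, 500, 0.5, 2.0, 0.2), 2))
```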