Academic literature on the topic 'Context Encoder'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Context Encoder.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Context Encoder"

1

Pinho, M. S., and W. A. Finamore. "Context-based LZW encoder." Electronics Letters 38, no. 20 (2002): 1172. http://dx.doi.org/10.1049/el:20020807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Han, Jialong, Aixin Sun, Haisong Zhang, Chenliang Li, and Shuming Shi. "CASE: Context-Aware Semantic Expansion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7871–78. http://dx.doi.org/10.1609/aaai.v34i05.6293.

Full text
Abstract:
In this paper, we define and study a new task called Context-Aware Semantic Expansion (CASE). Given a seed term in a sentential context, we aim to suggest other terms that fit the context as well as the seed does. CASE has many interesting applications such as query suggestion, computer-assisted writing, and word sense disambiguation, to name a few. Previous explorations, if any, involve only similar tasks, and all require human annotations for evaluation. In this study, we demonstrate that annotations for this task can be harvested at scale from existing corpora, in a fully automatic manner. On a dataset of 1.8 million sentences thus derived, we propose a network architecture that encodes the context and seed term separately before suggesting alternative terms. The context encoder in this architecture can be easily extended by incorporating seed-aware attention. Our experiments demonstrate that competitive results are achieved with appropriate choices of context encoder and attention scoring function.
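The seed-aware attention mentioned in this abstract can be illustrated with a minimal sketch (not the authors' implementation; the toy 2-d embeddings and the dot-product scoring function are assumptions for illustration):

```python
import math

def seed_aware_attention(context_vecs, seed_vec):
    # Score each context token against the seed embedding (dot product).
    scores = [sum(c * s for c, s in zip(vec, seed_vec)) for vec in context_vecs]
    # Softmax-normalize the scores into attention weights.
    m = max(scores)
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Return the weights and the attention-weighted context summary.
    dim = len(seed_vec)
    summary = [sum(w * vec[i] for w, vec in zip(weights, context_vecs))
               for i in range(dim)]
    return weights, summary

# Toy 2-d embeddings: the first context token points the same way as the seed,
# so it receives the larger attention weight.
weights, summary = seed_aware_attention([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```

The point of scoring against the seed is that the same context sentence yields a different summary for different seed terms, which is what distinguishes a seed-aware context encoder from a seed-agnostic one.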
APA, Harvard, Vancouver, ISO, and other styles
3

Marafioti, Andres, Nathanael Perraudin, Nicki Holighaus, and Piotr Majdak. "A Context Encoder For Audio Inpainting." IEEE/ACM Transactions on Audio, Speech, and Language Processing 27, no. 12 (December 2019): 2362–72. http://dx.doi.org/10.1109/taslp.2019.2947232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yun, Hyeongu, Yongkeun Hwang, and Kyomin Jung. "Improving Context-Aware Neural Machine Translation Using Self-Attentive Sentence Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9498–506. http://dx.doi.org/10.1609/aaai.v34i05.6494.

Full text
Abstract:
Fully Attentional Networks (FANs) like the Transformer (Vaswani et al. 2017) have shown superior results in Neural Machine Translation (NMT) tasks and have become a solid baseline for translation tasks. More recent studies have also reported experimental results showing that additional contextual sentences improve the translation quality of NMT models (Voita et al. 2018; Müller et al. 2018; Zhang et al. 2018). However, those studies have exploited multiple context sentences as a single long concatenated sentence, which may cause the models to suffer from inefficient computational complexity and long-range dependencies. In this paper, we propose the Hierarchical Context Encoder (HCE), which is able to exploit multiple context sentences separately using a hierarchical FAN structure. Our proposed encoder first abstracts sentence-level information from preceding sentences in a self-attentive way, and then hierarchically encodes context-level information. Through extensive experiments, we observe that our HCE records the best performance measured in BLEU score on English-German, English-Turkish, and English-Korean corpora. In addition, we observe that our HCE records the best performance on a crowd-sourced test set designed to evaluate how well an encoder can exploit contextual information. Finally, evaluation on an English-Korean pronoun resolution test suite also shows that our HCE can properly exploit contextual information.
APA, Harvard, Vancouver, ISO, and other styles
5

Dakwale, Praveen, and Christof Monz. "Convolutional over Recurrent Encoder for Neural Machine Translation." Prague Bulletin of Mathematical Linguistics 108, no. 1 (June 1, 2017): 37–48. http://dx.doi.org/10.1515/pralin-2017-0007.

Full text
Abstract:
Neural machine translation is a recently proposed approach which has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted using another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English-to-German translation demonstrate that our approach can achieve significant improvements over a standard RNN-based baseline.
APA, Harvard, Vancouver, ISO, and other styles
6

Dligach, Dmitriy, Majid Afshar, and Timothy Miller. "Toward a clinical text encoder: pretraining for clinical natural language processing with applications to substance misuse." Journal of the American Medical Informatics Association 26, no. 11 (June 24, 2019): 1272–78. http://dx.doi.org/10.1093/jamia/ocz072.

Full text
Abstract:
Objective: Our objective is to develop algorithms for encoding clinical text into representations that can be used for a variety of phenotyping tasks. Materials and Methods: Obtaining large datasets to take advantage of highly expressive deep learning methods is difficult in clinical natural language processing (NLP). We address this difficulty by pretraining a clinical text encoder on billing code data, which is typically available in abundance. We explore several neural encoder architectures and deploy the text representations obtained from these encoders in the context of clinical text classification tasks. While our ultimate goal is learning a universal clinical text encoder, we also experiment with training a phenotype-specific encoder. A universal encoder would be more practical, but a phenotype-specific encoder could perform better for a specific task. Results: We successfully train several clinical text encoders, establish a new state of the art on comorbidity data, and observe good performance gains on substance misuse data. Discussion: We find that pretraining using billing codes is a promising research direction. The representations generated by this type of pretraining have universal properties, as they are highly beneficial for many phenotyping tasks. Phenotype-specific pretraining is a viable route for trading the generality of the pretrained encoder for better performance on a specific phenotyping task. Conclusions: We successfully applied our approach to many phenotyping tasks. We conclude by discussing potential limitations of our approach.
APA, Harvard, Vancouver, ISO, and other styles
7

Trisedya, Bayu, Jianzhong Qi, and Rui Zhang. "Sentence Generation for Entity Description with Content-Plan Attention." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9057–64. http://dx.doi.org/10.1609/aaai.v34i05.6439.

Full text
Abstract:
We study neural data-to-text generation. Specifically, we consider a target entity that is associated with a set of attributes. We aim to generate a sentence to describe the target entity. Previous studies use encoder-decoder frameworks where the encoder treats the input as a linear sequence and uses LSTM to encode the sequence. However, linearizing a set of attributes may not yield the proper order of the attributes, and hence leads the encoder to produce an improper context to generate a description. To handle disordered input, recent studies propose two-stage neural models that use pointer networks to generate a content-plan (i.e., content-planner) and use the content-plan as input for an encoder-decoder model (i.e., text generator). However, in two-stage models, the content-planner may yield an incomplete content-plan, due to missing one or more salient attributes in the generated content-plan. This will in turn cause the text generator to generate an incomplete description. To address these problems, we propose a novel attention model that exploits content-plan to highlight salient attributes in a proper order. The challenge of integrating a content-plan in the attention model of an encoder-decoder framework is to align the content-plan and the generated description. We handle this problem by devising a coverage mechanism to track the extent to which the content-plan is exposed in the previous decoding time-step, and hence it helps our proposed attention model select the attributes to be mentioned in the description in a proper order. Experimental results show that our model outperforms state-of-the-art baselines by up to 3% and 5% in terms of BLEU score on two real-world datasets, respectively.
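The coverage idea described above, i.e. tracking how much attention each content-plan attribute has already received and steering later decoding steps toward unexposed attributes, can be sketched generically (a hedged illustration in plain Python, not the paper's exact model; the linear coverage penalty is an assumption):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def coverage_attention(scores, coverage, penalty=1.0):
    # Downweight attributes already attended to at earlier decoding steps:
    # subtract a penalty proportional to accumulated coverage, renormalize,
    # and fold the new weights back into the coverage vector.
    adjusted = [s - penalty * c for s, c in zip(scores, coverage)]
    weights = softmax(adjusted)
    new_coverage = [c + w for c, w in zip(coverage, weights)]
    return weights, new_coverage

# Three content-plan attributes with equal raw scores; attribute 0 was heavily
# covered earlier, so attention shifts toward the two uncovered attributes.
weights, cov = coverage_attention([1.0, 1.0, 1.0], [0.9, 0.0, 0.0])
```

Iterating this over decoding steps is what lets attention walk through the content-plan in order instead of revisiting already-mentioned attributes.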
APA, Harvard, Vancouver, ISO, and other styles
8

Cai, Yuanyuan, Min Zuo, Qingchuan Zhang, Haitao Xiong, and Ke Li. "A Bichannel Transformer with Context Encoding for Document-Driven Conversation Generation in Social Media." Complexity 2020 (September 17, 2020): 1–13. http://dx.doi.org/10.1155/2020/3710104.

Full text
Abstract:
Along with the development of social media on the internet, dialogue systems are becoming more and more intelligent to meet users’ needs for communication, emotion, and social intercourse. Previous studies usually use sequence-to-sequence learning with recurrent neural networks for response generation. However, recurrent-based learning models heavily suffer from the problem of long-distance dependencies in sequences. Moreover, some models neglect crucial information in the dialogue contexts, which leads to uninformative and inflexible responses. To address these issues, we present a bichannel transformer with context encoding (BCTCE) for document-driven conversation. This conversational generator consists of a context encoder, an utterance encoder, and a decoder with attention mechanism. The encoders aim to learn the distributed representation of input texts. The multihop attention mechanism is used in BCTCE to capture the interaction between documents and dialogues. We evaluate the proposed BCTCE by both automatic evaluation and human judgment. The experimental results on the dataset CMU_DoG indicate that the proposed model yields significant improvements over the state-of-the-art baselines on most of the evaluation metrics, and the generated responses of BCTCE are more informative and more relevant to dialogues than baselines.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Biao, Deyi Xiong, Jinsong Su, and Hong Duan. "A Context-Aware Recurrent Encoder for Neural Machine Translation." IEEE/ACM Transactions on Audio, Speech, and Language Processing 25, no. 12 (December 2017): 2424–32. http://dx.doi.org/10.1109/taslp.2017.2751420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pan, Yirong, Xiao Li, Yating Yang, and Rui Dong. "Multi-Source Neural Model for Machine Translation of Agglutinative Language." Future Internet 12, no. 6 (June 3, 2020): 96. http://dx.doi.org/10.3390/fi12060096.

Full text
Abstract:
Benefitting from the rapid development of artificial intelligence (AI) and deep learning, the machine translation task based on neural networks has achieved impressive performance in many high-resource language pairs. However, the neural machine translation (NMT) models still struggle in the translation task on agglutinative languages with complex morphology and limited resources. Inspired by the finding that utilizing the source-side linguistic knowledge can further improve the NMT performance, we propose a multi-source neural model that employs two separate encoders to encode the source word sequence and the linguistic feature sequences. Compared with the standard NMT model, we utilize an additional encoder to incorporate the linguistic features of lemma, part-of-speech (POS) tag, and morphological tag by extending the input embedding layer of the encoder. Moreover, we use a serial combination method to integrate the conditional information from the encoders with the outputs of the decoder, which aims to enhance the neural model to learn a high-quality context representation of the source sentence. Experimental results show that our approach is effective for the agglutinative language translation, which achieves the highest improvements of +2.4 BLEU points on Turkish–English translation task and +0.6 BLEU points on Uyghur–Chinese translation task.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Context Encoder"

1

Damecharla, Hima Bindu. "FPGA IMPLEMENTATION OF A PARALLEL EBCOT TIER-1 ENCODER THAT PRESERVES ENCODING EFFICIENCY." University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1149703842.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Leufvén, Johan. "Integration of user generated content with an IPTV middleware." Thesis, Linköping University, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-55029.

Full text
Abstract:

IPTV is a growing form of distribution for TV and media. Reports show that the market will grow from the current 20-30 million subscribers to almost 100 million by 2012. IPTV extends traditional TV viewing with new services like renting movies from your TV. It can also be seen as a bridge between the traditional broadcast approach and the new on-demand approach users are used to from the internet.

Since there are many actors in the IPTV market that all deliver the same basic functionality, companies must deliver better products that separate them from their competitors. This can be done by doing things better than the others and/or by delivering functionality that others cannot.

This thesis project presents the development of a prototype system for serving user-generated content in the IPTV middleware Dreamgallery. The developed prototype is a fully working system that includes (1) a fully automated system for transcoding of video content; (2) a web portal with solutions for problems related to user content uploading and administration; and (3) seamless integration with the Dreamgallery middleware and end-user GUI, with two different ways of viewing content: one for easy exploration of new content and a second, more structured, way of browsing the content.

A study of three open-source encoding software packages is also presented. The three encoders were subjected to tests of speed, agility (file format support), and how well they handle files with corrupted data.

APA, Harvard, Vancouver, ISO, and other styles
3

May, Richard John. "Perceptual content loss in bit rate constrained IFS encoded speech." Thesis, University of Portsmouth, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hsu, William. "Using knowledge encoded in graphical disease models to support context-sensitive visualization of medical data." Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1925776141&sid=13&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Anegekuh, Louis. "Video content-based QoE prediction for HEVC encoded videos delivered over IP networks." Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3377.

Full text
Abstract:
The recently released High Efficiency Video Coding (HEVC) standard, which halves the transmission bandwidth requirement of encoded video for almost the same quality when compared to H.264/AVC, and the availability of increased network bandwidth (e.g. from 2 Mbps for 3G networks to almost 100 Mbps for 4G/LTE) have led to the proliferation of video streaming services. Based on these major innovations, the prevalence and diversity of video application are set to increase over the coming years. However, the popularity and success of current and future video applications will depend on the perceived quality of experience (QoE) of end users. How to measure or predict the QoE of delivered services becomes an important and inevitable task for both service and network providers. Video quality can be measured either subjectively or objectively. Subjective quality measurement is the most reliable method of determining the quality of multimedia applications because of its direct link to users’ experience. However, this approach is time consuming and expensive and hence the need for an objective method that can produce results that are comparable with those of subjective testing. In general, video quality is impacted by impairments caused by the encoder and the transmission network. However, videos encoded and transmitted over an error-prone network have different quality measurements even under the same encoder setting and network quality of service (NQoS). This indicates that, in addition to encoder settings and network impairment, there may be other key parameters that impact video quality. In this project, it is hypothesised that video content type is one of the key parameters that may impact the quality of streamed videos. Based on this assertion, parameters related to video content type are extracted and used to develop a single metric that quantifies the content type of different video sequences. 
The proposed content type metric is then used together with encoding parameter settings and NQoS to develop content-based video quality models that estimate the quality of different video sequences delivered over IP-based network. This project led to the following main contributions: (1) A new metric for quantifying video content type based on the spatiotemporal features extracted from the encoded bitstream. (2) The development of novel subjective test approach for video streaming services. (3) New content-based video quality prediction models for predicting the QoE of video sequences delivered over IP-based networks. The models have been evaluated using subjective and objective methods.
APA, Harvard, Vancouver, ISO, and other styles
6

Sasko, Dominik. "Segmentace lézí roztroušené sklerózy pomocí hlubokých neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442379.

Full text
Abstract:
The main aim of this master's thesis was the automatic segmentation of multiple sclerosis lesions in MRI scans. The latest segmentation methods based on deep neural networks were tested, and two approaches to initializing network weights were compared: transfer learning and self-supervised learning. Automatic segmentation of multiple sclerosis lesions is a very difficult problem, primarily because of the high imbalance of the dataset (brain scans usually contain only a small amount of damaged tissue). Another challenge is the manual annotation of these lesions: two different doctors may mark different parts of the brain as damaged, and the Dice coefficient between their annotations is approximately 0.86. Simplifying the annotation process through automation could improve the computation of lesion load, which in turn could improve the diagnosis of individual patients. Our goal was to design two techniques that use transfer learning to pretrain the weights, which could later improve the results of current segmentation models. The theoretical part describes the taxonomy of artificial intelligence, machine learning, and deep neural networks and their use in image segmentation. Multiple sclerosis, its types, symptoms, diagnosis, and treatment are then described. The practical part begins with data preprocessing. First, the brain scans were resampled to the same resolution with the same voxel size, because three different datasets were used in which the scans had been acquired on different devices from different manufacturers. One dataset also contained the skull, which had to be removed with the FSL tool so that only the patient's brain remained. We used 3D scans (FLAIR, T1, and T2 modalities), which were split into individual 2D slices and fed into a neural network with an encoder-decoder architecture.
The training dataset contained 6,720 slices with a resolution of 192 x 192 pixels (after removing slices whose masks contained no values). The loss function was Combo loss (a combination of Dice loss and a modified cross-entropy). The first method used weights pretrained on the ImageNet dataset for the encoder of a U-Net architecture, with the encoder weights either frozen or unfrozen, and compared the results with random weight initialization. In this case only the FLAIR modality was used. Transfer learning increased the tracked metric from approximately 0.4 to 0.6. The difference between frozen and unfrozen encoder weights was around 0.02. The second proposed technique used a self-supervised context encoder with Generative Adversarial Networks (GANs) to pretrain the weights. This network used all three modalities mentioned above, including empty mask slices (23,040 images in total). The task of the GAN was to complete a brain scan that had been occluded by a black checkerboard-shaped mask. The weights learned in this way were then loaded into the encoder and applied to our segmentation problem. This experiment did not show better results, with DSC values of 0.29 and 0.09 (unfrozen and frozen encoder weights, respectively). The sharp drop in the metric may have been caused by using weights pretrained on distant tasks (segmentation versus the self-supervised context encoder), as well as by the difficulty of the task due to the imbalanced dataset.
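The self-supervised pretraining task in this thesis, a context encoder (GAN) that must complete brain scans occluded by a checkerboard-shaped black mask, rests on a simple masking step, sketched here in plain Python (the 8 x 8 image and 4-pixel tile size are illustrative assumptions, not the thesis's actual dimensions):

```python
def checkerboard_mask(image, block=4):
    # Zero out alternating block x block tiles; a context encoder is then
    # trained to reconstruct the hidden tiles from the visible surroundings.
    h, w = len(image), len(image[0])
    masked = [row[:] for row in image]  # copy so the original stays intact
    for y in range(h):
        for x in range(w):
            if ((y // block) + (x // block)) % 2 == 0:
                masked[y][x] = 0.0
    return masked

# An 8 x 8 "scan" of ones: half the tiles survive, half are masked out.
img = [[1.0] * 8 for _ in range(8)]
masked = checkerboard_mask(img, block=4)
```

The reconstruction target is the original image; only the encoder weights learned from this inpainting task are transferred to the downstream segmentation network.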
APA, Harvard, Vancouver, ISO, and other styles
7

Shiotani, Kazuki (塩谷 和基). "Olfactory cortex ventral tenia tecta neurons encode the distinct context-dependent behavioral states of goal-directed behaviors." Thesis, 櫻井 芳雄, 2003. http://id.nii.ac.jp/1707/00028191/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shiotani, Kazuki (塩谷 和基). "Olfactory cortex ventral tenia tecta neurons encode the distinct context-dependent behavioral states of goal-directed behaviors." Thesis, 櫻井 芳雄, 2021. https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13158521/?lang=0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Munoz, Joshua. "Application of Multifunctional Doppler LIDAR for Non-contact Track Speed, Distance, and Curvature Assessment." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/77876.

Full text
Abstract:
The primary focus of this research is evaluation of feasibility, applicability, and accuracy of Doppler Light Detection And Ranging (LIDAR) sensors as non-contact means for measuring track speed, distance traveled, and curvature. Speed histories, currently measured with a rotary, wheel-mounted encoder, serve a number of useful purposes, one significant use involving derailment investigations. Distance calculation provides a spatial reference system for operators to locate track sections of interest. Railroad curves, using an IMU to measure curvature, are monitored to maintain track infrastructure within regulations. Speed measured with high accuracy leads to high-fidelity distance and curvature data through utilization of processor clock rate and left-and right-rail speed differentials during curve navigation, respectively. Wheel-mounted encoders, or tachometers, provide a relatively low-resolution speed profile, exhibit increased noise with increasing speed, and are subject to the inertial behavior of the rail car which affects output data. The IMU used to measure curvature is dependent on acceleration and yaw rate sensitivity and experiences difficulty in low-speed conditions. Preliminary system tests onboard a 'Hy-Rail' utility vehicle capable of traveling on rail show speed capture is possible using the rails as the reference moving target and furthermore, obtaining speed profiles from both rails allows for the calculation of speed differentials in curves to estimate degrees curvature. Ground truth distance calibration and curve measurement were also carried out. Distance calibration involved placement of spatial landmarks detected by a sensor to synchronize distance measurements as a pre-processing procedure. Curvature ground truth measurements provided a reference system to confirm measurement results and observe alignment variation throughout a curve. 
Primary testing occurred onboard a track geometry rail car, measuring rail speed over substantial mileage in various weather conditions, providing high-accuracy data to further calculate distance and curvature along the test routes. Test results indicate the LIDAR system measures speed at higher accuracy than the encoder, absent of noise influenced by increasing speed. Distance calculation is also high in accuracy, results showing high correlation with encoder and ground truth data. Finally, curvature calculation using speed data is shown to have good correlation with IMU measurements and a resolution capable of revealing localized track alignments. Further investigations involve a curve measurement algorithm and a speed calibration method independent of external reference systems, namely encoder and ground truth data. The speed calibration results show a high correlation with speed data from the track geometry vehicle. It is recommended that the study be extended to provide assessment of the LIDAR's sensitivity to car body motion in order to better isolate the embedded behavior in the speed and curvature profiles. Furthermore, in the interest of progressing the system toward a commercially viable unit, methods for self-calibration and pre-processing to allow for fully independent operation are highly encouraged.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
10

Sultana, Tania. "L'influence du contexte génomique sur la sélection du site d'intégration par les rétrotransposons humains L1." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4133.

Full text
Abstract:
Retrotransposons are mobile genetic elements that employ an RNA intermediate and a reverse transcription step for their replication. Long INterspersed Elements-1 (LINE-1 or L1) form the only autonomously active retrotransposon family in humans. Although most copies are defective due to the accumulation of mutations, each individual genome contains an average of 100 retrotransposition-competent L1 copies, which contribute to the dynamics of contemporary human genomes. L1 integration sites in the host genome directly determine the genetic consequences of the integration and the fate of the integrated copy. Thus, where L1 integrates in the genome, and whether this process is random, is critical to our understanding of human genome evolution, somatic genome plasticity in cancer and aging, and host-parasite interactions. To characterize L1 insertion sites, rather than studying endogenous L1 which have been subjected to evolutionary selective pressure, we induced de novo L1 retrotransposition by transfecting a plasmid-borne active L1 element into HeLa S3 cells. Then, we mapped de novo insertions in the human genome at nucleotide resolution by a dedicated deep-sequencing approach, named ATLAS-seq. Finally, de novo insertions were examined for their proximity towards a large number of genomic features. We found that L1 preferentially integrates in the lowly-expressed and weak enhancer chromatin segments. We also detected several hotspots of recurrent L1 integration. Our results indicate that the distribution of de novo L1 insertions is non-random both at local and regional scales, and pave the way to identify potential cellular factors involved in the targeting of L1 insertions.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Context Encoder"

1

May, Richard John. Perceptual content loss in bit rate constrained IFS encoded speech. Portsmouth: University of Portsmouth, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bill, Stockting, and Queyroux Fabienne, eds. Encoding across frontiers: Proceedings of the European Conference on Encoded Archival Description and Context (EAD and EAC), Paris, France, 1-8 October, 2004. Binghamton, NY: Haworth Information Press, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

L' Afrique noire, peut-elle encore parler français?: Essai sur la méthodologie de l'enseignement du français langue étrangère en Afrique noire francophone à travers l'étude du cas sénégalais. Paris: L'Harmattan, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

M, Dooley Jackie, ed. Encoded Archival Description: Context, theory, and case studies. Chicago: Society of American Archivists, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dooley, Jackie M. Encoded Archival Description: Context, Theory, and Case Studies. Society of American Archivists, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mayle, Peter. Encore Provence Consumer Contest Display Kit. Alfred A. Knopf, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lepore, Ernie, and Matthew Stone. Explicit Indirection. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198738831.003.0007.

Full text
Abstract:
Our goal in this chapter is to contest the traditional view of indirection in utterances such as, 'Can you pass the salt?' by developing a very different way of characterizing the interpretations involved. We argue that the felt "indirection" of such utterances reflects the kind of meaning the utterances have, rather than the way that meaning is derived. So understood, there is no presumption that indirect meanings involve the pragmatic derivation of enriched contents from a literal interpretation; rather, we argue that indirect meanings are explicitly encoded in grammar. We build on recent work on formalizing declarative, interrogative, and imperative meanings as distinct but compatible kinds of content for utterances.
APA, Harvard, Vancouver, ISO, and other styles
8

Duffley, Patrick. Linguistic Meaning Meets Linguistic Form. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198850700.001.0001.

Full text
Abstract:
This book steers a middle course between two opposing conceptions that currently dominate the field of semantics, the logical and cognitive approaches. It brings to light the inadequacies of both frameworks, and argues along with the Columbia School that linguistic semantics must be grounded on the linguistic sign itself and the meaning it conveys across the full range of its uses. The book offers 12 case studies demonstrating the explanatory power of a sign-based semantics, dealing with topics such as complementation with aspectual and causative verbs, control and raising, wh- words, full-verb inversion, and existential-there constructions. It calls for a radical revision of the semantics/pragmatics interface, proposing that the dividing-line be drawn between semiologically-signified notional content (i.e. what is linguistically encoded) and non-semiologically-signified notional content (i.e. what is not encoded but still communicated). This highlights a dimension of embodiment that concerns the basic design architecture of human language itself: the ineludable fact that the fundamental relation on which language is based is the association between a mind-engendered meaning and a bodily produced sign. It is argued that linguistic analysis often disregards this fact and treats meaning on the level of the sentence or the construction, rather than on that of the lower-level linguistic items where the linguistic sign is stored in a stable, permanent, and direct relation with its meaning outside of any particular context. Building linguistic analysis up from the ground level provides it with a more solid foundation and increases its explanatory power.
APA, Harvard, Vancouver, ISO, and other styles
9

Balentine, Samuel E., ed. The Oxford Handbook of Ritual and Worship in the Hebrew Bible. Oxford University Press, 2020. http://dx.doi.org/10.1093/oxfordhb/9780190222116.001.0001.

Full text
Abstract:
The focus of this Handbook is on ritual and worship from the perspective of biblical studies, particularly on the Hebrew Bible and its ancient Near Eastern antecedents. Within this context, attention will be given to the development of ideas in Jewish, Christian, and Muslim thinking, but only insofar as they connect with or extend the trajectory of biblical precedents. The volume reflects a wide range of analytical approaches to ancient texts, inscriptions, iconography, and ritual artifacts. It examines the social history and cultural knowledge encoded in rituals, and explores the way rituals shape and are shaped by politics, economics, ethical imperatives, and religion itself. Toward this end, the volume is organized into six major sections: Historical Contexts, Interpretive Approaches, Ritual Elements (participants, places, times, objects, practices), Underlying Cultural and Theological Perspectives, History of Interpretation, Social-Cultural Functions, and Theology and Theological Heritage.
APA, Harvard, Vancouver, ISO, and other styles
10

Carston, Robyn. Pragmatics and Semantics. Edited by Yan Huang. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199697960.013.19.

Full text
Abstract:
A cognitive-scientific approach to the pragmatic interpretive ability is presented, according to which it is seen as a specific cognitive system dedicated to the interpretation of ostensive stimuli, that is, verbal utterances and other overtly communicative acts. This approach calls for a dual construal of semantics. The semantics which interfaces with the pragmatic interpretive system is not a matter of truth-conditional content, but of whatever components of meaning (lexical and syntactic) are encoded by the language system (independent of any particular use of the system by speakers in specific contexts). This linguistically provided meaning functions as evidence that guides and constrains the addressee’s pragmatic inferential processes whose goal is the recovery of the speaker’s intended meaning. Speakers communicate thoughts (explicatures and implicatures)—that is, fully propositional (truth-evaluable) entities—and it is these that are the proper domain of a truth-conditional (referential) semantics.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Context Encoder"

1

Keya, Mumenunnessa, Abu Kaisar Mohammad Masum, Sheikh Abujar, Sharmin Akter, and Syed Akhter Hossain. "Bengali Context–Question Similarity Using Universal Sentence Encoder." In Advances in Intelligent Systems and Computing, 305–15. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4367-2_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tian, Xiaohua, Thinh M. Le, and Yong Lian. "Efficient Architecture for Context Modeling in the CABAC Encoder." In Entropy Coders of the H.264/AVC Standard, 99–121. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14703-6_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Qiu, Hao, Zaiwang Gu, Lei Mou, Xiaoqian Mao, Liyang Fang, Yitian Zhao, Jiang Liu, and Jun Cheng. "The Channel Attention Based Context Encoder Network for Inner Limiting Membrane Detection." In Ophthalmic Medical Image Analysis, 104–11. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32956-3_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Stagakis, Nikolaos, Evangelia I. Zacharaki, and Konstantinos Moustakas. "Hierarchical Image Inpainting by a Deep Context Encoder Exploiting Structural Similarity and Saliency Criteria." In Lecture Notes in Computer Science, 470–79. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-34995-0_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhu, Kanghua, Yongfang Wang, Jian Wu, Yun Zhu, and Wei Zhang. "Content Oriented Video Quality Prediction for HEVC Encoded Stream." In Communications in Computer and Information Science, 338–48. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4211-9_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

El-Haddad, Christiane, and Yiannis Laouris. "The Ability of Children with Mild Learning Disabilities to Encode Emotions through Facial Expressions." In Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues, 387–402. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-18184-9_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Saito, Kuniaki, Kate Saenko, and Ming-Yu Liu. "COCO-FUNIT: Few-Shot Unsupervised Image Translation with a Content Conditioned Style Encoder." In Computer Vision – ECCV 2020, 382–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58580-8_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Modegi, Toshio. "Development of MIDI Encoder “Auto-F” for Creating MIDI Controllable General Audio Contents." In Entertainment Computing, 233–40. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-0-387-35660-0_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jallais, Maëliss, and Demian Wassermann. "Single Encoding Diffusion MRI: A Probe to Brain Anisotropy." In Mathematics and Visualization, 171–91. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-56215-1_8.

Full text
Abstract:
This chapter covers anisotropy in the context of probing the microstructure of the human brain using single-encoding diffusion MRI. We will start by illustrating how diffusion MRI is a perfectly adapted technique for measuring anisotropy in the human brain using water motion, followed by a biological presentation of the human brain. The non-invasive imaging technique based on water motion known as diffusion MRI will be further presented, along with the difficulties that come with it. Within this context, we will first review and discuss methods based on signal representation that enable us to gain insight into microstructure anisotropy. We will then outline methods based on modeling, which are the state-of-the-art methods for obtaining parameter estimates of human brain tissue.
APA, Harvard, Vancouver, ISO, and other styles
10

Bryant, Peter T. "Evaluation of Performance." In Augmented Humanity, 199–223. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-76445-6_7.

Full text
Abstract:
Agents evaluate their performance to assess progress, learn, and improve. In doing so, they refer to criteria of various kinds. Some criteria are deeply encoded in mental models, organizational procedures, or cultural norms and logics, while other evaluative criteria are adaptive and may upregulate or downregulate, depending on the agent's goals, expectations, and context. Here, too, digitalization is transformative. Artificial agents bring unprecedented power to the evaluation of performance, including the rapid intra-cyclical evaluation of ongoing processes. These mechanisms support feedforward guidance in real time. Therefore, when human and artificial agents combine in the evaluation of augmented performance, they face additional risks. Artificial evaluative processing could be fast and precise, while at the same time, human evaluation may be relatively sluggish and imprecise. Overall evaluations of performance could be distorted and dysfunctional.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Context Encoder"

1

Qu, Zhenshen, Shimeng Yu, and Mengyu Fu. "Motion background modeling based on context-encoder." In 2016 Third International Conference on Artificial Intelligence and Pattern Recognition (AIPR). IEEE, 2016. http://dx.doi.org/10.1109/icaipr.2016.7585207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liao, Liang, Ruimin Hu, Jing Xiao, and Zhongyuan Wang. "Edge-Aware Context Encoder for Image Inpainting." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8462549.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Altay, Fatih, and Senem Velipasalar. "Image Completion with Discriminator Guided Context Encoder." In 2018 52nd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2018. http://dx.doi.org/10.1109/acssc.2018.8645434.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xu, Hongfei, Deyi Xiong, Josef van Genabith, and Qiuhui Liu. "Efficient Context-Aware Neural Machine Translation with Layer-Wise Weighting and Input-Aware Gating." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/544.

Full text
Abstract:
Existing Neural Machine Translation (NMT) systems are generally trained on a large amount of sentence-level parallel data, and during prediction sentences are independently translated, ignoring cross-sentence contextual information. This leads to inconsistency between translated sentences. In order to address this issue, context-aware models have been proposed. However, document-level parallel data constitutes only a small part of the parallel data available, and many approaches build context-aware models based on a pre-trained frozen sentence-level translation model in a two-step training manner. The computational cost of these approaches is usually high. In this paper, we propose to make the most of layers pre-trained on sentence-level data in contextual representation learning, reusing representations from the sentence-level Transformer and significantly reducing the cost of incorporating contexts in translation. We find that representations from shallow layers of a pre-trained sentence-level encoder play a vital role in source context encoding, and propose to perform source context encoding upon weighted combinations of pre-trained encoder layers' outputs. Instead of separately performing source context and input encoding, we propose to iteratively and jointly encode the source input and its contexts and to generate input-aware context representations with a cross-attention layer and a gating mechanism, which resets irrelevant information in context encoding. Our context-aware Transformer model outperforms the recent CADec [Voita et al., 2019c] on the English-Russian subtitle data and is about twice as fast in training and decoding.
APA, Harvard, Vancouver, ISO, and other styles
5

Yamane, Satoshi, and Tetsuto Takano. "Machine Translation Considering Context Information Using Encoder-Decoder Model." In 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC). IEEE, 2018. http://dx.doi.org/10.1109/compsac.2018.00123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Moris, Daniel I., Alvaro S. Hervella, Jose Rouco, Jorge Novo, and Marcos Ortega. "Context encoder self-supervised approaches for eye fundus analysis." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yu, Zhen, Youyi Song, and Jing Qin. "Dense Pyramid Context Encoder-Decoder Network for Kidney Lesion Segmentation." In 2019 Kidney Tumor Segmentation Challenge: KiTS19. University of Minnesota Libraries Publishing, 2019. http://dx.doi.org/10.24926/548719.046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tian, X. H., T. M. Le, H. C. Teo, B. L. Ho, and Y. Lian. "CABAC HW Encoder with RDO Context Management and MBIST Capability." In 2007 International Symposium on Integrated Circuits - ISIC 2007. IEEE, 2007. http://dx.doi.org/10.1109/isicir.2007.4441841.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zeng, Yanhong, Jianlong Fu, Hongyang Chao, and Baining Guo. "Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ma, Zhiqiang, Chunyu Wang, Ji Shen, and Baoxiang Du. "A Context-Aware Variational Auto-Encoder Model for Text Generation." In 2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom). IEEE, 2020. http://dx.doi.org/10.1109/ispa-bdcloud-socialcom-sustaincom51426.2020.00175.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Context Encoder"

1

Faltstrom, P., D. Crocker, and E. Fair. MIME Content Type for BinHex Encoded Files. RFC Editor, December 1994. http://dx.doi.org/10.17487/rfc1741.

Full text
APA, Harvard, Vancouver, ISO, and other styles